
C3850 output drops

Mika J
Level 1

Version: 3.06.06E

Platform: WS-C3850-48U

 

Hello,

 

I'm seeing output drops on an interface connected to a Wi-Fi AP.

The switch uplink is 10G and the AP is connected via a 2x 1G port channel:

N9K <-- Po=2x10G --> C3850 <-- Po=2x1G --> AP2802i

 

My understanding is that there is a speed mismatch: traffic arriving from the 10G uplink can exceed what the 1G egress port can transmit, so the output queue overflows and drops packets.

I tried reallocating buffers, but I still see the drops (see the end of the post for the configuration).

The traffic carries no DSCP markings, and the second port in the port channel shows no errors.

Users report disconnections even when traffic is low, around 10 Mbit/s or less.

What am I missing? I have read the Cisco documentation.
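
For reference, a way to quantify the mismatch would be to shorten the averaging window and watch the rates on both sides while the drop counter climbs. A rough sketch (the uplink name Te1/1/1 is illustrative, not my actual port):

! shorter averaging window so bursts show up in the rate counters
interface TenGigabitEthernet1/1/1
 load-interval 30
interface GigabitEthernet6/0/24
 load-interval 30

! then compare input rate on the uplink vs output rate and drops on the AP port
show interfaces te1/1/1 | include rate
show interfaces gi6/0/24 | include rate|drops

If the 30-second output rate on Gi6/0/24 stays far below 1G while drops still climb, that would point at short microbursts rather than sustained oversubscription.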

 

On the switch:

 

#show int gi6/0/24 | i drops|error
  Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 1569626
     1569626 output errors, 0 collisions, 0 interface resets

 

No other errors such as CRC or collisions.

 

#show int gi6/0/24  counters errors
Port        Align-Err     FCS-Err    Xmit-Err     Rcv-Err  UnderSize  OutDiscards
Gi6/0/24            0           0     1569626           0          0      1569626

 

#show policy-map int gi6/0/24
 GigabitEthernet6/0/24

  Service-policy output: NODROP
    Class-map: class-default (match-any)
      0 packets
      Match: any
      Queueing
      (total drops) 1569626
      (bytes output) 33589434502
      bandwidth 100% (1000000 kbps)
      queue-buffers ratio 100

 

#show platform qos queue config gi 6/0/24
DATA Port:6 GPN:956 AFD:Disabled QoSMap:1 HW Queues: 48 - 55
  DrainFast:Disabled PortSoftStart:2 - 10000
----------------------------------------------------------
  DTS Hardmax   Softmax  PortSMin GlblSMin  PortStEnd
  --- --------  -------- -------- --------- ---------
 0   1  4     0  8 10000  7   800   3   300   2 10000


 Priority   Shaped/shared   weight  shaping_step
 --------   ------------   ------  ------------
 0      7     Shared            50           0

   Weight0 Max_Th0 Min_Th0 Weigth1 Max_Th1 Min_Th1 Weight2 Max_Th2 Min_Th2
   ------- -------  ------  ------  ------  ------  ------  ------ ------
 0      0    7968       0       0    8906       0       0   10000       0

 

#show platform qos queue stats gi 6/0/24
DATA Port:6 Enqueue Counters
-------------------------------
Queue Buffers Enqueue-TH0 Enqueue-TH1 Enqueue-TH2
----- ------- ----------- ----------- -----------
    0       0           0           0 33589711669

DATA Port:6 Drop Counters
-------------------------------
Queue Drop-TH0    Drop-TH1    Drop-TH2    SBufDrop    QebDrop
----- ----------- ----------- ----------- ----------- -----------
    0           0           0     1569626           0           0

 

 

interface GigabitEthernet6/0/24
 switchport access vlan 10
 switchport mode access
 storm-control broadcast level 1.00 0.50
 storm-control action trap
 channel-group 4 mode active
 spanning-tree portfast
 service-policy output NODROP
end

 

policy-map NODROP
 class class-default
  bandwidth percent 100
  queue-buffers ratio 100

 

 

22 Replies

OK, I am still seeing the drops after implementing the following command and rebooting the 3850 switches (yes, I am seeing this on more than one campus):

 

qos queue-softmax-multiplier 1200

 

Rebooting the switch was prompted by another recent bug mentioned in the previous message, where the softmax command may not take effect until the switch is reloaded.
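
To confirm the multiplier actually took effect after the reload, the Softmax column in the queue config output should be far above its default. A quick check (the 16.x-style syntax is shown on the assumption that it matches the stats command I used below; on the 3.x train the shorter 'show platform qos queue config' form applies):

show platform hardware fed switch active qos queue config interface tenGigabitEthernet 1/0/10
! if the Softmax values are unchanged from the defaults, the multiplier
! did not apply and the reload (or the command) needs revisiting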

 

The drops did change and are now spread across multiple queues:

 

SwitchA#show platform hardware fed switch active qos queue stats interface tenGigabitEthernet 1/0/10

DATA Port:1 Enqueue Counters
-------------------------------
Queue Buffers Enqueue-TH0 Enqueue-TH1 Enqueue-TH2
----- ------- ----------- ----------- -----------
    0       0       88481    43551186     2293139
    1       0           0           0 26434008830

Hello,

 

try implementing the policy below as well:

 

policy-map QUEUE_BUFFERS
 class class-default
  bandwidth percent 100
  queue-buffers ratio 100


interface GigabitEthernet1/0/10
 service-policy output QUEUE_BUFFERS
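
Once attached, it is worth verifying that the policy is actually active on the port and that the queue-buffers ratio shows up (same interface assumed):

show policy-map interface GigabitEthernet1/0/10 output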

Applied and still an issue:

 

SwitchA#sh run int TenGigabitEthernet1/0/4
Building configuration...

Current configuration : 204 bytes
!
interface TenGigabitEthernet1/0/4
 description Switch-to-Switch Trunk
 switchport trunk allowed vlan 1,150,151,998,999
 switchport mode trunk
 load-interval 30
 service-policy output QUEUE_BUFFERS
end

SwitchA#

 

SwitchA#sh int TenGigabitEthernet1/0/4 | in clear|output drops
Last clearing of "show interface" counters 00:01:36
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 4294452164


SwitchA#sh run | sec softmax|QUEUE_BUFFERS
qos queue-softmax-multiplier 1200
policy-map QUEUE_BUFFERS
 class class-default
  bandwidth percent 100
  queue-buffers ratio 100

 


 

That is a massive number of drops within such a short period of time. Are you actually experiencing problems with applications?

 

Can you post the full config of the switch, as well as the output of 'show buffers'?

Georg,

 

That's what's odd: I cannot replicate packet loss with pings, but we have had complaints from users about streaming video buffering and pixelation, and about failed downloads of large files. I can send the complete configuration privately, but cannot post it publicly due to employer rules. I can tell you that this is a fairly dumb switch, pure layer-two distribution. All of the ports are basically configured like the one I previously provided. All LAN policy is enforced at the 6800-series core or the 2960-X-series access edge. These 3850s are trunks in and trunks out, with no policy other than the one you had me test. Here is the show buffers output:

 

SwitchA#show buffers
Buffer elements:
2171 in free list
176558819 hits, 0 misses, 2270 created

Public buffer pools:
Small buffers, 104 bytes (total 6144, permanent 6144):
6143 in free list (2048 min, 8192 max allowed)
283430448 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
Medium buffers, 256 bytes (total 3000, permanent 3000):
2994 in free list (64 min, 3000 max allowed)
25997005 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
Middle buffers, 600 bytes (total 512, permanent 512):
511 in free list (64 min, 1024 max allowed)
62702327 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
Big buffers, 1536 bytes (total 1000, permanent 1000):
1000 in free list (64 min, 2000 max allowed)
242602015 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
VeryBig buffers, 4520 bytes (total 10, permanent 10, peak 66 @ 1w2d):
10 in free list (0 min, 100 max allowed)
4296680 hits, 73 misses, 146 trims, 146 created
0 failures (0 no memory)
Large buffers, 9240 bytes (total 20, permanent 20):
20 in free list (10 min, 30 max allowed)
8106 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
Huge buffers, 18024 bytes (total 2, permanent 2, peak 5 @ 6d08h):
2 in free list (0 min, 13 max allowed)
376613 hits, 47 misses, 94 trims, 94 created
0 failures (0 no memory)

Interface buffer pools:
CF Small buffers, 104 bytes (total 101, permanent 100, peak 101 @ 4w2d):
101 in free list (100 min, 200 max allowed)
0 hits, 0 misses, 273 trims, 274 created
0 failures (0 no memory)
UDLD Private Pool buffers, 256 bytes (total 120, permanent 120):
60 in free list (60 min, 120 max allowed)
60 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
60 max cache size, 60 in cache
0 hits in cache, 0 misses in cache
Generic ED Pool buffers, 512 bytes (total 101, permanent 100, peak 101 @ 4w2d):
101 in free list (100 min, 100 max allowed)
0 hits, 0 misses
CF Middle buffers, 600 bytes (total 101, permanent 100, peak 101 @ 4w2d):
101 in free list (100 min, 200 max allowed)
0 hits, 0 misses, 273 trims, 274 created
0 failures (0 no memory)
LI Middle buffers, 600 bytes (total 512, permanent 256, peak 512 @ 4w2d):
256 in free list (256 min, 768 max allowed)
171 hits, 85 fallbacks, 0 trims, 256 created
0 failures (0 no memory)
256 max cache size, 256 in cache
0 hits in cache, 0 misses in cache
Syslog ED Pool buffers, 600 bytes (total 433, permanent 432, peak 433 @ 4w2d):
401 in free list (432 min, 432 max allowed)
687 hits, 0 misses
CF Big buffers, 1536 bytes (total 26, permanent 25, peak 26 @ 4w2d):
26 in free list (25 min, 50 max allowed)
0 hits, 0 misses, 273 trims, 274 created
0 failures (0 no memory)
LI Big buffers, 1536 bytes (total 512, permanent 256, peak 512 @ 4w2d):
256 in free list (256 min, 768 max allowed)
171 hits, 85 fallbacks, 0 trims, 256 created
0 failures (0 no memory)
256 max cache size, 256 in cache
0 hits in cache, 0 misses in cache
mgmt0 buffers, 2240 bytes (total 2400, permanent 2400):
1200 in free list (0 min, 2400 max allowed)
1200 hits, 0 fallbacks
1200 max cache size, 176 in cache
1024 hits in cache, 0 misses in cache
IPC buffers, 4096 bytes (total 18042, permanent 18042):
18038 in free list (6014 min, 60140 max allowed)
15144581 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
CF VeryBig buffers, 4520 bytes (total 3, permanent 2, peak 3 @ 4w2d):
3 in free list (2 min, 4 max allowed)
0 hits, 0 misses, 273 trims, 274 created
0 failures (0 no memory)
LI Very Big buffers, 4520 bytes (total 256, permanent 128, peak 256 @ 4w2d):
128 in free list (128 min, 384 max allowed)
86 hits, 42 fallbacks, 0 trims, 128 created
0 failures (0 no memory)
128 max cache size, 128 in cache
0 hits in cache, 0 misses in cache
CF Large buffers, 5024 bytes (total 2, permanent 1, peak 2 @ 4w2d):
2 in free list (1 min, 2 max allowed)
0 hits, 0 misses, 273 trims, 274 created
0 failures (0 no memory)
IPC Medium buffers, 16384 bytes (total 2, permanent 2):
2 in free list (1 min, 8 max allowed)
361945 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
Private Huge IPC buffers, 18024 bytes (total 2, permanent 2):
2 in free list (0 min, 13 max allowed)
0 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
Private Huge buffers, 65280 bytes (total 2, permanent 2):
2 in free list (0 min, 13 max allowed)
2 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
IPC Large buffers, 65535 bytes (total 17, permanent 16, peak 17 @ 4w2d):
17 in free list (16 min, 16 max allowed)
0 hits, 0 misses, 44162 trims, 44163 created
0 failures (0 no memory)

Header pools:

Particle Clones:
1024 clones, 0 hits, 0 misses

Public particle pools:
Global FS Particle Pool buffers, 704 bytes (total 512, permanent 512):
0 in free list (0 min, 512 max allowed)
512 hits, 0 misses
512 max cache size, 512 in cache
0 hits in cache, 0 misses in cache

Private particle pools:
IBC0/1 buffers, 704 bytes (total 114688, permanent 114688):
0 in free list (114688 min, 114688 max allowed)
114688 hits, 0 misses
114688 max cache size, 57344 in cache
139935898 hits in cache, 0 misses in cache

Hello,

 

you can send me the configuration in a private message.

 

By the way, I seem to remember seeing a port channel somewhere in your output. Did you apply the service policy to the port-channel interface?
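
If I remember correctly, queuing policies on this platform need to sit on the physical member ports rather than on the port-channel interface itself, so something along these lines (the member port numbers are only an example, adjust them to your bundle):

interface range GigabitEthernet1/0/49 - 50
 service-policy output QUEUE_BUFFERS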

Also, the ports to which I applied the service policy appear to be dropping at a much higher rate than before, and their drop counts are nearly identical:

 

Interfaces that have the output service policy applied:

Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 4294510964
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 4294699688
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 4294238393
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 4293845984
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 4294854500
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 4294828638
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 4294964380

 

Interfaces that do not have the output service policy applied:
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 2800
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 7000
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 11200
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 11200
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 1400
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 22590
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 5600
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 11200
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 1400

Remove all the stuff I recommended, it doesn't seem to help, we'll have to look elsewhere...