14206 Views | 4 Helpful | 6 Replies

High Total Output Drops/OutDiscards on Catalyst 2960S/2960X

g.ska
Level 1

Hi everyone,

After many tests, I'm seeing a lot of "Total Output Drops" (with the show interfaces command) and OutDiscards (with the show interfaces counters errors command).

Does anyone have any idea what the issue might be? Do you think it could be a problem with buffer allocation? Could you help me out and give me a very basic explanation of buffer allocation and the egress queues?

Many thanks.

Greg.

Note: Here's the output of several commands for int gig1/0/8 (perhaps more readable in the attached .txt file).

show interfaces counters errors:

Port      Align-Err   FCS-Err   Xmit-Err   Rcv-Err   UnderSize   OutDiscards
Gi1/0/1           0         0          0         0           0             0
Gi1/0/2           0         0          0         0           0             0
Gi1/0/3           0         0          0         0           0            15
Gi1/0/4           0         0          0         0           0             0
Gi1/0/5           0         0          0         0           0             0
Gi1/0/6           0         0          0         0           0             0
Gi1/0/7           0         0          0         0           0             0
Gi1/0/8           0         0          0         0           0      35265929

show interfaces:

GigabitEthernet1/0/8 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is c07b.bc66.ea88 (bia c07b.bc66.ea88)
Description: *** GBE1 ***
MTU 9000 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 103/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:01, output hang never
Last clearing of "show interface" counters 17:30:13
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 35209937
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 5000 bits/sec, 1 packets/sec
5 minute output rate 406185000 bits/sec, 7257 packets/sec
94182 packets input, 46089660 bytes, 0 no buffer
Received 4753 broadcasts (3474 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 3474 multicast, 0 pause input
0 input packets with dribble condition detected
467233249 packets output, 3266747929861 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out

show interfaces summary:


*: interface is up
IHQ: pkts in input hold queue     IQD: pkts dropped from input queue
OHQ: pkts in output hold queue    OQD: pkts dropped from output queue
RXBS: rx rate (bits/sec)          RXPS: rx rate (pkts/sec)
TXBS: tx rate (bits/sec)          TXPS: tx rate (pkts/sec)
TRTL: throttle count

Interface               IHQ   IQD   OHQ        OQD   RXBS   RXPS        TXBS   TXPS   TRTL
-------------------------------------------------------------------------------------------
  Vlan1                   0     0     0          0      0      0           0      0      0
* Vlan518                 0     0     0          0      0      0           0      0      0
* Vlan610                 0     0     0          0      0      0           0      0      0
* Vlan622                 0     0     0          0   3000      1           0      0      0
  FastEthernet0           0     0     0          0      0      0           0      0      0
  GigabitEthernet1/0/1    0     0     0          0      0      0           0      0      0
  GigabitEthernet1/0/2    0     0     0          0      0      0           0      0      0
* GigabitEthernet1/0/3    0     0     0         15   3000      0        8000      4      0
  GigabitEthernet1/0/4    0     0     0          0      0      0           0      0      0
  GigabitEthernet1/0/5    0     0     0          0      0      0           0      0      0
  GigabitEthernet1/0/6    0     0     0          0      0      0           0      0      0
  GigabitEthernet1/0/7    0     0     0          0      0      0           0      0      0
* GigabitEthernet1/0/8    0     0     0   35176753   5000      1   406286000   7268      0

show mls qos interface statistics:

output queues enqueued:
 queue:    threshold1    threshold2    threshold3
 -------------------------------------------------
 queue 0:           0             0             0
 queue 1:     6687771         23905       5842429
 queue 2:           0             0             0
 queue 3:           0             0    2445953914

output queues dropped:
 queue:    threshold1    threshold2    threshold3
 -------------------------------------------------
 queue 0:           0             0             0
 queue 1:           0             0             0
 queue 2:           0             0             0
 queue 3:           0             0    1974565047

show buffers:

Buffer elements:
Buffer elements:
2312 in free list (500 max allowed)
294159252 hits, 0 misses, 1641 created

Public buffer pools:
Small buffers, 104 bytes (total 50, permanent 50, peak 382 @ 6w0d):
49 in free list (20 min, 150 max allowed)
144349681 hits, 353 misses, 1026 trims, 1026 created
0 failures (0 no memory)
Middle buffers, 600 bytes (total 25, permanent 25, peak 73 @ 6w0d):
25 in free list (10 min, 150 max allowed)
12877 hits, 16 misses, 48 trims, 48 created
0 failures (0 no memory)
Big buffers, 1536 bytes (total 67, permanent 50, peak 167 @ 2w5d):
37 in free list (5 min, 150 max allowed)
22072067 hits, 1359 misses, 4232 trims, 4249 created
0 failures (0 no memory)
VeryBig buffers, 4520 bytes (total 10, permanent 10):
9 in free list (0 min, 100 max allowed)
1 hits, 0 misses, 0 trims, 0 created
0 failures (0 no memory)
Large buffers, 5024 bytes (total 1, permanent 0, peak 1 @ 6w0d):
1 in free list (0 min, 10 max allowed)
0 hits, 0 misses, 380 trims, 381 created
0 failures (0 no memory)
Huge buffers, 18024 bytes (total 4, permanent 0, peak 8 @ 6w0d):
4 in free list (0 min, 4 max allowed)
6026578 hits, 93348 misses, 186455 trims, 186459 created
0 failures (0 no memory)

Interface buffer pools:
Syslog ED Pool buffers, 600 bytes (total 133, permanent 132, peak 133 @ 6w0d):
101 in free list (132 min, 132 max allowed)
3360 hits, 0 misses
RxQ12 buffers, 2176 bytes (total 115, permanent 115):
19 in free list (0 min, 115 max allowed)
96 hits, 0 misses
HRPCRespFallback buffers, 1500 bytes (total 80, permanent 80):
80 in free list (0 min, 80 max allowed)
0 hits, 0 misses
IPC buffers, 2048 bytes (total 300, permanent 300):
299 in free list (150 min, 500 max allowed)
1 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
RxQFB buffers, 2176 bytes (total 904, permanent 904):
904 in free list (0 min, 904 max allowed)
3181 hits, 0 misses
RxQ14 buffers, 2176 bytes (total 64, permanent 64):
16 in free list (0 min, 64 max allowed)
48 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
RxQ0 buffers, 2176 bytes (total 1200, permanent 1200):
700 in free list (0 min, 1200 max allowed)
500 hits, 0 misses
RxQ2 buffers, 2176 bytes (total 153, permanent 153):
25 in free list (0 min, 153 max allowed)
128 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
RxQ1 buffers, 2176 bytes (total 153, permanent 153):
25 in free list (0 min, 153 max allowed)
29396067 hits, 0 fallbacks
RxQ4 buffers, 2176 bytes (total 153, permanent 153):
25 in free list (0 min, 153 max allowed)
935830 hits, 0 misses
RxQ15 buffers, 2176 bytes (total 4, permanent 4):
0 in free list (0 min, 4 max allowed)
73438730 hits, 73438726 misses
RxQ10 buffers, 2176 bytes (total 76, permanent 76):
12 in free list (0 min, 76 max allowed)
6510099 hits, 0 fallbacks
RxQ7 buffers, 2176 bytes (total 192, permanent 192):
64 in free list (0 min, 192 max allowed)
6339 hits, 0 misses
RxQ8 buffers, 2176 bytes (total 76, permanent 76):
12 in free list (0 min, 76 max allowed)
8251940 hits, 0 misses
RxQ5 buffers, 2176 bytes (total 128, permanent 128):
64 in free list (0 min, 128 max allowed)
64 hits, 0 misses
RxQ6 buffers, 2176 bytes (total 153, permanent 153):
25 in free list (0 min, 153 max allowed)
128 hits, 0 misses
RxQ3 buffers, 2176 bytes (total 153, permanent 153):
25 in free list (0 min, 153 max allowed)
8585557 hits, 3181 fallbacks
RxQ11 buffers, 2176 bytes (total 19, permanent 19):
3 in free list (0 min, 19 max allowed)
16 hits, 0 misses
Jumbo buffers, 9368 bytes (total 200, permanent 200):
200 in free list (0 min, 200 max allowed)
0 hits, 0 misses
IPC Medium buffers, 16384 bytes (total 2, permanent 2):
2 in free list (1 min, 8 max allowed)
0 hits, 0 fallbacks, 0 trims, 0 created
0 failures (0 no memory)
IPC Large buffers, 65535 bytes (total 2, permanent 2, peak 17 @ 6w0d):
2 in free list (1 min, 8 max allowed)
0 hits, 0 misses, 1 trims, 1 created
0 failures (0 no memory)


6 Replies

Joseph W. Doherty
Hall of Fame
(Accepted Solution)

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

I understand the 2960 architecture to be similar to the 3560/3750 architecture but with even less RAM available for egress buffering.

On the 3750 series, when dealing with bursty traffic, I've had great success tuning the hardware buffers. Basically, rather than using the defaults, which reserve 50% of your buffer space for interfaces (with that reservation split across each egress queue [all this assuming QoS is enabled]), I reduce the interface reservations (putting more RAM into the common pool) and increase how much buffer RAM an interface can dynamically take.
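
Before changing anything, it's worth confirming where the drops are landing and what the current allocations look like. On the 2960/3750 family, something like the following should do it (exact output varies by IOS version; the interface below is just the congested port from this thread):

show mls qos queue-set 1
  (current buffers / threshold1 / threshold2 / reserved / maximum for each of the four egress queues)

show mls qos interface gigabitEthernet 1/0/8 statistics
  (per-queue enqueue and drop counters, as in the output posted above)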

Hi joseph,

Thank you for your advice. I tried reducing the "reserved" threshold from 50 to 20 for each queue and increasing "buffer", "threshold 1" and "maximum" for queue 1 (with QoS enabled), and it works like a charm!!! Before, I had more than 19,900 output drops with the default settings; now I see fewer than 70 output drops in the same test!!! :-) :-)

Thank you very much for your help.

Best regards,

Greg.
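
A sketch of what a tuning along the lines Greg describes could look like (illustrative values only, not his actual config, which was never posted; check show mls qos queue-set 1 for your platform's defaults before changing anything):

! illustrative sketch only - not the actual applied configuration
! give queue 1 a larger share of the buffer pool
mls qos queue-set output 1 buffers 40 20 20 20
! queue 1: raise threshold1 and maximum, lower reserved from 50 to 20
! (a maximum of 1600 needs a later IOS; older images cap at 400)
mls qos queue-set output 1 threshold 1 400 400 20 1600
! queues 2-4: keep drop thresholds near defaults, just lower reserved to 20
mls qos queue-set output 1 threshold 2 200 200 20 400
mls qos queue-set output 1 threshold 3 100 100 20 400
mls qos queue-set output 1 threshold 4 100 100 20 400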

Hi Greg/Joseph. I read your thread with great interest, as I'm experiencing what seems to be a very similar situation. The bursty nature of my traffic (Zerto replication) causes huge increases in output drops on interfaces whose bit rate differs from the others. I'd really love to see what your config statements look like and give them a whirl. You guys would really be helping me out.

Thank you for reading

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

mls qos queue-set output 1 threshold 1 <max - tail drop> <max - tail drop> <int % reserved> <max - tail drop>
mls qos queue-set output 1 threshold 2 <max - tail drop> <max - tail drop> <int % reserved> <max - tail drop>
mls qos queue-set output 1 threshold 3 <max - tail drop> <max - tail drop> <int % reserved> <max - tail drop>
mls qos queue-set output 1 threshold 4 <max - tail drop> <max - tail drop> <int % reserved> <max - tail drop>

<max - tail drop> = logical queue limits - older IOS versions support up to 400(%), later versions up to 1600(%)

Why they are percentages, and how they map into actual allocations, is rather complicated to explain. What's important is that they set logical queue limits for differently QoS-marked traffic, and unless you want to manage various drop settings, I suggest they all be set the same and increased, often to the maximum possible.

<int % reserved> - controls the buffer space reserved for the interface vs. the common pool. I suggest decreasing this value (i.e. moving reserved interface buffers to the [shared] common pool). I've found you can often take this value down to the minimum supported.
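
For example, an aggressive all-queues version of the above template might look like this (a sketch only: the 1600 values assume a later IOS, older images cap the tail-drop limits at 400, and the reserved value of 10 assumes your image lets you go that low):

mls qos queue-set output 1 threshold 1 1600 1600 10 1600
mls qos queue-set output 1 threshold 2 1600 1600 10 1600
mls qos queue-set output 1 threshold 3 1600 1600 10 1600
mls qos queue-set output 1 threshold 4 1600 1600 10 1600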

Hello

Can you share with us the applied configuration?

Hello G.Ska

Can you share with me the applied configuration?

I have a similar issue on my Cisco 2960.