
Output drops - please help

carl_townshend
Spotlight

Hi All

Can anyone answer this question for me? It has been bugging me for a while.

We see a lot of output drops on switches that are barely being utilised at all. For example, I just checked a 2960 switch.

Why are we getting drops when there is essentially no utilisation?

SHOW INTerfaces FA0/1
FastEthernet0/1 is up, line protocol is up (connected)
Hardware is Fast Ethernet, address is ccd5.39fb.9881 (bia ccd5.39fb.9881)
MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 100Mb/s, media type is 10/100BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:19, output 00:00:01, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 188380
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
629188130 packets input, 172061119628 bytes, 0 no buffer
Received 2027080 broadcasts (1770674 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 1770674 multicast, 0 pause input
0 input packets with dribble condition detected
883478670 packets output, 556799397753 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out

5 Replies

AFROJ AHMAD
Cisco Employee

Hi Carl,

This could be a bug:

Switch stack shows incorrect values for output drops on show interfaces
CSCtq86186

Confirm the drops with the command "show platform port-asic stats drops"; this will show the actual drops.

Use the following link for implementing QoS and queuing on 2960 switches: http://www.cisco.com/en/US/docs/switches/lan/catalyst2960/software/release/12.2_25_see/configuration/guide/swqos.html#wp1203619

Hope the above info will help.
Thanks-
Afroz

Mark Malone
VIP Alumni

Hi

Micro-burst traffic can flood a buffer and cause it to drop traffic. You don't need to be sitting at high utilization for it to happen: if that port receives more than 100 Mbps of traffic even for a second or two, it's too much. The buffers fill and drop whatever goes over in that time, resulting in output drops.
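
To make that concrete, here is a rough back-of-the-envelope sketch in Python (plain arithmetic, nothing Cisco-specific). The buffer size and the traffic pattern are made-up figures for illustration, not the 2960's actual per-port buffer allocation.

# Back-of-the-envelope illustration of a micro burst overflowing an egress buffer.
# All figures here are assumptions for the sketch, not real 2960 buffer numbers.

EGRESS_BPS = 100_000_000      # 100 Mbps port
BUFFER_BYTES = 192_000        # assumed buffer available to this egress queue
SLICE = 0.001                 # examine traffic in 1 ms slices

# Hypothetical arrivals: idle, then a 20 ms burst arriving at ~1 Gbps, then idle.
arrivals = [0] * 100 + [125_000] * 20 + [0] * 100   # bytes arriving in each slice

drain = EGRESS_BPS / 8 * SLICE   # bytes the port can transmit per slice (12,500)
backlog = 0.0
dropped = 0.0

for arrived in arrivals:
    backlog += arrived
    if backlog > BUFFER_BYTES:           # buffer full: the excess becomes output drops
        dropped += backlog - BUFFER_BYTES
        backlog = BUFFER_BYTES
    backlog = max(0.0, backlog - drain)

total_bits = sum(arrivals) * 8
print(f"dropped during burst : {dropped:.0f} bytes")
print(f"5-minute average rate: {total_bits / 300 / 1e3:.0f} kbps")   # ~67 kbps, looks 'idle'

The 5-minute rates in show interfaces average exactly this kind of spike away, which is why the port can look idle while the drop counter keeps climbing.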

A few options for output drops:

Check by plotting graphs to see if micro bursts are causing it:

http://www.cisco.com/c/en/us/support/docs/lan-switching/switched-port-analyzer-span/116260-technote-wireshark-00.html

Use QoS to try and limit the impact; note that QoS can cause more drops if it is not configured correctly.

Get a switch that can handle higher volumes of traffic and has better buffers internally. You will see issues like this on 2960s and 3750s etc., but the newer 3850s and 3650s do not show these issues as much.

Good doc on QoS for output drops:

http://www.cisco.com/c/en/us/support/docs/switches/catalyst-3750-series-switches/116089-technote-switches-output-drops-qos-00.html

Output Drops

Note: Never increase the output queue in an attempt to prevent output drops. If packets stay too long in the output queue, TCP timers can expire and trigger retransmission. Retransmitted packets only congest the outgoing interface even more.

If output drops still occur after you adjust the configuration of the router as recommended, it means that you cannot prevent or decrease output drops. However, you can control them, and this can be as effective as prevention. There are two approaches to control output drops:

  • Congestion management

  • Congestion avoidance

Both approaches are based on traffic classification, and you can use them in parallel.

Congestion management ensures, with appropriate configuration, that important packets are always forwarded, while less important packets are dropped when the link is congested. Congestion management comprises fancy queueing mechanisms such as priority queueing, custom queueing, and weighted fair queueing.

Congestion avoidance is based on intentional packet drops. The window size in TCP connections depends on the round trip time. Therefore, these intentional drops slow down the rate at which the source device sends packets. Congestion avoidance uses weighted random early detection.

If unwanted output drops still occur after you implement these mechanisms, you need to increase the line speed.

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Some commentary . . .

Note: Never increase the output queue in an attempt to prevent output drops. If packets stay too long in the output queue, TCP timers can expire and trigger retransmission. Retransmitted packets only congest the outgoing interface even more.

"Never" is a very strong adjective.

I believe there are instances where increasing the egress queue might be necessary.  For example, to get TCP to fully utilize a path's bandwidth, the TCP receiver's RWIN needs to be able to handle the BDP (bandwidth delay product).  If a network device's egress bandwidth is less than the sender's, that device will queue the sender's TCP transmission bursts.  So, you'll want to ensure that the network device can accept the difference between the sender's transmission rate and the device's transmission rate for half the BDP.

e.g.

You have a 100 Mbps "WAN" interface with a 100 ms RTT, so the BDP is 10,000,000 bits.

If the sender is sending at gig, which is 10x the "WAN" bandwidth, you'll need to queue 90% of half the BDP on the WAN, which is 4,500,000 bits.  If sending 1500 byte packets, you'll need to queue 375 packets.  (Which, if sent at 100 Mbps, will take 45 ms.)

Or, for another example, since network devices queue/count frames/packets, but BDP is in bits, smaller frames/packets will require a larger frame/packet count to meet BDP goals.

In the prior example, if packets were only 150 bytes, you would need to queue 3,750 of them.
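
A quick sketch of that arithmetic in Python (plain math, nothing Cisco-specific), reproducing the 375- and 3,750-packet figures above:

# Queue depth needed to absorb a sender that bursts faster than the egress link,
# following the reasoning above: buffer 90% of half the BDP when the sender is 10x faster.

def queue_depth_packets(egress_bps, rtt_s, sender_bps, packet_bytes):
    bdp_bits = egress_bps * rtt_s                      # bandwidth-delay product
    overspeed_fraction = 1 - egress_bps / sender_bps   # share of the burst the link can't absorb
    to_buffer_bits = overspeed_fraction * bdp_bits / 2 # buffer that share of half the BDP
    return to_buffer_bits / (packet_bytes * 8)

# 100 Mbps "WAN", 100 ms RTT, gig sender
print(queue_depth_packets(100e6, 0.100, 1e9, 1500))   # -> 375.0 packets
print(queue_depth_packets(100e6, 0.100, 1e9, 150))    # -> 3750.0 packets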

If you correctly allow for the above, TCP timeouts due to queuing latency should be a non-issue.

BTW, sometimes you need to reduce a default queue depth to ensure a sender "knows" ASAP when it hits the BDP.

Congestion avoidance is based on intentional packet drops. The window size in TCP connections depends on the round trip time. Therefore, these intentional drops slow down the rate at which the source device sends packets. Congestion avoidance uses weighted random early detection.

If unwanted output drops still occur after you implement these mechanisms, you need to increase the line speed.

CA may indeed use intentional drops, but there are other techniques too.  Some later TCP implementations watch for jumps in latency.  Some traffic shaping appliances might shape TCP ACKs and/or spoof the TCP receiver's advertised window size.  In theory, ECN can be used to indicate congestion w/o the need to drop a packet.

WRED is actually rather complex to get "right".  Unless you're a QoS expert, I advise staying away from it.  FQ tail drop, I believe, often works better than WRED.  In theory, Cisco's FRED should work better than WRED, but it too can still be difficult to get "right".
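
As a rough illustration of why the tuning is fiddly, here is a sketch of the classic RED drop-probability curve in Python (generic RED math, not Cisco's exact WRED implementation); the thresholds and mark probability are arbitrary picks, and small changes to them move the whole curve.

# Classic RED drop probability as a function of the average queue depth.
# Generic RED math for illustration; thresholds and max_p are arbitrary picks.

def red_drop_probability(avg_queue, min_th=20, max_th=40, max_p=0.10):
    """Probability of dropping an arriving packet, given the average queue depth."""
    if avg_queue < min_th:
        return 0.0                                   # below min threshold: never drop
    if avg_queue >= max_th:
        return 1.0                                   # at or above max threshold: drop everything
    # Between the thresholds, the probability ramps linearly up to max_p.
    return max_p * (avg_queue - min_th) / (max_th - min_th)

for depth in (10, 20, 25, 30, 35, 39, 40, 50):
    print(f"avg queue {depth:>2} packets -> drop probability {red_drop_probability(depth):.2%}")

Getting the min/max thresholds and mark probability right for each class and each queue is exactly the part that is hard to get "right".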

With really effective bandwidth management for rate adaptive traffic, you shouldn't get unwanted output drops and shouldn't need to increase bandwidth unless you need to improve throughput rate.  With non-rate adaptive traffic, "right sizing" bandwidth is really a requirement.

Leo Laohoo
Hall of Fame

Carl, 

Look at the ratio of Total Output Drops to the Packets Input and Packets Output counters.
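
Plugging in the counters from the output above (a trivial sanity check in Python, using the numbers Carl posted):

# Drop ratio from the show interfaces counters posted above.
output_drops = 188_380
packets_in = 629_188_130
packets_out = 883_478_670

ratio = output_drops / packets_out   # drops are measured against packets sent out this port
print(f"{ratio:.4%} of output packets dropped")   # roughly 0.02%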

I wouldn't worry about it. However, it is a good exercise for QoS work, for which Joseph is our forum go-to guy.

christo russouw
Level 1

Hi Carl

It might be worth making 100% sure there's "no traffic". I have seen those counters get stuck in the past, although it's quite rare.

Maybe clear the counters and see what the total input and output packet counts say, or check a graphing tool to see how much the link is really utilizing.
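
If clearing the counters isn't an option, another way to watch for movement is to diff two snapshots of the show interfaces output. A small sketch in Python: the regex just matches the "Total output drops" line in the format shown above, and the two snapshot strings are stand-ins for output you would capture yourself a few minutes apart.

import re

# Pull the "Total output drops" counter out of captured `show interfaces` text.
DROPS_RE = re.compile(r"Total output drops:\s*(\d+)")

def output_drops(show_interfaces_text: str) -> int:
    match = DROPS_RE.search(show_interfaces_text)
    if match is None:
        raise ValueError("Total output drops counter not found")
    return int(match.group(1))

# Stand-in snapshots; in practice, capture the command output twice and compare.
before = "Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 188380"
after  = "Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 188415"

delta = output_drops(after) - output_drops(before)
print(f"output drops increased by {delta} since the first snapshot")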