Output Queue Drops even though the interface is not overloaded :(

taimoor.jan
Level 1

Dear friends,

I am seeing output queue drops in my EDN network at different hops and interfaces. Critical delay- and jitter-sensitive traffic is being affected.

The interfaces are not even crossing 50% bandwidth utilization, but the OQD counter keeps increasing.

Please suggest a solution.

SW02#sh int   gi1/0/10 summ

*:   interface is up
IHQ: pkts in input hold queue     IQD: pkts dropped from input queue
OHQ: pkts in output hold queue   OQD: pkts dropped from output queue
RXBS: rx rate (bits/sec)         RXPS: rx rate (pkts/sec)
TXBS: tx rate (bits/sec)         TXPS: tx rate (pkts/sec)
TRTL: throttle count

  Interface               IHQ   IQD   OHQ   OQD RXBS RXPS   TXBS TXPS TRTL
-------------------------------------------------------------------------
* GigabitEthernet1/0/10   0       0   0 26165 109000   23 2179000   1131   0

SW02#sh int   gi1/0/10
GigabitEthernet1/0/10 is up, line   protocol is up (connected)
  Hardware is Gigabit Ethernet, address is 0024.13c6.1f47 (bia   0024.13c6.1f47)
  Description: *** GE-UTP ***
  Internet address is 10.151.102.37/30
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
       reliability 255/255, txload 5/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:06, output 00:00:03, output hang never
  Last clearing of "show interface" counters 5w0d
Input queue: 0/75/0/0   (size/max/drops/flushes); Total output drops: 26165
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 131000 bits/sec, 27 packets/sec
  30 second output rate 2208000 bits/sec, 1128 packets/sec
       901055158 packets input, 515900143345 bytes, 0 no buffer
       Received 418735 broadcasts (0 IP multicasts)
       0 runts, 0 giants, 0 throttles
       0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
       0 watchdog, 418735 multicast, 0 pause input
       0 input packets with dribble condition detected
       6726652514 packets output, 2451934874197 bytes, 0 underruns
       0 output errors, 0 collisions, 0 interface resets
       0 babbles, 0 late collision, 0 deferred
       0 lost carrier, 0 no carrier, 0 PAUSE output
       0 output buffer failures, 0 output buffers swapped out
11 Replies

Ryan Newell
Cisco Employee

Hello,

  Output queue drops are typically caused by congested interfaces. The congestion can be the result of constant high-rate flows or of short bursts of data. Looking at your output, I would say it is a result of short bursts of data. To address this, you would need to configure queuing so that your delay- and jitter-sensitive traffic gets preferential treatment. Are you familiar with Class-Based Weighted Fair Queuing?
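For reference, a minimal CBWFQ/LLQ sketch on a router-style platform might look like the following. The class names and percentages here are placeholders, not taken from this thread, and on Catalyst access switches an output `service-policy` is often not supported, so verify against your platform's QoS guide:

```
! Hypothetical example: names and values are placeholders.
class-map match-any VOICE
 match dscp ef
!
policy-map WAN-OUT
 class VOICE
  priority percent 20        ! strict-priority (LLQ) queue for delay/jitter-sensitive traffic
 class class-default
  fair-queue                 ! weighted fair queuing for all remaining traffic
!
interface GigabitEthernet1/0/10
 service-policy output WAN-OUT
```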

  That's my two cents..

Regards,
Ryan

my hierarchy is as follows:

server <--> access switch <-->MPLS domain<--> access switch <-->server

Mostly trunk ports are being used with SVIs. Would I have to deploy QoS everywhere?

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

I agree with Ryan that your drops are most likely due to short bursts (since you only see < 50% utilization).  Your drop percentage is also very low, which is often another indication of transient burst congestion.

QoS queuing, as Ryan describes, is what might be used to manage congestion, since you prioritize how traffic will be forwarded at the congested point.  However, if only drops are an issue, a possible alternative is simply to increase the queue depths where you see the drops.  This might be enough to mitigate the drops, yet not increase latency enough to harm the traffic.  For 100 Mbps, you might be fine with a queue depth of 400 vs. the default of 40.
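As a sketch, increasing the output hold queue to the suggested depth of 400 would look like this. Note the caveat raised later in this thread: on hardware-switched Catalyst ports these drops usually come from ASIC buffers, not the software hold queue, in which case this command will not help:

```
! Sketch only: raises the software output hold queue from the default 40 to 400.
interface GigabitEthernet1/0/10
 hold-queue 400 out
```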

Well, any link that is dropping, for starters. Typically this will be where you have a speed mismatch (a high-speed LAN link into the router and a low-speed WAN link out) or at aggregation points. If this is on a Catalyst platform, you may need to configure another form of QoS. What model is your access switch? Using your diagram as a reference, where would interface Gi1/0/10 be?

Regards,

Ryan

taimoor.jan
Level 1

I think "hold que in " will solve the issue, bcz the que size is set on 75, increasing it will not drop these packets.


taimoor.jan wrote:

I think "hold que in " will solve the issue, bcz the que size is set on 75, increasing it will not drop these packets.

"in"?  The drops on your post appear to be "out".

yeah "out".

Basically, input and output hold queues (on platforms where they are present) are only used for software-switched traffic. In this case we clearly have an output buffer overflow, which is a different problem. Hold-queue tuning will not help. A solution can be configuring QoS and assigning a bigger threshold for the output queue. A port has limited thresholds assigned to it by default, but that is tunable, so you can get more buffers from the common queues.

Ideally you would configure QoS, split your traffic into several queues, and first share the port buffer among those queues in proportion to the traffic. If drops still happen (and I guess they will), tune the thresholds to get additional buffers from the common pool.
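On a Catalyst 3750-class switch, this kind of buffer and threshold tuning is done with queue-sets. A hypothetical sketch follows; the buffer split and threshold percentages are illustrative only and should be validated against your platform's QoS configuration guide before use:

```
! Hypothetical Catalyst 3750-style example; verify syntax and values for your platform.
mls qos
! Share the port buffer across the four egress queues (queue 2 gets the largest slice here)
mls qos queue-set output 1 buffers 15 55 15 15
! Raise queue 2 thresholds: drop thresholds 100/100, reserve 100, and allow borrowing
! from the common pool up to 3200% of the allocated buffer
mls qos queue-set output 1 threshold 2 100 100 100 3200
!
interface GigabitEthernet1/0/10
 queue-set 1
```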

Hope this helps.

Nik

HTH,
Niko


Nik, excellent points about the difference between software queues and hardware queues, and about assigning larger hardware buffer thresholds!

However, I differ on the opinion that defining multiple queues is necessarily the ideal solution in this instance.  When you do that, you generally split your buffers to support the multiple queues, which reduces the resources available to each.  Often whatever traffic is exhausting resources will then encounter buffer exhaustion even sooner.  (NB: this seems to be a common issue on the 3750 series when QoS is enabled, as noted on these forums; i.e., drops are seen that were not seen before.)

On the other hand, I'm not proposing that enabling QoS features is necessarily bad in this instance either, as splitting traffic into different queues can protect other traffic from drops caused by bandwidth-intensive traffic.  Doing it "right" can be more complex and, again, for brief transient bursts might be counterproductive, at least for simply reducing drops.  (NB: the QoS approach Nik is recommending can offer other advantages, such as reducing latency for some traffic, and you can even "tune" drops, but also with additional complexity.  For example, Nik mentions tuning buffers from the common pool, and this can be done, but with the risk of insufficient buffers for other interfaces.)

Agree with Joseph,

Basically, all tuning is a subject for thorough design and various testing. But if we return to the main topic of this thread, I took that as:

"Critical delay- and jitter-sensitive traffic is being affected."

If we can identify that traffic among everything sent through that interface, then we can apply QoS to it, as this is the primary QoS function. If all traffic on that interface is critical, we might just bundle another link with this one in a port-channel for better throughput.
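Since Gi1/0/10 carries an IP address (a routed port), a bundle would be a routed port-channel. A hedged sketch, assuming a hypothetical second member port Gi1/0/11 and LACP support on the platform:

```
! Hypothetical sketch: bundle Gi1/0/10 with a second link into an LACP port-channel.
interface range GigabitEthernet1/0/10 - 11
 no switchport
 no ip address
 channel-group 1 mode active
!
interface Port-channel1
 no switchport
 ip address 10.151.102.37 255.255.255.252
```

Keep in mind that a port-channel load-balances per flow, so a single large flow still uses only one member link.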

Nik

HTH,
Niko


Touché!  Can't believe I missed that - but I did - Nik you're spot on!
