QOS (HQF) : Trying to find out why output queue drop happens

Hi,

   I've got a router with a Gigabit interface connected to an ISP. I bought a 30 Mbps point-to-point link (leased line) from them. To protect critical applications, I need to shape the Gigabit interface down to 30 Mbps and then prioritize the critical applications with CBWFQ.

   It seems to work, but I always see total drops in "show policy-map interface gx/y". I know of two possible reasons:

1. WRED

2. I have exceeded the configured queue limit.

   I focused on the second reason and increased the queue-limit to 32768 (the maximum), but I still see total drops.

#########

(queue depth/total drops/no-buffer drops) 0/1112/0

#########

   And "show interface gx/y summary" shows a nonzero OQD counter (packets dropped from the output queue).

#######################################

CHILD:

policy-map CHILD
 class CRITICAL_APP
  bandwidth 27000
  queue-limit 32768 packets
!

PARENT:

policy-map PARENT
 class class-default
  shape average 30000000
  queue-limit 32768 packets
  service-policy CHILD
!

interface gx/y
 service-policy output PARENT
!

#######################################

       I know it's only going to kick in when the interface gets congested. Please share your ideas/experience. What have I missed?

TIA

Toshi

5 Replies

paolo bevilacqua
Hall of Fame

Drops are totally normal, otherwise the traffic contract would be violated.

Post the complete show interface and show policy output.

Hi,

     Paolo, how have you been?

     I'm trying to avoid getting drops from the ISP. That's why I slow down (shape) the critical applications. I'd like to know what causes the total drops in this case.

TIA

Toshi

Your traffic bursts cause the drops. They must happen for TCP to slow down its sending. You can shape and these will still happen, as it is perfectly normal and expected.

All is good with me, and I hope the same for you. Thanks for the nice rating and good luck!

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Trying to avoid all packet drops can be counterproductive.  For example, as Paolo has already (correctly) noted, TCP uses drops as a flow-control mechanism.  Even if very large queue depths avoid any drops, they can create highly variable (and perhaps high) latency.  Consider that with your queue depth set to 32,768 packets, the last packet in the queue has to wait for transmission of the 32,767 packets ahead of it.

Generally, to allow TCP to reach its maximum transmission rate, your queue depth should not need to be larger than about half the BDP (bandwidth-delay product) to the furthest TCP destination.
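As a rough worked example of that rule of thumb (assuming a 100 ms RTT to the furthest destination and 1500-byte packets; substitute your own measured RTT and typical packet size):

```
! BDP = 30,000,000 bit/s x 0.1 s = 3,000,000 bits = 375,000 bytes
! 375,000 bytes / 1500 bytes per packet = 250 packets; half = ~125 packets
policy-map PARENT
 class class-default
  shape average 30000000
  queue-limit 128 packets
```

A queue-limit on this order keeps worst-case queuing delay bounded (128 packets x 1500 bytes at 30 Mbps is roughly 50 ms) instead of the multi-second delays a 32,768-packet queue would allow.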

If you're using HQF QoS, I would suggest enabling fair-queue in your CRITICAL_APP class.  I would also suggest explicitly defining the class-default class in your child policy and enabling fair-queue in it too.
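A sketch of what that child policy might look like, reusing the class names from the original post (verify that fair-queue is supported under a bandwidth class on your IOS version):

```
policy-map CHILD
 class CRITICAL_APP
  bandwidth 27000
  fair-queue
 class class-default
  fair-queue
```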

Fair queuing's two major advantages: low-volume flows are likely to be less impacted by high-volume flows, both in latency and in packet drops.  In fact, packet drops tend to hit only the high-volume flows, i.e. the flow(s) causing the congestion, which are exactly the flow(s) we want to "signal" to slow their transmission rate.

This is not for the topic starter, but for those who find this topic while searching the web.

 

The drops may come from the hold-queue not being adjusted on the interface; if you are allocating more than a 1000-packet queue to a single physical interface, "hold-queue out" must be configured under the physical interface.
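For example, to match the 32,768-packet queue-limit used in the policy above, something like this under the physical interface (size the value to your actual queue-limit; 32768 here simply mirrors the earlier configuration):

```
interface gx/y
 hold-queue 32768 out
```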

More details may be found here - http://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/quality-of-service-qos/sol_ov_c22-708224.html

Please pay attention to the behaviour difference between versions pre- and post-CSCtq27141 (an enhancement integrated into 15.1(2)T5+, 15.2(1)T2+, 15.0(1)M8+, 15.1(4)M4+, 15.2(4)M+, etc.).
