
LLQ and CBWFQ Functionality During Non-Congestion

Question... During times of no congestion on an interface with LLQ configured, is the bandwidth truly reserved for the LLQ queue, or can other traffic classes use as much bandwidth as they need until there is congestion? For example, we have a FastEthernet interface with an LLQ queue defined for 50% of the bandwidth. Does that mean that even if the traffic defined in the LLQ (voice, for example) is only using 1Mb, there is 49Mb of wasted bandwidth that other traffic classes cannot use? Or does the LLQ bandwidth only come into play if/when there is congestion on the interface?

InayathUlla Sharieff
Cisco Employee

The LLQ bandwidth defined under the interface only comes into the picture when there is congestion; when there is no congestion, it can be utilized by the other classes.
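As a concrete sketch of the scenario in the original question (the interface, class, and policy names here are my own assumptions, not taken from the thread):

```
class-map match-all VOICE
 match dscp ef
!
policy-map WAN-EDGE
 class VOICE
  priority percent 50
 class class-default
  fair-queue
!
interface FastEthernet0/0
 service-policy output WAN-EDGE
```

Here `priority percent 50` guarantees (and, under congestion, polices) half the interface bandwidth for the VOICE class; when the interface is not congested, class-default traffic is free to use the full 100Mb.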






*Please rate all useful posts.

So what is defined as 'congestion'? Is congestion reaching the interface bandwidth (or configured bandwidth), or is it just exceeding the hardware buffer, a.k.a. the 'tx-ring-limit'? I'm trying to get a good answer on when to expect packets to reach the software queues (CBWFQ, LLQ, etc.). The reason is that I have a Serial T3 interface (shaped down to 10Mb, since that's the CIR) on which I keep seeing drops and queued packets in the 'class-default' queue, even though I never see the bandwidth reaching the limit when these drops/queues occur. So I'm confused as to why packets are reaching the queue to begin with, since there's plenty of free bandwidth, and also why there are drops.



Can you please share "show ver", "show module", and the interface configuration? I suspect that on a few LAN cards, inter-CoS bursting is not supported.




After further research and working with TAC, it was determined that the drops I am seeing on what appears to be an underutilized interface are due to micro-bursts. The micro-bursts fill up the tx-ring and software queues, which causes the drops I am seeing. And since micro-bursts are very short spikes, they don't show up in SNMP or interface usage averages.
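The TAC finding can be illustrated with a toy simulation (hypothetical code, not anything Cisco ships; the ring size, drain rate, and burst size are made-up numbers): a single 50-packet micro-burst overflows a 16-slot tx-ring even though the average offered load is a tiny fraction of the line rate.

```python
# Toy FIFO tx-ring model: drained at a fixed line rate each tick,
# fed by per-tick arrivals. All numbers below are illustrative.

def simulate(arrivals_per_tick, ring_size, drain_per_tick):
    """Return (total_drops, peak_queue_depth) for a tail-drop FIFO ring."""
    queue = drops = peak = 0
    for arriving in arrivals_per_tick:
        queue += arriving
        if queue > ring_size:                   # ring full: tail-drop the excess
            drops += queue - ring_size
            queue = ring_size
        peak = max(peak, queue)
        queue = max(0, queue - drain_per_tick)  # line rate drains the ring
    return drops, peak

# 1000 ticks of silence except one 50-packet burst in a single tick.
traffic = [0] * 1000
traffic[500] = 50

drops, peak = simulate(traffic, ring_size=16, drain_per_tick=2)
# Average load is 50/1000 = 0.05 packets/tick against a line rate of
# 2 packets/tick (2.5% utilization), yet the burst still overflows the ring.
```

A poller averaging over minutes would report roughly 2.5% utilization here, which is exactly why the drops looked inexplicable from the interface counters.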

Hi, did the TAC guys tell you how to detect micro-bursts then? Thanks, Milan




I consider you to have congestion any time a frame/packet needs to be queued.

As your later post describes, congestion issues can occur on a micro time scale, i.e. multi-second to subsecond, and are often "invisible" to higher-level monitoring tools that sample on multi-minute time scales.

A short-term high ingress transmission rate can often easily overflow a slower egress interface's queue.

The resulting drops may cause a fast sender to slow down or pause (a lot), which lowers its average transmission rate - this also "hides" the issue from bandwidth monitoring.



Akash Agrawal
Cisco Employee



As per my understanding, the LLQ configuration affects how the scheduler decides which packet to dequeue next. With LLQ, the scheduler always checks the LLQ first: if there is a packet in the LLQ, it dequeues it; if not, it picks the next packet from the non-LLQ queues. With this logic, if there is no traffic in the LLQ, the scheduler will pick continuously from the non-LLQ queues as long as the tx-ring has room. If there is no room, it waits and then applies the same logic again. So if there is no traffic in the LLQ, the other classes can utilize the complete available/link bandwidth.
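The dequeue logic described above can be sketched as follows (a simplified model under my own assumptions, not IOS source; real CBWFQ weights the non-LLQ classes by their configured bandwidth rather than plain round-robin):

```python
from collections import deque

class Scheduler:
    """Strict-priority LLQ over round-robin non-LLQ queues (simplified)."""

    def __init__(self, llq, other_queues):
        self.llq = llq
        self.others = other_queues
        self.rr = 0  # round-robin pointer for the non-LLQ classes

    def dequeue_next(self):
        if self.llq:                       # LLQ is always checked first
            return self.llq.popleft()
        n = len(self.others)
        for i in range(n):                 # otherwise serve the non-LLQ queues
            q = self.others[(self.rr + i) % n]
            if q:
                self.rr = (self.rr + i + 1) % n
                return q.popleft()
        return None                        # nothing to send

sched = Scheduler(deque(), [deque(["d1", "d2"]), deque(["e1"])])
first = sched.dequeue_next()    # LLQ empty, so a data packet is served: "d1"
sched.llq.append("v1")          # a voice packet arrives
second = sched.dequeue_next()   # the LLQ packet is served next: "v1"
```

While the LLQ stays empty, every dequeue comes from the data classes - that is, they get the whole link, which matches the behavior described in the answers above.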


I hope this explanation helps.