2584 Views · 10 Helpful · 10 Replies

Justification for bandwidth limit when using the QoS priority percent command

pweinhold
Level 1

Hi,

When using the priority percent and bandwidth percent commands in a QoS policy-map, the priority percent option reserves a certain percentage of an interface's bandwidth, but it also caps the class at that bandwidth: any priority traffic that exceeds its reserved bandwidth may be dropped. On the other hand, non-priority traffic that has bandwidth reserved with the bandwidth percent command is allowed to exceed its reserved bandwidth.
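For reference, a minimal policy-map illustrating the two commands (class names and percentages are hypothetical; the class-maps are assumed to be defined elsewhere):

```
policy-map WAN-EDGE
 class VOICE
  priority percent 20     ! LLQ: guaranteed 20%, policed to 20% during congestion
 class BULK-DATA
  bandwidth percent 30    ! guaranteed 30%, may use more when bandwidth is free
 class class-default
  fair-queue
```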

Why is that? Isn't it likely that your priority traffic might be your most important, or at least the kind of traffic that isn't tolerant of drops, such as real-time voice or video? Why would Cisco put that limitation on the priority percent command?

Thanks.

10 Replies

cflory
Level 1

From:

http://www.cisco.com/en/US/tech/tk543/tk757/technologies_tech_note09186a0080103eae.shtml#configuringthebandwidthcommand

"During congestion conditions, the traffic class is guaranteed bandwidth equal to the specified rate. (Recall that bandwidth guarantees are only an issue when an interface is congested.) In other words, the priority command provides a minimum bandwidth guarantee.

In addition, the priority command implements a maximum bandwidth guarantee. Internally, the priority queue uses a token bucket that measures the offered load and ensures that the traffic stream conforms to the configured rate. Only traffic that conforms to the token bucket is guaranteed low latency. Any excess traffic is sent if the link is not congested or is dropped if the link is congested. For more information, refer to What Is a Token Bucket?"

"The purpose of the built-in policer is to ensure that the other queues are serviced by the queueing scheduler. In the original Cisco priority queueing feature, which uses the priority-group and priority-list commands, the scheduler always serviced the highest priority queue first. In extreme cases, the lower priority queues rarely were serviced and effectively were starved of bandwidth.

The real benefit of the priority command—and its major difference from the bandwidth command—is how it provides a strict de-queueing priority to provide a bound on latency. Here is how the Cisco IOS Configuration Guide describes this benefit: "A strict priority queue (PQ) allows delay-sensitive data such as voice to be de-queued and sent before packets in other queues are de-queued....."
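As an illustration of the rate that token bucket enforces (interface speed and percentage are hypothetical), these two forms are roughly equivalent:

```
! On an interface with 10000 kbps of bandwidth:
policy-map LLQ-PERCENT
 class VOICE
  priority percent 20    ! token bucket rate = 20% of 10000 = 2000 kbps

policy-map LLQ-ABSOLUTE
 class VOICE
  priority 2000          ! same rate, specified explicitly in kbps
```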

HTH!

-Chris

First, please note that with LLQ, non-priority traffic cannot take bandwidth reserved for priority traffic. With that said, the reason priority traffic cannot exceed its reservation is to protect the non-priority traffic. If priority traffic were allowed to exceed it, a scenario could exist where all non-priority traffic would be dropped while the priority traffic is serviced.

Cisco's implementation of priority queuing (PQ) actually does just that. It will starve or deny service to lower priority traffic for the sake of high priority traffic.

Regards,

Ryan

Hi,

Just to back up Ryan's input (Well put Ryan +5)

Have a look at this link, which discusses the comparison between the priority and bandwidth QoS settings:

http://www.cisco.com/en/US/partner/tech/tk543/tk757/technologies_tech_note09186a0080103eae.shtml

Regards

Alex

Regards, Alex. Please rate useful posts.

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Just wanted to clarify a few points noted by the other posters.

The implicit LLQ policer only engages when there's congestion, so it's possible for LLQ to use more than the bandwidth specified.

Classes can use bandwidth not being used by other classes, including LLQ. In other words, if 20% is defined for LLQ but not being used, other classes can use this bandwidth.

Hi,

Thanks for the responses, but my question was why is the priority queue bandwidth-restricted, whereas the bandwidth classes are not?

If I say to my customers - "We'll put your voice traffic in a Low-Latency queue and reserve 20% of the circuit bandwidth for it, but if it exceeds that bandwidth (during periods of congestion), it will be dropped" - they'll probably ask, "Why are we putting our voice traffic in a queue that may drop the traffic?"

I assume there's some technical justification for it. I'd just like to know what that justification is...

Thanks.

Joseph - I was referring to LLQ configured on Frame Relay on the Cisco 7200 router and other non-Route/Switch Processor (RSP) platforms before 12.2, where this is an exception to what you stated. I am making a joke of course!!!

Excellent point and a detail I completely overlooked! If the Tx-ring is empty, then queuing and the policer are not engaged.

Okay, so I think we all understand that during periods of congestion the behavior of the priority queue is to dequeue the packets in the low-latency queue first. If the scheduler is servicing another queue and a packet arrives in the LLQ, it will finish forwarding that packet, queue the remaining packets, and then service the LLQ. Again, if there were no policer on the LLQ, the scheduler could potentially service the LLQ and not allow any other queues to be serviced. The design of the LLQ is to provide low-latency performance by dropping excess packets instead of queuing them.

I think the poster understands that, which leads me to believe the poster is asking: why can't the LLQ be configured to use a portion of the unallocated bandwidth, or of the allocated-but-unused bandwidth, during periods of congestion? I am not really sure why. IMHO the design of the LLQ would have to change. The other queues are serviced in a weighted-fair-queue fashion. This design, I believe, allows them to share the remaining bandwidth proportionally. The LLQ is a priority queue and is serviced whenever a packet enters the queue. In my mind it would need to change to an LLQ with a two-rate policer: the first rate being the static configuration, and the second based on the same formula used to determine how the unallocated bandwidth is shared amongst the remaining queues. This would allow the policer a share of the unallocated bandwidth based on current weight while still protecting the other queues.
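For what it's worth, the MQC does have a two-rate policer construct (the rates below are hypothetical); something like it is what I have in mind, although it is not integrated with the LLQ scheduler in the dynamic way described above:

```
policy-map TWO-RATE-SKETCH
 class VOICE
  police cir 2000000 pir 3000000 conform-action transmit exceed-action transmit violate-action drop
```

Here traffic under the committed rate (cir) and up to the peak rate (pir) is transmitted, and only traffic above the peak rate is dropped.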

As for allocated-but-unused bandwidth, this would present a separate set of challenges. In the two-rate policer I described above, the second rate would need to adjust dynamically to the bandwidth that is allocated but unused.

So if I had to summarize my two cents, it would simply be that this is not the current design of the LLQ. There would need to be a second policer which adjusts dynamically to the available allocated or unallocated bandwidth.

Not sure if any of that made sense, but I hope it helps and hope others chime in...

Regards,
Ryan

I was speaking with a fellow engineer, and he was telling me that it was basically because you couldn't have a priority queue coupled with unbounded access to bandwidth; otherwise traffic in that class could potentially starve the other traffic. I guess the fact that LLQ traffic gets strict priority over all other queues means that you have to set some limit on it. I'm not sure if that's the complete answer, but it makes a certain amount of sense.

You got it!

That's what we've been trying to say - without a policer, the scheduler would only service the LLQ and starve the remaining queues. Priority Queuing (PQ), which is another queuing technique, does just that. Lower-priority queues only get serviced after the high-priority queue has no packets to transmit.
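For comparison, the legacy PQ behavior being described is configured with the priority-list and priority-group commands (the list number, match criteria, and interface are hypothetical):

```
priority-list 1 protocol ip high tcp 23    ! Telnet to the high queue
priority-list 1 default low                ! everything else to the low queue
!
interface Serial0/0
 priority-group 1                          ! high queue is always serviced first
```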

Regards,

Ryan


Posting

pweinhold wrote:

Hi,

Thanks for the responses, but my question was why is the priority queue bandwidth-restricted, whereas the bandwidth classes are not?

If I say to my customers - "We'll put your voice traffic in a Low-Latency queue and reserve 20% of the circuit bandwidth for it, but if it exceeds that bandwidth (during periods of congestion), it will be dropped" - they'll probably ask, "Why are we putting our voice traffic in a queue that may drop the traffic?"

I assume there's some technical justification for it. I'd just like to know what that justification is...

Thanks.

Ryan's first post answered why, which he further clarified in his last post, i.e. the implicit policer can preclude LLQ from using all the bandwidth, which could starve other classes. He's also correct that PQ can totally starve subordinate classes; PQ, unlike LLQ, supports 4 queues: the highest queue can starve the next three, the 2nd highest can starve the next two, and the 3rd highest can starve the bottom one.

If you could set the implicit policer to 100%, you would have behavior like PQ between LLQ class and other traffic.

As to why put VoIP in a class that might drop it: the answer is you (logically) don't. You reserve sufficient bandwidth to cover whatever the VoIP traffic might ever need, or, ideally, you have call bandwidth management/admission control, so when a new call would exceed what you've allowed for, you get a special busy signal to indicate the call cannot be accepted at the moment. If the LLQ policer engages for something like VoIP, although it might only be the last call placed that exceeded the bandwidth reservation, LLQ isn't flow-specific, so all active calls can be degraded. (NB: behavior at the implicit policer limit would be similar to a circuit devoted to just the LLQ traffic being offered more bandwidth than the link could accept.)
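As a sizing sketch (the codec figure is approximate and the call count is hypothetical): a G.729 call runs very roughly 26 kbps on the wire including layer 2 overhead on a PPP link, so provisioning the LLQ for a maximum of 10 concurrent calls might look like:

```
policy-map VOICE-EDGE
 class VOICE
  priority 260    ! 10 calls x ~26 kbps per G.729 call (approximate)
```

With admission control rejecting the 11th call, the policer should then never engage against admitted calls.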

acalderone
Level 1

To go a little bit further about why it has to work this way, remember that even though voice and video traffic are usually the most delay-sensitive types of traffic on your network, that doesn't make them the most important.

For instance, if your routing protocol packets are getting choked out by VoIP traffic, then you are in big trouble.  Your voice traffic probably won't even get to its destination, and links can appear to be flapping while your routing reconverges.
