
LLQ questions

bellocarico
Level 1

Hello all.

I'm trying to clarify a couple of concepts for myself.

I'm now setting up LLQ:

R1(config)#policy-map MYPOLICY
R1(config-pmap)#class MYCLASS
R1(config-pmap-c)#priority 64 ?
  <32-2000000>  Burst in bytes
  <cr>

Can anybody clarify:

1) I guess that "priority" guarantees bandwidth while the burst limits it (so it doesn't starve the other queues).

Is that right?

2) What is the time interval used to calculate the speed/burst?

3) I guess the way priority controls the speed is by dropping packets in excess, so that TCP on the sender side reduces the transmit rate. Is that right?

4) Why is shaping not allowed with priority?

5) Would you use priority and police together? That would give policing control within the priority reserved + burst bandwidth, I guess...

Just asking... :-)

Thanks for reading!

12 Replies

bellocarico
Level 1

I think I've found the answer to points 1 and 5:

"Enforcing an upper limit like this is a good idea on priority queues because it helps to prevent the highest priority traffic from starving the other queues. However, it is not necessary to enforce the upper limit this way just to avoid starving the lower queues. This is because the priority command stops giving strict priority to this queue when the bandwidth rises above the specified limit."

Any help on 2,3 and 4?

Thanks!

Bellocarico,

priority is priority, that is, the treatment given to the delay-sensitive and non-droppable traffic that a circuit is meant to carry. In most cases, that is voice/RTP. You should not design for, or assume, that packet loss on the priority queue will make TCP talkers slow down; in a sane design there should be no TCP traffic in it.

This is also why shaping is not needed - the router makes a "service assurance" for a given amount (more or less) of voice traffic, and when offered more than that maximum, drops the excess. In a way, you are shaping it.

A typical value for the time interval (Tc) is 1/8 of a second, but it may vary with the speed of the interface.
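(As a rough illustration with numbers of my own, not from the post above: at a 64 kbps priority rate with Tc = 125 ms, one interval's worth of traffic is 64000 x 0.125 / 8 = 1000 bytes, which gives a feel for the scale of the burst values the priority command accepts.)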

Hope this helps, please rate the post if it does!

b.henshaw
Level 1

In addition to p.bevilacqua's response:

Shaping would also defeat the purpose of using LLQ, as shaping will delay packets and the goal of the priority queue is to expedite forwarding of packets in that queue.

Also, on some platforms you must use priority and policing together to prevent starvation of the other queues, as the priority command won't accept a rate parameter.
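A rough sketch of what that combination might look like (the class and policy names are made up, the numbers are arbitrary, and exact syntax varies by platform):

class-map match-all VOICE
 match ip dscp ef
!
policy-map LLQ-POLICY
 class VOICE
  ! strict priority with no rate parameter on these platforms;
  ! the explicit policer below provides the cap
  priority
  ! limit voice to 64 kbps with an 8000-byte burst so it cannot starve the other queues
  police 64000 8000 conform-action transmit exceed-action drop
 class class-default
  fair-queue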

Hi Brad,

My understanding is that LLQ does policing by default in order to prevent starvation of the other queues (which is the main difference between LLQ and PQ). Which platforms need explicit policing when using LLQ?

best regards,

Mohammed Mahmoud.

Hi Mohammed,

The GE WAN OSM on 7600 routers and the 3750 Metro switches require policing to prevent starvation as the 'priority' command doesn't support any parameters.

This may also be the case on other platforms where advanced queueing structures are supported in hardware.

Regards,

Brad

Summary:

1) LLQ gives priority to packets in the priority queue up to a limit. After that by default it shapes them.

2) Priority command accepts 2 parameters:

priority X Y

where X is the bandwidth and Y is the burst. All the packets after the burst are shaped.

3) Normally traffic in the priority queue is RTP (UDP), so you may want to apply policing to drop packets over the burst, as voice traffic shouldn't be shaped (delayed).

4) The only way to limit bandwidth on the router side is to drop packets

My question now is:

Is there any way to tell a UDP sender to reduce its speed? UDP doesn't support windowing, of course.

It seems to me that UDP is pretty static in this way, and the network needs to be properly set up to compensate for this protocol limitation.

Thanks for reading!

Friend,

LLQ does give priority up to a limit, but it does not shape the packets.

When you enable LLQ with the priority command, it enables the policing function. You can change the burst size of this policer, which defaults to 20% of the configured policing rate.
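(A rough worked example of my own, reading that 20% figure as 200 ms of traffic at the configured rate: for priority 64, the default burst would be 64000 bps x 0.2 s = 12800 bits, i.e. 1600 bytes.)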

HTH, rate if it does

Narayan

You are all forgetting something that is often required in a real-world example.

If you are running GTS or FRTS you may need to artificially induce a slowdown on one side of the link so your QoS will work properly. Let's say you have a 3750 on Metro Ethernet, but your service commitment is only 10 Mbps. You could set the port to 10/full and run QoS as is (after you set the bandwidth parameter, too).

But now you are relying on whatever policing configuration your ISP is using, which may not be clear, and you will not get the desired results from LLQ if you actually are being dropped by a CAR-type markdown policy.

A better solution here is nested policy-maps. A parent policy-map sets "shape average 10000000" on the default class, while the service-policy (or child policy-map) does the LLQ to let VoIP leak out first. Does this induce latency, or break VoIP? No. You are technically shaping here, but you're defining a policy to determine what leaves the queue first. Without this 10 Mbps of backpressure that you control, you risk the ISP ignoring your markings and dropping you at a lower than expected rate (say 9.5 Mbps). I see this all the time.

Also, you should never let a link get more than 70% saturated on a business-day average; it's a bad practice. There is too little room for growth, and QoS will always have to mark down, shape, police, or drop traffic.
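A hedged sketch of the hierarchy Joe describes (the class names, the EF match and the 10 Mbps / 1 Mbps figures are illustrative only, not a tested configuration):

class-map match-all VOICE
 match ip dscp ef
!
policy-map LLQ-CHILD
 class VOICE
  ! give voice strict priority, capped at 1 Mbps inside the shaped 10 Mbps
  priority 1000
 class class-default
  fair-queue
!
policy-map SHAPE-PARENT
 class class-default
  ! shape everything to the 10 Mbps service commitment, then let the child policy
  ! decide what leaves that shaped queue first
  shape average 10000000
  service-policy LLQ-CHILD
!
interface GigabitEthernet0/1
 service-policy output SHAPE-PARENT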

Regarding UDP, the best strategy I can think of, since WRED of course won't work (not TCP), is looking into ECN for UDP with your application/server/operating system. This was designed with things like reliable multicast in mind and perhaps can trigger some host-aware slowdowns.

-Joe

Correct, practical and well explained. I rate it a "5" :)

Indeed it does deserve the rating of 5 :)

Narayan

Hi Joe,

Really nicely explained. I've faced this issue before and couldn't analyze it. When I used FRTS I couldn't reach the link CIR (case 1), but when nesting the LLQ with CB-shaping it worked fine (case 2). I wonder if anybody has an explanation for this phenomenon.

Case1

-----

class-map match-all voice
 match access-group 101
!
policy-map voice
 class voice
  priority 64
  set precedence 5
  compress header ip rtp
 class class-default
  set precedence 0
  fair-queue
!
map-class frame-relay FRTS
 frame-relay cir 128000
 frame-relay bc 1280
 frame-relay be 0
 frame-relay mincir 128000
 service-policy output voice
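(For context, a hedged guess at how that map-class would typically be attached - the interface and DLCI numbers here are only placeholders, not from the actual config:)

interface Serial0/0
 encapsulation frame-relay
 frame-relay traffic-shaping
 frame-relay interface-dlci 100
  class FRTS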

NOTE: Another problem arises here: when I apply the service-policy under the FRTS map-class, LLQ never works, but when I apply the service-policy under the interface, LLQ works fine.

Case2

-----

class-map match-all voice
 match access-group 101
!
policy-map voice
 class voice
  priority 64
  set precedence 5
  compress header ip rtp
 class class-default
  set precedence 0
  fair-queue
!
policy-map CB-shaping
 class class-default
  shape average 128000 1280 0
  service-policy voice
!
interface virtual-template 1
 service-policy output CB-shaping

Thanks in advance,

Mohammed Mahmoud.

In response to your query about how to 'slow down' the traffic going into the priority queue.... It is typically assumed that the traffic is voice. There are really two parts to controlling voice traffic. One part is the QoS parameters discussed in this thread to guarantee (min and max) bandwidth to the priority queue. The other part is to enable Call Admission Control within your VoIP environment (either on CallManager in a Cisco environment or using Gatekeepers in an H.323 environment). If CAC is configured, then there should not be more voice traffic injected into the network than the priority queue is configured to handle - outside of small bursts that may be due to excessive jitter. So CAC + LLQ should work well hand in hand to control voice and provide it priority.
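(A rough sizing illustration with numbers of my own, not from this thread: a G.729 call is roughly 24-26 kbps on the wire including IP/UDP/RTP and Layer 2 overhead, so a priority 256 queue covers about 10 calls, and CAC would then be configured to admit no more than 10 concurrent calls so the priority queue is never oversubscribed.)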
