
QoS for dummies

SJ K
Level 5

Hi all,

 

I've been working on QoS on and off for some time and decided to get these questions off my chest. Hope the gurus here can help...

 

q1) Does policing/shaping happen before packets enter the software queues (queuing), or does it happen after queuing, when the packets are dequeued from the software queues? (I've seen different variations: some show the policer after the software queue, some have the policer before it.)

 

q2) CBWFQ is based on bandwidth weight. Does that mean that if I have AF4-tagged traffic whose class is assigned a smaller bandwidth than an AF1 class, the AF4 traffic will be dequeued/serviced at a slower rate than the AF1 class by the CBWFQ scheduler, despite having a higher-priority tag?

(e.g. AF4 class, tagged AF4 but allocated low bandwidth - 100 Kbps

        AF1 class, tagged AF1 but allocated higher bandwidth - 500 Kbps)
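For concreteness, a hypothetical CBWFQ policy for that scenario might look like the sketch below (the class names are made up; the bandwidth command is in kbps):

policy-map Q2-EXAMPLE
 class AF4-CLASS
  bandwidth 100 !AF4-marked traffic, but only a 100 kbps weight
 class AF1-CLASS
  bandwidth 500 !AF1-marked traffic, but a 500 kbps weight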

 

q3) If I have tagged traffic (high priority, AF4) that I happen to classify into a low-priority class/queue and I do not remark it, does that mean this particular traffic might be handled with higher priority at the next hop?

 

Regards,

Noob


7 Replies

Joseph W. Doherty
Hall of Fame
q1 - I'm not really sure in all cases. I would expect policing is generally independent of software queuing, although I know the implicit LLQ policer is dependent on interface congestion (at least on the IOS versions I've used).

Shaping, though, creates its own queue(s), and when used with tiered policy maps it is generally a parent to subordinate policies. I'm unsure what other software queues it might be "behind", unless you can define multiple levels of shapers.
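To illustrate the tiered case, here's a minimal sketch of a parent shaper feeding a child queuing policy (the names and the 10 Mbps rate are made up):

policy-map CHILD
 class foreground
  bandwidth percent 99
 class class-default
  bandwidth percent 1
!
policy-map PARENT
 class class-default
  shape average 10000000 !shape to 10 Mbps; the shaper's queue is serviced by the CHILD policy
  service-policy CHILD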

Do you have an example case or cases where your question is a relevant consideration?

q2 - Yes. (Which creates, I believe, a whole lot of less-than-ideal QoS configurations that base allocations on "bandwidth".)

q3 - Depends on the QoS model being supported, i.e. DiffServ vs. IntServ. I suspect you're thinking of/using the DiffServ model, and in that model every hop makes its own QoS treatment decisions. Also, remarking the traffic, or not, might not even affect that traffic's treatment at the next hop.

Hi Joseph,

 

It's good to see your reply.


"Do you have an example case or cases where your question is a relevant consideration?"

I have a policer that re-marks exceeded traffic with a different marking... just wondering whether that traffic actually gets sent out right away (policer after queue) or gets its new marking and is then placed into the software queue (policer before queue).
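For illustration, the policer is along these lines (a minimal sketch; the class name and rate are made up):

policy-map POLICE-REMARK
 class DATA
  police cir 500000
   conform-action transmit
   exceed-action set-dscp-transmit af13 !exceeding traffic is re-marked, not dropped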


"q2 - Yes. (Which creates, I believe, a whole lot of less-than-ideal QoS configurations that base allocations on "bandwidth".)"

So can I say that in CBWFQ, the tag/priority of a traffic class must be matched by the bandwidth ratio allocated to it? I.e. higher-priority traffic must be allocated higher bandwidth (even though it might actually require very little bandwidth)?

If traffic is marked AF4 but its class is allocated a small bandwidth, it will be handled with less priority compared to other classes with higher bandwidth allocations (even though those classes might be tagged with a lower-priority DSCP marking) -- am I right?

 


"q3 - Depends on the QoS model being supported, i.e. DiffServ vs. IntServ. I suspect you're thinking of/using the DiffServ model, and in that model every hop makes its own QoS treatment decisions. Also, remarking the traffic, or not, might not even affect that traffic's treatment at the next hop."

Yes, I am using the DiffServ model. I have traffic tagged with CS5 that is being classified into the default class on the CE router - it was not re-marked, though.

I need this traffic to be in the default class, but the client program tags it with a CS5 marking.

So I wonder: when this particular class of traffic moves into the PE with a CS5 marking, will it be handled with priority instead (even though I wanted it to be in default)? Could this be a possible scenario?

 

Regards,

Noob


q2 - Yes (and no)

As you increase the bandwidth allocation of a class, that class should get "priority" over other traffic because it obtains more of the scheduler's attention. Whether that class needs the "priority" depends on its latency/jitter requirements; otherwise, if the traffic is in conformance with its bandwidth allocation, it shouldn't be excessively delayed.

If you do allocate much more bandwidth than "needed" to increase a class's "priority", you need to be aware that if that class uses more than the expected bandwidth, it will obtain it (which might be okay, or not). If this is a possible issue, you could further police or shape that class so it doesn't take too much bandwidth (much in concept like LLQ's implicit policer).

(BTW, I once had a policy that gave lots of bandwidth to telnet and SSH, to ensure remote console traffic wasn't delayed. The policy worked great until the day someone used SCP.)

For example:

policy-map sample1
 class foreground
  bandwidth percent 99
 class class-default
  bandwidth percent 1

In the above, foreground will behave close to LLQ in that it has a 99:1 ratio over the default class. If we were doing this to ensure foreground usually gets dequeued first, but we don't want it to hog all the bandwidth, we might try:

policy-map sample2
 class foreground
  bandwidth percent 99
  shape average percent 25 !unsure whether this is acceptable syntax
 class class-default
  bandwidth percent 1

q3 - If a client program is tagging traffic with a ToS marking you don't want, you could remark the packets.
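For example, a minimal sketch of remarking the unwanted CS5 down to default at the ingress edge (the class and policy names are made up):

class-map match-all CLIENT-CS5
 match dscp cs5
!
policy-map REMARK-CS5
 class CLIENT-CS5
  set dscp default !re-mark to BE so downstream hops have no ToS reason to prioritize it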

How a later router treats a tagged packet (in the DiffServ model) is, again, up to that router. I.e. the PE router might provide priority treatment to CS5-marked packets, or not. It doesn't matter whether you prioritized that traffic or not, and perhaps it doesn't even matter that the traffic is tagged with CS5: the PE router could use other attributes for traffic classification. For example, if the traffic is marked CS5 but doesn't appear to be VoIP or video, the PE might not prioritize it. You need to find out what the PE's QoS policy is.

Hey Joseph,

Thanks for your reply.

 


"(BTW, I once had a policy that gave lots of bandwidth to telnet and SSH, to ensure remote console traffic wasn't delayed. The policy worked great until the day someone used SCP.)"

LOL... OK, this is one real simple but classic example.

==========================

Just some food for thought: if I have two types of traffic, Traffic_A and Traffic_B, where Traffic_A should have a higher priority than B, is it better to

** assume the pipe is small and is going to get congested by both kinds of traffic **

 

a) create one big queue, tag Traffic_A with AF31 and Traffic_B with AF32, and use DSCP-based WRED to configure a higher drop rate for the AF32 markings (roughly as sketched after option b below)


or


b) two different queues - AF31 with a higher bandwidth/priority, and AF32 (or even remarked to AF21) with a lower bandwidth/priority
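For option a, I'm picturing something roughly like this minimal sketch (the class name, bandwidth, and WRED thresholds are made up):

class-map match-any AB-TRAFFIC
 match dscp af31
 match dscp af32
!
policy-map ONE-BIG-QUEUE
 class AB-TRAFFIC
  bandwidth percent 80
  random-detect dscp-based
  random-detect dscp 26 30 40 10 !AF31: min-threshold max-threshold mark-probability-denominator
  random-detect dscp 28 20 40 10 !AF32: lower min threshold, so AF32 is dropped earlier/more often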

 

Regards,

Noob

"** assume the pipe is small and will gonna get congested by both traffic **"

BTW, not only "small" pipes might congest. (For example, I once worked in an SP environment where one night we had an EtherChannel of eight 10G links dropping traffic like crazy due to congestion. QoS, though, was in place to protect the more "important" traffic from being dropped.)

I typically recommend against using WRED unless you're a QoS expert. So, if the choice is A or B, I would go with the B solution.
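For reference, a minimal sketch of the B solution with two CBWFQ classes (the class names and percentages are made up):

class-map match-all TRAFFIC-A
 match dscp af31
class-map match-all TRAFFIC-B
 match dscp af32
!
policy-map TWO-QUEUES
 class TRAFFIC-A
  bandwidth percent 60
 class TRAFFIC-B
  bandwidth percent 20
 class class-default
  bandwidth percent 20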

That said, I believe the 11/12-class model is oversold; if you have FQ support, that often works very well. I.e. if you dump both Traffic_A and Traffic_B into FQ, many applications are happy. Sometimes you do need to separate traffic into multiple FQ classes, giving those classes different overall priorities.

For example, my "generic" policy map looks like:

policy-map Generic
 class real-time
  priority percent 33
 class foreground !or better than BE
  bandwidth remaining percent 81
  fair-queue
 class background !or less than BE, or scavenger
  bandwidth remaining percent 1
  fair-queue
 class class-default !BE
  bandwidth remaining percent 9
  fair-queue

In the above, the real-time class holds only traffic with strict latency/jitter/drop requirements, like VoIP bearer traffic. Class-default contains most traffic, while foreground carries known-bandwidth traffic that needs "better" latency/jitter/drop than BE, e.g. VoIP control traffic. Background contains traffic "happy" with whatever bandwidth is available, e.g. huge data transfers.

One of the nice things about FQ is that flows get proportional treatment; generally, bandwidth hogs will queue themselves up, subjecting themselves to drops much more than light-usage flows do. Having the heavy flows take the drops is nice, as you often cannot easily "tell" what an application is. For example, what applications might be using Microsoft NetBEUI ports, http/https ports, or IPsec?

@Joseph W. Doherty 

Kudos Joseph, very good post!


Please rate and mark as an accepted solution if you have found any of the information provided useful.
This then could assist others on these forums to find a valuable answer and broaden the community's global network.

Kind Regards
Paul

Thank you Paul.