
QoS - Policing - Policy-map with multiple ClassMaps = CIR?

_|brt.drml|_
Level 1

Hi, I'm reading up on QoS, and also trying to understand my company's settings.

So far so good. 

I understood from theory that policing can remark traffic. I understand that, but how does it behave in practice?

eg:

line Speed: 1G
CIR: 5Mbps

Bc = Tc * CIR = 0.125 * 5,000,000 = 625,000
Be = 625,000 => no change (we are an ISP!) (no burst traffic allowed -> maybe calculate Bc 10% less than CIR?)
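
For reference, the same arithmetic as a couple of lines of Python (a sketch only; note that on IOS the police command's burst values are usually entered in bytes, so the bit value would be divided by 8 before configuring):

# Bc arithmetic from the example above; values in bits.
cir_bps = 5_000_000          # committed information rate
tc_sec = 0.125               # the 125 ms interval used in the example
bc_bits = cir_bps * tc_sec   # 625000.0 bits
bc_bytes = bc_bits / 8       # 78125.0 bytes, the unit IOS usually expects
print(bc_bits, bc_bytes)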

 

Config syntax from the Cisco configuration guide: police [cir] bps [Bc] burst-normal [Be] burst-excess [conform-action action] [exceed-action action] [violate-action action]
Cisco Example:

Router(config-pmap-c)# police [4000000] [2000] [5000] conform-action transmit exceed-action set-dscp-transmit 5

 

On lab Router:

Idea: 5 Mbps is my line speed. I would like to have a PRIORITY class that is policed and remarks the 'dropped' traffic to a lower class.

The line speed is fixed, no burst allowed. 

I guess the correct config line will be:

 

LAB router: => police 5000000 625000 625000 conform-action transmit exceed-action set-qos-transmit 0 violate-action drop

 

What will set-qos-transmit do?

The traffic will go to QoS group 0 => will it be accepted and prioritized in that group above all other 'default' traffic? Or will it just fill up the queue?

Is my understanding of the Bc and Be correct? 

 

The PRIO class has a remaining % configured => is that my max line speed? 

 

 

I hope this is understandable. 

Thank you in advance.

 

Bart

Giuseppe Larosa
Hall of Fame

Hello Bart,

You probably would like to use

set-dscp-transmit 0

to down-mark traffic in excess.

Using QoS groups is a different use case, for when you need locally defined markings that you will use elsewhere in the same node as a match criterion.

In your case you want to down-mark, so the more appropriate action is setting the DSCP to 0 (CS0, or best effort).

 

Hope to help

Giuseppe

 

Giuseppe

 

Yes, right! Good point, and thank you. That's for another purpose (MPLS, no?).

But I wonder:

- If you have 3 classes, High, Medium, Low: High gets a dedicated BW % and the other two are 'reserved'.

Do I calculate the policing for MEDIUM the way I suggested in the first post? The purpose is to remark Medium to Low.

- Secondary question: will the remarked packet just queue in line, or will it have 'priority' over the original Low traffic?

 

B

 

Hello Bart,

>>> Yes, right! Good point, and thank you. That's for another purpose (MPLS, no?)

 

No, for MPLS you would set or mark the EXP field in the MPLS label.

QoS groups are a form of internal marking that cannot leave the node.

There are two very good posts from @Joseph W. Doherty in this thread.

 

As noted by Joseph, you need to understand that policing is not related to queueing.

Policing is a form of metering, and the decision about how to treat a packet depends on how many packets of the same class have been processed in the reference interval.

The metering algorithm is called token bucket; each token is the right to transmit a bit.

Depending on the time elapsed since the previous packet of the same class was processed, the current packet of size Xp bits can be conforming, exceeding, or violating.

Bc and Be act as token containers and are filled in different ways depending on the type of policer.

The original Cisco policer, CAR, is an example of a two-color, single-rate policer.

Modular QoS (MQC) allows you to implement two other types of policer:

- three-color single-rate policer
- three-color dual-rate policer

The number of colors refers to the packet classification:

- two colors: conform or exceed (the CAR names); violate exists only in MQC
- three colors: conform, exceed, violate (modular QoS only)

 

So the check for a packet of size Xp bits is:

- conform: Bc contains at least Xp tokens; Xp tokens are removed from Bc
- exceed: Xp is more than the tokens left in Bc, but not more than the tokens in Be; Xp tokens are removed from Be
- violate: Xp exceeds the tokens in both Bc and Be; the packet is dropped and no tokens are removed from either bucket

In the three-color single-rate policer, Bc is filled at CIR over the time interval, and Be is filled by unused tokens from previous intervals.

 

With the two-rate, three-color policer you specify a CIR and a PIR (peak information rate).

Bc is filled at the CIR rate and Be is filled at the PIR rate.

In this case a conforming packet removes Xp tokens from both Bc and Be, while an exceeding packet removes them only from Be.
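
To make the single-rate, three-color description above concrete, here is a minimal Python sketch of such a meter (an illustration in the style of RFC 2697, not Cisco's actual per-platform implementation; the continuous refill is a simplification of the interval-based description):

# Minimal single-rate, three-color policer sketch (srTCM-style).
# Illustrative only; real implementations differ in refill details.
class SingleRatePolicer:
    def __init__(self, cir_bps, bc_bits, be_bits):
        self.cir = cir_bps
        self.bc_max, self.be_max = bc_bits, be_bits
        self.bc, self.be = bc_bits, be_bits   # both buckets start full
        self.last = 0.0                       # time of the previous packet, seconds

    def meter(self, now, xp_bits):
        # Refill Bc at CIR for the elapsed time; overflow spills into Be.
        new_bc = self.bc + self.cir * (now - self.last)
        self.last = now
        overflow = max(0.0, new_bc - self.bc_max)
        self.bc = min(self.bc_max, new_bc)
        self.be = min(self.be_max, self.be + overflow)
        if xp_bits <= self.bc:
            self.bc -= xp_bits
            return "conform"
        if xp_bits <= self.be:
            self.be -= xp_bits
            return "exceed"
        return "violate"                      # no tokens removed

p = SingleRatePolicer(cir_bps=5_000_000, bc_bits=625_000, be_bits=625_000)
print(p.meter(0.001, 1500 * 8))               # 'conform' while Bc still holds enough tokens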

 

Hope to help

Giuseppe

 

 

"Be is filled by unused tokens in previous intervals ."

BTW, that's how I believe it should work, and some Cisco documentation has that, but in many cases, Cisco's implementation appears to replenish Be the same way it replenishes Bc.

Joseph W. Doherty
Hall of Fame

I'm unsure how to approach your questions because I suspect you have some basic misunderstandings about how policing works.  For example, you cannot generally eliminate all bursting from policing although you can define parameters for policing's bursting.

Without a good understanding of policing, it's very easy to "muck up" traffic with a policer.  You can, indeed (at least on later Cisco policers), mark packets differently based on a flow's variable usage rates, and further (later) use those markings to treat that traffic differently, but generally what you would normally do is "defer" dropping out-of-contract traffic until you really need to.  Generally you don't prioritize part of one traffic flow's packets differently.

Bc vs. Be, I recall (?), on Cisco platforms, sometimes work differently from how they are documented, and/or the documentation is inconsistent.  Be should, I believe, be a reserve that can be borrowed against, which allows the type of bursting I believe you have in mind.  However, I believe on most platforms, Be just allows Bc and Be to work simply as the sum of the two.

Hi, thank you for the reply.



What I understood of policing:

- When traffic hits that queue, and the queue is full, policing either drops or remarks.

I think that is the general definition.

Or, instead of dropping the traffic at that queue, I can remark it and hand it to another traffic class.

That is the intention.

Only, I do not understand Bc and Be.

If my line is 5 Mbps, then Bc is 625,000. The line is what I get and it is at most 5 Mbps.

So I understood there is no burst? Then Bc = Be?



Secondary question:

What if I have multiple queues, each with their own 'bandwidth' taken from that 5 Mbps: do I create the policer at the 5 Mbps, or at the given bandwidth?

I cannot find any reference on the internet.



I think I'll read some more.



Thank you very much for your time and for giving me some direction.



B


"When traffic hits that queue, and the queue is full, policing either drops or remarks."

No.  A queue, full or not, per se, doesn't have anything to do with policing.  What policing does is measure transmission volume over some time interval.  It then determines whether that measured traffic volume is under some acceptable value, over that value, or (optionally) over an even higher value.  Depending on how traffic is qualified (i.e. under, over, or over even more) it can drop or tag the traffic.

Yes, when you remark traffic, you might mark it such that it's placed into another class, but this is unusual (because different classes often will reorder packets, something usually to be avoided for any one traffic flow).

Yup, Bc and Be are often a bit confusing, because what they really do is determine the time intervals used to measure data transfer rates.  For example, you've mentioned wanting an effective rate of 5 Mbps on a gig link.  First, understand that any physical link always transfers bits at its own rate; for example, a gig link transmits bits at 1 Gbps.  So, how do we achieve 5 Mbps?

We do that by limiting the transfer rate to 5 Mbps over some time interval.  If we were using an interval of a second, we would allow a total of 5 Mb to be transmitted during that second; for anything more than that, a policer would drop or remark packets.  During the measured second, it doesn't matter how the transmission is actually done.  It could all be done in a brief burst at the beginning of the second, at the end of the second, or scattered across the second.  What's critical is measuring volume during some time interval.

In actual usage, the time intervals used are fractions of a second (usually in the 25 to 40 ms range).  Again, actual transmission rates are the physical link's bps.  The latter is why I mention there are always bursts.  On a "real" 5 Mbps link, frames/packets will flow down the link at 5 Mbps, but on a gig link, those same frames/packets are going across the wire at 1 Gbps.  I.e. individual frames/packets and/or back-to-back frames/packets will be transmitted faster than the link you're trying to emulate.
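
To put rough numbers on that (just an illustration; the 1500-byte frame size and 30 ms interval are assumed example values):

# Why bursting always happens on a faster physical link.
frame_bits = 1500 * 8              # one assumed 1500-byte frame
print(frame_bits / 1_000_000_000)  # ~0.000012 s to serialize it at 1 Gbps
print(frame_bits / 5_000_000)      # ~0.0024 s it "should" take at 5 Mbps
print(5_000_000 * 0.030)           # 150000.0 bits allowed per 30 ms policing interval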

As to your second question, if you can further explain what you're trying to accomplish, I might be able to make some further suggestions.

Thank you for the response on the first - clear and understandable.



To elaborate on the second:

What if:

My 5 Mbps is partitioned over my policy-map, where each class receives some bandwidth.

Two classes, VOICE & VIDEO, have dedicated bandwidth.

All others are reserved.

My class CRITICAL DATA has: bandwidth remaining percent X%



The intent is to fill that class with packets that originate from several business critical apps.

These apps do not all act at the same time. But if they do, I'm thinking of using POLICING to re-mark the overspill to a lower class.

So packets go from CRITICAL to LESS-CRITICAL and then to DEFAULT.



So, do I calculate the policer at 5 Mbps or at the logical remaining bandwidth?

And packets arriving at those new queues are, I guess, just queued and not interleaved first?



Thank you



B

Could you provide a copy of the CBWFQ policy-map that does what you've described (or at least that you think does)?

BTW, generally CBWFQ classes don't reserve bandwidth, although they can provide minimum bandwidth guarantees, but even those often don't actually work the way the policy-map commands seem to configure them.

Again, rearranging any traffic's order of packets is often something to be avoided.  For some applications, like VoIP and Video, reordering packets is often effectively like discarding the packets but still wasting the bandwidth of such packets you don't actually drop.

For "fun", consider:

policy-map exampleA
class a
bandwidth percent 50
class b
bandwidth percent 25
class class-default
bandwidth percent 25

policy-map exampleB
class a
bandwidth percent 40
class b
bandwidth percent 20
class class-default
bandwidth percent 20

policy-map exampleC
class a
bandwidth percent 50
class b
bandwidth remaining percent 50
class class-default
bandwidth remaining percent 50

policy-map exampleD
class a
bandwidth percent 50
class b
bandwidth percent 25
class class-default
bandwidth remaining percent 100

For each of the above, if class a and class-default each wanted 100% of the bandwidth, and there's no class b traffic, what bandwidth would each get?

_|brt.drml|_
Level 1

Exams 

First of all, again thank you for explaining. In the meantime I have already read a lot in the book (Cisco QoS Design). Rather good book.

I do understand all of your explanation.

 

PM A: Class A 50% of the BW, so B 25% and default 25% => of the entire available BW.

PM B: Class A 40%, B 20%, default 20%, and you lose 20% => correct? Doubts here.

PM C: Class A 50% dedicated; the others together take 100% of the remaining and each gets at least 50% of it. When more is available in the remaining, they will take it. Correct?

PM D: Class A 50%, B 25%, and default the entire remaining 100%.

 

A good example for finding the answer to another POLICING question.

The remaining percent?

 

What I hope to achieve [the percentages are just examples, not a real calculation based on the codecs/clients]:

- Class VOICE: dedicated bandwidth 5%

- Class VIDEO: dedicated bandwidth 5%

- Class HIGH PRIO: Remaining 15%

- Class MED PRIO: Remaining 25%

- Class MEDIUM: Remaining 25%

- Class Default: Remaining  35%

 

The objective: when the class HIGH PRIO is full, I would expect policing to remark so that the traffic ends up in the class MED PRIO; when that is full, it is policed again down to the class MEDIUM, and so on.

Do you calculate your values on the remaining percentage? I guess yes, since it is during complete congestion that the remaining % is your offered 'BW'. Anything more needs to be dropped or remarked, just to give the traffic a chance to leave.

The reason: the lower classes are not always full during a given time interval.

I think that is clear?

And what do you think about that? I have not read about it on the internet, nor in the book yet.

 

B

 

 

 

Hello Bart,

I see you have moved on to queueing and scheduling, that is, CBWFQ.

CBWFQ has a variant called LLQ CBWFQ, where you can use one or more priority queues, called Low Latency Queues.

In the policy-map configuration an LLQ uses the priority command instead of bandwidth.

 

example:

policy-map  PM-LLQ

class VOICE

priority percent 5

class VIDEO

priority percent 10

class DATAPLUS

bandwidth remaining percent 50

class class-default

bandwidth remaining percent 50

 

The LLQ classes reduce the effective bandwidth to 85% of the total 5 Mbps. The bandwidth remaining percent is used in this context to refer to this 85% of 5 Mbps available to the standard queues. In the example above each of the standard queues receives 50% of 85% * 5 Mbps. This allows you to write policy maps that auto-adapt to the amount of "remaining BW" without the need to rewrite the commands for the standard queues if you later change an LLQ priority queue's BW usage.
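
As a quick arithmetic check of that example (a sketch only; as Joseph notes below, unused priority bandwidth is not actually lost to the other classes):

# Nominal shares for policy-map PM-LLQ above, on a 5 Mbps link.
link_bps = 5_000_000
llq = 0.05 + 0.10                      # priority percent 5 (VOICE) + 10 (VIDEO)
remaining_bps = link_bps * (1 - llq)   # 4250000.0 bps left for the standard queues
print(remaining_bps)
print(remaining_bps * 0.50)            # 2125000.0 bps nominal share for each 50% class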

 

The LLQ queues are served first, in round robin, and for them there can be an explicit or implicit policer to avoid starvation of the standard queues.

 

I may be wrong but you should not mix bandwidth percent and bandwidth remaining percent in the same policy map.

 

>> When the class HIGH PRIO is full, I would expect policing to remark so that the traffic ends up in the class MED PRIO; when that is full, it is policed again down to the class MEDIUM, and so on.

 

First of all, here we are dealing with output queueing and not with policing.

These are called software queues; they hold the packets waiting to be sent out of the interface.

The queue management allows packets to be stored in multiple queues, each served in a different manner depending on the policy-map configuration: first any LLQ, if present, then the standard queues according to their quota of bandwidth.

What is not permitted is to move a packet already enqueued in a specific queue to another queue: the queue management can only transmit or drop the packet.

 

So what you would like to do is not possible in CBWFQ or LLQ-CBWFQ.

Drops caused by a full queue are called tail drops.

There is a rather complex congestion-avoidance mechanism called WRED, Weighted Random Early Detection.

When using WRED, some selective drops can happen in each queue in an attempt to avoid reaching the full-queue state and thus avoid tail drops.

The reasoning behind RED and WRED is based on TCP behaviour: after a drop, a TCP session will slow down by reducing its window size.

If drops are generalized, as with tail drops, all TCP flows are slowed down and then grow again, producing a sawtooth traffic graph.

Especially for backbone links, it has been found that using selective drops based on the DSCP value of packets in the same queue avoids this synchronization of TCP slow-downs and achieves better goodput.
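
A minimal sketch of the per-packet drop decision RED/WRED makes (illustrative only; the thresholds and maximum drop probability below are assumed example values, not Cisco defaults, and WRED would keep a separate profile per DSCP):

import random

MIN_TH, MAX_TH = 20, 40        # assumed thresholds on the average queue depth (packets)
MAX_DROP_PROB = 0.10           # assumed maximum drop probability at MAX_TH

def wred_drop(avg_queue_depth):
    """Decide whether to randomly drop an arriving packet."""
    if avg_queue_depth < MIN_TH:
        return False                           # light load: never drop
    if avg_queue_depth >= MAX_TH:
        return True                            # heavy load: behaves like a tail drop
    # In between, the drop probability ramps up linearly toward MAX_DROP_PROB.
    p = MAX_DROP_PROB * (avg_queue_depth - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

print(wred_drop(30))           # True roughly 5% of the time at this depth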

 

The usage of WRED is complex, and it was introduced when most of the traffic was TCP based (80% or more in volume).

Nowadays this TCP predominance has been reduced, as most streaming protocols for audio and video are based on UDP, which has no built-in flow control.

In addition, tuning WRED has been difficult in the past, as it was hard to test a WRED configuration using synthetic traffic (generated by a traffic generator but missing the TCP intelligence for flow control).

 

Hope to help

Giuseppe

 

 

 

Some (possible) clarifications:

"The LLQ reduce the effective bandwidth to 85% of the total 5 Mbps ."

Yes and no.  Other non-LLQ classes can still use bandwidth not being used by LLQ.

"The bandwidth remaining percent is used in this context to refer to this 85% of 5 Mpbs available to standard queues . "

Technically, as shown, what the percentages are based on is unknown.  Generally either the physical interface's bandwidth is used (noted as being gig), or from a parent policy's shaper (not discussed, unless I missed it).

"The LLQ queues are served first in round robin and in their case there can be an explicit or implicit policer to avoid starvation of standard queues."

I think (?) CBWFQ (in the HQF variant) guarantees at least 1% of bandwidth for class-default, but LLQ can use 100% if class-default is using zero.

Also BTW, on most (all?) platforms, the implicit policer doesn't trigger unless there's enough traffic which causes congestion.  I.e. something like priority 10 percent can still use 100% of bandwidth.

"I may be wrong but you should not mix bandwidth percent and bandwidth remaining percent in the same policy map."

I don't recall (?) there being any restrictions.  The real point of "remaining", as you've noted, is that it "auto adjusts".

"So what you would like to do is not possible in CBWFQ or LLQ-CBWFQ"

True in a single policy-map instance, but as my other post notes, I believe it's doable between multiple policy-maps.

"Nowdays, this TCP predominance has been reduced as most streaming protocols for audio and video are based on UDP that has no built flow control."

UDP, itself, has no flow control, but sometimes the applications using it do have flow control.  Some video streaming applications will adjust their transmission rate (and quality) based on the receiver sending quality-received stats back to the sender.

Laugh, my question about the four policy maps was a bit of a "trick", as all four basically provide the same effective results, which, for my question about the bandwidths received by class a and class-default (each wanting bandwidth, while class b wants none), would be 2/3 and 1/3, respectively.

If you're wondering why: first, bandwidth statements provide a minimum guarantee, i.e. classes a, b and class-default might each obtain 100% of bandwidth.  Second, regardless of how you configure the bandwidth allocations in the policy-map, under-the-covers weights are assigned to determine how one class is dequeued relative to all the others.  The actual weights should be the same for examples A, C and D, but the ratios between those weights should be the same for all four examples.

In all my examples, the three classes have weight ratios of 2:1:1.  If only class a and class-default were competing for bandwidth, the ratio would be 2:1, which is how class a obtains 2/3 of the bandwidth and class-default 1/3.
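
The proportional-share arithmetic behind that can be sketched like this (illustrative only; real platforms derive internal weights, this just normalizes the 2:1:1 ratio over whichever classes have traffic):

def shares(weights, active_classes):
    """Split bandwidth in proportion to weight among the classes with traffic."""
    total = sum(weights[c] for c in active_classes)
    return {c: weights[c] / total for c in active_classes}

weights = {"a": 2, "b": 1, "class-default": 1}        # the 2:1:1 ratio from the examples
print(shares(weights, ["a", "b", "class-default"]))   # a: 0.5, b: 0.25, class-default: 0.25
print(shares(weights, ["a", "class-default"]))        # a: ~0.667, class-default: ~0.333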

Back to your objective, you cannot shift classes (to my knowledge) within the same policy, but traffic can be marked such that the next policy (on the same platform, or the next) will process the remarked traffic by different classes.

For example, on the same platform, an ingress policy can remark traffic so that an egress policy will process the remarked traffic in separate classes.  If the platform's egress policy also remarks (for your "cascade" results), the next platform will also process based on how the remarked traffic is marked, and can "repeat" the remark cycle.  Your cascade processing would be limited by the number of platforms your traffic transits.

(Note: syntax may be inaccurate)

class-map high match-any
match ip dscp cs4

class-map mid match-any
match ip dscp cs3

class-map low match-any
match ip dscp cs2

policy-map ExampleE-Ingress
class high
police average # conform-action transmit exceed-action set-dscp-transmit cs3 violate-action set-dscp-transmit cs2

class mid
police average # conform-action transmit exceed-action set-dscp-transmit cs2 violate-action set-dscp-transmit be

class low
police average # conform-action transmit exceed-action set-dscp-transmit be

The above could also be merged into an egress policy.
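
The down-marking cascade those policers express can be written out as a simple lookup (illustrative pseudologic only, not how IOS evaluates a policy-map; class names and DSCP values are taken from the example above):

# (class, policer verdict) -> action, mirroring ExampleE-Ingress above.
CASCADE = {
    ("high", "conform"): "transmit",
    ("high", "exceed"):  "set-dscp cs3",   # lands in the 'mid' class at the next policy
    ("high", "violate"): "set-dscp cs2",   # lands in the 'low' class at the next policy
    ("mid",  "conform"): "transmit",
    ("mid",  "exceed"):  "set-dscp cs2",
    ("mid",  "violate"): "set-dscp be",
    ("low",  "conform"): "transmit",
    ("low",  "exceed"):  "set-dscp be",
}

def action(traffic_class, verdict):
    # Anything not listed falls back to best effort here; a real policy-map
    # would define its own default behaviour.
    return CASCADE.get((traffic_class, verdict), "set-dscp be")

print(action("high", "exceed"))            # set-dscp cs3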

_|brt.drml|_
Level 1

All,

 

Thank you for this really useful stuff. I have had good input, and it is usable.

I will configure the test lab first and generate some traffic inside it. I should be able to see the interaction.

 

The only question not yet answered:

Your policer on the classes with the remaining percentage => do you use the 'remaining' percentage as the policer rate? I guess yes: when the bandwidth available is more than the 'remaining', policing won't happen. I think I'm correct there, no?

 

Sincerely,

B
