TisTos
Beginner

Priority in Nested QoS Policy between classes

Hi All,

 

From a QoS point of view, do you know if, in the following configuration, the D1 class would have priority (so perhaps better latency/jitter...) over D2, and D2 over D3?

 

policy-map COS_TO_PE
 class RT
   priority percent 50
 class D1
   bandwidth remaining percent 50
 class D2
   bandwidth remaining percent 30
 class D3
   bandwidth remaining percent 19

 

Many thanks

1 ACCEPTED SOLUTION

Joseph W. Doherty
Hall of Fame Expert

Yes and no.  D1 has a larger bandwidth guarantee than D2, which in turn has a larger guarantee than D3, so among the three classes D1 will have "priority", in that it will be dequeued more often than D2, and likewise D2 over D3.  Yet, since each class is guaranteed a minimum bandwidth, if all three classes have a packet ready to be dequeued, a lower-bandwidth class's packet might be dequeued next to meet that class's guarantee.

In regard to latency and jitter, D1's overall averages may be better than D2's or D3's, but D1's stats will degrade with competing traffic in either or both of those classes.

In theory, only the RT class, as a priority class, will always be dequeued before any of the D classes, i.e. its latency and jitter shouldn't be impacted.  However, in practice, if a D class packet is already being transmitted, the RT class will have to wait for it.  Further, QoS management only applies to packets that overflow the interface's TX-ring queue.  So RT class jitter and latency will degrade with other class traffic, although not to the same degree as between the D classes.

With regard to latency and jitter, what we can expect, on average, is that they will be much more bounded (and better), per class, than with no QoS at all.



Thank you, that's clear.

 

So, as an example, in the following configuration, the DATA classes would all have the same "priority":

 

policy-map COS_TO_PE
 class RT
   priority percent 25
 class D1
   bandwidth remaining percent 25
 class D2
   bandwidth remaining percent 25
 class D3
   bandwidth remaining percent 25
 class D4
   bandwidth remaining percent 25

 

 

Yes.

What CBWFQ does is assign a "weight", used for processing, i.e. dequeuing, to all of that class's traffic.

With all four D classes having the same bandwidth allocation, they would get the same weight.

If you had:

policy-map COS_TO_PE
 class RT
   priority percent 25
 class D1
   bandwidth remaining percent 10
 class D2
   bandwidth remaining percent 10
 class D3
   bandwidth remaining percent 10
 class D4
   bandwidth remaining percent 10

Each D class, again, would get the same internal weight as the other D classes, although that weight would differ from when 25% was assigned.  However, from a scheduling perspective, as all four classes have the same weight, whether 25% or 10%, they would all be dequeued, relative to each other, on the same 1:1:1:1 basis.  That is, if there was traffic only for the D classes, and the offered rate for each class exceeded 25%, each would obtain 25%, even when assigned only 10%, as in the above example.
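To illustrate that point (a simplified, work-conserving model of the scheduler, not actual IOS behavior), only the ratio between class weights matters, so four classes configured at 10% each end up with the same shares as four classes at 25% each:

```python
def shares(weights, active):
    """Split link capacity among active classes in proportion to weight.

    Simplified CBWFQ-like model: only the *ratio* of the weights matters,
    not their absolute configured values.
    """
    total = sum(weights[c] for c in active)
    return {c: weights[c] / total for c in active}

# Four D classes, each 'bandwidth remaining percent 10' -> equal weights.
w10 = {"D1": 10, "D2": 10, "D3": 10, "D4": 10}
# Same four classes configured with 25 instead of 10.
w25 = {"D1": 25, "D2": 25, "D3": 25, "D4": 25}

active = ["D1", "D2", "D3", "D4"]
print(shares(w10, active))  # each class gets 0.25 of the available bandwidth
print(shares(w25, active))  # identical result: ratios decide, not values
```

Either way, each active class receives a quarter of the available bandwidth, matching the 1:1:1:1 dequeuing described above.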

Perhaps somewhat confusingly, whether you assign bandwidth as a percentage or in Kbps, what's actually guaranteed depends on all the class allocations and on what's actually running through the policy at that moment.

For example, given:

policy-map example
 class D1
   bandwidth percent 50
 class D2
   bandwidth remaining percent 10
 class D3
   bandwidth remaining percent 10

If all classes were active, each offering traffic at full capacity (discounting class-default, which on later CBWFQ implementations is guaranteed 1%), the allocation ratios would be 5:1:1.

If just classes D1 and D2 were active, the allocation ratio would be 5:1.

If just classes D2 and D3 were active, the allocation ratio would be 1:1.
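Taking the 5:1:1 ratios stated above as given, the same proportional-split model shows how bandwidth redistributes among whichever classes are active (again a simplified, work-conserving sketch; the real allocation also depends on class-default and platform specifics):

```python
def shares(weights, active):
    # Divide capacity among the active classes in proportion to weight.
    # Work-conserving: an idle class's bandwidth is redistributed to the rest.
    total = sum(weights[c] for c in active)
    return {c: weights[c] / total for c in active}

weights = {"D1": 5, "D2": 1, "D3": 1}   # the 5:1:1 ratios from the example

print(shares(weights, ["D1", "D2", "D3"]))  # 5:1:1 -> ~0.71 / ~0.14 / ~0.14
print(shares(weights, ["D1", "D2"]))        # 5:1   -> ~0.83 / ~0.17
print(shares(weights, ["D2", "D3"]))        # 1:1   -> 0.5 each
```

The key takeaway is that an inactive class's share isn't wasted: it's split among the remaining active classes, still in proportion to their weights.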