
QoS and Service Providers

JohnTylerPearce
Level 7

Let's say I have the following setup.

Customer Networks -> ASR -> ISP (100Mbps Circuit)

So, let's say we have 3 customers, Customer_A, Customer_B, and Customer_C.

Now, I know that by default, Cisco sets aside around 25% of the bandwidth for protocol overhead, etc. (I think this is correct?)

I know this can be modified as well, but we'll just leave it at the default.

I'm going to guarantee minimum bandwidth of 20Mbps per customer.

ip access-list extended Customer_A
 permit ip 174.25.111.0 0.0.0.7 any
ip access-list extended Customer_B
 permit ip 174.25.111.8 0.0.0.7 any
ip access-list extended Customer_C
 permit ip 174.25.111.16 0.0.0.7 any

class-map match-all class_CustomerA
 match access-group name Customer_A
class-map match-all class_CustomerB
 match access-group name Customer_B
class-map match-all class_CustomerC
 match access-group name Customer_C

policy-map INETOUT
 class class_CustomerA
  bandwidth 20000
 class class_CustomerB
  bandwidth 20000
 class class_CustomerC
  bandwidth 20000
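For the policy to take effect, it would also need to be attached outbound to the ISP-facing interface, along these lines (interface name is just an example):

```
interface GigabitEthernet0/0
 description 100Mbps circuit to ISP
 bandwidth 100000
 service-policy output INETOUT
```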

That allocates 60 Mbps, leaving 15 Mbps of the default 75% allocable bandwidth left over.

From my understanding, if there is congestion on the interface I apply this to (the interface leading to the ISP's next hop from the ASR), then it will reserve a 20 Mbps minimum each for Customer_A, B, and C, and allow each to go over 20 Mbps, proportionally, if it can.

I was just wondering if this was correct?

1 Accepted Solution

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Now, I know that by default, Cisco sets aside around 25% of the bandwidth for protocol overhead, etc. (I think this is correct?)

Not exactly.  Pre-HQF would only let you allocate up to 75% of the bandwidth across all non-default class bandwidths unless, as you also noted, you overrode the default.  However, bandwidth isn't really reserved; the bandwidth statement sets a relative weight.

From my understanding, if there is congestion on the interface I apply this to (the interface leading to the ISP's next hop from the ASR), then it will reserve a 20 Mbps minimum each for Customer_A, B, and C, and allow each to go over 20 Mbps, proportionally, if it can.

I was just wondering if this was correct?

Again, 20 Mbps isn't guaranteed as an absolute rate; it really just sets relative weights.  What you should see is all 3 non-default classes assigned the same weight, i.e. they would share bandwidth 1:1:1.  If any class didn't "want"/use as much bandwidth as the other two classes, those two would share the excess 1:1.  (I.e., yes to your last question.)

If you had any class-default traffic, it would share bandwidth proportionally based on its assigned weight (which pre-HQF should, by default, use 25% of the nominal bandwidth).  If the interface was configured with bandwidth for 100 Mbps, ratios for all 4 classes would be 20:20:20:25 or 4:4:4:5.  (NB: pre-HQF, class-default configured with fair-queue "skews" relative weights as each class-default flow is scheduled with its own weight.)

Disregarding class-default, you would get about the same results if you assigned your 3 classes 8 Kbps each, 25 Mbps each, or 10% each; the actual flow weights would differ, but dequeuing scheduling would still be 1:1:1 (i.e., each class would be "guaranteed" about 1/3 of the bandwidth).
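To make the weight arithmetic concrete, here's a toy Python model of the proportional-sharing idea (an illustration only, not IOS internals): link capacity is split by relative weight, and any bandwidth a class doesn't use is redistributed among the remaining classes.

```python
def share_bandwidth(link_kbps, weights, demands):
    """Split link capacity proportionally by weight; bandwidth a class
    doesn't demand is redistributed among the remaining classes."""
    alloc = {c: 0.0 for c in weights}
    active = set(weights)
    remaining = float(link_kbps)
    while active and remaining > 1e-9:
        total_w = sum(weights[c] for c in active)
        satisfied = set()
        for c in active:
            offer = remaining * weights[c] / total_w  # weighted share of this round
            take = min(offer, demands[c] - alloc[c])
            alloc[c] += take
            if demands[c] - alloc[c] <= 1e-9:
                satisfied.add(c)
        remaining = link_kbps - sum(alloc.values())
        if not satisfied:   # everyone took their full offer; nothing to redistribute
            break
        active -= satisfied
    return alloc

# All three classes congested: 1:1:1, about 33.3 Mbps each.
full = {"A": 1e9, "B": 1e9, "C": 1e9}
print(share_bandwidth(100_000, {"A": 20_000, "B": 20_000, "C": 20_000}, full))

# Customer_A only uses 10 Mbps: B and C split the excess 1:1, 45 Mbps each.
light_a = {"A": 10_000, "B": 1e9, "C": 1e9}
print(share_bandwidth(100_000, {"A": 20_000, "B": 20_000, "C": 20_000}, light_a))
```

Adding class-default at weight 25,000 with everything congested yields the 4:4:4:5 split mentioned above.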

PS:

What I'm describing might be difficult to understand until you realize the difference between what the CBWFQ policy shows vs. how it's actually implemented.  At least for pre-HQF CBWFQ, it appears to me it principally uses WFQ "under the covers".  CBWFQ (pre-HQF) maps multiple flows, for the same class, to a single flow queue and computes a weight for that queue based on the bandwidth statement vs. nominal interface bandwidth (unlike WFQ's use of IP Precedence for flow weight).  If you understand WFQ, it's easier to understand CBWFQ.
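As a rough sketch of the WFQ idea being described: each flow queue's packets get a virtual finish time (previous finish + packet size * weight; in IOS WFQ a lower weight means a larger bandwidth share), and the scheduler always dequeues the packet with the smallest finish time.  This is illustrative only, not IOS's actual sequence-number implementation.

```python
import heapq

def wfq_service_order(queues, weights):
    """queues: {name: [pkt_size, ...]} (all backlogged at time 0);
    weights: {name: weight}, lower weight = more bandwidth.
    Always dequeue the packet with the smallest virtual finish time."""
    heap, nxt, finish = [], {}, {}
    for q, pkts in queues.items():
        if pkts:
            finish[q] = pkts[0] * weights[q]
            nxt[q] = 1
            heapq.heappush(heap, (finish[q], q))
    order = []
    while heap:
        _, q = heapq.heappop(heap)
        order.append(q)
        if nxt[q] < len(queues[q]):
            finish[q] += queues[q][nxt[q]] * weights[q]
            nxt[q] += 1
            heapq.heappush(heap, (finish[q], q))
    return order

# Equal weights (like three classes with "bandwidth 20000"): round-robin.
print(wfq_service_order({"A": [100] * 3, "B": [100] * 3, "C": [100] * 3},
                        {"A": 1, "B": 1, "C": 1}))
# -> ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C']
```

With unequal weights, the lower-weight queue is serviced proportionally more often, which is why the bandwidth statement behaves as a ratio rather than a reservation.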


2 Replies

Peter Koltl
Level 7

This approach controls only traffic in the Customer -> ISP (upload) direction.
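For the download direction, the actual congestion point is inside the ISP, but per-customer rates could still be enforced on the ASR by applying policies outbound on the customer-facing interfaces; a sketch (interface names and rates hypothetical):

```
policy-map CustomerA_DOWN
 class class-default
  shape average 20000000
!
interface GigabitEthernet0/1
 description Link to Customer_A
 service-policy output CustomerA_DOWN
```

Note this caps rather than guarantees the download rate; anything beyond that depends on QoS arrangements with the ISP.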
