Remaining bandwidth ratio

m-johansson
Level 1

Hi

We are running tests with an H-QoS profile for L2VPN with 4 CoS levels, where EF and AF4 are prioritized and limited to take at most 50%. The remaining bandwidth is to be shared between AF3 and AF1 in a 9-to-1 ratio; for this, "bandwidth remaining ratio" is used. When testing a high-load scenario with all 4 CoS levels loaded and several TCP flows in AF1 pushing slightly over the top, there is queue build-up in AF3 and AF4 even though the load in those classes is within the limit. The result is high delay variation in AF3 and even some in AF4. I have tried different ratio settings such as 9/1, 90/10 and 27/3 and can see different results. So my question is: what entity does the "bandwidth remaining ratio" actually represent? Is it the number of queue particles (256 bytes?) served per scheduler round? And if so, is there a best practice, such as not setting a value lower than the maximum MTU size?

Appreciate any help I can get.

Best regards,

Martin

Here is a sample policy:

policy-map QOS_CUST_L2_OUT_10M_M
 class QOS_CUST_EF_QOSGROUP
  set cos 5
  police rate 3000 kbps
  !
  priority level 1
 !
 class QOS_CUST_AF4_QOSGROUP
  set cos 4
  police rate 3000 kbps
  !
  priority level 2
 !
 class QOS_CUST_AF3_QOSGROUP
  set cos 3
  bandwidth remaining ratio 9
  queue-limit 188 kbytes
 !
 class class-default
  set cos 1
  bandwidth remaining ratio 1
  queue-limit 188 kbytes
 !
 end-policy-map
!
policy-map QOS_CUST_SHAPE_L2_OUT_10M_M
 class class-default
  service-policy QOS_CUST_L2_OUT_10M_M
  shape average 10000 kbps
 !
 end-policy-map
!
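For context, the steady-state split this policy implies under full load can be worked out with simple arithmetic (a rough sketch only; shaper granularity and scheduler internals on the platform will cause some deviation):

```python
# Illustrative arithmetic for the policy above (not platform-exact).
shape_kbps = 10000          # parent shaper: 10 Mbps
ef_police_kbps = 3000       # EF, priority level 1, policed
af4_police_kbps = 3000      # AF4, priority level 2, policed

# Bandwidth left over when both priority classes are fully loaded
remaining_kbps = shape_kbps - ef_police_kbps - af4_police_kbps  # 4000 kbps

# "bandwidth remaining ratio" splits the leftover 9:1 between AF3 and default
ratios = {"AF3": 9, "class-default": 1}
total_ratio = sum(ratios.values())
share_kbps = {cls: remaining_kbps * r / total_ratio for cls, r in ratios.items()}

print(share_kbps)  # {'AF3': 3600.0, 'class-default': 400.0}
```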

 

2 Replies

Aleksandar Vidakovic
Cisco Employee

I suppose the platform in question is asr9k. Let me know otherwise. ;)

 

"bandwidth remaining ratio" (aka BRR) translates into two parameters: queue limit (as a function of the visible bandwidth) and scheduler weight. No bandwidth guarantee is provided.
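To make the weight idea concrete, here is a toy deficit-round-robin simulation (purely illustrative; this is not how the ASR9k hardware scheduler is implemented, and the quantum values are made up). It shows how serving each queue in proportion to a weight per scheduler round yields the 9:1 split on average:

```python
from collections import deque

def drr(queues, quanta, rounds):
    """Toy deficit round robin: each round, a queue may send up to its
    accumulated deficit (quantum carried over between rounds)."""
    deficits = {name: 0 for name in queues}
    sent = {name: 0 for name in queues}
    for _ in range(rounds):
        for name, pkts in queues.items():
            deficits[name] += quanta[name]
            # Serve whole packets while the deficit covers them
            while pkts and pkts[0] <= deficits[name]:
                pkt = pkts.popleft()
                deficits[name] -= pkt
                sent[name] += pkt
            if not pkts:
                deficits[name] = 0  # empty queue keeps no credit
    return sent

# Two permanently backlogged queues of 1500-byte packets, weights 9 and 1
queues = {"AF3": deque([1500] * 1000), "default": deque([1500] * 1000)}
quanta = {"AF3": 9 * 1500, "default": 1 * 1500}  # quantum ~ ratio (hypothetical)
sent = drr(queues, quanta, rounds=50)
print(sent)  # {'AF3': 675000, 'default': 75000} -> exactly 9:1 in bytes
```

Note what happens if a quantum is smaller than the packet size: the queue simply accumulates deficit over several rounds before it can send, which adds delay variation rather than starving the queue outright. That is the intuition behind keeping the effective per-round service at least one MTU.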

 

Run "show qos interface &lt;interface&gt; {input|output}" to see what was programmed in hardware. Also look up BRKSPG-2904 from Cisco Live Berlin 2016; the "Queuing and Scheduler Hierarchy" section addresses your questions. If any remain, do let me know.

 

/Aleksandar

Hi

Tested the "low-burst mode" mentioned in BRKSPG-2904, which gives a faster token refresh. This gave much better latency figures. There may be some drawbacks with this configuration, and the requirement to reload the LC to apply it complicates things, but it is certainly a way forward.

 

Thanks,

Martin

 

Description I found in the bugtool:

Low-burst mode allows for a faster token refresh, but it may affect WRED accuracy or shape-rate accuracy when there are numerous queues defined on the NPU. Low-burst effectively runs the Tc at a much lower interval than the default of 4 msec.
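The latency effect of the Tc interval can be illustrated with simple arithmetic (a sketch: the 4 msec default is from the bug description above; the shorter interval below is hypothetical, chosen only to show the trend):

```python
def burst_bytes(rate_bps, tc_seconds):
    """Tokens granted per refresh interval: rate * Tc, expressed in bytes.
    Roughly the amount of traffic that can queue up between refreshes."""
    return rate_bps * tc_seconds / 8

rate = 10_000_000  # 10 Mbps parent shaper, as in the policy above

# Default Tc of 4 msec: up to 5000 bytes of credit per interval
print(burst_bytes(rate, 0.004))  # 5000.0

# A hypothetical shorter Tc of 1 msec: smaller bursts, lower delay variation
print(burst_bytes(rate, 0.001))  # 1250.0
```

A shorter Tc means the shaper releases credit in smaller, more frequent increments, which smooths serialization of the queues behind it and reduces delay variation, at the cost of more scheduler work per second.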

 

https://bst.cloudapps.cisco.com/bugsearch/bug/CSCup93005/?referring_site=ss&dtid=osscdc000283