Hi all,
I am trying to fine-tune the egress buffers on one of our C9500-48Y4C switches (High Performance). I've read a few documents and watched a couple of Cisco Live presentations, but I still can't fully grasp how egress buffer memory is allocated. Here is what my understanding WAS.
This is the queue configuration on a 1Gbps port with no QoS policy applied:
DATA Port:20 GPN:101 AFD:Disabled FlatAFD:Disabled QoSMap:0 HW Queues: 160 - 167
DrainFast:Disabled PortSoftStart:7 - 1008
----------------------------------------------------------
    DTS  Hardmax   Softmax   PortSMin  GlblSMin  PortStEnd
--  ---  --------  --------  --------  --------  ---------
 0   1   10  112   23  448   16  224    0    0    8  1344
 1   1    4    0   24  672   16  336    8  168    8  1344
 2   1    4    0    5    0    0    0    0    0    8  1344
 3   1    4    0    5    0    0    0    0    0    8  1344
 4   1    4    0    5    0    0    0    0    0    8  1344
 5   1    4    0    5    0    0    0    0    0    8  1344
 6   1    4    0    5    0    0    0    0    0    8  1344
 7   1    4    0    5    0    0    0    0    0    8  1344
--- cut for brevity ---
As can be seen, Q0 has 112 buffers assigned to HardMax, which is indeed 40% of the 280-buffer per-port pool. In turn, its SoftMax is 448, which is 4 times 112. Q1 has a HardMax of 0 and a SoftMax of 672, which is (Total - Q0 HardMax) * 4, or (280 - 112) * 4 = 672. All good so far.
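To make that arithmetic explicit, here is a minimal Python sketch of the default two-queue model. The 280-buffer pool and the 4x softmax multiplier are values I inferred from the output above, not anything taken from documentation:

# Default egress buffer model observed on a 1G port with no policy applied.
# Assumptions (inferred from the output above, not from documentation):
#   - per-port pool: 280 buffers; Q0 HardMax: 40% of the pool
#   - SoftMax: 4x HardMax for Q0, 4x the remaining pool for Q1
TOTAL = 280             # assumed per-port buffer pool for a 1G port
SOFTMAX_MULTIPLIER = 4  # assumed default softmax multiplier

q0_hardmax = int(TOTAL * 0.40)                          # 112
q0_softmax = q0_hardmax * SOFTMAX_MULTIPLIER            # 448
q1_hardmax = 0
q1_softmax = (TOTAL - q0_hardmax) * SOFTMAX_MULTIPLIER  # 672

print(q0_hardmax, q0_softmax, q1_hardmax, q1_softmax)   # 112 448 0 672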
I then applied a custom 1P3Q3T policy, as shown below:
policy-map QoS-1P3Q3T
 class REALTIME-QUEUE
  priority level 1 percent 30
 class CRITICAL-DATA-QUEUE
  bandwidth remaining percent 30
  queue-limit dscp cs2 percent 80
  queue-limit dscp cs3 percent 90
  queue-limit dscp cs6 percent 100
  queue-limit dscp cs7 percent 100
  queue-limit dscp af41 percent 100
  queue-limit dscp af42 percent 90
  queue-limit dscp af43 percent 80
  queue-limit dscp af31 percent 90
  queue-limit dscp af32 percent 80
  queue-limit dscp af33 percent 80
  queue-limit dscp af21 percent 90
  queue-limit dscp af22 percent 80
  queue-limit dscp af23 percent 80
 class BULK-SCAVENGER-QUEUE
  bandwidth remaining percent 5
  queue-limit dscp cs1 percent 80
  queue-limit dscp af11 percent 100
  queue-limit dscp af12 percent 90
  queue-limit dscp af13 percent 80
 class class-default
  bandwidth remaining percent 35
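As a side note on how I read the queue-limit lines: my assumption is that each 'queue-limit dscp X percent Y' programs a WTD drop threshold at Y% of that queue's SoftMax. A quick Python check of that assumption against CRITICAL-DATA-QUEUE, whose SoftMax turns out to be 140 in the output below:

# Assumption: WTD drop thresholds are a percentage of the queue's SoftMax.
# The 140-buffer SoftMax for CRITICAL-DATA-QUEUE (Q1) is taken from the
# 'show platform hardware fed' output shown below.
softmax = 140
thresholds = {"cs2": 80, "cs3": 90, "cs6": 100, "cs7": 100}
for dscp, pct in thresholds.items():
    print(f"{dscp}: drop above {softmax * pct // 100} buffers")
# cs2: drop above 112 buffers, cs3: 126, cs6/cs7: 140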
Once I checked the buffer allocation, I became a bit confused:
DATA Port:20 GPN:101 AFD:Disabled FlatAFD:Disabled QoSMap:0 HW Queues: 160 - 167
DrainFast:Disabled PortSoftStart:7 - 420
----------------------------------------------------------
    DTS  Hardmax   Softmax   PortSMin  GlblSMin  PortStEnd
--  ---  --------  --------  --------  --------  ---------
 0   1   10   70   23   70    0    0    0    0    0   560
 1   1    4    0   24  140   32  140   16   70    0   560
 2   1    4    0   24  140   32  140   16   70    0   560
 3   1    4    0   25  280   16  140    8   70    0   560
 4   1    4    0    5    0    0    0    0    0    0   560
 5   1    4    0    5    0    0    0    0    0    0   560
 6   1    4    0    5    0    0    0    0    0    0   560
 7   1    4    0    5    0    0    0    0    0    0   560
--- cut for brevity ---
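To put numbers on my confusion, these are the ratios I can extract from this output in Python, still assuming a 280-buffer pool; the queue-to-class mapping below is also my assumption (classes taken in policy-map order):

# Observed values from the 1P3Q3T output above; ratios are relative to the
# assumed 280-buffer pool. Queue-to-class mapping is my assumption.
TOTAL = 280
observed = {
    0: (70, 70),   # REALTIME-QUEUE (priority): SoftMax == HardMax
    1: (0, 140),   # CRITICAL-DATA-QUEUE
    2: (0, 140),   # BULK-SCAVENGER-QUEUE
    3: (0, 280),   # class-default
}
for q, (hard, soft) in observed.items():
    print(f"Q{q}: hardmax {hard / TOTAL:.0%}, softmax {soft / TOTAL:.0%}")
# Q0: 25%/25%, Q1: 0%/50%, Q2: 0%/50%, Q3: 0%/100%

Note also that PortSoftStart dropped from 1008 to 420 and PortStEnd from 1344 to 560.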
Let's check another example, this time with two LLQs (level 1 and level 2):
policy-map QoS-2P3Q3T
 class REALTIME-VOICE-QUEUE
  priority level 1 percent 10
  police rate percent 10
 class REALTIME-VIDEO-QUEUE
  priority level 2 percent 20
  police rate percent 20
 class CRITICAL-DATA-QUEUE
  bandwidth remaining percent 30
  queue-limit dscp cs2 percent 80
  queue-limit dscp cs3 percent 90
  queue-limit dscp cs6 percent 100
  queue-limit dscp cs7 percent 100
  queue-limit dscp af41 percent 80
  queue-limit dscp af42 percent 80
  queue-limit dscp af43 percent 80
  queue-limit dscp af31 percent 80
  queue-limit dscp af32 percent 80
  queue-limit dscp af33 percent 80
  queue-limit dscp af21 percent 80
  queue-limit dscp af22 percent 80
  queue-limit dscp af23 percent 80
 class BULK-SCAVENGER-QUEUE
  bandwidth remaining percent 5
  queue-limit dscp af11 percent 100
  queue-limit dscp af12 percent 90
  queue-limit dscp af13 percent 80
  queue-limit dscp cs1 percent 80
 class class-default
  bandwidth remaining percent 35
This one confused me even more. Check how the buffers are now allocated:
DATA Port:20 GPN:101 AFD:Disabled FlatAFD:Disabled QoSMap:0 HW Queues: 160 - 167
DrainFast:Disabled PortSoftStart:7 - 336
----------------------------------------------------------
    DTS  Hardmax   Softmax   PortSMin  GlblSMin  PortStEnd
--  ---  --------  --------  --------  --------  ---------
 0   1   10   56   23   56    0    0    0    0    8   448
 1   1   10   56   24  224   16  112    0    0    8   448
 2   1    4    0   25  112   32  112   16   56    8   448
 3   1    4    0   24  224   16  112    8   56    8   448
 4   1    4    0   24  224   16  112    8   56    8   448
 5   1    4    0    5    0    0    0    0    0    8   448
 6   1    4    0    5    0    0    0    0    0    8   448
 7   1    4    0    5    0    0    0    0    0    8   448
--- cut for brevity ---
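The same ratio exercise for this output, again assuming a 280-buffer pool and policy-map ordering for the queue-to-class mapping; I can't explain why the CRITICAL-DATA queue ends up smaller than BULK-SCAVENGER and class-default:

# Observed values from the 2P3Q3T output above; the mapping of queues to
# classes is my assumption (classes taken in policy-map order).
TOTAL = 280
observed = {
    0: (56, 56),   # REALTIME-VOICE-QUEUE (priority level 1)
    1: (56, 224),  # REALTIME-VIDEO-QUEUE (priority level 2)
    2: (0, 112),   # CRITICAL-DATA-QUEUE
    3: (0, 224),   # BULK-SCAVENGER-QUEUE
    4: (0, 224),   # class-default
}
for q, (hard, soft) in observed.items():
    print(f"Q{q}: hardmax {hard / TOTAL:.0%}, softmax {soft / TOTAL:.0%}")
# Q0: 20%/20%, Q1: 20%/80%, Q2: 0%/40%, Q3: 0%/80%, Q4: 0%/80%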
I am not sure how accurate my conclusions would be, so instead let me ask a few questions:
1. Is the logic going to be different for WRED queues (I haven't tested), or is it the same as for WTD?
2. Why do we get fewer buffers allocated when WTD is configured? What's the logic behind it?
3. Is my understanding correct, or am I missing something else?
4. Why is the 'qos queue-softmax-multiplier' command not available on the C9500-48Y4C? Is it a High Performance platform limitation? Will it be added in later releases, or does it mean we don't have this flexibility with the Y4Cs?