Catalyst 9500-48Y4C Queuing

Hi all

 

I am trying to fine-tune the egress buffers on one of our C9500-48Y4C switches (High Performance). I've read a few documents and watched a couple of Cisco Live presentations, but I still can't fully grasp how egress buffer memory is allocated.

 

My understanding WAS the following:

 

  • HardMax buffers (static allocation on LLQs) = Total-interface-buffers / Total-queues
  • SoftMax buffers (dynamic allocation) = GlobalSMin * 4 (or 400%)
  • GlobalSMin = (Total-interface-buffers - Total-HardMax-reservations) / Total-non-priority-queues
  • A 1Gbps interface has 280 buffers (on this particular platform)
  • By default there are two queues, Q0 and Q1:
    • Q0 is the LLQ and gets 40% reserved as HardMax, plus 4 times HardMax allocated as SoftMax (400%)
    • Q1 has no HardMax allocation, only SoftMax as per above ((Total - HardMax) * 4)
  • If a custom QoS policy is applied, each queue is allocated a fair share of buffers as follows (see the sketch after this list):
    • Priority queues get HardMax allocated using the logic described above (not 40%!)
    • Priority queues get SoftMax allocated as 4 times HardMax
    • Each non-priority queue gets SoftMax allocated using the same logic
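
To make this concrete, here is a minimal Python sketch of that understanding. The function names and the 280-buffer total are my own assumptions for a 1Gbps port on this platform, not anything taken from Cisco documentation:

TOTAL = 280   # buffers I believe a 1Gbps port gets on this platform
MULT = 4      # SoftMax = 400% of the base value

def default_two_queue():
    # Default (no policy applied): Q0 is the LLQ with a 40% HardMax reservation.
    q0_hard = TOTAL * 40 // 100                   # 112
    return {
        'Q0': (q0_hard, q0_hard * MULT),          # (112, 448)
        'Q1': (0, (TOTAL - q0_hard) * MULT),      # (0, 672)
    }

def custom_policy(num_queues, num_priority):
    # Custom policy: HardMax = TOTAL / queues for each priority queue;
    # GlobalSMin = (TOTAL - HardMax reservations) / non-priority queues.
    hard = TOTAL // num_queues
    glbl_smin = (TOTAL - hard * num_priority) // (num_queues - num_priority)
    alloc = {}
    for q in range(num_queues):
        if q < num_priority:
            alloc['Q%d' % q] = (hard, hard * MULT)    # (HardMax, SoftMax)
        else:
            alloc['Q%d' % q] = (0, glbl_smin * MULT)
    return alloc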

Here's the config on a 1Gbps port with no QoS policy applied

 

DATA Port:20 GPN:101 AFD:Disabled FlatAFD:Disabled QoSMap:0 HW Queues: 160 - 167
DrainFast:Disabled PortSoftStart:7 - 1008
----------------------------------------------------------
 Q  DTS  Hardmax    Softmax    PortSMin   GlblSMin   PortStEnd
--  ---  ---------  ---------  ---------  ---------  ---------
 0    1  10   112   23   448   16   224    0     0    8  1344
 1    1   4     0   24   672   16   336    8   168    8  1344
 2    1   4     0    5     0    0     0    0     0    8  1344
 3    1   4     0    5     0    0     0    0     0    8  1344
 4    1   4     0    5     0    0     0    0     0    8  1344
 5    1   4     0    5     0    0     0    0     0    8  1344
 6    1   4     0    5     0    0     0    0     0    8  1344
 7    1   4     0    5     0    0     0    0     0    8  1344
--- cut for brevity ---

 

As can be seen, Q0 has 112 buffers assigned as HardMax, which is indeed 40% of 280. In turn, its SoftMax is 448, which is 4 times 112. Q1 has 0 HardMax and 672 SoftMax, which is (Total - HardMax) * 4, or (280 - 112) * 4 = 672. All good so far.
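
Running the sketch above for the default case reproduces these numbers exactly:

>>> default_two_queue()
{'Q0': (112, 448), 'Q1': (0, 672)}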

 

I then applied a custom 1P3Q3T policy, as shown below

 

policy-map QoS-1P3Q3T
 class REALTIME-QUEUE
  priority level 1 percent 30
 class CRITICAL-DATA-QUEUE
  bandwidth remaining percent 30
  queue-limit dscp cs2 percent 80
  queue-limit dscp cs3 percent 90
  queue-limit dscp cs6 percent 100
  queue-limit dscp cs7 percent 100
  queue-limit dscp af41 percent 100
  queue-limit dscp af42 percent 90
  queue-limit dscp af43 percent 80
  queue-limit dscp af31 percent 90
  queue-limit dscp af32 percent 80
  queue-limit dscp af33 percent 80
  queue-limit dscp af21 percent 90
  queue-limit dscp af22 percent 80
  queue-limit dscp af23 percent 80
 class BULK-SCAVENGER-QUEUE
  bandwidth remaining percent 5
  queue-limit dscp cs1 percent 80
  queue-limit dscp af11 percent 100
  queue-limit dscp af12 percent 90
  queue-limit dscp af13 percent 80
 class class-default
  bandwidth remaining percent 35

 

Once I checked the buffer allocation, I became a bit confused:

 

DATA Port:20 GPN:101 AFD:Disabled FlatAFD:Disabled QoSMap:0 HW Queues: 160 - 167
DrainFast:Disabled PortSoftStart:7 - 420
----------------------------------------------------------
 Q  DTS  Hardmax    Softmax    PortSMin   GlblSMin   PortStEnd
--  ---  ---------  ---------  ---------  ---------  ---------
 0    1  10    70   23    70    0     0    0     0    0   560
 1    1   4     0   24   140   32   140   16    70    0   560
 2    1   4     0   24   140   32   140   16    70    0   560
 3    1   4     0   25   280   16   140    8    70    0   560
 4    1   4     0    5     0    0     0    0     0    0   560
 5    1   4     0    5     0    0     0    0     0    0   560
 6    1   4     0    5     0    0     0    0     0    0   560
 7    1   4     0    5     0    0     0    0     0    0   560
--- cut for brevity ---

 

  • The priority queue (Q0) got 70 buffers as HardMax. This is what I expected, and it matches the logic described above: 280 / 4 queues = 70. However, its SoftMax is now also 70 (and not 4 times HardMax, or 280).
  • Non-priority queues Q1 and Q2, which have WTD enabled, got a SoftMax of 2 times GlobalSMin (I expected it to be 4 times GlobalSMin).
  • The default non-priority queue (Q3), with no WTD enabled, has 280 buffers as SoftMax (this is what I expected to see for all non-priority queues). The arithmetic check after this list confirms my reading of the table.
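
Here is that check in Python; the multipliers are my inference from the output above, not documented behaviour:

# Observed allocation for 1P3Q3T (4 active queues, 1 priority).
TOTAL = 280
hard = TOTAL // 4                 # 70
glbl_smin = (TOTAL - hard) // 3   # 70, matches the GlblSMin column

assert hard * 1 == 70             # Q0: SoftMax == HardMax, NOT 4x
assert glbl_smin * 2 == 140       # Q1/Q2 (WTD): 2 x GlobalSMin
assert glbl_smin * 4 == 280       # Q3 (no WTD): 4 x GlobalSMin, as expected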

Let's check another example, with 2 LLQs (Level 1 and Level 2):

 

policy-map QoS-2P3Q3T
 class REALTIME-VOICE-QUEUE
  priority level 1 percent 10
  police rate percent 10
 class REALTIME-VIDEO-QUEUE
  priority level 2 percent 20
  police rate percent 20
 class CRITICAL-DATA-QUEUE
  bandwidth remaining percent 30
  queue-limit dscp cs2 percent 80
  queue-limit dscp cs3 percent 90
  queue-limit dscp cs6 percent 100
  queue-limit dscp cs7 percent 100
  queue-limit dscp af41 percent 80
  queue-limit dscp af42 percent 80
  queue-limit dscp af43 percent 80
  queue-limit dscp af31 percent 80
  queue-limit dscp af32 percent 80
  queue-limit dscp af33 percent 80
  queue-limit dscp af21 percent 80
  queue-limit dscp af22 percent 80
  queue-limit dscp af23 percent 80
 class BULK-SCAVENGER-QUEUE
  bandwidth remaining percent 5
  queue-limit dscp af11 percent 100
  queue-limit dscp af12 percent 90
  queue-limit dscp af13 percent 80
  queue-limit dscp cs1 percent 80
 class class-default
  bandwidth remaining percent 35

 

This one confused me even more. Check how the buffers are now allocated:

 

DATA Port:20 GPN:101 AFD:Disabled FlatAFD:Disabled QoSMap:0 HW Queues: 160 - 167
DrainFast:Disabled PortSoftStart:7 - 336
----------------------------------------------------------
 Q  DTS  Hardmax    Softmax    PortSMin   GlblSMin   PortStEnd
--  ---  ---------  ---------  ---------  ---------  ---------
 0    1  10    56   23    56    0     0    0     0    8   448
 1    1  10    56   24   224   16   112    0     0    8   448
 2    1   4     0   25   112   32   112   16    56    8   448
 3    1   4     0   24   224   16   112    8    56    8   448
 4    1   4     0   24   224   16   112    8    56    8   448
 5    1   4     0    5     0    0     0    0     0    8   448
 6    1   4     0    5     0    0     0    0     0    8   448
 7    1   4     0    5     0    0     0    0     0    8   448
--- cut for brevity ---

 

  • Q0 uses identical logic: HardMax = 56 (Total-buffers / Total-queues = 280 / 5 = 56), and its SoftMax equals its HardMax (identical to the previous example).
  • Q1, however, being the LLQ level 2, got 4 times HardMax as its SoftMax allocation, or 224 buffers. This is actually what I expected from Q0 as well.
  • Q2-Q4 have not changed (same logic: WTD queues get 2 times GlobalSMin as SoftMax and non-WTD queues get 4 times GlobalSMin as SoftMax). The check after this list runs the same numbers.
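
The same arithmetic against this second table (again, my inference from the output, not documented behaviour):

# Observed allocation for 2P3Q3T (5 active queues, 2 priority).
TOTAL = 280
hard = TOTAL // 5                     # 56 per priority queue
glbl_smin = (TOTAL - hard * 2) // 3   # 56, matches the GlblSMin column

assert hard * 1 == 56                 # Q0 (priority level 1): SoftMax == HardMax
assert hard * 4 == 224                # Q1 (priority level 2): 4 x HardMax
assert glbl_smin * 2 == 112           # Q2: 2 x GlobalSMin
assert glbl_smin * 4 == 224           # Q3/Q4: 4 x GlobalSMin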

I am not sure how accurate this is, but the following conclusions can be drawn (summarised in code after the list):

 

  • LLQ Level 1: SoftMax always equals HardMax. This queue should only be used for voice, and that traffic is not bursty, hence we don't need a lot of buffers allocated.
  • LLQ Level 2: SoftMax always equals 4 times HardMax. This queue should be used for video, which is indeed bursty, hence the flexibility of the buffers.
  • Non-priority queue with WTD logic: HardMax is 0, SoftMax is 2 times GlobalSMin - WHY?!
  • Non-priority queue without WTD logic: HardMax is 0, SoftMax is 4 times GlobalSMin - this is what I expected for all non-priority queues.
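
If my reading is right, the whole pattern collapses into one small rule table (purely my inference; the queue-type names are mine, and I'd be glad for someone to confirm or correct it):

# SoftMax rule per queue type, as inferred above: (base value, multiplier).
SOFTMAX_RULES = {
    'priority-level-1':   ('HardMax', 1),      # SoftMax == HardMax
    'priority-level-2':   ('HardMax', 4),      # SoftMax == 4 x HardMax
    'non-priority-wtd':   ('GlobalSMin', 2),   # the surprising one
    'non-priority-plain': ('GlobalSMin', 4),
}

def softmax(queue_type, base_value):
    _base, mult = SOFTMAX_RULES[queue_type]
    return base_value * mult

# e.g. softmax('non-priority-wtd', 70) -> 140, as in the 1P3Q3T output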

 

Is the logic going to be different for WRED queues (I haven't tested), or is it the same as for WTD?

Why do we get fewer buffers allocated when WTD is configured? What's the logic behind it?

 

Is my understanding correct, or am I missing something else?

 

Why is the 'qos queue-softmax-multiplier' command not available on the C9500-48Y4C? Is it a High Performance platform limitation? Is it going to be added in a later release, or does it mean we don't have this flexibility with the Y4Cs?
