My question is specific to the "maximum" value set in "mls qos queue-set output ...". Is this value (say 400%) 4 times the reserved amount, or 4 times the allocated buffers amount? I seem to have inherited configs that add up to more than 100% of all available buffers.
MySwitch4#sho mls qos queue-set
Queueset: 1 (Applied by default)
Queue : 1 2 3 4
buffers : 25 25 25 25
threshold1: 100 70 100 5
threshold2: 100 80 100 100
reserved : 50 100 50 100
maximum : 400 100 400 100
If I understand this then:
1) reserved buffers equal (50% x 25%) + (100% x 25%) + (50% x 25%) + (100% x 25%) = 12.5 + 25 + 12.5 + 25 = 75% of buffers are reserved.
2) this leaves 25% of buffers available for the shared pool
However, for Queue 1 (and 3), 400% x (50% of 25%) comes to 50% of total buffer space. How does this "overflow" of 50% of the buffer space for Queue 1 (or 3) fit into the 25% of buffer space available to the shared pool?
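To make the arithmetic concrete, here is a quick Python sanity check of the figures above (values are taken straight from the queue-set output; the last line uses my reading of "maximum" as a multiple of the reserved slice, which is exactly the interpretation I am asking about):

```python
# Sanity check of the queue-set 1 arithmetic.
# All values are percentages of the interface's total egress buffer.
buffers  = [25, 25, 25, 25]    # per-queue share of the total buffer
reserved = [50, 100, 50, 100]  # guaranteed fraction of each queue's share

# Reserved pool: each queue's guaranteed slice of its own share.
reserved_total = sum(b * r / 100 for b, r in zip(buffers, reserved))
shared_pool = 100 - reserved_total

print(f"reserved: {reserved_total}%")   # 75.0%
print(f"shared pool: {shared_pool}%")   # 25.0%

# My reading of "maximum 400" for queue 1: 4 x its reserved slice.
q1_claim = 4 * (buffers[0] * reserved[0] / 100)
print(f"queue 1 claim: {q1_claim}%")    # 50.0% -- more than the 25% shared pool
```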
Thanks in advance
"buffers" means that each of the 4 egress queues on the interface get 25% of the buffer "reserved" means how much of that 25% buffer you are guaranteed. 100% means you are guaranteed your full 25% buffer share. 50% means that you are only guaranteed 1/2 of your 25% buffer share and that the other 3 queues can borrow the other 1/2 if you are not using it at the time. "maximum" means how much buffer you can borrow from other queues. 400% means you can use your 100% plus the 100% of the other 3 queues (if not in use and not reserved) for a total of 400% "threshold" are the drop thresholds for each queue. Once you hit that % of queue fullness you drop all packets with the QoS value associated with that queue.
So to answer your question "How does this "overflow" 50% of the buffer space for Queue 1 (or 3) fit into 25% of available buffer space?" the answer is it doesn't. Your calculation is correct in that you have allocated up to 50% of total buffer space for queues 1 and 3 to borrow - but with this config the most you'll ever get is ALL of the 25% allocated for the shared pool.
From the above link:
While buffer ranges allow individual queues in the queue-set to use more of the common pool when available, the maximum number of packets for each queue is still internally limited to 400 percent, or 4 times the allocated number of buffers.
The switch uses a buffer allocation scheme to reserve a minimum amount of buffers for each egress queue, to prevent any queue or port from consuming all the buffers and depriving other queues, and to decide whether to grant buffer space to a requesting queue. The switch decides whether the target queue has not consumed more buffers than its reserved amount (under-limit), whether it has consumed all of its maximum buffers (over-limit), and whether the common pool is empty (no free buffers) or not empty (free buffers). If the queue is not over-limit, the switch can allocate buffer space from the reserved pool or from the common pool (if it is not empty). If there are no free buffers in the common pool or if the queue is over-limit, the switch drops the frame.
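The decision procedure described in that paragraph can be sketched roughly as follows. The function and parameter names are mine, the single-buffer granularity is illustrative, and this is one reading of the quoted text, not the switch's actual internals:

```python
def admit_frame(queue_used: int, queue_reserved: int,
                queue_max: int, common_free: int) -> bool:
    """Decide whether a queue may take one more buffer, per the quoted scheme.

    queue_used     -- buffers the queue currently holds
    queue_reserved -- its guaranteed (reserved) allocation
    queue_max      -- its "maximum" cap (e.g. 4x its allocated buffers)
    common_free    -- free buffers remaining in the common pool
    """
    if queue_used >= queue_max:        # over-limit: drop the frame
        return False
    if queue_used < queue_reserved:    # under-limit: draw from the reserved pool
        return True
    return common_free > 0             # otherwise borrow, if the pool isn't empty
```

Note that on this reading a frame is dropped either when the queue is over-limit or when it needs to borrow and the common pool is empty, which matches the answer above: the "maximum" is a cap, not a guarantee.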
This response matches what I believed the documentation stated, but I have inherited a configuration that sets maximums to values that cannot be filled.
I also cut a TAC case (#617774311) asking essentially the same question and got what I understand to be a different answer. I offered an example where the maximum would take more than the available space in the common pool. The answer I got back was that the excess would be taken from system resources. I am not sure what that means.
My problem is that management's expectation is that Q1 can grow to 400% of its buffer size, and no-one made the connection that there isn't enough common space to service this. If the excess is "stolen" from the system then perhaps that's OK (perhaps not), but if the excess cannot be serviced beyond the common pool (as I originally believed), and IOS obviously doesn't check the arithmetic, then I may have to rework the QoS policy.
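Under my original "common pool only" reading, the numbers for Q1 work out as follows (my arithmetic, using the queue-set values above, not a Cisco figure):

```python
q1_share    = 25.0                  # Q1's allocated share of the total buffer
q1_reserved = q1_share * 50 / 100   # 12.5% guaranteed to Q1
shared_pool = 25.0                  # what's left after 75% is reserved (per the thread)

# Management's expectation: "maximum 400" = 4 x the allocated share.
expected_max = 4 * q1_share          # 100.0% of the total buffer

# What Q1 can actually reach if borrowing is capped by the common pool:
actual_max = q1_reserved + shared_pool

print(expected_max, actual_max)      # 100.0 37.5
```

If the cap really is the common pool, Q1 tops out at 37.5% of the total buffer, nowhere near the 400%-of-share (100% of buffer) that management expects.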
Clarification would be appreciated - thanks.