09-20-2012 09:25 AM - edited 03-07-2019 08:59 AM
Hi all,
I have a question about these particular threshold values. I configured auto qos on a Catalyst 2960 and I'm seeing the following lines:
mls qos queue-set output 1 threshold 1 100 100 50 200
mls qos queue-set output 1 threshold 2 125 125 100 400
mls qos queue-set output 1 threshold 3 100 100 100 400
mls qos queue-set output 1 threshold 4 60 150 50 200
According to the syntax... mls qos queue-set output 1 threshold <queue-ID> <Threshold1> <Threshold2> <Reserved> <Maximum>
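For example, reading the first of those lines against that syntax (my own interpretation, please correct me if I'm off):
mls qos queue-set output 1 threshold 1 100 100 50 200
! queue-set 1, egress queue 1
! drop threshold 1 = 100% of the queue's buffer allocation
! drop threshold 2 = 100% of the queue's buffer allocation
! reserved = 50% of the allocation is guaranteed to this queue
! maximum = the queue may grow up to 200% of its allocation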
What I know so far (or I think I know):
Please correct me if wrong in the previous points.
Now my question: looking at the second line, for example... mls qos queue-set output 1 threshold 2 125 125 100 400
Thanks in advance for your help.
09-20-2012 03:57 PM
Hi Ricardo,
Let's see if I can explain; sorry, I am also just a learner.
#sh mls qos queue-set 2
Queueset: 2
Queue : 1 2 3 4
----------------------------------------------
buffers : 10 10 40 40
threshold1: 50 200 100 100
threshold2: 200 200 100 100
reserved : 75 50 50 50
maximum : 300 400 400 400
The buffer values can be a bit confusing at first. First we define, in percent, how big a share of the port's buffers each queue gets. The thresholds then define at what fill level traffic mapped to each threshold starts being dropped. For queue 1 we start dropping threshold 1 traffic when the queue reaches 50% of its allocation, and threshold 2 traffic at 200%.
How can a queue get to 200%?
The secret here is that a queue can outgrow the buffers we assign to it if there are buffers available in the common pool.
This is where the reserved value comes into play.
Every queue gets assigned buffers, but we can define that only 50% of these buffers are strictly reserved for the queue.
The other 50% goes into a common pool and can be used by the other queues as well.
We then set a maximum value for the queue which says that it can grow
up to 400% but no more than that.
The main benefit is that the switch can keep more packets in its buffers before it starts dropping them.
You can even increase it further.
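To put rough numbers on the queue-set 2 output above (just my reading of it, and only in percentages, since the absolute per-port buffer size depends on the platform): queue 1 is allocated 10% of the port's buffers; reserved 75 means 75% of that allocation is strictly guaranteed to queue 1 and the remaining 25% is contributed to the common pool; threshold 1 traffic starts dropping when the queue reaches 50% of its allocation, threshold 2 traffic at 200% (which is only possible by borrowing from the common pool); and maximum 300 means the queue can never hold more than three times its allocation, even if the common pool still has free buffers.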
I am changing the queue-set 2, queue 1 thresholds. Both threshold 1 and threshold 2 are set to 3100 so that the queue can pull buffers from the common pool if required.
Switch(config)#mls qos queue-set output 2 threshold 1 3100 3100 100 3200
Queueset: 2
Queue : 1 2 3 4
----------------------------------------------
buffers : 70 10 10 10
threshold1: 3100 100 100 100
threshold2: 3100 100 100 100
reserved : 100 50 50 50
maximum : 3200 400 400 400
Please note: if you want to make changes, test them on queue-set 2, because by default all ports are mapped to queue-set 1 and changes there affect every interface.
You can map any port to queue-set 2:
interface FastEthernet0/2
 queue-set 2
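If I remember correctly, you can then check which queue-set a port uses and how its buffers are allocated with something like:
show mls qos interface fastEthernet 0/2 buffers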
Please rate if this helps
thanks
09-20-2012 09:30 PM
Thanks, I think I'm getting it. This might change the entire focus I was considering before; please tell me whether I'm getting closer or not:
This is what caught my attention: "Both threshold 1 and threshold 2 are set to 3100 so that the queue can pull buffers from the common pool if required."
Please tell me if one of these is correct (I hope at least one is).
Thanks again.
09-20-2012 11:27 PM
The most logical behavior is that a queue first uses its reserved buffers, so that the common buffers stay free for other queues/ports. For example, if the reserved buffers are sufficient to absorb an excess of traffic on a queue, it would be a waste of buffer for that queue to start consuming common buffers, since its reserved buffers can't be used by any other queue anyway.
Even if you won't find an exact answer there, have a look at this link, which is the most complete (and very good) document I have found about C3750 QoS:
https://supportforums.cisco.com/docs/DOC-8093
Regards
P.
09-20-2012 11:49 PM
09-21-2012 07:49 AM
Thank you very much, guys. I'm definitely taking a deep look at both documents.
In the meantime, going back to the first question:
mls qos queue-set output 1 threshold 2 125 125 100 400
09-21-2012 03:24 PM
Hi Mate,
The buffer space is divided between the common pool and the reserved pool. The switch uses a buffer allocation scheme to reserve a minimum amount of buffers for each egress queue, to prevent any queue or port from consuming all the buffers and depriving other queues, and to control whether to grant buffer space to a requesting queue. The switch detects whether the target queue has not consumed more buffers than its reserved amount (under-limit), whether it has consumed all of its maximum buffers (over limit), and whether the common pool is empty (no free buffers) or not empty (free buffers). If the queue is not over-limit, the switch can allocate buffer space from the reserved pool or from the common pool (if it is not empty). If there are no free buffers in the common pool or if the queue is over-limit, the switch drops the frame.
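As a rough sketch of how those knobs fit together (the values below are purely illustrative, not a recommendation):
mls qos queue-set output 2 buffers 40 20 20 20
! give queue 1 of queue-set 2 40% of the port's buffers and 20% to each of the other queues
mls qos queue-set output 2 threshold 1 100 100 50 400
! queue 1: drop T1/T2 traffic at 100% of its allocation, strictly reserve 50% of the allocation,
! and let the queue grow to 400% of its allocation by borrowing from the common pool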
Check out this link for more QoS detail.
thanks
09-22-2012 01:05 AM
Ricardo,
I think there is a misunderstanding.
If:
T1 = 125%
T2 = 125%
it means that when the queue reaches 125% of its allocated buffer, traffic mapped (via the DSCP map, for example) to T1 and to T2 will both be dropped. It does not mean that traffic mapped to T1 gets 125% of one buffer and traffic mapped to T2 gets 125% of another buffer.
About T3 and its implicit value of 100%: I am not sure about the validity of that, even though I have read it too. The most logical interpretation is that T3 is the maximum threshold.
So if you really want to differentiate the treatment of traffic inside a queue, you can configure, for example:
T1: 20%
T2: 100%
T3: 400%
in such an example, you consider that "T1 traffic", with T1 = 20%, is really not important and gets dropped almost immediately in case of congestion.
T3, at the opposite end, corresponds to important traffic for which you want to allocate a good part of the buffer. If with a value of 400% you still see drops, you can increase it to 800%, for example (keep in mind that the common pool is not just the buffer for one port but for all the ports attached to the same ASIC, so if other ports don't use the common buffer, this queue can, which explains percentages above 100%).
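Just to make that concrete, a possible sketch (DSCP 8 and 46 are only examples of "less important" and "more important" traffic, and I am assuming the usual 2960/3750 mapping commands):
! map the less important traffic (e.g. DSCP 8) to threshold 1 of queue 1 - dropped first
mls qos srr-queue output dscp-map queue 1 threshold 1 8
! map the important traffic (e.g. DSCP 46) to threshold 3 of queue 1 - dropped last
mls qos srr-queue output dscp-map queue 1 threshold 3 46
! queue 1 of queue-set 1: T1 at 20%, T2 at 100%, 50% reserved, maximum (the implicit T3) at 400%
mls qos queue-set output 1 threshold 1 20 100 50 400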
09-27-2012 07:49 AM
Thanks to all for your help. Parisdooz12, I've read the document you posted (almost all of it); I will be moving on to the next ones soon. I have two new questions, if I may.
Thanks a lot for all your help.
10-10-2012 10:56 AM
Hi Ricardo,
To answer the first question:
- First, notice that the buffer will begin to fill only if there is congestion on the physical interface.
- Step #1: there is a burst of CoS 0 traffic, and congestion occurs.
- Step #2: the buffer begins to fill with new packets.
- Step #3: a CoS 0 packet arrives; there are 2 cases:
- Case 1: the buffer has not reached 20% utilization and the packet is accepted into the buffer.
- Case 2: the buffer has already reached 20% and the CoS 0 packet is dropped.
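If you want to check which queue and threshold CoS 0 actually lands in on your switch, the CoS-to-output-queue map can be displayed with (assuming the standard 2960/3750 command set):
show mls qos maps cos-output-q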
About your 2nd question: to be honest, I haven't read the test in that document and I am surprised by the result. Like you, I can't make sense of it either. According to the documents, increasing the threshold allows the queue to use more of the common buffer, as long as common buffer is available.
The only effect I can see from increasing the allocated buffer is that the reserved buffer increases too, if you keep the same % value for the reserved buffer (since the reserved value is a % of the allocated buffer). But: 1) if common buffer is available, which seems to be the case in the test, it should have no impact; 2) the test that consisted of increasing the reserved buffer had almost no effect.
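For reference, I suppose the "increase the reserved buffer" test would be something like this (80 is just an illustrative reserved percentage):
mls qos queue-set output 2 threshold 1 100 100 80 400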
So... I am interested in an explanation too!