QoS Thresholds (Max Threshold purpose)

Hi all,

I have a question about this particular threshold. I configured auto QoS on a Catalyst 2960 and I'm seeing the following lines:

mls qos queue-set output 1 threshold 1 100 100 50 200

mls qos queue-set output 1 threshold 2 125 125 100 400

mls qos queue-set output 1 threshold 3 100 100 100 400

mls qos queue-set output 1 threshold 4 60 150 50 200

According to the command syntax: mls qos queue-set output 1 threshold <queue-ID> <Threshold1> <Threshold2> <Reserved> <Maximum>
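
For reference, here is a small Python sketch (the helper name is my own, not anything from Cisco) that splits one of those lines into labeled fields following this syntax:

```python
# Hypothetical helper: split an "mls qos queue-set output" line into labeled
# fields, following the syntax
#   mls qos queue-set output <set> threshold <queue> <T1> <T2> <reserved> <max>
def parse_queue_set_line(line):
    parts = line.split()
    return {
        "queue_set": int(parts[4]),
        "queue_id": int(parts[6]),
        "threshold1": int(parts[7]),   # WTD threshold 1, % of allocated buffers
        "threshold2": int(parts[8]),   # WTD threshold 2, % of allocated buffers
        "reserved": int(parts[9]),     # % of allocated buffers strictly reserved
        "maximum": int(parts[10]),     # cap on queue growth, % of allocated
    }

print(parse_queue_set_line("mls qos queue-set output 1 threshold 2 125 125 100 400"))
```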

What I Know so far (or I think I know):

  • There are 4 egress queues per port, hence the previous 4 lines (one for each queue).
  • The threshold values are relative to the buffers assigned to the queue (not to the entire port), which is why they can go above 100%, by borrowing buffers from other queues on the same port.
  • There is an implicit Threshold 3, not configurable, which is 100%.
  • The Reserved threshold guarantees some dedicated buffers for the specific queue, so no other queue can use them.
  • The Max threshold allows the queue to grow beyond the size originally assigned to it.

Please correct me if wrong in the previous points.

Now my question: looking at the second line, for example... mls qos queue-set output 1 threshold 2 125 125 100 400

  • I see that T1 + T2 = 125 + 125 = 250%.
  • We might also add T3... T1 + T2 + T3 = 350%.
  • So why is the Max threshold set to 400%?
    • A) Is there some benefit to this? If so, what would it be?
    • B) Or would it be just the same if we set the Max threshold to 350%?

Thanks in advance for your help.

9 Replies

singhaam007
Level 3

Hi Ricardo,

Let's see if I can explain; sorry, I am also just a learner.

#sh mls qos queue-set 2
Queueset: 2
Queue      :    1    2    3    4
--------------------------------
buffers    :   10   10   40   40
threshold1 :   50  200  100  100
threshold2 :  200  200  100  100
reserved   :   75   50   50   50
maximum    :  300  400  400  400

The buffer values can be a bit confusing at first. First we define how big a share of the buffers the queue gets, in percent. The thresholds then define when traffic will be dropped as the buffer fills. For queue 1 we start dropping traffic mapped to threshold 1 at 50%, and traffic mapped to threshold 2 at 200%.

How can a queue get to 200%? The secret is that a queue can outgrow the buffers we assign to it if there are buffers available in the common pool. This is where the reserved value comes into play. Every queue gets assigned buffers, but we can define that only 50% of these buffers are strictly reserved for the queue; the other 50% goes into a common pool and can be used by the other queues as well. We then set a maximum value for the queue, which says that it can grow up to 400% but no more than that.

The main benefit is that the switch can keep more packets in its buffer before it starts dropping them.
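
As a rough illustration of the percentages above, here is a small Python sketch of the buffer arithmetic (the 200-buffer pool size is an assumed example value, not a real 2960 figure):

```python
# Rough model of the buffer math described above. POOL is an assumed
# example size for the shared buffer pool, not a real Catalyst figure.
POOL = 200

def queue_buffers(buffers_pct, reserved_pct, maximum_pct):
    allocated = POOL * buffers_pct // 100       # queue's nominal share
    reserved = allocated * reserved_pct // 100  # strictly private buffers
    to_common = allocated - reserved            # donated to the common pool
    hard_cap = allocated * maximum_pct // 100   # absolute growth limit
    return {"allocated": allocated, "reserved": reserved,
            "to_common": to_common, "hard_cap": hard_cap}

# Queue 1 of queue-set 2 above: buffers 10%, reserved 75%, maximum 300%
print(queue_buffers(10, 75, 300))
```

With these numbers the queue nominally owns 20 buffers, keeps 15 for itself, donates 5 to the common pool, and may grow to at most 60 buffers if the common pool allows it.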

You can even increase it further.

Here I am changing the queue-set 2, queue 1 thresholds. Both Threshold 1 and Threshold 2 are mapped to 3100 so that they can pull buffer from the reserved pool if required.

Switch(config)#mls qos queue-set output 2 threshold 1 3100 3100 100 3200

queue set: 2
Queue      :    1    2    3    4
--------------------------------
buffers    :   70   10   10   10
threshold1 : 3100  100  100  100
threshold2 : 3100  100  100  100
reserved   :  100   50   50   50
maximum    : 3200  400  400  400

Please note: if you do want to make changes, test them on queue-set 2, because by default all ports are mapped to queue-set 1 and a change there will affect all the interfaces.

You can change any port to queue-set 2:

interface f0/2
 queue-set 2

Please rate if this helps

thanks

Thanks, I think I'm getting it. This might change the entire focus I was considering before; please tell me whether I'm getting closer or not.

This is what caught my attention: "Both Threshold 1 and Threshold 2 are mapped to 3100 so that they can pull buffer from the reserved pool if required."

  • I understand that it is possible, and logical, to allow a queue to outgrow its allocation.
  • My question now is: which buffers are used first, the buffers reserved for that queue, or the buffers that go to the common pool?
    • At first I thought it was the reserved buffers first, then the common pool, until every threshold in the queue reaches its maximum and packets start being dropped.
    • Now, when you say "if required", it makes me think that the switch (or port, to be more accurate) first uses the buffers from the common pool, and if required (maybe the common pool is full and a particular threshold has not been reached yet) then starts using the buffers reserved for that particular queue. Maybe this way the situation I presented in the first post would make more sense.

Please tell me if one of these is correct (I hope at least one is).

Thanks again.

The most logical behavior is that a queue first uses its reserved buffers, in order to leave the common buffers free for other queues/ports. For example, imagine the reserved buffers are sufficient to absorb an excess of traffic on a queue: it would be a waste of buffer space if the queue began using common buffers, since the reserved buffers can't in any case be used by other queues.

Even if you won't find an exact answer there, have a look at this link, which is the most complete (and very good) document I have found about C3750 QoS:

https://supportforums.cisco.com/docs/DOC-8093

Regards

P.

singhaam007
Level 3

hello

It uses its allocated buffers first, and if needed it can take some more from the pool.

See this document and hopefully it will clear up all your questions. Let me know if you still need any help.

Please rate if this helps

thanks

Thank you very much, guys. I'm definitely taking a deep look at both documents.

In the meantime, going back to the first question:

mls qos queue-set output 1 threshold 2 125 125 100 400

  • Traffic for T1 will at most reach 125% and then start being discarded.
  • Traffic for T2 will at most reach 125% and then start being discarded.
  • Traffic for T3 (implicit, as far as I know) will at most reach 100% (the default, non-configurable value for T3) and then start being discarded.
  • The reserved threshold for this queue is 100%, so no other queue is allowed to use buffers from queue 2.
  • Assuming all of these thresholds reach their limits, they will be using 350% of the queue.
  • So, is there really a benefit to having a maximum of 400%? Or would it be the same if we just set this maximum at 350%?

Hi Mate,

The buffer space is divided between the common pool and the reserved pool. The switch uses a buffer allocation scheme to reserve a minimum amount of buffers for each egress queue, to prevent any queue or port from consuming all the buffers and depriving other queues, and to control whether to grant buffer space to a requesting queue. The switch detects whether the target queue has not consumed more buffers than its reserved amount (under-limit), whether it has consumed all of its maximum buffers (over limit), and whether the common pool is empty (no free buffers) or not empty (free buffers). If the queue is not over-limit, the switch can allocate buffer space from the reserved pool or from the common pool (if it is not empty). If there are no free buffers in the common pool or if the queue is over-limit, the switch drops the frame.
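
The decision procedure in that paragraph can be sketched as a small Python function (a simplified model of the logic described, not actual switch code):

```python
def admit_frame(queue_used, reserved, maximum, common_free):
    """Return True if the frame is buffered, False if dropped.

    queue_used  - buffers the queue currently holds
    reserved    - buffers reserved for this queue
    maximum     - hard cap (maximum buffers) for this queue
    common_free - free buffers left in the common pool
    """
    if queue_used >= maximum:   # over-limit: always drop
        return False
    if queue_used < reserved:   # under-limit: take a reserved buffer
        return True
    return common_free > 0      # otherwise a free common buffer is needed

# Under the reserved amount: accepted even if the common pool is empty
assert admit_frame(queue_used=10, reserved=15, maximum=60, common_free=0)
# Between reserved and maximum: depends on the common pool
assert not admit_frame(queue_used=20, reserved=15, maximum=60, common_free=0)
assert admit_frame(queue_used=20, reserved=15, maximum=60, common_free=5)
# At the maximum: dropped regardless of the common pool
assert not admit_frame(queue_used=60, reserved=15, maximum=60, common_free=99)
```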

check out this link for detail qos

http://www.cisco.com/en/US/docs/switches/lan/catalyst3750x_3560x/software/release/12.2_55_se/configuration/guide/swqos.html#wp1020443

thanks

Ricardo,

I think there is a misunderstanding.

If:

T1 = 125%

T2 = 125%

it means: when the queue reaches 125% of its allocated buffers, traffic bound (via the DSCP map, for example) to T1 and T2 will be dropped. It is not the case that traffic bound to T1 gets its own 125% of buffers and traffic bound to T2 gets 125% of other buffers.

About T3 and the implicit value of 100%: I am not sure about the validity of that, even though I have read it too. The most logical interpretation is that T3 is the max threshold.

So if you really want to treat traffic differently inside a queue, you can configure, for example:

T1: 20%

T2: 100%

T3: 400%

In such an example, you consider that "T1 traffic", with T1 = 20%, is really not important and is dropped almost immediately in case of congestion.

T3, at the opposite end, corresponds to important traffic for which you want to allocate a good part of the buffer. If with the value of 400% you still see drops, you can increase it to 800%, for example. (Keep in mind that the common pool holds buffers not just for one port but for all the ports attached to the ASIC, so if other ports don't use the common buffers, this queue can, which explains percentages this high.)
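
To make concrete the idea that a threshold is checked against total queue occupancy (not against what each traffic class has consumed), here is a minimal Python sketch of the weighted-tail-drop check, using the example threshold values above; the function name and model are my own simplification:

```python
def wtd_drop(fill_pct, threshold_pct):
    """Weighted tail drop (simplified): a frame whose class maps to
    threshold_pct is dropped once TOTAL queue occupancy (fill_pct, as a
    percent of allocated buffers) reaches that threshold -- regardless of
    which traffic filled the queue."""
    return fill_pct >= threshold_pct

thresholds = {"T1": 20, "T2": 100, "T3": 400}  # example values from above

# Queue is 20% full (no matter with what traffic): T1 frames start dropping
assert wtd_drop(20, thresholds["T1"])
# T2 and T3 frames are still accepted at 20% occupancy
assert not wtd_drop(20, thresholds["T2"])
assert not wtd_drop(20, thresholds["T3"])
```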

Thanks to all for your help. Parisdooz12, I've read the document you posted (almost all of it); I will be moving on to the next ones soon. I have two new questions, if you please.

  1. (about your last post).
    • The threshold is applied to a class of traffic; let's say T1 is used for CoS 0 and T2 for CoS 1. But (this is my doubt) these thresholds are compared against the entire queue, not against the buffers that this particular traffic is consuming.
    • Plus, we said that the queue first uses its allocated buffers to place every packet that comes in.
    • In the last example: T1=20% and T2=100%.
    • Now, let's say the queue has all of its buffers free; maybe the switch or the interface has just been turned on. At the beginning, all of the traffic going to that interface is bound to T2, and it reaches the point of using the first 20% of the buffers... and then a packet bound to T1 comes in.
    • What will happen to this packet?
      • Will it get dropped because the queue has reached T1 (the 20%)?
      • Or will it be forwarded? And in that case, how or why?
  2. About the document you posted https://supportforums.cisco.com/docs/DOC-8093 (very good by the way).
    • You showed some tests in 3.2.2, demonstrating the effects of increasing thresholds and increasing the queue size.
    • It's clear (demonstrated) that increasing the queue size is more effective than increasing the threshold.
    • My question: why? Does that make sense?
      • It was said before that if the queue is (for example) 25% of 200 buffers (therefore 50 buffers), the traffic bound to a threshold can still grow beyond that by using the common pool.
      • I assume that in the test you made, the only traffic going through the switch was the traffic ingressing on the two interfaces, directed to port 3 (the outgoing interface), and the common pool was practically able to absorb any traffic in its buffers.
    • So what would be the difference between the following?
      • A) queue2 = 25% of 200 buffers = 50 buffers; T1 = 200% of queue2 = 100 buffers.
      • B) queue2 = 50% of 200 buffers = 100 buffers; T1 = 100% of queue2 = 100 buffers.
      • Assuming only traffic bound to Q2 is being forwarded (single-CoS traffic), only to one outgoing interface, and therefore the common pool has sufficient buffers to be used.
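
The arithmetic for cases A and B can be written out in a few lines of Python (the 200-buffer pool is the example figure from the bullets above):

```python
POOL = 200  # total buffers in the example

def t1_cap(buffers_pct, t1_pct):
    """Return (allocated buffers, T1 drop point in buffers) for a queue
    given its share of the pool and its T1 threshold percentage."""
    allocated = POOL * buffers_pct // 100
    return allocated, allocated * t1_pct // 100

# Case A: queue2 = 25% -> 50 buffers allocated, T1 = 200% -> cap 100 buffers
# Case B: queue2 = 50% -> 100 buffers allocated, T1 = 100% -> cap 100 buffers
print(t1_cap(25, 200))  # (50, 100)
print(t1_cap(50, 100))  # (100, 100)
```

Both cases give T1 the same nominal cap of 100 buffers, but case B owns twice the allocated buffers (and thus, at an equal reserved percentage, twice the reserved buffers), so case A depends more on the common pool actually having free buffers at that moment.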

Thanks a lot for all your help.

Hi Ricardo,

To answer the first question:

- First, notice that the buffer will begin to be filled only if there is congestion on the physical interface.

- Step #1: there is a burst of traffic and congestion occurs.

- Step #2: the buffer begins to be filled with the new packets.

- Step #3: a CoS 0 (T1-bound) packet arrives; two cases:

     - Case 1: the buffer has not reached 20% utilization, and the packet is accepted into the buffer.

     - Case 2: the buffer has already reached 20%, and the CoS 0 packet is dropped.

About your second question: to be honest, I haven't read the tests in this document and I am surprised by the result. Like you, it doesn't make sense to me either. According to the documents, increasing a threshold permits the queue to use more common buffers, as long as common buffers are available.

The only effect I can see from increasing the allocated buffers is that the reserved buffers increase too, if you keep the same percentage value for the reserved setting (since the reserved amount is a percentage of the allocated buffers). But: 1) if common buffers are available, which seems to be the case in the test, it should have no impact; 2) the test consisting of increasing the reserved buffers had almost no effect.

So... I am interested too in having an explanation!
