QoS Output Queues Threshold above 100%

nocteam465
Beginner

Hi,

First of all, I am not a native English speaker; please excuse any errors.

On a 3750X, I am allowed on an output queue to set the value of Threshold#1 and Threshold#2 above 100% (up to 3200%).

From all my reading I understood that the thresholds apply to the allocated buffers (the value returned in the "buffers" line of the example below), not to the "reserved" or "maximum" buffers.

Am I wrong here?

Knowing that there is an implicit Threshold 3 unalterably set to 100%, what can be the reason to allow a threshold value above 100%?

In the example below, queue #1 drops every incoming packet as soon as the queue is full, i.e. as soon as it "consumes" 10 buffers (Threshold 3 is 100%).

Any light on this is welcome.

JM


mgth03#sho mls qos queue-set 2
Queueset: 2
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      10      10      26      54
threshold1:    1111     200     100      20
threshold2:    2222     200     100      50
reserved  :      98      50      50      67
maximum   :    3000     400     400     400
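To make the percentage arithmetic concrete, here is a small Python sketch (not Cisco tooling, purely illustrative) that applies the "thresholds are percentages of the per-queue buffers allocation" reading to the queue-set shown above, treating the total buffer space as 100 units:

```python
# Sketch: interpret the queue-set the way this thread discusses it.
# T1/T2/maximum are percentages of each queue's "buffers" allocation;
# "reserved" is the guaranteed share of that allocation.
# Values mirror the "sho mls qos queue-set 2" output above.

buffers    = [10, 10, 26, 54]        # % of the port's total buffer space
threshold1 = [1111, 200, 100, 20]
threshold2 = [2222, 200, 100, 50]
reserved   = [98, 50, 50, 67]        # % of the queue's own allocation
maximum    = [3000, 400, 400, 400]   # logical cap, also % of the allocation

TOTAL = 100.0  # treat total buffer space as 100 units for illustration

for q in range(4):
    alloc = TOTAL * buffers[q] / 100.0
    print(f"Q{q+1}: alloc={alloc:.1f}  "
          f"T1={alloc * threshold1[q] / 100.0:.1f}  "
          f"T2={alloc * threshold2[q] / 100.0:.1f}  "
          f"guaranteed={alloc * reserved[q] / 100.0:.1f}  "
          f"logical max={alloc * maximum[q] / 100.0:.1f}")
```

With these settings, Q1's T1 and T2 (111.1 and 222.2 units) exceed the entire illustrative buffer space, so those thresholds could never trigger before the queue is simply full.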

17 Replies

Joseph W. Doherty
Hall of Fame Master

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

You can go above 100% because you can grab unreserved buffers.  Default buffers only reserve 50% of buffers.

If you haven't seen https://supportforums.cisco.com/docs/DOC-8093, it might help you to better understand.

Hi Joseph,

I understand the "reserved" and "maximum" settings well. These parameters are very clear to me.

I haven't yet read the document you mention; I just had a look at it and found:

"“Threshold1” & “Threshold2” – units are a percentage of total buffers allocated to the tx queue (from the “buffers” calculation)."

In the above example, imagine I have only traffic that goes via queue 4 (due to DSCP marking, etc.).

Imagine this queue uses 54% of all possible buffers (i.e. the value in the "buffers" line for queue 4).

The queue is then used at 100% => Threshold 3 applies, and any further packet is rejected, even though this queue only uses 54% of the available buffers. No buffers will be borrowed from the remaining 46%.

That's my understanding of it. What do you think?

Anyway, I will read the mentioned document with special attention. But only tomorrow now... I sit in Europe and it's late.

Thanks

best regards

JM


Posting

My understanding is the threshold settings are logical limits. You may run short of physical buffers before reaching them, but even if buffer resources are available, you won't acquire any additional buffers beyond them. (This is somewhat similar to a queue limit on a router's interface. The router won't queue packets beyond the configured limit, but it also might run out of buffering before reaching that limit. On routers, you normally hit the logical queue limit before you run short of memory. Switches, like the 3750 series, are more likely to run short of buffer memory.)

In any case, on a 3750, if you set each queue to 25% and configure each to reserve 100% of its buffers, no queue could queue more than 100% of its allocation. Yet if you set each to reserve 0%, each queue could acquire 400% of its buffers. (NB: according to the document I referenced, the 3750 will always reserve some buffers per queue.)
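The two scenarios above can be sketched numerically. This is an illustrative Python model (the `max_obtainable` helper is hypothetical, not an IOS function) of the idea that a queue can obtain at most its own reserve plus the common pool, capped by its "maximum" threshold:

```python
# Illustrative model: what one queue could obtain is its own reserved
# share plus whatever sits in the common pool, capped by "maximum".

def max_obtainable(buffers_pct, reserved_pct, maximum_pct, queue, total=100.0):
    alloc = [total * b / 100.0 for b in buffers_pct]
    own_reserved = alloc[queue] * reserved_pct[queue] / 100.0
    common = total - sum(a * r / 100.0 for a, r in zip(alloc, reserved_pct))
    logical_cap = alloc[queue] * maximum_pct[queue] / 100.0
    return min(own_reserved + common, logical_cap)

# Everyone reserves 100%: the common pool is empty, so a queue tops
# out at its own allocation.
print(max_obtainable([25]*4, [100]*4, [400]*4, 0))   # 25.0

# Everyone reserves 0%: the whole space is common, so one queue could
# reach its 400% logical cap, which here equals all 100 units.
print(max_obtainable([25]*4, [0]*4, [400]*4, 0))     # 100.0
```

The `min()` against the logical cap mirrors the "thresholds are logical limits" point made above.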

Hi Joseph,

I also thought that the threshold settings were logical limits.

I now think that T1 and T2 are logical limits, but T3 is just the physical/hardware limit, "queue full" (with all the buffers borrowed from the common pool, up to "maximum"). Giving that limit the name "threshold" is, in my opinion, exaggerated and confusing.


Posting

I believe T3 is a logical limit too.

I agree the terms I used were wrong: the size of the queues is determined by software, calculated as a part of the available memory; in that sense it is logical, and so T3 is a logical limit too.

I however maintain that calling "full" a threshold is excessive. No one expects to be able to put something into a container that is full.


Posting

I however maintain that calling "full" a threshold is excessive. No one expects to be able to put something into a container that is full.

I'm unsure what you're trying to say. "Full" might refer to either a logical limit (threshold) or a physical limit (buffer exhaustion).

nocteam465
Beginner

Sorry, due to operational issues and important projects, I have been away from QoS considerations for all this time.

I will try to reformulate my thoughts and questions.

Each ASIC in a switch manages a certain number of ports, say 8 ports.

Each ASIC in a switch has a certain amount of physical memory, say 100 MB.

Assuming this configuration:

buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     150     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400

Q1, Q2, Q3 and Q4 each reserve 50% of 25% of the 100 MB.

This means that 12.5 MB of memory are reserved for Q1.

In my understanding, as this memory is "reserved", it will not be available to other queues.

(https://supportforums.cisco.com/docs/DOC-8093: The number of buffers placed into the common pool is equal to the total number of buffers less the total reserved buffers)

Q2 also reserves 12.5 MB, as do Q3 and Q4, so in the end 50 MB of physical memory is marked as "reserved" and no longer available to the common pool.

This means that 50 MB of memory are available in the common pool, allowing each queue to grow up to 12.5 MB (reserved) + 50 MB (common) = 62.5 MB.

In all the documentation I have read so far, I found that the above setting would allow each queue to grow up to 4 times (the "maximum" line in the above configuration = 400%) the "buffers" value, i.e. 4 x 25% = 100%.

So each queue should be able to grow up to 100 MB.

Is my calculation delivering a maximum of 62.5 MB wrong?
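The arithmetic in this question can be written out as a quick sanity check; a minimal Python sketch, assuming the 100 MB figure and the percentage-of-a-percentage reading discussed in this thread:

```python
# Sanity check of the 62.5 MB calculation, assuming 100 MB per ASIC,
# four queues at 25% each, 50% reserved, maximum = 400%.
total_mb = 100.0
alloc = total_mb * 25 / 100.0                    # 25 MB per queue
reserved_per_queue = alloc * 50 / 100.0          # 12.5 MB
common_pool = total_mb - 4 * reserved_per_queue  # 50 MB

# Physical ceiling: a queue's own reserve plus the whole common pool.
physical_ceiling = reserved_per_queue + common_pool  # 62.5 MB
# Logical ceiling: the "maximum" threshold, 400% of the allocation.
logical_ceiling = alloc * 400 / 100.0                # 100 MB

print(physical_ceiling, logical_ceiling)
```

Both numbers come out as described in this thread: 62.5 MB is the most the queue could physically obtain, while 100 MB is the logical cap the 400% setting expresses; the two readings differ only in which ceiling they describe.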

My next question concerns Q2:

A T1 value of 200% means that queue 2 will drop packets qualified Q2T1 when Q2 uses more than 200% (T1) x 25% (buffers) = 50 MB of the available physical memory.

A T2 value of 150% means that queue 2 will drop packets qualified Q2T2 when Q2 uses more than 150% (T2) x 25% (buffers) = 37.5 MB of the available physical memory.

As there is an implicit Threshold 3, all packets will be dropped when T3 is reached, i.e. when the size of Q2 = 25 MB.

Does it make sense to allow T2 and T3 above 100%?

Finally, I would be very interested if someone could tell me the size (in bytes), the maximum size (in bytes) and the threshold values (in bytes: at what size of Qx will a packet qualified QxTy be dropped?) for each queue, assuming 100 MB of available physical memory for the following configuration:

buffers   :      25      25      25      25
threshold1:    1111     200     100     100
threshold2:    2222     200     100     100
reserved  :      98      50      50      50
maximum   :    3000     400     400     400
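Lacking an authoritative byte-level answer, here is a hedged Python sketch of what the requested table would look like under the "percentage of a percentage" reading (all MB figures assume the hypothetical 100 MB of ASIC memory and are illustrative, not measured values):

```python
# Hypothetical 100 MB of ASIC buffer memory; all values illustrative.
total = 100.0  # MB

buffers  = [25, 25, 25, 25]      # % of total per queue
t1       = [1111, 200, 100, 100]
t2       = [2222, 200, 100, 100]
reserved = [98, 50, 50, 50]
maximum  = [3000, 400, 400, 400]

alloc = [total * b / 100.0 for b in buffers]
# Common pool = whatever is not reserved by any queue.
common = total - sum(a * r / 100.0 for a, r in zip(alloc, reserved))

for q in range(4):
    logical_max = alloc[q] * maximum[q] / 100.0
    # Physically, a queue can hold at most its own reserve plus the
    # entire common pool (if no other queue is borrowing).
    physical = min(logical_max, alloc[q] * reserved[q] / 100.0 + common)
    print(f"Q{q+1}: T1={alloc[q] * t1[q] / 100.0:.2f} MB  "
          f"T2={alloc[q] * t2[q] / 100.0:.2f} MB  "
          f"logical max={logical_max:.2f} MB  "
          f"physical ceiling={physical:.2f} MB")
```

Under this reading, Q1's T1/T2 values (277.75 and 555.50 MB) far exceed its 62.5 MB physical ceiling, which is why such settings effectively mean "never drop early".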


Posting

Assuming this configuration:

buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     150     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400

Q1, Q2, Q3 and Q4 each reserve 50% of 25% of the 100 MB.

This means that 12.5 MB of memory are reserved for Q1.

In my understanding, as this memory is "reserved", it will not be available to other queues.

(https://supportforums.cisco.com/docs/DOC-8093: The number of buffers placed into the common pool is equal to the total number of buffers less the total reserved buffers)

Agreed - that's my understanding too.

Q2 also reserves 12.5 MB, as do Q3 and Q4, so in the end 50 MB of physical memory is marked as "reserved" and no longer available to the common pool.

This means that 50 MB of memory are available in the common pool, allowing each queue to grow up to 12.5 MB (reserved) + 50 MB (common) = 62.5 MB.

In all the documentation I have read so far, I found that the above setting would allow each queue to grow up to 4 times (the "maximum" line in the above configuration = 400%) the "buffers" value, i.e. 4 x 25% = 100%.

So each queue should be able to grow up to 100 MB.

Is my calculation delivering a maximum of 62.5 MB wrong?

Yes, and yes (assuming no other queues have used any common), and yes (logically and if not T1 or T2 marked), and yes (physically).

My next question concerns Q2:

A T1 value of 200% means that queue 2 will drop packets qualified Q2T1 when Q2 uses more than 200% (T1) x 25% (buffers) = 50 MB of the available physical memory.

A T2 value of 150% means that queue 2 will drop packets qualified Q2T2 when Q2 uses more than 150% (T2) x 25% (buffers) = 37.5 MB of the available physical memory.

As there is an implicit Threshold 3, all packets will be dropped when T3 is reached, i.e. when the size of Q2 = 25 MB.

Does it make sense to allow T2 and T3 above 100%?

Yes, yes, no (T3 default is 400%) and maybe (depends what you're trying to accomplish - also usually you would drop T1 before T2, not the reverse).

Also keep in mind that thresholds are logical values; there may be insufficient buffers to actually reach a threshold.

Finally, I would be very interested if someone could tell me the size (in bytes), the maximum size (in bytes) and the threshold values (in bytes: at what size of Qx will a packet qualified QxTy be dropped?) for each queue, assuming 100 MB of available physical memory for the following configuration:

buffers   :      25      25      25      25
threshold1:    1111     200     100     100
threshold2:    2222     200     100     100
reserved  :      98      50      50      50
maximum   :    3000     400     400     400

You can calculate the logical limits as you've already done.  They're a percentage of a percentage.

BTW, I've only found one reference for actual buffer resources for 3750s, and it's pretty slim (NB: can't find the reference at the moment).

I must be completely stupid, but I still cannot understand how, with the given configuration where 50% of the available memory is reserved, and therefore inaccessible to the common pool, a queue can grow up to a maximum of 100% of the physical memory.

Growing up to 100% of the memory would require access to memory marked as "reserved".

At maximum, one "color" can grow up to "color" + "blue" = 12.5 + 50 = 62.5%.

Regarding T3 :

http://www.cisco.com/en/US/docs/switchs/lan/catalyst3750/software/release/12.2_50_se/command/reference/cli1.html

While buffer ranges allow individual queues in the queue-set to use more of the common pool when available, the maximum number of packets for each queue is still internally limited to 400 percent, or 4 times the allocated number of buffers. One packet can use 1 or more buffers.

Should T3 be understood as 400% of the value found on the "buffers" line, or as the value configured on the "maximum" line?


Posting

I must be completely stupid . . . 

Laugh - I doubt it - maybe only partially stupid

. . . but I still cannot understand how, with the given configuration where 50% of the available memory is reserved, and therefore inaccessible to the common pool, a queue can grow up to a maximum of 100% of the physical memory.

Growing up to 100% of the memory would require access to memory marked as "reserved".

Agreed - i.e. one queue cannot obtain 100% of all physical memory, when any is reserved, but one queue might obtain all available (common) memory.

At maximum, one "color" can grow up to "color" + "blue" = 12.5 + 50 = 62.5%.

Also agreed, if default settings are used.  But if you change the buffer percentages and/or reserved values, one queue can obtain more than 62.5% of the total buffer space.

Regarding T3 :

http://www.cisco.com/en/US/docs/switchs/lan/catalyst3750/software/release/12.2_50_se/command/reference/cli1.html

While buffer ranges allow individual queues in the queue-set to use more of the common pool when available, the maximum number of packets for each queue is still internally limited to 400 percent, or 4 times the allocated number of buffers. One packet can use 1 or more buffers.

Should T3 be understood as 400% of the value found on the "buffers" line, or as the value configured on the "maximum" line?

My understanding is T3 is a percentage against buffers, like T1 and T2.

BTW, later IOS versions allow a setting > 400.

e.g.

3560X-12.2(55)SE7(config)#mls qos queue-set output 1 threshold ?

  <1-4>  enter queue id in this queue set

3560X-12.2(55)SE7(config)#mls qos queue-set output 1 threshold 1 ?

  <1-3200>  enter drop threshold1 1-3200

3560X-12.2(55)SE7(config)#mls qos queue-set output 1 threshold 1 1 ?

  <1-3200>  enter drop threshold2 1-3200

3560X-12.2(55)SE7(config)#mls qos queue-set output 1 threshold 1 1 2 ?

  <1-100>  enter reserved threshold 1-100

3560X-12.2(55)SE7(config)#mls qos queue-set output 1 threshold 1 1 2 1 ?

  <1-3200>  enter maximum threshold 1-3200

I think we can come to an agreement about how big "partially" is :-)...

OK, one queue can, when so configured, grow up to 100%. That's fully clear to me. But not with the default settings.

I often find explanations like this for the default settings:

http://www.cisco.com/en/US/products/hw/switches/ps5023/products_tech_note09186a0080883f9e.shtml :

"Two queue sets are configured and queue set 1 is assigned to all the ports by default. Each queue is allocated 25 percent of the total buffer space. Each queue is reserved 50 percent of allocated buffer space which is 12.5 percent of the total buffer space. The sum of all the reserved buffers represents the reserved pool, and the remaining buffers are part of the common pool. The default configuration sets 400 percent as the maximum memory that this queue can have before packets are dropped."

Concerning T3, I share your understanding in the sense that I think T3 = the value on the "maximum" line, given consistent settings of the reserved and maximum parameters.

T3 is, in my view, a logically full queue (a queue that could grow up to "maximum" by borrowing buffers from the common pool once its size exceeds "reserved").

Finally, all my questions stemmed from the fact that I expected Cisco IOS to check the user-supplied parameters for consistency when configuring the output queue settings, and I guess that is not the case. Cisco IOS assumes that the user understands what he does.

I mean, once the output queues have been configured this way:

buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50

the value of "maximum" could be calculated by the IOS, rather than allowing:

maximum   :     400     400     400     400

While writing that, I realise that the default configuration is as follows:

wiar02#sho mls qos queue-set 1

Queueset: 1

Queue     :       1       2       3       4

----------------------------------------------

buffers   :      25      25      25      25

threshold1:     100     200     100     100

threshold2:     100     200     100     100

reserved  :      50      50      50      50

maximum   :     400     400     400     400

and I cannot imagine a "wrong" default configuration, in the sense that maximum should not be 400% but 62.5%.

Again submerged by a horrible doubt... perhaps "completely" applies after all...


Posting

Yes, perhaps Cisco's T3 default is a bit misleading, as it's not obtainable with the other defaults. However, even the 100 or 200 T1/T2 settings might not be obtainable in actual usage. For example, if one queue has indeed grabbed all the common buffers, then when another attempts to go beyond its reserved allocation, it will not obtain a buffer. I.e. only reserved is guaranteed, so by this measure T1/T2/T3 are all logical limits, none of which might be actually obtainable.

As buffers is a different setting from thresholds, suppose you set buffers to 1 1 1 97. With such a configuration, assuming other default settings and assuming the default 50% of common isn't otherwise being used, the first 3 queues can all obtain 400% of their buffer allocation (while still leaving free common).
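That 1/1/1/97 scenario can be checked with a few lines of Python (illustrative units, treating total buffer space as 100):

```python
# Illustrative check of the buffers 1 1 1 97 example; units are % of
# the ASIC's total buffer space, treated here as 100.
total = 100.0
buffers  = [1, 1, 1, 97]     # allocation per queue
reserved = [50, 50, 50, 50]  # default: each reserves 50% of its allocation

alloc = [total * b / 100.0 for b in buffers]
common = total - sum(a * r / 100.0 for a, r in zip(alloc, reserved))

# Each of the three small queues growing to 400% of its allocation
# needs to borrow everything beyond its own 50% reserve from common.
borrow = sum(a * 4.0 - a * 0.5 for a in alloc[:3])

print(common, borrow, common - borrow)
```

The three small queues together borrow only 10.5 of the 50 common units, leaving 39.5 free, which matches the "while still leaving free common" remark.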

What this all boils down to: defaults are just defaults, and they might not be very suitable for your usage. I.e. to get the most out of this QoS architecture, you'll want to understand how it works, and you might need to tune it to meet your QoS policy goals.

PS:

BTW, have you looked at what auto QoS (v4) does with the defaults?

I.e.:

mls qos queue-set output 1 threshold 1 100 100 50 200

mls qos queue-set output 1 threshold 2 125 125 100 400

mls qos queue-set output 1 threshold 3 100 100 100 400

mls qos queue-set output 1 threshold 4 60 150 50 200

mls qos queue-set output 1 buffers 15 25 40 20

Switch#sh mls qos queue-set

Queueset: 1

Queue     :       1       2       3       4

----------------------------------------------

buffers   :      15      25      40      20

threshold1:     100     125     100      60

threshold2:     100     125     100     150

reserved  :      50     100     100      50

maximum   :     200     400     400     200

Queueset: 2

Queue     :       1       2       3       4

----------------------------------------------

buffers   :      25      25      25      25

threshold1:     100     200     100     100

threshold2:     100     200     100     100

reserved  :      50      50      50      50

maximum   :     400     400     400     400

Hi Joseph,

For example, if one queue has indeed grabbed all the common buffers, then when another attempts to go beyond its reserved allocation, it will not obtain a buffer. I.e. only reserved is guaranteed, so by this measure T1/T2/T3 are all logical limits, none of which might be actually obtainable => That was understood and I agree.

With such a configuration, assuming other default settings and assuming the default 50% of common isn't otherwise being used, the first 3 queues can all obtain 400% of their buffer allocation (while still leaving free common).

=> That was understood and I agree.

What this all boils down to: defaults are just defaults, and they might not be very suitable for your usage. => and the parameter explanations seem wrong, or at least confusing.

you'll want to understand how it works and you might need to tune it to meet your QoS policy goals. =>

I wanted to (re)start thinking about QoS with a pragmatic approach. Before tuning some parameters, I like to understand what they act on and what their impact is.

I wrote my very first (and, for me, only) document about QoS in 2003, but as at that time I could not find two Cisco documents or experts giving the same explanations of the parameters, I gave up. In 2005 I changed jobs and came to a company that had spent a lot of money to have an external "Cisco Specialist" elaborate a QoS concept and configurations for its network and switches. The goal was support for VoIP. Everything was fine, marking, classifying,... except that the VoIP physical ports were not configured with a "priority-queue out" statement, so the requirements were not really fulfilled. And it worked for 8 years...

The problem I face now is about Video over IP: 65 cameras sending video streams to a single video server over a 1 Gb/s port, which experienced packet losses.

Using four 1 Gb/s ports improved the situation, and upgrading the PC's NIC driver also helped to reduce packet losses, so I envisaged "playing" with the QoS settings to bring these packet losses down to zero.

I think that swapping the NIC in the PC for a 10G NIC will be a cheaper and more "long-term oriented" solution than playing with the QoS parameters, which "may" change with the next IOS update I will have to do on the switches.

Finally

BTW, have you looked at what auto QoS (v4) does with the defaults? => not yet, but I find it interesting that Q2 and Q3 reserve 100% => these 2 queues "consume" 65% of the memory available for the output buffers, leaving not enough buffers for Q4 (I ignore Q1 for the moment) to reach 2 x 20%.

Once again, I am really not convinced about QoS. I mean, when you are not an expensive consultant...
