bandwidth limiting with qos issue

Alexander Demin
Level 1

Hello.

I have a Cisco Catalyst 2960G (IOS 12.2(50)SE3, LAN Base image). I'm trying to limit the available bandwidth on a port to 250 Mbps, but the actual speed does not rise above 200 Mbps. Any ideas about the queue-set settings?

sw1#show running-config interface gigabitEthernet 0/1

interface GigabitEthernet0/1
 switchport access vlan 3
 switchport mode access
 load-interval 60
 srr-queue bandwidth shape 0 0 0 0
 srr-queue bandwidth limit 25
 mls qos trust cos
 flowcontrol receive desired
 no cdp enable
 spanning-tree portfast
 spanning-tree bpdufilter enable
end

sw1#sh mls qos interface gigabitEthernet 0/1
GigabitEthernet0/1
trust state: trust cos
trust mode: trust cos
trust enabled flag: ena
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none
qos mode: port-based

sw1#sh mls qos interface gigabitEthernet 0/1 queueing
GigabitEthernet0/1
Egress Priority Queue : disabled
Shaped queue weights (absolute) :  0 0 0 0
Shared queue weights  :  25 25 25 25
The port bandwidth limit : 25  (Operational Bandwidth:27.28)
The port is mapped to qset : 1

sw1#sh mls qos interface gigabitEthernet 0/1 buffers
GigabitEthernet0/1
The port is mapped to qset : 1
The allocations between the queues are : 25 25 25 25

sw1#sh mls qos queue-set 1
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100    3200     100     100
threshold2:     100    3200     100     100
reserved  :      50     100      50      50
maximum   :     400    3200     400     400

sw1#sh mls qos interface gigabitEthernet 0/1 statistics | b cos: incoming
  cos: incoming
-------------------------------
  0 -  4 :  2051465609            0            0            0            0
  5 -  7 :           0            0            0
  cos: outgoing
-------------------------------
  0 -  4 :  1354630319            1            0            0            0
  5 -  7 :           0            0            0
  output queues enqueued:
 queue: threshold1 threshold2 threshold3
-----------------------------------------
 queue 0:           2           0           0
 queue 1:  1354636197       63323     2072422
 queue 2:           0           0           0
 queue 3:           0           0          51
  output queues dropped:
 queue: threshold1 threshold2 threshold3
-----------------------------------------
 queue 0:            0            0            0
 queue 1:    200569558            9          668
 queue 2:            0            0            0
 queue 3:            0            0            0
Policer: Inprofile:            0 OutofProfile:            0


Regards, Alex.

DAO21-RIPE
5 Replies

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

See "These values are not exact because the hardware adjusts the line rate in increments of six." from:

http://www.cisco.com/en/US/docs/switches/lan/catalyst2960/software/release/12.2_50_se/configuration/guide/swqos.html#wp1253412

JosephDoherty wrote:


See "These values are not exact because the hardware adjusts the line rate in increments of six." from:

http://www.cisco.com/en/US/docs/switches/lan/catalyst2960/software/release/12.2_50_se/configuration/guide/swqos.html#wp1253412

It's true, but:

sw1#sh mls qos interface gigabitEthernet 0/1 queueing
GigabitEthernet0/1
Egress Priority Queue : disabled
Shaped queue weights (absolute) :  0 0 0 0
Shared queue weights  :  25 25 25 25
The port bandwidth limit : 25  (Operational Bandwidth:27.28)
The port is mapped to qset : 1

So the port bandwidth is actually not limited to 250 Mbps but is even higher, about 270 Mbps (27.28% of the 1 Gbps line rate).

Meanwhile, the connected client doesn't get more than 200 Mbps.

I suppose this could be an issue with the buffer and queue weights, as almost all of the client's traffic goes to the second queue.
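
(Note on queue numbering, visible in the outputs above: the statistics output counts queues 0-3, while the queue-set configuration counts them 1-4, so the heavily used "queue 1" in the statistics is the same queue that appears as queue 2 in the queue-set settings.)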

DAO21-RIPE


Excellent! I missed that the stats actually show what the bandwidth is really limited to.

I also agree with your supposition that the problem might be related to buffer allocations and queue weights.

One of the stats that suggests buffering might be a problem is:

  output queues enqueued:
 queue: threshold1 threshold2 threshold3
-----------------------------------------
 queue 0:           2           0           0
 queue 1:  1354636197       63323     2072422
 queue 2:           0           0           0
 queue 3:           0           0          51

As you've also noted, most of the traffic appears to be enqueued in queue 1.

Perhaps more important, look at the drops for that queue's threshold1.

  output queues dropped:
 queue: threshold1 threshold2 threshold3
-----------------------------------------
 queue 0:            0            0            0
 queue 1:    200569558            9          668
 queue 2:            0            0            0
 queue 3:            0            0            0

If this is TCP traffic, too high a drop rate will keep the traffic, on average, from using all the available bandwidth (much as is often seen with rate-limiters).
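
As a rough sense of scale from your counters: queue 1's threshold1 shows about 200 million drops against about 1.35 billion packets enqueued, which works out to somewhere around a 13-15% drop rate on that threshold, depending on how you count it. For TCP that is a very strong signal to keep backing off.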

Since you've noted bandwidth peaks at about 200 Mbps, does it do so routinely?  If so, the share of buffer allocations might be insufficient.  I see your threshold settings have already been adjusted much larger than the defaults.  I see you've also doubled the reserved buffer allocation for this queue, but you might try further increasing both it and the proportion of buffers allocated to this queue, perhaps to as much as 40 to 50%.

Again, if this is mainly TCP traffic that continues to attempt to push beyond the available bandwidth, it will normally average less than the available bandwidth, since when drops are detected it falls back to either slow start or congestion avoidance.  This effect is often more noticeable with fewer flows, as more flows in aggregate often make better use of all the available bandwidth.

If you're shaping to conform to a provider's rate-limiting, you might try to find out what intervals they measure against and whether they allow any bursting.  Knowing this, you might "push" your bandwidth allowance up a little to increase the average utilization.  Do note, when doing this it's difficult not to bump into the provider's rate-limiting settings, both because of the inaccuracy of this type of shaping and because, unlike a router's shaper, you're unable to define Tc, Bc, or Be.
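
For illustration, a minimal sketch of that kind of nudge, assuming a 1 Gbps port like the one above; the value 27 is purely an example, not a recommendation:

! Raise the configured egress limit slightly (the command accepts 10-90, as a percentage of line rate).
interface GigabitEthernet0/1
 srr-queue bandwidth limit 27
end

! Then re-check the "Operational Bandwidth" value the hardware actually applies.
sw1#show mls qos interface gigabitEthernet 0/1 queueing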

PS:

Not knowing how your other switch ports are utilized, if this port is "special" and queue-set 2 is not being used, you might want to move this port to queue-set 2 so its custom settings don't impact other ports.


So..

Right now my queue-set 1 buffer memory allocation is 25 25 25 25.

You mean I can try increasing the allocation for queue 2 to 40-50%, so the overall allocation would look like 16 52 16 16?

WBR, Alex.

DAO21-RIPE


Yes, that's the idea.  You'll need to be careful, though: taking too much buffer allocation from other queues might cause them to have excessive drops.  I.e., make a small change and monitor.  For example, you might first try 20 40 20 20 or 25 40 20 15.

Since a queue set impacts all ports using it, that's also why I suggest moving this port to queue-set 2, so other ports aren't changed too.  I believe Cisco's thinking with queue sets was that one would be used for "normal" ports and the other for "special" ports, e.g. host ports vs. uplink ports.

Increasing buffer allocations for the queue with the high drops probably won't help much if the congestion is sustained.
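
If you do want to try it, a rough sketch of the two suggestions combined, assuming queue-set 2 is currently unused and reusing the thresholds you already raised for queue 2; the buffer split 20 40 20 20 is just a starting point to experiment with, not a recommendation:

! Give queue 2 of (currently unused) queue-set 2 a larger share of the output buffers,
! and carry over the existing queue 2 settings (threshold1 threshold2 reserved maximum).
mls qos queue-set output 2 buffers 20 40 20 20
mls qos queue-set output 2 threshold 2 3200 3200 100 3200

! Move only this port onto queue-set 2 so other ports keep queue-set 1 untouched.
interface GigabitEthernet0/1
 queue-set 2
end

sw1#show mls qos queue-set 2
sw1#show mls qos interface gigabitEthernet 0/1 statistics

Then watch the queue 1 / threshold1 drop counter in the statistics output over time to see whether the larger allocation actually reduces drops.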
