10-22-2019 02:06 AM
I am trying to limit the Download speed (Egress) of ports on my C3750X-24S-E
For those not familiar, this 3750 is an SFP port only switch.
I have Fibre SFP modules connected into the ports and can see they are fixed 1 Gig speed capability.
Now, the requirement is for the ports to have a mix of download and upload speeds, throttling each one.
For example: Download 5Mb, 10Mb, 20Mb, 30Mb, 50Mb, 100Mb and 150Mb.
Upload speeds the same, but applied separately.
Now, I've activated mls qos.
I've added the following command to each interface: srr-queue bandwidth limit 10
This reduces the download bandwidth to 100Mb, as it's 10% of 1000Mb.
So far so good, now comes the problematic bit.
I created the following policy for upload and download speeds:
policy-map 5Mb
 description Upload 5Mbps
 class rate-limit
  police 5242500 200000 exceed-action drop
!
policy-map 100Mb
 description Upload 100Mbps
 class rate-limit
  police 104857500 200000 exceed-action drop
!
policy-map 10Mb
 description Upload 10Mbps
 class rate-limit
  police 10240000 200000 exceed-action drop
!
policy-map 20Mb
 description Upload 20Mbps
 class rate-limit
  police 20480000 200000 exceed-action drop
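As a side note, the police rates above use mixed Mb-to-bps multipliers: the 10Mb and 20Mb maps use 1,024,000 bps per Mb (10240000, 20480000), while the 5Mb and 100Mb maps use a slightly different factor. A quick sketch of the conversion, assuming the 1,024,000 convention; the helper name is my own:

```python
BURST_BYTES = 200_000  # burst size used in all the policy-maps above

def police_rate_bps(mbps, bps_per_mb=1_024_000):
    """Target rate in Mbps -> the bps value for 'police <bps> <burst> exceed-action drop'."""
    return mbps * bps_per_mb

for mbps in (5, 10, 20, 100):
    print(f"police {police_rate_bps(mbps)} {BURST_BYTES} exceed-action drop")
```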
I then started to apply it to the interfaces, eg:
interface GigabitEthernet1/0/4
 description Client A
 switchport access vlan 200
 switchport mode access
 srr-queue bandwidth limit 10
 service-policy input 20Mb
!
This is great for the ingress port (Upload), but the following command wasn't supported for the egress (Download):
service-policy output 20Mb
With this little setback I considered returning to the srr-queue bandwidth command to throttle download speeds, and therein lies my issue.
Using srr-queue bandwidth shape 2 0 0 0, it was unclear to me what this would do, nor do I understand the "weights".
In theory, as I understand it, the above command limits the download speed to half of 25Mb. Is that correct?
Can anyone provide the weights for the above policy speeds, please? I'm struggling.
Many thanks
10-22-2019 03:16 AM
Hello,
Please, find my comments below:
"Using srr-queue bandwidth shape 2 0 0 0, it was unclear to me what this would do, nor do I understand the "weights".
In theory, as I understand it, the above command limits the download speed to half of 25Mb. Is that correct?"
There are 4 hardware egress queues per port on the 3750. The 4 numbers in the command above assign weights, or "ratios", for bandwidth allocation to each of these queues.
Weight or ratio means that the exact 4 numbers themselves are not very important; what matters is their relation to each other.
For convenience, you can think of these numbers as percentages of 100% of the interface bandwidth. So, in total the 4 numbers should give 100, and the exact number for each queue indicates the percentage of bandwidth for that queue.
But in general, you can calculate the weight as follows:
sum the 4 numbers, then divide each number by the sum and multiply by 100% to get the percentage for each queue.
For example:
srr-queue bandwidth shape 25 25 25 25 - will guarantee and shape each of the 4 queues at an equal amount of bandwidth (1/4, or 25%, in this case). This command has the same effect with any equal weights, for example:
srr-queue bandwidth shape 10 10 10 10 or srr-queue bandwidth shape 2 2 2 2
Calculation: 10+10+10+10=40. For each queue: 10/40*100%=25%
2+2+2+2 = 8. For each queue: 2/8*100%=25%
The command below, for example, allocates 50% to the 1st queue, 30% to the 2nd, and 10% each to the 3rd and 4th:
srr-queue bandwidth shape 50 30 10 10
And so on.
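To sanity-check the "straight" weight arithmetic above (note that a later post in this thread corrects how shaped weights specifically are interpreted), a quick sketch; the helper name is my own:

```python
def share_percent(weights):
    """Straight-weight arithmetic: each queue's share is its weight
    divided by the sum of all four weights, times 100%."""
    total = sum(weights)
    return [w / total * 100 for w in weights]

print(share_percent([25, 25, 25, 25]))  # equal weights -> 25% each
print(share_percent([2, 2, 2, 2]))      # same result: 25% each
print(share_percent([50, 30, 10, 10]))  # 50%, 30%, 10%, 10%
```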
srr-queue bandwidth shape 2 0 0 0 - In this case this configuration provides all the available bandwidth to traffic from the 1st queue.
2+0+0+0 = 2. For 1st queue: 2/2*100%=100%
The bandwidth limit config limits the total interface bandwidth to a defined level:
Bandwidth Limit Configuration:
In order to limit maximum output on a port, configure the srr-queue bandwidth limit interface configuration command. If you configure this command to 80 percent, the port is idle 20 percent of the time. The line rate drops to 80 percent of the connected speed. These values are not exact because the hardware adjusts the line rate in increments of six. This command is not available on a 10-Gigabit Ethernet interface.
10-22-2019 06:19 AM - edited 10-22-2019 06:29 AM
Thank you.....I certainly understand it a little better now though I'm not 100% out of the woods yet 8-)
A question then:
As I said earlier, I've applied the mls qos command globally.
I'm guessing I don't need to set up the queues and I can go straight into configuring the ports with the srr-queue bandwidth shape command occupying all 4 "queues" with a weight.
Is that correct?
If that is then will srr-queue bandwidth shape 50 30 10 10 provide me with 25 Mb of the 100 Mb port?
(50% of 25 Mb =12.5 Mb)
(30% of 25 Mb = 7.5 Mb)
(10% of 25 Mb = 2.5 Mb)
(10% of 25 Mb = 2.5 Mb)
10-23-2019 03:10 AM
Hello,
Sorry, I just noticed that my previous explanation was not completely correct.
It was correct in general from a weights-and-ratios perspective, but not correct for the particular case of shaping weights in shaped round-robin mode.
So, for weights in shaped round-robin mode, the switch uses the inverse ratio (1/configured value) to calculate the allocated bandwidth percentage. And "0" means that shaped round-robin mode is disabled for that particular queue, and shared round-robin is used instead.
In shared round-robin mode, and in many other cases, such as buffer allocation, "straight" weight values are used with the logic I described previously.
So, command:
"srr-queue bandwidth shape 2 0 0 0" - assigns 50% of the available bandwidth (1/2) to queue 1, and disables shaped round-robin for queues 2-4.
In this case, queues 2-4 will be served in shared round-robin, based on "straight" weights. The default shared weights are 25 25 25 25. Values for queues in shaped mode are ignored, so the remaining 50% will be distributed equally among queues 2-4 in shared mode.
If we would use command:
"srr-queue bandwidth share 2 0 0 0", then as I described, all the 100% would be allocated to queue 1.
The difference between shaped and shared mode is that in shaped mode a queue is limited to its allocated bandwidth (it can't consume more than allocated), while in shared mode a queue can consume more than its allocation if bandwidth is available.
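Pulling the corrected rules together, a small calculator sketch (the helper is my own, under these assumptions: shaped queues are capped at port speed / weight, a 0 weight means the queue falls back to shared mode, unshaped queues split the leftover by their shared weights, and the priority queue is ignored):

```python
def egress_bandwidth(port_mbps, shaped, shared=(25, 25, 25, 25)):
    """Approximate per-queue egress bandwidth for the 4 queues, in Mbps."""
    result = [0.0] * 4
    for i, w in enumerate(shaped):          # shaped mode: inverse ratio, 0 = off
        if w:
            result[i] = port_mbps / w
    leftover = port_mbps - sum(result)
    unshaped = [i for i, w in enumerate(shaped) if w == 0]
    total = sum(shared[i] for i in unshaped)
    for i in unshaped:                      # shared mode: straight-weight split
        result[i] = leftover * shared[i] / total
    return result

# "srr-queue bandwidth shape 2 0 0 0" on a 100 Mbps port:
# q1 capped at 50 Mbps, q2-4 share the remaining 50 Mbps equally
print(egress_bandwidth(100, (2, 0, 0, 0)))
```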
Shaped and shared weights values can be verified with "show mls qos interface <X/Y> queueing" command.
Please, note that weight value is ignored for priority queue, if priority queue is enabled.
Regarding the queues
Once you have enabled mls qos, those 4 queues are activated automatically, and traffic is distributed among them based on the default "cos-output-q" and "dscp-output-q" maps. Which queue particular traffic goes to depends on the CoS or DSCP value in the packet. You can check the default mapping with the following commands:
show mls qos maps dscp-output-q
show mls qos maps cos-output-q
To see how much traffic has already been enqueued to a particular queue, you can use the following command:
show mls qos interface <X/Y> statistics
To see the operational bandwidth from the QoS perspective, and to check whether the priority queue is enabled, you can use the following command:
show mls qos interface <X/Y> queueing
-----
Regarding your initial requirements: as I mentioned, traffic is distributed among the queues by default. Sensitive traffic (like control and voice) is in a separate queue. If you don't need to separate this traffic, you can map all the traffic to a single queue (the lower queue number should be used), shape it to the required value, and shape the remaining queues to some very low values.
For example,
"srr-queue bandwidth shape 4 200 200 200" - shapes queue 1 to 25% (1/4*100% = 25%), which is 25 Mbps on a 100 Mbps interface. And queues 2-4 are shaped at 0.5% (1/200*100% = 0.5%), which is 0.5 Mbps in our case.
10-25-2019 03:49 AM
Well...certainly took a while for me to get that around my head, but I really appreciate the explanations.
So, as far as I understand it based on your helpful submissions, using the following commands on the 3750X for the egress side of things, with a policy-map already assigned to throttle ingress speeds:
mls qos
srr-queue bandwidth limit 10 (will limit the 1000Mb fixed-speed fibre SFP to 10% of 1000Mb, which is 100Mb)
srr-queue bandwidth shape 8 0 0 0 (will give me ~12.5Mb: 1/8*100% = 12.5% of the 100Mb port)
Seeing as queues 2-4 inclusive will be ignored, is that the result I've been aiming for?
I would love a definitive answer, as I'm hoping to understand this command and its behaviour at a basic level before I start trying to understand the more advanced aspects of bringing the other queues into play.
Many thanks again.
10-25-2019 07:54 AM
Hello,
Please, find my comments below:
"On the 3750X for the egress side of things with a policy-map already assigned to throttle ingress speeds,"
- Yes, now we are talking only about the egress direction.
mls qos
srr-queue bandwidth limit 10 ( Will limit the 1000Mb SFP fixed fibre to 10% on 1000Mb which is 100Mb).
- Yes, at least approximately, because the documentation notes that the limitation is not exact, most likely due to the hardware architecture.
srr-queue bandwidth shape 8 0 0 0 ( Will give me @12.5Mb (1/8*100% =12.5% of 100Mb port)
- Yes, but it will give (shape at) 12.5Mb not for the whole port, but for the 1st queue only.
Seeing as the queues 2-4 inc will be ignored, is that the take which I've been trying for?
-Queues 2-4 will not be completely ignored. The "0" values in the previous command mean that shaped mode will not be applied to queues 2-4, but these queues will still work in shared mode. In shared mode, by default (even if you don't configure anything else), they will have equal amounts of bandwidth allocated. You will see something like this:
show mls qos interface <#/#> queueing
...
Shaped queue weights (absolute) : 8 0 0 0 <<<< this is what you will configure
Shared queue weights : 25 25 25 25 <<<< these are the default values. The value for the 1st queue will be ignored, because it is non-zero in the "Shaped" weights. The defaults for q2-4 are equal (25), so they will have equal bandwidth.
So q2-4 will (theoretically) be able to consume all the bandwidth remaining from shaped queue 1, up to the bandwidth limit.
Queues 2-4 will have: 100Mbps (limit) - 12.5Mbps (for q1) = 87.5 Mbps in total / 3 queues = 29.16 Mbps per queue.
How much bandwidth they will consume in fact depends on the nature and amount of traffic in these queues.
---
I would suggest you to try following:
1. It's not necessary to use the "srr-queue bandwidth limit <>" command. It applies only an approximate limitation, and in any case we cannot limit bandwidth to all the desired levels with it, because 10% is the lowest value.
Instead, you can use just the "srr-queue bandwidth shape <><><><>" command with increased weight values.
For example, to limit the 1st queue to 12.5Mbps on a 1G port, the command would be:
"srr-queue bandwidth shape 80 0 0 0" (1/80*100% = 1.25% of the 1000Mb port = 12.5 Mbps)
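The inverse calculation (what weight gives a desired shaped rate) follows directly, since shaped rate = port speed / weight; a tiny sketch, helper name my own:

```python
def shape_weight(port_mbps, target_mbps):
    """Weight to configure so a queue is shaped to approximately target_mbps."""
    return round(port_mbps / target_mbps)

print(shape_weight(1000, 12.5))  # 80, as in "srr-queue bandwidth shape 80 0 0 0"
print(shape_weight(1000, 5))     # 200 -> q1 shaped to 5 Mbps on a 1G port
```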
2. As I mentioned, this command sets a limit not per port, but per queue. By default, different traffic is already mapped to different queues. For example, below you can see that by default, traffic with CoS 0 and 1 is mapped to q2, CoS 2-3 to q3, CoS 4 and 6-7 to q4, and CoS 5 to q1. You can check the same table for DSCP.
#show mls qos maps cos-output-q
Cos-outputq-threshold map:
cos: 0 1 2 3 4 5 6 7
------------------------------------
queue-threshold: 2-1 2-1 3-1 3-1 4-1 1-1 4-1 4-1
If such traffic separation is not needed, and you want bandwidth limitation for all the traffic per port, you would first need to map all the traffic to a single queue (q1, for example), then shape this queue as described above, and shape the remaining queues to some low values, or just ignore them. CoS-to-queue and DSCP-to-queue mapping is done with the following commands:
mls qos srr-queue output cos-map queue <q#> threshold <th#> <CoS values>; for example, to map all CoS values to q1:
mls qos srr-queue output cos-map queue 1 threshold 3 0 1 2 3 4 5 6 7
DSCP mapping:
mls qos srr-queue output dscp-map queue <q#> threshold <th#> <DSCP values, up to 8 per command>
Please note that these commands are global, so they apply to all traffic.
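For reference, the default cos-output-q table shown earlier, and the effect of the all-to-q1 remap, expressed as plain mappings (a sketch; the dict names are my own):

```python
# Default CoS -> (queue, threshold) pairs, from the
# "show mls qos maps cos-output-q" output shown earlier.
DEFAULT_COS_OUTPUT_Q = {
    0: (2, 1), 1: (2, 1),
    2: (3, 1), 3: (3, 1),
    4: (4, 1), 5: (1, 1),
    6: (4, 1), 7: (4, 1),
}

# After "mls qos srr-queue output cos-map queue 1 threshold 3 0 1 2 3 4 5 6 7"
# every CoS value lands in queue 1, threshold 3:
ALL_TO_Q1 = {cos: (1, 3) for cos in range(8)}
print(ALL_TO_Q1)
```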
This article is quite long, but it has a very good description of all the QoS details, with examples:
https://community.cisco.com/t5/networking-documents/egress-qos/ta-p/3122802#_Toc191205651
10-25-2019 08:08 AM - edited 10-25-2019 08:20 AM
So, if I used:
srr-queue bandwidth shape 8 200 200 200
This would approximately (if the command was applied to the port) limit the traffic on a ~100 Mbps port to a value between 10-15 Mbps? (12.5 + 0.5 + 0.5 + 0.5 = ~14 Mbps)
Is that correct?
10-28-2019 06:24 AM
Hello,
"So, if I used:
srr-queue bandwidth shape 8 200 200 200
This would approximately (if the command was applied to the port) limit the traffic on a ~100 Mbps port to a value between 10-15 Mbps? (12.5 + 0.5 + 0.5 + 0.5 = ~14 Mbps)"
-Yes, this should limit total traffic on the 100Mbps port to approximately 14 Mbps. And some traffic types (q2-4) within those 14Mbps will each be limited to 0.5 Mbps.
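The arithmetic behind that answer, written out (with all four queues shaped, the port total is just the sum of the per-queue caps):

```python
port_mbps = 100
weights = (8, 200, 200, 200)           # srr-queue bandwidth shape 8 200 200 200
per_queue = [port_mbps / w for w in weights]
print(per_queue)                       # [12.5, 0.5, 0.5, 0.5]
print(sum(per_queue))                  # 14.0 Mbps total
```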
10-28-2019 06:58 AM
Thanks a lot, I mean that.
It's all understood now; by grasping the basics of how it all works, I can start playing with the more advanced guff.
I often wonder why Cisco, or indeed any other vendor, can't re-jig their command syntax for these sorts of tasks to make them simpler and more plain-English, rather than sticking with advanced mechanics and maths 8-)
Still, thanks again and for your patience.
10-28-2019 07:09 AM
You are welcome, I wish you successful implementation :-)
Yes, I totally agree with you that a simpler approach would be better, and that's what Cisco implemented in more recent switch models with IOS-XE (like the 3850). These switches use the more user-friendly MQC QoS config for L2 QoS, both ingress and egress, with class-maps, policy-maps, and actions, which is much more intuitive.