Configuring QoS can be an extremely complex task depending on how granularly the user wants to control both in-policy and out-of-policy traffic: is bandwidth guaranteed, and if so, how is excess bandwidth handled? Many software forwarding platforms today use a two (2) parameter scheduler with "maximum" and "excess" parameters.
The ASR1000 series implements a more advanced three (3) parameter scheduler.
The three parameters are:
Min = "bandwidth" command
Max = "shape" command
Excess_weight = "bandwidth remaining" command
Available bandwidth above the guaranteed amount is usually referred to simply as excess, but labeling it "Excess_weight" makes the example below easier to explain.
This becomes important because the same configuration on, say, a 7200VXR series platform may not result in the same traffic handling policy on an ASR1000 series device.
Let's walk through the calculation for the ASR1000 implementation. The "Min" guarantees are serviced first, so class1 gets 10% based on its "bandwidth" command (the "bandwidth" command sets the Min). Similarly, class2 gets 20%, which leaves 70% of the overall bandwidth available and not guaranteed anywhere. In other words, we have 70% in the "excess pool".
This excess is shared among classes that are looking for bandwidth, based on their "Excess_weight". In the implementation today each class gets an equal share of the excess, which is how the respective throughput values are derived.
Total load: 100%
class1: min = 10
class2: min = 20
The excess is 70%, so with two classes each receives half of it, or 35 points. Therefore class1 would get 45% (10 min + 35 excess) and class2 would get 55% (20 min + 35 excess).
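The calculation above can be sketched in a few lines of Python (illustrative only; function and variable names are my own, not part of any Cisco API):

```python
def throughput(min_pcts, total=100):
    """Each class gets its Min guarantee plus an equal share of the excess.

    min_pcts: list of per-class Min guarantees, as percentages of the link.
    Assumes equal Excess_weight per class, as in the current implementation.
    """
    excess = total - sum(min_pcts)   # bandwidth not guaranteed anywhere
    share = excess / len(min_pcts)   # equal excess share per class
    return [m + share for m in min_pcts]

print(throughput([10, 20]))  # [45.0, 55.0] -> class1 gets 45%, class2 gets 55%
```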
Let's take the above and apply it to a network scenario with real values.
Assume we would like to have a subrate Gigabit Ethernet link running at 200 Mbps and guarantee the following:
class "voice" is guaranteed 10 Mbps of low latency traffic and is policed above that rate
class "data1" is guaranteed 10 Mbps
class "data2" is guaranteed 10 Mbps
class "data3" is guaranteed 80 Mbps
class "default" is guaranteed 10 Mbps
Here is the policy-map configuration
shape average 200000000
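Only the parent shaper line survives above; a policy-map matching the stated guarantees might look like the following sketch (policy-map and class names are assumptions, and note that "bandwidth" takes kbps while "police" and "shape average" take bps):

```
policy-map CHILD
 class voice
  priority
  police 10000000
 class data1
  bandwidth 10000
 class data2
  bandwidth 10000
 class data3
  bandwidth 80000
 class class-default
  bandwidth 10000
policy-map PARENT
 class class-default
  shape average 200000000
  service-policy CHILD
```

The explicit policer under "priority" is what makes the voice class strictly rate-limited to 10 Mbps with or without congestion, as described below.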
The result of this configuration would be the following throughput for each class if each class were sending at 200 Mbps. We use a 200 Mbps offered load for each class to represent how all the excess would be used if there were only data for a single class at any given time.
Under the full load for all classes the result would be:
class voice => gets 10 Mbps and is strictly policed above 10 Mbps with or without congestion
class data1 => gets 30 Mbps
class data2 => gets 30 Mbps
class data3 => gets 100 Mbps
class default => gets 30 Mbps
That gives a total of 190 Mbps allocated to the bandwidth classes and 10 Mbps to the priority class, for a total of 200 Mbps, which the parent shaper allows.
The bandwidth received by each class was calculated based on an equal excess weight for each class and an excess bandwidth of 80 Mbps (200 total - 10 priority - 110 Min). That means each bandwidth (non-priority) class gets 20 Mbps of the excess.
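Recomputing the scenario with the same method as the earlier percentage example (values in Mbps; equal excess weights assumed, names are my own):

```python
total = 200
priority = 10                                 # "voice": policed, gets no excess
mins = {"data1": 10, "data2": 10, "data3": 80, "default": 10}

excess = total - priority - sum(mins.values())   # 200 - 10 - 110 = 80 Mbps
share = excess / len(mins)                       # 20 Mbps of excess per class
result = {cls: m + share for cls, m in mins.items()}
print(result)  # {'data1': 30.0, 'data2': 30.0, 'data3': 100.0, 'default': 30.0}
```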
In the future, the allocation of excess bandwidth will be configurable per class, rather than all classes receiving an equal weight of excess bandwidth.
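The mechanism for this on IOS XE is the "bandwidth remaining ratio" command; a hypothetical child policy biasing the excess toward one class might look like this (class names and ratio values are purely illustrative):

```
policy-map CHILD-WEIGHTED
 class data1
  bandwidth remaining ratio 1
 class data3
  bandwidth remaining ratio 4
 class class-default
  bandwidth remaining ratio 1
```

With these ratios, data3 would receive four shares of the excess for every one share given to data1 or class-default, instead of the equal split described above.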