09-03-2019 09:51 AM
Hello experts,
I am starting to implement QoS on mGRE tunnels, and I have learned that QoS on an mGRE tunnel is slightly different from other interfaces because the tunnel is logical; in DMVPN this is the so-called per-tunnel QoS.
We can configure a different policy-map for each tunnel, and the same policy-map can also be shared.
Referring to page 14 of the document linked above (I copied the output below): endpoints 203.0.113.252 and 203.0.113.253 share the same policy, group1_parent. Will they share the same 3 Mbps of bandwidth?
I ask because in the output below the two endpoints seem to be accounted for separately.
Questions for the experts:
1) Will QoS shape 2 x 3 Mbps = 6 Mbps in total?
2) Where can I see the total bandwidth for a single policy, which should be just 3 Mbps in the example below?
Device# show policy-map multipoint

 Interface tunnel1 <--> 203.0.113.252
  Service-policy output: group1_parent
    Class-map: class-default (match-any)
      29 packets, 4988 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Queueing
      queue limit 750 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0
      shape (average) cir 3000000, bc 12000, be 12000
      target shape rate 3000000
      Service-policy : group1
        queue stats for all priority classes:
          queue limit 250 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0
        Class-map: group1_voice (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: access-group 100
          Priority: 1000 kbps, burst bytes 25000, b/w exceed drops: 0
        Class-map: group1_Routing (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: ip precedence 6
          Queueing
          queue limit 150 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0
          bandwidth 20% (600 kbps)
        Class-map: class-default (match-any)
          29 packets, 4988 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: any
          queue limit 350 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0

 Interface tunnel1 <--> 203.0.113.253
  Service-policy output: group1_parent
    Class-map: class-default (match-any)
      29 packets, 4988 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Queueing
      queue limit 750 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0
      shape (average) cir 3000000, bc 12000, be 12000
      target shape rate 3000000
      Service-policy : group1
        queue stats for all priority classes:
          queue limit 250 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0
        Class-map: group1_voice (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: access-group 100
          Priority: 1000 kbps, burst bytes 25000, b/w exceed drops: 0
        Class-map: group1_Routing (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: ip precedence 6
          Queueing
          queue limit 150 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0
          bandwidth 20% (600 kbps)
        Class-map: class-default (match-any)
          29 packets, 4988 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: any
          queue limit 350 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0

 Interface tunnel1 <--> 203.0.113.254
  Service-policy output: group2_parent
    Class-map: class-default (match-any)
      14 packets, 2408 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Queueing
      queue limit 500 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0
      shape (average) cir 2000000, bc 8000, be 8000
      target shape rate 2000000
      Service-policy : group2
        queue stats for all priority classes:
          queue limit 100 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0
        Class-map: group2_voice (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: access-group 100
          Priority: 20% (400 kbps), burst bytes 10000, b/w exceed drops: 0
        Class-map: group2_Routing (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: ip precedence 6
          Queueing
          queue limit 50 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0
          bandwidth 10% (200 kbps)
        Class-map: class-default (match-any)
          14 packets, 2408 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: any
          queue limit 350 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0
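For reference, the group1 policy hierarchy behind this output would look roughly like this (my reconstruction from the counters above, not the exact configuration from the document; ACL 100 is whatever classifies the voice traffic):

! Sketch reconstructed from the show output above
class-map match-all group1_voice
 match access-group 100
class-map match-all group1_Routing
 match ip precedence 6
!
policy-map group1
 class group1_voice
  priority 1000
 class group1_Routing
  bandwidth percent 20
!
policy-map group1_parent
 class class-default
  shape average 3000000
  service-policy group1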
09-07-2019 05:31 AM
Hello Perkin,
The document that you have provided is an 18-page PDF.
On page 4, when listing the benefits, it says:
>> Traffic can be regulated from the hub to spokes on a per-spoke basis.
This feature is called per-tunnel QoS, but it would be more accurate to call it per-spoke QoS.
Some spokes can use the same policy if they belong to the same NHRP group.
But a different instance of the policy-map is used for each spoke.
>> The NHRP group string received from a spoke is mapped to a QoS policy, which is applied to that hub-to-spoke tunnel in the egress direction.
Coming to your specific questions:
>> 1) Will QoS shape 2 x 3 Mbps = 6 Mbps in total?
Yes, there will be two instances of the shaper policy, one for each spoke, plus a 2 Mbps shaper for the third spoke, which is part of a different NHRP group.
>> 2) Where can I see the total bandwidth for a single policy, which should be just 3 Mbps in the example below?
I quote some lines of your example output below:
Device# show policy-map multipoint
Interface tunnel1 <--> 203.0.113.252
Service-policy output: group1_parent
[ output omitted ]
>> shape (average) cir 3000000, bc 12000, be 12000   ! this means: shape all traffic to this spoke to 3 Mbps
>> target shape rate 3000000
[output omitted]
Interface tunnel1 <--> 203.0.113.253
Service-policy output: group1_parent
[output omitted]
>> shape (average) cir 3000000, bc 12000, be 12000   ! this spoke also has a parent shaper of 3 Mbps
>> target shape rate 3000000
>> Service-policy : group1   ! the child policy allows differentiated treatment within the logical pipe provided by the parent shaper; the child classes' bandwidth reference is 3 Mbps, not the interface speed/bandwidth
The great advantage of this feature is that NHRP uses the spoke registration process to adjust the QoS configuration.
The NHRP group name configured on the spoke is carried in the NHRP messages sent from the spoke to the hub/NHS.
The hub/NHS can dynamically change the QoS configuration for each spoke based on the NHRP group name attribute.
This attribute travels as a Vendor Private Extension to the NHRP protocol.
If you change the NHRP group name on a spoke, the new group name is sent at the next registration (there is no flash update).
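As a sketch of how this mapping is configured (interface numbers are assumptions, group names are taken from your output), the hub maps each NHRP group to a policy and each spoke advertises its group:

! Hub: map the NHRP group received from a spoke to the QoS policy for that spoke
interface Tunnel1
 ip nhrp map group group1 service-policy output group1_parent
 ip nhrp map group group2 service-policy output group2_parent
!
! Spoke belonging to group1: advertise the group name in its NHRP registrations
interface Tunnel1
 ip nhrp group group1

On the hub, show dmvpn detail and show ip nhrp group-map let you verify which policy has been applied to each registered spoke.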
Hope to help
Giuseppe
09-08-2019 01:13 AM
Hello Perkin,
you can apply a policy-map on the physical interface to provide an overall shaper, within the limitations/guidelines given on page 3 of the document that you linked in your first post in this thread:
>>
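As a rough sketch of such an overall shaper (the interface name and the 10 Mbps rate are my assumptions; per the guidelines referenced above, with per-tunnel QoS only a class-default shaper like this is supported on the physical interface):

! Sketch only: rate and interface name are assumptions
policy-map PHYSICAL_SHAPER
 class class-default
  shape average 10000000
!
interface GigabitEthernet0/0
 service-policy output PHYSICAL_SHAPER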
09-07-2019 04:20 PM
Hello again!
So, per your explanation, this is more about QoS considerations on the spoke side of each tunnel.
Let me step back: is there any way I can take care of the hub side?
For example:
hub: 10 Mbps
spokes: 10 x 1 Mbps and 10 x 2 Mbps
two queues, split 50/50
So, per your explanation, even if I set up two QoS profiles, a 1 Mbps profile and a 2 Mbps profile,
the hub router will think it has 10 x 1 + 10 x 2 = 30 Mbps to allocate (15 Mbps per queue)?
Q2 will be congested even though it may actually only get, say, 1 Mbps?
How can I prevent Q1 from using up the whole 10 Mbps and leaving no bandwidth for Q2?
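To make the scenario concrete, the two per-spoke profiles I have in mind would look roughly like this (class names, match criteria and the exact 50/50 split are only placeholders):

! Placeholder classification for the two queues
class-map match-all Q1
 match ip precedence 5
class-map match-all Q2
 match ip precedence 0
!
! Child policy: split whatever the parent shaper allows 50/50 between the two queues
policy-map CHILD_50_50
 class Q1
  bandwidth remaining percent 50
 class Q2
  bandwidth remaining percent 50
!
! Per-spoke parent shapers: a 1 Mbps profile and a 2 Mbps profile
policy-map PROFILE_1M
 class class-default
  shape average 1000000
  service-policy CHILD_50_50
policy-map PROFILE_2M
 class class-default
  shape average 2000000
  service-policy CHILD_50_50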
09-09-2019 06:04 AM
Leaving these notes as a summary in case anyone reads this someday :-)
After testing and discussion, it seems to me that per-tunnel QoS on mGRE works well only IF we assume there is no bandwidth limitation on the headend.
The actual limitations I found are:
- Tunnel-based QoS gives you the convenience of "profiling" QoS per spoke, but it has no intelligence about how many tunnels (spokes) are connected or how much bandwidth they have to share. For example, with a 10 Mbps hub and 20 x 1 Mbps spokes, QoS behaves as if 20 x 1 Mbps = 20 Mbps of bandwidth were available and will not drop traffic.
- This is a problem for the "parent" policy, which does not know the bandwidth actually available, and it becomes a problem for the child policy if you want fairness between the different classes of traffic: a single queue may consume everything, because the policy does not know how much bandwidth it really has.
Additional QoS does not seem to help either:
- QoS on ingress does nothing for bandwidth allocation/shaping.
- QoS on the egress physical interface can only shape with a single queue (class-default), so it just shapes everything and loses the benefit of per-class bandwidth allocation.
So the only better option I can think of is another router (such as a provider CE router) doing the actual bandwidth allocation.
I hope some expert can suggest a better QoS design for mGRE with limited bandwidth on the hub side.