QoS with multipoint mGRE

perkin
Level 1

Hello experts, 

 

I am starting to implement QoS on mGRE tunnels,

and I have learned that QoS on an mGRE tunnel is slightly different from other interfaces, since the tunnel is a logical interface.

In DMVPN this is the so-called per-tunnel QoS.

Please refer to this document: https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/sec_conn_dmvpn/configuration/xe-16-8/sec-conn-dmvpn-xe-16-8-book/sec-conn-dmvpn-per-tunnel-qos.pdf

We can configure a different policy-map for each tunnel, and the same policy-map can also be shared by multiple tunnels.

 

Referring to the example on page 14 of that document (I copied the output below): endpoints 203.0.113.252 and 203.0.113.253 share the same policy group1_parent. Will they share the same 3 Mbps of bandwidth?

 

I ask because, in the output below, they seem to be accounted for separately.

Questions for the experts:

1) Will QoS shape 2 x 3 Mbps = 6 Mbps in total?

2) Where can I see the total bandwidth for a single policy, which should be just 3 Mbps in the example below?

 

Device# show policy-map multipoint
Interface tunnel1 <--> 203.0.113.252
Service-policy output: group1_parent
Class-map: class-default (match-any)
29 packets, 4988 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
queue limit 750 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
shape (average) cir 3000000, bc 12000, be 12000
target shape rate 3000000
Service-policy : group1
queue stats for all priority classes:
queue limit 250 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Class-map: group1_voice (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: access-group 100
Priority: 1000 kbps, burst bytes 25000, b/w exceed drops: 0
Class-map: group1_Routing (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip precedence 6
Queueing
queue limit 150 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
bandwidth 20% (600 kbps)
Class-map: class-default (match-any)
29 packets, 4988 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
queue limit 350 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Interface tunnel1 <--> 203.0.113.253
Service-policy output: group1_parent
Class-map: class-default (match-any)
29 packets, 4988 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
queue limit 750 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
shape (average) cir 3000000, bc 12000, be 12000
target shape rate 3000000
Service-policy : group1
queue stats for all priority classes:
queue limit 250 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Class-map: group1_voice (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: access-group 100
Priority: 1000 kbps, burst bytes 25000, b/w exceed drops: 0
Class-map: group1_Routing (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip precedence 6
Queueing
queue limit 150 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
bandwidth 20% (600 kbps)
Class-map: class-default (match-any)
29 packets, 4988 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
queue limit 350 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Interface tunnel1 <--> 203.0.113.254
Service-policy output: group2_parent
Class-map: class-default (match-any)
14 packets, 2408 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
queue limit 500 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
shape (average) cir 2000000, bc 8000, be 8000
target shape rate 2000000
Service-policy : group2
queue stats for all priority classes:
queue limit 100 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
Class-map: group2_voice (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: access-group 100
Priority: 20% (400 kbps), burst bytes 10000, b/w exceed drops: 0
Class-map: group2_Routing (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip precedence 6
Queueing
queue limit 50 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
bandwidth 10% (200 kbps)
Class-map: class-default (match-any)
14 packets, 2408 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
queue limit 350 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0


10 Replies

Giuseppe Larosa
Hall of Fame

Hello Perkin,

the document that you have provided is an 18-page PDF.

On page 4, when listing the benefits, it states:

>> Traffic can be regulated from the hub to spokes on a per-spoke basis.

 

 

This feature is called per-tunnel QoS, but it should really be called per-spoke QoS.

Some spokes can use the same policy if they belong to the same NHRP group.

But a different instance of the policy-map is used for each spoke.

>> The NHRP group string received from a spoke is mapped to a QoS policy, which is applied to that hub-to-spoke tunnel in the egress direction.

 

Coming to your specific questions:

>> 1) Will QoS shape 2 x 3 Mbps = 6 Mbps in total?

Yes, there will be two instances of the shaper policy, one for each spoke, plus a 2 Mbps shaper for the third spoke, which is part of a different NHRP group.

 

>> 2) Where can I see the total bandwidth for a single policy, which should be just 3 Mbps in the example below?

 

I repeat some lines of your example output:

 

Device# show policy-map multipoint
Interface tunnel1 <--> 203.0.113.252
Service-policy output: group1_parent

[ output omitted ]

>>shape (average) cir 3000000, bc 12000, be 12000    ! this means all traffic to this spoke is shaped to 3 Mbps
>> target shape rate 3000000

[output omitted]

Interface tunnel1 <--> 203.0.113.253
Service-policy output: group1_parent

[output omitted]

>>shape (average) cir 3000000, bc 12000, be 12000 ! also this spoke has a parent shaper of 3 Mbps
>>target shape rate 3000000
Service-policy : group1   ! the child policy provides differentiated treatment within the logical pipe created by the parent shaper; its bandwidth reference is 3 Mbps, not the interface speed/bandwidth.
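
For reference, a hierarchical policy consistent with the group1 output above would look roughly like the sketch below. It is reconstructed from the counters shown rather than copied from the document, so treat the class names and ACL 100 as taken from the output, not as the exact original configuration.

class-map match-all group1_voice
 match access-group 100
class-map match-all group1_Routing
 match ip precedence 6
!
! child policy: LLQ for voice, CBWFQ for routing traffic
policy-map group1
 class group1_voice
  priority 1000
 class group1_Routing
  bandwidth percent 20
!
! parent policy: 3 Mbps shaper per spoke, with the child policy nested inside
policy-map group1_parent
 class class-default
  shape average 3000000
  service-policy group1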

 

The great advantage of this feature is that the NHRP protocol uses spoke registration to adjust the QoS configuration.

The NHRP group name configured on the spoke is reported in NHRP messages sent from spoke to HUB/NHS.

The Hub/NHS can dynamically change the QoS configuration for each spoke based on the NHRP group name attribute.

This travels in NHRP as a Vendor Private Extension to the NHRP protocol.

If the NHRP group name is changed on a spoke, the new group name will be sent at the next registration (there is no flash update).
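
As a rough sketch of how this mapping is configured (the tunnel number is illustrative, while the group and policy names match the example above):

! Hub: map the NHRP group advertised by each spoke to the corresponding QoS policy
interface Tunnel1
 ip nhrp map group group1 service-policy output group1_parent
 ip nhrp map group group2 service-policy output group2_parent
!
! Spoke: advertise its NHRP group name in registrations to the hub/NHS
interface Tunnel1
 ip nhrp group group1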

 

Hope to help

Giuseppe
Many thanks, Giuseppe.
So does that mean my worry is valid, or is there any way to solve the problem?


For example, I have 10 Mbps of MPLS at the hub site
and 20 x 1 Mbps spoke sites.
If I shape each spoke to an average of 1 Mbps, then the hub site will be oversubscribed.

If I shape to 0.5 Mbps, then the maximum traffic to each spoke will be only 0.5 Mbps?


So the best setup I can think of is to use a parent policy to shape the total hub-side bandwidth *before* it is allocated to the per-spoke QoS. Is that possible?
I can't find this kind of example.

Hello Perkin,

you can apply a policy-map on the physical interface to provide an overall shaper, with the following limitations / guidelines given on page 3 of the document that you linked in your first post in this thread:

 

>> The main interface or subinterface QoS service policy is limited to only a class-default shaper (it can only contain the class class-default and shape commands). Additional QoS configurations are not supported on the main interface or subinterface when two different QoS service policies are applied to the main or subinterface and the tunnel interface simultaneously.
>> The main interface or subinterface QoS service policy must be applied before the tunnel interface service policy.
 
So you can apply an overall general shaper on the physical interface that is the source of the multipoint GRE tunnel, and then you have two more levels of QoS: the per-spoke shaper, chosen using the NHRP group name advertised by the spoke in its NHRP registration messages sent to the hub/NHS, and the related child queueing policy.
 
This should be enough to keep the hub from attempting to use more bandwidth than is available at the physical interface, while still providing the per-spoke shaper and child CBWFQ.
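
A minimal sketch of that overall shaper, assuming a 10 Mbps physical interface named GigabitEthernet0/0 (both the interface name and the rate are illustrative), and remembering that per the quoted guideline this policy must be applied before the tunnel service policy:

! class-default shaper only, as required on the tunnel source interface
policy-map physical_parent
 class class-default
  shape average 10000000
!
interface GigabitEthernet0/0
 service-policy output physical_parent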
 
Note: I think this thread is more meaningful and helpful than the older one, because you have provided a document that can be used by other people who may have the same issue.
 
Hope to help
Giuseppe

Many thanks, let me try.
I ask because I have read some articles mentioning that tunnel and physical interface QoS can't be used at the same time.

"Since I read some articles mentioned that tunnel and physical interface qos can’t be use in the same time"

Depends on the device. I've done it on some routers with non-mGRE tunnels. Don't recall if I've done it with mGRE.

Hello again!

 

So, per your explanation, this is more about QoS considerations on the spoke side.

 

So let's step back: is there any way I can take care of the hub site?

 

For example:

hub: 10 Mbps

spokes: 10 x 1 Mbps and 10 x 2 Mbps

2 queues, split 50/50

 

So, per your explanation, even if I set up two QoS profiles, a 1 Mbps profile and a 2 Mbps profile,

 

the hub site router will think it has

10 x 1 + 10 x 2 = 30 Mbps of bandwidth to allocate (i.e. 10 x 0.5 + 10 x 1 = 15 Mbps per queue)?

So Q2 will be congested even though it should actually have, say, 1 Mbps?

 

How can I prevent Q1 from over-utilising the whole 10 Mbps and leaving no bandwidth for Q2?

Joseph W. Doherty
Hall of Fame
It's per-tunnel shaping, so if you have two tunnels, each shaped for 3 Mbps, each tunnel is independently shaped at 3 Mbps. This assumes the interface hosting the tunnels has at least the aggregate bandwidth of the tunnels; in this example, the physical hosting interface can provide at least 6 Mbps.

If, on the other hand, the physical interface only provided 5 Mbps, either tunnel could go as high as 3 Mbps provided the other tunnel was using 2 Mbps or less. If both tunnels wanted their 3 Mbps, they would share the 5 Mbps.

Hi Joseph,

Thanks for your explanation.
Yes, like your example:
my head end has only 10 Mbps, and I have 10 x 1 Mbps links and 10 x 2 Mbps links.

So what is the best way to set up QoS, with 2 queues split 50/50 for example?

The pain point is this:
the spokes do not "share" the actual QoS, they just share the "profile".

Each individual QoS instance will think it can use all of the bandwidth; as above, if the policy allows Q1 traffic up to 5 Mbps (50%) per spoke, then it will already starve the Q2 traffic.

The best logic I can think of is to shape the traffic going into the tunnels first, and then use percentages per spoke, so that, like "bandwidth", they can adjust when the available bandwidth is sufficient.

Is that something that exists in the Cisco design?
Many thanks!


Thanks again.
After taking a look at your comment and the others,
it seems to me this QoS is really more about the spoke-side tunnels than about the classes of traffic, and it assumes the head end has unlimited bandwidth.

So let me ask a basic, humble question:
for now I don't care about the spoke side, I just want to ensure the hub side has fair QoS for 2 queues of traffic, split 50/50.
How can I set it up to make sure each class gets 50% from the head end?

perkin
Level 1

Leaving these notes as a summary in case anyone reads this someday :-)

 

After testing and discussion, it seems to me this per-tunnel mGRE QoS can work well

IF we assume there is no bandwidth limitation on the head end.

The actual limitations I found are:

- Tunnel-based QoS (it seems) only gives you a nice way to "profile" the QoS, but it has no intelligence about how many tunnels (spokes) it is connected to or how much bandwidth there is to share. For example, if you have a 10 Mbps hub but 20 x 1 Mbps spokes, QoS will think there is 20 x 1 Mbps of bandwidth to use and will not drop the traffic.

This is the problem with the "parent QoS", which does not know the actual bandwidth that can be used, and it becomes a problem for the child policy if you want some fairness between the different classes of traffic: one queue may use everything, because the QoS policy does not know how much it can actually use.

 

Additional QoS does not seem to help either:

QoS on ingress has nothing to do with bandwidth/shaping.

QoS on the egress physical interface can only shape with a single class-default queue, so it just shapes everything and loses the benefit of per-class bandwidth allocation.

 

So the only better way I can think of is to have another router (like the CE router from the provider) do the actual bandwidth allocation.

 

I hope some experts can suggest a better QoS design for mGRE with limited bandwidth on the hub side.

 
