497 Views · 0 Helpful · 6 Replies

per tunnel qos - help

carl_townshend
Spotlight

Hi All

We have an MPLS network with a full mesh of GRE tunnels.

There are many spoke sites, plus a main HQ.

When we apply QoS to this WAN, what is the best way to ensure each tunnel gets the right QoS treatment?

 

Do you put the bandwidth statement on the physical interface only, or do you put bandwidth statements on the tunnels as well?

 

We don't run DMVPN, so we can't use NHRP groups for per-tunnel QoS.

 

What is the correct way to do it?

 

cheers

6 Replies

Mark Malone
VIP Alumni
Hi
I would put the bandwidth statement on both the physical and the tunnel interfaces, but apply QoS only on the physical WAN/LAN interfaces, and on the WAN itself rather than under the sub-interfaces if you have any. We went through several best methods with Cisco when deploying IWAN over our global MPLS network. Your setup is different, but the concepts are the same; this is what was recommended to us and it has worked well so far.

Hi Mark

So we put the bandwidth statement on the physical WAN interface and on the tunnel interfaces? If a spoke site has 10 tunnels, I guess we just match each tunnel's bandwidth to the value configured on the physical interface?

How exactly would it work? Which statement would QoS actually use?

cheers

Hello,

 

On a side note, you can also shape, police, and configure CBWFQ directly on each individual tunnel, in case you need different QoS parameters on each tunnel...

 

Quality of Service Options on GRE Tunnel Interfaces

 

https://www.cisco.com/c/en/us/support/docs/quality-of-service-qos/qos-policing/10106-qos-tunnel.html
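In case a concrete sketch helps, per-tunnel shaping with a CBWFQ child policy could look something like the below. The policy and class names and all rates are placeholders, not from this thread; note that on most IOS platforms a queuing policy attached to a logical interface has to sit under a parent shaper:

class-map match-any VOICE
 match dscp ef
!
policy-map TUNNEL-CHILD
 class VOICE
  priority percent 20
 class class-default
  fair-queue
!
policy-map TUNNEL-SHAPER
 class class-default
  shape average 5000000
  service-policy TUNNEL-CHILD
!
interface Tunnel190
 service-policy output TUNNEL-SHAPER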

Hi Carl
Yes, here is an example of how we do it: the QoS policy is attached to the physical interface only, but the bandwidth statement goes on all of the physical, logical tunnel, and sub-interfaces, with matching values.

As George has noted, you could apply QoS specifically to the tunnel too, but this approach worked well for us in our design using GRE through IWAN:

interface Tunnel190
description
bandwidth 10000
ip address xxxxxx
no ip redirects
ip mtu 1400
ip nbar protocol-discovery
ip flow monitor xxxxxx input
ip flow monitor xxxxxx output
ip nhrp authentication h2smpls
ip nhrp group xxxxx
ip nhrp network-id 11
ip nhrp holdtime 600
ip nhrp nhs 1xxxxxx nbma xxxx multicast
ip nhrp nhs xxxxxx nbma xxxxx multicast
ip nhrp nhs xxxxxx nbma xxxxx multicast
ip nhrp nhs xxxxxx nbma xxxxx multicast
ip nhrp nhs xxxxxx nbma xxxxxx multicast
ip nhrp registration no-unique
ip nhrp registration timeout 60
ip nhrp shortcut
ip tcp adjust-mss 1360
load-interval 30
delay 1000
no nhrp route-watch
if-state nhrp
tunnel source GigabitEthernet0/0/1.100
tunnel mode gre multipoint
tunnel key xxxxxx
tunnel vrf xxxxxx
tunnel protection ipsec profile xxxxxx


interface GigabitEthernet0/0/1
description xxxxxx
bandwidth 10000
no ip address
ip nbar protocol-discovery
ip flow monitor xxxxxx input
ip flow monitor xxxxxx output
speed 10
no negotiation auto
service-policy output (YOUR QOS POLICY HERE)
service-policy type performance-monitor input xxxxxx
service-policy type performance-monitor output xxxxxx


interface GigabitEthernet0/0/1.100
description
bandwidth 10000
encapsulation dot1Q xxxxxx
vrf forwarding xxxxxx
ip address xxxxxx
ip nbar protocol-discovery
ip flow monitor xxxxxx input
ip flow monitor xxxxxx output
no cdp enable
service-policy type performance-monitor input xxxxxx
service-policy type performance-monitor output xxxxxx

Joseph W. Doherty
Hall of Fame
"When we apply QOS to this WAN, what is the best way to ensure each tunnel gets the right qos treatment?"
*** and ***
"We have a MPLS network with a full mesh of gre tunnels."

This creates a problem! In a mesh, multiple nodes may send to a single node at the same time, and they cannot coordinate their bandwidth usage (which proper QoS "backpressure" operation requires). You could shape every node's tunnel toward each other node such that their combined shaped rates never exceed the destination node's bandwidth capacity. This works, but it generally leaves much bandwidth unused, and it's a pain to maintain. (The ideal arrangement, for a multi-node "cloud", would be egress QoS on the cloud's egress link to each destination node.)
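To illustrate the arithmetic with made-up numbers: suppose the HQ access link is 20 Mbps and ten spokes can all send to it at once. Each spoke's tunnel toward HQ would then be shaped to at most 2 Mbps so the combined load cannot exceed HQ's capacity, roughly:

policy-map SHAPE-TO-HQ
 class class-default
  shape average 2000000
!
interface Tunnel1
 service-policy output SHAPE-TO-HQ

When only one or two spokes are transmitting, most of the 20 Mbps sits idle, which is the wasted-bandwidth drawback just described.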

Excluding the issue caused by any kind of mesh, ideally you want two levels of QoS: one policy on the physical interface (egress to the "cloud") and one on each tunnel, the latter shaped to the destination node's bandwidth (if less than this node's egress bandwidth). (NB: if you have "cloud" egress QoS, you don't need the tunnel QoS.)
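A sketch of that two-level approach (all names and rates are placeholders): shape each tunnel to its destination site's bandwidth, and run the main egress queuing policy on the physical interface:

policy-map TO-SPOKE-5MB
 class class-default
  shape average 5000000
!
policy-map WAN-EGRESS
 class class-default
  fair-queue
!
interface Tunnel10
 service-policy output TO-SPOKE-5MB
!
interface GigabitEthernet0/0/1
 service-policy output WAN-EGRESS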

"Do you put bandwidth statement on the physical interface only? or do you put bandwidth statements on the tunnels?"

There's a bit more to a QoS policy than just a bandwidth statement. As QoS features vary by platform and/or IOS version, I cannot delve into possible implementations here.

BTW, why are you using GRE across a (service provider?) MPLS network? Are you running an IGP or multicast?

Hi Joseph

We are using GRE tunnels with crypto applied to them (GRE over IPsec), so that we can encrypt the traffic over the MPLS.

We are running OSPF and multicast over them.
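For reference, GRE over IPsec in this style is typically built with an IPsec profile applied via tunnel protection, roughly like the below (the profile and transform-set names are hypothetical; transport mode is common here to reduce overhead, since GRE already supplies the outer encapsulation):

crypto ipsec transform-set TS esp-aes 256 esp-sha-hmac
 mode transport
!
crypto ipsec profile GRE-PROT
 set transform-set TS
!
interface Tunnel1
 tunnel protection ipsec profile GRE-PROT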
