
Per-tunnel QoS and physical interface CBWFQ

Hello,

I am preparing a configuration (currently in the lab) for Per-Tunnel QoS in DMVPN on an ASR 1002F for one of our customers, and I came across one issue. According to the restrictions for this feature, I cannot apply per-tunnel QoS in conjunction with interface-based QoS. This means I can provide shaping with hierarchical CBWFQ for each spoke, but I cannot guarantee anything on the physical interface! What if there are services in native MPLS? I am also unable to give a bandwidth reservation to BGP, which is used on the PE-CE link! How about monitoring spoke PE-CE links natively? I can only apply a policy-map with class-default on the physical interface. When I add anything related to queueing to that class (or any other non-default class) I get the message:

R1(config-pmap)#class routing
R1(config-pmap-c)#bandwidth 16
service-policy with queueing features on sessions is not allowed in conjunction with interface based

Config:

policy-map cbwfq
 class icmp
  bandwidth 32
 class telnet
  bandwidth 32
 class routing
  bandwidth 8
!
policy-map shaper128
 class class-default
  shape average 128000 1280 0
  service-policy cbwfq
!
policy-map shaper256
 class class-default
  shape average 256000 2560 0
  service-policy cbwfq
!
policy-map shaper-f0-0
 class class-default
!
interface Tunnel0
 bandwidth 10000
 ip nhrp map group shaper128 service-policy output shaper128
 ip nhrp map group shaper256 service-policy output shaper256
!
interface FastEthernet0/0
 service-policy output shaper-f0-0
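
For completeness, here is a hedged sketch of the matching spoke side, which isn't shown above: per-tunnel QoS is driven by the NHRP group name a spoke advertises during registration, which the hub maps to one of the shapers via the ip nhrp map group commands in the config. The group name is reused from the hub config; the rest of the spoke's tunnel configuration is omitted.

interface Tunnel0
 ! Spoke advertises its NHRP group in its registration request;
 ! the hub then attaches the matching shaper (here shaper128)
 ! to this spoke's tunnel session.
 ip nhrp group shaper128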

Any thoughts/ideas/solutions are more than welcome :-)

Cheers,

Krzysztof

8 Replies

jkirby
Level 1

Arrgh! Me too! I need to shape per-tunnel traffic at the hub to avoid overrunning any of the spokes. And I know my DSCP markings will be copied up to the IPsec header, so the MPLS provider can correctly classify and queue our traffic. However, I need to make sure I can correctly prioritize and queue those packets at egress of the physical port, not just within each tunnel, as well as ensure that BGP to the carrier gets priority too.

What good does per-tunnel QoS do when you can't apply per-interface QoS on the interface where all tunnels aggregate?  And if I read the docs correctly, it doesn't matter if I source my tunnels from a loopback; the restriction on the physical interface still applies.

dinan
Level 1

I'm seeing a similar issue. I have per-tunnel QoS on DMVPN tunnels. I also have a serial interface with a shaping policy, and that works, but if I apply a shaping policy to the Ethernet interface going to my data center I get the following:

R1(config-if)#service-policy out shaping-policy

service-policy with queueing features on sessions is not allowed in conjunction with interface based

I should have read more. I found this:

http://www.cisco.com/en/US/docs/ios-xml/ios/sec_conn_dmvpn/configuration/15-2mt/sec-conn-dmvpn-per-tunnel-qos.html#GUID-182BD32F-56D4-479C-BFEF-B9738291E046

You can't do QoS on both the tunnel interface and the physical interface that DMVPN uses. That explains why shaping works on the serial interface, which DMVPN doesn't use, but not on the Ethernet interface, which is the physical interface for DMVPN.
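
A hedged sketch of that arrangement (interface names are illustrative, not from the post): the same shaping policy attaches cleanly to the serial link that MPLS uses, but is rejected on the Ethernet interface that sources the DMVPN tunnels.

interface Serial0/0
 ! MPLS link, not used by DMVPN: the shaper attaches without complaint
 service-policy output shaping-policy
!
interface GigabitEthernet0/1
 ! DMVPN underlay: rejected with "service-policy with queueing features
 ! on sessions is not allowed in conjunction with interface based"
 service-policy output shaping-policy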

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Any thoughts/ideas/solutions are more than welcome :-)

Patient: "Doctor, doctor - it hurts when I do this."  Doctor: "Then don't do that." 

Seriously, the (current) solution is to not oversubscribe your underlying physical link, or to understand the probabilities of performance loss if you do oversubscribe.

Further seriously, consider traffic from spokes to hub. If you're unable to manage/control egress QoS on the far side of your hub's connection to the "cloud", how do you manage aggregate congestion from all your spokes?

Eventually, Cisco may add such combined tunnel and physical interface QoS as an enhancement, but for now, to manage congestion on the hub's interface while also managing per-spoke congestion, I believe we're stuck with keeping the spokes' aggregate bandwidth within the hub's physical bandwidth.
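
A hedged sketch of that sizing discipline, reusing the shaper names from the original post (the spoke counts and uplink rate are made-up illustrations): if the per-spoke shapers sum to no more than the hub's physical rate, the physical interface cannot congest even without its own queueing policy.

! Illustrative budget for a 10 Mb/s hub uplink:
!   20 spokes in NHRP group shaper256: 20 x 256 kb/s = 5.12 Mb/s
!   30 spokes in NHRP group shaper128: 30 x 128 kb/s = 3.84 Mb/s
!   Worst-case aggregate: 8.96 Mb/s, which fits within the 10 Mb/s
!   uplink, so the hub's physical interface never becomes the bottleneck.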

dinan
Level 1

In our situation, the hub router for DMVPN is also the hub router for MPLS. Our issue is that we have backups at the DMVPN and MPLS sites, and we don't want them maxing out the bandwidth at the remote sites. The best way, of course, would be to shape at the remote router, which is what I'll eventually look at doing, but I was looking for a quick fix.

I just found it interesting that I could put shaping on the serial interface that MPLS uses but couldn't put it on the Gigabit Ethernet that goes to our data center and is also the interface DMVPN uses. It makes more sense now why that wouldn't work.

Joseph W. Doherty

I'm a bit confused about your mention of shaping at the remote site. Which direction is the "typical" backup volume - from or to the remote site? Does the remote site have more physical bandwidth than logical bandwidth?

dinan
Level 1

The backups are from the remote site to the data center, so for the MPLS sites I could shape the backup traffic leaving the remote site's serial interface; I would use a time-of-day filter to limit backups during the day. I was trying to figure out a way to limit backups during the day while still allowing client restores to happen. The restores would be a download to the site, so I wouldn't shape that direction. I've already got this traffic marked lowest priority in my CBWFQ, but we still see performance issues and get complaints from users; right now, most of the time we get a complaint about network slowness, the site is doing a backup. Most of the remote sites are standard T1s.
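
A hedged sketch of that time-of-day idea (the server address, port, rate, and names below are illustrative assumptions, not details from the thread): qualify the backup ACL with a time-range so the shaper only bites during business hours, and shape only the backup class so restores and other traffic are untouched.

time-range BUSINESS-HOURS
 periodic weekdays 8:00 to 18:00
!
ip access-list extended BACKUP-TRAFFIC
 ! Traffic toward a hypothetical backup server at the data center;
 ! matches only while BUSINESS-HOURS is active
 permit tcp any host 10.0.0.50 eq 10000 time-range BUSINESS-HOURS
!
class-map match-all backup
 match access-group name BACKUP-TRAFFIC
!
policy-map shape-backups-daytime
 class backup
  ! Cap daytime backups well below the T1 rate
  shape average 256000
!
interface Serial0/0
 service-policy output shape-backups-daytime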

Joseph W. Doherty

I worked on supporting something similar several years ago, i.e. remote sites with low-speed links (single to several T1/E1) into an MPLS cloud, and backups running from the remote sites.

What worked well for us was CBWFQ on the physical interface, with the backup traffic class allocated only 1%.

Our MPLS service had a QoS policy too. It didn't hold backup traffic ingress to our HQ to 1%, but as I recall (?) it wasn't oversubscribed either.

Some sites continuously pegged their remote links, outbound, during business hours because of backups, yet we supported every other mix of traffic across those saturated links, including VoIP and video conferencing.
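
A hedged sketch of that approach (the class-map match and names are assumptions; the post only says backup traffic was marked lowest priority and allocated 1%): under congestion, CBWFQ holds the backup class to roughly its 1% guarantee while everything else shares the rest, yet backups can still soak up idle bandwidth off-hours.

class-map match-all backup
 ! Assumes backups arrive already marked with a low-priority DSCP, e.g. CS1
 match dscp cs1
!
policy-map remote-wan
 class backup
  ! 1% minimum guarantee; under congestion this is effectively its share
  bandwidth percent 1
 class class-default
  fair-queue
!
interface Serial0/0
 ! T1 toward the MPLS cloud
 service-policy output remote-wan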
