
QoS over GRE over IPSec - multiple tunnels, one physical interface - how?

RHITCHCOCK
Level 1

I have an E-LAN carrier Ethernet WAN for some of my locations.  Most traffic is HQ-bound, and I do GRE tunnels from loopback interfaces on the routers at outlying locations back to HQ, and encrypt them with IPSec on the outside interfaces (we're a financial institution, so private E-LAN or not, I encrypt).  Those outside interfaces are bandwidth-capped by our provider at different rates, depending on what we pay for at each location.

We've been having some VoIP issues lately (surprise!).  QoS on this type of setup seems to be a real pain, primarily because a GRE interface doesn't detect congestion like a physical one.  I ran across this link while researching options:

https://supportforums.cisco.com/thread/2127287

So it looks like in order to give the tunnel interface a "congestion point", I can shape all traffic on the tunnel interface, then give it a child policy that prioritizes voice when it reaches that congestion point.

class-map match-all voice
 match access-group 160
!
policy-map child
 class voice
  priority 512
!
policy-map tunnel
 class class-default
  shape average 2800000
  service-policy child
!
access-list 160 permit ip host 10.1.1.10 any
! (if 10.1.1.10 is the IP of the on-site phone system where all VoIP traffic originates/terminates)
!
interface Tunnel1
 service-policy output tunnel

This would be perfect for most of my locations.  Back at HQ I can just shape and prioritize on each tunnel for the VoIP traffic there, and at the remote locations I can shape and prioritize on their tunnel to HQ for the VoIP traffic originating at that location.  (Assuming this all works; I haven't implemented it yet.)

However, I have one location that has quite a bit of traffic to both HQ and another location (regional-HQ).  They are all on the same E-LAN WAN.  There is a tunnel built to HQ and a tunnel built to regional-HQ.  The outside physical interface at the remote location is limited to 3 Mbit.

So now I have an issue - I can shape on either tunnel, or both, but since the GRE interfaces aren't aware of each other's "congestion", they could still easily saturate the 3 Mbit limit on the outside interface.  It would seem my only options are to shape them each down to half the available bandwidth, or maybe rate-limit on the physical interface.  Either way, that's not ideal, as I want the full possible bandwidth to be available whether traffic is currently flowing to HQ, regional-HQ, or both!  And I want VoIP to have precedence to either location, no matter what else is flowing where.

Obviously it would be ideal if I could do this on the outside interface, since both tunnels flow through that.  Of course, it's the GRE and IPSec that complicate that.  So, my questions would be:

1. Can I pre-classify on both tunnel interfaces, and will IPSec preserve that as it encrypts traffic on its way out of the outside interface?  If so, what would an example config look like?  (A rough sketch of what I'm imagining is below, after this list of questions.)  I'm not using ToS at all in the example above, and I'm not sure how it would fit.

or

2. Can I use the tunnel service-policy described above on the loopback interface of the router?  Both the tunnel to HQ and the tunnel to regional-HQ terminate on the same loopback, so theoretically this could shape and prioritize in one place for both tunnels... but I have no idea if that service-policy will do anything on a loopback interface.

or

3. Am I going about this the completely wrong way?  Should I be doing some QoS on the inside interface of the router instead?  Should I do something with rate-limits on the inside interface (like tagging non-VoIP traffic with a lower-precedence ToS when it exceeds a rate)?  Should I be trying to pre-classify VoIP traffic on the tunnel interfaces with a higher-precedence ToS?  And if so, is there additional config necessary to make sure that ToS precedence is honored on all interfaces/routers?  Or should I try something completely different I haven't even thought of?
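
For what it's worth, here's roughly what I'm picturing for option 1 - completely untested, and I'm assuming qos pre-classify would let the policy on the physical interface match my voice ACL against the inner (pre-GRE/pre-IPSec) headers.  The GigabitEthernet0/0 name and the 3000000 shaper are just placeholders for the 3 Mbit site:

policy-map voice-child
 class voice
  priority 512
!
policy-map outside-shape
 class class-default
  shape average 3000000
  service-policy voice-child
!
interface Tunnel1
 qos pre-classify
!
interface Tunnel2
 qos pre-classify
!
! (qos pre-classify may also be needed under the crypto map - I'm not sure)
!
interface GigabitEthernet0/0
 service-policy output outside-shape

(This reuses the class-map voice and access-list 160 from above.  If pre-classify doesn't do what I'm hoping, I assume the voice class on the outside interface would only ever see the encrypted ESP packets and would never match.)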

As you can see, I've entertained a lot of ideas, but I feel like I'm in a little over my head.  Any suggestions would be very much appreciated.  Oh, and the routers involved are all 1941s.

Thanks!

5 Replies

jeffreymertz
Level 1

I did the same research, except we are not doing GRE or IPSec; we have a 2801 facing the ISP's Adtran with (almost) the same shaper/QoS policy.

All sites are mesh routed over the VES, so each site is one hop from any other site.  Six sites are 1.5 Mbps, one site is 3 Mbps, and the main site is 10 Mbps.  I'm not seeing any one site send enough outbound traffic to even trip QoS, but each of the 1.5 Mbps sites is having issues hearing voice from any other site.  It seems the VES network can't determine whether the inbound (at the ISP) is full, and they are only doing CoS, making my EF marking and priority queueing useless.

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

For VoIP that shares bandwidth with other traffic, you want a QoS policy to manage congestion anywhere it could form (normally dequeuing VoIP first).

Ideally, you want to manage congestion at the physical choke points, but that's not always possible, such as in the case where you're crossing some service provider "cloud" and have different site bandwidth connections to that "cloud" and/or your sites are multi-point on the "cloud".  In cases where you don't have access to the physical choke point, you can often "shape" upstream of that choke point for the choke point's bandwidth.

So, for example, if you have a 10 Mbps HQ hub with fifteen 1 Mbps branches, you not only need to ensure you have QoS to handle congestion on each physical interface, but also ensure the remote side's egress bandwidth isn't exceeded.  If the sites are physically multipoint but logically hub-and-spoke, you could shape from hub to spoke at 1 Mbps, and from spoke to hub at 10/15 Mbps (or perhaps figure all 15 are unlikely to oversubscribe the 10 Mbps at once).  If your traffic flows in a logical partial mesh (like your two-HQ issue), you can allocate shaping bandwidth as desired (e.g. hub to spoke 0.75, the other spoke to the same spoke 0.25; you could also logically oversubscribe, though that's chancy with VoIP).  Notice that shaping multiple upstream sources doesn't take full advantage of available bandwidth, as QoS at the physical choke point would.  (The latter, BTW, is often a feature of some service provider L3 VPNs.)
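
As a rough sketch (the tunnel numbers, the EF match, and the priority value here are just placeholders, not taken from any particular config), hub-to-spoke shaping for two of those 1 Mbps branches might look something like:

class-map match-all voip
 match dscp ef
!
policy-map voip-child
 class voip
  priority 128
!
policy-map shape-1mbps
 class class-default
  shape average 1000000
  service-policy voip-child
!
interface Tunnel101
 description GRE to branch 1 (1 Mbps)
 service-policy output shape-1mbps
!
interface Tunnel102
 description GRE to branch 2 (1 Mbps)
 service-policy output shape-1mbps

Each tunnel gets its own instance of the shaper, so congestion toward a given spoke forms (and is managed) on the hub router before the cloud's 1 Mbps egress to that spoke ever fills.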

Thanks for the response, Joseph.  If I'm understanding you correctly, my only option might be to shape my tunnels down so each is congested at only half the available bandwidth?

You mean for your two-HQ tunnel issue?  If so, you don't have to apportion bandwidth half and half; you can split it between the two HQs in any ratio you desire.  What is critical is that the aggregate between the two doesn't exceed the maximum available bandwidth.  Doing this, you should avoid unmanaged cloud egress congestion.

You can also, if desired, oversubscribe.  For example, you might apportion 1.5 times the actual available bandwidth.  With any oversubscription, though, you do risk unmanaged congestion.
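
For example (a sketch only - the 2 Mbps / 1 Mbps split is arbitrary, and it reuses the class-map voice and policy-map child from your earlier post), apportioning the 3 Mbit site between the two tunnels might look like:

policy-map shape-to-hq
 class class-default
  shape average 2000000
  service-policy child
!
policy-map shape-to-regional
 class class-default
  shape average 1000000
  service-policy child
!
interface Tunnel1
 description GRE to HQ
 service-policy output shape-to-hq
!
interface Tunnel2
 description GRE to regional-HQ
 service-policy output shape-to-regional

The two shapers' aggregate stays at the 3 Mbit cap; to oversubscribe as described above, you would simply raise one or both shape values.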

Understood, yes.  I probably shouldn't have used the term "half".  But basically I can apportion fractions of the bandwidth to the different tunnels by giving them shaping policy-maps, as long as the total doesn't exceed the available bandwidth (or, like you said, I could oversubscribe).

I guess my last question would just be - what about the loopback interface?  If both GRE tunnels terminate on the same loopback (as they do), doesn't that mean traffic for both tunnels traverses the loopback interface?  In theory, if I shape on that loopback interface (since I don't believe it can sense congestion), couldn't I shape to the full bandwidth and still reserve some for my VoIP traffic?  No matter what's going through the two tunnels, it would never exceed the total available bandwidth, and during congestion it would guarantee bandwidth for my VoIP traffic.  And, best of all, if one tunnel is quiet the other could actually use all of the available bandwidth, instead of being shaped down to a certain slice of it.

I suppose I could just try this, but I want to see if it even makes sense!

Thanks again for your responses.
