1197 Views · 5 Helpful · 7 Replies

QOS over GRE Tunnels

ahmad82pkn
Level 3

Hi,

I need some clarification on QoS over GRE.
Per the Cisco documentation, I have created a policy and applied it on the physical interface, and applied the qos pre-classify command on the GRE tunnels.
I have 5 remote offices, so 5 GRE tunnels are present on the head office router.

The configuration below is for the head office.
I have given 40% priority to voice, 30% bandwidth to production data, 5% bandwidth to anti-virus and update servers, and 20% (class-default) to general internet.

Bandwidth at the head office is 30 Mbps. (Note: head office users also do internet downloading.)


Now my questions are:

  1. If the physical interface is congested by head office users downloading heavy files, will my QoS still be able to prioritize voice without impact?
  2. How will bandwidth be distributed to the 5 tunnels individually?
  3. What will happen if one tunnel is utilizing the full 30 Mbps and a second tunnel also starts downloading data? How will bandwidth be allocated to tunnel 2?

Please answer only if you are well familiar with QoS and have real hands-on experience :) not just assumptions :(

Thank you in advance. Below is my sample configuration.

int fa2/1
 bandwidth 30000   ! 30 Mbps
 service-policy output BRANCH-QOS-POLICY

int tun 1
 qos pre-classify
int tun 2
 qos pre-classify
int tun 3
 qos pre-classify
int tun 4
 qos pre-classify
int tun 5
 qos pre-classify

policy-map BRANCH-QOS-POLICY
 class class-default
  shape average 30000000
  service-policy TUNNEL-QOS

policy-map TUNNEL-QOS
 class BRANCH-VOICE
  priority percent 40
 class BRANCH-CALL-SIGNALING
  bandwidth percent 5
 class BRANCH-MISSION-CRITICAL
  bandwidth percent 30
 class BRANCH-BULK-DATA
  bandwidth percent 5
  random-detect
 class class-default
  bandwidth percent 20
  random-detect

 

7 Replies

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

#1 Which physical interface, i.e. HQ or branch?  If HQ physical (egress), VoIP will be prioritized.

#2 depends on how much traffic and what kind of traffic they are transmitting.  Basically, with your setup, bandwidth depends on QoS at physical egress.  Think of the tunnel egress similar to switch port ingress, but without a switch port's physical bandwidth limits.

#3 As far as I know, tunnels shouldn't have any priority relative to each other.  So, everything else being equal, they should (eventually) average out at about equal bandwidth utilization.  (This is because each tunnel would direct its similar traffic to the same class queue.)

BTW:

30Meg is Bandwidth at Head Office. ( Note Head office also has users doing internet downloading )

Internet bandwidth sharing generally negates your QoS policy.  Your egress policy can easily manage outbound, but what manages inbound Internet traffic?

For your first question, your policy will prioritize any tunnel's VoIP traffic at HQ egress, but unless your branch also has 30 Mbps, or if the branch also permits internet bandwidth sharing, you can have unmanaged congestion issues at branch ingress which may disrupt your VoIP, and other class, SLA goals.
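One partial mitigation sometimes used for the inbound Internet problem is an ingress policer on the Internet-facing interface, which indirectly pressures TCP downloads to back off. This is only a sketch with illustrative numbers (the 20 Mbps rate is an assumption, chosen to leave headroom for tunnel traffic), and it works poorly against UDP or many parallel flows:

policy-map INBOUND-POLICE
 class class-default
  police 20000000 conform-action transmit exceed-action drop

int fa2/1
 service-policy input INBOUND-POLICE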

 

So what I understand is: even if I prioritize traffic from branch towards HQ outbound,

when someone is downloading some huge file at HQ, there is nothing I can do.

Secondly:

what about if I prioritize from HQ towards the branch, will it be beneficial? My point is: when traffic leaves HQ upstream to reach the branch in the inbound download direction, it will have congested the branch ingress interface already, and only then will the packet be opened to find out its priority etc., so I don't think it will be beneficial in that case.

Is that correct?


so what i understand is, even if i prioritize traffic from Branch Towards HQ Out,

when someone is downloading some huge file in HQ, there is nothing i can do.

If the huge file is coming from the Internet, then yes, you have a QoS problem.  If you mean from a branch, you only have a problem with traffic from multiple branches (especially if their aggregate oversubscribes the HQ link).

what about if i prioritize from HQ towards Branch, will it be beneficial? my point is when traffic will     leave HQ upload to reach Branch in download direction IN, it would have congested Branch ingress interface already and then it will open packet to find out its priority etc, so i dont think it will be beneficial in that case.

It depends on the bandwidth at the branch.  If, for example, the branch only had 10 Mbps, HQ sending at 25 Mbps wouldn't even queue at HQ, but would congest on the link to the branch.  The usual way to address this issue is to give each tunnel a shaper, with prioritization QoS, that corresponds to the branch's bandwidth.  Some Cisco devices will support that and physical interface QoS too.  I'm unsure, though, whether any will support both a tunnel and physical interface shaper, or a hierarchical physical interface policy that supports both a parent and child shaper.
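A per-tunnel shaper of the kind described above might look like the following sketch. The policy names and the 10 Mbps rate are illustrative (assuming branch 1 has a 10 Mbps link), and whether a queuing service policy is accepted on a tunnel interface depends on platform and IOS version:

policy-map BRANCH1-TUNNEL-QOS
 class BRANCH-VOICE
  priority percent 40
 class class-default
  bandwidth percent 60

policy-map BRANCH1-SHAPER
 class class-default
  shape average 10000000   ! assumed branch 1 link speed of 10 Mbps
  service-policy BRANCH1-TUNNEL-QOS

int tun 1
 service-policy output BRANCH1-SHAPER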

If one device cannot do the two-level shaping management, you could consider having the next upstream HQ LAN device (if capable) shape and prioritize for 30 Mbps, and use the router to further shape for each branch.  I would recommend such an effort, as QoS can be great for congestion management, but again, if you're sharing the link for general Internet access, this creates ingress traffic flows you have little bandwidth management control over.

Really appreciate your time in teaching me this tough topic.

I need some further help with a query.

I applied QoS between my two offices, where the delay between them is only 1 ms.

I applied QoS and gave voice 40% priority on a 100 Mbps link.

For testing, I also added ICMP to the same voice policy.

What I noticed is that the ICMP response time increased from 1 ms to between 18 ms and 20 ms.

Why would this happen? I don't see any drops on the voice queue.

Attached is my queue output at the time of congestion, as JPEG and raw data.

Any suggestion or guidance will be appreciated.

 sh policy-map int gi0/0
 GigabitEthernet0/0

  Service-policy output: PL-QOS

    queue stats for all priority classes:
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 563869/73019283

    Class-map: BRANCH-VOICE (match-any)
      563865 packets, 73156548 bytes
      5 minute offered rate 1526000 bps, drop rate 0 bps
      Match: protocol rtp
        553892 packets, 69893595 bytes
        5 minute rate 1456000 bps
      Match:  dscp ef (46)
        2802 packets, 665948 bytes
        5 minute rate 24000 bps
      Match: protocol sip
        4164 packets, 2166469 bytes
        5 minute rate 64000 bps
      Match: protocol icmp
        3007 packets, 430536 bytes
        5 minute rate 25000 bps
      Priority: 40% (40000 kbps), burst bytes 1000000, b/w exceed drops: 0

 


You say the link is 100 Mbps, but the interface is gig.  So, it's running at 100 Mbps?  Is it a full 100 Mbps end-to-end?  The link isn't shared with any other traffic?  You say "when congested", so the link was at 100% utilization?

Actually, I went off the topic a bit; the last question was generic.

I have a 100 Mbps private line between the 2 offices, so I put normal QoS on its interface in the outbound direction.

40% is assigned to the priority queue for voice.

One thing I noticed was that the ping response time increased from 1 ms to 20 ms during congestion, so I put ICMP in the priority queue as well (so that end users don't witness this increased latency), but even then the ICMP response time remained 20 ms, whereas there was no drop rate in the voice queue.

Is this normal? My friend said it is some serialization delay; is that so? Does it mean RTP will also face an additional 20 ms delay during congestion?

 

policy-map PL-QOS
 class BRANCH-VOICE
  priority percent 40

class-map match-any BRANCH-VOICE
 match protocol rtp
 match dscp ef
 match protocol sip
 match protocol icmp

 

 


Not enough information to say for sure, but there are a couple of common possibilities.

First, interfaces have their own TX FIFO queues.  Only after these fill does software queuing come into play.  To mitigate this, on some devices you can decrease the hardware queue depth using the tx-ring-limit command.

Second, although LLQ has dequeuing priority over other queues, packets can still FIFO queue within the LLQ itself, which adds queuing latency.
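As a sketch, shrinking the hardware TX ring on the 100 Mbps interface might look like this (the value 3 is illustrative; the command and its supported range vary by platform and IOS version):

interface GigabitEthernet0/0
 tx-ring-limit 3

For scale: at 100 Mbps a 1500-byte packet takes about 0.12 ms to serialize, so roughly 165 packets queued ahead of a ping (in the TX ring plus the LLQ's own FIFO) would account for about 20 ms of added latency.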

 
