433 Views · 15 Helpful · 7 Replies
Beginner

VoIP, GRE, QoS & Policy Shaping

Trying to wrap my head around the proper config to ensure VoIP service when the link becomes saturated.

My WAN sites connect using GRE tunnels anchored to loopbacks through the MPLS cloud. In this example, I have a site whose bandwidth is 50 Mbps.

The site has two 3560G switches running 12.2(44)SE5 (soon to be replaced by 3650s). Phones are a mix of SIP and SCCP.

 

Access ports are all defined like this:

!

interface GigabitEthernet0/xx
 switchport mode access
 switchport voice vlan xxx
 mls qos trust device cisco-phone
 mls qos trust cos
 spanning-tree portfast

 

Trunk Port between 3560G switch and ISR4331 router is:

!

interface GigabitEthernet0/xx
 description TRUNK to ISR4331 Gi0/0/0
 switchport trunk encapsulation dot1q
 switchport mode trunk
 mls qos trust dscp

 

3560G-SW#show mls qos int g0/xx stati
GigabitEthernet0/xx (All statistics are in packets)

  dscp: incoming 
-------------------------------

  0 -  4 :  1733946501         5779         7584           52           25 
  5 -  9 :         307        53202            0         1565           58 
 10 - 14 :    11073809            0            2            0            0 
 15 - 19 :           0         3642            1     10124907          459 
 20 - 24 :        2516          152          599            0       185297 
 25 - 29 :        3502       933215            0        31912           25 
 30 - 34 :      199705          411       564458           14       129597 
 35 - 39 :          62            9           19       107504            1 
 40 - 44 :    31739470           44          243           30           65 
 45 - 49 :           0     57647809            0     23805805          762 
 50 - 54 :         105            0            0            0           30 
 55 - 59 :           0          196            0            0            0 
 60 - 64 :           0            0            0            0 
  dscp: outgoing
-------------------------------

  0 -  4 :  3134147655            0            0            0            0 
  5 -  9 :           0            0            0            0            0 
 10 - 14 :           0            0            0            0            0 
 15 - 19 :           0            0            0            0            0 
 20 - 24 :           0            0            0            0     54024192 
 25 - 29 :           0            0            0            0            0 
 30 - 34 :           0            0            0            0            0 
 35 - 39 :           0            0            0            0            0 
 40 - 44 :           0            0            0            0            0 
 45 - 49 :           0    373600851            0      1348947            0 
 50 - 54 :           0            0            0            0            0 
 55 - 59 :           0            0            0            0            0 
 60 - 64 :           0            0            0            0 
  cos: incoming 
-------------------------------

  0 -  4 :  1883503606            0            0            0            0 
  5 -  7 :           0            0            0 
  cos: outgoing
-------------------------------

  0 -  4 :  3145237140            0            0     54025057            0 
  5 -  7 :   373600886      1348947         1196 
  output queues enqueued:
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:   362683164           0    10917361
 queue 1:  3034809757    18936560    48678759
 queue 2:    52080131           0           0
 queue 3:     8763406         134    96439164

  output queues dropped:
 queue:    threshold1   threshold2   threshold3
-----------------------------------------------
 queue 0:           0           0           0
 queue 1:           0           0           0
 queue 2:           0           0           0
 queue 3:           0           0           0

Policer: Inprofile:            0 OutofProfile:            0

 

The site router is an ISR4331 running IOS XE 16.06.03.

All traffic has to go through the GRE tunnel to reach the data center and internet at Corp HQ. The remote-site GREs all terminate on a 4451-X in the data center.

I get occasional complaints from this site that "phones drop." When I look at the network monitor I often see the bandwidth at or very near the 50 Mbps. NetFlow v9 identifies a couple of bandwidth hogs -- usually someone copying a huge 12 GB file to a network drive, etc.

 

So I started looking into what I could do and realized that I should probably set up some sort of class and policy maps on the 4331 to preserve bandwidth. I think the policy-map needs to go on the GRE tunnel rather than on the physical Gig interface that is the service-provider connection, since all the traffic goes through the GRE. I am looking for suggestions/comments on the following:

 

class-map match-any VOICE
  match ip dscp 46
class-map match-any SIGNALLING
  match ip dscp af31
!
policy-map VOICE-POLICY
  class VOICE

!!! Maximum  bandwidth VOICE class can use !!!
     priority percent 10
  class SIGNALLING

!!! Minimum bandwidth SIGNALLING can use of remaining !!!!
     bandwidth remaining percent 5

!

policy-map SHAPER
   class class-default

!!! LINK BANDWIDTH INTO MPLS !!!
     shape average 50m
     service-policy VOICE-POLICY

 

!!! APPLY POLICY OUTBOUND ON GRE !!!

interface Tunnel0
   service-policy output SHAPER

!!! PLACES QOS MARKINGS IN GRE HEADER !!!
   qos pre-classify

 

 TIA - Perry

7 REPLIES
Rising star

Hi Perry

Looks good, but the service policy should be applied on the physical interface towards the provider.

Sometimes, on certain platforms, you have to decrease the value for the shaper a little, because the Layer 2 overhead is not accounted for in the shaper. Otherwise you can be sending slightly more traffic than 50M, and if the provider is policing to 50M, they may drop some of your traffic.

/Mikael
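
A minimal sketch of that adjustment, assuming roughly 5% Layer 2 overhead. The 47.5 Mbps figure is illustrative, not measured -- the right value depends on the media and on how the provider polices:

policy-map SHAPER
 class class-default
  ! shape slightly below the 50 Mbps CIR so Layer 2 overhead
  ! does not push the offered rate past the provider's policer
  shape average 47500000
  service-policy VOICE-POLICY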


Hi

Should qos pre-classify be used here to make sure the DSCP markings are copied to the GRE header?

By default (except for very, very old IOS versions), the original packet's ToS is copied to the encapsulated packet's ToS. The purpose of pre-classify is to allow a physical-interface policy to examine a "shadow" copy of some of the original packet's other fields, like UDP/IP port numbers. If you're only looking at the ToS, you don't need pre-classify.
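
As an illustration of the case where pre-classify does matter: classifying on inner UDP port ranges rather than ToS. The ACL name and port range below are hypothetical examples, not taken from the original poster's config:

ip access-list extended RTP-BEARER
 permit udp any any range 16384 32767
!
class-map match-any VOICE-RTP
 match access-group name RTP-BEARER
!
interface Tunnel0
 ! lets a policy on the physical interface classify on the
 ! inner (pre-encapsulation) headers, not just the copied ToS
 qos pre-classify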
VIP Expert

You're on the right track, but a lot more might be needed.

As you mention MPLS and sites (plural), if you have more than two sites, ideally you need QoS support from your MPLS provider too.

Each site should PQ or LLQ VoIP bearer traffic as it sends traffic into the MPLS cloud. If the available egress bandwidth is less than the physical hand-off provides, then you should "shape" for that bandwidth (with a subordinate policy - as you're doing).

Signaling traffic should also be "protected" (i.e., ensure it has guaranteed bandwidth for whatever it needs -- again, as you're doing).

(BTW, if your device supports it, I would recommend FQ for class-default -- for all your other traffic -- it goes a long way toward dealing with a few bandwidth-hog flows.)
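
Applied to the draft policy from the original post, that suggestion would look something like this (a sketch, assuming the IOS XE MQC fair-queue option under class-default):

policy-map VOICE-POLICY
 class VOICE
  priority percent 10
 class SIGNALLING
  bandwidth remaining percent 5
 class class-default
  ! per-flow fair queuing keeps a couple of big file-copy
  ! flows from starving everything else in the class
  fair-queue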

With a multi-site cloud (of more than two sites), you also need to ensure QoS for traffic as it egresses the cloud. For example, an HQ site might overrun any one branch's link capacity, or multiple branches together might overrun any other single site, including the HQ. This is where you need MPLS QoS. If that's not possible, you can shape sites to avoid overrunning other sites (taking into account all the other sites' aggregate maximum bandwidth), but since the sites do not "know" what the other sites are doing, to fully protect critical traffic like VoIP bearer you end up losing available bandwidth.


Thanks for the replies.   QoS configuration  is definitely a dark science!  

 

My MPLS subscribed network bandwidth is 900 Mbps. I have many remote sites at various bandwidths feeding into our MPLS cloud.

 

We use GRE tunnels anchored to loopbacks  through the MPLS to our corporate data center and our D/R site.  The D/R site routes are weighted to be less preferred. 

 

I talked to TAC and he *thinks* the policy needs to be applied to the GRE tunnel rather than the physical Gig I/F... but he's not sure :-/. He also believes pre-classify is needed to ensure priority markings are placed into the GRE header before it's encrypted.

 

The “last mile” to this site is provided by Spectrum fiber. It is a 50 Mbps link, and our network monitor has confirmed the site reaches that limit. I was told the network honors QoS/priority, but I will address that after I have a policy applied that the consensus here agrees is correct. I am looking for help getting a correct policy so voice is protected when users at the site bury the link with their large file copies, etc.


"QoS configuration is definitely a dark science!"

Laugh -- yeah, it often appears to be, or maybe a "dark art," but I think that has a lot to do with it being poorly explained and/or not taught much. (The TAC response being a case in point! "Also believes the pre-classify is needed to ensure priority markings are placed in to GRE header before its encrypted." Sigh.)

(BTW, sometimes, when QoS is "done right," the application performance improvement can seem "magical.")

"I talked to TAC and he *thinks* the policy needs to be applied to GRE rather than the physical Gig I/F... but not sure :-/."

Yes and no. It depends on what you're doing. Sometimes you want a policy on both the tunnels and the physical interface, although not all Cisco devices support that.

For example, if you logically have a hub-and-spoke topology (or that's how your traffic mainly flows), you would shape each tunnel for the far side's bandwidth (with a subordinate policy to prioritize), but you would still want a policy on the physical interface in case the tunnels' aggregate traffic overruns it. (Again, if the physical interface has more bandwidth than a logical cap, you would shape the physical interface, even though the tunnels are also shaped.) (Also again, unless the physical-interface policy needs to "examine" more than ToS, you don't need pre-classify.)
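
A sketch of that arrangement from the hub side, assuming two hypothetical branches at 50 Mbps and 20 Mbps behind a faster physical hand-off (tunnel numbers, policy names, and rates are all illustrative):

policy-map BRANCH-A-SHAPER
 class class-default
  ! far-side (branch A) bandwidth
  shape average 50000000
  service-policy VOICE-POLICY
policy-map BRANCH-B-SHAPER
 class class-default
  ! far-side (branch B) bandwidth
  shape average 20000000
  service-policy VOICE-POLICY
!
interface Tunnel1
 service-policy output BRANCH-A-SHAPER
interface Tunnel2
 service-policy output BRANCH-B-SHAPER
!
! a physical policy is still wanted in case the tunnels'
! aggregate overruns the hand-off (support for simultaneous
! tunnel and physical policies varies by platform)
interface GigabitEthernet0/0/0
 service-policy output SHAPER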

You'll want to find out more about your MPLS provider's QoS support; ideally you want your policy to work with it (or possibly "around" it). Also, MPLS providers can sometimes offer different QoS treatments, often from a menu of choices.

PS:
BTW, have you optimized how you've configured your tunnels? The impact of fragmentation can be significant. Are you sure you need tunnels? (Often tunnels across MPLS are needed when you're doing multicast and the MPLS vendor doesn't support it directly.)
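
One common way to head off GRE fragmentation, sketched here assuming a standard 1500-byte path MTU (GRE over IP adds 24 bytes of overhead, so the tunnel IP MTU is often set to 1476 and the TCP MSS 40 bytes lower still):

interface Tunnel0
 ! 1500 - 24 bytes of GRE/IP encapsulation overhead
 ip mtu 1476
 ! MSS = tunnel MTU - 20 (IP header) - 20 (TCP header)
 ip tcp adjust-mss 1436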