4821 Views, 20 Helpful, 22 Replies

QoS on Tunnel IPIP

stevebt28
Level 1

Big question: does the config below work? We are having issues with WRED statistics.

 

Environment:

Two branch locations use MPLS to connect to a datacenter. The QoS configuration is applied on the CE in the datacenter. The branches use VoIP and some Core Vendor software (extremely network sensitive).

 

We welcome any critiques and want to know whether the config below would work in general/at all. In particular, I would like to ensure the Core Vendor traffic never drops or suffers too high a latency.

 

In the current configuration I didn't see any packet drops from WRED (congestion avoidance).

 

This is the QoS configuration.

 

class-map match-any Mgtm
 match access-group name Mgtm-list
!
class-map match-any VoIP
 match protocol rtp audio
 match ip dscp ef
 match access-group name VoIP-list
!
class-map match-any VIDEO
 match access-group name CUCM
!
class-map match-any HIGH
 match access-group name Aplications-list
!
class-map match-any MEDIUM
 match access-group name Network_list
 match access-group name VPN_Services
 match access-group name Skype_Business
!
policy-map Child_Policy-QoS-1MB
 class VoIP
  set dscp ef
  priority 64
 class Mgtm
  set dscp cs6
  bandwidth 32
 class HIGH
  set dscp af31
  bandwidth 364
  random-detect dscp-based
  police cir 512000 bc 16000
   conform-action transmit
   exceed-action set-dscp-transmit af32
 class MEDIUM
  set dscp af21
  bandwidth 400
  random-detect dscp-based
  police cir 128000 bc 4000
   conform-action transmit
   exceed-action set-dscp-transmit af22
 class class-default
  fair-queue
  random-detect dscp-based
  police cir 700000 bc 21875
   conform-action transmit
   exceed-action drop
!
policy-map Parent_Policy-QoS-1MB
 class class-default
  shape average 1024000
  service-policy Child_Policy-QoS-1MB

 

interface Tunnel0
 service-policy output Parent_Policy-QoS-1MB
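For reference, a minimal sketch of how the per-class and WRED counters can be checked once the policy is attached (a standard IOS show command; Tunnel0 is the interface from the config above):

show policy-map interface Tunnel0 output
! for each class with random-detect dscp-based, the output should include the
! mean queue depth plus a per-DSCP table of transmitted, random-drop and
! tail-drop counters, along with the min/max thresholds in effect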

 

22 Replies

Hi

As suggested by PM, I would recommend opening a TAC case. I showed you that it was working on different equipment. However, I don't have any 3945 to test it on. Maybe a router limitation, or a cosmetic bug...
TAC will be able to answer.

Thanks
Francesco
PS: Please don't forget to rate and select as validated answer if this answered your question

Ah, I would expect that platform's CBWFQ to support FQ in any bandwidth class.

As to your ALTA class: FQ in class-default, as in my example policy, usually handles many traffic situations successfully. I.e. first try placing such traffic in the default class.

If you have bulk traffic you can deprioritize such traffic to my example's Low class. If you have time sensitive traffic, that doesn't require LLQ, map it into my example's High class. However, try to only place traffic in the High class that really needs the extra priority and also try to avoid putting traffic there that will saturate the class.

PS:
If you're wondering more about why I recommend FQ and don't recommend RED, you might browse through more of my postings on those subjects within the forums. If you have additional questions, feel free to ask.
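To make the above concrete, a minimal sketch of an FQ-based HQF policy of the kind described (the High/Low class names follow the earlier references, but the percentages and the overall layout are assumptions for illustration, not the exact policy from the earlier post):

policy-map FQ-Example
 class High
  bandwidth percent 30
  fair-queue
 class Low
  bandwidth percent 5
  fair-queue
 class class-default
  fair-queue
! HQF allows fair-queue in any bandwidth class, so each class gets per-flow
! queueing instead of a single FIFO with (W)RED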

Hi Joseph,

 

With respect to your answers: the traffic is already classified and the queueing strategy is correct. LLQ should be configured only for VoIP and Video, using the priority level * command.

 

FQ is a congestion management mechanism, not congestion avoidance. It should be configured in class-default, and WRED is recommended to avoid issues like global synchronization and TCP starvation.
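For context, a minimal sketch of what two-level LLQ with priority level looks like on HQF IOS (the class names, and the absence of an explicit rate or policer, are illustrative only, not the poster's actual configuration):

policy-map LLQ-Example
 class VoIP
  priority level 1
 class VIDEO
  priority level 2
 class class-default
  fair-queue
! priority level 1 is serviced ahead of level 2; both are serviced ahead of
! the bandwidth and default classes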

 

 

Your OP noted "I am inexperienced with QoS." From your last reply to mine, yes, I would tend to agree with you. ;)

It's a bit unclear why you're looking for confirmation and/or critique of your policy while stating your classification and queuing strategy is already correct. (I suspect it isn't, but it's possible it is; QoS policies are tailored for specific results, and I don't know all your criteria. Possibly you're also trying to work within the framework of whatever [usually inadequate, IMO - laugh] QoS support your MPLS vendor provides.)

I suspect you've read somewhere that FQ is used for congestion management, but it, and RED, can both be used to manage congestion. Neither truly avoids congestion, because both drop packets (excluding RED that can set the congestion bit) when there's congestion, and for traffic that "slows" its transmission rate when drops are detected, like TCP, both can avoid massive congestion. However, WRED is more likely to still have global (i.e. within a class) sync issues (you understand it tail drops everything once you go past its max threshold, right?), and WRED might also be more likely to cause TCP starvation, by default, if TCP and non-TCP traffic are subjected to the same WRED processing.

As to "FQ should be configured in class-default", where did you read that? Something dated? Perhaps in pre-HQF documentation, which supported FQ (actually WFQ) only in class-default. One of the nicest feature improvements in HQF was support for FQ in any bandwidth class.

You've taken much interest in WRED. I don't blame you; I too drank that kool-aid years ago. However, trying to get WRED to work optimally led me to read much about it, starting with Dr. Floyd's original paper on it. What's immediately telling is that Dr. Floyd needed to "fix" some of RED's parameters almost immediately when it was tried. Since then, you'll find lots and lots of RED variants to "improve" it. It does make one wonder why there are so many "improved" variants of RED. (Oh, and BTW, the Cisco implementations usually don't seem to follow Dr. Floyd's parameter recommendations.)

In the usual Cisco implementation, WRED tail drops, whereas RED appears to work better if it head drops. This is because tail drops delay the sender's detection of the drop, i.e. negating much of the point of "early" dropping.

(Oh, one interesting feature of RED is how it can drop received packets when the interface queue is actually empty, or conversely not drop packets when there's a massive queue. Both are due to RED's use of a moving average of computed queue depth. I.e. it's really a feature, and there's a reason for it, but it's interesting just the same.)
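That moving average is a knob you can tune; a minimal sketch using a class from the OP's child policy (the command is standard MQC IOS; the value 9 is simply the usual default, shown only to make the knob visible):

policy-map Child_Policy-QoS-1MB
 class MEDIUM
  random-detect dscp-based
  random-detect exponential-weighting-constant 9
! a larger exponent makes the average queue depth respond more slowly to bursts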

Another issue with the usual Cisco implementation (excluding the Cisco FRED variant) is that it's not flow aware, i.e. it can drop a packet from a flow that isn't contributing to the congestion.

What have you read about setting WRED parameters? Are you going to use the Cisco defaults (as you didn't post any)? Are you aware that different platforms might assign completely different parameters for the same link? Again, that makes one wonder about the choice of these parameters, if even Cisco is inconsistent.

The problem with the typical Cisco WRED, like you'll find on a 3900, is that it's difficult to get it to work optimally. Given the issues noted above, especially the lack of targeting the flows causing congestion, and its tail dropping, I've found it's not really better than FQ tail dropping individual flows, i.e. the ones with congestion. Additionally, FQ protects light-bandwidth flows from being delayed by heavy-bandwidth flows. I.e. generally you obtain a much better network application experience.

I might also mention, for about 10 years I developed QoS for a large international software development company, supporting all kinds of WAN links, such as leased lines, frame relay, ATM, MPLS and Internet VPN. I like to brag that we had some offices where our links ran at 100%, all the time, during business hours, yet we had everything from VoIP, video conferencing, video streaming and interactive applications (including, like you note, some very sensitive to delay and/or packet loss) to backups on those links - QoS making it all possible. I.e. let's say I have a bit of experience of what really works, or not, in the real world.

 

Also BTW, Cisco's typical WRED can be easily used for tiered dropping.
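A minimal sketch of that tiered-dropping idea, applied to the HIGH class from the OP's policy (the threshold and mark-probability values are illustrative assumptions; the intent is only that af32, the policer's markdown value, starts dropping earlier than af31):

policy-map Child_Policy-QoS-1MB
 class HIGH
  random-detect dscp-based
  random-detect dscp af31 32 40 10
  random-detect dscp af32 24 32 10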

Hi Joseph,

 

I will open a Cisco TAC case. Thanks for your answer.

 

 

"It's a bit unclear why you're looking for confirmation and/or critique of your policy, such as knowing your classification and queuing strategy is already correct. (I suspect it isn't, but it's possible it is, QoS policies are tailored for specific results, and I don't know all your criteria. Possibly you're also trying to work within the framework of whatever".

 

A// I'm looking for confirmation because WRED shows no statistics on the tunnel interface. The traffic is already classified and marked, and I have configured LLQ for VoIP and Video, CBWFQ for the ALTA and MEDIA classes, and fair-queue in class-default. The QoS policy is configured as HQF.

 

"Neither truly avoids congestion, because both drop packets (excluding RED that can set the congestion bit) when there's congestion, and for traffic that "slows" its transmission rate when drops are detected, like TCP, both can avoid massive congestion"

 

A// That isn't correct. FQ is a mechanism to manage congestion and permits configuring dynamic queueing. RED is a mechanism to avoid congestion, but I have configured DSCP-based WRED with custom profiles on HQF.

 

 

"As to FQ should be configured in class default, where did you read that? Something dated? Perhaps in pre-HQF documentation, which supported FQ, actually WFQ, only in class-default. One of the nicest feature improvements in HQF was support of FQ in any bandwidth class."

 

A// Configuring Class Policy in the Policy Map

To configure a policy map and create class policies that make up the service policy, use the policy-map command to specify the policy map name, then use one or more of the following commands to configure policy for a standard class or the default class:

  • class

  • bandwidth (policy-map class)

  • fair-queue (for class-default class only)

  • queue-limit or random-detect

 

https://www.cisco.com/c/en/us/td/docs/ios/12_2/qos/configuration/guide/fqos_c/qcfwfq.htm

 

 

 

A 12.2 documentation reference, eh? Just as I suspected, you're looking at old, pre-HQF documentation.

I think I understand your focus on WRED's lack of statistics, but your OP did also note "We welcome any critiques and want to know whether the config below would work in general/at all." So, to put it another way, I've been trying to get you to consider that your overall QoS might be less than optimal and that even using WRED might be a waste. However, you don't appear open to accepting critique contrary to your understanding of QoS.

If you do notice that your non-LLQ traffic still appears to suffer when there's congestion, please post again. Perhaps I and others might help you mitigate that.

Hi Joseph,

 

Thanks for your answer. I was checking the 15.5 reference and found this:

 

Configuring Class Policy in the Policy Map

To configure a policy map and create class policies that make up the service policy, use the policy-map command to specify the policy map name, then use one or more of the following commands to configure policy for a standard class or the default class:


  • class

  • bandwidth (policy-map class)

  • fair-queue (for class-default class only)

  • queue-limit or random-detect

For each class that you define, you can use one or more of the listed commands to configure class policy. For example, you might specify bandwidth for one class and both bandwidth and queue limit for another class.

 

The default class of the policy map (commonly known as the class-default class) is the class to which traffic is directed if that traffic does not satisfy the match criteria of other classes whose policy is defined in the policy map.

 

https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_conmgt/configuration/15-mt/qos-conmgt-15-mt-book/qos-conmgt-cfg-wfq.html

 

What do you think about that?

What do I think? I believe the documentation needs correction.

In your platform's Cisco IOS Quality of Service Solutions Command Reference, https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos/command/qos-cr-book/qos-d1.html#wp6666744640, under a discussion of HQF, one example is:

You can apply the fair-queue command to a user-defined class as shown in the following example:

policy-map p1
 class c1
  bandwidth 1000
  fair-queue

But as this isn't the first time I've found documentation to be incorrect, I checked to see if we had a 39xx with your code train. We have 65 3945s in production, but only one is running 15.5(3)M. I jumped on that router and entered a test policy to see whether FQ would take under a bandwidth class; as I expected, it did, i.e.:
xxx#sh policy-map x
  Policy Map x
    Class QOS-CLASS-PREFERRED
      bandwidth 1 (kbps)
      fair-queue
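If useful, a sketch of how the same thing could be double-checked once such a policy is attached somewhere (the interface is a placeholder; this was not part of the test above):

interface GigabitEthernet0/1
 service-policy output x
!
show policy-map interface GigabitEthernet0/1 output
! the QOS-CLASS-PREFERRED section should show it as a queueing class with its
! bandwidth guarantee and fair-queue enabled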