
Quality of Service - Question about nested policy behavior when the parent is congested

I've been asked to design a QoS policy on the WAN router at head office to address the following requirements:

  1. Prevent head office from overloading the remote sites' 1 Mbps WAN connections (i.e. shape average)
  2. Give priority to certain applications on the network

The reason for #1 is to try to avoid the WAN DCoS drop scheme, as it is non-negotiable and unsuitable for the customer's requirements. #2 is simply to protect and give priority to important applications (e.g. VoIP, Citrix, etc.)

I thought about this and figured a nested policy should address these requirements, i.e.:

  • Parent Policy - Overall Queuing scheme
    • Site 1
      • Shape Average 1Mbps
      • Service-policy "Child Policy"
    • Site 2
      • Same as above.
      • ...
  • Child Policy - Per remote site
    • Priority Queue 20%
      • VOIP
    • Bandwidth 50%
      • Citrix
    • Bandwidth 20%
      • Printing, FTP, SNMP

My question relates to the logic bomb associated with this type of approach. If every single remote site has 1 Mbps and there are 50 sites (i.e. 50 Mbps total) and head office has only 20 Mbps, how will the head office WAN router cater for both "priority" and "bandwidth" queues holistically?

If there are 50 sites with 1 Mbps each and a grand total of 20% has been allocated to every site for VoIP in a priority queue... does that mean 10 Mbps is reserved for the priority queue? Or will the parent/child relationship here mean that each site is guaranteed a fraction of the total link, and a relative fraction of the priority queue thereafter?

As an example here is some configuration:

! CLASSIFICATION/MARKING - Application Marking Scheme
class-map match-any CM-PREC-3-IN
 match protocol ftp
 match protocol snmp
 match access-group name ACL-PRINTING
class-map match-any CM-PREC-4-IN
 match protocol citrix
class-map match-any CM-PREC-5-IN
 match protocol rtp
 match protocol sip
 match protocol skinny
 match protocol rtcp
!
! CLASSIFICATION/MARKING - Per Remote Site Marking Scheme
class-map match-all SITE-0001
 match access-group name ACL-SITE-0001
class-map match-all SITE-0002
 match access-group name ACL-SITE-0002
!
! CLASSIFICATION/MARKING - Marking Policy on LAN interface of CPE
policy-map PM-CLASSIFY-IN
 class CM-PREC-5-IN
  set ip precedence 5
 class CM-PREC-4-IN
  set ip precedence 4
 class CM-PREC-3-IN
  set ip precedence 3
!
! QUEUING/SCHEDULING - Child Policy - Per Remote Site
policy-map PM-QUEUE-MARK-OUT-DEFAULT
 class CM-PREC-5-IN
  priority percent 20
 class CM-PREC-4-IN
  bandwidth remaining percent 50
 class CM-PREC-3-IN
  bandwidth remaining percent 20
 class class-default
  fair-queue
  random-detect
!
! QUEUING/SCHEDULING - Parent Policy - Shaping per site using child policy
policy-map PM-SHAPE-QUEUE-PER-SITE
 description *** Shape and Queue Per Site ***
 class SITE-0001
  shape average 945600
  service-policy PM-QUEUE-MARK-OUT-DEFAULT
 class SITE-0002
  shape average 945600
  service-policy PM-QUEUE-MARK-OUT-DEFAULT
 class class-default
  fair-queue
!
! QUEUING/SCHEDULING - Overall shaping policy
policy-map PM-SHAPE-QUEUE-OUT
 description *** Shape and Queue ***
 class class-default
  shape average 19456000
  service-policy PM-SHAPE-QUEUE-PER-SITE
!
interface GigabitEthernet0/0
 service-policy output PM-SHAPE-QUEUE-OUT
!
ip access-list extended ACL-PRINTING
 permit tcp any any eq 9100
 permit tcp any eq 9100 any
ip access-list extended ACL-SITE-0001
 permit ip any 192.168.1.0 0.0.0.255
 permit ip 192.168.1.0 0.0.0.255 any
ip access-list extended ACL-SITE-0002
 permit ip any 192.168.2.0 0.0.0.255
 permit ip 192.168.2.0 0.0.0.255 any

If the above can be achieved more efficiently in other ways, let me know.

6 Replies

Hello

Looking at your QoS config: the hub's cumulative WAN link is shaped to 20 Mbps, with each spoke allocated just under 1 Mbps.

My calculation for each child policy, using 1 Mbps:

Interface bandwidth: 1,024,000 bps

Subtract the bandwidth reserved for class class-default (the 25% default) = 256,000 bps, leaving 768,000 bps

Subtract the LLQ bandwidth, 20% of 768,000 = 153,600 bps

The remaining bandwidth, in this case 614,400 bps, is subdivided using the bandwidth remaining percent commands.

If the child default class is not being utilized, this bandwidth is shared between the defined classes, but the reservation can be manipulated with the max-reserved-bandwidth command.
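For reference, a minimal sketch of that last knob, assuming a pre-HQF IOS image where max-reserved-bandwidth is configured at interface level (interface name taken from the config above; newer HQF images largely ignore this command):

interface GigabitEthernet0/0
 ! Raise the reservable share from the 75% default to 90%,
 ! shrinking the 25% normally held back for class-default and overhead.
 max-reserved-bandwidth 90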

Regards,
Paul


Thanks for the response; however, that was not quite what I was after. Reading back through my post, I see that maybe I wasn't clear enough.

So...

  • 20 Mbps connection at head office.
  • 50 remote sites, each with a 1 Mbps connection.
  • We want to prevent head office overloading the remote WAN links, using per-site policies ("shape average 945600").
  • We want to include a 20% priority queue within that 1 Mbps for voice traffic ("priority percent 20"... LLQ).
  • We want other traffic to share the remaining bandwidth on a weighted basis (Citrix "bandwidth 50", Printing "bandwidth 20"... CBWFQ).

For the LLQ, I wanted to understand whether the 50 cumulative child policies would:

  • Reserve 20% of the cumulative bandwidth requested by the child policies, regardless of the parent policy's shaped average
    (i.e. 20% of 50 Mbps = 10 Mbps), or
  • Reserve 20% of the parent policy's bandwidth, regardless of the cumulative bandwidth requested by the child policies
    (i.e. 20% of 20 Mbps = 4 Mbps).

Forgetting about the bandwidth (CBWFQ) items for now, let's examine the LLQ. I understand that there is only one "LLQ"... I just wanted to try to understand how the router goes about "building" the size of that priority queue, given that we're creating it based on "priority percent" statements in nested child policies.

Will the priority commands reserve 20% of the parent policy's total throughput, or a cumulative 20% of every child policy's throughput?

I think it will reserve 10 Mbps rather than 4 Mbps.
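For lab verification, the priority rate the router actually provisions per site should be visible with something like the following (interface and class names taken from the config above; exact output varies by platform and IOS version):

show policy-map interface GigabitEthernet0/0 output class SITE-0001
show policy-map interface GigabitEthernet0/0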

Hello Jonathan,

As I see it, the percentage value for your LLQ is taken from the shaped child policy, not from the parent; however, it could increase when you are not experiencing congestion, i.e. when the spokes are not maxing out their WAN links.

So, going back to the calculation in my previous post, the total LLQ would amount to 153,600 × 50 = 7,680,000 bps (7.68 Mbps).

However, you are starting off with 250% oversubscription on your hub WAN link, which means the cumulative traffic that can hit it is 50 Mbps on a 20 Mbps link; no matter how much QoS you apply, you're going to see congestion.

Regards,
Paul


Joseph W. Doherty

Logically, what you want to do is shape your aggregate HQ WAN link at 20 Mbps and prioritize by traffic class, while also shaping each remote site to 1 Mbps, again prioritizing by traffic class. Physically, it might not be possible to define a configuration to support this on a single device.

Even if you could, you might have another "logic bomb": if you have 50 sites with 1 Mbps each and HQ only has 20 Mbps, how were you planning to deal with possible branch-to-HQ aggregate oversubscription?

What you have, a 3-level policy, is basically the correct approach, although if there's congestion at the top level, traffic won't be correctly prioritized by class (i.e. I believe all sites will get proportional bandwidth).

If there are 50 sites with 1 Mbps each and a grand total of 20% has been allocated to every site for VoIP in a priority queue... does that mean 10 Mbps is reserved for the priority queue? Or will the parent/child relationship here mean that each site is guaranteed a fraction of the total link, and a relative fraction of the priority queue thereafter?

Child policies should get their bandwidth percentages relative to their parent's bandwidth. So, for example, your 20% LLQ would be about 200 kbps (20% of the 1 Mbps), but that's not guaranteed, because the top policy doesn't allocate bandwidth for LLQ against the 20 Mbps.
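To make that arithmetic concrete: under the 945,600 bps per-site shaper, "priority percent 20" should resolve to about 189 kbps. An equivalent absolute form would be the following sketch (illustrative only, not part of the thread's config):

policy-map PM-QUEUE-MARK-OUT-DEFAULT
 class CM-PREC-5-IN
  ! 20% of the 945,600 bps parent shape = 189,120 bps, i.e. roughly 189 kbps
  priority 189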

PS:

What can you do to make this right? Well, for outbound, you could either add another device "in-line" to correctly shape the 20 Mbps with traffic-class priorities, or upgrade the WAN link to 50 Mbps. The latter also deals with the branch-to-HQ aggregate problem. (An alternative for branch to HQ would be to shape some or all of the branches to ensure the 20 Mbps isn't oversubscribed.)
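As a back-of-the-envelope sketch of that last alternative, with hypothetical numbers: dividing the hub's 20 Mbps evenly across the 50 spokes gives 400 kbps each, which every spoke could enforce on its own WAN egress (the policy name is illustrative):

! Cap each spoke's traffic toward the hub so 50 spokes cannot
! collectively exceed the hub's 20 Mbps (20,000,000 / 50 = 400,000 bps).
policy-map PM-SPOKE-TO-HUB
 class class-default
  shape average 400000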

Hi pdriver and JosephDoherty. Thank you both for your responses; both are helpful.

I agree that the 2.5:1 oversubscription is an issue that QoS can't really address completely. Monitoring on that WAN connection at head office shows an outbound throughput of roughly 10 Mbps on average.

The real point of this policy is to avoid the ISP's DCoS queuing/scheduling configuration in the WAN cloud. The 6-tier ISP DCoS model dictates 10% for Citrix and 15% for LLQ based on the IP Precedence values. That model is non-negotiable, hence the 3-tiered nested policy is deemed necessary to try to avoid the DCoS scheme.

I've got some further reading to do and some lab tests to run to verify this. I'll leave this open for a bit longer, but I think it has been answered. I'll mark it as answered once I've verified in my virtual lab.

Thanks again.

Joseph W. Doherty

I agree that the 2.5:1 oversubscription is an issue that QoS can't really address completely. Monitoring on that WAN connection at head office shows an outbound throughput of roughly 10 Mbps on average.

The real point of this policy is to avoid the ISP's DCoS queuing/scheduling configuration in the WAN cloud. The 6-tier ISP DCoS model dictates 10% for Citrix and 15% for LLQ based on the IP Precedence values. That model is non-negotiable, hence the 3-tiered nested policy is deemed necessary to try to avoid the DCoS scheme.

Hmm, thinking a little more about this, I'm wondering whether the additional "in-line" device might be the same hub device, physically looping traffic through itself. The first time through, it shapes for each site and prioritizes each site's traffic within the shaped rate; the second time, it shapes for your aggregate outbound bandwidth, again prioritizing traffic types within the shaped rate. This would be the ideal for your hub-to-spoke direction.
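As a rough sketch, the second pass on the WAN-facing interface might look like the following, reusing the thread's per-class child policy against the aggregate shaper (the policy name is illustrative, and this assumes traffic arrives already per-site shaped from the first pass):

policy-map PM-AGGREGATE-OUT
 class class-default
  shape average 19456000
  service-policy PM-QUEUE-MARK-OUT-DEFAULT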

I've dealt with ISP "QoS" too. Doing it yourself, you can often provide much better and more effective QoS.

Spoke-to-hub is still a possible problem, unless you ensure, as I've already noted, that the combined spoke bandwidths don't oversubscribe the hub's bandwidth. However, assuming the spokes pull more than they push, if you provide QoS on their physical ports you might then be able to rely on the ISP's QoS from cloud to hub egress on the, hopefully, rare occasions when aggregate spoke bandwidth bursts beyond physical capacity.

BTW, since you've mentioned a cloud, is there no spoke-to-spoke traffic?

Also BTW, don't be too sanguine about an average of 10 Mbps. Such averages, especially over multi-minute time periods, can be very deceptive if you need to support real-time traffic like VoIP.
