04-28-2013 06:36 PM - edited 03-04-2019 07:45 PM
The situation I've been asked to address using QoS is to design a policy on the WAN router at head office that meets the following requirements:
1. Shape traffic to each remote site to just under that site's access rate.
2. Within each site's shaped bandwidth, protect and give priority to the important application traffic.
The reason for #1 is to try to avoid the WAN DCoS drop scheme, as it is non-negotiable and unsuitable for the customer's requirements. #2 is simply to protect and give priority to important applications (e.g. VoIP, Citrix, etc.).
I thought about this and figured a nested (hierarchical) policy should address the requirement; example configuration is below.
My question relates to the logic bomb associated with this type of approach. If every remote site has 1 Mbps and there are 50 sites (50 Mbps total) while head office has only 20 Mbps, how will the head office WAN router cater for both the "priority" and "bandwidth" queues holistically?
If there are 50 sites with 1 Mbps each and a grand total of 20% has been allocated to every site for VoIP in a priority queue, does that mean 10 Mbps is reserved for the priority queue? Or will the parent/child relationship here mean that each site is guaranteed a fraction of the total link, and a relative fraction of the priority queue thereafter?
As an example, here is some configuration:
! CLASSIFICATION/MARKING - Application Marking Scheme
class-map match-any CM-PREC-3-IN
match protocol ftp
match protocol snmp
match access-group name ACL-PRINTING
class-map match-any CM-PREC-4-IN
match protocol citrix
class-map match-any CM-PREC-5-IN
match protocol rtp
match protocol sip
match protocol skinny
match protocol rtcp
!
! CLASSIFICATION/MARKING - Per Remote Site Marking Scheme
class-map match-any SITE-0001
match access-group name ACL-SITE-0001
class-map match-any SITE-0002
match access-group name ACL-SITE-0002
!
! CLASSIFICATION/MARKING - Marking Policy on LAN interface of CPE
policy-map PM-CLASSIFY-IN
class CM-PREC-5-IN
set ip precedence 5
class CM-PREC-4-IN
set ip precedence 4
class CM-PREC-3-IN
set ip precedence 3
!
! QUEUING/SCHEDULING - Child Policy - Per Remote site
policy-map PM-QUEUE-MARK-OUT-DEFAULT
class CM-PREC-5-IN
priority percent 20
class CM-PREC-4-IN
bandwidth remaining percent 50
class CM-PREC-3-IN
bandwidth remaining percent 20
class class-default
fair-queue
random-detect
!
! QUEUING/SCHEDULING - Parent Policy - Shaping per site using child policy
policy-map PM-SHAPE-QUEUE-PER-SITE
description *** Shape and Queue Per Site ***
class SITE-0001
shape average 945600
service-policy PM-QUEUE-MARK-OUT-DEFAULT
class SITE-0002
shape average 945600
service-policy PM-QUEUE-MARK-OUT-DEFAULT
class class-default
fair-queue
!
! QUEUING/SCHEDULING - Overall shaping policy
policy-map PM-SHAPE-QUEUE-OUT
description *** Shape and Queue ***
class class-default
shape average 19456000
service-policy PM-SHAPE-QUEUE-PER-SITE
!
interface GigabitEthernet0/0
service-policy output PM-SHAPE-QUEUE-OUT
!
!
ip access-list extended ACL-PRINTING
permit tcp any any eq 9100
permit tcp any eq 9100 any
ip access-list extended ACL-SITE-0001
permit ip any 192.168.1.0 0.0.0.255
permit ip 192.168.1.0 0.0.0.255 any
ip access-list extended ACL-SITE-0002
permit ip any 192.168.2.0 0.0.0.255
permit ip 192.168.2.0 0.0.0.255 any
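For completeness, the marking policy would be attached inbound on the LAN side so traffic is already marked before it reaches the WAN policy; something like the sketch below (GigabitEthernet0/1 is just an assumed LAN interface name here):
!
! Sketch only - assumed LAN-facing interface
interface GigabitEthernet0/1
service-policy input PM-CLASSIFY-IN
!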
If the above can be achieved more efficiently in other ways, let me know.
04-29-2013 01:26 AM
Hello
Looking at your QoS config, the hub's cumulative WAN link is shaped to 20 Mbps, with each spoke allocated just under 1 Mbps.
My calculation for each child policy, using 1 Mbps:
Interface bandwidth: 1,024,000 bps
Subtract the bandwidth reserved for class class-default (default 25%) = 256,000 bps, leaving 768,000 bps
Subtract any LLQ bandwidth: 20% of 768,000 bps = 153,600 bps
So the remaining bandwidth in this case, 614,400 bps, is subdivided using the bandwidth remaining percent commands.
If the child class-default is not being utilized, this bandwidth is shared between the defined classes; the reserved percentage itself can be manipulated with the max-reserved-bandwidth command.
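For example, on an older (pre-HQF) IOS image it would be something like this (interface name assumed; newer HQF images no longer use this command):
!
! Sketch only - lets the queuing policies allocate up to 100% of the
! bandwidth instead of the default 75% (pre-HQF IOS only)
interface GigabitEthernet0/0
max-reserved-bandwidth 100
!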
Regards,
Paul
Please don't forget to rate any posts that have been helpful.
Thanks.
04-29-2013 02:15 AM
Thanks for the response; however, that was not quite what I was after. Having read through my post again, I see that maybe I wasn't clear enough.
So...
I wanted to understand how the 50 cumulative child policies are handled for LLQ.
Forgetting about the bandwidth (CBWFQ) classes for now and examining just the LLQ: I understand that there is only one "LLQ". I just wanted to understand how the router goes about "building" the size of that priority queue, given that it is created from "priority percent" statements in the nested child policies.
Will the priority commands reserve 20% of the total bandwidth of the parent policy's throughput, or a cumulative 20% of every child policy's throughput?
I think it will reserve 10 Mbps rather than 4 Mbps.
04-29-2013 02:49 AM
Hello Jonathan,
As I see it, the percentage value for your LLQ is that of the shaped child policy, not of the parent; however, it could increase when you are not experiencing congestion, i.e. when the spokes are not maxing out their WAN links.
So going back to my calculation in the previous post, the total LLQ would amount to 153,600 bps x 50 = 7,680,000 bps (roughly 7.7 Mbps).
However, you are starting off with an oversubscription of 250% (2.5:1) on your hub WAN link, which means the cumulative traffic that can hit your hub WAN link is 50 Mbps on a 20 Mbps link, so no matter how much QoS you apply you are going to see congestion.
Regards,
Paul
Please don't forget to rate any posts that have been helpful.
Thanks.
04-29-2013 07:37 AM
Logically, what you want to do is shape your aggregate HQ WAN link at 20 Mbps and prioritize by traffic class, while also shaping each remote site to 1 Mbps and prioritizing by traffic class within that. Physically, it might not be possible to define a configuration that supports all of this on a single device.
Even if you could, you might have another "logic bomb": if you have 50 sites with 1 Mbps each and HQ only has 20 Mbps, how were you planning on dealing with possible branch-to-HQ aggregate oversubscription?
What you have, a 3-level policy, is basically the correct approach, although if there's congestion at the top level, traffic won't be correctly prioritized by class (i.e. I believe all sites will just get proportional bandwidth).
If there are 50 sites with 1 Mbps each and a grand total of 20% has been allocated to every site for VoIP in a priority queue, does that mean 10 Mbps is reserved for the priority queue? Or will the parent/child relationship here mean that each site is guaranteed a fraction of the total link, and a relative fraction of the priority queue thereafter?
Child policies get their bandwidth percentages relative to their parent's bandwidth. So, for example, your 20% LLQ would be about 200 Kbps (20% of the 1 Mbps), but that isn't guaranteed, because the top-level policy doesn't allocate any LLQ bandwidth against the 20 Mbps.
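To put rough numbers on it:
50 sites x ~200 Kbps of LLQ each = ~10 Mbps of potential priority traffic
20% of the 20 Mbps aggregate = only 4 Mbps
(Using the actual 945600 bps shape rate per site it's about 189 Kbps per site, roughly 9.5 Mbps total, but the mismatch is the same.)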
PS:
What can you do to make this right? Well, for outbound you could either add another device "in-line" to correctly shape the 20 Mbps with traffic class priorities, or upgrade the WAN link to 50 Mbps. The latter also deals with the problem of the branch-to-HQ aggregate. (An alternative for branch-to-HQ would be to shape some or all of the branches to ensure the 20 Mbps isn't oversubscribed.)
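As a rough sketch of what the "in-line" device's outbound policy could look like (the class-map, policy, and interface names below are just placeholders, matching on the precedence values you're already setting):
!
class-map match-any CM-PREC-5-OUT
match ip precedence 5
class-map match-any CM-PREC-4-OUT
match ip precedence 4
class-map match-any CM-PREC-3-OUT
match ip precedence 3
!
! Child policy - prioritize by traffic class within the aggregate shaper
policy-map PM-AGGREGATE-QUEUE
class CM-PREC-5-OUT
priority percent 20
class CM-PREC-4-OUT
bandwidth remaining percent 50
class CM-PREC-3-OUT
bandwidth remaining percent 20
class class-default
fair-queue
!
! Parent policy - shape the aggregate to the head office WAN rate
policy-map PM-AGGREGATE-SHAPE
class class-default
shape average 19456000
service-policy PM-AGGREGATE-QUEUE
!
! Assumed WAN-facing interface on the in-line device
interface GigabitEthernet0/0
service-policy output PM-AGGREGATE-SHAPE
!
The per-site shaping with its own child prioritization would stay on your existing hub router; this second pass only enforces the aggregate and the class priorities within it.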
04-29-2013 05:40 PM
Hi pdriver and JosephDoherty. Thank you both for your responses; both are helpful.
I agree that the 2.5:1 oversubscription is an issue that QoS can't really address completely. Monitoring on that WAN connection at head office does show an average outbound throughput of roughly 10 Mbps.
The real point of this policy is to avoid the ISP DCoS queuing/scheduling configuration in the WAN cloud. The 6-tier ISP DCoS model dictates 10% for Citrix and 15% for LLQ based on the IP Precedence values. The model is non-negotiable, hence the 3-tier nested policy is deemed necessary to try to avoid this DCoS scheme.
I've got some further reading to do and some lab tests to run to verify this further. I'll leave this open for a bit longer, but I think it has been answered. I'll mark it as answered once I've verified it in my virtual lab.
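(For the verification I'm planning to start with something like the commands below: the first shows live counters and allocated rates for the attached hierarchy, the second the configured values of the child queuing policy.)
show policy-map interface GigabitEthernet0/0
show policy-map PM-QUEUE-MARK-OUT-DEFAULT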
Thanks again.
04-29-2013 06:30 PM
I agree that the 2.5:1 oversubscription is an issue that QoS can't really address completely. Monitoring on that WAN connection at head office does show an average outbound throughput of roughly 10 Mbps. The real point of this policy is to avoid the ISP DCoS queuing/scheduling configuration in the WAN cloud. The 6-tier ISP DCoS model dictates 10% for Citrix and 15% for LLQ based on the IP Precedence values. The model is non-negotiable, hence the 3-tier nested policy is deemed necessary to try to avoid this DCoS scheme.
Hmm, thinking a little more about this, I'm wondering whether an additional "in-line" device might be the same hub device physically looping traffic through itself. The first time through, it shapes each site and prioritizes each site's traffic within that shape. The second time through, it shapes your aggregate outbound bandwidth, again prioritizing traffic types within that shape. This would be the ideal for your hub-to-spoke direction.
I've dealt with ISP "QoS" too. Doing it yourself, you can often provide much better and more effective QoS.
Spoke-to-hub is still a possible problem, unless you ensure, as I've already noted, that the combined spoke bandwidths don't oversubscribe the hub's bandwidth. However, assuming spokes pull more than they push, if you provide QoS on their physical ports you might then be able to rely on the ISP's QoS from the cloud to the hub's egress for the, hopefully, rare occasions when the spokes' aggregate bandwidth bursts beyond the hub's physical capacity.
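For example, each spoke could shape its own WAN egress to its share of the hub's 20 Mbps, reusing the same child queuing policy if it's also configured on the spokes (the names, rate, and interface below are only illustrative):
!
! Sketch only - roughly 20 Mbps / 50 spokes; adjust to each site's
! expected hub-bound load
policy-map PM-SPOKE-SHAPE-OUT
class class-default
shape average 400000
service-policy PM-QUEUE-MARK-OUT-DEFAULT
!
! Assumed spoke WAN-facing interface
interface GigabitEthernet0/0
service-policy output PM-SPOKE-SHAPE-OUT
!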
BTW, since you've mentioned a cloud, is there no spoke-to-spoke traffic?
Also BTW, don't be too sanguine about an average of 10 Mbps. Such averages, especially over multi-minute time periods, can be very deceptive if you need to support real-time traffic like VoIP.