09-24-2014 02:16 AM - edited 03-04-2019 11:49 PM
I have the following policy applied on a 1941:
policy-map QOS_OUT
 class HTTPS
  bandwidth 1024
  police 1024000 192000 384000 conform-action set-dscp-transmit af31 exceed-action set-dscp-transmit default violate-action set-dscp-transmit 0
  fair-queue
 class NETWORK-ROUTING
  bandwidth 100
 class NETWORK-MGMT
  bandwidth 100
 class class-default
  random-detect
  fair-queue
policy-map OUTBOUND-SHAPER
 class class-default
  shape average 9000000
  service-policy QOS_OUT
With this config, I was seeing output drops in the HTTPS class. This was because I was exceeding the default queue depth of 64 packets.
I increased this to 256 packets, which stopped the output drops.
I have also, since my first draft, added the fair-queue command to the HTTPS class so that smaller flows are less affected by larger flows.
Could someone please check that the above policy looks OK and that nothing obviously wrong stands out?
The policer is not meant to drop packets, just to re-mark them down once the bandwidth exceeds 1 Mbps.
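For reference, the two additions (the deeper queue and fair-queue) sit under the HTTPS class roughly like this (a minimal sketch of just the changed lines, not the full policy):

policy-map QOS_OUT
 class HTTPS
  queue-limit 256 packets
  fair-queue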
See show policy-map int output below:
1703269 packets, 2411289461 bytes
5 minute offered rate 60000 bps, drop rate 0 bps
Match: any
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 1703066/2246674391
shape (average) cir 9000000, bc 36000, be 36000
target shape rate 9000000
1574214 packets, 2227648884 bytes
5 minute offered rate 41000 bps, drop rate 0 bps
Match: ip dscp af31 (26)
Queueing
queue limit 256 packets
(queue depth/total drops/no-buffer drops/flowdrops) 0/0/0/0
(pkts output/bytes output) 1574012/2227363668
bandwidth 1024 kbps
police:
cir 1024000 bps, bc 192000 bytes, be 384000 bytes
conformed 182498 packets, 255363918 bytes; actions:
set-dscp-transmit af31
exceeded 1631 packets, 2303354 bytes; actions:
set-dscp-transmit default
violated 1390086 packets, 1969981688 bytes; actions:
set-dscp-transmit default
conformed 0 bps, exceed 0 bps, violate 0 bps
Fair-queue: per-flow queue limit 64
2506 packets, 264300 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip dscp cs6 (48) cs7 (56)
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 2506/179120
bandwidth 100 kbps
123098 packets, 183141623 bytes
5 minute offered rate 16000 bps, drop rate 0 bps
Match: ip dscp cs2 (16)
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 123098/18890565
bandwidth 100 kbps
3450 packets, 234654 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops/flowdrops) 0/0/0/0
(pkts output/bytes output) 3450/241038
Exp-weight-constant: 9 (1/512)
Mean queue depth: 0 packets
class Transmitted Random drop Tail/Flow drop Minimum Maximum Mark
pkts/bytes pkts/bytes pkts/bytes thresh thresh prob
1 1/66 0/0 0/0 22 40 1/10
2 0/0 0/0 0/0 24 40 1/10
3 0/0 0/0 0/0 26 40 1/10
4 0/0 0/0 0/0 28 40 1/10
5 1/58 0/0 0/0 30 40 1/10
6 0/0 0/0 0/0 32 40 1/10
7 0/0 0/0 0/0 34 40 1/10
Fair-queue: per-flow queue limit 16
Accepted Solutions
09-24-2014 06:02 AM
Yes, for 9 Mbps, a 64-packet queue is likely too shallow (I find queue depth, for TCP, should support about half your BDP). BTW, you still have the default queue depth for your default class, and I normally recommend against using WRED unless you really, really understand how to optimize it.
As you also have FQ enabled in your default class, you might consider whether you need your two network classes. Additionally, if you move your policer/marker to an ingress policy (assuming you only have one ingress port), you might just have your HTTPS class traffic fall into your default class too. If you get your subordinate policy down to just the default class, you might also try doing everything within your current parent policy's default class.
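A rough sketch of that direction (the policy name, ingress interface, and HTTPS-IN class are placeholders, not from your config; on ingress the class would need to match the HTTPS traffic by ACL or port, since it won't be marked AF31 yet):

policy-map INBOUND-MARK
 class HTTPS-IN
  police 1024000 192000 384000 conform-action set-dscp-transmit af31 exceed-action set-dscp-transmit default violate-action set-dscp-transmit 0
!
interface GigabitEthernet0/1
 service-policy input INBOUND-MARK
!
policy-map OUTBOUND-SHAPER
 class class-default
  shape average 9000000
  fair-queue
  queue-limit 256 packets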
09-24-2014 08:54 AM
I created this post but it never showed up so I created another :s
Assuming 250-byte packets and a 15 ms RTT, my BDP comes to:
(9,000,000 bps * 0.015 s) / 8 = 16,875 bytes
16,875 bytes / 250 bytes = 67.5 packets. Is this right?
Nonetheless, I increased the HTTPS queue-limit to 256.
I also increased the default-class queue to 256.
I've used WRED as a means to reduce/prevent TCP global synchronisation, as multiple HTTP sessions may congest the WAN link and keep attempting to retransmit at the same time if tail drop is used.
Only DSCP 0 traffic should be caught by the class-default class, so I don't really care what traffic gets dropped in this class.
The point of this is to guarantee 1 Mbps of HTTPS traffic and a small amount for SNMP and routing.
policy-map QOS_OUT
 class ACTURIS-HTTPS
  bandwidth 1024
  police 1024000 192000 384000 conform-action set-dscp-transmit af31 exceed-action set-dscp-transmit default violate-action set-dscp-transmit 0
  queue-limit 256 packets
 class NETWORK-ROUTING
  bandwidth 100
 class NET-MGMT
  bandwidth 100
 class class-default
  fair-queue
  queue-limit 256 packets
  random-detect
policy-map OUTBOUND-SHAPER
 class class-default
  shape average 9000000
  service-policy QOS_OUT
Another possible problem I've found is that the WRED max threshold is quite low for class-default:
Class-map: class-default (match-any)
195130 packets, 275238300 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
queue limit 256 packets
(queue depth/total drops/no-buffer drops/flowdrops) 0/169/0/0
(pkts output/bytes output) 194961/275004521
Fair-queue: per-flow queue limit 64
Exp-weight-constant: 9 (1/512)
Mean queue depth: 0 packets
class Transmitted Random drop Tail/Flow drop Minimum Maximum Mark
pkts/bytes pkts/bytes pkts/bytes thresh thresh prob
0 7/504 0/0 0/0 20 40 1/10
1 0/0 0/0 0/0 22 40 1/10
2 0/0 0/0 0/0 24 40 1/10
3 0/0 0/0 0/0 26 40 1/10
4 0/0 0/0 0/0 28 40 1/10
5 0/0 0/0 0/0 30 40 1/10
6 0/0 0/0 0/0 32 40 1/10
7 0/0 0/0 0/0 34 40 1/10
I was thinking about possibly adding the below command:
random-detect dscp 0 64 128
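If I do add it, I assume it would sit under class-default along these lines (a sketch only; I believe DSCP-based WRED has to be enabled first, and the max threshold needs to stay within the class queue-limit):

policy-map QOS_OUT
 class class-default
  queue-limit 256 packets
  fair-queue
  random-detect dscp-based
  random-detect dscp 0 64 128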
09-24-2014 09:51 AM
As to using WRED: as you're also using FQ, WRED will drop packets out of the class queue globally, while FQ will tail drop from each flow queue. This means that when the interface congests, WRED might drop packets out of shallow flow queues, i.e. not a great combination. (NB: if routers supported FRED, I wouldn't recommend against it as strongly.)
BDP is calculated as path bps times RTT. You will then know the buffer space required for optimal TCP transmission. On network devices, I believe you only need half the BDP for queuing, because only half should be received in a burst (the other half should be "in flight").
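As a rough worked example with the numbers already in this thread (the 9 Mbps shaped rate and your assumed 15 ms RTT):

BDP = 9,000,000 bps * 0.015 s = 135,000 bits = 16,875 bytes
half BDP = ~8,438 bytes (~34 packets at 250 bytes, or ~6 packets at a 1,500-byte MTU)

So the "right" queue depth is very sensitive to the RTT and packet size you assume; at, say, a 150 ms RTT the half-BDP figure grows tenfold, to roughly 84 KB.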
09-25-2014 02:19 AM
Thanks for your help Joseph,
So would you recommend getting rid of WRED or FQ for class-default?
If I should get rid of FQ, do I need to amend my max thresholds for WRED?
09-25-2014 02:50 AM
Yes, I would suggest removing WRED, especially when mixed with FQ.
With FQ, the flows that are causing congestion should also be the ones with the deep flow queues, and FQ will drop from just those flow queues first, i.e. targeting the congestion-causing flows.
Even without FQ, WRED in practice isn't quite as simple to use as it seems in concept. If you search the internet you'll find lots of research on RED behavior and proposed "fixes". The latter is a "clue" that stock RED doesn't work as nicely as we would like. Or even search for FRED on Cisco's site, and then ponder why Cisco offers it if WRED is so good.
BTW, I'm not saying WRED is useless (far from it), but to derive full benefit from it you really, really need to understand both its operation and your traffic.
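Concretely, that would leave class-default with just FQ and the deeper queue, along these lines (a sketch only, reusing the 256-packet depth you already configured):

policy-map QOS_OUT
 class class-default
  no random-detect
  fair-queue
  queue-limit 256 packets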
09-25-2014 05:52 AM
The problem I'm seeing with FQ is that, with a queue-limit of 256 packets, the per-flow queue size is 64 packets.
This is not enough for, let's say, a simple TCP file upload, as packets start to tail-drop pretty much immediately.
I could increase this, but I'm struggling to find information on how much I can increase it without reaching the limits of the Cisco 1921 or 1941 buffers.
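From the counters in my earlier outputs, the per-flow limit seems to track a quarter of the class queue-limit (64 -> 16 and 256 -> 64), so I assume something like the below would lift the per-flow limit to 128, though I don't know how far the platform will let me push it:

policy-map QOS_OUT
 class class-default
  queue-limit 512 packets
  fair-queue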
09-25-2014 06:06 PM
I believe the two limits are free physical RAM and maximum size for queue limit (4 K?).
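Before pushing the queue limits up, a quick sanity check of the first limit is to look at free processor memory and then watch the no-buffer drop counters in your class stats, e.g. (the interface name is just a placeholder):

show memory statistics
show policy-map interface GigabitEthernet0/0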
