09-24-2014 02:16 AM - edited 03-04-2019 11:49 PM
I have the following policy applied on a 1941:
With this config, I was seeing output drops in the HTTPS class. This was because I was exceeding the default queue depth of 64 packets.
I increased this to 256 packets which stopped the output drops.
I have also, since my first draft, added the fair-queue command in the HTTPS policy so that smaller flows are less affected by larger flows.
Could someone please check that the above policy looks OK and that nothing obviously wrong stands out?
The policer is not meant to drop packets, but just to de-mark them once the bandwidth exceeds 1 Mbps.
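Conceptually, a policer that de-marks rather than drops works like a single-rate token bucket: conforming packets keep their marking, exceeding packets are re-marked (here to DSCP 0) but still forwarded. A minimal sketch of that behaviour, with the CIR, burst size, and DSCP values as illustrative assumptions rather than values taken from the actual policy:

```python
# Sketch of a single-rate, mark-down policer: packets within the token
# bucket conform and keep their DSCP; packets beyond it are re-marked to
# DSCP 0 but still transmitted. Rates/burst/DSCP values are assumptions.

CIR_BPS = 1_000_000   # committed information rate, bits per second
BC_BYTES = 125_000    # committed burst: one second's worth of tokens

def police(packets, cir_bps=CIR_BPS, bc_bytes=BC_BYTES):
    """packets: list of (arrival_time_s, size_bytes, dscp).
    Returns a list of (dscp_out, action) per packet."""
    tokens = bc_bytes
    last = 0.0
    out = []
    for t, size, dscp in packets:
        # Refill the bucket for the time elapsed since the last packet.
        tokens = min(bc_bytes, tokens + (t - last) * cir_bps / 8)
        last = t
        if size <= tokens:
            tokens -= size
            out.append((dscp, "conform-transmit"))
        else:
            # Exceed action: de-mark to DSCP 0, still forward the packet.
            out.append((0, "exceed-remark"))
    return out
```

So a small packet arriving with a full bucket conforms, while a burst larger than the remaining tokens is forwarded de-marked rather than dropped.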
See show policy-map int output below:
09-24-2014 06:02 AM
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
Yes, for 9 Mbps, a 64 packet queue is likely too shallow (I find queue depth, for TCP, should support about half your BDP). BTW, you still have the default queue depth for your default class and I normally recommend against using WRED unless you really, really understand how to optimize it.
As you also have FQ enabled in your default class, you might consider whether you need your two network classes. Additionally, if you move your policer/marker to an ingress policy (assuming you only have one ingress port), you might just have your HTTPS class traffic fall into your default class. If you get your subordinate policy down to just the default class, you might also try putting everything within your current parent policy's default class.
09-24-2014 08:54 AM
I created this post but it never showed up, so I created another :s
Assuming 250-byte packets and a 15 ms RTT, my BDP comes to:
(9,000,000 bps * 0.015 s) / 8 = 16,875 bytes
16,875 B / 250 B = 67.5 packets. Is this right?
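The arithmetic above can be sanity-checked directly (the 250-byte average packet size and 15 ms RTT are the poster's assumptions):

```python
# Check the BDP arithmetic: BDP in bytes = (link bps * RTT seconds) / 8,
# then divide by an assumed average packet size to express it in packets.
link_bps = 9_000_000      # 9 Mbps path
rtt_s = 0.015             # 15 ms round-trip time
avg_pkt_bytes = 250       # assumed average packet size

bdp_bytes = link_bps * rtt_s / 8          # 16,875 bytes
bdp_packets = bdp_bytes / avg_pkt_bytes   # 67.5 packets
```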
Nonetheless, I increased the HTTPS queue-limit to 256.
I also increased the default-class queue to 256.
I've used WRED as a means to reduce/prevent TCP synchronisation as multiple HTTP sessions may congest the WAN link and keep attempting to retransmit at the same time if tail-drop is used.
Only DSCP 0 traffic should be caught by class-default, so I don't really care what traffic gets dropped in this class.
The point of this is to guarantee 1Mbps of HTTPS traffic and a small amount for SNMP and ROUTING.
Another possible problem I've found is that the MAX-THRESH is quite low for class-default:
Class-map: class-default (match-any)
195130 packets, 275238300 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
queue limit 256 packets
(queue depth/total drops/no-buffer drops/flowdrops) 0/169/0/0
(pkts output/bytes output) 194961/275004521
Fair-queue: per-flow queue limit 64
Exp-weight-constant: 9 (1/512)
Mean queue depth: 0 packets
class Transmitted Random drop Tail/Flow drop Minimum Maximum Mark
pkts/bytes pkts/bytes pkts/bytes thresh thresh prob
0 7/504 0/0 0/0 20 40 1/10
1 0/0 0/0 0/0 22 40 1/10
2 0/0 0/0 0/0 24 40 1/10
3 0/0 0/0 0/0 26 40 1/10
4 0/0 0/0 0/0 28 40 1/10
5 0/0 0/0 0/0 30 40 1/10
6 0/0 0/0 0/0 32 40 1/10
7 0/0 0/0 0/0 34 40 1/10
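The WRED figures in the output above (exp-weight-constant 9, per-precedence min/max thresholds, mark probability 1/10) combine as follows, assuming standard WRED behaviour: the average queue depth is an EWMA with weight 1/2^n, and drop probability rises linearly from 0 at min-threshold to 1/mark-prob-denominator at max-threshold, with forced drops above max-threshold. A minimal sketch:

```python
# Minimal sketch of standard WRED maths, matching the show output above.

def wred_avg(prev_avg, current_depth, n=9):
    """EWMA of queue depth; n=9 gives a weight of 1/512 per sample."""
    w = 1 / (2 ** n)
    return prev_avg * (1 - w) + current_depth * w

def wred_drop_prob(avg_depth, min_th, max_th, mark_denom=10):
    """Drop probability for a given average queue depth."""
    if avg_depth < min_th:
        return 0.0                 # below min-threshold: never drop
    if avg_depth >= max_th:
        return 1.0                 # above max-threshold: forced drop
    # Linear ramp from 0 up to 1/mark_denom at max-threshold.
    return (avg_depth - min_th) / (max_th - min_th) / mark_denom

# e.g. the precedence 0 row above: min 20, max 40, mark prob 1/10
```

This makes the concern concrete: with a max-threshold of 40 on a 256-packet queue, WRED starts forcing drops well before the queue limit is reached.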
I was thinking about possibly adding the below commands:
09-24-2014 09:51 AM
As to using WRED: as you're also using FQ, WRED will drop packets out of the class queue globally, while FQ will tail drop from each flow queue. This means that when the interface congests, WRED might drop packets out of shallow flow queues, i.e. not a great combination. (NB: if routers supported FRED, I wouldn't recommend against it as strongly.)
BDP is calculated as path bps times RTT; that tells you the buffer space required for optimal TCP transmission. On network devices, I believe you only need half the BDP for queuing, because only half should be received in a burst (the other half should be "in flight").
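That rule of thumb can be written as a small helper: recommended queue depth is roughly half the BDP, expressed in packets. The 250-byte average packet size below is an assumption carried over from the earlier calculation and would need tuning for your own traffic mix:

```python
# Half-BDP queue-depth rule of thumb, in packets.
def recommended_queue_depth(link_bps, rtt_s, avg_pkt_bytes=250):
    bdp_bytes = link_bps * rtt_s / 8   # full BDP in bytes
    return bdp_bytes / 2 / avg_pkt_bytes

# For the 9 Mbps / 15 ms path discussed in this thread:
# recommended_queue_depth(9_000_000, 0.015) -> 33.75 packets
```

Note the result is sensitive to the assumed packet size: with 1500-byte packets the figure shrinks accordingly, while many small packets push it up.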
09-25-2014 02:19 AM
Thanks for your help Joseph,
So you would recommend getting rid of WRED or FQ for class-default?
If I should get rid of FQ, do I need to amend my max thresholds for WRED?
09-25-2014 02:50 AM
Yes, I would suggest removing WRED, especially when mixed with FQ.
With FQ, the flows causing congestion should also have the deepest flow queues, and FQ will drop from just those flow queues first, i.e. targeting the congestion-causing flows.
Even without FQ, WRED in practice isn't quite as simple to use as it seems in concept. If you search the internet you'll find lots of research on RED behavior and proposed "fixes". The latter is a "clue" that stock RED doesn't work as nicely as we would like. Or, search for FRED on Cisco's site, and then ponder why Cisco offers it if WRED is so good.
BTW, I'm not saying WRED is useless (far from it), but to derive full benefit from it, you really, really need to understand both its operation and your traffic.
09-25-2014 05:52 AM
The problem I'm seeing with FQ is that with a queue-limit of 256 packets, the per-flow queue size is 64 packets.
This is not enough for, let's say, a simple TCP file upload, as packets start to tail-drop pretty much immediately.
I could increase this, but I'm struggling to find information on how much I can increase it without reaching the limits of the Cisco 1921 or 1941 buffers.
09-25-2014 06:06 PM
I believe the two limits are free physical RAM and maximum size for queue limit (4 K?).