
02-14-2017 10:05 PM - edited 03-05-2019 08:01 AM
I have a 150Mbps internet connection connected to a Cisco ISR 3925.
The ISP is telling me they police traffic to ~150Mbps (anything above gets dropped).
So, they recommend I traffic shape on my router facing them.
The connection between my router and the ISP is a gig interface.
I ended up doing this:
policy-map pm-150
class class-default
shape average 150000000
shape max-buffers 4096
interface GigabitEthernet0/1
service-policy output pm-150
I added the max-buffers 4096 command to help alleviate drops I'm seeing on my router interface. However, I am still seeing these drops.
Any other recommendations?
Anything I can do to help with the output drops? I thought the shaping would help.
Solved! Go to Solution.
Accepted Solutions
02-15-2017 09:36 AM
However, you have pointed out TCP's flow control mechanism uses drops to throttle itself. Will the traffic shaping get in the way of that?
Yes and no, or it's an "it depends" kind of answer.
Realize that without the shaper, when you burst above 150 Mbps, your provider immediately drops the excess. Those drops might not have been easily visible to you. (If you want to see them easily, replace your shaper with a policer.)
Unlike the policer, a shaper queues overrate traffic. This can be both good and bad.
It's generally good when the overrate comes from a short-term burst, because the buffering keeps those packets from being lost. It's bad for a long-term overrate, because the sender gets a delayed notification that it's sending more than the path can handle, latency goes up, and (perversely) it can actually increase the number of drops and/or slow the overall transfer rate.
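A minimal policer sketch of that idea, assuming the 150 Mbps rate from this thread (burst sizes left at platform defaults, so treat it as illustrative rather than a drop-in replacement):
policy-map pm-150-police
class class-default
police cir 150000000 conform-action transmit exceed-action drop
interface GigabitEthernet0/1
service-policy output pm-150-police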

02-14-2017 11:58 PM
Hello,
ideally the average should be 80 percent of the max bandwidth, so the shaper gets activated before the bandwidth limit is reached. Try the below:
policy-map pm-150
class class-default
shape average 120000000 3750000 0
shape max-buffers 4096
02-15-2017 04:10 AM
Hello
I would suggest, if possible, marking/classifying your LAN traffic before it reaches the WAN interface. Then you can give some preference to the important traffic on the WAN interface and possibly apply some fair queuing to the default class in times of congestion.
Example:
LAN
access-list 100 permit udp any any range 16384 32767
access-list 100 permit tcp any any eq 1720
access-list 101 permit ip 10.1.1.0 0.0.0.255 any
access-list 102 permit ip any any
class-map match-all VOIP
match access-group 100
class-map match-all MY-DATA
match access-group 101
class-map match-all REST
match access-group 102
policy-map LAN
class VOIP
set ip dscp ef
class MY-DATA
set ip dscp 24
class REST
set ip dscp default
int xxx
description LAN facing interface
service-policy input LAN
WAN
class-map EF
match ip dscp EF
class-map DSCP24
match ip dscp 24
policy-map WAN_Child
class EF
priority percent 10
class DSCP24
bandwidth percent 20
class class-default
fair queue
policy-map WAN_Parent
class class-default
shape average 153600000 (Bc should default to around 2400000 bits = rate x default Tc of 0.125/8 s)
service-policy WAN_Child
interface xx
description WAN facing interface
service-policy output WAN_Parent
@georg
it looks like your Bc calculation is a policing-style burst value rather than the shaping Tc-derived value.
With the default Tc of 0.125/8 s, Bc = 150000000 x 0.015625 = 2343750 bits.
res
Paul
Please rate and mark as an accepted solution if you have found any of the information provided useful.
This then could assist others on these forums to find a valuable answer and broadens the community’s global network.
Kind Regards
Paul

02-15-2017 07:24 AM
Thanks Paul. The tagging is a great idea. I may have to do it based on source and destination IPs, but that should still work.
Any reason why you did "shape average 153600000"?
02-15-2017 05:07 AM
I thought the shaping would help.
The point of shaping, in your case, is to avoid your provider dropping your traffic where you have no control over what gets dropped. Shaping allows you to manage your traffic's congestion.
On many Cisco devices, I believe shapers/policers "count" L3 bandwidth, but providers generally police/limit L2 bandwidth. So, if that's happening, you need to shape your L3 bandwidth less to allow for L2 overhead. Georg suggests shaping at 80%, but I've often found you can do okay at about 85%. (Actual allowance depends on your actual traffic's packet sizes, and how your traffic bursts and how your provider polices.)
As to your drop issue, that can be difficult to eliminate entirely. Dedicated traffic shaping devices work best, but on typical Cisco devices you can sometimes trade off larger queues (which can increase latency), which you appear to be trying, or you can try to improve your drop management. The latter might be done using WRED (very difficult to get right) or using FQ (also suggested by Paul) and tuning its flow-queue sizes (not all devices support the option to change them).
BTW, understand that total elimination of drops, when there's less bandwidth along the path than the end hosts support, can be almost impossible to accomplish (again, especially without a dedicated traffic shaping device that does things like throttle ACKs and spoof RWINs). Often the best that can be accomplished is having the fewest drops possible while still using drops for bandwidth management.
I believe older IOS shapers would implicitly use FQ; I'm not so sure about later IOS versions. If not, you can add a child policy to provide FQ, e.g.:
policy-map pm-150
class class-default
shape average 150000000
shape max-buffers 4096
service-policy pm-child-FQ
policy-map pm-child-FQ
class class-default
fair-queue
02-15-2017 07:30 AM
Thanks Joseph. So you would do .85*150000000 for the shape average?
So shape average 127500000?
Also, you mentioned tuning flow-queue sizes. Are you referring to "shape max-buffers"? I figured I should set it at the max (in my case, 4096).
Thanks in advance!
02-15-2017 08:12 AM
So you would do .85*150000000 for the shape average?
So shape average 127500000?
Correct.
Also, you mentioned tuning flow-queue sizes. Are referring to the "shape max-buffers"?
I don't believe so. I recall that command allocates the total number of buffers. There's a different command (if supported) that determines the maximum queue depth per flow.
BTW, for TCP, I've found for the WAN "bottleneck" device, the egress buffer size needs to support about half the BDP (bandwidth delay product). Less and TCP will be unable to use all the path bandwidth. More, and you create needless drops.
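If the platform supports it, I believe HQF-based IOS lets you set that per-flow depth with queue-limit under the class running fair-queue. A hedged sketch, building on the pm-child-FQ example above (the 64 is purely an illustrative value, and whether queue-limit applies per flow depends on your IOS version, so verify on your platform):
policy-map pm-child-FQ
class class-default
fair-queue
queue-limit 64 packets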
02-15-2017 09:14 AM
Thanks. Turns out I am running rather old code.
The shape max-buffers 4096 and queue-limit 4096 commands are actually the same thing (shape max-buffers is the older command).
The old code I am running has a max of 4096; newer versions allow 32768.
I may try to do a code upgrade to increase the queue-limit.
However, you have pointed out TCP's flow control mechanism uses drops to throttle itself. Will the traffic shaping get in the way of that?
02-15-2017 12:07 PM
Thanks Joseph.
You mentioned:
BTW, for TCP, I've found for the WAN "bottleneck" device, the egress buffer size needs to support about half the BDP (bandwidth delay product). Less and TCP will be unable to use all the path bandwidth. More, and you create needless drops.
The buffer-size/queue-size I see is typically set as "packets".
I.e.:
policy-map pm-200
class class-default
shape average 200000000
queue-limit 32768 packets
From what I've read, BDP is typically expressed in bytes. For example, 200 Mbps and 15 ms of latency would come out to 200000000 bps * 0.015 s = 3,000,000 bits (or 375,000 bytes). How would you equate that to the "packets" needed for the queue-limit command?

02-15-2017 12:25 PM
Joseph, any thoughts on how TCP throttles/regulates the bandwidth it consumes when traffic shaping is enabled? From my understanding, TCP needs packet drops in order to tell a stream to throttle back and send less traffic.
02-15-2017 12:54 PM
Again, correct.
However, some of the later TCP stacks adjust (slow) their flow rate if they see the RTT jump (they assume packets are being queued).
Shaping tries to mimic a slower interface.
If you have gig ingress and FE egress, how would the behavior differ from a shaper? (It does differ, but again, the shaper is trying to mimic a slower interface.)
Actually, policers also mimic a slower interface.
If you have gig ingress and FE egress, how would you expect behavior to differ if the FE egress only supported a queue depth of 2 packets vs. a queue depth of 100 packets? The former is similar to a policer, the latter to a shaper. Which behavior do you want? (Hint: another "it depends" answer.)
02-15-2017 12:36 PM
Correct, one of the many limitations of trying to provide really advanced QoS on most Cisco platforms.
Assuming your largest packets are 1500 bytes (or whatever your max size is), you can determine the number of packets that will support the desired BDP. Of course, your buffer allocation (in bytes) comes up short if the packets are not maximum-sized, but often it's the flows with max-sized packets that cause the congestion.
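A rough worked example using the 200 Mbps / 15 ms numbers from above and the half-BDP rule of thumb (the 125-packet figure is only illustrative):
! BDP = 200,000,000 bps * 0.015 s = 3,000,000 bits = 375,000 bytes
! half the BDP = ~187,500 bytes; at 1500-byte packets that's ~125 packets
policy-map pm-200
class class-default
shape average 200000000
queue-limit 125 packets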

02-15-2017 08:09 AM
Also, I added a shaping policy to the router interface that faces the ISP, and it actually made the intermittent packet loss worse.
Any ideas why that would be?
In theory, my shaper should be a bit better than the ISP's policer.
