1260 Views · 10 Helpful · 4 Replies

Shaping kicking in too early?

ashkentapa
Level 1

Hi all.

My WAN links are delivered over 100 Mbps Ethernet interfaces. I'm using a hierarchical output QoS policy to apply QoS at the carrier-supplied bandwidth. At some of my lower-bandwidth sites I'm seeing the shaper kick in before the allocated bandwidth has been exceeded: with the policy map applied, my 2 Mbps sites are constrained to about 1.5 Mbps. This is happening at a number of sites. My higher-bandwidth sites (6 Mbps and 10 Mbps) do not have this problem.

Here's the relevant configuration:

policy-map 2mbps-shape-out
 description causes backpressure from the interface to activate queuing
 class class-default
  shape average 2000000
  service-policy ROUTER-TO-WAN
!
policy-map ROUTER-TO-WAN
 class Gold-nrt
  bandwidth percent 5
  set ip dscp af43
 class Silver-nrt3
  bandwidth percent 20
 class Silver-nrt2
  bandwidth percent 20
 class Silver-nrt1
  bandwidth percent 15
 class class-default
  fair-queue
  random-detect dscp-based
!
interface FastEthernet0/0/0
 description WAN link (2 Mbps)
 service-policy output 2mbps-shape-out

If I apply this policy at sites with a 2 Mbps carrier link, they only achieve about 1.5 Mbps of output. If I take the policy off, they achieve the full 2 Mbps.

Checking the statistics for the interface shows the shaping is kicking in before the interface reaches 2 Mbps:

ROUTER#sh policy-map int FastEthernet0/0/0
FastEthernet0/0/0

  Service-policy output: 2mbps-shape-out

    Class-map: class-default (match-any)
      253744012 packets, 235067072699 bytes
      5 minute offered rate 1487000 bps, drop rate 55000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 34/4582997/0
      (pkts output/bytes output) 250683971/231938977403
      shape (average) cir 2000000, bc 8000, be 8000
      target shape rate 2000000

      Service-policy : ROUTER-TO-WAN

         *** stats of child policy not shown ***

I'm not sure what is going on here. Perhaps the queue limit on the parent policy is too low? If that's the case, why don't I see this problem at sites with higher shaping limits?

Thanks very much in advance.

Accepted Solution

Joseph W. Doherty
Hall of Fame

It might have helped if you had also posted your child stats, since that's likely where the drops are taking place.

What we can see is 34 packets in the queue (64 max) and a current drop rate of about 3.6%, which is high enough to degrade TCP performance.

I too suspect the queue depth is insufficient, but check the queue limit of your child policy's class-default. What you might especially be bumping into is the per-flow queue limit, which often defaults to 1/4 of the overall queue limit.

I also see you're using WRED, which I generally advise against, especially with the Cisco defaults.

Besides adjusting queue limits and removing WRED, you might also increase your shaper's Tc, and perhaps also enable Be. Is this a later IOS version? Older IOS shapers, as I recall, used to default to a 25 ms Tc; newer versions default to 4 ms. Try a Tc of 10 ms.
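
Something along these lines might be a starting point (exact HQF syntax can vary by IOS release; the queue-limit value is purely illustrative, and a Bc of 20000 bits at a 2000000 bps CIR works out to a 10 ms Tc):

policy-map ROUTER-TO-WAN
 class class-default
  fair-queue
  ! drop WRED; fair-queue already manages this queue
  no random-detect
  ! illustrative value; the per-flow limit is derived from this
  queue-limit 256 packets
!
policy-map 2mbps-shape-out
 class class-default
  ! Bc of 20000 bits / CIR of 2000000 bps = 10 ms Tc
  shape average 2000000 20000
  service-policy ROUTER-TO-WAN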


4 Replies


Hi Joseph. Thanks for your response. Good point regarding the shaper's Tc. The problem routers are running 12.4T and have a default Tc of 4 ms.

I left out the child stats to try to keep my post as brief as possible. Looks like it was a little too brief! Here are the stats from the child policy:

      Service-policy : ROUTER-TO-WAN

        Class-map: Gold-nrt (match-any)
          34652918 packets, 6425782853 bytes
          5 minute offered rate 50000 bps, drop rate 0 bps
          Match: ip dscp af43 (38)
            34069870 packets, 6216720650 bytes
            5 minute rate 47000 bps
          Match: ip dscp cs6 (48)
            475262 packets, 49846832 bytes
            5 minute rate 0 bps
          Match: ip precedence 4
            107784 packets, 159215209 bytes
            5 minute rate 2000 bps
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/682/0
          (pkts output/bytes output) 34652217/6282338457
          bandwidth 5% (100 kbps)
          QoS Set
            dscp af43
              Packets marked 34652902

        Class-map: Silver-nrt3 (match-any)
          1019063 packets, 457918753 bytes
          5 minute offered rate 11000 bps, drop rate 0 bps
          Match: ip dscp af33 (30)
            1019063 packets, 457918753 bytes
            5 minute rate 11000 bps
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/127/0
          (pkts output/bytes output) 1018936/457768008
          bandwidth 20% (400 kbps)

        Class-map: Silver-nrt2 (match-any)
          1886420 packets, 810634365 bytes
          5 minute offered rate 28000 bps, drop rate 0 bps
          Match: ip dscp af23 (22)
            1886420 packets, 810634365 bytes
            5 minute rate 28000 bps
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/533/0
          (pkts output/bytes output) 1885887/809980781
          bandwidth 20% (400 kbps)

        Class-map: Silver-nrt1 (match-any)
          221917 packets, 109436437 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: ip dscp af13 (14)
            221917 packets, 109436437 bytes
            5 minute rate 0 bps
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/266/0
          (pkts output/bytes output) 221651/109063499
          bandwidth 15% (300 kbps)

        Class-map: class-default (match-any)
          215963669 packets, 227263297093 bytes
          5 minute offered rate 1387000 bps, drop rate 55000 bps
          Match: any
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops/flowdrops) 27/4581389/0/0
          (pkts output/bytes output) 212905280/224279826658
          Fair-queue: per-flow queue limit 16
            Exp-weight-constant: 9 (1/512)
            Mean queue depth: 13 packets
            dscp      Transmitted               Random drop            Tail/Flow drop     Minimum  Maximum  Mark
                      pkts/bytes                pkts/bytes             pkts/bytes         thresh   thresh   prob
            default   212905291/224279826658    4550731/5264918227     30658/28982781     20       40       1/10

I think you are right regarding the per-flow queue limit. If I'm reading the output above correctly, it is saying the per-flow queue limit is 16 packets and the mean queue depth is 13 packets. I checked another router having the same problem and the mean queue depth was 15 packets.

So I will try increasing the queue limit on the default queue on the child policy.
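
For example, assuming the per-flow limit keeps its default 1/4 ratio, raising the class-default queue limit from 64 to, say, 256 packets should lift the per-flow limit from 16 to 64 packets.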

Joseph W. Doherty

Yes, you're correct that this snapshot shows a mean queue depth of 13 packets for the child policy's class-default. It also shows the actual current queue depth as 27. You also have a huge number of WRED drops, both early and tail drops.

Remove WRED, which offers little benefit since you've also configured FQ in class-default. Increase Tc. Increase the child's class-default queue limit.

What appears to be happening is that traffic saturates the available bandwidth, but the current policy settings are too aggressive about dropping, which overly slows the traffic.

Hi Joseph. I've spent a week working through your suggestions and testing out a few ideas of my own. The hard part about this problem is that I've got a number of identical sites (same QoS configuration, same router model, same interface types, same IOS, same WAN link type and speed) and I'm only seeing the problem at a few of them. I believe the problem must be related to traffic patterns.

Anyway, after much experimentation I've found that your suggestion to change my shaper's Tc to 10 ms has fixed the problem. At the affected sites I am manually setting a Bc that gives me a Tc of 10 ms; at the other sites I am leaving the default setting.

For those turning up this post in a web search, here's how you calculate Tc.

On IOS 12.4, when you configure a shaper, IOS will automatically set a Bc that results in a Tc of 4 ms. The output below is taken from a router running 12.4.22:

ROUTER#sh policy-map int fa0/0/0

FastEthernet0/0/0

  Service-policy output: 2mbps-shape-out

    Class-map: class-default (match-any)
      23714871 packets, 18934584387 bytes
      5 minute offered rate 509000 bps, drop rate 0 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/28867/0
      (pkts output/bytes output) 23860918/19141258101
      shape (average) cir 2000000, bc 8000, be 20000
      target shape rate 2000000

The parent policy map has a CIR of 2000000 and a Bc of 8000, so Tc = 8000 / 2000000 = 0.004 s, or 4 ms.
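
Working backwards, a 10 ms Tc at the same CIR needs Bc = Tc × CIR = 0.010 × 2000000 = 20000 bits. I suspect part of the problem with the 4 ms default at this speed is that a Bc of 8000 bits is only 1000 bytes, less than one full-size 1500-byte Ethernet frame (12000 bits), whereas 20000 bits lets the shaper release 2500 bytes per interval.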

This was causing me problems at some of my sites, and Joseph suggested I change the shaping parameters to give a Tc of 10 ms. So I manually set Bc to 20000 in the parent policy:

policy-map 2mbps-shape-out
 description causes backpressure from the interface to activate queuing
 class class-default
  shape average 2000000 20000
  service-policy ROUTER-TO-WAN
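
After applying this, the shape line in "show policy-map interface FastEthernet0/0/0" should report bc 20000 against the cir of 2000000, which is a quick way to confirm the 10 ms Tc is in effect.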
