Beginner

Shaping to a very small percentage of interface rate

Hello all,

I've got a bit of a problem, and I'm hoping someone else has some experience with this.  My client uses a metro Ethernet solution to connect to local branches.  The metro Ethernet circuit is delivered on a Gigabit interface, but the individual subinterfaces for each branch office are only allocated 10 Mbps apiece.  I've applied shaping to each subinterface, and I've tuned the time interval (Tc) to 10 ms per Cisco's recommendation for converged networks.  Some of my classes use WRED for congestion avoidance.

The problem is, I'm dropping packets well before I reach the bandwidth allocated to the shaper.  I believe this is due to shaping the interface to such a small percentage of its actual transfer rate: essentially, the policy sends 10 ms worth of traffic in a fraction of a millisecond at line rate and queues traffic for the rest of the interval.  I believe this queue buildup is what causes WRED to start dropping packets.
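For reference, the shaper's per-interval allowance follows Bc = CIR × Tc, which is the Byte Limit/Increment visible in the show policy-map output later in the thread.  A quick illustrative sketch in Python:

```python
def shaper_interval_bytes(cir_bps: int, tc_ms: float) -> int:
    """Bytes a class-based shaper may release per interval: Bc (bits) = CIR * Tc."""
    bc_bits = cir_bps * (tc_ms / 1000.0)
    return int(bc_bits // 8)

# 10 Mbps shape with Tc = 10 ms -> 12500 bytes per interval
print(shaper_interval_bytes(10_000_000, 10))   # 12500
# Halving Tc to 5 ms halves the per-interval allowance
print(shaper_interval_bytes(10_000_000, 5))    # 6250
```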

I'm trying to determine if I should lower my time interval even further, or if I should increase my queue lengths, or use a combination of both.  If anyone has had experience with this behavior and has suggestions, I'm all ears.

Thanks,

Blake

Hall of Fame Master

Have you tried running the whole network without any shaping on your side?


Please elaborate.  I don't see how that applies.  I'm seeing WRED random drops, which only happen when packets are queued.  If I don't shape, I won't see any drops because no packets will be queued, but I also won't be able to guarantee bandwidth to the voice or critical classes.


You wouldn't be able to guarantee BW, but you may find that things work surprisingly well without any complication.

This is my experience with average enterprise customers using a mix of data and voice on 10 Mbps circuits.


Thanks for the suggestion, but I'm looking for direction on properly configuring QoS, not removing it.


blakeaa827 wrote:

Thanks for the suggestion, but I'm looking for direction on properly configuring QoS, not removing it.

I don't have a problem with that; you're welcome to insist on actively dropping or delaying packets that would otherwise have been delivered without any problem.

Just let me remark that your hasty rating of "1" on my post above shows an inability to keep an open mind in technical discussions, and a good dose of disrespect toward someone who is surely more experienced than you and who tries to give advice with the good will of helping.

Besides that, have a nice day, Sir.


I don't know you.  You don't know me.  So let's set comparisons about our experience aside.  When someone asks a technical question about a technology, especially one expressly recommended by Cisco for a specific situation, the response should not be "don't use it."  That doesn't help anyone, intellectually or practically.

Now, if you're well versed in QoS and have suggestions on how to properly tweak it in this situation, I'd be more than happy to hear you out.  If not, I'd prefer to leave the floor open to someone who is.  My poor rating of your post was a direct result of its being completely irrelevant to my question.

Cisco Employee

Hi Blake,

If the shaper becomes active and packets in one class reach the configured WRED threshold, drops will happen.  You can try changing the queue-limit or the WRED thresholds to provide more buffering within the class.

It would be helpful if you could provide the output of show policy-map interface.

Regards,

Lei Tian


Hello Lei,


Thank you very much for the reply.  I do believe that is what's happening; the shaper is becoming active far too soon.  I considered raising the WRED thresholds, but if I can keep packets from being queued at all, or at least minimize queuing when bandwidth use is low, I would prefer that.  Here is a copy of my policy-map statistics before I made some modifications this evening.  I have since lowered my Tc value from 10 ms to 5 ms in an effort to shrink my queues and shorten the time between servicing.  If packets are still queued and dropped during business hours tomorrow, I intend to try increasing the WRED thresholds.  Also, I'm aware my voice traffic is not being tagged properly; that's a separate issue, but my primary concern right now is my data traffic.

Thanks again,

Blake


GigabitEthernet0/0/0.77

  Service-policy output: shape-10meg
         
    Class-map: class-default (match-any)
      6680824 packets, 3700885290 bytes
      30 second offered rate 928000 bps, drop rate 0 bps
      Match: any
      Traffic Shaping
           Target/Average   Byte   Sustain   Excess    Interval  Increment
             Rate           Limit  bits/int  bits/int  (ms)      (bytes) 
         10000000/10000000  12500  100000    0         10        12500   

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping
        Active Depth                         Delayed   Delayed   Active
        -      0         6676472   3695105621 1290650   1586139549 no

      Service-policy : wan

        Class-map: voice (match-all)
          0 packets, 0 bytes
          30 second offered rate 0 bps, drop rate 0 bps
          Match: ip dscp ef (46)
          Queueing
            Strict Priority
            Output Queue: Conversation 264
            Bandwidth 33 (%)
            Bandwidth 3300 (kbps) Burst 82500 (Bytes)
            (pkts matched/bytes matched) 0/0
            (total drops/bytes drops) 0/0

        Class-map: signaling (match-all)
          0 packets, 0 bytes
          30 second offered rate 0 bps, drop rate 0 bps
          Match: ip dscp cs3 (24)
          Queueing
            Output Queue: Conversation 265
            Bandwidth 5 (%)
            Bandwidth 500 (kbps)Max Threshold 64 (packets)
            (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

        Class-map: control (match-any)
          219454 packets, 63435175 bytes
          30 second offered rate 10000 bps, drop rate 0 bps
          Match: ip dscp cs6 (48)
            5093 packets, 424646 bytes
            30 second rate 0 bps
          Match: ip dscp cs2 (16)
            214361 packets, 63010529 bytes
            30 second rate 9000 bps
          Queueing
            Output Queue: Conversation 266
            Bandwidth 5 (%)
            Bandwidth 500 (kbps)Max Threshold 64 (packets)
            (pkts matched/bytes matched) 18597/4778264
        (depth/total drops/no-buffer drops) 0/0/0

        Class-map: critical (match-all)
          318350 packets, 327989291 bytes
          30 second offered rate 107000 bps, drop rate 0 bps
          Match: ip dscp af21 (18) af22 (20)
          Queueing
            Output Queue: Conversation 267
            Bandwidth 27 (%)
            Bandwidth 2700 (kbps)
            (pkts matched/bytes matched) 126386/172626504
        (depth/total drops/no-buffer drops) 0/10/0
             exponential weight: 9
             mean queue depth: 0

   dscp    Transmitted      Random drop      Tail drop    Minimum Maximum  Mark
           pkts/bytes       pkts/bytes       pkts/bytes    thresh  thresh  prob
   af11       0/0               0/0              0/0           32      40  1/10
   af12       0/0               0/0              0/0           28      40  1/10
   af13       0/0               0/0              0/0           24      40  1/10
   af21  318340/327975571      10/13720          0/0           32      40  1/10
   af22       0/0               0/0              0/0           28      40  1/10
   af23       0/0               0/0              0/0           24      40  1/10
   af31       0/0               0/0              0/0           32      40  1/10
   af32       0/0               0/0              0/0           28      40  1/10
   af33       0/0               0/0              0/0           24      40  1/10
   af41       0/0               0/0              0/0           32      40  1/10
   af42       0/0               0/0              0/0           28      40  1/10
   af43       0/0               0/0              0/0           24      40  1/10
    cs1       0/0               0/0              0/0           22      40  1/10
    cs2      46/23364           0/0              0/0           24      40  1/10
    cs3       0/0               0/0              0/0           26      40  1/10
    cs4       0/0               0/0              0/0           28      40  1/10
    cs5       0/0               0/0              0/0           30      40  1/10
    cs6     420/32760           0/0              0/0           32      40  1/10
    cs7       0/0               0/0              0/0           34      40  1/10
     ef       0/0               0/0              0/0           36      40  1/10
   rsvp       0/0               0/0              0/0           36      40  1/10
default       0/0               0/0              0/0           20      40  1/10


        Class-map: bulk (match-all)
          1256569 packets, 229345468 bytes
          30 second offered rate 2000 bps, drop rate 0 bps
          Match: ip dscp af11 (10) af12 (12)
          Queueing
            Output Queue: Conversation 268
            Bandwidth 4 (%)
            Bandwidth 400 (kbps)
            (pkts matched/bytes matched) 133689/127505237
        (depth/total drops/no-buffer drops) 0/78/0
             exponential weight: 9
             mean queue depth: 0

   dscp    Transmitted      Random drop      Tail drop    Minimum Maximum  Mark
           pkts/bytes       pkts/bytes       pkts/bytes    thresh  thresh  prob
   af11 1256491/229227064      78/118404         0/0           32      40  1/10
   af12       0/0               0/0              0/0           28      40  1/10
   af13       0/0               0/0              0/0           24      40  1/10
   af21       0/0               0/0              0/0           32      40  1/10
   af22       0/0               0/0              0/0           28      40  1/10
   af23       0/0               0/0              0/0           24      40  1/10
   af31       0/0               0/0              0/0           32      40  1/10
   af32       0/0               0/0              0/0           28      40  1/10
   af33       0/0               0/0              0/0           24      40  1/10
   af41       0/0               0/0              0/0           32      40  1/10
   af42       0/0               0/0              0/0           28      40  1/10
   af43       0/0               0/0              0/0           24      40  1/10
    cs1       0/0               0/0              0/0           22      40  1/10
    cs2       0/0               0/0              0/0           24      40  1/10
    cs3       0/0               0/0              0/0           26      40  1/10
    cs4       0/0               0/0              0/0           28      40  1/10
    cs5       0/0               0/0              0/0           30      40  1/10
    cs6     192/14976           0/0              0/0           32      40  1/10
    cs7       0/0               0/0              0/0           34      40  1/10
     ef       0/0               0/0              0/0           36      40  1/10
   rsvp       0/0               0/0              0/0           36      40  1/10
default       0/0               0/0              0/0           20      40  1/10


        Class-map: scavenger (match-all)
          0 packets, 0 bytes
          30 second offered rate 0 bps, drop rate 0 bps
          Match: ip dscp cs1 (8)
          Queueing
            Output Queue: Conversation 269
            Bandwidth 1 (%)
            Bandwidth 100 (kbps)Max Threshold 64 (packets)
            (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

        Class-map: class-default (match-any)
          4886450 packets, 3080115298 bytes
          30 second offered rate 808000 bps, drop rate 0 bps
          Match: any
          Queueing
            Output Queue: Conversation 270
            Bandwidth 25 (%)
            Bandwidth 2500 (kbps)
            (pkts matched/bytes matched) 1016074/1286989849
        (depth/total drops/no-buffer drops) 0/4263/0
             exponential weight: 9
             mean queue depth: 0

  class    Transmitted      Random drop      Tail drop    Minimum Maximum  Mark
           pkts/bytes       pkts/bytes       pkts/bytes    thresh  thresh  prob
      0 4882188/3074467687   4186/5550757       77/97314       20      40  1/10
      1       0/0               0/0              0/0           22      40  1/10
      2      56/35640           0/0              0/0           24      40  1/10
      3       0/0               0/0              0/0           26      40  1/10
      4       0/0               0/0              0/0           28      40  1/10
      5       0/0               0/0              0/0           30      40  1/10
      6    1372/107016          0/0              0/0           32      40  1/10
      7       0/0               0/0              0/0           34      40  1/10
   rsvp       0/0               0/0              0/0           36      40  1/10


Hello from me.  I think priority queueing for the critical traffic is a better solution.  WRED drops random packets to avoid congestion, so what is happening is normal.

Try priority for this class, so it has guaranteed bandwidth and its packets go out ahead of all other packets, giving better delay/jitter.

HTH


Hello HTH,

I don't believe LLQ will help.  WRED only randomly drops packets that are queued, and the LLQ doesn't have such a queue at all, so it's more like strict tail drop.  I believe even more packets would get lost.  Priority queueing is really designed for real-time UDP traffic.

-Blake


Hi Blake,

Packets will be queued as long as the receiving rate is higher than what the shaper can send out.  Changing the Tc will not avoid packets being queued; however, it might help lower the WRED drops.  Since a software router's shaper sends bursts rather than a smooth CIR, a lower Tc will make the router service the queue more frequently, though I think that will somewhat impact the CPU; you can try it and see if it helps.  If you want to change the WRED thresholds, just be careful with the queue-limit size.  The default queue limit for the shaper is 1000, and that should be greater than the sum of the queue-limits from all classes.  You can try a low threshold of 100 and a high threshold of 300 as a starting point.
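As an illustrative sketch of that tuning (the class name and bandwidth come from Blake's output; the queue-limit value and exact command placement are assumptions, and only the 100/300 thresholds come from the post above):

```
policy-map wan
 class critical
  bandwidth percent 27
  ! keep the sum of all class queue-limits below the
  ! parent shaper's default of 1000 packets
  queue-limit 350 packets
  random-detect dscp-based
  ! min threshold 100, max threshold 300, as suggested above
  random-detect dscp af21 100 300
```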

Regards,

Lei Tian


Here is where I think the problem lies.  The shaper is active even when the throughput is less than 10% of the target shape rate...

GigabitEthernet0/0/0.75

  Service-policy output: shape-10meg

    Class-map: class-default (match-any)
      2697195 packets, 634761202 bytes
      30 second offered rate 741000 bps, drop rate 0 bps
      Match: any
      Traffic Shaping
           Target/Average   Byte   Sustain   Excess    Interval  Increment
             Rate           Limit  bits/int  bits/int  (ms)      (bytes) 
         10000000/10000000  6250   50000     0         5         6250    

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping
        Active Depth                         Delayed   Delayed   Active
        -      32        2696152   633261249 157336    186568429 yes


Hi Blake,

A lower Tc will make the shaper become active more frequently; I think that is expected.  With the current setting, if the interface receives more than 6250 bytes within 5 ms, the shaper will become active.  As I mentioned, changing the Tc won't prevent packets from being queued.

Did you have a chance to adjust the min/max thresholds?  That should help prevent drops from WRED.

Regards,

Lei Tian


Hello Lei,

I agree the shaper will become active more often, but I would also think it shouldn't stay active as long, which I'm hoping will shorten my queue lengths.  It's still frustrating that it activates when the interface is pushing less than 10% of the configured throughput.  I'm aware, however, that the throughput reading is a 30-second average, and that the bursty nature of Ethernet on such a link-speed-mismatched interface is to be expected.  I started by doubling the size of my queues to see if I could minimize drops, but I fear that limiting throughput to 1% of the serialization speed will require massive queue sizes, even when the 30-second average is well below the targeted bandwidth.
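The mismatch Blake describes can be quantified: at the 1 Gbps access rate, an entire 5 ms allowance of 6250 bytes can arrive in roughly 50 microseconds, so a handful of back-to-back full-size frames trips the shaper even when the 30-second average is far below 10 Mbps.  A quick sketch using the numbers from the thread:

```python
# Illustrative numbers from the thread: 1 Gbps access, 10 Mbps shape, Tc = 5 ms
LINE_RATE_BPS = 1_000_000_000
BC_BYTES = 6250            # per-interval allowance (10 Mbps * 5 ms / 8 bits)

# Time for one interval's worth of traffic to arrive at line rate
burst_time_us = BC_BYTES * 8 / LINE_RATE_BPS * 1e6
print(round(burst_time_us))     # 50 (microseconds)

# Full-size 1500-byte frames that fit within one allowance
print(BC_BYTES // 1500)         # 4 -- a fifth back-to-back frame activates shaping
```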

-Blake
