Traffic shaping on ASR1004

Tatiana Vargova
Level 1

Hello everyone,

We are trying to configure traffic shaping on an ASR 1004, but it is not working as we expected.
From the start we have seen output drops on the WAN connection.
With a test server we generated 2.5 Gbps of traffic matching class Internal.
We could see the traffic arriving at the router from the LAN, but the WAN output is only about 1.3 Gbps and there are many output drops.
Are we missing something in the configuration?
My understanding was that when the link is not congested, all available bandwidth will be used. Is this behavior somehow different on the ASR?
The current configuration and the show policy-map output are below.

interface TenGigabitEthernet0/0/0
 service-policy output WAN-OUT
!
!
policy-map WAN-OUT
 class class-default
  shape average 2500000000
   service-policy WAN-Egress
policy-map WAN-Egress
 class Internal
  bandwidth 1615000
 class External
  bandwidth 875000
 class Mgmt
  bandwidth 10000
  shape average 10000000
!
!
class-map match-any Internal
 match precedence 2
 match mpls experimental topmost 2
class-map match-any Mgmt
 match precedence 6
 match mpls experimental topmost 6
class-map match-any External
 match precedence 3
 match mpls experimental topmost 3


sh policy-map interface Te0/0/0
 TenGigabitEthernet0/0/0

  Service-policy output: WAN-OUT

    Class-map: class-default (match-any)
      21745966 packets, 123321032624 bytes
      30 second offered rate 57834000 bps, drop rate 38714000 bps
      Match: any
      Queueing
      queue limit 1698 packets
      (queue depth/total drops/no-buffer drops) 0/5286731/0
      (pkts output/bytes output) 16434386/79767196905
      shape (average) cir 2500000000, bc 10000000, be 10000000
      target shape rate 2500000000

      Service-policy : WAN-Egress

        Class-map: Internal (match-any)
          13541614 packets, 111221377574 bytes
          30 second offered rate 57697000 bps, drop rate 25840000 bps
          Match:  precedence 2
          Match: mpls experimental topmost 2
          Queueing
          queue limit 1096 packets
          (queue depth/total drops/no-buffer drops) 0/5286731/0
          (pkts output/bytes output) 8254883/67669287596
          bandwidth 1615000 kbps

        Class-map: External (match-any)
          8137902 packets, 12072313966 bytes
          30 second offered rate 0000 bps, drop rate 0000 bps
          Match:  precedence 3
          Match: mpls experimental topmost 3
          Queueing
          queue limit 594 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 8137902/12072313966
          bandwidth 875000 kbps

        Class-map: Mgmt (match-any)
          62216 packets, 24382450 bytes
          30 second offered rate 139000 bps, drop rate 0000 bps
          Match:  precedence 6
          Match: mpls experimental topmost 6
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 37392/22644770
          bandwidth 10000 kbps
          shape (average) cir 10000000, bc 40000, be 40000
          target shape rate 10000000

Many thanks for any suggestion.

Tatiana

6 Replies

Peter Paluch
Cisco Employee

Hello Tatiana,

Explicit policing and shaping are applied to traffic regardless of interface congestion. You are using a nested service policy, with the topmost policy shaping the total traffic exiting that interface to 2.5 Gbps. This shaper is applied to all traffic regardless of the true congestion on the interface. In other words, the total amount of traffic cannot exceed 2.5 Gbps.

The fact that your throughput is lower could be a result of TCP trying to find its equilibrium, and finding it around 1.3 Gbps. If you used UDP as the measurement traffic, the throughput should get close to 2.5 Gbps, provided no other traffic is present.

Is there a particular reason why you are using a nested policy with the general shaping of all traffic down to 2.5 Gbps? There is nothing wrong with it per se but I am asking to find out the purpose - what you are trying to accomplish.

Best regards,
Peter

Joseph

I agree with Peter.

Your drop problem might be that you don't have sufficient queuing to support your BDP (bandwidth-delay product) at 2.5 Gbps, assuming your transmission stream can arrive at 10 Gbps (or more).
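
As a rough illustration of that bandwidth-delay product (the 10 ms round-trip time below is only an assumed value for the arithmetic, not something measured in this thread):

BDP = shaped rate x RTT
    = 2,500,000,000 bps x 0.010 s (assumed RTT)
    = 25,000,000 bits  ~= 3.1 MB
    ~= 2,080 packets of 1500 bytes

The queue limits in the show output above (1698 packets on class-default, 1096 on class Internal) are below that figure, so a sustained burst arriving faster than the shaped rate could fill the queue and start dropping before TCP has had a chance to slow down.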

Hello Peter :),

Thank you for the answer.

I was basically wondering why the throughput during the test was 1.3 Gbps and not higher.

As you mentioned, the total amount of traffic will not exceed 2.5 Gbps because of the shaping, but the measured 1.3 Gbps seems low, especially when we can see that traffic was dropped.

Once the policy was removed from the interface configuration, the throughput reached 2.5 Gbps (traffic generated by the server for test purposes) with no drops.

QoS is not really my area of expertise, so I am glad for any insights here.

Thanks

Hi Tatiana,

What was the nature of the test stream? Was it a TCP or a UDP stream? Is there an option of intentionally testing the throughput again using UDP?

With TCP, throughput is always a moving target. TCP tries to have its own mind about reacting to packet losses, and some implementations may react relatively aggressively. Windows and Unix-based TCP implementations may vary wildly. It would actually be very interesting to repeat the same test again, and with TCP as well, but this time make sure that both the server and the client are a totally different operating system than before.
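
For the UDP attempt, one way to drive it would be something like the following (only a sketch; iperf3 and the addresses are my assumptions, not something from your setup):

# on the receiving test host
iperf3 -s

# on the sending test host: UDP at roughly 2.4 Gbps for 30 seconds
iperf3 -c 192.0.2.10 -u -b 2400M -t 30

If a single UDP stream cannot reach that rate because of sender CPU limits, the load can be spread over parallel streams with -P (check how your iperf3 version divides the -b target across streams).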

I do not have a firm explanation, but my gut feeling is that this has something to do with how the particular TCP implementation coped with the occasional shaping it was subjected to.

Best regards,
Peter

Umesh Shetty
Level 1

Hi Peter/Joseph,


Could this also be because of bursty traffic? The test server may generate load at 2.5 Gbps, but the way the router applies the shaping rate is a little different: it does not shape to 2.5 Gb over a whole second, it divides each second into smaller time periods, or bursts. As the burst is not explicitly configured here, the default burst interval is 4 ms. That means there are 250 such 4 ms intervals per second, and the 2.5 Gbps shaping rate is divided equally among them, i.e. 10 Mb in every 4 ms interval, which sums back up to 2.5 Gb in 1 second (10 Mb x 250 = 2.5 Gb, and 4 ms x 250 = 1 s).

So the router will allow no more than 10 Mb of committed burst in any interval, plus up to an additional 10 Mb of excess burst if it was not used in a previous interval. I suspect that the bursty nature of the traffic means more than the allowed burst arrives during an interval; the excess traffic is queued and, as Joseph said, if the buffers are not large enough it gets dropped.

When there is no shaper there is no metering in intervals, so no matter how bursty the traffic is within a second, as long as it stays within the 2.5 Gb per second limit over the whole second it will be sent out.
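
A quick check of that arithmetic against the values in the show output above (bc 10000000, be 10000000 on the parent shaper):

Tc = Bc / CIR = 10,000,000 bits / 2,500,000,000 bps = 4 ms
per-interval allowance = Bc = 10,000,000 bits = 1.25 MB

If the test traffic really arrives in 10 Gbps line-rate bursts (as Joseph assumed), that 1.25 MB allowance is consumed in roughly 1 ms, and whatever arrives in the remaining ~3 ms of the interval has to sit in the queue (limited to 1698 / 1096 packets above) or be dropped.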

I hope I am correct here any corrections would be most helpful :) 

 

Regards

Umesh

Joseph

"Could this also be because of bursty traffic?"

Yes, most likely.  One of the attributes of TCP is that it tends to be "bursty".  If the feed to the 2.5 Gbps shaper is greater than 2.5 Gbps, the shaper will queue traffic.  How TCP responds to such queuing depends much on the particular TCP implementation.

Some TCP implementations respond "poorly" to congestion, especially if there are drops.  One of the worst cases is when the sender keeps cycling through TCP slow start.  If it does, there are often lots of drops and poor throughput.  How network devices buffer can impact how a TCP flow operates.

When the shaper is removed there is still, it appears, a downstream 2.5 Gbps bottleneck, but how it impacts overrate traffic might be quite different from your shaper's impact, and hence you obtain your 2.5 Gbps.

Although at first glance removing the shaper appears the "better" option, that might be far from the case.  The downstream bottleneck may still be dropping packets and/or creating latency for traffic.  That might be okay, or not, but if not, you have lost management of such congestion.

The solution is "tuning" your QoS configuration to obtain your QoS policy goals.
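
For what it is worth, the two knobs discussed in this thread would be adjusted roughly like this (a sketch only: the queue-limit and Bc/Be numbers are illustrative assumptions, and the permitted ranges and the effect of explicit Bc/Be vary by IOS XE release, so verify them on your platform before relying on this):

policy-map WAN-OUT
 class class-default
  shape average 2500000000 100000000 100000000
  ! larger Bc/Be lengthens the shaping interval (Tc = 40 ms here), so each interval tolerates a bigger burst
  queue-limit 2500 packets
  ! deeper parent queue, sized closer to the 2.5 Gbps bandwidth-delay product
   service-policy WAN-Egress

Whether any particular numbers are right for this traffic is something only testing, ideally first with the UDP measurement Peter suggested and then with the real TCP mix, can confirm.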
