How to avoid QoS bandwidth limits on SVTIs

Paul Morgan
Level 1

Hi all,

 

I have a hub-and-spoke network with spokes connected back to the hub using SVTIs.

All the tunnels are assigned the same bandwidth command on the hub router to keep the routing metrics equal.

But this means the QoS policies applied to the SVTIs (tunnels) all use the same bandwidth value when determining congestion.

Some spokes have far more real bandwidth than others, so I want to apply QoS accordingly.

Is there any way to get QoS to use a different bandwidth than the one specified in the bandwidth command on the tunnel?

thanks,

Paul

Accepted Solution


The tunnel bandwidth statement will not, by itself, throttle bandwidth.  However, the shaper will.  So yes, in answer to your question, egress traffic could exceed 2.5 Mbps (it's possible to have more than "100%" utilization, in this example).


20 Replies

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Use a shaper, per spoke.
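For instance, a per-spoke hierarchical policy could look like the sketch below. This is only a sketch: the policy names, the 4 Mbps shape rate, and the tunnel number are made up for illustration, and the child policy (which does the actual per-class queueing) is assumed to be defined separately.

policy-map SPOKE1-SHAPER
 class class-default
  ! shape to this spoke's real downstream bandwidth (illustrative value)
  shape average 4000000
  ! nested child policy that does the per-class queueing
  service-policy SPOKE-QOS
!
interface Tunnel1
 ! apply the per-spoke shaper outbound on that spoke's tunnel
 service-policy output SPOKE1-SHAPER

Each spoke's tunnel then gets its own parent policy with a shape rate matching its real bandwidth, while the bandwidth command can stay the same everywhere for the routing metrics.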

Doesn't the calculation of whether or not the interface is congested (and hence shaped) use the bandwidth set by the bandwidth command on each spoke, though?


Yes and no.

The bandwidth parameter will be used for certain QoS calculations, but if your path bandwidth is less than your port bandwidth, you need a shaper to create congestion at the downstream bandwidth so that QoS "triggers" at that transmission level.

Also, apart from policers and shapers, the bandwidth value is mostly used by QoS to determine queueing ratios. So an accurate bandwidth setting doesn't matter much for QoS itself, except perhaps if you're monitoring usage and graphing percentages of capacity.
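To put a number on the "queueing ratios" point: a class percentage is resolved against the rate the policy actually has to work with, so under a 9 Mbps parent shaper a class configured with bandwidth percent 25 works out to roughly 2250 kbps, which is exactly what the show policy-map output later in this thread reports (bandwidth 25% (2250 kbps) under shape average 9000000). A minimal sketch of that child class, using the names from that output (the class-map APPS itself, matching access-group APPSERVERS per that output, is assumed to be defined elsewhere):

policy-map TUNNELQOS
 class APPS
  ! under a parent "shape average 9000000" this 25% resolves to ~2250 kbps,
  ! matching the "bandwidth 25% (2250 kbps)" line in the output further down
  bandwidth percent 25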

That is excellent information, thanks.

So this will completely box your brain...

I want to calculate congestion based on the downstream bandwidth, but I want it to be able to go higher than the bandwidth set on the interface. Will the QoS calculation treat the interface bandwidth as the maximum before it considers the link congested, regardless of the shaper command? So in the example below, would it shape at 4 Mbps and allow traffic to go past the 2.5 Mbps set on the interface?

e.g.

policy-map TEST
 class class-default
  shape average 4000000
  service-policy TUNNEL
!
interface Tunnel1
 bandwidth 2500
 service-policy output TEST

 

Thanks for your help

 


The tunnel bandwidth statement will not, by itself, throttle bandwidth.  However, the shaper will.  So yes, in answer to your question, egress traffic could exceed 2.5 Mbps (it's possible to have more than "100%" utilization, in this example).

OK, that sounds awesome. Many, many thanks.

One last quick question:

Can I apply the same child policy to multiple shaper parent policies?


Normally, yes.
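For example, using the names that appear in the output later in this thread (TUNNELQOS, SHAPE58, Tunnel58), plus a hypothetical second parent SHAPE59 and Tunnel59, the same child policy can be referenced from each parent shaper:

policy-map SHAPE58
 class class-default
  shape average 9000000
  ! shared child policy
  service-policy TUNNELQOS
!
policy-map SHAPE59
 class class-default
  shape average 4000000
  ! same child policy reused under a different parent shape rate
  service-policy TUNNELQOS
!
interface Tunnel58
 service-policy output SHAPE58
!
interface Tunnel59
 service-policy output SHAPE59

Any percent-based values in the child are then worked out against each parent's own shape rate.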

really thanks. Testing now. I hate live tests ~_~

OK, now I'm just confused. The output below suggests that the policy is dropping traffic even though the target rate hasn't been reached... ??

 

CORERTR2#sh policy-map interface tunnel 58
 Tunnel58

  Service-policy output: SHAPE58

    Class-map: class-default (match-any)
      119353 packets, 137408752 bytes
      5 minute offered rate 988000 bps, drop rate 7000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/804/0
      (pkts output/bytes output) 118548/145193016
      shape (average) cir 9000000, bc 36000, be 36000
      target shape rate 9000000

      Service-policy : TUNNELQOS

        queue stats for all priority classes:
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0

        Class-map: VOICE (match-any)
          0 packets, 0 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
            0 packets, 0 bytes
            5 minute rate 0 bps
          Match: protocol rtp
            0 packets, 0 bytes
            5 minute rate 0 bps
          Priority: 190 kbps, burst bytes 4750, b/w exceed drops: 0


        Class-map: APPS (match-any)
          699 packets, 211038 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match: access-group name APPSERVERS
            699 packets, 211038 bytes
            5 minute rate 0 bps
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 698/263468
          bandwidth 25% (2250 kbps)

        Class-map: class-default (match-any)
          118654 packets, 137197714 bytes
          5 minute offered rate 986000 bps, drop rate 8000 bps
          Match: any
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops/flowdrops) 0/804/0/804
          (pkts output/bytes output) 117850/144929548
          Fair-queue: per-flow queue limit 16 packets


Welcome to the world of QoS tuning.

It's likely the target rate has been reached (unless you're looking at a bug).

What you need to realize is that the stat you're looking at is a five-minute average rate, while shapers measure rates down at the millisecond level; 25 ms and 4 ms are common default Tc intervals.

When there is a burst, your class-default has only 64 packets allocated for its queueing (16 packets per flow queue), which is probably much too small for 9 Mbps.
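To make the Tc point concrete with the numbers from the output above (a back-of-the-envelope check, not an exact model of the platform's scheduler):

Tc = Bc / CIR = 36,000 bits / 9,000,000 bps = 4 ms
per-interval quota = 36,000 bits = 4,500 bytes (roughly three 1500-byte packets)

So in any given 4 ms interval the shaper releases only about 4,500 bytes; a burst beyond that has to sit in the 64-packet class-default queue (16 packets per flow) or be dropped, even while the five-minute average stays below 9 Mbps.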

Hmm...

I seem to be coming up against some kind of interface bandwidth calculation that I'm not aware of.

I've done some testing this morning.

I tried:

bandwidth 8000

bandwidth 9000

bandwidth 32000

ip bandwidth-percent eigrp 1 99

ip bandwidth-percent eigrp 1 10

and with and without a shaper policy.

I also monitored the CPU and memory usage, which barely registered the test.

The maximum send output was around 2.4 Mbps when the bandwidth was set to 32000, but this was only marginally different from the settings of 9000 and 8000, where the send output was around 2 Mbps.

The physical interface is 100 Mbps, and I didn't see any congestion or buffer errors on it.

Not sure what I'm missing here. Should I allocate more buffers? I should be able to get my maximum tunnel transmit of 8 Mbps without any changes from the defaults, shouldn't I? And if I raise the maximum transmit, it should be unlimited on a tunnel interface, just as on a physical interface...


Sorry, I guess I should have been clearer. I'm suggesting you adjust, if possible, the buffers within the policy. Some IOS versions allow a queue-limit to be set within the class maps. What I'm suggesting you adjust is:

Class-map: class-default (match-any)
          118654 packets, 137197714 bytes
          5 minute offered rate 986000 bps, drop rate 8000 bps
          Match: any
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops/flowdrops) 0/804/0/804
          (pkts output/bytes output) 117850/144929548
          Fair-queue: per-flow queue limit 16 packets
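A sketch of the kind of adjustment being suggested, using the 4096-packet value that appears in the follow-up output below; the exact commands and permitted ranges vary by IOS version and platform, so treat this as illustrative rather than definitive:

policy-map TUNNELQOS
 class class-default
  fair-queue
  ! deepen the class queue from the 64-packet default; in the outputs in this
  ! thread the per-flow limit tracks it at one quarter (64 -> 16, 4096 -> 1024)
  queue-limit 4096

A deeper queue absorbs bursts instead of dropping them, but, as noted further down, it does not raise the shaped throughput by itself.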

OK, so I played with fair-queue and queue-limit. I can only apply these on the child policy.

First, with the fair-queue per-flow limit set to 1024, no change.

With the queue limit set to 4096, drops no longer occur on the QoS side, but I don't see any improvement in throughput. So it is just buffering the traffic for the same overall output.

So I'm thinking it is calculating output thresholds within QoS based on a bandwidth it has worked out BEFORE the packets get to the QoS engine, and it isn't the interface bandwidth setting, since that was set high for the test.

I can't really change the bandwidth setting on the physical interface. Could it be taking a percentage of that, based on the total number of sub-interfaces, for its calculation?

 

Class-map: class-default (match-any)
          71005 packets, 82469622 bytes
          30 second offered rate 1266000 bps, drop rate 0000 bps
          Match: any
          Queueing
          queue limit 4096 packets
          (queue depth/total drops/no-buffer drops/flowdrops) 0/120/0/120
          (pkts output/bytes output) 70883/87643106
          Fair-queue: per-flow queue limit 1024 packets
                      Maximum Number of Hashed Queues 16
