Setting Bc and Be values for Shaping on C9200L

marcant01
Level 1

I want to configure Bc and Be values for traffic shaping, but as soon as I pass those values to the shape average command, it fails.

policy-map ethernetconnect_1gbit
 class class-default
  shape average 900000   
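
For example, as soon as I append Bc and Be values (the numbers here are just for illustration), the command is rejected:

cpe(config-pmap-c)#shape average 900000 3600 3600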

If I take a look at the policy-map on the interface, I get this:

#show policy-map interface GigabitEthernet1/0/1
 GigabitEthernet1/0/1 

  Service-policy output: ethernetconnect_1gbit

    Class-map: class-default (match-any)  
      0 packets
      Match: any 
      Queueing
      
      (total drops) 0
      (bytes output) 0
      shape (average) cir 900000, bc 3600, be 3600
      target shape rate 900000

It looks like these are defaults. Is it possible to change these values?

Kind regards!

12 Replies

Joseph W. Doherty
Hall of Fame

What does your device show when you enter a question mark after entering the shaper's numeric bandwidth setting?

BTW, why do you wish to set Bc and Be? I.e., do you fully understand their significance?

Hi Joseph,

Thanks for taking a look. I have to follow this documentation (Page 15):
https://community.cisco.com/t5/switching/settinb-bc-and-be-values-for-shaping-on-c9200l/m-p/4939229#M552804

I must set a buffer size of 60.000 bytes and keep latency low. I understood the other Cisco documents to say that this is done through the Bc and Be values.

Your reference is this posting itself(?).

To your OP about being unable to set Bc or Be: I asked what using a question mark during command entry shows, and you didn't answer that. (The question mark should show the permitted syntax.)

To your reply about setting the buffer to 60.000 (that's 60 thousand?) for low latency using a shaper: you'll need to further explain your goal. Generally, a shaper attempts to mimic the behavior of having less bandwidth, but depending on your goals, it often doesn't mimic it very well.

Hi Joseph,

Sorry for missing the CLI part. Here it is:

cpe(config-pmap-c)#shape ?
  average  configure token bucket: CIR (bps) [Bc (bits) [Be (bits)]], send out Bc only per interval

cpe(config-pmap-c)#shape average ?
  <15000-100000000000>  Target Bit Rate (bits/sec). (postfix k, m, g optional; decimal point allowed)
  percent               % of interface bandwidth for Committed information rate

cpe(config-pmap-c)#shape average 100000000000 ?

The device is behind a modem of our local carrier Telekom. We have noticed that if we do not limit the burst size, we get a lot of drops, especially with RTP UDP traffic. That is where the 60.000-byte value comes from. The line itself has a speed of 1 Gbit. The shape average value of 900000 above was just for illustration.
For example, we had a Netgear switch in the same position with a 128-kbyte buffer, and we still had problems.

So our plan with that configuration is to ensure a smooth traffic flow below the limit, plus a little buffering so that RTP UDP traffic is not dropped.

Ah, I understand your situation. Not that uncommon with carriers that offer bandwidth limits less than the physical interface bandwidth.

Generally, for what you're trying to mitigate, Bc is not (too much) the variable we're interested in changing.

First, you need to fully understand that all a shaper really does for us is move the congestion point from somewhere further upstream, where we cannot manage it, to somewhere we can manage it. I.e., if your traffic is overrunning available bandwidth, that will still be true using a shaper (alone). Again, though, a shaper allows us to manage the congestion.

What you want to do is set the average rate to the provider's "up" bandwidth, though, say, about 15% less. (The latter because, I believe, many Cisco shapers don't account for L2 overhead, and 15% usually covers that.)
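
E.g., for your 1 Gbit line: 1,000,000,000 bps x 0.85 = 850,000,000 bps, i.e. something like "shape average 850m" (per your CLI help output, the m postfix is accepted).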

Once all the congestion is at the shaper, you can manage it. For example, for your RTP UDP traffic, you might ensure it gets dropped last, and/or provide it additional buffer/queuing space, and/or prioritize it over other traffic. I can suggest how you can do any or all of the foregoing, but you need to define what this RTP traffic "needs".

Coming back to Bc (and Be), those might need adjustment too. You adjust them smaller to make a shaper behave "closer" to a physical link of the same bandwidth, which might be important if jitter is an RTP consideration.
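
For example, a minimal sketch of the foregoing (assuming your RTP traffic is marked DSCP EF; the class and policy names are illustrative, and 850m follows the 15% rule of thumb for your 1 Gbit line):

class-map match-any RTP-MEDIA
 match dscp ef
!
! child policy: strict priority for RTP, remainder for everything else
policy-map CHILD-QUEUING
 class RTP-MEDIA
  priority level 1
 class class-default
  bandwidth remaining percent 100
!
! parent policy: port shaper that creates the congestion point we manage
policy-map SHAPE-PARENT
 class class-default
  shape average 850m
  service-policy CHILD-QUEUING
!
interface GigabitEthernet1/0/1
 service-policy output SHAPE-PARENT

The parent shapes everything to the contracted rate (less reserve), and within that rate the child dequeues RTP first, so it is effectively dropped last.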

@Joseph W. Doherty wrote:

Ah, I understand your situation. Not that uncommon with carriers that offer bandwidth limits less than the physical interface bandwidth.

Generally, for what you're trying to mitigate, Bc is not (too much) the variable we're interested in changing.

First, you need to fully understand that all a shaper really does for us is move the congestion point from somewhere further upstream, where we cannot manage it, to somewhere we can manage it. I.e., if your traffic is overrunning available bandwidth, that will still be true using a shaper (alone). Again, though, a shaper allows us to manage the congestion.

Ok, I understand.

What you want to do is set the average rate to the provider's "up" bandwidth, though, say, about 15% less. (The latter because, I believe, many Cisco shapers don't account for L2 overhead, and 15% usually covers that.)

I will make sure I use 15% less!

Once all the congestion is at the shaper, you can manage it. For example, for your RTP UDP traffic, you might ensure it gets dropped last, and/or provide it additional buffer/queuing space, and/or prioritize it over other traffic. I can suggest how you can do any or all of the foregoing, but you need to define what this RTP traffic "needs".

Coming back to Bc (and Be), those might need adjustment too. You adjust them smaller to make a shaper behave "closer" to a physical link of the same bandwidth, which might be important if jitter is an RTP consideration.


Do you have a hint for that?

On a Linux system, I can use traffic control (tc) for that.

tc qdisc add dev eth1 root tbf rate 999mbit latency 50ms burst 60000

It accounts for the Ethernet overhead and creates a buffer with a maximum burst size of 60.000 bytes over a timespan of 50 ms. With that, the problems with spiky RTP traffic are gone.
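
For comparison, the shape average syntax above expresses Bc in bits, so a 60.000-byte burst would correspond to a Bc of 480.000 bits (60.000 x 8).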

Today I will create a test setup, so I can at least verify a working "speed limit" and provide my "testing config".

Test setup done. I get around 5% lower bandwidth with IP, so 15% seems like a good reserve.

Are you seeing output drops (with the shaper in place)?

If so, first we might try the global command "qos queue-softmax-multiplier 1200".

If that doesn't mitigate drops, we can get into further tuning options.
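
For reference, a quick sketch (the show command is from memory, so please verify it on your release):

cpe(config)#qos queue-softmax-multiplier 1200
cpe#show platform hardware fed switch active qos queue config interface gigabitEthernet 1/0/1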

Hi!

No drops; looks good. But I cannot run tests with the provider modem yet. I will come back once the device is deployed.

Many thanks, Joseph!

What exactly is your SW platform number?

Hi!

It is the latest recommended version:

Switch Ports Model              SW Version        SW Image              Mode   
------ ----- -----              ----------        ----------            ----   
*    1 52    C9200L-48T-4X      17.06.04          CAT9K_LITE_IOSXE      INSTALL


@spotifyclub956 wrote:

The device is behind a modem of our local carrier Telekom. We have noticed that if we do not limit the burst size, we get a lot of drops, especially with RTP UDP traffic. That is where the 60.000-byte value comes from. The line itself has a speed of 1 Gbit. The shape average value of 900000 above was just for illustration.
For example, we had a Netgear switch in the same position with a 128-kbyte buffer, and we still had problems.

So our plan with that configuration is to ensure a smooth traffic flow below the limit, plus a little buffering so that RTP UDP traffic is not dropped.


That's expected behavior.

Why?

Because, generally, the service provider will police your traffic at the CIR rate. It's when you exceed their Bc (and perhaps Be) that your traffic will be dropped.

Remember, whether shaping or policing, actual frames are always transmitted at line rate. What a shaper or policer does is measure bandwidth consumption during some time interval (Tc), and if over the limit, traffic is either queued (shaper) or dropped (policer). (NB: shaper queues can overflow, which will also drop traffic, and policers can be used to mark, rather than drop, over-rate traffic too.)

Also, BTW, the Bc, i.e. burst, terminology can be a bit misleading, as a "burst" doesn't need to be continuous back-to-back frames/packets. Again, it's a measure of total bandwidth consumption during some time interval. How that bandwidth is used within the interval doesn't matter; only the overall consumption during the interval does.
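
As a textbook illustration, using the defaults from your earlier show output: Tc = Bc / CIR = 3,600 bits / 900,000 bps = 4 ms. I.e., per 4 ms interval, the shaper sends at most 3,600 bits (450 bytes) and queues the excess. (That's classic token bucket arithmetic; hardware implementations differ in detail, but the proportions hold.)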
