rate limit on cisco 3750 switch is not working

nawazkaleem
Level 1

Hi, we have a 3750 switch running the c3750-ipservicesk9-mz.122-35.SE1.bin IOS image, and we want to rate-limit traffic on a routed port. We have tried to achieve this with the rate-limit command and also with a policy map, but it is not working.

mls qos

!

class-map match-all test

  match access-group 199

!

policy-map test

  class test

    police 1024000 192000 exceed-action drop

!

interface GigabitEthernet1/0/18

no switchport

ip address a.a.a.a 255.255.255.252

rate-limit output 1024000 192000 384000 conform-action transmit exceed-action drop

load-interval 30

spanning-tree portfast

service-policy input test

end

!

Extended IP access list 199

    10 permit ip any any

If there is any configuration mistake, or anything I have missed, please guide me and help me sort out this issue.

7 Replies

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Egress rate-limiting (or policing with an egress policy) might not be supported on the 3750. There's a command to "idle" egress port usage (srr-queue bandwidth limit), which effectively works much like an interface shaper. There's also per-egress-queue shaping, if you enable QoS.
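
For reference, a hedged sketch of both options (the interface name is reused from this thread; the 10 percent limit and the shape weight of 25 are only illustrative assumptions):

interface GigabitEthernet1/0/18
 ! limit the whole port to roughly 10 percent of its line rate (interface-shaper-like behavior)
 srr-queue bandwidth limit 10
 ! or, with mls qos enabled, shape egress queue 1 to roughly 1/25 of the port speed;
 ! srr-queue bandwidth shape takes inverse weights, and 0 leaves a queue unshaped
 srr-queue bandwidth shape 25 0 0 0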

In the above configuration I applied the policy map in the ingress direction, and if I apply the rate-limit command in the ingress direction it also has no effect.

If I use srr-queue bandwidth limit, does it affect all traffic passing through this interface, or can I restrict bandwidth only for specific traffic matched by an ACL?


I believe you should be able to use an ingress policy that polices.

The srr-queue bandwidth limit command impacts all traffic egressing the interface.
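
As a hedged sketch along those lines (the 10 Mbps rate, the burst size, and the limit-clients name are assumptions; ACL 199 is the permit-any list from the original post), an ingress policer for the matched traffic might look like:

mls qos
!
class-map match-all limit-clients
  match access-group 199
!
policy-map limit-clients
  class limit-clients
    police 10000000 192000 exceed-action drop
!
interface GigabitEthernet1/0/18
  service-policy input limit-clients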

I tried this; the switch accepts these commands, but the traffic is never restricted. How can I restrict clients so that they never exceed 10 Mbps?


You should be able to police ingress, unless a service policy doesn't take effect when QoS is disabled; I'm not sure about that. Is QoS enabled?

What kind of testing have you done that shows the ingress policy doesn't work?

When you say you want to limit clients to 10 Mbps, this would be at their edge egress port, correct? If you really want to limit them to 10 Mbps, you might also consider running their Ethernet interface at 10 Mbps.
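
A minimal sketch of that last suggestion, assuming a hypothetical client-facing edge port (GigabitEthernet1/0/1 is only an example name):

interface GigabitEthernet1/0/1
 ! hard-set the 10/100/1000 copper port to 10 Mbps full duplex
 speed 10
 duplex full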

QoS is enabled, and here is the output:

  CIS-NOC-SW158#show mls qos

  QoS is enabled

  QoS ip packet dscp rewrite is enabled

  CIS-NOC-SW158#show mls qos interface gigabitEthernet 1/0/18

  GigabitEthernet1/0/18

  trust state: trust dscp

  trust mode: trust dscp

  trust enabled flag: ena

  COS override: dis

  default COS: 0

  DSCP Mutation Map: Default DSCP Mutation Map

  Trust device: none

  qos mode: port-based

The current configuration of the interface is:

interface GigabitEthernet1/0/18

no switchport

ip address x.x.x.x 255.255.255.252

load-interval 30

srr-queue bandwidth limit 10

spanning-tree portfast

service-policy input test

end

Here is the output while I download and upload data:

CIS-NOC-SW158(config-if)#do show int gi 1/0/18

GigabitEthernet1/0/18 is up, line protocol is up (connected)

  Hardware is Gigabit Ethernet, address is 0013.7fa9.ee49 (bia 0013.7fa9.ee49)

  Internet address is

  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,

     reliability 255/255, txload 1/255, rxload 31/255

  Encapsulation ARPA, loopback not set

  Keepalive set (10 sec)

  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX

  input flow-control is off, output flow-control is unsupported

  ARP type: ARPA, ARP Timeout 04:00:00

  Last input 00:00:24, output 00:00:08, output hang never

  Last clearing of "show interface" counters 5d01h

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0

  Queueing strategy: fifo

  Output queue: 0/40 (size/max)

  30 second input rate 121913000 bits/sec, 11783 packets/sec

  30 second output rate 3038000 bits/sec, 5930 packets/sec

     1546162 packets input, 1283765445 bytes, 0 no buffer

     Received 2514 broadcasts (0 IP multicasts)

     0 runts, 0 giants, 0 throttles

     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

     0 watchdog, 1161 multicast, 0 pause input

     0 input packets with dribble condition detected

     1538163 packets output, 1371828044 bytes, 0 underruns

     0 output errors, 0 collisions, 0 interface resets

     0 babbles, 0 late collision, 0 deferred

     0 lost carrier, 0 no carrier, 0 PAUSE output

     0 output buffer failures, 0 output buffers swapped out

CIS-NOC-SW158(config-if)#do show int gi 1/0/18

GigabitEthernet1/0/18 is up, line protocol is up (connected)

  Hardware is Gigabit Ethernet, address is 0013.7fa9.ee49 (bia 0013.7fa9.ee49)

  Internet address is

  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,

     reliability 255/255, txload 29/255, rxload 1/255

  Encapsulation ARPA, loopback not set

  Keepalive set (10 sec)

  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX

  input flow-control is off, output flow-control is unsupported

  ARP type: ARPA, ARP Timeout 04:00:00

  Last input 00:00:34, output 00:00:01, output hang never

  Last clearing of "show interface" counters 5d01h

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0

  Queueing strategy: fifo

  Output queue: 0/40 (size/max)

  30 second input rate 4074000 bits/sec, 5623 packets/sec

  30 second output rate 114461000 bits/sec, 10920 packets/sec

     1836659 packets input, 1302757258 bytes, 0 no buffer

     Received 2518 broadcasts (0 IP multicasts)

     0 runts, 0 giants, 0 throttles

     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

     0 watchdog, 1163 multicast, 0 pause input

     0 input packets with dribble condition detected

     2108803 packets output, 2122981878 bytes, 0 underruns

     0 output errors, 0 collisions, 0 interface resets

     0 babbles, 0 late collision, 0 deferred

     0 lost carrier, 0 no carrier, 0 PAUSE output

     0 output buffer failures, 0 output buffers swapped out


Sorry for the delayed response - my home Internet connection has been down from Saturday until now.

From your stats, you're probably wondering how you can have 12 Mbps ingress when you police at 10 Mbps. Well, first, the ingress rate might represent the actual bps received, i.e. before traffic is policed. Second, I'm not 100% sure the policer and the ingress bps counter are measuring the same thing, i.e. whether both count frames or packets.

For your egress rate, 114 Mbps seems too high for 10% of a gig interface, but if you read the manual, you'll find this feature isn't exact, i.e.:

Usage Guidelines

If you configure this command to 80 percent, the port is idle 20 percent of the time. The line rate drops to 80 percent of the connected speed. These values are not exact because the hardware adjusts the line rate in increments of six.
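
As a hedged follow-up for checking whether the ingress policer is actually matching and dropping traffic (the interface name is carried over from this thread), the 3750 can display the configured policer and its in-profile/out-of-profile counters with:

show mls qos interface gigabitEthernet 1/0/18 policers
show mls qos interface gigabitEthernet 1/0/18 statistics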
