1115 Views, 0 Helpful, 11 Replies

QoS

eXPlosion
Level 1

Hi,

we have an ME-3400-24FS-A series switch. Its uplink g0/1 is connected to another ME-3400-24FS-A. Clients are connected to f0/1-24.

On almost all interfaces I see output drops, for example:

1#sh policy-map int f0/12

Class-map: class-default (match-any)
3309253362 packets
Match: any
Tail Packets Drop: 193836
Queue Limit queue-limit 544 (packets)
Common buffer pool: 679
Current queue depth: 0
Current queue buffer depth: 0
Output Queue:
Max queue-limit default threshold: 544
Tail Packets Drop: 193836

FastEthernet0/12 is up, line protocol is up (connected)
Hardware is Fast Ethernet, address is xxxx.xxxx.a48e (bia xxxx.xxxx.a48e)
Description: xxx
MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
reliability 255/255, txload 6/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 100Mb/s, link type is auto, media type is 100BaseBX-10U SFP
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:10, output 00:03:26, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 2462955
Queueing strategy: fifo

5 minute input rate 23000 bits/sec, 50 packets/sec
5 minute output rate 2448000 bits/sec, 435 packets/sec
497430316 packets input, 354368359354 bytes, 0 no buffer
Received 91181 broadcasts (84542 multicasts)
0 runts, 0 giants, 0 throttles
21 input errors, 1 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 84542 multicast, 0 pause input
0 input packets with dribble condition detected
3237335263 packets output, 920304755262 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out

The uplink is loaded at 12%, which is 120 Mbps:

1# sh controllers gigabitEthernet 0/1 utilization
Receive Bandwidth Percentage Utilization : 12
Transmit Bandwidth Percentage Utilization : 1

The client port is loaded at 2 Mbps (2%):

1#sh controllers fastEthernet 0/12 utilization
Receive Bandwidth Percentage Utilization : 0
Transmit Bandwidth Percentage Utilization : 2

Policy map for all FastEthernet interfaces:

policy-map client
 class multicast
  bandwidth percent 50
 class vod
  bandwidth percent 20
  queue-limit 544
 class class-default
  queue-limit 544

Why are we experiencing drops?

11 Replies

Philip D'Ath
VIP Alumni

Any chance you are getting short bursts of traffic coming in on the GbE port that are overwhelming the buffers on the 100 Mb/s ports?
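To put rough numbers on that (just an illustration, assuming full-size 1500-byte frames): a burst lasting only 1 ms at line rate on the gigabit uplink is about 125 KB, or roughly 83 such frames, arriving ten times faster than a 100 Mb/s client port can send them back out. Most of such a burst has to sit in that port's egress queue, and anything beyond the queue limit is tail-dropped.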


As Philip has already suggested, a likely cause is short-duration traffic bursts.

Your load stats appear to be using a 5 minute average.

Drops happen down at the microsecond and millisecond time ranges.

BTW, setting queue depths too large (more than about half the BDP) can sometimes be counterproductive.  It can allow bursty flows to drop lots of packets.
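As a rough back-of-envelope (assuming 1500-byte packets, purely for illustration): the queue-limit of 544 packets on a 100 Mb/s port is about 544 x 1500 B x 8 = 6.5 Mb, i.e. roughly 65 ms of buffering at line rate. Traffic spilling in from the gigabit uplink can fill that in well under 10 ms, so a queue can fill, tail-drop, and drain again within a fraction of a second while the 5-minute counters still show only a couple of Mb/s. The ratio from your counters (193,836 tail drops out of roughly 3.3 billion class-default packets, about 0.006%) would also fit occasional short bursts rather than sustained congestion.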

Here is some additional info:

1#sh platform qos statistics interface fastEthernet 0/12

FastEthernet0/12 (All statistics are in packets)

 

  dscp: incoming
-------------------------------
  0 -  4 :    12171894            0            0            0            0
  5 -  9 :           0            0            0            0            0
 10 - 14 :           0            0            0            0            0
 15 - 19 :           0            0            0            0            0
 20 - 24 :           0            0            0            0            0
 25 - 29 :           0            0            0            0            0
 30 - 34 :           0            0            0            0            0
 35 - 39 :           0            0            0            0            0
 40 - 44 :           0            0            0            0            0
 45 - 49 :           0            0            0         4840            0
 50 - 54 :           0            0            0            0            0
 55 - 59 :           0            0            0            0            0
 60 - 64 :           0            0            0            0

  dscp: outgoing
-------------------------------
  0 -  4 :    18325996            0            0            0            0
  5 -  9 :           0            0            0      1178952            0
 10 - 14 :           0            0            0            0            0
 15 - 19 :           0            0            0            0            0
 20 - 24 :           0            0            0            0            0
 25 - 29 :           0            0            0            0            0
 30 - 34 :           0            0          509            0            0
 35 - 39 :           0            0            0            0            0
 40 - 44 :           0            0            0            0            0
 45 - 49 :           0            0            0       104098            0
 50 - 54 :           0            0            0            0            0
 55 - 59 :           0            0            0            0            0
 60 - 64 :           0            0            0            0

  cos: incoming
-------------------------------
  0 -  4 :    12177877            0            0            0            0
  5 -  7 :           0            0            0

  cos: outgoing
-------------------------------
  0 -  4 :    82853362      1180423         1324         1538        12715
  5 -  7 :        1707       104669        31752

  output queues enqueued:
queue:    threshold1   threshold2   threshold3
-----------------------------------------------
queue 0:           0           0           0
queue 1:           0     1504256       32323
queue 2:           0           0           0
queue 3:           0           0    85995073

  output queues dropped:
queue:    threshold1   threshold2   threshold3
-----------------------------------------------
queue 0:           0           0           0
queue 1:           0           0           0
queue 2:           0           0           0
queue 3:           0           0      195855

All drops are in queue 3 at threshold 3 (which I think is 100% by default?). All packets in queue 3 are enqueued (according to "output queues enqueued") at threshold 3. What does that mean?

Btw, the Cisco 3400 documentation says: "By default, the total amount of buffer space is divided equally among all ports and all queues per port, which is adequate for many applications. You can decrease queue size for latency-sensitive traffic or increase queue size for bursty traffic." What is BDP? Buffer size? Because in my config it is more than half:
 Queue Limit queue-limit 544 (packets)
Common buffer pool: 679

But I never see "Current queue depth" at anything other than zero. Shouldn't those microbursts be visible there? I never see it change even when I refresh the command every second (1#sh platform qos statistics interface fastEthernet 0/12). Also, refreshing "1#sh controllers fastEthernet 0/12 utilization" shows 2-3% utilization, sometimes 8% for example. It never reaches 100% or even close, but that is the utilization at that moment.

Can a user notice a slow connection with those microbursts in place? It is a 100 Mbps link, and it doesn't make any sense to me.

Whatever (queue 3, threshold 3) is, it is outpacing the output buffering.  It is highly likely related to bursty traffic.

It's been a while since I had to play with this stuff.  The queue used is related to CoS/ToS/DSCP mapping: different QoS labels get mapped to different hardware queues.  Whichever one is mapped to this queue is becoming exhausted.
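If it helps, on the Catalyst 3560/3750 family (I can't confirm the exact ME3400 equivalents, so treat these as illustrative) the CoS/DSCP-to-output-queue maps can be displayed with:

1#show mls qos maps cos-output-q
1#show mls qos maps dscp-output-q

On your switch the enqueue counters already point to the answer: everything in class-default lands in queue 3 at threshold 3, and that is the only place drops occur.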

I found this post suggesting this is fairly normal on an ME3400.

http://puck.nether.net/pipermail/cisco-nsp/2010-May/070424.html


I haven't worked with ME switches, other than the ME3750.  I've also worked with the 3650/3750 series.  On those, some IOS stats are not updated by the hardware, so IOS interface counters are often not updated.  Perhaps something similar is why you're not seeing the queue depth exceed 0.

Also on those, when QoS is enabled, the default settings will result in increased drops, because they too divide up buffer space between ports and the hardware queues on each port.  They also support a "common" pool.  I've found that, often with some buffer setting adjustments, packet drops can be much reduced (I've had cases where several packets per second being dropped were reduced to several packets per day).  Again, as I'm unfamiliar with the 3400, I cannot say whether you can do similar.
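For what it's worth, the kind of adjustment I mean looks roughly like this on the 3560/3750 family (values purely illustrative, not a recommendation, and the ME3400 may only expose the MQC queue-limit you are already using):

mls qos queue-set output 1 buffers 10 10 26 54
mls qos queue-set output 1 threshold 3 3100 3100 100 3200
!
interface GigabitEthernet1/0/1
 queue-set 1

The first line changes how the per-port buffer is split among the four egress queues; the second lets queue 3 borrow heavily from the common pool before it tail-drops.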

BDP is bandwidth delay product.  You might search the Internet for more information on it.
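As a quick illustration (the RTT here is just an assumed number): for a 100 Mb/s link with, say, a 20 ms round-trip time, the BDP is 100 Mb/s x 0.020 s = 2 Mb = 250 KB, or about 166 full-size 1500-byte packets. Half of that is in the neighborhood of 80 packets, which is well below your configured queue-limit of 544.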

You also might research the subject of microbursts.

What is the code running on this box?

Madhu

I meant software version.

sh ver

Cisco IOS Software, ME340x Software (ME340x-METROBASEK9-M), Version 12.2(60)EZ8, RELEASE SOFTWARE (fc2)

You can try increasing the queue-limit to the maximum and see if it helps to stop these drops. Apply the below policy-map to one of the interfaces in the output direction and check.

policy-map TEST
 class class-default
  queue-limit percent 100
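If it helps, attaching it for a test would look something like this (a sketch, assuming your existing "client" policy is currently applied in the output direction; only one output service-policy can be attached at a time, so it has to come off first). Afterwards the same "sh policy-map int" command from above will show the new queue limit and its drop counter:

interface FastEthernet0/12
 no service-policy output client
 service-policy output TEST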

How can I confirm that? If I increase the queue-limit, will it help?

I checked; 544 is the maximum. This switch has limited buffer capabilities.