7894 Views, 0 Helpful, 4 Replies

Output drops Cisco 6509

jonathanaxford
Level 3

Hi All,

I have noticed that a number of the interfaces on a Cisco 6509 switch are reporting output drops. The links in question all use BT "LES" circuits (BT Ethernet extension circuits) to connect to Cisco 3560s at the remote sites. There are no drops (input or output) recorded on the remote ends.

Each interface in question is on a CEF720 48-port 10/100/1000Mb Ethernet (WS-X6748-GE-TX) card in the chassis. The interface config is:

interface GigabitEthernet3/39
description Connected to
switchport
switchport trunk encapsulation dot1q
switchport trunk native vlan 36
switchport trunk allowed vlan 36,999
switchport mode trunk
no ip address
speed 100
duplex full
wrr-queue bandwidth 5 25 70
wrr-queue queue-limit 5 25 40
wrr-queue random-detect min-threshold 1 80 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 2 80 100 100 100 100 100 100 100
wrr-queue random-detect min-threshold 3 50 60 70 80 90 100 100 100
wrr-queue random-detect max-threshold 1 100 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 2 100 100 100 100 100 100 100 100
wrr-queue random-detect max-threshold 3 60 70 80 90 100 100 100 100
mls qos trust dscp
end

Speed is hardcoded to 100 Mbps full duplex on both ends. In this example VLAN 36 is used as the Layer 3 routed interface for the connection.
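To be clear, the physical port is a pure Layer 2 trunk; the routing itself sits on the VLAN 36 SVI, i.e. something along the lines of the following (actual addressing omitted):

interface Vlan36
 ip address x.x.x.x y.y.y.y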

A "Show interface x/x" shows:

NPLYG28-C-1#sho int gi 3/39
GigabitEthernet3/39 is up, line protocol is up (connected)
  Hardware is C6k 1000Mb 802.3, address is 001b.2ab4.71a6 (bia 001b.2ab4.71a6)
  Description: Connected to Corsham Court LES100
  MTU 1500 bytes, BW 100000 Kbit, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s
  input flow-control is off, output flow-control is on
  Clock mode is auto
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:46, output 00:00:45, output hang never
  Last clearing of "show interface" counters 01:22:39
  Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 253
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 59000 bits/sec, 39 packets/sec
  5 minute output rate 271000 bits/sec, 92 packets/sec
     410316 packets input, 72828784 bytes, 0 no buffer
     Received 2203 broadcasts (1651 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     793089 packets output, 548616558 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

So, it's had about 253 drops in the last hour or so. The "show queueing" output shows:

NPLYG28-C-1#show queueing int gi 3/39
Interface GigabitEthernet3/39 queueing strategy:  Weighted Round-Robin
  Port QoS is enabled
  Trust state: trust DSCP
  Extend trust state: not trusted [COS = 0]
  Default COS is 0
    Queueing Mode In Tx direction: mode-cos
    Transmit queues [type = 1p3q8t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       01         WRR                 08
       02         WRR                 08
       03         WRR                 08
       04         Priority            01

    WRR bandwidth ratios:    5[queue 1]  25[queue 2]  70[queue 3]
    queue-limit ratios:      5[queue 1]  25[queue 2]  40[queue 3]  15[Pri Queue]

    queue tail-drop-thresholds
    --------------------------
    1     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
    2     70[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
    3     100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]

    queue random-detect-min-thresholds
    ----------------------------------
      1    80[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      2    80[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      3    50[1] 60[2] 70[3] 80[4] 90[5] 100[6] 100[7] 100[8]

    queue random-detect-max-thresholds
    ----------------------------------
      1    100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      2    100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]
      3    60[1] 70[2] 80[3] 90[4] 100[5] 100[6] 100[7] 100[8]

    WRED disabled queues:

    queue thresh cos-map
    ---------------------------------------
    1     1      0
    1     2      1
    1     3
    1     4
    1     5
    1     6
    1     7
    1     8
    2     1      2
    2     2      3 4
    2     3
    2     4
    2     5
    2     6
    2     7
    2     8
    3     1      6 7
    3     2
    3     3
    3     4
    3     5
    3     6
    3     7
    3     8
    4     1      5

    Queueing Mode In Rx direction: mode-cos
    Receive queues [type = 1q8t]:
    Queue Id    Scheduling  Num of thresholds
    -----------------------------------------
       01         WRR                 08

    WRR bandwidth ratios:  100[queue 1]
    queue-limit ratios:    100[queue 1]

    queue tail-drop-thresholds
    --------------------------
    1     100[1] 100[2] 100[3] 100[4] 100[5] 100[6] 100[7] 100[8]

    queue thresh cos-map
    ---------------------------------------
    1     1      0 1 2 3 4 5 6 7
    1     2
    1     3
    1     4
    1     5
    1     6
    1     7
    1     8


  Packets dropped on Transmit:

    queue     dropped  [cos-map]
    ---------------------------------------------
    1                      253  [0 1 ]
    2                        0  [2 3 4 ]
    3                        0  [6 7 ]
    4                        0  [5 ]

  Packets dropped on Receive:
    BPDU packets:  0

    queue              dropped  [cos-map]
    ---------------------------------------------
    1                        0  [0 1 2 3 4 5 6 7 ]

So, all of this info tells me that the 253 dropped packets were all dropped in queue 1. Can anyone point me in the direction of any possible cause for this? The interfaces themselves have extremely low utilisation - our day-to-day SNMP monitoring of them indicates a peak of about 20% absolute max, and general use sits at about 2%.

One thing I have noticed is that outbound flow control is set to "on" - could this be causing a conflict with the QoS settings on the interface?

Any help much appreciated,

Many thanks

Jonathan

4 Replies

rducombl
Cisco Employee

You have configured a very low queue-limit for Q1 (5), whereas Q2 has 25, Q3 has 40, and the priority queue has 15.

--> that gives Q1 only 5/(5+25+40+15) = 5/85, about 6% of the buffers.

That is very low, so even a very limited burst is enough to cause drops.

In addition, Q1 is the least frequently scheduled queue, as it has the lowest WRR weight.

We usually recommend allocating the most buffer to the lowest-priority queue, as that is where packets are most likely to sit queued.
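To put that in bytes: a 6748 port has roughly 1.2 MB of Tx buffer (more on that below), so queue 1 ends up with only about 5/85 x 1.2 MB = 70 KB or so. At 100 Mb/s that is less than 6 ms worth of line-rate traffic, so even a short burst will overflow it.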

Try something like:

wrr-queue bandwidth 5 25 70
wrr-queue queue-limit 55 25 5

That will probably be better.
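Applied and then verified on the interface from this thread, that would look something like:

NPLYG28-C-1#configure terminal
NPLYG28-C-1(config)#interface GigabitEthernet3/39
NPLYG28-C-1(config-if)#wrr-queue bandwidth 5 25 70
NPLYG28-C-1(config-if)#wrr-queue queue-limit 55 25 5
NPLYG28-C-1(config-if)#end
NPLYG28-C-1#clear counters GigabitEthernet3/39
NPLYG28-C-1#show queueing interface GigabitEthernet3/39

Then leave it running for a while and re-check the "Packets dropped on Transmit" counters.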

Cheers,

Roland

Hi Roland,

Thanks for the advice, it makes a lot of sense. I have just done some reading up on these commands - I originally got the values from the "Cisco Catalyst 6500 Voice Deployment Guide", but I can now see that the limits would not be sufficient for our links.

I will give this a try and see what happens,

Many thanks

Jonathan

Hi,

I have one more question on this topic - the wrr-queue bandwidth and queue-limit settings: are these always enforced, or only during times of congestion?

Many thanks

Jonathan

The wrr-queue queue-limit is always enforced. It really creates a hardware separation on the port ASIC of the line card.

In other words, on a 6748 you have around 1.2 MB of Tx buffer and it is split according to those weights.

You can see the exact split using: remote com sw sh qm-sp port-data 3 39

for port 3/39, and then look for the line:

* TX queue limit(size): [376832 32768 32768]

--> that gives the exact number of bytes for each queue.

Regarding WRR bandwidth, that is how we empty the queues, but of course if a queue is empty the scheduler bypasses it, so we are not reserving a fixed share of bandwidth for a queue corresponding to its weight.
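(Spelled out, that abbreviation is "remote command switch show qm-sp port-data 3 39". If you run it before and after changing the queue-limit ratios, you should see the TX queue limit(size) byte counts move to match the new weights.)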
Cheers,
Roland
