
Output Drops won't stop incrementing

Chad Stachowicz
Level 6

Hello,

So I am a voice guy by nature, but I'm having an issue that is constantly affecting our UCCE, which we believe is related to a point-to-point link between our clusters.

The point-to-point link is required by UCCE to carry its high-priority traffic.  We noticed that one set of servers on the link was constantly seeing delayed messages.  Some discovery led us to find that the output drops on both the Multilink interface and the policy map are incrementing continuously.  The link is barely in use according to our traffic monitor, and these servers should generate a relatively low amount of traffic in general.  Can someone please help me figure out what to look for, or whether something we have configured could be causing our problems?  I have posted the relevant snippets below...   We have 4 links in the bundle, all T1s.

CONFIG

class-map match-any VMEDHIGH

match access-group name MDS_DMP_High

class-map match-any VDEFAULT

match access-group name The_Rest

class-map match-any VMEDLOW

match access-group name Med_Low

class-map match-any VHIGH

match access-group name UDP_Heartbeat

!

!

policy-map NewOne

class VHIGH

    priority 24

class VMEDHIGH

    bandwidth percent 87

    queue-limit 224 packets

class VMEDLOW

    bandwidth percent 10

    queue-limit 96 packets

class VDEFAULT

    bandwidth percent 1

!

buffers small permanent 800

buffers small max-free 1300

buffers small min-free 100

buffers middle permanent 700

buffers middle max-free 1000

buffers middle min-free 100

!

!

!

!

interface Multilink123

description Verizon Business Multilink Circuit

ip address 192.168.0.1 255.255.255.252

ip flow ingress

load-interval 30

no peer neighbor-route

ppp chap hostname 01524705-1658985

ppp multilink

ppp multilink links minimum 1

ppp multilink group 123

ppp multilink fragment disable

max-reserved-bandwidth 99

service-policy output NewOne

!

interface GigabitEthernet0/0

description Connection to Voice Private Network

ip address 192.168.10.2 255.255.255.0

ip flow ingress

load-interval 30

delay 10000

duplex full

speed 100

standby 1 ip 192.168.10.1

standby 1 priority 120

standby 1 preempt

standby 1 track 1 decrement 10

!

interface GigabitEthernet0/1

description connection to mia0mgr2960sw1 via Vlan 10 for Management

ip address 10.6.10.77 255.255.255.0

load-interval 30

duplex full

speed 100

!

interface GigabitEthernet0/2

no ip address

shutdown

duplex auto

speed auto

!

interface Serial0/0/0:0

description MGBKVHBP0001 Local LEC ID 60HCGS440396SB

no ip address

encapsulation ppp

load-interval 30

ppp quality 90

ppp chap hostname 01524705-1658985

ppp multilink

ppp multilink group 123

max-reserved-bandwidth 99

!

interface Serial0/0/1:0

description MGBKVHBW0001 Local LEC ID 60HCGS440444SB

no ip address

encapsulation ppp

load-interval 30

shutdown

ppp quality 90

ppp chap hostname 01524705-1658985

ppp multilink

max-reserved-bandwidth 99

!

interface Serial0/0/2:0

description MGBKVHB50001 Local LEC ID 60HCGS440428SB

no ip address

encapsulation ppp

load-interval 30

ppp quality 90

ppp chap hostname 01524705-1658985

ppp multilink

ppp multilink group 123

max-reserved-bandwidth 99

!

interface Serial0/0/3:0

description MGBKVHB60001 Local LEC ID 60HCGS440427SB

no ip address

encapsulation ppp

load-interval 30

ppp quality 90

ppp chap hostname 01524705-1658985

ppp multilink

ppp multilink group 123

max-reserved-bandwidth 99

!

SHOW INTERFACE OUTPUT

Multilink123 is up, line protocol is up

  Hardware is multilink group interface

  Description: Verizon Business Multilink Circuit

  Internet address is 192.168.0.1/30

  MTU 1500 bytes, BW 4608 Kbit/sec, DLY 20000 usec,

     reliability 255/255, txload 34/255, rxload 49/255

  Encapsulation PPP, LCP Open, multilink Open

  Open: IPCP, CDPCP, loopback not set

  Keepalive set (10 sec)

  DTR is pulsed for 2 seconds on reset

  Last input 00:00:00, output never, output hang never

  Last clearing of "show interface" counters 1w1d

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4365388

  Queueing strategy: Class-based queueing

  Output queue: 0/1000/0 (size/max total/drops)

  30 second input rate 891000 bits/sec, 694 packets/sec

  30 second output rate 626000 bits/sec, 730 packets/sec

     399677327 packets input, 1614936420 bytes, 0 no buffer

     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort

     435367479 packets output, 138243357 bytes, 0 underruns

     0 output errors, 0 collisions, 0 interface resets

     0 unknown protocol drops

     0 output buffer failures, 0 output buffers swapped out

SHOW POLICY-MAP OUTPUT

  Service-policy output: NewOne

    queue stats for all priority classes:

      queue limit 64 packets

      (queue depth/total drops/no-buffer drops) 0/0/0

      (pkts output/bytes output) 50157990/1705371626

    Class-map: VHIGH (match-any)

      50157990 packets, 1705371660 bytes

      30 second offered rate 18000 bps, drop rate 0 bps

      Match: access-group name UDP_Heartbeat

        50157989 packets, 1705371660 bytes

        30 second rate 18000 bps

      Priority: 24 kbps, burst bytes 1500, b/w exceed drops: 0

    Class-map: VMEDHIGH (match-any)

      370321773 packets, 34267772294 bytes

      30 second offered rate 593000 bps, drop rate 4000 bps

      Match: access-group name MDS_DMP_High

        370321768 packets, 34267772344 bytes

        30 second rate 593000 bps

      Queueing

      queue limit 224 packets

      (queue depth/total drops/no-buffer drops) 0/4365868/0

      (pkts output/bytes output) 365955900/34043099450

      bandwidth 87% (4008 kbps)

    Class-map: VMEDLOW (match-any)

      16489550 packets, 3699211519 bytes

      30 second offered rate 82000 bps, drop rate 0 bps

      Match: access-group name Med_Low

        16489550 packets, 3699211519 bytes

        30 second rate 82000 bps

      Queueing

      queue limit 96 packets

      (queue depth/total drops/no-buffer drops) 0/0/0

      (pkts output/bytes output) 16489550/3699203279

      bandwidth 10% (460 kbps)

    Class-map: VDEFAULT (match-any)

      2784014 packets, 156213164 bytes

      30 second offered rate 1000 bps, drop rate 0 bps

      Match: access-group name The_Rest

        2784014 packets, 156213164 bytes

        30 second rate 1000 bps

      Queueing

      queue limit 64 packets

      (queue depth/total drops/no-buffer drops) 0/0/0

      (pkts output/bytes output) 2784014/156494681

      bandwidth 1% (46 kbps)

    Class-map: class-default (match-any)

      12287 packets, 4398746 bytes

      30 second offered rate 0 bps, drop rate 0 bps

      Match: any

      queue limit 64 packets

      (queue depth/total drops/no-buffer drops) 0/0/0

      (pkts output/bytes output) 12287/4398746

Thanks so much!

Chad

4 Replies

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

From looking at what you've posted, looks like burst tail drops.

Without knowing more about the nature of the traffic, can't truly recommend a solution.

Obvious/typical solutions include providing more bandwidth or increasing the class queue depth even further, but the percentage of drops might be decreased (and performance improved) by decreasing queue depth and/or using RED.
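
As an untested sketch only (whether WRED coexists with your existing queue-limit depends on the IOS version, so verify on your platform), enabling RED on the busy class might look something like this:

policy-map NewOne
 class VMEDHIGH
  bandwidth percent 87
  ! sketch: let WRED drop early instead of relying purely on tail drop;
  ! default precedence-based thresholds apply unless you tune them
  random-detect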

Joseph,

  This is real-time, mission-critical traffic, which is probably a lot of packets, but not a lot of size.  Can you explain what increasing queue depth means?  Is that the queue-limit 224 I am seeing?  I don't believe it's bandwidth related, because a lot of other servers ride on this link and have no issues; these UCCE servers are running some special things that are probably generating a lot more 'bursty' traffic.

I am not the data guy here, but was just getting some input.  Would you mind perhaps showing what a configuration with an increased queue depth looks like, and whether there are any other show commands that might provide more information/confirmation of this?

Thanks again, upvoted!

Chad

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

  This is real-time, mission-critical traffic, which is probably a lot of packets, but not a lot of size.

If it's real-time, you might want to move to LLQ for this traffic.  Is this traffic part of VMEDHIGH or VHIGH now?
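
Purely as a sketch (the ACL name and the priority rate below are placeholders, not anything taken from your config), folding that traffic into the existing LLQ class could look like:

class-map match-any VHIGH
 match access-group name UDP_Heartbeat
 ! placeholder ACL assumed to match the real-time/mission-critical flows
 match access-group name RealTime_Traffic
!
policy-map NewOne
 class VHIGH
  ! raise the policed priority rate to cover the added traffic; size it to the measured load
  priority 512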

Can you explain what increasing queue depth means?  Is that the queue-limit 224 I am seeing?

Yes, the 224, see below.  You might try 512.

policy-map NewOne

class VHIGH

    priority 24

class VMEDHIGH

    bandwidth percent 87

    queue-limit 224 packets

class VMEDLOW

    bandwidth percent 10

    queue-limit 96 packets

class VDEFAULT

    bandwidth percent 1

Class-map: VMEDHIGH (match-any)

      370321773 packets, 34267772294 bytes

      30 second offered rate 593000 bps, drop rate 4000 bps

      Match: access-group name MDS_DMP_High

        370321768 packets, 34267772344 bytes

        30 second rate 593000 bps

      Queueing

      queue limit 224 packets

      (queue depth/total drops/no-buffer drops) 0/4365868/0

      (pkts output/bytes output) 365955900/34043099450

      bandwidth 87% (4008 kbps)
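
To actually make that change, the sketch would just be (512 being a first value to experiment with, not something sized to your traffic):

policy-map NewOne
 class VMEDHIGH
  ! increase the class queue depth from 224 to 512 packets
  queue-limit 512 packets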

 and whether there are any other show commands that might provide more information/confirmation of this?

In the show policy output, when there appear to be active drops, see whether you see large queue depths.

i.e.

Class-map: VMEDHIGH (match-any)

      370321773 packets, 34267772294 bytes

      30 second offered rate 593000 bps, drop rate 4000 bps

      Match: access-group name MDS_DMP_High

        370321768 packets, 34267772344 bytes

        30 second rate 593000 bps

      Queueing

      queue limit 224 packets

      (queue depth/total drops/no-buffer drops) 0/4365868/0

      (pkts output/bytes output) 365955900/34043099450

      bandwidth 87% (4008 kbps)

The above only shows zero, but you're looking for non-zero values.  If traffic is very bursty, it might take several snapshots to catch.

What does show buffers look like?  What's the device and its IOS version?
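
For reference, the output worth capturing while the drops are actively incrementing would be along these lines:

show policy-map interface Multilink123
show interfaces Multilink123
show buffers
show version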

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

BTW, if your critical traffic is already in the LLQ but you're seeing occasional delays, that might be because of the interface transmission FIFO queues.  The latter might be mitigated by using tx-ring-limit 3 on each serial interface.
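
As a sketch, that would go on each active bundle member, e.g.:

interface Serial0/0/0:0
 ! shrink the hardware transmit ring so the software (QoS) queues do more of the work
 tx-ring-limit 3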
