2831 Views · 0 Helpful · 10 Replies

What could cause interface output drops on two routers connected to MPLS?

jkeeffe
Level 2

We have two 7201 routers, local and remote, and the WAN connection is an Ethernet MPLS circuit. Both routers' Ethernet interfaces connected to the ISP's MPLS circuit are experiencing output drops even though there is very little traffic. Here is 'sh int' from both routers' MPLS connection:

Local 7201 router:

ROC-7201-System>sh int g0/2
GigabitEthernet0/2 is up, line protocol is up
  Hardware is MV64460 Internal MAC, address is e8b7.48b7.da19 (bia e8b7.48b7.da19)
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation 802.1Q Virtual LAN, Vlan ID  1., loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, link type is autonegotiation, media type is SX
  output flow-control is XON, input flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters 2d22h
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1147
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/0 (size/max total/drops)
  30 second input rate 5000 bits/sec, 4 packets/sec
  30 second output rate 4000 bits/sec, 4 packets/sec

Remote 7201 router:

IKA-7201#sh int g0/2
GigabitEthernet0/2 is up, line protocol is up
  Hardware is MV64460 Internal MAC, address is e8b7.48b7.e219 (bia e8b7.48b7.e219)
  Internet address is xxx.xxx.x.x/30
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, link type is autonegotiation, media type is SX
  output flow-control is XON, input flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters 2d22h
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 22230
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/0 (size/max total/drops)
  30 second input rate 45000 bits/sec, 25 packets/sec
  30 second output rate 331000 bits/sec, 37 packets/sec

10 Replies

lgijssel
Level 9

Not too much info to go on, but I can think of two possibilities:

Duplex mismatch on the interfaces. (Yes, many people still use fixed port settings on gig ports.)

Incorrect QoS settings (if any).

You can investigate both options or provide more feedback regarding the configuration etc.
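
For example (just a sketch; exact output varies by IOS release), both can be checked with the standard show commands:

show interfaces GigabitEthernet0/2
show policy-map interface GigabitEthernet0/2

The duplex/speed line and the "Total output drops" counter come from the first command; per-class drops come from the second.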

regards,

Leo

Edison Ortiz
Hall of Fame

You have a 1 Gbps connection to the MPLS provider; did you purchase line-rate speed from the provider?

If not, how much guaranteed bandwidth was purchased?

If less than 1 Gbps, do you have QoS with a shaper applied to the Gig interfaces?

Hi Edison,

First of all, thanks for putting me right about Administrative Distance recently.

It is indeed not correct to use the term weight in that context.

This may again be one of those conversations. I was of the opinion that the rate purchased from the ISP is not directly relevant. Of course the traffic will be dropped, but this would normally occur on the provider's CPE router.

What we are discussing here is drops on the customer side.

So unless the CPE is somehow capable of notifying its peer that no more traffic must be sent, this should not result in output drops. The input policy on the CPE will do the dropping. That's probably better from a provider's perspective because the customer is unaware of it.

regards,

Leo

Correct.

It is a 1 Gig connection from the provider and we are purchasing 60 Mbps. So I have a shaper on the interface to shape just below the 60 Mbps threshold, as shown here:

policy-map Shape-60MB-speed
 class class-default
  shape average 58000000

interface GigabitEthernet0/2
 ip address xxx.xx.xxx.x
 ip wccp 62 redirect in
 load-interval 30
 duplex full
 speed auto
 negotiation auto
 service-policy output Shape-60MB-speed

IKA-7201#sh policy-map int
 GigabitEthernet0/2
  Service-policy output: Shape-60MB-speed
    Class-map: class-default (match-any)
      5378149 packets, 5240247441 bytes
      30 second offered rate 71000 bps, drop rate 0 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/22304/0
      (pkts output/bytes output) 5355845/5143459782
      shape (average) cir 58000000, bc 232000, be 232000
      target shape rate 58000000

** The total drops shown in the policy-map output are the same as on the interface:

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 22304
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/0 (size/max total/drops)

As seen from the g0/2 config, we do have speed auto and duplex full. Should I have both speed and duplex on auto?


Queue limit might be too small for 60 Mbps. If the IOS allows, try increasing it to about half of the BDP.

Joseph - what is BDP?  (I haven't had enough coffee yet)


BDP = Bandwidth delay product

Multiply maximum end-to-end bandwidth by end-to-end latency. This provides the optimal receive buffer size for TCP. Transit devices, where there's a bandwidth difference, need to allow for the sender sending a burst that fills the receiver's buffer.

For example, assume your 60 Mbps had 50 ms of latency; its BDP would be 60,000,000 * 0.05 = 3,000,000 bits, or 375,000 bytes, or 250 maximum standard-size Ethernet packets. Your queue limit (per flow) is currently only 64 packets.
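
Purely as an illustration (not a tested configuration: the queue-limit command and the values it accepts vary by IOS release, and 125 packets is simply half of the 250-packet BDP example above), the change might look like this:

policy-map Shape-60MB-speed
 class class-default
  shape average 58000000
  queue-limit 125 packets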

Your Bc value is very tight; this may cause the drops.

Try increasing it to at least 4,000,000. Perhaps even better, to 10,000,000 or 12,000,000.
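
For reference, a sketch of that change using the shape average <cir> <Bc> <Be> form (syntax and accepted ranges depend on the IOS release; the values simply mirror the suggestion above):

policy-map Shape-60MB-speed
 class class-default
  shape average 58000000 4000000 4000000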

regards,

Leo

Check with the provider and make sure they are also using auto/auto at their end.
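
On a fiber (SX) GigE port the relevant knob is usually autonegotiation rather than separate speed/duplex lines, so assuming both ends agree to negotiate, the local side could be as simple as:

interface GigabitEthernet0/2
 negotiation auto

If the provider instead runs the link with negotiation disabled, both sides would need to match with no negotiation auto.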