06-19-2013 06:45 AM - edited 03-04-2019 08:15 PM
Hi, at a customer's site we have two 3845 routers connected via an EVPN 1 Gb WAN connection (fiber).
Most high-volume traffic is sent from site A (office) to site B (datacenter).
On the A-side router we see an extremely high number of output drops:
Total output drops: 473316545
40 seconds later:
Total output drops: 473321960
So rising very quickly.
The max throughput reached is about 380 Mbit/sec. Input interface speed from the LAN is also Gb (copper).
We suspect the two are related (many output drops, poor performance), but where should we start troubleshooting?
Could do with some pointers, thanks!!
06-19-2013 07:04 AM
Can you please include the full "show interface" output for all the interfaces.
06-19-2013 07:12 AM
Sure:
Side A:
#show interfaces gigabitEthernet 0/0/0
GigabitEthernet0/0/0 is up, line protocol is up
Hardware is PM-3387, address is 0022.55e6.9913 (bia 0022.55e6.9913)
Description: LINE-ID V268575- ID 6467
Internet address is 10.10.10.1/30
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 96/255, rxload 2/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, link type is autonegotiation, media type is SX
output flow-control is XON, input flow-control is XON
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:08, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 473316545
Queueing strategy: fifo
Output queue: 32/40 (size/max)
5 minute input rate 8473000 bits/sec, 14941 packets/sec
5 minute output rate 379344000 bits/sec, 31404 packets/sec
849214129 packets input, 1155482862 bytes, 29 no buffer
Received 226486 broadcasts, 0 runts, 0 giants, 1 throttles
0 input errors, 4 CRC, 4 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 51856 pause input
0 input packets with dribble condition detected
875497572 packets output, 3304958944 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
4 lost carrier, 0 no carrier, 434 pause output
0 output buffer failures, 0 output buffers swapped out
Side B:
#show interfaces gigabitEthernet 0/0/0
GigabitEthernet0/0/0 is up, line protocol is up
Hardware is PM-3387, address is 0022.90a8.bb33 (bia 0022.90a8.bb33)
Description: LINE-ID V268575- ID 6467
Internet address is 10.10.10.2/30
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 2/255, rxload 97/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, link type is autonegotiation, media type is SX
output flow-control is XON, input flow-control is XON
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 10739
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 381081000 bits/sec, 31592 packets/sec
5 minute output rate 8602000 bits/sec, 15138 packets/sec
2413913293 packets input, 2051875065 bytes, 0 no buffer
Received 604359 broadcasts, 1 runts, 0 giants, 0 throttles
0 input errors, 9 CRC, 8 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 840 pause input
0 input packets with dribble condition detected
3218155929 packets output, 1071013994 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
8 lost carrier, 0 no carrier, 254660 pause output
0 output buffer failures, 0 output buffers swapped out
06-19-2013 07:17 AM
Can you please post ALL the interfaces for either side, or both.
06-24-2013 02:53 AM
The other two interfaces, facing the LAN:
A side:
show interfaces gigabitEthernet 0/1
GigabitEthernet0/1 is up, line protocol is up
Hardware is BCM1125 Internal MAC, address is 0022.55e6.9911 (bia 0022.55e6.9911)
Description: *** LAN ***
Internet address is 172.25.253.3/24
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, media type is RJ45
output flow-control is XON, input flow-control is XON
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/100 (size/max)
30 second input rate 234000 bits/sec, 55 packets/sec
30 second output rate 221000 bits/sec, 57 packets/sec
1197472585 packets input, 2490897503 bytes, 0 no buffer
Received 16605 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 7423595 multicast, 0 pause input
0 input packets with dribble condition detected
2604090893 packets output, 729062081 bytes, 0 underruns
1 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
1 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
B side:
show interfaces gigabitEthernet 0/1
GigabitEthernet0/1 is up, line protocol is up
Hardware is BCM1125 Internal MAC, address is 0022.90a8.bb31 (bia 0022.90a8.bb31)
Description: *** LAN ***
Internet address is 172.26.250.20/16
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 3/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, media type is RJ45
output flow-control is XON, input flow-control is XON
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:09, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/100 (size/max)
5 minute input rate 533000 bits/sec, 638 packets/sec
5 minute output rate 15547000 bits/sec, 1307 packets/sec
381530928 packets input, 3569410611 bytes, 0 no buffer
Received 6558886 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 20221909 multicast, 0 pause input
0 input packets with dribble condition detected
1648728082 packets output, 1502180012 bytes, 0 underruns
1 output errors, 0 collisions, 1 interface resets
266 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
1 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
06-24-2013 03:19 AM
You can try increasing the hold queue, but these output drops are not normal: with a single input interface of the same speed, the output interface cannot (in theory) experience congestion. If it does, it means input packets are being clumped and delayed somehow.
I would look at updating IOS and optimizing the configuration.
06-24-2013 06:00 AM
Disclaimer
The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.
Liability Disclaimer
In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.
Posting
I agree with Paolo that, in theory, if you only have two active ports (of the same "speed"), you shouldn't be able to congest on egress.
Looking at your original "WAN" interface stats, I see the "pause" counter has been incrementing. I'm wondering whether your EVPN has been pausing your egress, which, if your interface honors it, might answer Paolo's "delayed somehow."
If flow control is creating transient congestion, increasing the hold queue might be one way to mitigate it. The other is to determine whether flow control is really necessary, and if not, disable it.
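If you do decide to try disabling it, a rough sketch follows. The exact flow-control syntax varies by platform and IOS release, so verify what your 3845 actually accepts; the interface name is taken from the outputs above:

```
! A-side router, WAN-facing interface
interface GigabitEthernet0/0/0
 ! stop participating in 802.3x pause-frame flow control
 ! (command availability/keywords differ per platform -- check "flowcontrol ?")
 flowcontrol receive off
 flowcontrol send off
```

After the change, watch whether the "pause input/output" counters in "show interfaces" stop incrementing along with the output drops.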
06-19-2013 08:32 AM
I'm surprised you don't have more problems trying to support full gig on a 3845.
That aside, the default queue limit of 40 is often too shallow for gig.
As to why you're dropping, notice your int stats for g0/0/0 caught the egress queue at 32.
For gig, you might try a queue limit of 512.
PS:
A different question is why you're queuing at all. Do you only have a single gig port in and out of the 3845?
06-19-2013 09:29 AM
Actually no, Gig 0/1 is the LAN interface and G0/0/0 is connected to the WAN service. Nothing else is connected to it; it's a dedicated router.
The Gig 0/1 "show interface" output is being retrieved now.
How would I change the queue limit? I don't have much experience with this configuration.
Do you do that with the hold-queue command?
06-19-2013 12:42 PM
Do you do that with the hold queue command?
Yes.
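For example, using the 512-packet depth Joseph suggested above (an untuned starting value, not a recommendation) on the WAN-facing interface from your outputs:

```
interface GigabitEthernet0/0/0
 ! deepen the egress software queue from the default 40 packets
 hold-queue 512 out
```

You can confirm the new depth in "show interfaces" — the "Output queue" line should then show x/512 (size/max).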
06-26-2013 11:18 PM
Hi, we have found the cause of the issue.
Raising buffers or turning off flow control did not change anything.
However, we did find why there was such a limitation.
We were using the built-in Gb interface for the LAN and an HWIC card for the WAN connection. Realising this triggered us to search for known issues with HWICs, and we found this:
http://www.cisco.com/en/US/prod/collateral/routers/ps5855/prod_qas0900aecd8016a953.html
Q.
What is the maximum throughput of the four onboard HWIC slots?
A. The maximum throughput per slot is up to 400 Mbps full duplex. The new Gigabit Ethernet HWIC and the 4- and 9-port Cisco EtherSwitch HWIC modules are able to take advantage of this performance improvement up to the switching capabilities of individual platforms.
We did some reshuffling of the interfaces so we could use the two built-in Gb interfaces, reconfigured, and the throughput is now just over 800 Mb/sec, which is perfectly acceptable.
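For anyone hitting the same limit, the fix amounts to moving the WAN link off the HWIC slot onto an onboard GigE port. A hypothetical sketch only — the interface numbers and addressing are taken from the outputs above, and your actual cabling and port naming will differ:

```
! free the WAN addressing from the HWIC port (max ~400 Mbps per slot)...
interface GigabitEthernet0/0/0
 no ip address
 shutdown
! ...and apply it to the second onboard (non-HWIC) GigE port
interface GigabitEthernet0/0
 description LINE-ID V268575- ID 6467 (WAN, moved off HWIC)
 ip address 10.10.10.1 255.255.255.252
 no shutdown
```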
Just to let you know.