05-07-2012 08:07 AM - edited 03-07-2019 06:33 AM
Has anyone seen this before:
Users on a LAN are experiencing very high latency when mapping a drive and copying files to a server elsewhere on campus. This has been occurring for a few weeks and is getting worse.
Topology:
Users-----Switch 1a-2------Switch 1a-1------Core V-----Core H------DC 1------FW------Switch------Server
Standard 'divide and conquer' techniques do not point to a culprit:
1. 10,000 pings from the directly connected 2950 to local hosts succeed 100% of the time at 1-2 ms.
2. 50,000 pings to the remote server succeed 100% of the time with sub-millisecond response times.
3. Interface error counters are mostly zero.
4. Retransmissions to the server are very high (40+ in 30 minutes per host), with many hung TCP sessions and many resets sent by both sides. This is seen in .pcap captures from a user and in the firewall logs (see the filter sketch below).
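For anyone curious, here is roughly how those retransmission and reset counts can be pulled out of a capture with tshark (Wireshark's command-line tool); user.pcap is a placeholder for the actual capture file, and depending on your tshark version the display-filter flag is -R (older) or -Y (newer):

# Count TCP retransmissions in the user's capture
tshark -r user.pcap -R "tcp.analysis.retransmission" | wc -l

# Count resets and see which side is sending them
tshark -r user.pcap -R "tcp.flags.reset == 1" -T fields -e ip.src | sort | uniq -c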
However, on the uplink from [1a-2] to [1a-1], I see 'no buffer' drops on just over 0.1% of input packets. I understand the buffers can be overwhelmed when the link is saturated, but this is a lightly used link (traffic peaks at about 1 Mb/sec during the day).
I have run 'show tech-support' and dropped the output into Cisco's Output Interpreter; no errors were found.
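For anyone who wants to watch the same counters, something like the following works on the 2950s (the interface name matches my uplink; adjust for your topology):

! Check the platform buffer pools for misses and failures
show buffers
! Watch just the drop-related counters on the uplink
show interfaces gigabitethernet0/1 | include no buffer|ignored
! Zero the counters so the drop rate can be measured over a known interval
clear counters gigabitethernet0/1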
Switches are a 2950C-24 and a 2950T-24, both running IOS 12.1(22)EA8a.
The interface on Core-V also shows ongoing output drops (roughly 9 per 10,000 packets).
Both ends of the [1a-2] to [1a-1] uplink show the following:
Switch-1a-2
GigabitEthernet0/1 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 0009.4327.5c99 (bia 0009.4327.5c99)
MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
reliability 255/255, txload 1/255, rxload 3/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 100Mb/s, media type is RJ45
input flow-control is off, output flow-control is off
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:02, output hang never
Last clearing of "show interface" counters 2d21h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 1224000 bits/sec, 136 packets/sec
5 minute output rate 84000 bits/sec, 88 packets/sec
7869256 packets input, 3578119797 bytes, 11691 no buffer
Received 3596908 broadcasts (0 multicast)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 11691 ignored
0 watchdog, 1346959 multicast, 0 pause input
0 input packets with dribble condition detected
3643992 packets output, 1056105832 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
Switch-1a-1
FastEthernet0/24 is up, line protocol is up (connected)
Hardware is Fast Ethernet, address is 0009.4382.d498 (bia 0009.4382.d498)
MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
reliability 255/255, txload 2/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 100Mb/s, media type is 100BaseTX
input flow-control is unsupported output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:12, output 00:00:00, output hang never
Last clearing of "show interface" counters 2d21h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 100000 bits/sec, 67 packets/sec
5 minute output rate 884000 bits/sec, 110 packets/sec
3909629 packets input, 1120518881 bytes, 17 no buffer
Received 92426 broadcasts (0 multicast)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 17 ignored
0 watchdog, 17738 multicast, 0 pause input
0 input packets with dribble condition detected
8238386 packets output, 3920051772 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
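A rough way to trend these counters over time, rather than eyeballing the console, is to poll them via SNMP from a monitoring host. Here is a sketch using Net-SNMP, assuming SNMP is enabled on the switches; the community string, hostname, and ifIndex are placeholders, and the 'ignored'/'no buffer' input drops generally surface as ifInDiscards, though the exact mapping is platform-dependent:

# Find the ifIndex of the uplink port
snmpwalk -v2c -c public switch-1a-2 IF-MIB::ifDescr

# Poll input discards on that ifIndex (assumed here to be 25) once a minute
while true; do
  snmpget -v2c -c public switch-1a-2 IF-MIB::ifInDiscards.25
  sleep 60
done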
05-07-2012 02:53 PM
I had the exact same issue at the local school district: nothing at all out of the ordinary, but the 'ignored' value kept incrementing. I rebooted both switches, and after the reboot the counter stopped climbing. My guess was that the buffers were full and not being released correctly.
https://supportforums.cisco.com/message/3625058
It had to do with that and/or MTU size.
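If MTU is a suspect, a quick way to test from a user's Windows machine is a don't-fragment ping sweep (the server name below is a placeholder):

rem 1472 bytes of ICMP payload + 28 bytes of IP/ICMP headers = a 1500-byte packet; this should succeed
ping -f -l 1472 <server>
rem One byte larger; if the path MTU is 1500 this should report "Packet needs to be fragmented but DF set"
ping -f -l 1473 <server>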
05-08-2012 08:57 AM
jimmysands73,
Thanks for the idea; I was thinking along similar lines.
I opened a ticket with Cisco for our upstream 6509, since the same buffer problem was seen there, and they are pointing out that the blade is very old (it was EOL in 2001...). Cisco is looking at a potential oversubscription problem, but I don't think that is plausible given the low traffic levels on the card, and oversubscription would not explain the problem on the 2950s either.
We will probably live with the problem until Saturday, when students' finals are over, and then reboot all of them. We also need to look at card replacement. I will update once we finish the reloads. Thanks again for the idea.
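Since the window is Saturday night, the reload can be scheduled in advance on IOS so nobody has to be on site; a sketch (the date and time are placeholders, and 'reload at' requires the switch clock to be set, e.g. via NTP):

! Schedule the reboot for the maintenance window (time/date are placeholders)
reload at 23:00 12 May
! Confirm the pending reload, or back out if plans change
show reload
reload cancel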