03-05-2018 07:53 AM - edited 03-08-2019 02:08 PM
Hello,
I am currently seeing high TX/RX utilization on one of the links between a 4507R and a 3650. Below are the interface details.
On the 4507 side:
S_4507R_2#show interfaces gigabitEthernet 6/24
GigabitEthernet6/24 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet Port, address is XXXX.XXXX.XXXX
Description: ==> S_3650_1, Gig1/1/1
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 251/255, rxload 111/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX
input flow-control is on, output flow-control is on
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:05, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 903044158
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 436315000 bits/sec, 76859 packets/sec
5 minute output rate 986344000 bits/sec, 90382 packets/sec
116819466860 packets input, 82256757875941 bytes, 0 no buffer
Received 10970441 broadcasts (6491977 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 input packets with dribble condition detected
139127144885 packets output, 190212955601788 bytes, 0 underruns
0 output errors, 0 collisions, 3 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier
0 output buffer failures, 0 output buffers swapped out
On the 3650 side:
S_3650_1#show interfaces gigabitEthernet 1/1/1
GigabitEthernet1/1/1 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is XXXX.XXXX.XXXX
Description: Uplink to S_4507R_2,Gig6/24
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 190/255, txload 111/255, rxload 252/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 2582671556
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 992074000 bits/sec, 90740 packets/sec
5 minute output rate 438071000 bits/sec, 77292 packets/sec
18847103480 packets input, 25965983288951 bytes, 0 no buffer
Received 1917127 broadcasts (1316779 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 1316779 multicast, 0 pause input
0 input packets with dribble condition detected
15502352195 packets output, 10576201579682 bytes, 0 underruns
2582671556 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
I have already performed a software upgrade on the 3650, suspecting high CPU utilization, but the issue is still not resolved. We have another 3650 in the same location as the first one, with the same IOS image and the same kind of load, but it shows no high TX/RX load and no output drops. I am confused as to why this specific switch is behaving like this, and the high utilization is only on the uplink from the 4507R to 3650_1.
The 3650s support IP cameras and camera workstations.
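In case it helps: the counters above have never been cleared on either switch, so the drop totals are cumulative. Clearing them and re-checking after a few minutes would give the current drop rate, and would show whether the drops track CPU load. A minimal check on the 3650 side would be something like:
S_3650_1#clear counters GigabitEthernet 1/1/1
! wait a few minutes under normal load, then:
S_3650_1#show interfaces GigabitEthernet 1/1/1 | include drops|rate
S_3650_1#show processes cpu sorted | exclude 0.00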
Current IOS versions are,
4507R - (cat4500e-ENTSERVICESK9-M), Version 15.2(2)E7
3650 - (CAT3K_CAA-UNIVERSALK9-M), Version 03.06.06E
Any help would be much appreciated.
03-05-2018 12:41 PM
rxload 252/255
Something is filling up the link between these 2 devices.
Is there any QOS configured on either device?
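For example, something along these lines on the 3650 would show any applied policy plus the per-queue buffer allocation and drop counters (the platform queue commands are from Cisco's 3850/3650 output-drop troubleshooting guide; exact output varies by release):
S_3650_1#show run interface GigabitEthernet 1/1/1
S_3650_1#show platform qos queue config GigabitEthernet 1/1/1
S_3650_1#show platform qos queue stats GigabitEthernet 1/1/1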
Also, here is a good link for troubleshooting output drops. It is written for the 3850 but should apply to the 3650 as well.
Have you replaced the cable and/or the SFPs?
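If the optics are DOM-capable, a quick non-intrusive check of light levels on the 3650 side would be something like:
S_3650_1#show interfaces GigabitEthernet 1/1/1 transceiver detail
(The zero input errors/CRC on both sides make a physical-layer problem less likely, but this rules it out cheaply.)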
HTH
04-13-2018 05:47 AM
Apologies for the late reply; I have been coordinating with Cisco TAC on this issue.
We tried several options, such as increasing the SoftMax buffer allocation.
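For anyone who finds this thread later, the SoftMax change was along the lines of the global command below (available on the 3650/3850 in recent 3.6.x releases; the value 1200 is simply the maximum multiplier, not necessarily what TAC recommended):
S_3650_1(config)#qos queue-softmax-multiplier 1200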
Finally, it turned out the bandwidth requirement was a little higher than a single uplink could carry, so we added one more uplink. That did the trick, and the systems are currently operating normally.
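For completeness, if the extra uplink is bundled with the existing one as an EtherChannel, the configuration would look something like the sketch below (the second ports, Gi6/23 and Gi1/1/2, are hypothetical examples and LACP is assumed; a separate non-bundled link would work too):
! on the 4507R
S_4507R_2(config)#interface range gigabitEthernet 6/23 - 24
S_4507R_2(config-if-range)#channel-group 1 mode active
! on the 3650
S_3650_1(config)#interface range gigabitEthernet 1/1/1 - 2
S_3650_1(config-if-range)#channel-group 1 mode active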