11-20-2024 07:00 AM - edited 11-20-2024 07:03 AM
Hello,
What could be the cause of interface output drops between two Cisco devices? This is what I have:
SW Catalyst interface giga1/0/24 ---------to-------- ASA giga0/0. It is a direct link (no other devices in the middle)
Both ends of the link auto-negotiated correctly to 1 Gbps and full duplex, and traffic is well below 1 Gbps (approx. 300 Mbps).
And on the Catalyst, the Total output drops counter keeps growing: Total output drops: 8210720
What could be the reason for this?
We have configured a Trunk on that link.
Thank you.
show interface on ASA:
ASA# sh int g0/0
Interface GigabitEthernet0/0 "", is up, line protocol is up
Hardware is i82xxx rev00, BW 1000 Mbps, DLY 10 usec
Auto-Duplex(Full-duplex), Auto-Speed(1000 Mbps)
Input flow control is unsupported, output flow control is off
Available but not configured via nameif
MAC address 4001.7a00.000, MTU not set
IP address unassigned
5869923868 packets input, 7130806303738 bytes, 0 no buffer
Received 2209935 broadcasts, 0 runts, 0 giants
12044 input errors, 0 CRC, 0 frame, 12044 overrun, 0 ignored, 0 abort
0 pause input, 0 resume input
195776 L2 decode drops
5539792244 packets output, 899270274727 bytes, 0 underruns
0 pause output, 0 resume output
0 output errors, 0 collisions, 2 interface resets
0 late collisions, 0 deferred
0 input reset drops, 0 output reset drops
input queue (blocks free curr/low): hardware (465/362)
output queue (blocks free curr/low): hardware (511/0)
show interface on SW:
SW#sh int g1/0/24
GigabitEthernet1/0/24 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is d477.9879.0000 (bia d477.9879.0000)
Description: To ASA
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 34/255, rxload 6/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:00, output hang never
Last clearing of "show interface" counters 2d01h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 8225229
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 27046000 bits/sec, 15030 packets/sec
5 minute output rate 134833000 bits/sec, 17003 packets/sec
2198873694 packets input, 1432738130 bytes, 0 no buffer
Received 857 broadcasts (0 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
2375544682 packets output, 2097018381 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
11-20-2024 01:43 PM - edited 11-20-2024 01:45 PM
In a properly functioning system, output drops are diagnostic of congestion at the interface: packets are being switched from an ingress interface to an egress queue faster than they can actually be transmitted out that interface. The egress queue fills up, and newly arriving packets at the full queue are dropped.
But it is a 1G interface, and the 5 min output rate on the Catalyst interface shows only 135 Mbps, so how can the egress queue be filling up? Well, the 5 min output rate really tells us absolutely nothing about whether the interface is congested. If the 5 min average is 135 Mbps out of a possible 1000 Mbps, then the interface utilization during the sampling period was 13.5%. That is, for 13.5% of the time the interface was busy transmitting a packet, and any newly arriving egress packets were queued. If enough packets arrive in a burst at the egress queue, the queue can fill up and drops result. It is these microbursts that cannot be seen with 5 min averages, or even 30 sec averages (the minimum sampling period). So how do you know if microbursts are occurring? The output drops diagnose exactly that.
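As a rough illustration (using the interface name from the thread; 30 seconds is the shortest load interval IOS supports), you can shorten the averaging window and then watch whether the drop counter grows between samples even while average utilization stays low:

```
SW# configure terminal
SW(config)# interface GigabitEthernet1/0/24
SW(config-if)# load-interval 30          ! shortest averaging window available
SW(config-if)# end
SW# clear counters GigabitEthernet1/0/24
! ...wait a few minutes, then re-check. Growing "Total output drops"
! alongside a low output rate is consistent with microbursts.
SW# show interface GigabitEthernet1/0/24 | include output drops|output rate
```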
What can be done about reducing output drops due to a congested output queue? Two things: (1) reduce the offered load (i.e., egress traffic arriving across the switching fabric); and/or (2) increase the service rate (i.e., increase the output bandwidth). Most IT organizations cannot tell their customers to use the network less, so #1 is likely a non-starter, which leaves either going to a 10G interface to the firewall or going to Nx1G (a port channel). However, increasing the bandwidth could very well hit budget or logistics constraints and may not be a near-term solution. What can be done near term if the drop rate cannot be reduced? Implement a QoS policy to prioritize/de-prioritize traffic classes at the congested interface so that the "right" packets are the ones dropped first.
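A minimal sketch of such an egress policy using MQC (the class names, DSCP markings, and percentages here are purely illustrative; the match criteria and exact syntax depend on your traffic and platform):

```
class-map match-any VOICE
 match dscp ef
class-map match-any SCAVENGER
 match dscp cs1
!
policy-map EGRESS-TO-ASA
 class VOICE
  priority level 1
 class SCAVENGER
  bandwidth remaining percent 5
 class class-default
  bandwidth remaining percent 95
!
interface GigabitEthernet1/0/24
 service-policy output EGRESS-TO-ASA
```

With a policy like this, when the egress queue congests, scavenger-class traffic is squeezed first while voice is serviced from the priority queue, so the drops land on the least important packets.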
11-20-2024 11:38 PM
What is the model of the switch, and what IOS code is it running?
Clear the counters and check how fast they are increasing.
If possible, increase the SoftMax buffer configuration (this depends on the device model and the code it is running).
I also see input errors on the ASA; can we change the cable before we go further and do any other testing?
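On Catalyst 3650/3850/9000-series switches the SoftMax knob is the global queue-softmax-multiplier; a sketch assuming one of those platforms (the multiplier range and the verification command vary by model and release):

```
SW# clear counters GigabitEthernet1/0/24
SW# configure terminal
SW(config)# qos queue-softmax-multiplier 1200   ! default is 100; 1200 is the maximum
SW(config)# end
! verify the resulting per-queue soft buffer allocation
SW# show platform hardware fed switch active qos queue config interface gigabitEthernet 1/0/24
```

Raising the multiplier lets a congested egress queue borrow more from the shared buffer pool, which can absorb larger microbursts before dropping.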