01-25-2023 10:42 AM
I'm trying to understand how to tell whether a Cisco port is running at 10G when it is capable of it. From what I can tell, this port is running at 1 Gbps due to the media type. It also appears to be having output drops.
Full-duplex, 1000Mb/s, link type is auto, media type is 10/100/1000BaseTX SFP
Media-type configured as connector
Total output drops: 54477
01-25-2023 11:03 AM
Hello!
You are correct - that particular line in the output of show interfaces from an IOS-XE device indicates the link speed of the interface, which can be used to reliably determine whether a 10Gbps-capable interface is operating at a 10Gbps link speed or is running at a lower link speed. The same logic applies to other link speeds as well (e.g. a 40Gbps-capable interface running at 10Gbps, a 25Gbps-capable interface running at 1Gbps, etc.).
Here is an example from my lab, where TenGigabitEthernet1/1/1 of a Catalyst 3650-48TQ is operating at a 10Gbps link speed.
WS-C3850#show module
Switch  Ports  Model            Serial No.  MAC address     Hw Ver.  Sw Ver.
------  -----  ---------------  ----------  --------------  -------  ---------
1       52     WS-C3650-48TQ-E  REDACTED    5c71.0dca.4a80  V04      16.12.05b

WS-C3850#show interfaces TenGigabitEthernet1/1/1
TenGigabitEthernet1/1/1 is up, line protocol is up (connected)
  Hardware is Ten Gigabit Ethernet, address is 5c71.0dca.4ac6 (bia 5c71.0dca.4ac6)
  Description:
  Internet address is 192.0.2.10/24
  MTU 9100 bytes, BW 10000000 Kbit/sec, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive not set
  Full-duplex, 10Gb/s, link type is auto, media type is SFP-10GBase-SR
  input flow-control is on, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/375/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 9000 bits/sec, 6 packets/sec
  30 second output rate 121000 bits/sec, 14 packets/sec
     926561799 packets input, 785224668847 bytes, 0 no buffer
     Received 8831193 broadcasts (57912 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 8831192 multicast, 0 pause input
     0 input packets with dribble condition detected
     644936703 packets output, 390742152543 bytes, 0 underruns
     Output 1 broadcasts (3 IP multicasts)
     0 output errors, 0 collisions, 2 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out
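If you ever need to check this across many interfaces or many pastes, you can key off that same duplex/speed/media line programmatically. Here is a minimal Python sketch, assuming you already have the show interfaces text as a string (the regex and the sample text are illustrations, not a Cisco tool):

import re

# Paste (or collect) the "show interfaces <name>" output you want to check.
output = """
TenGigabitEthernet1/1/1 is up, line protocol is up (connected)
  Full-duplex, 10Gb/s, link type is auto, media type is SFP-10GBase-SR
"""

# The duplex/speed/media line is what reports the negotiated link speed.
m = re.search(r"(Full|Half)-duplex, (\S+), .*media type is (.+)", output)
if m:
    duplex, speed, media = m.groups()
    print(f"{duplex}-duplex, speed {speed}, media type {media.strip()}")
    # A 10G-capable port that negotiated down would print e.g. "speed 1000Mb/s",
    # exactly as in the output from the original post.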
I hope this helps - thank you!
-Christopher
01-25-2023 11:09 AM
It definitely does, and thank you. Also, if the interface was being overwhelmed and showing output drops, why would the txload and rxload not be maxed...
description: xxx
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 13/255, rxload 80/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 10/100/1000BaseTX SFP
Media-type configured as connector
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:54, output 00:00:09, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 54477
01-25-2023 11:23 AM
It depends on the device model and config.
Total output drops: 54477 - what is the uptime of the device? Clear the interface stats and observe whether this counter is increasing frequently.
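As a rough illustration of that clear-and-observe approach, here is a minimal Python sketch that compares the Total output drops counter between two show interfaces snapshots. How you collect the output (SSH library, SNMP, copy/paste) is up to you; get_show_interfaces below is a hypothetical placeholder:

import re
import time

def total_output_drops(show_interfaces_output: str) -> int:
    """Pull the 'Total output drops' counter out of show interfaces text."""
    m = re.search(r"Total output drops: (\d+)", show_interfaces_output)
    return int(m.group(1)) if m else 0

def drop_rate(get_show_interfaces, wait_s: int = 60) -> float:
    # get_show_interfaces is a placeholder callable that returns the raw
    # "show interfaces <name>" text for the port being watched.
    before = total_output_drops(get_show_interfaces())
    time.sleep(wait_s)
    after = total_output_drops(get_show_interfaces())
    return (after - before) / wait_s   # drops per second over the window

# A counter that keeps climbing (hundreds of drops per minute, as in the
# outputs later in this thread) is worth investigating; a counter that stays
# flat after clearing is usually just history.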
Sometimes when there is high load on the interface we see drops like this; sometimes it's OK, as long as it is not impacting the performance of the network.
This could be for various reasons.
As an example, you can see this behavior on the 3850.
01-25-2023 11:42 AM
Here are the output drops on the aggregating switch for the link having the drops...
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 1608268
and a few seconds later...
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 1608681
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 77/255, rxload 16/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 10/100/1000BaseTX SFP
input flow-control is on, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 16085923
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 65761000 bits/sec, 18674 packets/sec
5 minute output rate 302946000 bits/sec, 35072 packets/sec
13824987551 packets input, 6348711165422 bytes, 0 no buffer
Received 844310331 broadcasts (677279387 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 1608592
01-25-2023 12:01 PM
Here is an update from just a few seconds ago...
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
reliability 255/255, txload 15/255, rxload 74/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 10/100/1000BaseTX SFP
Media-type configured as connector
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:04, output 00:00:06, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 56205
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 291866000 bits/sec, 34048 packets/sec
5 minute output rate 61845000 bits/sec, 18407 packets/sec
23443099518 packets input, 24773365278189 bytes, 0 no buffer
Received 544831645 broadcasts (319945062 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 319945062 multicast, 0 pause input
0 input packets with dribble condition detected
13843484533 packets output, 6356460064718 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
If this is a gig connection, it should be able to handle 1 billion bits/s, correct? Right now it is under that...
5 minute input rate 291866000 bits/sec, 34048 packets/sec
5 minute output rate 61845000 bits/sec, 18407 packets/sec
01-25-2023 02:56 PM
@kochjason wrote:
If this is a gig connection, it should be able to handle 1 billion bits/s, correct?
No, it does not.
Catalyst switches are well known to have very shallow per-port buffers, and the large number of Total Output Drops is evidence that QoS will need to be implemented or adjusted. @Joseph W. Doherty is an authority on QoS, so he would be able to provide guidance on how to fine-tune it.
01-25-2023 04:40 PM
"Catalyst switches are well known to have very shallow per-port-buffer"
Actually, not all of them. Low-end Catalyst switches, and some chassis line cards, often do fall into the category of insufficient buffer resources. Sometimes it's actual physical resources, or the way Cisco, especially by default, manages buffer resources.
"large amount of Total Output Drops is evidence that QoS will need to be implemented or adjusted."
Yea, "large amount" of drops do often indicate some form of mitigation should be considered. Even though I am very much a QoS proponent, sometimes additional bandwidth needs to be acquired to meet service needs.
That said, QoS can often offset the need for additional bandwidth, partially or totally. However, as QoS isn't really "free", using it often boils down to two considerations, which are cost of QoS vs. hardware capacity (e.g. LANs tend to favor hardware, WANs not so much), and how "strict" are your service requirements (e.g. backup vs. VoIP). Lastly, QoS vs. hardware, are not mutually exclusive, i.e. mix and match to meet your service goals at the least cost.
01-25-2023 04:21 PM
Regarding your earlier question, "if the interface was being overwhelmed and showing output drops, why would the txload and rxload not be maxed...": looking at your current stats, I believe you misunderstand txload and rxload (many do).
An interface is either at 100% (actually transmitting/receiving a frame) or at 0%. A tx or rx load of 50% really means that, over some measured time interval, 50% of that interval's capacity was used. What it doesn't tell us is how that capacity was used.
In your stats, the measured time interval is 5 minutes, so if txload was 128/255 (50%), was the link busy for the first two and a half minutes, the latter two and a half minutes, busy for alternating 5-second periods, or some other combination?
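As a quick sanity check on how those load values map to rates, here is a minimal Python sketch using the numbers from the aggregating-switch output earlier in the thread (illustration only):

# txload/rxload are just the 5-minute average rate expressed as a fraction
# of the interface bandwidth (x/255). Numbers from the output above.
bw_bps = 1_000_000 * 1_000          # "BW 1000000 Kbit" -> 1 Gbps
txload, rxload = 77, 16             # "txload 77/255, rxload 16/255"

tx_avg_bps = txload / 255 * bw_bps  # ~302 Mbps
rx_avg_bps = rxload / 255 * bw_bps  # ~63 Mbps

print(f"txload {txload}/255 ~= {tx_avg_bps/1e6:.0f} Mbps average over 5 minutes")
print(f"rxload {rxload}/255 ~= {rx_avg_bps/1e6:.0f} Mbps average over 5 minutes")
# Compare with "5 minute output rate 302946000 bits/sec" and
# "5 minute input rate 65761000 bits/sec": the load values are only averages,
# and say nothing about how bursty the traffic was within those 5 minutes.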
To see how drops and even a low load can go hand-in-hand, consider you have a gig ingress and a gig egress. The gig ingress goes 100% for 1 minute out of 5, i.e. a load (average) of 20%.
Next consider you have a 10g ingress and a gig egress. The 10g ingress goes 100% for 10 seconds. (BTW, if this is the only traffic during the five minutes, you have 10 seconds over 300 seconds, or [about] 3.3%.) What happens on egress?
Well, 10 seconds of 10g is 100 seconds of gig. The gig egress interface needs to buffer about 90 seconds of traffic. Assuming it can, its load average is 100 seconds over 300 seconds, or (about) 33%.
Now consider, egress interface's buffers could only contain about 60 seconds of the 10g's traffic. If so, its load average is 60 seconds over 300 seconds, 20%, and it dropped 40 seconds of the 100 seconds traffic!
I.e. load "only" 20%, but there are drops?! Yup!
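To make that arithmetic concrete, here is a small Python sketch of the same illustration. The burst length, measurement window, and buffer size are the made-up values from the example above, not real buffer figures:

interval_s = 300            # 5-minute load-average window
burst_s    = 10             # ingress bursts at 10 Gbps for 10 seconds
speedup    = 10             # ingress is 10x faster than the 1 Gbps egress

egress_work_s = burst_s * speedup     # 100 s of egress time to forward it all

# If the egress could buffer the whole burst: no drops, load ~33%.
print(f"full buffering: load {egress_work_s / interval_s:.0%}, drops 0%")

# If the egress can only buffer ~60 s worth of its own transmission time,
# it forwards 60 s of traffic and drops the remaining 40 s worth.
buffer_s  = 60
sent_s    = min(egress_work_s, buffer_s)
dropped_s = egress_work_s - sent_s
print(f"limited buffering: load {sent_s / interval_s:.0%}, "
      f"drops {dropped_s / egress_work_s:.0%} of the burst")
# -> load "only" 20%, yet 40% of the offered traffic was dropped.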
Much of my example isn't real world, except in principle. The default load-average time interval on Cisco devices really is 5 minutes, but buffer capacity is measured in milliseconds.
In a burst congestion situation (BTW, you might research "micro burst"), additional buffering often will reduce drops and increase throughput. However, in a sustained congestion situation, additional buffering doesn't help, and in some cases it can actually slow the effective transfer rate. For sustained congestion, the solution is either to provide more bandwidth or, somehow, to better regulate/control flow rates.
Oh, and although I used the example of a single 10g ingress to a gig egress, there's a similar issue with ten gig ingress ports feeding a single gig egress. (Any time ingress has more bandwidth than egress, you've oversubscribed your egress, which can lead to reduced performance. It doesn't always do so, though, because perhaps the 10g ingress is only, on average, used 10% of the time, or only one of the ten gig ingress interfaces is, on average, actually passing traffic, etc.)
01-25-2023 05:00 PM
I'd like to do a deep dive into the buffer and queue allocation of the switch interface - can you tell me what platform you are using?