06-13-2023 02:03 AM
Hi
I have a switch where I see the Hardware Enqueue-TH2 counters increasing. Does this mean I am dropping packets, or are they just being put in a specific queue?
#show platform hardware fed switch active qos queue stats interface gi 1/0/1
----------------------------------------------------------------------------------------------
AQM Global counters
GlobalHardLimit: 6776 | GlobalHardBufCount: 0
GlobalSoftLimit: 13072 | GlobalSoftBufCount: 0
----------------------------------------------------------------------------------------------
High Watermark Soft Buffers: Port Monitor Disabled
----------------------------------------------------------------------------------------------
Asic:0 Core:1 DATA Port:0 Hardware Enqueue Counters
----------------------------------------------------------------------------------------------
Q Buffers Enqueue-TH0 Enqueue-TH1 Enqueue-TH2 Qpolicer
(Count) (Bytes) (Bytes) (Bytes) (Bytes)
-- ------- -------------------- -------------------- -------------------- --------------------
0 0 64 0 4257937308 0
1 0 0 0 61221495471 0
2 0 0 0 0 0
3 0 0 0 0 0
4 0 0 0 0 0
5 0 0 0 0 0
6 0 0 0 0 0
7 0 0 0 0 0
Asic:0 Core:1 DATA Port:0 Hardware Drop Counters
--------------------------------------------------------------------------------------------------------------------------------
Q Drop-TH0 Drop-TH1 Drop-TH2 SBufDrop QebDrop QpolicerDrop
(Bytes) (Bytes) (Bytes) (Bytes) (Bytes) (Bytes)
-- -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
0 0 0 0 0 0 0
1 0 0 0 0 0 0
2 0 0 0 0 0 0
3 0 0 0 0 0 0
4 0 0 0 0 0 0
5 0 0 0 0 0 0
6 0 0 0 0 0 0
7 0 0 0 0 0 0
Just to make sure I understand this correctly: we don't have any packet drops just because the "Enqueue-TH2" counter increases?
06-13-2023 02:24 AM
Hi
If packets were dropped, they would show up in the table below:
Asic:0 Core:1 DATA Port:0 Hardware Drop Counters
"The second command is show platform hardware fed switch active qos queue stats interface <interface>. This command allows you to see per-queue statistics on an interface, which includes how many bytes were enqueued into the buffers, and how many bytes were dropped due to lack of available buffers.
9300#show platform hardware fed switch active qos queue stats interface Gig 1/0/1
DATA Port:0 Enqueue Counters
---------------------------------------------------------------------------------------------
Q Buffers Enqueue-TH0 Enqueue-TH1 Enqueue-TH2 Qpolicer
(Count) (Bytes) (Bytes) (Bytes) (Bytes)
- ------- -------------------- -------------------- -------------------- --------------------
0 0 0 0 384251797 0
1 0 0 0 488393930284 0
2 0 0 0 0 0
3 0 0 0 0 0
4 0 0 0 0 0
5 0 0 0 0 0
6 0 0 0 0 0
7 0 0 0 0 0
DATA Port:0 Drop Count
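If it helps to sanity-check larger captures, below is a minimal Python sketch (my own illustration, not from the Cisco document) that parses the two tables from a saved copy of the command output and flags any queue whose drop counters are non-zero. The file name queue_stats.txt and the parsing heuristics are assumptions based on the output format shown above.

def parse_counter_table(output, header):
    """Collect per-queue counter columns from one table of the
    'show platform hardware fed ... qos queue stats' output.
    Returns {queue_number: [numeric columns as ints]}."""
    queues = {}
    in_table = False
    for line in output.splitlines():
        if header in line:
            in_table = True
            continue
        if not in_table:
            continue
        fields = line.split()
        # Data rows start with the queue number (0-7) followed by counters.
        if fields and fields[0].isdigit() and int(fields[0]) <= 7:
            queues[int(fields[0])] = [int(f) for f in fields[1:] if f.isdigit()]
            if len(queues) == 8:
                break  # all eight egress queues collected, stop before the next table
    return queues

# Assumed capture: paste the full command output into queue_stats.txt
# (how you collect it -- console copy, SSH script -- is up to you).
with open("queue_stats.txt") as fh:
    cli_output = fh.read()

enqueue = parse_counter_table(cli_output, "Enqueue Counters")
drops = parse_counter_table(cli_output, "Drop Counters")

for q in sorted(enqueue):
    enq_bytes = sum(enqueue[q][1:])      # skip the Buffers (count) column
    drop_bytes = sum(drops.get(q, []))   # Drop-TH0/TH1/TH2 + SBufDrop + QebDrop + QpolicerDrop
    status = "DROPS" if drop_bytes else "ok"
    print(f"Q{q}: enqueued={enq_bytes} bytes, dropped={drop_bytes} bytes [{status}]")

For the output pasted in this thread it reports every queue as "ok", because all of the drop counters are zero.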
06-13-2023 03:29 AM
Thank you for the reply. It was just the verification I needed.
06-13-2023 03:32 AM
"We don't have any packet drop just because the counter increases for 'Enqueue-TH2'?" <<- this point confuses me. How can you not have packet drops? Check the total output drops in "show interface" <<- that should give you the same number as, or close to, what you get from TH2.
06-13-2023 04:45 AM
I don't see any drop there either:
#show inter gi 1/0/1
GigabitEthernet1/0/1 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is d477.9806.3581 (bia d477.9806.3581)
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 227/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is on, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:00, output hang never
Last clearing of "show interface" counters 03:30:25
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 893756000 bits/sec, 83158 packets/sec
5 minute output rate 54000 bits/sec, 104 packets/sec
795986378 packets input, 1093493824940 bytes, 0 no buffer
Received 0 broadcasts (0 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
967812 packets output, 64495680 bytes, 0 underruns
Output 6339 broadcasts (0 multicasts)
0 output errors, 0 collisions, 0 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
#show int sum | inc Inter|1/0/1
Interface IHQ IQD OHQ OQD RXBS RXPS TXBS TXPS TRTL
* GigabitEthernet1/0/1 0 0 0 0 901782000 83482 51000 98 0
06-13-2023 04:47 AM - edited 06-13-2023 04:47 AM
Did you clear the counters?
"counters 03:30:25"
06-13-2023 05:24 AM
Yes, I cleared it. We were looking at some old CRC errors, so I cleared the counters to make sure they were no longer part of the issue.
This is the log from before I cleared it:
GigabitEthernet1/0/1 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is d477.9806.3581 (bia d477.9806.3581)
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is on, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:00, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 2000 bits/sec, 4 packets/sec
5 minute output rate 3000 bits/sec, 5 packets/sec
1101322861053 packets input, 1671211385170052 bytes, 0 no buffer
Received 59815 broadcasts (0 multicasts)
0 runts, 0 giants, 0 throttles
78 input errors, 42 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
50973252883 packets output, 3366394792387 bytes, 0 underruns
Output 3441439 broadcasts (0 multicasts)
0 output errors, 0 collisions, 2 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
06-13-2023 05:44 AM
The devil is in the details.
We are sure there are drops by looking at TH2, but "show interface" shows 0 <<- I have read about this case before with the C9000 total output drops: if the data traffic comes in fast bursts, the output drop counter does not reflect these drops.
""Traffic bursts can cause output drops even when the interface output rate is significantly lower than the maximum interface capacity. By default, the output rates in the show interface command are averaged over five minutes, which is not adequate to capture any short-lived bursts. It is best to average them over 30 seconds, though even in this scenario a burst of traffic for milliseconds could result in output drops that do not cause the 30 second average rate to increase. This document can be used to troubleshoot this any other type of congestion you see on your Catalyst 9000 series switch.""
https://www.cisco.com/c/en/us/support/docs/switches/catalyst-9300-switch/216236-troubleshoot-output-drops-on-catalyst-90.html
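To actually catch those short bursts you have to shorten the sampling window: on the switch, load-interval 30 under the interface brings the "show interface" averages down to 30 seconds, and off-box you can poll the hardware drop counters at a short interval. The sketch below is only a rough illustration of that polling idea; the Netmiko library, SSH access, and the host/credentials (10.0.0.1, admin/secret) are assumptions, not part of the thread.

import time
from netmiko import ConnectHandler  # assumes Netmiko is installed

device = {
    "device_type": "cisco_ios",
    "host": "10.0.0.1",      # placeholder management address
    "username": "admin",     # placeholder credentials
    "password": "secret",
}
CMD = "show platform hardware fed switch active qos queue stats interface gi 1/0/1"

def total_drop_bytes(output: str) -> int:
    """Sum every counter in the queue rows that follow the Drop Counters header."""
    total, in_drop_table = 0, False
    for line in output.splitlines():
        if "Drop Counters" in line:
            in_drop_table = True
            continue
        fields = line.split()
        if in_drop_table and fields and fields[0].isdigit():
            total += sum(int(f) for f in fields[1:] if f.isdigit())
    return total

conn = ConnectHandler(**device)
previous = total_drop_bytes(conn.send_command(CMD))
for _ in range(60):        # poll for roughly five minutes
    time.sleep(5)          # short interval so brief bursts are not averaged away
    current = total_drop_bytes(conn.send_command(CMD))
    if current > previous:
        print(f"hardware drop bytes increased by {current - previous}")
    previous = current
conn.disconnect()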
06-13-2023 06:54 AM
I believe you've misunderstood what your reference is explaining (which, briefly skimming, appears accurate and much better explains congestion than Cisco documentation did years ago).
If a particular switch is not counting drops, it's a bug. Where did you find explicit mention that's happening with C9000s?
06-13-2023 07:00 AM
Hmm, I'm not 100% sure, but if it is a bug, can you check the bug number?
thanks in advance
MHM
06-13-2023 07:55 AM
". . . . I read before about this case in C9000 total output drop, if the data traffic is bursts fast the output drop is not detect this drop."
Again, where specifically did you read this? Part of a bug description?
What you quote, from your referenced Cisco TechNote's Congestion with Low Throughput section:
""Traffic bursts can cause output drops even when the interface output rate is significantly lower than the maximum interface capacity. By default, the output rates in the show interface command are averaged over five minutes, which is not adequate to capture any short-lived bursts. It is best to average them over 30 seconds, though even in this scenario a burst of traffic for milliseconds could result in output drops that do not cause the 30 second average rate to increase. This document can be used to troubleshoot this any other type of congestion you see on your Catalyst 9000 series switch.""
That quote is accurate. But it doesn't state, or imply, that output drops are not counted. It tells you that you can have output drops even though the output rate doesn't appear high, and explains why that can happen.
06-13-2023 02:28 AM
The TH0, TH1, and TH2 counts all represent packet drops.
TH2 is when the queue is full and any new packet is dropped (tail drop).
06-13-2023 05:33 AM
Cannot say for sure on latest platforms, but on older switches, enqueued counts just meant frames/packets were queued. Threshold drop counts show actual drops. Usually the latter is less, but never greater, than the former.
Basically such switch counts are somewhat similar to a router's interface showing queue depth vs drops.
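To put numbers on that with the figures from the first output in this thread (my own arithmetic, just to illustrate the distinction):

# Quick check with the Q1 figures from the first output: a large enqueue
# counter with all-zero drop counters means traffic was queued and forwarded,
# not tail-dropped.
enqueued_th2_bytes = 61_221_495_471  # Q1 Enqueue-TH2
dropped_bytes = 0                    # Q1 Drop-TH0/TH1/TH2 + SBufDrop + QebDrop + QpolicerDrop

drop_ratio = dropped_bytes / enqueued_th2_bytes
print(f"dropped {drop_ratio:.6%} of enqueued bytes")  # prints 0.000000%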