12-03-2021 01:58 AM
Hi everyone,
I have a fleet of ASR920s, and a few (very few) interfaces are reporting transmit discards. I suspect microbursts as the root cause; however, the reported average traffic rate still feels a little low for microbursts to be a risk.
Any hints or tips for finding the exact root cause?
show interface output below:
GigabitEthernet0/0/3 is up, line protocol is up
Hardware is 24xGE-4x10GE-FIXED-S, address is bce7.****.**** (bia bce7.****.****)
Description: *** 1g Link ***
Internet address is 10.1.2.3/31
MTU 9000 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 4/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full Duplex, 1000Mbps, link type is force-up, media type is BX10U
output flow-control is unsupported, input flow-control is on
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 18:17:15
Input queue: 0/375/0/0 (size/max/drops/flushes); Total output drops: 22744
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 5318000 bits/sec, 1140 packets/sec
5 minute output rate 19341000 bits/sec, 2995 packets/sec
67744644 packets input, 42049513771 bytes, 0 no buffer
Received 0 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 25106 multicast, 0 pause input
135673743 packets output, 87762724636 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
1218 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
12-03-2021 07:31 AM
Well, that's one of the things about microbursts: you don't need a high average load level.
Something to try, especially as your interface is gig but still has the default 40-packet egress queue limit: bump the limit up, even by a lot, then see whether your overall drop rate decreases.
BTW, for TCP, the RWIN is ideally sized to match the BDP (bandwidth-delay product). I figure the ideal queue size for a router interface is half the BDP. 40 packets is likely way too small.
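As a rough illustration of why 40 packets is small (the 10 ms RTT here is just a placeholder, not a number from this thread):
BDP = 1 Gbps x 10 ms = 1,000,000,000 b/s x 0.010 s = 10,000,000 bits = 1.25 MB
Half of that BDP is about 625 KB, which at 1500-byte frames is roughly 400 packets, an order of magnitude above the 40-packet default.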
12-03-2021 12:03 PM
Hello,
According to the document linked below, you can add a service policy to increase the queue limit. It would look something like this:
policy-map PM_1
 class cos1
 class class-default
  queue-limit percent 100
!
interface GigabitEthernet0/0/3
 service-policy output PM_1
Handling Microburst on Cisco ASR 920 Platforms
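Once applied, something like the two commands below should show whether the larger queue limit is in effect and whether the drop counter keeps climbing (the interface name is the one from your output; this is just a quick verification sketch):
show policy-map interface GigabitEthernet0/0/3 output
show interfaces GigabitEthernet0/0/3 | include output drops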
12-06-2021 01:13 AM
Thanks for the suggestion. After creating the thread I had found something similar for a Layer 2 service, and assumed I could do something similar on a Layer 3 interface.
Andy
12-06-2021 07:30 AM
From the reference @Georg Pauwen provided, don't overlook the need to also set the queue size (which the policy's percentage should utilize), possibly larger than whatever the default is.
Further, when setting queue size, it might be best to allocate in bytes. Packets, of course, are variably sized, so it's uncertain how much buffering you'll actually obtain (although packets have been the "traditional" unit for queue-size allocation). Possibly better than allocating by bytes or packets would be allocating by time, as that would/should bound how much queue latency can be added.
In theory, a queue size based on maximum queue delay should automatically adjust if you run the interface at a different "speed". However, I recall (?) there were some issues with that feature not always computing the queue size correctly. So, again, you might want to use the bytes option.
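For what it's worth, a minimal sketch of both forms is below, assuming your IOS XE release accepts the bytes and time (us) keywords for queue-limit on this platform; the policy names and values are placeholders, so check your release's QoS guide and the platform's per-queue buffer ceiling before copying:
policy-map PM_QL_BYTES
 class class-default
  queue-limit 375000 bytes
!
policy-map PM_QL_TIME
 class class-default
  queue-limit 10000 us
!
interface GigabitEthernet0/0/3
 service-policy output PM_QL_BYTES
The time-based form (10000 us = 10 ms of buffering) is the variant that would, in principle, track interface speed changes, as discussed above.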