01-20-2022 12:34 PM
We have high packet discards between two sites.
The primary site has 2 x 10G SFP ports, and the other side is connected via 2 x 1G.
Both sites have a port channel with the exact same config; speed negotiates at 1G per member, 2G aggregate.
Interface configuration is the same on both sides:
Gi1/1/2 and Gi2/1/2 as an EtherChannel / port channel
TenGi3/1/2 and TenGi4/1/2 as an EtherChannel / port channel
Site 2 is where the high discards are happening.
Site 1
TenGigabitEthernet3/1/2 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet,
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 3/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:12, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
TenGigabitEthernet4/1/2 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet,
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:04, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0
Site 2
GigabitEthernet2/1/2 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet,
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 3/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
input flow-control is on, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:06, output 00:00:15, output hang never
Last clearing of "show interface" counters 25w0d
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 479274732293
GigabitEthernet1/1/2 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet,
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
input flow-control is on, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:05, output 00:00:03, output hang never
Last clearing of "show interface" counters 25w0d
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 9035358488
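For scale, the two member links' drop counters above are far from balanced; a quick check (numbers copied verbatim from the output above):

```python
# "Total output drops" reported by "show interface" at Site 2
drops_gi2_1_2 = 479_274_732_293  # GigabitEthernet2/1/2
drops_gi1_1_2 = 9_035_358_488    # GigabitEthernet1/1/2

ratio = drops_gi2_1_2 / drops_gi1_1_2
print(f"Gi2/1/2 has roughly {ratio:.0f}x the drops of Gi1/1/2")
```

That lopsided ratio is itself a hint that traffic is not distributing evenly across the two port-channel members.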
any help would be appreciated
TIA
01-24-2022 02:49 PM - edited 01-25-2022 03:02 PM
Ah, those interface stats are quite interesting! Besides Gi1/1/2 being a very high percentage, Gi2/1/2 has more drops than packets output!!! The transmission load averages, for that interface stats snapshot, are rather low too.
Although the 3850 series is "known" for high drop rates using default settings (much like the earlier 3750 series with QoS enabled, also with default settings), your drop rates are so high that either you're hitting an extreme case of transient/burst congestion, which will pull transmission rates down to low averages, or, since the problem is on the switches running a different IOS version from the other site, I wonder whether you've hit a software defect.
Have you tried something like "qos queue-softmax-multiplier 1200"?
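If that command is applicable, it's a global setting on the 3850, followed by a per-port verification; a sketch (the interface name is taken from the thread, and the verification command assumes a 16.x release):

```
! Increase the shared soft buffer (softmax) available to egress queues.
! The 3850 default multiplier is 100; 1200 is the maximum.
configure terminal
 qos queue-softmax-multiplier 1200
end

! Verify the resulting per-queue buffer allocation on an affected member port
show platform hardware fed switch active qos queue config interface gigabitEthernet 2/1/2
```

The change takes effect without a reload, so you can watch whether the "Total output drops" counter growth slows afterward.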
01-23-2022 10:16 AM
Site 1 where we have no drops we are using WS-C3850-48P stacked with 5 switches, IOS version: 03.03.05SE
Site 2 where we do have drops we are using WS-C3850-24T stacked with 2 switches, IOS version: 16.12.05b
01-25-2022 02:27 AM
Could this have an impact if one switch has EtherChannel Load-Balancing Configuration:
src-dst-ip
and the 2nd switch has:
src-mac
Should the setting be the same on both sides?
01-25-2022 02:59 AM - edited 01-25-2022 03:03 AM
Yes, I would suggest both sides be the same to get the best results.
EDIT: still looking for the information below (also see the suggestion from @Joseph W. Doherty):
OK, now can you post the router-side interface config and "show interface" output, from both the one getting output errors and the one not getting them?
To make it clear: Site 1 switch and router output (1st attachment) and Site 2 switch and router output (2nd attachment),
so we can also easily read what code each device is running.
01-25-2022 03:11 PM
"Could this have am impact if
1 switch has EtherChannel Load-Balancing Configuration:
src-dst-ip
2nd switch has:
EtherChannel Load-Balancing Configuration:
src-mac"
Possibly; at least when it comes to how well links are load balanced.
"should the setting be the same on both sides?"
Possibly, possibly not; depends on your traffic. In the earlier Cisco switches that only support src-x and dst-x, often the two sides should be different. With switches that support src-dst-x, often, both sides can use the same, although the -x might vary.
Considering your two interfaces' output counts are:
7359702
14831131
your links will quite possibly balance better if you change your LB selection on that switch to src-dst-ip. However, I suspect you'll still see an abnormally high drop count.
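Why the LB method matters here can be sketched roughly as follows. This is a hypothetical illustration, not Cisco's actual hash: the switch hashes the selected header fields and takes the result modulo the number of member links, so src-mac pins all frames from one source (e.g. a single router MAC) to one member, while src-dst-ip can spread flows across both.

```python
import ipaddress

def member_src_dst_ip(src_ip: str, dst_ip: str, n_links: int) -> int:
    """src-dst-ip style: combine source and destination IPs, mod link count."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % n_links

def member_src_mac(src_mac: str, n_links: int) -> int:
    """src-mac style: hash only the source MAC, mod link count."""
    return int(src_mac.replace(":", ""), 16) % n_links

# Four flows from one router (one source IP / one source MAC) to four hosts
flows = [("10.0.0.1", f"10.0.1.{i}") for i in range(1, 5)]
print([member_src_dst_ip(s, d, 2) for s, d in flows])   # spreads over both links
print([member_src_mac("aa:bb:cc:dd:ee:01", 2) for _ in flows])  # all one link
```

With src-mac and a router on one end, every frame hashes identically and one member link carries all the load (and all the drops), which matches the imbalance in the counters above.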