
high interface discards between 2 sites

BhavMistry
Level 1

We have high packet discards between two sites.

The primary site has 2 x 10 Gig SFPs; the other side is connected via 2 x 1 Gig.

Both sites have a port channel with the exact same config.

Speed negotiated at 2 Gig.

The interface configuration is the same on both sides:

Gi1/1/2 and Gi2/1/2 as an EtherChannel / port channel

TenGi3/1/2 and TenGi4/1/2 as an EtherChannel / port channel

 

Site 2 is where the high discards are happening.

 

Site 1

TenGigabitEthernet3/1/2 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet,
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 3/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:12, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0

 

TenGigabitEthernet4/1/2 is up, line protocol is up (connected)
Hardware is Ten Gigabit Ethernet,
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:04, output never, output hang never
Last clearing of "show interface" counters never
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0

Site 2

GigabitEthernet2/1/2 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet,
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 3/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
input flow-control is on, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:06, output 00:00:15, output hang never
Last clearing of "show interface" counters 25w0d
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 479274732293

 

GigabitEthernet1/1/2 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet,
MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not set
Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
input flow-control is on, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:05, output 00:00:03, output hang never
Last clearing of "show interface" counters 25w0d
Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 9035358488

Any help would be appreciated.

 

TIA

 

19 Replies

Ah, those interface stats are quite interesting!  Besides Gi1/1/2 showing a very high drop percentage, Gi2/1/2 has more drops than packets output!!!  The transmission load averages, for that interface stats snapshot, are rather low too.

Although the 3850 series is "known" for high drop rates at default settings (much like the earlier 3750 series with QoS enabled, also at default settings), your drop rates are so high that either you're hitting an extreme case of transient/burst congestion (which would pull transmission rates down to low averages), or, since the problem is on the switches running one IOS version and not the other, I wonder whether you've hit a software defect.

Have you tried something like "qos queue-softmax-multiplier 1200"?
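For reference, a sketch of how that might be applied on a 3850 (assuming IOS-XE 16.x; verify the supported multiplier range on your release):

```
! Global config: raise the shared (soft) buffer ceiling for egress queues.
! 1200 is the maximum multiplier on most 3850 releases.
conf t
 qos queue-softmax-multiplier 1200
end
! Then watch whether the drop counter keeps climbing:
show interfaces GigabitEthernet2/1/2 | include output drops
```

If the drops slow dramatically after this, that points at transient/burst congestion exhausting the default per-queue buffers rather than a defect.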

 

BhavMistry
Level 1

Site 1, where we have no drops, is running WS-C3850-48P in a 5-switch stack, IOS version 03.03.05SE.

Site 2, where we do have drops, is running WS-C3850-24T in a 2-switch stack, IOS version 16.12.05b.

BhavMistry
Level 1

Could this have an impact if
one switch has EtherChannel Load-Balancing Configuration:
src-dst-ip
and the 2nd switch has:
EtherChannel Load-Balancing Configuration:
src-mac

Should the setting be the same on both sides?

Yes, I would suggest both sides be the same to get the best results.
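To check and align the setting, something like this (commands as on IOS-XE; a sketch, verify on your platform):

```
! On each switch, see what is configured today
show etherchannel load-balance

! Set both sides to the same method, e.g.:
conf t
 port-channel load-balance src-dst-ip
end
```

Note this is a global setting on the 3850, so it affects all port channels on that switch.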

 

EDIT: still looking for the information below (also see the suggestion from @Joseph W. Doherty).

 

OK, now can you post the router-side interface config and "show interface" output, both for the device getting output errors and the one that is not?

To make it clear: Site 1 switch and router output (1st attachment) and Site 2 switch and router output (2nd attachment),
so we can also easily read what code each device is running.

 

BB


"Could this have an impact if
1 switch has EtherChannel Load-Balancing Configuration:
src-dst-ip
2nd switch has:
EtherChannel Load-Balancing Configuration:
src-mac"

Possibly; at least when it comes to how well links are load balanced.

"should the setting be the same on both sides?"

Possibly, possibly not; depends on your traffic.  In the earlier Cisco switches that only support src-x and dst-x, often the two sides should be different.  With switches that support src-dst-x, often, both sides can use the same, although the -x might vary.

Considering your two interfaces' output counts are:

7359702

14831131

your links will (possibly, even likely) balance better if you change the LB selection on that switch to src-dst-ip.  However, I suspect you'll still see an abnormally high drop count.
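To illustrate why the hash input matters, here's a toy Python model (not Cisco's actual hash, which is platform-specific): with src-mac, every frame sourced from a single upstream MAC lands on the same member link, while src-dst-ip spreads distinct conversations across the bundle.

```python
def link_src_mac(src_mac: str, n_links: int = 2) -> int:
    """src-mac balancing: only the source MAC feeds the hash, so one
    talker (e.g. a WAN router) pins all its frames to one member link."""
    return int(src_mac.replace(":", ""), 16) % n_links

def link_src_dst_ip(src_ip: str, dst_ip: str, n_links: int = 2) -> int:
    """src-dst-ip balancing: XOR the low-order bits of both addresses,
    so different host pairs hash to different member links."""
    last = lambda ip: int(ip.split(".")[-1])
    return (last(src_ip) ^ last(dst_ip)) % n_links

# All frames from one router MAC: always the same member link.
print({link_src_mac("00:11:22:33:44:55") for _ in range(10)})
# Twenty different destination hosts: both links get used.
print({link_src_dst_ip("10.0.0.1", f"10.0.1.{h}") for h in range(1, 21)})
```

In your case, if most of the traffic toward Site 2 arrives from a single router MAC, src-mac would explain the uneven output counts between the two members.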
