08-30-2024 05:26 AM - edited 08-30-2024 05:33 AM
Hello, and sorry for my bad English.
I have a LAG/EtherChannel between my Cisco C3850 and two stacked Aruba 8100 switches. I am having trouble because a lot of packets are being discarded, as you can see in the picture below.
My configuration on the Cisco side is:
interface Port-channel32
description aruba
switchport trunk native vlan 999
switchport mode trunk
switchport nonegotiate
interface TenGigabitEthernet1/0/39
description Aruba A8100
switchport trunk native vlan 999
switchport mode trunk
switchport nonegotiate
channel-group 32 mode active
!
interface TenGigabitEthernet1/0/40
description Aruba A8100
switchport trunk native vlan 999
switchport mode trunk
switchport nonegotiate
channel-group 32 mode active
and on the Aruba side:
interface lag 1 multi-chassis
description LACP-to-Coeur
no shutdown
no routing
vlan trunk native 999
vlan trunk allowed 1-2,5,15,17,21-22,25-26,45,51,54,56,61-62,70,89,100,102,104,110,999
lacp mode active
spanning-tree bpdu-filter
spanning-tree rpvst-filter
interface 1/1/47
description aggr LAG1 to core
no shutdown
lag 1
!
interface 1/1/48
description aggr LAG1 to core
no shutdown
lag 1
Do you think there is something wrong with this configuration?
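A minimal sketch of health checks worth running on a bundle like this, assuming IOS-XE on the C3850 and AOS-CX on the Aruba 8100 (interface names follow the configs above):
! Cisco C3850 side:
show etherchannel 32 summary
show interfaces port-channel 32 counters errors
! Aruba 8100 (AOS-CX) side:
show lacp interfaces
show interface lag 1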
09-06-2024 12:28 AM - edited 09-06-2024 12:29 AM
Just to be sure (sorry, my English is not very good):
On the Cisco core, I have this in the global configuration: port-channel load-balance src-dst-mac
Do I leave that as is and just add port-channel load-balance src-dst-mixed-ip-port on the two port-channels (or do I also change something in the global configuration)?
Is that right?
When I apply the new port-channel load-balance, will the port-channel go down for a few seconds?
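A minimal sketch of the change in question, assuming IOS-XE on the C3850, where port-channel load-balance is a global command (it applies to all EtherChannels at once, and changing it should not bounce the bundle, though verify on your release):
configure terminal
 port-channel load-balance src-dst-mixed-ip-port
end
! Verify the algorithm in use, and check how a given flow hashes to a
! member link (the two IP addresses below are hypothetical examples):
show etherchannel load-balance
test etherchannel load-balance interface port-channel 32 ip 10.0.0.1 10.0.0.2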
09-04-2024 11:34 PM
I posted the output for interface 1/0/39; it seems to be the same for the other interfaces: a lot of drop-th2.
09-05-2024 04:00 AM
Hi,
Thanks for the information supplied. Do you experience output drops at the Aruba box as well?
Thanks & Regards,
Antonin
09-05-2024 06:02 AM
No drops on the Aruba side of the connection to the Cisco.
09-06-2024 01:04 AM
Hi,
Thanks for the reply. Can you please post the outputs when issuing the following commands at the Aruba box:
"show interfaces lag 1 queues"
"show interface lag 1 statistics"
Thanks & Regards,
Antonin
09-06-2024 01:14 AM
[The requested Aruba LAG outputs were attached as images.]
09-06-2024 04:10 AM
Hmm, the earlier Cisco port-channel output showed po32 and po33 with 4 ports each, but the Aruba lag1 shows only two ports?
09-06-2024 04:56 AM
[Reply explaining the Aruba multi-chassis LAG setup.]
09-06-2024 06:52 AM
Ah, sounds somewhat similar to Cisco's vPC. (?)
Although I don't (yet) believe your drop issues are due strictly to a (permanent) L2 loop, which I believe is the case @MHM Cisco World is making, I do agree with him that there may be various L2 issues that should be addressed. The recent fact that shutting down one of the 3850 EtherChannels caused loss of Aruba connectivity is quite concerning.
Your recently posted Aruba stats also show packets being enqueued but not dropped, which possibly implies that increasing queue depths on the 3850s may mitigate or even eliminate the drops.
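A minimal sketch of that queue-depth tuning, assuming a C3850 running IOS-XE; the softmax multiplier (the value already suggested earlier in this thread) raises how much shared buffer a port may draw on:
configure terminal
 qos queue-softmax-multiplier 1200
end
! Then watch whether the per-queue drop counters keep climbing:
show platform hardware fed switch active qos queue stats interface tenGigabitEthernet 1/0/39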
09-09-2024 02:11 AM - edited 09-09-2024 02:12 AM
Hi,
Thanks for the information provided. You may consider configuring very basic QoS, as below, to substitute for the default on Po32, and check if there is any improvement.
class-map CONTROL_TRAFFIC
match dscp 16 24 32 40 46 48 56
policy-map TEST
class CONTROL_TRAFFIC
bandwidth percent 10
queue-buffers ratio 10
class class-default
bandwidth remaining percent 100
queue-buffers ratio 90
interface range te 1/0/39-40, te 2/0/37-38
service-policy output TEST
I understand that you have already configured "qos queue-softmax-multiplier 1200".
Best regards,
Antonin
09-09-2024 09:53 AM
@amikat wrote:
Thanks for the information provided. You may consider configuring very basic QoS, as below, to substitute for the default on Po32, and check if there is any improvement.
class-map CONTROL_TRAFFIC
match dscp 16 24 32 40 46 48 56
policy-map TEST
class CONTROL_TRAFFIC
bandwidth percent 10
queue-buffers ratio 10
class class-default
bandwidth remaining percent 100
queue-buffers ratio 90
interface range te 1/0/39-40, te 2/0/37-38
service-policy output TEST
I understand that you have already configured "qos queue-softmax-multiplier 1200".
BTW, what Antonin suggests is the kind of tuning I alluded to in earlier replies.
I haven't suggested specific configurations, like Antonin's, because I believe we have insufficient information to do so.
For example, in this suggested configuration, Antonin is defining usage for just two queues, but in a recent reply you made to Antonin, we see traffic in 4 of the 8 possible queues. So, is it "good" to reduce from 4 queues to 2? I don't know.
Possibly (???) the syntax of the shown policy-map isn't quite correct.
I recall (?) that when matching DSCP (or IP Precedence) values, you used to be limited to matching just 4 per statement. It's certainly possible your IOS supports 7 values.
If not, you might be able to use:
class-map match-any CONTROL_TRAFFIC
match dscp 16 24 32
match dscp 40 46 48 56
Also, I recall that somewhere along the line of IOS upgrades, Cisco moved from "match dscp" to "match ip dscp".
On a switch, QoS is supported on the physical interface, but possibly the service-policy might be applied to the port-channel interface. (Logically, that's where it belongs, and over the years Cisco has required less and less EtherChannel member-port configuration in lieu of port-channel interface configuration.)
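A hedged sketch of attaching and verifying Antonin's TEST policy; the member-port form is shown because whether an output service-policy is accepted on the port-channel interface depends on the IOS-XE release:
interface range te 1/0/39-40, te 2/0/37-38
 service-policy output TEST
! Verify classification counters and per-class drops on a member port:
show policy-map interface tenGigabitEthernet 1/0/39 output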
09-05-2024 07:17 AM
Such stats are not surprising.
I.e., the queue fills, and unless the offered rate stops exceeding the transmission rate, drops happen.
So, what can be done to mitigate?
There are several approaches which are not mutually exclusive.
If it's a transient situation, increasing the queue limit can mitigate or even eliminate drops. Have you applied the softmax change suggested much earlier? If so, did the rate of drops decrease? If so, performing additional tuning may reduce drops even further. (It's a hopeful sign that the Aruba side doesn't show drops.)
Before making additional QoS changes, again, I recommend trying another LB algorithm. Here too, the Aruba side seems to be configured using the LB algorithm I initially suggested. However, as your 3850 and its IOS appear to offer better choices (thanks again to @amikat for mentioning them), I again suggest you use one of those.
As also suggested, if additional links can be added, or the links replaced with a single 40G link, that should also reduce drops.
Lastly, QoS might be configured to optimize "goodput" and/or to treat different kinds of traffic differently. These options often require an expert level of QoS knowledge/skills and possibly ongoing maintenance.
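A hedged sketch of adding one more member link per side, following the config patterns already posted (the spare port numbers here are hypothetical):
! Cisco side (hypothetical spare port):
interface TenGigabitEthernet1/0/41
 description Aruba A8100
 switchport trunk native vlan 999
 switchport mode trunk
 switchport nonegotiate
 channel-group 32 mode active
! Aruba side (hypothetical spare port):
interface 1/1/46
 description aggr LAG1 to core
 no shutdown
 lag 1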
09-06-2024 12:46 AM
You are wasting your time.
This is for you and the others.
In this simple topology, R1 connects to the R2 switch via two links.
Before I enable bpdufilter on R1, there is reachability, and the input count on the interface is 3 or a little more.
After I enable bpdufilter with the wrong design, see what happens:
NO reachability, and the input count jumps into the thousands: 20716!!!!
This is with only two hosts connected to the SW; imagine more than 100 hosts, and the links would surely see a high rate of traffic.
This is my last comment here because I will be busy with another issue:
Remove one PO,
and then use MST on both switches.
DON'T WASTE YOUR TIME.
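A hedged sketch of that MST suggestion; the region name and revision below are hypothetical, both sides must match for a single region to form, and removing the BPDU filters lets the two switches actually exchange BPDUs:
! Cisco C3850:
spanning-tree mode mst
spanning-tree mst configuration
 name CAMPUS
 revision 1
! Aruba 8100 (AOS-CX):
spanning-tree mode mstp
spanning-tree config-name CAMPUS
spanning-tree config-revision 1
spanning-tree
interface lag 1
 no spanning-tree bpdu-filter
 no spanning-tree rpvst-filter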
09-06-2024 02:10 AM
I tried to shut down port-channel 32 and leave port-channel 33 online.
But when I shut down port-channel 32, I lose my network connectivity to the Aruba.
09-06-2024 02:47 AM
Then keep 32 and shut 33.
MHM