Trunk between Cisco and Aruba

mulbreizh
Level 1

Hello, and sorry for my bad English.

I have a LAG/EtherChannel between my Cisco C3850 and two Aruba 8100 switches in a stack. I am having trouble because a lot of packets are being discarded, as you can see in the picture below.

My configuration on the Cisco is like this:
interface Port-channel32
description aruba
switchport trunk native vlan 999
switchport mode trunk
switchport nonegotiate

interface TenGigabitEthernet1/0/39
description Aruba A8100
switchport trunk native vlan 999
switchport mode trunk
switchport nonegotiate
channel-group 32 mode active
!
interface TenGigabitEthernet1/0/40
description Aruba A8100
switchport trunk native vlan 999
switchport mode trunk
switchport nonegotiate
channel-group 32 mode active

and on the Aruba:

interface lag 1 multi-chassis
description LACP-to-Coeur
no shutdown
no routing
vlan trunk native 999
vlan trunk allowed 1-2,5,15,17,21-22,25-26,45,51,54,56,61-62,70,89,100,102,104,110,999
lacp mode active
spanning-tree bpdu-filter
spanning-tree rpvst-filter

interface 1/1/47
description aggr LAG1 to core
no shutdown
lag 1
!
interface 1/1/48
description aggr LAG1 to core
no shutdown
lag 1

Do you think this configuration is bad?
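(For reference, a quick sketch of standard show commands to confirm the bundle state and the drop counters on each side; none of these come from the thread, so verify the exact syntax on your releases:)

On the Cisco 3850:
show etherchannel 32 summary
show interfaces Port-channel32 | include drops

On the Aruba 8100 (AOS-CX):
show lag 1
show lacp interfaces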

[Screenshot: discarded packet counters on the port-channel]

61 Replies

To be sure, sorry because my English is not so good:

On the Cisco core, I have this in my global configuration: port-channel load-balance src-dst-mac

Do I leave this configuration and, just on the 2 port-channels, add: port-channel load-balance src-dst-mixed-ip-port?

(Or do I also change something in the global configuration?)

Is that right?

When I apply the new port-channel load-balance, will this shut down the port-channel for a few seconds?
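(For reference, a sketch of the change being discussed. To my understanding the 3850 only takes this as a global setting, not per port-channel, and changing it just re-hashes traffic without shutting the bundle; please verify on your release:)

configure terminal
 port-channel load-balance src-dst-mixed-ip-port
end
show etherchannel load-balance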


I posted it for interface 1/0/39; it seems to be the same for the other interfaces: a lot of drop-th2.
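(For context, "drop-th2" is the per-queue threshold-2 drop counter from the 3850 hardware queue statistics; a sketch of the command that typically shows it, with syntax to verify on your IOS-XE release:)

show platform hardware fed switch active qos queue stats interface tenGigabitEthernet 1/0/39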

[Screenshots: drop statistics for the interface]

Hi,

Thanks for the information supplied. Do you experience output drops at the Aruba box as well?

Thanks & Regards,

Antonin

No drops on the Aruba side connected to the Cisco.

Hi,

Thanks for the reply. Can you please post the outputs when issuing the following commands at the Aruba box:

"show interfaces lag 1 queues"

"show interface lag 1 statistics"

Thanks & Regards,

Antonin

 

[Screenshots: output of "show interfaces lag 1 queues" and "show interface lag 1 statistics"]

Hmm, the earlier Cisco port-channel output showed Po32 and Po33 with 4 ports each, but Aruba lag 1 shows two ports?

Hello,

It is because the 2 Arubas are not really a stack but a cluster. One port of each Aruba can be in the same LAG, but when we run a command to see stats or something else, we can only see the switch we are connected to. Yes, I know, it is not very easy. Aruba uses VSX to synchronize configuration between the switches, but each switch has its own IP address.

Ah, sounds somewhat similar to Cisco's vPC.  (?)

Although I don't (yet) believe your drop issues are due strictly to a (permanent) L2 loop, which I believe is the case @MHM Cisco World is making, I do agree with him that there may be various L2 issues that should be addressed. The recent fact that shutting down one of the 3850 EtherChannels caused loss of Aruba connectivity is quite concerning.

Your recently posted Aruba stats also show packets being enqueued but not dropped; that possibly implies that increasing queue depths on the 3850s may mitigate or even eliminate the drops.

Hi,

Thanks for the information provided. You may consider configuring very basic QoS, as per below, to substitute the default one for Po32, and check if there is any improvement.

class-map CONTROL_TRAFFIC
 match dscp 16 24 32 40 46 48 56
policy-map TEST
 class CONTROL_TRAFFIC
  bandwidth percent 10
  queue-buffers ratio 10
 class class-default
  bandwidth remaining percent 100
  queue-buffers ratio 90
interface range te 1/0/39-40, te 2/0/37-38
 service-policy output TEST

I understand that you have already configured "qos queue-softmax-multiplier 1200".
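Once applied, the attachment and the per-class counters can be checked with the standard show commands, for example:

show policy-map TEST
show policy-map interface TenGigabitEthernet1/0/39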

Best regards,

Antonin



@amikat wrote:

Thanks for the information provided. You may consider configuring very basic QoS, as per below, to substitute the default one for Po32, and check if there is any improvement.

class-map CONTROL_TRAFFIC
 match dscp 16 24 32 40 46 48 56
policy-map TEST
 class CONTROL_TRAFFIC
  bandwidth percent 10
  queue-buffers ratio 10
 class class-default
  bandwidth remaining percent 100
  queue-buffers ratio 90
interface range te 1/0/39-40, te 2/0/37-38
 service-policy output TEST

I understand that you have already configured "qos queue-softmax-multiplier 1200".


BTW, what Antonin suggests is a kind of tuning I alluded to in earlier replies.

I haven't suggested specific configurations, like Antonin's, because I believe we have insufficient information to do so.

For example, in this suggested configuration Antonin is defining usage for just two queues, but in a recent reply you made to Antonin we see traffic in 4 of the 8 possible queues. So, is it "good" to reduce from 4 to 2 queues? I don't know.

Possibly (???) the syntax of the shown policy-map isn't quite correct.

I recall (?) that when matching DSCP (or IP Prec) values, you used to be limited to matching just 4 per statement. It's certainly possible your IOS supports 7 values.

If not, you might be able to use:

class-map match-any CONTROL_TRAFFIC
match dscp 16 24 32
match dscp 40 46 48 56

Also, I recall that somewhere across IOS versions Cisco moved from "match ip dscp" to "match dscp".

On a switch, QoS is supported on the physical interface, but possibly the service-policy might be applied to the port-channel interface. (Logically, that's where it belongs, and over the years Cisco has required less and less EtherChannel member-port configuration in favor of port-channel interface configuration.)

Such stats are not surprising.

I.e., the queue fills, and unless the offered rate stops exceeding the transmission rate, drops happen.

So, what can be done to mitigate?

There are several approaches which are not mutually exclusive.

If it's a transient situation, increasing the queue limit can mitigate or even eliminate drops. Have you applied the softmax change suggested much earlier? If so, did the rate of drops decrease? If so, performing additional tuning may reduce drops even further. (It's hopeful that the Aruba side doesn't show drops.)
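(For reference, the softmax change referred to is the global 3850 buffer command mentioned earlier in the thread:)

qos queue-softmax-multiplier 1200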

Before doing additional QoS changes, again, I recommend trying another LB algorithm. Here too, the Aruba seems to be configured using the LB algorithm I initially suggested. However, as your 3850 and its IOS appear to offer better choices (thanks again to @amikat for mentioning them), again, I suggest you use one of those.

As also suggested, if additional links can be added, or the links replaced with a single 40G, that should also reduce drops.

Lastly, QoS might be configured to optimize "goodput" and/or to treat different kinds of traffic differently. These options often require an expert level of QoS knowledge/skills and possibly ongoing maintenance.

You are wasting your time.
This is for you and the others.
In this simple topology, R1 connects to the R2 SW via two links.
Before I enable bpdufilter on R1:

there is reachability, and the input count on the interface is 3 or a little more.

After I enable bpdufilter WITH the wrong design, see what happens:
NO reachability, and the input count jumps into the thousands, 20716!!!!

This is with only two hosts connected to the SW; imagine more than 100 hosts, and the link would surely see a high rate of traffic.

This is my last comment here because I will be busy with another issue:
remove one PO,
and then use MST on both SWs.
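(For reference, a minimal sketch of matching MST on both sides; the region name, revision, and the AOS-CX keywords are assumptions to verify against the Cisco and Aruba documentation:)

On the Cisco:
spanning-tree mode mst
spanning-tree mst configuration
 name REGION1
 revision 1
 instance 1 vlan 1-4094

On the Aruba (AOS-CX):
spanning-tree mode mstp
spanning-tree config-name REGION1
spanning-tree config-revision 1
spanning-tree instance 1 vlan 1-4094

Both sides must use the same region name, revision, and instance-to-VLAN mapping to form a single MST region.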

DON'T WASTE YOUR TIME.

[Screenshots 744-755: lab demonstration before and after bpdufilter]

I tried to shut down port-channel 32 and leave port-channel 33 online.

But when I do that, when I shut down port-channel 32, I lose my network to the Aruba.

Then keep 32 and shut 33.

MHM