06-05-2023 04:21 PM - edited 06-06-2023 06:12 AM
UCS C480 M5 with VIC 1455 (4x 25Gb links) and N9K TORs. The UCS server runs Oracle DBs on RHEL 8.
The 2x N9K switches are a vPC pair in LACP (active) mode. From the server side there are 2 links per switch: VIC ports 1 & 2 go to switch 1 and ports 3 & 4 go to the second switch. Port-channeling on the VIC is disabled.
For the first bond (host SSH), I created 4x vNICs corresponding to the respective uplink ports. All vNICs have trunk mode enabled and default VLAN A. Set up Linux bonding using the 4x vNICs. The bond is working.
For the second bond (for storage L2 connectivity), I created 4x vNICs corresponding to the respective uplink ports. All vNICs have trunk mode enabled and default VLAN B. Set up Linux bonding using the 4x vNICs.
As soon as I bring up the second bond, the port channel goes down on the TORs. As soon as I bring the second bond back down, the first bond comes alive and connectivity is restored, i.e. the port channel comes back up.
Any idea what's wrong here?
Thanks
06-05-2023 05:18 PM
Hi,
It's a bit difficult to picture the scenario. Have you tried using only one switch? I know you are seeking redundancy, but it sounds to me like this setup isn't supported with two switches, if I understood the scenario correctly. If it works with only one Nexus, you may have your answer.
06-06-2023 07:13 PM - edited 06-07-2023 06:57 PM
OK, I think having 2 LACP bonds on the server side is not good. That could have caused the port channel to flap. I changed the second bond to mode 2 (balance-xor), and it came up okay.
Based on the RH documentation, mode 2 requires a static aggregate, i.e. non-LACP. I need to have multiple bonds with IPs on the host, so I think I should go with mode 2 and remove LACP from the switch config to make it compliant.
Can anyone concur with my plan?
Thanks
Edit: corrected bond 2 to balance-xor.
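For what it's worth, a minimal sketch of that combination, with hypothetical names throughout (eno5/eno6 on the host, Eth1/1-2 and channel-group 20 on the Nexus):

    # Linux side (RHEL 8 / NetworkManager): bond in balance-xor (mode 2)
    nmcli con add type bond con-name bond1 ifname bond1 \
        bond.options "mode=balance-xor,miimon=100"
    nmcli con add type ethernet con-name bond1-p1 ifname eno5 master bond1
    nmcli con add type ethernet con-name bond1-p2 ifname eno6 master bond1

    ! Nexus side: static (non-LACP) port-channel -- "mode on" instead of "mode active"
    interface Ethernet1/1-2
      channel-group 20 mode on

Keep in mind that with "mode on" there are no LACPDUs, so the switch bundles unconditionally and a mismatch between the two ends is never detected or negotiated away.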
06-07-2023 05:42 AM
You cannot have two LACP teams/bonds/port-channels over the same physical wires.
You can create one LACP team/bond/port-channel over the four vNICs (one on each physical port).
After the one LACP bond0 is created, then add your SSH IP connectivity:
(bond0 which would be on the untagged / native VLAN).
(or bond0.22 for tagged VLAN 22).
Then add a second storage IP using VLAN tags to the same LACP bond:
(bond0.3260 for tagged VLAN storage access to VLAN 3260).
Hope that helps.
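In practice, on RHEL 8 with NetworkManager, that could look something like the sketch below. The eno5-eno8 slave interface names and the IP addresses are placeholders; VLANs 22 and 3260 are just the examples above.

    # One LACP (802.3ad) bond across all four vNICs
    nmcli con add type bond con-name bond0 ifname bond0 \
        bond.options "mode=802.3ad,miimon=100"
    nmcli con add type ethernet con-name bond0-p1 ifname eno5 master bond0
    nmcli con add type ethernet con-name bond0-p2 ifname eno6 master bond0
    nmcli con add type ethernet con-name bond0-p3 ifname eno7 master bond0
    nmcli con add type ethernet con-name bond0-p4 ifname eno8 master bond0

    # SSH IP directly on bond0 (untagged / native VLAN)
    nmcli con mod bond0 ipv4.method manual ipv4.addresses 192.0.2.10/24

    # Storage IP on a tagged VLAN 3260 sub-interface of the same bond
    nmcli con add type vlan con-name bond0.3260 ifname bond0.3260 \
        dev bond0 id 3260 ipv4.method manual ipv4.addresses 198.51.100.10/24

Once it's up, cat /proc/net/bonding/bond0 should show all four slaves in the same 802.3ad aggregator.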
06-07-2023 05:50 AM
Can you share the topology?
06-07-2023 02:32 PM
Hi @SamUrai ,
Check @Steven Tardy's reply - he has answered your question.
You need to understand how LACP works. Once the 9Ks have formed a port-channel with your first 4 vNICs, the bond is complete. LACP has done its job. You can't have your cake and eat it too!
To be able to do what you want to do, you'd need to be able to virtualise the 9K ethernet interfaces in much the same way as the UCS NIC interfaces are virtualised. And I don't know of any switch that does that.
06-07-2023 06:55 PM
Yes, I understand now that only one LACP bond over the same interfaces is possible. Learned it a bit the hard way... my network guy didn't know that either, lol.
As I don't want to put a burden on the OS by adding tagged bonds, I prefer having vNICs with default VLANs instead. I referred to this document: VIC 14XX in Standalone and UCSM Integrated Mode - Cisco.
Now I'm planning to use a static port-channel on the switch, with the VIC HW port-channel and bond mode 2 on the OS side. I will have to check what type of load balancing the switch can do with a static port-channel, though. On the OS side, I plan to use xmit_hash_policy layer3+4 for client traffic and layer2 for NetApp NFS storage.
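As a sketch, the two hash policies could be set per bond like this (bond1/bond2 are placeholder names):

    # Client-traffic bond: hash on IP addresses and L4 ports
    nmcli con mod bond1 +bond.options "xmit_hash_policy=layer3+4"

    # NetApp NFS storage bond: hash on L2 (MAC) addresses
    nmcli con mod bond2 +bond.options "xmit_hash_policy=layer2"

Note that xmit_hash_policy only controls which slave each egress flow takes; return traffic is spread by the switch's own port-channel load-balance method.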
Our network topology is as given below.
Netapp switches
Aggr1 Aggr2
Core1 Core2
Aggr3 Aggr4
TOR1 TOR2
DB servers
06-08-2023 06:55 AM
Yeah. Don't think that will do what you envision it to do.
Are TOR1/TOR2 a vPC'd/MCLAG pair? Or is this some VXLAN leaf/spine configuration?
Creating the hardware port-channel on the VIC will prevent enabling vPC upstream, but will provide "50Gbps" vNICs.
You could create 4 vNICs, two on each uplink port-channel. But then, without vPC upstream, the upstream switches WILL get very angry when your MAC addresses flap between switch TOR1 and TOR2 if you use anything besides active-backup.
If you go this route, then I would suggest active-backup between the two vNICs for redundancy, with each active-backup group's primary on the "other" side.
Even in this configuration there is still a very real issue: "half of your traffic" will land on the *wrong* upstream and *must* cross (read: overload) the peer-link if TOR1/TOR2 are vPC'd upstream.
If TOR1/TOR2 are not vPC'd, then during steady-state/normal operations TOR1 is "dedicated" management and TOR2 is "dedicated" NFS, which isn't the worst solution to have.
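A sketch of the active-backup arrangement suggested above, assuming hypothetical vNIC names: ens1/ens3 homed to TOR1 and ens2/ens4 homed to TOR2.

    # Management bond: active-backup with its primary on the TOR1 side
    nmcli con add type bond con-name bond-mgmt ifname bond-mgmt \
        bond.options "mode=active-backup,miimon=100,primary=ens1"
    nmcli con add type ethernet ifname ens1 master bond-mgmt
    nmcli con add type ethernet ifname ens2 master bond-mgmt

    # NFS bond: active-backup with its primary on the *other* (TOR2) side
    nmcli con add type bond con-name bond-nfs ifname bond-nfs \
        bond.options "mode=active-backup,miimon=100,primary=ens4"
    nmcli con add type ethernet ifname ens3 master bond-nfs
    nmcli con add type ethernet ifname ens4 master bond-nfs

In steady state, management traffic stays on TOR1 and NFS on TOR2, and either bond fails over to the opposite switch on a link loss.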
06-08-2023 06:59 AM
Just draw your topology.
Thanks