01-12-2025 09:30 PM
Hello Experts,
I have a setup with two sites connected over an L2 trunk link, with one Nexus 95K at each site. Each site has a redundant fabric interconnect pair (A-B) configured in end-host mode. The fabric interconnects (A-B) at each site connect only to the single Nexus 95K at their respective site.
Site A: FI-A -- Nexus 95K 1 --- > Uplink Etherchannel A
Site A: FI-B -- Nexus 95K 1 --- > Uplink Etherchannel B
Site B: FI-A -- Nexus 95K 2--- > Uplink Etherchannel A
Site B: FI-B -- Nexus 95K 2 --- > Uplink Etherchannel B
The VLANs are configured in dual mode so that they are added to each uplink from the FI. The connection from FI-A/B at each site to a single Nexus 95K is seen as a potential single point of failure for the VMware environment, as the vMotion VLAN also uses the same uplinks.
We are considering adding another uplink from each fabric interconnect, in parallel to the Nexus 95K uplink, to a Cisco Catalyst 94K that has cross connections to the Nexus 95Ks at sites A and B. The idea is to replicate the VLANs from the fabric interconnect onto both the uplink to the N95K and the uplink to the Cat 94K, and let spanning tree on the switch side take care of VLAN redundancy.
No LAN pin groups or VLAN groups are currently in use.
What else needs to be considered to get this setup to work? Are there any gotchas when using multiple uplinks from an FI to separate switches carrying the same VLANs?
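For reference, a minimal sketch of what the upstream trunk configuration might look like if both the Nexus and Catalyst uplinks are meant to carry the same VLAN set (interface numbers and VLAN IDs below are hypothetical):

```text
! Nexus 95K -- uplink port-channel toward the FI (hypothetical interfaces/VLANs)
interface port-channel91
  switchport mode trunk
  switchport trunk allowed vlan 80,81,100
  spanning-tree port type edge trunk    ! FIs do not run STP; treat the link as host-facing

! Catalyst 94K -- parallel uplink toward the same FI, identical VLAN set
interface Port-channel92
  switchport mode trunk
  switchport trunk allowed vlan 80,81,100
  spanning-tree portfast trunk
```

The key point is that the allowed-VLAN lists on both parallel uplinks match; a mismatch would effectively require a Disjoint L2 configuration on the UCS side.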
01-13-2025 07:57 AM
How do you ensure that EtherChannel A's and B's traffic is kept separate? Have you configured Disjoint Layer 2 in UCS Manager?
Cross-connecting a UCS FI to separate upstream switches for redundancy is how UCS was designed. We run vPCs (Virtual Port Channels) between our upstream Nexus switches.
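For context, here is a minimal sketch of the kind of vPC setup referenced above, assuming two upstream Nexus switches peered together (the domain ID, keepalive addresses, and interface numbers are hypothetical):

```text
feature vpc
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

interface port-channel1
  switchport mode trunk
  vpc peer-link          ! carries all VLANs between the vPC peers

interface port-channel91
  switchport mode trunk
  vpc 91                 ! same vPC number on both peers; members connect to FI-A
```

With this in place the FI sees its two uplinks to the two Nexus switches as a single port channel, avoiding reliance on STP for uplink redundancy.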
01-13-2025 04:02 PM
Traffic separation on the fabric interconnects (FI-A and FI-B) is handled through dynamic pinning of VLANs to uplink EtherChannels A and B, which is a feature of FI end-host mode.
The output of the show pinning border-interfaces command shows that odd-numbered VLANs are dynamically pinned to the FI-A uplink, while even-numbered VLANs are pinned to the FI-B uplink.
This amounts to a disjoint Layer 2 setup from the UCS Manager perspective, where LAN connectivity policies and VLAN groups are not statically defined but are instead handled dynamically through end-host mode VLAN pinning.
Regarding the connectivity between the FIs and the upstream switches, we are still relying on Spanning Tree Protocol (STP), with no VSS or vPC configuration in place. This raises a concern about how the traffic will behave, as we are now connecting the FIs to separate Catalyst switches.
I’ve attached a diagram of the setup, where the red lines represent the new uplinks I plan to add. I’d appreciate any thoughts or feedback on this configuration.
01-14-2025 09:35 AM
I think you are missing a few fundamentals about how UCS Fabric Interconnects work.
UCS Fabric Interconnects are (mostly) completely independent devices which forward unknown dest MAC addresses out the "uplink".
UCS Fabric Interconnects presume all uplink ports carry all VLANs.
My reading of your description leads me to believe the Nexus and Catalyst switches carry the same VLANs.
If that is the case, then there is little to do. Add uplinks. Done.
If the uplink ports do NOT carry the same VLANs, then you must configure DJL2.
Out which uplink port, if there are multiple? That depends on each server's vNIC "pinning".
Each vNIC for each server should round-robin(ish) pin to one of all available individual uplinks.
This is not true round-robin load balancing, but good enough load sharing.
UCS Fabric Interconnects do not run spanning tree (and should really be thought of as a port-aggregator instead of a switch).
UCS will "listen" on only one uplink port for broadcasts (designated receiver).
UCS will egress out all uplink ports based on vNIC pinning.
Hope this helps.
Feel free to ask any questions this description raises.
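A minimal sketch of how the vNIC pinning described above can be inspected from the FI's NX-OS shell (the interface names in the output will vary per deployment):

```text
UCS-A# connect nxos a
UCS-A(nx-os)# show pinning server-interfaces    ! which uplink each server vNIC (veth) is pinned to
UCS-A(nx-os)# show pinning border-interfaces    ! pinned vifs listed per uplink port/port-channel
```

Repeat on fabric B (`connect nxos b`) to see the pinning for the B-side vNICs.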
01-14-2025 05:53 PM
Thanks, Steven, for the reply and the clarification.
As you understood, the idea is for the uplinks to the Nexus and Catalyst switches to carry the same VLANs from FI-A/B. Thanks for confirming that it would just work by adding more uplinks.
My question is: in earlier releases there was a G-Pin port dynamic selection criterion, which selected a single Ethernet uplink port (or port channel) on each fabric interconnect as the broadcast and multicast traffic receiver for all VLANs, and incoming broadcast traffic on the other uplinks was dropped for loop avoidance. Is this still the case? The following command no longer lists the broadcast/multicast uplink interface on FI version 4.2:
show platform software enm internal info global
Is there a way to find out which uplink is pinned for this traffic?
Thanks for sharing the info.
01-22-2025 12:22 PM
This DJL2 doc describes how to view the designated receiver. See the section "Verify the Designated Receiver".
For your deployment with two uplink port channels, I would expect to see something like the following: a single designated receiver (the Po to either the Nexus or the Catalyst) with multiple members (one Po to the Nexus and one Po to the Catalyst):
FI-A(nx-os)# show platform software enm internal info vlandb all
vlan_id 80
-------------
Designated receiver: Po91
Membership:
Po91, Po92