Hi everyone, I have dual 6120 interconnects, each of which has two uplink ports. I also have two Catalyst 4900M switches to uplink to, so I plan to configure both 802.1Q and 802.3ad on each of the 6120 uplink pairs. My question relates to the physical cabling, bearing in mind ESX will be running on the blades.
Would you cable both the uplink ports on interconnect 1 to switch 1? Or would you cross-connect them to both switch 1 and switch 2?
As per this article, I see ESX does not support splitting a port channel across two switches, but I am unsure how the presence of the 6120s affects this. Can anyone suggest the best way to do the cabling with the kit I have? Thanks.
Simon, there are a few ways you could cable it up, but I personally like to keep it as simple as possible and keep the fabrics split all the way through the infrastructure.
I would cable both uplinks from Fabric Interconnect A to 4900 1 and both uplinks from Fabric Interconnect B to 4900 2. You can create an LACP port channel on those two uplinks if you want, but it doesn't buy you a whole lot, because UCS will round-robin which uplink each blade's vNIC gets pinned to. It works a lot like it does in VMware.
Think of the uplinks from the 6120s to the 4900s like the uplinks from an ESX vSwitch with two VMNIC uplinks: VMs get pinned to a VMNIC, and ESX round-robins which VMNIC uplink a VM uses.
You know how you could also port-channel two VMNICs on a traditional rack-mount server? That didn't buy you a whole lot and just added complexity. It works almost the same with the uplinks from the 6120s to the northbound switches.
Depending on which adapter you have in your blades, you will create at least two vNICs in your ESX Service Profiles, one for Fabric A and one for Fabric B. I wouldn't enable failover on the vNICs; let ESX handle that at the vSwitch level.
I am sure other people have their own opinion on how it should be done, but as I stated earlier, I like to keep it simple.
Hey Simon, let's address this in two parts, because there are two layers: server-bound (ESX to 6120) and northbound (6120 to Cat4900).
At the ESX end, use the default srcPort load balancing (route based on originating virtual port ID) in the vSwitch / port groups, with both 10GbE uplink ports (vmnic0 and vmnic1) set to active (not active/standby). There are no loops in this configuration. On the vNIC (in UCSM) you select trunking and allow all the VLANs that your vSwitch port groups need. If you don't use a tagged VLAN for the Service Console/ESXi Management traffic, then make sure you set the Native VLAN on the vNIC correctly so untagged traffic goes down the right path.
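To make the ESX side concrete, here is a rough sketch using the classic esxcfg-* service console tools; vSwitch0, the vmnic numbers and VLAN 100 are just example names, not taken from your setup. The srcPort policy is the vSwitch default, so you normally only confirm it in the vSphere Client under the NIC Teaming tab.

```
# Sketch only -- vSwitch0, vmnic0/vmnic1 and VLAN 100 are examples
esxcfg-vswitch -L vmnic0 vSwitch0                     # add first 10GbE uplink (Fabric A)
esxcfg-vswitch -L vmnic1 vSwitch0                     # add second 10GbE uplink (Fabric B)
esxcfg-vswitch -A "VM Network 100" vSwitch0           # create a VM port group
esxcfg-vswitch -v 100 -p "VM Network 100" vSwitch0    # tag the port group with VLAN 100
esxcfg-vswitch -l                                     # list vSwitches to verify
```

Note there is no port channel on the ESX side at all; both vmnics simply stay active in the team.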
At the 6120 you can't explicitly configure a trunk on the border ports; they are trunks already, and when you add VLANs in UCSM they are added to every border port's trunk configuration. You do, of course, need to make sure the Cat4900 ports at the other end of the cable have the same trunk VLAN configuration.
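For reference, the matching Cat4900 end might look something like this in IOS; the interface and VLAN numbers are purely illustrative:

```
interface TenGigabitEthernet1/1
 description Uplink from 6120-a
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100,200
 switchport trunk native vlan 100
 switchport mode trunk
```

If you carry untagged management traffic, the native VLAN here has to match the Native VLAN you set on the vNIC in UCSM.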
So, in summary: set the VLANs in the vSwitch port groups, on the vNICs in UCSM, and on the northbound switch.
Now, when it comes to port channels, you can only configure two or more border ports as a port channel if they are cabled to the same Cat4900. If you had Nexus 5000s/7000s northbound, you could use vPC on them to allow "mesh" cabling (i.e. one 6120 connecting to both Nexus switches). But as you aren't using Nexus, you will end up with the configuration Jeremy explained:
ESX --> vSwitch0 --> srcPort load balancing --> 2 active uplinks --> many port groups
6120-a --> Cat4900-a
6120-b --> Cat4900-b
However, I would still recommend putting the pair of border ports on each fabric into a port channel to its Cat4900, because if you pull a cable or lose a port, then when that port is re-enabled the existing live traffic is redistributed across the channel, whereas without the port channel only new traffic will be spread across both links.
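On the Cat4900 side, a two-port LACP channel to one fabric would look roughly like the sketch below (again, the interface and channel-group numbers are examples); the UCS end of the port channel is created in UCSM under the LAN tab rather than at a CLI.

```
interface Port-channel1
 description Channel to 6120-a
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
interface range TenGigabitEthernet1/1 - 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active    ! LACP active mode
```

You would repeat the same on the second Cat4900 for the two ports going to 6120-b, keeping the two fabrics separate end to end.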