I've been thinking about some common deployments of vSphere 4 and the N1kv on UCS systems.
Usually I have two fabrics (A and B) on my UCS system and one to many vNICs defined on my service profiles.
Let's assume I have a standard ESX deployment with FC storage and 6 (v)NICs presented to the OS.
The vNIC definition may look like this:
vNIC0.... Fabric A
vNIC1.... Fabric B
vNIC2.... Fabric A
vNIC3.... Fabric B
vNIC4.... Fabric A
vNIC5.... Fabric B
with no failover configured (on the UCS side).
On the N1kv side I would configure 3 port-channels, each with 2 uplink Ethernet interfaces:
on the first port-channel I would place the Service Console,
on the second the VMkernel interfaces,
and on the third the VM traffic.
The first and second port-channels could also be combined into a single 2-NIC port-channel, but the 6-NIC config is the VMware "default".
I would use the "recommended" vPC-HM mac-pinning channel mode.
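For reference, an uplink port-profile using vPC-HM mac-pinning might look roughly like this on the N1kv; the profile name and VLAN range are my placeholders, not from the original setup:

```
port-profile type ethernet vm-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100-110
  ! vPC-HM with mac-pinning: each member port becomes its own sub-group
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled
```

When two vNICs (one per fabric) inherit this profile, the VSM builds one port-channel with two sub-groups, one per fabric.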
This looks like a common deployment.
Now to my question:
On the UCS side it is important that horizontal traffic (traffic between the blades) is switched on either Fabric A or B and is not carried over the upstream LAN infrastructure. Now when I configure the port-channel with mac-pinning using 2 vNICs (one pinned to Fabric A and the other to Fabric B), traffic is distributed between the two links according to the mac-pinning (and not to the load-balancing algorithm, in my opinion).

For the Service Console and VM traffic that's a good idea, but when I take a closer look at VMotion it no longer seems so good. Because of mac-pinning I don't have the possibility to define a "primary" and "secondary" interface or some other kind of static pinning, and therefore in 50 percent of the VMotion processes the traffic is forwarded over the "outside" LAN infrastructure. Are there any possibilities to control the mac-pinning behavior of the N1kv?
Using the Palo adapter, I would suggest using the standard ESX vSwitch with the VMkernel on it in a primary/secondary config, with each Ethernet interface pinned to one fabric.
Does anybody have a better / different approach to handle this?
Let's consider the example you mentioned below:
vmnic0 and vmnic1 using Profile A, which will result in Po1.
vmnic2 and vmnic3 using Profile B, which will result in Po2.
vmnic4 and vmnic5 using Profile C, which will result in Po3.
Each port-channel will have two sub-groups: Po1 will have SG 0 and SG 1, Po2 will have SG 2 and SG 3, and Po3 will have SG 4 and SG 5.
The virtual ports behind the VEM will be automatically pinned across the sub-groups in round-robin fashion. All the traffic from/to a vEth will be sent/received only on/from the pinned SG. When a pinned SG goes down, the vEth gets re-pinned.
You can use static pinning to achieve your requirement of having all traffic between the blades of a chassis switched locally on a single fabric. Static pinning can be configured on virtual port profiles and in virtual interface configuration mode.
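If I understand the static pinning option correctly, it is configured with the pinning id command on the vethernet port-profile (or directly on a vEth interface), referencing one of the sub-group IDs above; the profile name, VLAN, and sub-group ID below are placeholders:

```
port-profile type vethernet vmotion
  vmware port-group
  switchport mode access
  switchport access vlan 30
  ! statically pin this traffic to sub-group 2 (e.g. the Fabric A member of Po2)
  pinning id 2
  no shutdown
  state enabled
```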
Your requirement will not be satisfied once a VM gets VMotioned from one chassis to another.
You can have virtual port profiles on a per-chassis basis and put the static pinning configuration in them.
Sub-groups are great advice for this situation.
I would still prefer to use mac-pinning for SC/Mgmt and VM traffic, but for VMkernel traffic the manual sub-group configuration is a great possibility.
Therefore I would pin, for example, the VMotion port-profile to Fabric A and the FT port-profile to Fabric B using static pinning on one port-channel. If a fabric goes down, the vEths would get re-pinned to the other fabric.
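A sketch of that idea, assuming the VMkernel uplinks form Po2 with sub-group 2 on Fabric A and sub-group 3 on Fabric B (the names, VLANs, and IDs are my placeholders):

```
port-profile type vethernet vmotion
  vmware port-group
  switchport mode access
  switchport access vlan 30
  pinning id 2          ! sub-group on the Fabric A uplink
  no shutdown
  state enabled
!
port-profile type vethernet ft
  vmware port-group
  switchport mode access
  switchport access vlan 31
  pinning id 3          ! sub-group on the Fabric B uplink
  no shutdown
  state enabled
```

This way VMotion traffic between blades stays on Fabric A and FT traffic stays on Fabric B in steady state, and a fabric failure should simply move the affected vEths to the surviving sub-group.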
Even if I don't use per-chassis profiles, I don't think I would have any problems when a VM moves to another chassis, as long as the vNICs are configured consistently, because the sub-group ID should remain the same after a VMotion event.
thanks for your input / advice!