I have a question around vNIC failover and LAN pinning.
1) I understand that when you enable failover on a vNIC, say on FI A, it creates a standby/passive VIF/circuit on FI B. In that case, I guess you have to manually balance the vNICs across the FIs for a particular SP if you want to load-balance the traffic. Also, is there any reason NOT to enable vNIC failover, and instead just create another vNIC for the other FI and allow UCS to manage the failover? I assume in the latter case dynamic LAN pinning comes into play to send traffic to both FIs?
2) Regarding LAN pin groups, is it best to just let UCS manage the pinning, or is it better to manually specify the FIs? Coming from a network background, I thought it was best to know which way your traffic traverses, so that when troubleshooting you know exactly which path your traffic takes; however, various articles I have read say otherwise?
FYI, we do not have any NIC teaming or OS-level NIC failover. However, we do have N1k with HA, if that matters.
Thanks in advance.
1) Most operating systems today support NIC teaming (including Windows Server 2012), so this hardware failover flag is no longer needed.
With one interface, e.g. vnic0, and the FF (failover flag) set, you only have failover; but most likely you want LBFO (load balancing and failover). Therefore you create two vNICs and let OS NIC teaming (and, if you need Fibre Channel, multipathing) do LBFO.
Caveat of the FF: in case of a failover, quite a number of gratuitous ARP (GARP) messages may be sent northbound, which can stress the CPU of the fabric interconnect.
2) Best practice is to use dynamic pinning; let UCSM do the work. Static pinning is a management (and scalability) nightmare.
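As a sketch of the two-vNIC approach on a Windows Server 2012 blade: a switch-independent LBFO team across both vNICs, since the two fabric interconnects do not form a port channel towards the host. The adapter and team names below are assumptions; check yours first.

```powershell
# List the adapters presented by the two UCS vNICs
Get-NetAdapter

# Create a switch-independent team across the two vNICs
# ("vNIC-A" / "vNIC-B" are placeholder adapter names)
New-NetLbfoTeam -Name "UCS-Team" -TeamMembers "vNIC-A","vNIC-B" `
  -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
```

SwitchIndependent mode is the relevant choice here, because each vNIC hangs off a different FI and no LACP/port-channel exists between the host and the fabric interconnects.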
Thanks for getting back.
If I've understood your response correctly, you are referring to an OS installed natively on the blades.
What if we have ESXi hosts with the N1k DVS? For example, consider the following:
VM guest (1 x NIC) > N1k PC (2 x vNIC uplinks) > ESXi (2 x vNIC uplinks) > UCS dynamic/static LAN pin group > FI A or B
When traffic passes from the VM through the N1k and on to ESXi, since there is no port channel/teaming between ESXi and UCS, how does ESXi know where to send the traffic?
Also curious to know which FI it sends it to, since it will have to be the FI with the ACTIVE VIF path?
1) Best practice is to use the old-fashioned vSwitch for management and/or vMotion.
2) On the N1k (because there is no LACP support between the N1k and the UCS fabric interconnect), MAC pinning is used to distribute the outgoing traffic:
With MAC pinning, all uplinks defined on the N1k VEM are treated as standalone links, and different MACs are pinned to the uplinks based on a load-balancing algorithm. This ensures there is no MAC flapping on the upstream switches. An advantage of MAC pinning is that no special configuration is needed on the upstream switches.
3) Dynamic/static pinning applies to the northbound interfaces of the fabric interconnect.
4) Both FIs are active and switching Ethernet frames.
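A minimal sketch of an N1k Ethernet uplink port-profile using MAC pinning, as described in point 2 above. The profile name and VLAN IDs are assumptions; adjust to your environment:

```
port-profile type ethernet UCS-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled
```

The `channel-group auto mode on mac-pinning` line is what tells the VEM to treat the uplinks as standalone links and pin vEth MACs to them, rather than attempting LACP towards the fabric interconnects.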