Windows Server 2012 R2 with 2 veths (veth0 and veth1), one connected to Fabric A and one to Fabric B; team built from veth0 and veth1; veth0 and veth1 in the same VLAN, e.g. 10.
LBFO (Load Balancing and Failover): switch independent, Hyper-V port
2 such Hyper-V servers in a UCS domain.
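For reference, a team like the one described above would be built on the host roughly as follows (the team name "Team1" is illustrative; treat this as a sketch of the configuration, not a validated deployment command):

```powershell
# Switch-independent team over the two UCS vNICs, Hyper-V port load balancing
New-NetLbfoTeam -Name "Team1" -TeamMembers "veth0","veth1" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
```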
Q. When a VM on one Hyper-V server talks to a destination VM on the other Hyper-V server, isn't there a chance that the traffic exits the source interface on Fabric A while the destination interface is on Fabric B, so the frame has to be switched outside of UCS?
Thanks Padma! Yes, I know that. The point is that Hyper-V teaming does its own pinning, which is out of UCS's control, and can therefore result in a situation where the traffic has to be switched outside UCS (even within the same VLAN).
Yes, your statement is true. If the sender on a host goes out Fabric A, then the only way for it to reach a receiver on Fabric B is to traverse the upstream switch. This is true for any access-layer failover/load-balancing mechanism where the access switches are not connected as part of a multi-chassis link aggregation group (e.g. stack/cluster/vPC/VSS). Switch-independent load balancing does not require the switches to possess any special technology whereby both switches share a MAC address table and have a forwarding mechanism between the devices. In terms of networking and forwarding, the same holds for the FI.
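The cross-fabric case above can be sketched as follows. This is a hypothetical model, not the actual Windows scheduler: Hyper-V port load balancing pins each VM's virtual port to one physical team member, and the round-robin policy, VM names, and fabric mapping here are illustrative assumptions.

```python
# veth0 is connected to Fabric A, veth1 to Fabric B (per the setup above)
TEAM_MEMBERS = ["veth0", "veth1"]
FABRIC = {"veth0": "A", "veth1": "B"}

def pin_vm_ports(vm_names):
    """Pin each VM port to a team member round-robin (illustrative policy)."""
    return {vm: TEAM_MEMBERS[i % len(TEAM_MEMBERS)] for i, vm in enumerate(vm_names)}

def needs_upstream_switch(src_member, dst_member):
    """A frame must leave UCS when source and destination sit on different fabrics."""
    return FABRIC[src_member] != FABRIC[dst_member]

host_a = pin_vm_ports(["vm1", "vm2"])   # Hyper-V server 1
host_b = pin_vm_ports(["vm3", "vm4"])   # Hyper-V server 2

# vm1 lands on veth0 (Fabric A), vm4 on veth1 (Fabric B): the FI cannot
# switch this locally, so the frame traverses the upstream switch.
print(needs_upstream_switch(host_a["vm1"], host_b["vm4"]))  # True
```

Because each host pins independently, roughly half of the VM-to-VM pairs end up on opposite fabrics, which is exactly the situation the question describes.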
In these cases the user needs to understand the limitation of switch independent load balancing and plan appropriately. If the traffic is primarily in and out of the UCS system then it is an acceptable method of load balancing. However if the goal is to localize the traffic to the FI then it may not be the most appropriate form of load balancing.
With Fabric Failover, customers can prevent a single point of failure by having an interface fail over to the second fabric in the event of an upstream network failure. If they also want to achieve some type of load balancing, they can set up a second interface so that traffic traverses the second fabric. In many cases customers will set up one link for internode communications (e.g. cluster network, vMotion, etc.) and then set up a second link to provide a front-end network for application access.
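The design just described can be sketched as follows. The vNIC names and the dictionary layout are illustrative assumptions, not UCSM object names; the point is that fabric placement is fixed in the service profile, so traffic locality is deterministic, unlike OS-controlled pinning.

```python
# Hypothetical service-profile vNIC layout: each vNIC has a fixed primary
# fabric plus fabric failover, instead of OS teaming deciding the pinning.
service_profile = {
    "vnic-internode": {"primary_fabric": "A", "failover": True},  # cluster/vMotion
    "vnic-frontend":  {"primary_fabric": "B", "failover": True},  # application access
}

def traffic_stays_in_ucs(src_vnic, dst_vnic):
    """Both endpoints on the same primary fabric -> switched locally on the FI."""
    return src_vnic["primary_fabric"] == dst_vnic["primary_fabric"]

# Internode traffic between two hosts using the same vNIC policy stays
# within the FI, because both sides are deliberately placed on Fabric A.
print(traffic_stays_in_ucs(service_profile["vnic-internode"],
                           service_profile["vnic-internode"]))  # True
```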
This is no different from designs connecting to two ToR access switches, except that failover can be configured within the service profile rather than within the OS. It's a matter of choosing the network traffic design and failover mechanism that works best for a particular application.