UCS vNICS for ESXi Management and vMotion networks/vSwitches

Hi UCS Community!

I've heard from Cisco that the way to go is a single vNIC for both vMotion and Management, with the "Enable Failover" checkbox checked in UCSM, so that UCS manages the redundant links through Fabric A and Fabric B.

And then, for Fibre Channel, iSCSI, and guest networking, presenting two vNICs to ESXi (one pinned to Fabric A and the other pinned to Fabric B, with no failover on them) and letting ESXi manage the failover for those.

Is this a practice UCS / ESXi shops are following?


That is an option, but I personally haven't been using hardware failover for ESXi hosts; I have been letting ESXi handle the failover. My typical setup has been 2 vNICs for management, 2 for vMotion, 2 for VM traffic, and, if needed, 2 for IP storage. Each pair has one vNIC on Fabric A and one on Fabric B.
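That per-function pairing can be sketched from the ESXi shell with esxcli. The vSwitch and vmnic names below are assumptions for illustration only (here vmnic0 is taken to be the Fabric A management vNIC and vmnic1 the Fabric B one); map them to your own vNIC placement order.

```shell
# Sketch only: build a standard management vSwitch with one uplink per
# fabric, assuming vmnic0 = Fabric A vNIC and vmnic1 = Fabric B vNIC.

# Create the management vSwitch and attach both uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Make both uplinks active so ESXi, not UCS fabric failover,
# handles link failure between the two fabrics
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1
```

Repeating the same pattern for the vMotion, VM-traffic, and IP-storage pairs keeps each traffic type on its own vSwitch with one uplink per fabric.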

I usually use hardware failover for Windows and Linux servers.

Cisco Employee


I would try to keep things as simple as possible. I'm not sure where the single-vNIC-with-fabric-failover recommendation came from, but in the past we've always said no fabric failover when you are using a hypervisor vSwitch. That includes the standard vSwitch, the vDS, and the Nexus 1000V.

I can understand that by using a single vNIC you keep all the vMotion traffic on one FI, but then you have to make sure you are always using the right vNIC for vMotion.

Keep it simple :-)


Cisco Employee



I was wondering if you've thought about just using 2 vNICs for everything, in the spirit of keeping it simple, now that there is plenty of 10G bandwidth on each vNIC: vNIC A on Fabric A and vNIC B on Fabric B, with no failover? Pros / cons?





There are a few reasons why I break everything out:

  • QoS policies - We create QoS policies for each traffic type and map them to different QoS system classes. One reason for this is to prevent vMotion from stepping on other Ethernet traffic: placing an ESXi host into maintenance mode triggers a lot of vMotions, and with a 10G NIC ESXi will attempt 8 at a time, which could potentially saturate a 10G link.
  • Distributed Switch, VMware or Nexus 1000v - We only want VMs to live on the distributed switch, not the Management and vMotion vmkernels. The management and vMotion vSwitches are standard, whereas the VM vSwitch is distributed (VMware vDS, Nexus 1000v, or VM-FEX). Deployment is smoother, and management and troubleshooting are easier, when your management vmkernel is on a standard vSwitch.
  • Traffic Engineering - This comes into play with the vMotion vSwitch: we make vmnic 3 active and vmnic 2 standby. This forces vMotion traffic to stay on Fabric Interconnect B, and if B is down all traffic moves to A. There is no reason for vMotion traffic to ever traverse the L2 uplink switches, and if you don't configure the active/standby NIC teaming you could run into a situation where host 1 uses vmnic 2 but host 3 uses vmnic 3. In that case vMotions between them would traverse the uplink L2 switch.
  • Simplified Management - In my opinion it is easier to manage if things are broken out because there is less clutter.
  • UCS VIC vNICs are free, so why not? - The Cisco VIC was designed to create lots of vNICs/vHBAs for exactly these reasons.
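The traffic-engineering bullet above can be sketched with a single esxcli command. The names are assumptions for illustration: vSwitch1 is taken to be the vMotion vSwitch, with vmnic2 on Fabric A and vmnic3 on Fabric B.

```shell
# Sketch only: pin vMotion to Fabric B by making vmnic3 (assumed
# Fabric B uplink) active and vmnic2 (assumed Fabric A uplink) standby
# on the assumed vMotion vSwitch, vSwitch1.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic3 \
    --standby-uplinks=vmnic2
```

With every host configured identically, host-to-host vMotion stays on Fabric Interconnect B and only moves to A if B fails, so it never crosses the upstream L2 switches in normal operation.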
Cisco Employee


Thanks -

Excellent perspective. There are a lot of different ways to do this, but no single "best" guide. I do like some of your examples of why.
