12-15-2012 04:48 PM - edited 03-01-2019 10:46 AM
How many vNICs or vHBAs do you create for an ESXi setup?
For non-UCS hosts I used to use 8 vmnics (physical NICs):
2 for mgmt
2 for vMotion
2 for VMs (trunk)
2 for iSCSI
Should I create just 2 vNICs for UCS, or do I need 8 vNICs?
12-16-2012 05:13 AM
We always create a pair of vNICs for each traffic type and create QoS policies to match each.
2 for mgmt; one on Fabric A, one on Fabric B - standard vSwitch0
2 for vMotion; one on Fabric A, one on Fabric B - standard vSwitch1
2 for VM traffic; one on Fabric A, one on Fabric B (these 2 are placed in a Distributed Switch if you have Enterprise Plus licensing)
The mgmt VLAN is trunked to the vNICs for mgmt.
The vMotion VLAN is trunked to the vNICs for vMotion.
All VM VLANs are trunked to the vNICs for VMs.
Another trick we implement for the vMotion vSwitch is to put the vNIC for Fabric A in standby in the vSwitch's NIC teaming config. By doing this you can guarantee vMotion traffic never leaves Fabric Interconnect B and never touches your northbound LAN switch. If Fabric B fails, all vMotion traffic shifts to Fabric A.
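If you script your host configuration, here's a minimal pyVmomi sketch of that standby setup. The vCenter address, credentials, the vSwitch name (vSwitch1), and the vmnic2/vmnic3 numbering for the Fabric A/B vNICs are all placeholders for illustration, not a definitive implementation:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (hostname/credentials are placeholders).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Walk every ESXi host and force the assumed Fabric A vmnic into standby
# on the vMotion vSwitch, so vMotion prefers Fabric B until B fails.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    net_sys = host.configManager.networkSystem
    for vsw in net_sys.networkInfo.vswitch:
        if vsw.name != "vSwitch1":      # assumed vMotion vSwitch name
            continue
        spec = vsw.spec                 # assumes teaming policy is populated
        spec.policy.nicTeaming.nicOrder.activeNic = ["vmnic3"]   # Fabric B
        spec.policy.nicTeaming.nicOrder.standbyNic = ["vmnic2"]  # Fabric A
        net_sys.UpdateVirtualSwitch(vswitchName=vsw.name, spec=spec)

view.Destroy()
Disconnect(si)
```

Run once against vCenter and every host ends up preferring the same Fabric B uplink, which is what keeps vMotion traffic off the northbound switch.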
12-17-2012 01:34 PM
Hi Jeremy,
Just wondering if it's OK to put both vMotion and mgmt on the same pair of vNICs; redundant 10 Gb just for mgmt seems overkill. Maybe I could apply QoS to them instead.
Also, I'm not quite understanding this:
"Another trick we implement for the vMotion vSwitch is to put the vNIC for Fabric A in standby in the vSwitch's NIC teaming config. By doing this you can guarantee vMotion traffic never leaves Fabric Interconnect B and never touches your northbound LAN switch. If Fabric B fails, all vMotion traffic shifts to Fabric A."
Also, if I have 4 FC uplinks (not FCoE) on each FI to my Brocade switches, do I have to create 4 vHBAs per fabric in my service profile, for a total of 8 vHBAs?
12-17-2012 02:04 PM
No need to be sparing with vNICs in UCS; they are free. Creating 2 dedicated vNICs allows you to apply different QoS policies and a dedicated vSwitch.
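As a sketch of the "different QoS policies" part, here's roughly what creating a per-traffic-type QoS policy looks like with the Cisco UCS Python SDK (ucsmsdk). The UCSM address, credentials, policy name, and the "platinum" priority class are illustrative assumptions; you would still attach the policy to the vMotion vNICs or vNIC template in UCSM afterwards:

```python
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.epqos.EpqosDefinition import EpqosDefinition
from ucsmsdk.mometa.epqos.EpqosEgress import EpqosEgress

# UCSM address/credentials are placeholders.
handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

# A QoS policy named "vmotion" mapped to an assumed "platinum" class.
policy = EpqosDefinition(parent_mo_or_dn="org-root", name="vmotion")
EpqosEgress(parent_mo_or_dn=policy, prio="platinum",
            rate="line-rate", burst="10240", host_control="none")
handle.add_mo(policy, modify_present=True)
handle.commit()
handle.logout()
```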
As for the vMotion question: if you add both vMotion vNICs as active uplinks on the vMotion vSwitch, you have no way to guarantee that all ESXi hosts will use the same vNIC. Some may use vmnic2 and some may use vmnic3, and if they aren't all using the same vNIC, vMotion traffic has to traverse the northbound L2 switch and saturate it with traffic it doesn't need to see. By making one vmnic standby on all hosts, you force all vMotion traffic onto the same Fabric Interconnect, and the northbound L2 switch never sees it.
You will only need 2 vHBAs, one for each SAN fabric. With I/O virtualization there isn't a 1:1 mapping between physical uplinks and virtual adapters. For example, you could have a single uplink per IOM, a single LAN uplink, and a single SAN uplink and still run 20+ UCS servers. You would never build a production system with single uplinks, though.
UCS will round-robin pin each server's vHBAs to an FC uplink. For example, with 4 uplinks and 16 servers you should see 4 vHBA initiators logged in on each FC uplink. If one of those uplinks fails, UCS will automatically re-pin the vHBAs on the failed link to one of the other uplinks.
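To make the math concrete, here's a small, self-contained Python model of that round-robin pinning and the failover re-pin. The port names and the exact redistribution order are illustrative; UCS's internal algorithm may differ in detail:

```python
from collections import Counter
from itertools import cycle

def pin_vhbas(vhbas, uplinks):
    """Round-robin pin each vHBA to an FC uplink (mimics UCS default pinning)."""
    return dict(zip(vhbas, cycle(uplinks)))

def repin_after_failure(assignment, failed, uplinks):
    """Redistribute vHBAs from a failed uplink across the surviving uplinks."""
    survivors = cycle([u for u in uplinks if u != failed])
    return {v: (next(survivors) if u == failed else u)
            for v, u in assignment.items()}

vhbas = [f"server{i:02d}-vhba-a" for i in range(1, 17)]   # 16 servers
uplinks = ["fc1/29", "fc1/30", "fc1/31", "fc1/32"]        # 4 FC uplinks
pins = pin_vhbas(vhbas, uplinks)
print(Counter(pins.values()))   # expect 4 vHBA logins per uplink
print(Counter(repin_after_failure(pins, "fc1/29", uplinks).values()))
```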
12-17-2012 04:28 PM
Thanks.
For iSCSI, do the vNICs need to be created under iSCSI vNICs or under vNICs?
12-18-2012 04:39 AM
Only if you are doing iSCSI boot would you need to create iSCSI vNICs.
12-18-2012 08:08 AM
OK, so how many FC uplinks do I need from each FI to the Brocade: 2 or 4?
12-18-2012 09:56 AM
How many SAN fabrics do you have? Typically I only see 2, but I have seen more. You will need a vHBA for every SAN fabric your servers need access to. For 2 SAN fabrics you will need 2 vHBAs: one in Fabric A and one in Fabric B.
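If you build service profiles with the UCS Python SDK (ucsmsdk), the two vHBAs look roughly like this. The service profile DN, vHBA names, and WWPN pool name are assumptions for illustration:

```python
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.vnic.VnicFc import VnicFc

handle = UcsHandle("ucsm.example.com", "admin", "password")  # placeholders
handle.login()

sp_dn = "org-root/ls-esxi-host-01"   # hypothetical service profile DN

# One vHBA per SAN fabric: vhba-a on Fabric A, vhba-b on Fabric B.
for name, fabric in (("vhba-a", "A"), ("vhba-b", "B")):
    handle.add_mo(VnicFc(parent_mo_or_dn=sp_dn, name=name,
                         switch_id=fabric,
                         ident_pool_name="wwpn-pool"),  # assumed WWPN pool
                  modify_present=True)

handle.commit()
handle.logout()
```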
12-18-2012 08:39 PM
I have 2 DCX switches.
How should the FIs be connected to the DCXs?
For 2 FC ports per fabric, I have:
FI-A --> DCX1
FI-A --> DCX2
FI-B --> DCX1
FI-B --> DCX2
12-18-2012 08:58 PM
It should be 2 uplinks from FI-A to DCX1 and 2 from FI-B to DCX2.
06-19-2013 05:09 PM
Just a question regarding the default round-robin behaviour of pinning vHBAs to SAN uplinks: I've got two SAN uplinks between a Fabric Interconnect and a Cisco MDS, and I have 8 servers, yet all 8 servers are pinned to only 1 SAN uplink. I don't have any static pinning configured. Shouldn't 4 servers be pinned to one uplink and the other 4 to the other?