1797 Views · 0 Helpful · 10 Replies

ESXi vNICs, vHBAs

Dragomir
Level 1

How many vNICs or vHBAs do you create for an ESXi setup?

For non-UCS hosts, I used to use 8 vmnics (physical NICs):

2 for mgmt

2 for vMotion

2 for VMs (trunk)

2 for iSCSI

Should I create just 2 vNICs for UCS, or do I need 8 vNICs?


10 Replies

Jeremy Waldrop
Level 4

We always create a pair for each traffic type and create QoS policies that match up to each.

2 for mgmt; one on Fabric A, one on Fabric B - standard vSwitch0

2 for vMotion; one on Fabric A, one on Fabric B - standard vSwitch1

2 for VM traffic; one on Fabric A, one on Fabric B (these 2 are placed in a Distributed Switch if you have Enterprise Plus licensing)

mgmt VLAN is trunked to the vNICs for mgmt

vMotion VLAN is trunked to the vNICs for vMotion

All VM VLANs are trunked to the vNICs for VMs

Another trick we implement for the vMotion switch is to put the vNIC for Fabric A in standby on the NIC teaming vSwitch config. By doing this you can guarantee vMotion traffic never leaves Fabric Interconnect B and never touches your northbound LAN switch. If Fabric B fails then all vMotion traffic is shifted to Fabric A.
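A minimal sketch of that teaming trick from the ESXi side, assuming the vMotion vSwitch is vSwitch1 and the two UCS vNICs show up on the host as vmnic2 (Fabric A) and vmnic3 (Fabric B) - those names are assumptions, so check yours with `esxcli network nic list` first:

```shell
# Assumed names: vSwitch1 = vMotion vSwitch, vmnic2 = Fabric A vNIC,
# vmnic3 = Fabric B vNIC. Adjust to match your host.

# Make the Fabric B vNIC active and the Fabric A vNIC standby, so all
# vMotion traffic stays on Fabric Interconnect B unless B fails.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic3 \
    --standby-uplinks=vmnic2

# Verify the resulting teaming policy.
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1
```

Running the same commands on every host is what guarantees they all prefer the same fabric.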

Hi Jeremy,

Just wondering if it's OK to put both vMotion and mgmt on the same NICs (2 NICs); redundant 10 Gb for mgmt seems like overkill. Maybe I can do QoS on them.

Also, I'm not quite understanding this:

"Another trick we implement for the vMotion switch is to put the vNIC for Fabric A in standby on the NIC teaming vSwitch config. By doing this you can guarantee vMotion traffic never leaves Fabric Interconnect B and never touches your northbound LAN switch. If Fabric B fails then all vMotion traffic is shifted to Fabric A."

Also, if I have 4 FC uplinks (not FCoE) on each FI to my Brocade, do I have to create 4 vHBAs per fabric in my service profile? A total of 8 vHBAs?

No need to be sparing with vNICs in UCS; they are free. Creating 2 dedicated vNICs allows you to apply different QoS policies and a dedicated vSwitch.

As for the vMotion question, if you add both vMotion vNICs as active uplinks to the vMotion vSwitch, you have no way to guarantee that all ESXi hosts will use the same vNIC. Some may use vmnic2, some vmnic3; if they aren't all using the same vNIC, then vMotion traffic will have to traverse the northbound L2 switch and saturate it with traffic it doesn't need to see. Making one vmnic standby on all hosts forces all vMotion traffic onto the same Fabric Interconnect, and the northbound L2 switch never sees it.

You will only need 2 vHBAs; one for each SAN fabric. With I/O virtualization there isn't a 1:1 mapping between virtual adapters and physical uplinks. For example, you could have a single uplink per IOM, a single uplink for LAN, and a single uplink for SAN and still have 20+ UCS servers. You would never run a system with single uplinks, though.

UCS will round-robin pin the vHBAs from server to an FC uplink. For example, with 4 uplinks and 16 servers you should see 4 vHBA initiators logged on each FC uplink. If one of those uplinks fails UCS will automatically re-pin the vHBAs on that failed link to one of the other uplinks.
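The round-robin pinning described above is just a modulo distribution. As an illustration only (this is not UCS code, and the server/uplink names are made up), a short shell sketch shows why 16 servers over 4 uplinks should leave 4 vHBA initiators logged in on each FC uplink:

```shell
#!/bin/sh
# Illustration only: round-robin pinning of 16 server vHBAs across 4 FC
# uplinks on one fabric. All names are hypothetical.
UPLINKS=4
SERVERS=16

# Each server's vHBA lands on uplink ((server - 1) mod UPLINKS) + 1.
for s in $(seq 1 "$SERVERS"); do
  echo "server${s}/vhba0 -> fc-uplink$(( (s - 1) % UPLINKS + 1 ))"
done

echo "--- initiators per uplink ---"
for s in $(seq 1 "$SERVERS"); do
  echo "fc-uplink$(( (s - 1) % UPLINKS + 1 ))"
done | sort | uniq -c
```

The per-uplink count comes out to 4 initiators on each of the 4 uplinks; if a link fails, UCS re-pins its vHBAs to the surviving links, so the counts become uneven until the link recovers.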

Thanks.

For iSCSI, do the vNICs need to be created under iSCSI vNICs or under regular vNICs?

Only if you are doing iSCSI boot would you need to create iSCSI vNICs.

OK, so how many FC uplinks from each FI to the Brocade do I need? 2 or 4?

How many SAN fabrics do you have? Typically I only see 2, but I have seen more. You will need a vHBA for every SAN fabric your servers need access to. For 2 SAN fabrics you will need 2 vHBAs; one in Fabric A and one in Fabric B.

I have 2 DCX switches.

How should the FIs be connected to the DCXs?

For 2 FC ports per fabric, I have:

FI-A --------> DCX1

FI-A --------> DCX2

FI-B --------> DCX1

FI-B --------> DCX2

Should be 2 from FI-A to DCX1 and 2 from FI-B to DCX2


Just regarding the default round-robin behaviour of pinning vHBAs to SAN uplinks: I've got two SAN uplinks between a Fabric Interconnect and a Cisco MDS, and I have 8 servers. However, all 8 servers are pinning to only 1 SAN uplink. I don't have any static pinning configured. Shouldn't 4 servers be pinned to one uplink and the other 4 servers be pinned to the other uplink?
