
New UCS and VMware setup Questions

tdavis
Level 1

We are currently in the process of migrating our VMware infrastructure from HP to UCS. We are utilizing the Virtual Connect adapters for the project. With the migration we also plan on implementing the Cisco Nexus 1000V in our environment. I have demo equipment set up and have had a chance to install a test environment, but still have a few design questions.

When implementing the new setup, what is a good base setup for the virtual connect adapters with the 1000V? How many NICs should I dedicate? Right now I run 6 NICs per server (2 console, 2 virtual machines, and 2 vMotion). Is this a setup I should continue with going forward? The only other thing I am looking to implement is another set of NICs for NFS access. In a previous setup at a different job, we had 10 NICs per server (2 console, 4 virtual machines, 2 vMotion and 2 iSCSI). Is there any kind of standard for this setup?

The reason I am asking is that I want to get the most out of my VMware environment, as we will be looking to migrate Tier 1 app servers once we get everything up and running.

Thanks for the help!

7 Replies

work
Level 1

Hi Tim,

According to VMware documentation, vSphere only supports four 10 Gbit interfaces (regardless of chipset). I'm guessing from your post that your old environments had 1 Gbit NICs?

My experience with B250 (full-width) blades in UCS and vSphere 4.1 installs is as follows:

- 16 vNICs and vSphere fails to install at all

- 8 vNICs and vSphere will install, but iSCSI connectivity is terminally broken (even when I only configure a single iSCSI vmk port)

- 4 vNICs seems OK.

This means that my B250 gets 2 dedicated 10 Gbit data NICs and 2 dedicated 10 Gbit iSCSI NICs.

If I get approval for the FCoE expansion modules for the FIs, I will change this design and have 4 vHBAs and 4 data vNICs in order to get a better spread of load.
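
By "locked to different vNICs" I just mean the standard software iSCSI port binding on a regular vSwitch. A rough sketch of what that looks like on 4.1 (names and addresses below are examples only, not my exact config):

# one vSwitch/port group per path, each with a single uplink (repeat on vmnic3 for the second path)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -A iSCSI-A vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vmknic -a -i 10.0.10.11 -n 255.255.255.0 iSCSI-A

# bind the VMkernel ports to the software iSCSI adapter (vmhba33 here) and verify
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33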

We haven't had any problems using 6 or 8 vNICs for ESXi 4.1 hosts on UCS. Our standard config is 6 vNICs but if IP storage is needed (iSCSI or NFS) we create 2 more. On the FC side we create 2 vHBAs.

Jeremy,

I did some more digging and found the cause of my iSCSI woes.

I have my QoS policies set with VMware data NICs getting best-effort and iSCSI getting gold.

Best-effort was 'normal' MTU; gold was 9216 MTU.

Once I changed best-effort to 9216 as well, my 8-vNIC service profile no longer has any issues connecting to the EMC SAN - which is weird given that the iSCSI VMkernel ports are locked to different vNICs on different VLANs.
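
For anyone else who hits this, the fix amounts to raising the MTU on the Best Effort system class to match. From the UCSM CLI it is roughly the following (from memory, so double-check the exact scopes on your release; the same setting lives under LAN > LAN Cloud > QoS System Class in the GUI):

UCS-A# scope eth-server
UCS-A /eth-server # scope qos
UCS-A /eth-server/qos # scope eth-best-effort
UCS-A /eth-server/qos/eth-best-effort # set mtu 9216
UCS-A /eth-server/qos/eth-best-effort* # commit-buffer

The vSwitch and the VMkernel ports still need their own MTU raised to 9000 separately, of course.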

So now it's back to the drawing board with my automated builds!

Robert Burns
Cisco Employee

Tim,

Migrating from HP Virtual Connect (VC) -> UCS might change your network design slightly - for the better, of course. Not sure if you're using 1G or 10G VC modules, but I'll respond as if you're using 10G modules, because this is what UCS will provide. VC modules provide a 10G interface that you can logically chop up into a max of 4 host vNIC interfaces totaling 10G. Though it's handy to divide a single 10G interface into virtual NICs for Service Console, vMotion, iSCSI, etc., this creates the opportunity for wasted bandwidth. The logical NICs VC creates impose a hard upper limit of bandwidth on the adapter. For example, if you create a 2G interface for your host to use for vMotion, then 2G of your 10G pipe is wasted when there are no vMotions taking place!

UCS & 1000v offer a different solution in terms of bandwidth utilization by means of QoS. We feel it's more appropriate to specify a "minimum" bandwidth guarantee rather than a hard upper limit, which leads to wasted pipe. Depending on which UCS blade and mezz card option you have, the # of adapters you can present to the host varies. B200 blades can support one mezz card (with 2 x 10G interfaces), while the B250 and B440 are full-width blades and support 2 mezz cards. In terms of mezz cards, there are the Intel/Emulex/QLogic/Broadcom/Cisco VIC options. In my opinion the M81KR (VIC) is best suited for virtualized environments, as you can present up to 56 virtual interfaces to the host, each having various levels of QoS applied. When you roll the 1000v into the mix you have a lethal combination, adding some of the new QoS features that automatically match traffic types such as Service Console, iSCSI, vMotion, etc. See this thread for a list/explanation of new features coming in the next version of 1000v, due out in a couple of weeks: https://www.myciscocommunity.com/message/61580#61580
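
To make the "minimum guarantee rather than a cap" idea concrete, here is a rough sketch of tuning a UCS QoS system class from the UCSM CLI (the class and values are examples only; the same settings are under LAN > LAN Cloud > QoS System Class in the GUI):

UCS-A# scope eth-server
UCS-A /eth-server # scope qos
UCS-A /eth-server/qos # scope eth-classified gold
UCS-A /eth-server/qos/eth-classified # enable
UCS-A /eth-server/qos/eth-classified # set weight 9
UCS-A /eth-server/qos/eth-classified # set mtu 9216
UCS-A /eth-server/qos/eth-classified* # commit-buffer

The weight only sets the share of bandwidth the class is guaranteed under congestion; when other classes are idle, traffic in this class can use the full 10G pipe, unlike a VC rate limit.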

Before you think about design too much, tell us what blades & adapters you're using and we can offer some suggestions for setting them up in the best configuration for your virtual infrastructure.

Regards,

Robert

BTW - here are a couple of Best Practice Guides for UCS & 1000v that you might find useful.

JEFFREY SESSLER
Level 1

Tim,

I deployed ESXi 4.1 on UCS using SAN boot (truly stateless). I presented six vNICs to each host, three on each fabric. One pair is for Nexus 1K control/packet, one pair for VM network access, and one pair for VMkernel (vMotion) and management. I'm using FC storage, so I also present a pair of vHBAs. I'm running the Nexus 1K, so those six vNICs connect to it, and the VMs to the Nexus 1K. Everything, including vMotion, uses the Nexus 1K.

It's been in production for about four months now and is working great.
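
For what it's worth, the 1000V side of that ends up as three Ethernet (uplink) port-profiles, one per vNIC pair, plus the usual vEthernet profiles for the VMs and VMkernel ports. A rough sketch of one of each follows (VLAN numbers and names are placeholders, not my production values; mac-pinning is the important piece behind UCS, since the VEM cannot form a port channel across the two fabric interconnects):

port-profile type ethernet vm-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 20-29
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled

port-profile type vethernet vmotion
  vmware port-group
  switchport mode access
  switchport access vlan 30
  no shutdown
  state enabled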

The number of vNICs required for a soft switch environment does not come from UCS but from the soft switch in question.

You want 2 vNICs, you have them. You want 20, you've got them too in UCS using the M81KR (VIC/Palo) adapter.

If you look at the VMware/Cisco guide on 1KV deployments in a 10 Gig environment, it calls for 2 x 10 Gig.

If you look at the 1KV-in-UCS deployment guide, it also talks about 2 x 10 Gig (though it does mention how to create more vNICs if so desired).

2 x 10 Gig will work just fine. Very simple, as you have the same uplink port profile for both vNICs.

Multiple vNICs work too, but at the added complexity of multiple uplink port profiles, vNICs, etc.

More management overhead, and somewhat harder to troubleshoot when things need to be looked at.

I have seen setups with 16 vNICs given to soft switches. Not a pretty sight and very hard to troubleshoot. Yes, it works too.

Unless you have very specific requirements (queuing at the soft switch level, using static pinning for various vNICs which in turn correspond to VM port groups), keep the vNICs to a minimum. It will help keep things simple in the long run.

Mark traffic (CoS) at the 1KV level if desired and, using system classes in UCS, give it the right treatment if QoS is a criterion.
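
As a rough illustration of the marking piece (the profile names and CoS value are placeholders), on the 1KV you can mark per port-profile with a simple MQC policy and let the matching UCS system class handle the queuing:

policy-map type qos mark-vmotion
  class class-default
    set cos 4

port-profile type vethernet vmotion
  service-policy type qos input mark-vmotion

If you enable the Gold class in UCS (CoS 4 by default), the marked traffic lands there; map the values to whatever system classes you have enabled.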

Thanks

--Manish

tdavis
Level 1

In my past setups I have used 1 Gig NICs. Of course with UCS the NICs will be 10 Gig. My setup includes:

6 UCS chassis with 42 B200 M2 and 6 B230 M1 blades, all with the virtual interface adapter. Our storage is EMC Symmetrix and we are using FCoE.

I think I have gotten the information I needed from all the replies, which I appreciate. I think I will stick with our current setup of 6 NICs (2 console, 2 VM network and 2 vMotion). When we bring in NFS I will set up another pair of NICs. I will spend more time with our two test chassis and come up with a solid design based on the excellent information that has been provided.
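
For the NFS piece later on, my assumption is it will just be a VMkernel port on the new NIC pair plus a datastore mount along these lines (the array hostname and export path below are made up):

# mount the NFS export as a datastore, then list mounts to confirm
esxcfg-nas -a -o nas01.example.com -s /vol/vmware_ds1 nfs_ds1
esxcfg-nas -l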
