Best Practices for vSphere 4.1 on UCS

rsavage2k
Level 1

I posted this on the VMware Community forum, and they were no help... so I am posting here.

Good Morning,

I come from previous VMware/ESX experience, and that experience has always been with ESX hosts on stand-alone servers. I am getting ready to do a migration from ESX 3.5 to vSphere 4.1 on Cisco UCS hardware. My current ESX host configuration is made up of multiple 1Gb and 10Gb Ethernet adapters (4x1Gb for vMotion, Service Console, and guest networks, and 2x10Gb for storage, which is NFS-backed). While this is a good model, I want to know the best approach going forward with Cisco UCS.

I know that our new UCS chassis will come with 10Gb interconnect modules, and it also has the ability to carve out virtualized adapters for the ESX hosts. In the UCS world, is it better to use multiple virtualized 1Gb adapters to segment the VMware traffic types as I did in ESX 3.5 (with all the active/standby uplinks, etc.), or is it better to go with just 2x10Gb interfaces and not segment the various VMware traffic types? Or is there an even better way to take advantage of the new UCS networking? Another change for us is that we're dropping NFS and going to HBAs for storage.

Any advice would be greatly appreciated.

Thanks,

Rory Savage

3 Replies

Shea Lambert
Level 1

Rory,

I would start by reading the post by Brad Hedlund linked at the bottom. It has several different configuration examples, including options that take advantage of the Palo adapter.

I use the Palo adapters and have created separate VIFs for Management, vMotion, and Data. You will want to use QoS, especially for vMotion. You don't want vMotion taking up all available bandwidth to the detriment of everything else, especially FCoE!

http://bradhedlund.com/2010/09/15/vmware-10ge-qos-designs-cisco-ucs-nexus/

Shea

Thanks for the info. I really like Design #1 - Cisco UCS + VIC QoS Design (required traffic only). However, what are the speeds of the vmnics at the VMware level? 1Gb? 10Gb?

Thanks,

Rory Savage

Shea,

Just a clarification here on your statement:

> I use the Palo adapters and I have created separate VIF's for Management, vMotion, and Data. You will want to use QoS esp. for vMotion. You don't want vMotion taking up all available bandwidth to the detriment of everything else, especially FCoE!

If you look at the System QoS configuration (default out of the box), only Best Effort and FC are enabled (each with 50% of the bandwidth), with FC as lossless.

If you do not change this, vMotion traffic will not starve FCoE, as it's in a different class altogether.

vMotion, along with other port groups like VM data, Service Console, etc., is going to get lumped into the Best Effort class.

So you don't need extra vNICs to prevent vMotion from starving FCoE.

You might need multiple vNICs so that, with soft switches that don't support queuing (like the vSwitch), one kind of Ethernet traffic doesn't starve the *other* port groups, since by default all of it gets lumped into the Best Effort class.

You could set the FC class to something greater than 50% if you feel bandwidth contention is something you will hit and you want more reserved bandwidth for FC at that point. If there is no contention, the System QoS policy does not apply.
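To make the weighted-sharing point concrete, here is a toy max-min model (a sketch only: the class names and offered loads are made up, and this is not how UCS computes anything internally). It shows why FC keeps its guarantee even when the Best Effort class, carrying vMotion plus VM data, is oversubscribed:

```python
# Toy weighted max-min bandwidth model for a congested 10Gb uplink.
# Hypothetical numbers; not a UCS API or tool.

def allocate(link_gbps, classes):
    """Share link_gbps among QoS classes by weight.

    classes maps name -> (weight, offered_gbps). A class that
    offers less than its weighted fair share keeps only what it
    offers; the leftover goes back to the busier classes.
    """
    alloc = {}
    remaining = dict(classes)
    capacity = link_gbps
    while remaining:
        total_w = sum(w for w, _ in remaining.values())
        # Classes whose demand fits within their weighted share.
        satisfied = {n: d for n, (w, d) in remaining.items()
                     if d <= capacity * w / total_w}
        if not satisfied:
            # Every remaining class is bottlenecked: split by weight.
            for n, (w, _) in remaining.items():
                alloc[n] = capacity * w / total_w
            break
        for n, d in satisfied.items():
            alloc[n] = d          # demand fully met
            capacity -= d
            del remaining[n]
    return alloc

# Default-style setup: Best Effort and FC with 50/50 weights.
# vMotion + VM data together offer 12 Gb/s in Best Effort; FC offers 4.
shares = allocate(10.0, {"fc": (50, 4.0), "best-effort": (50, 12.0)})
print(shares)  # FC keeps its 4 Gb/s; Best Effort is capped at 6.
```

Under no contention (total demand below 10 Gb/s), every class simply gets what it offers, which matches the point above that the policy only bites when the link is congested.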

Thanks

--Manish
