UCS and VMware QoS/DVS NIOC Design Help


I'm looking at a fresh way to configure a UCS host running vSphere 5.1/5.5 with Enterprise Plus licensing and a DVS (distributed virtual switch), in combination with VMware NIOC and UCS QoS tags. Starting with vSphere 5.0, the DVS can tag traffic with QoS (CoS) markings, like the N1K. My understanding is that VCE normally presents only the two pNICs to the DVS (rather than carving the pNICs up into multiple vNICs), then uses the DVS for bandwidth control. I don't know whether they normally use QoS marking, but I would imagine so.

Here's the QoS System Class config:

Platinum: disabled, CoS 5

Gold: enabled, CoS 4, weight 30%

Silver: enabled, CoS 2, weight 10% (best effort)

Bronze: enabled, CoS 1, weight 10% (best effort)

Best Effort: enabled, weight 20%

FC: CoS 3, weight 30%
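As a sanity check on the numbers above: UCS guarantees each enabled class a minimum share of the link equal to its weight divided by the sum of all enabled weights (Platinum is disabled, so it contributes nothing). A quick sketch, assuming a 20 Gb/s per-blade link:

```python
# UCS QoS System Class weights from the config above (enabled classes only).
# Each class's guaranteed minimum = weight / sum(enabled weights).
weights = {
    "Gold (CoS 4)": 30,
    "Silver (CoS 2)": 10,
    "Bronze (CoS 1)": 10,
    "Best Effort": 20,
    "FC (CoS 3)": 30,
}

link_gbps = 20  # assumed per-blade bandwidth; not part of the config above
total = sum(weights.values())

for cls, w in weights.items():
    pct = w / total * 100
    print(f"{cls}: {pct:.0f}% -> {link_gbps * w / total:.1f} Gb/s minimum")
```

With these values the weights happen to sum to 100, so the weight of each class is also its guaranteed percentage (e.g. Gold gets a 6 Gb/s floor on a 20 Gb link); these are minimums under contention, not caps.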

We have no real-time/VoIP/video/stock-trading type traffic. FC storage is used for all server VMs; NFS is used for VDI. On the QoS policy I set Host Control to Full (so UCS respects the DVS QoS tags) and Priority to Best Effort.

What I'm trying to figure out is the best mapping of the vCenter NIOC pools and their shares to the proper UCS CoS values. The vCenter NIOC pools are:

Fault Tolerance (not used), 10 shares

iSCSI (not used), 20 shares

ESX Management, 5 shares

NFS (used for VDI datastores), 20 shares

vMotion, 20 shares

vSphere Replication (not used), 5 shares
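For context on what those share values mean: NIOC shares only take effect when a physical uplink is saturated, and bandwidth is divided among the pools actively sending in proportion to their shares. A rough sketch on a 10 Gb/s uplink; note the Virtual Machine pool is not listed above, so its 50 shares here is an assumed default, not from the original config:

```python
# NIOC share values from the list above, plus an assumed default
# of 50 shares for the Virtual Machine pool (not listed in the post).
shares = {
    "Fault Tolerance": 10,
    "iSCSI": 20,
    "ESX Management": 5,
    "NFS": 20,
    "vMotion": 20,
    "vSphere Replication": 5,
    "Virtual Machine (assumed)": 50,
}

uplink_gbps = 10  # per-uplink, since NIOC shares apply per physical NIC
total = sum(shares.values())

# Worst case: every pool is sending at once on a saturated uplink.
for pool, s in shares.items():
    print(f"{pool}: {s}/{total} shares -> {uplink_gbps * s / total:.2f} Gb/s")
```

In practice the unused pools (FT, iSCSI, Replication) never contend, so the active pools split the link among just their own shares.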

So far I have:

IP storage (iSCSI/NFS) mapped to Gold (CoS 4)

Management/Replication mapped to Bronze (CoS 1)

vMotion mapped to Silver (CoS 2)
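The mapping above can be written out as a table with a simple consistency check: every tagged pool should land on a UCS system class that is actually enabled (Platinum/CoS 5 is disabled in the config earlier, so nothing should map to it):

```python
# Enabled UCS system classes from the QoS config: Gold, Silver, Bronze, FC.
enabled_cos = {4, 2, 1, 3}

# NIOC pool -> CoS tag, per the mapping above.
pool_to_cos = {
    "iSCSI": 4, "NFS": 4,        # IP storage -> Gold
    "ESX Management": 1,         # -> Bronze
    "vSphere Replication": 1,    # -> Bronze
    "vMotion": 2,                # -> Silver
    # VM traffic left untagged -> falls into the UCS Best Effort class
}

for pool, cos in pool_to_cos.items():
    assert cos in enabled_cos, f"{pool} maps to a disabled class (CoS {cos})"
print("All tagged pools map to enabled UCS classes.")
```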

So I'm wondering about VM traffic. If I don't have the DVS mark it, will it fall into the UCS Best Effort class and thus get a 20% minimum bandwidth guarantee? I don't see us using FT soon, but would it make sense to give it Platinum (CoS 5) priority and a small weight, like 10%? That way UCS/DVS would already be configured for FT should someone decide we need it down the road.

Or should I make other adjustments to the QoS/priority mappings? Open to any real-world input on what people are doing.



Out of curiosity, how much bandwidth do you have to each blade?

Are you using the 2208XP with the VIC 1240/1280 and getting the full 80 Gbps?

Just the VIC 1240 with the 2204XP, so 20Gb per blade. Since this post I've modified the DVS shares to set all traffic to Normal (50) and just let UCS deal with the traffic shaping. That seemed like a better idea than trying to do it in both the DVS and UCS.

Thanks.  Now I'm wondering, how many DVS uplinks do you have per host in your design? And what traffic type does each uplink pair handle?

The DVS is configured with two uplinks per host. Route based on physical NIC load is used for each port group, and both NICs are active uplinks. The only exceptions are the two vMotion portgroups, where we use active/standby and standby/active to enable multi-NIC vMotion.
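The teaming layout described above can be sketched as a per-portgroup table; the portgroup names below are illustrative, not from the actual design:

```python
# Per-portgroup uplink teaming: everything uses both uplinks active
# (Route based on physical NIC load), except the two vMotion
# portgroups, which alternate active/standby for multi-NIC vMotion.
teaming = {
    "Management": {"active": ["Uplink1", "Uplink2"], "standby": []},
    "VM Network": {"active": ["Uplink1", "Uplink2"], "standby": []},
    "NFS":        {"active": ["Uplink1", "Uplink2"], "standby": []},
    "vMotion-A":  {"active": ["Uplink1"], "standby": ["Uplink2"]},
    "vMotion-B":  {"active": ["Uplink2"], "standby": ["Uplink1"]},
}

# The two vMotion portgroups must pin to different active uplinks,
# so a vMotion stream can run over each physical NIC simultaneously.
assert teaming["vMotion-A"]["active"] != teaming["vMotion-B"]["active"]
```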

The design is in production and seems to be working fine. UCS respects the CoS tagging and puts the packets in the correct QoS group.
