
UCSM VMQ Connection Policy with ESXi

ednied
Level 1

Hello, 

 

I am trying to get a better understanding of VMQ connection policies on systems running ESXi. To be honest, I cannot find much information on this and am growing more confused by it. Currently, we have VMQ set up as:

- Number of VMQs: 16 

- Number of Interrupts: 34

 

I didn't configure these settings for our organization, and I am trying to figure out why they are set the way they are. From what I understand, the number of "VMQs per adapter must be one more than the maximum number of VM NICs", but VMware allows a maximum of 10 vNICs per VM, so 16 doesn't make much sense to me. I did read that "The VMware ESXi eNIC driver limits the number of VMQs to 16, but VMware recommends that you not exceed 8 VMQs per vNIC".
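The one pattern I can see in our numbers is that 34 interrupts is exactly two per VMQ plus two. Here is the quick arithmetic sketch I used to check that (this is just my guess at the sizing rule, one interrupt per TX queue and one per RX queue plus two extra, not something I pulled from an official Cisco formula):

```python
# Rough sizing sketch for a UCSM VMQ connection policy.
# Assumption (not confirmed by Cisco docs): each VMQ gets a TX/RX queue
# pair with one interrupt per queue, plus 2 extra interrupts for the
# adapter's control/completion events.

def interrupts_for_vmqs(num_vmqs: int) -> int:
    """Estimate the interrupt count for a given VMQ count."""
    tx_queues = num_vmqs   # one transmit queue per VMQ
    rx_queues = num_vmqs   # one receive queue per VMQ
    extra = 2              # control/completion interrupts (assumed)
    return tx_queues + rx_queues + extra

if __name__ == "__main__":
    # Our policy: 16 VMQs -> prints 34, which matches the configured value.
    print(interrupts_for_vmqs(16))
```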

 

These two statements seem to contradict each other if you ask me: UCSM says "one more than the max NICs", while VMware says "no more than 16" (and recommends no more than 8 per vNIC).

 

Does anyone else run VMQ connection policies with ESXi guests? What do you base your configuration decisions on? Also, what are the pros and cons of using this?

 

Thank you. 

1 Reply

Tom MacDonald
Level 1

Yes, I have been using VMQ for years now. The Cisco docs are horrible: it's hard to find anything in one place, and they usually don't give a straight answer. To that end, you should look for additional tuning references for VMQ on Cisco UCS on the VBlock/VxBlock (Dell) website, as they have most of the knobs tuned on those systems.

At a high level, VMQ helps reduce the burden on the hypervisor and eliminates queue sharing: the Cisco UCS VIC can create dedicated queue pairs for guest machines running under VMware ESXi. This approach provides significant benefits. First, the transmit and receive queue pairs are no longer shared with other guest machines. Second, the hypervisor is no longer responsible for sorting and switching packets because packet steering is moved to the adapter. This improves I/O performance and frees the hypervisor for other tasks.
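To make the queue-sharing point a bit more concrete, here is a toy sketch (plain Python, not any real ESXi or VIC API) contrasting one shared receive queue that the hypervisor has to sort with per-vNIC queues steered by destination MAC in the adapter:

```python
# Toy illustration only -- no real VIC/ESXi API. It contrasts a single
# shared RX queue (hypervisor sorts every packet in software) with
# per-VM queues (the adapter steers packets by destination MAC, as VMQ does).
from collections import defaultdict

packets = [
    {"dst_mac": "00:50:56:aa:00:01", "payload": "to VM1"},
    {"dst_mac": "00:50:56:aa:00:02", "payload": "to VM2"},
    {"dst_mac": "00:50:56:aa:00:01", "payload": "to VM1 again"},
]

# Without VMQ: everything lands in one shared queue; the hypervisor must
# inspect each packet and switch it to the right guest, burning host CPU.
shared_queue = list(packets)
sorted_by_hypervisor = defaultdict(list)
for pkt in shared_queue:
    sorted_by_hypervisor[pkt["dst_mac"]].append(pkt)  # host CPU does this work

# With VMQ: the adapter places each packet directly into the queue pair
# dedicated to that guest's vNIC, so the hypervisor skips the sorting step.
per_vm_queues = defaultdict(list)
for pkt in packets:
    per_vm_queues[pkt["dst_mac"]].append(pkt)  # conceptually done in hardware

print({mac: len(q) for mac, q in per_vm_queues.items()})
```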
