UCS HBA Queue Depth Vmware ESXi 5.1

mgkramer99
Beginner

We have our UCS environment connected to an upstream FC SAN fabric that contains an all-flash SSD array. I don't believe the UCS is fully taxing the array: the queue depth shown in esxtop (u view) is DQLEN 32 for LUNs connected via the Cisco VIC HBAs, and during certain operations this queue maxes out at 100% utilization.

From what I can gather, this value is hard-coded into the Cisco VMware driver.

Is there any way, now or in a future release, to raise this queue depth to 128 or set it to a value of our choice?

Thanks.
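For reference, the per-LUN queue depth that esxtop reports as DQLEN can also be read non-interactively from the ESXi shell; a minimal sketch, assuming ESXi 5.x (the grep pattern is just one way to trim the output):

```shell
# Show each device and the maximum queue depth the storage stack reports
# for it (the same value esxtop displays as DQLEN in the 'u' view).
esxcli storage core device list | grep -E "^naa\.|Device Max Queue Depth"
```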

7 Replies

Robert Burns
Cisco Employee

As of the current code release, 2.1(1a), this value is hard-coded in the firmware. There is an enhancement in an upcoming maintenance release (tentatively spring) to expose this value and allow adjustments via the CLI. Just as a heads-up, this feature enhancement is tentative and could be pushed out.

The Enhancement Request is tracked as:

CSCud30257 - fNIC Driver Tuneables Exposed through CLI

If this feature is critical to you, please open a quick TAC case and ask to have this enhancement attached to your case. This will increase its priority.

On another note: we have done extensive testing and found that in 99% of our cases and tests, changing the LUN queue depth did not improve performance. Working with our vendor partners, we found that properly tuning the array side had much more impact; in short, the fNIC driver was never shown to be the bottleneck. However, after hearing from some of these customers that they still wanted the tunable because they were comfortable with it, we decided to add it.

Our FC processing is done in our own ASIC and driver, and as such differs from the Emulex and QLogic approaches.

Hope this helps.

Regards,

Robert

Thank you for the status update. I will keep an eye out for a future update. We run into certain scenarios now, under heavy workloads, where the queue fills up to the max depth of 32. I have to wonder if bumping this up would increase throughput, as the array still has more IO to give....

If the LUN queue gets full (32 commands), you will see the remaining commands queuing up in the kernel, spiking KAVG times. If the HBA adapter queue (512 commands) gets full, the following message will be displayed:

2013-01-30T10:43:38.731Z cpu6:4102)ScsiSched: 2147: Reduced the queue depth for device naa.60a9800037533351672b4266354f7934 to 16, due to queue full/busy conditions. The queue depth could be reduced further if the condition persists.

In this case, more vHBAs would need to be created in the service profile.
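One way to spot the adapter-queue throttling described above is to search the vmkernel log for that exact message; a sketch, assuming the default ESXi 5.x log path:

```shell
# Look for the ScsiSched throttling events quoted above; a hit means the
# kernel has cut a device's queue depth due to queue-full/busy conditions.
grep -i "Reduced the queue depth" /var/log/vmkernel.log
```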

But looking at your screenshot, you are using iSCSI LUNs, and the above limitation of a 32-deep queue, to my knowledge, only applies to FC-attached LUNs.

It is possible to change iSCSI LUN queue depth:

http://communities.vmware.com/thread/403522
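For software iSCSI, the per-LUN queue depth is a parameter of the iscsi_vmk module rather than of fnic; a sketch along the lines of the procedure in the linked thread (the value 64 is just an example, and the change takes effect after a host reboot):

```shell
# Raise the software-iSCSI per-LUN queue depth
esxcli system module parameters set -m iscsi_vmk -p iscsivmk_LunQDepth=64
# Verify the configured value
esxcli system module parameters list -m iscsi_vmk | grep LunQDepth
```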

Hi Robert

regarding the queue depth issue I have one question, if you can answer it for me. I have a UCS 5108 chassis with ESXi hosts on the servers, connected to a VNX5500 array, and I'm doing some testing on this array. I'm testing throughput with SQLIO and I receive a maximum of 15000 IOPS from the array, which is not bad, but latency is very high if I go with a larger number of outstanding I/Os in the test. Kernel latency goes up to 3-3.5 ms, the number of queued commands is typically between 25 and 32, total DAVG is about 4.5 ms, and the test Windows machine sees very high latency (up to 160 ms avg). What is your opinion: is this expected behaviour for SQLIO tests, or can we do some tuning? (You mentioned in previous posts that you have some recommendations from storage vendors; can you share them?) The chassis is not yet in production so I don't have real-life data...

br

Aco
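As a rough sanity check on numbers like these (not from the thread, just Little's Law): average device latency ≈ outstanding I/Os ÷ IOPS, so ~32 outstanding commands at 15000 IOPS implies roughly 2 ms at the device, in the same ballpark as the reported DAVG; latency the guest sees on top of that accumulates in the queues above the device.

```shell
# Little's Law: latency_ms = (outstanding I/Os / IOPS) * 1000
awk 'BEGIN { printf "%.1f ms\n", 32 / 15000 * 1000 }'   # prints "2.1 ms"
```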

Jeremy Waldrop
Enthusiast

According to the release notes for the latest fnic driver update on the VMware site, adjusting the queue depth is one of the new features.

New Features:

- New driver tuneables: queue depth, maximum number of outstanding I/Os.
- Statistics collection/reporting

This is from this driver - https://my.vmware.com/group/vmware/details?downloadGroup=DT-ESXI5X-CISCO-FNIC-15045&productId=285

fnic version 1.5.0.45 for ESXi 5.x
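Before relying on the new tuneables, it is worth confirming which fnic driver a host is actually running; a sketch from the ESXi 5.x shell:

```shell
# Installed fnic driver package (VIB) version
esxcli software vib list | grep -i fnic
# Version of the fnic module currently loaded in the kernel
vmkload_mod -s fnic | grep -i version
```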

Is there any documentation on how to adjust the queue depth?

Just installed the updated driver in the lab and figured out the commands to adjust the queue depth.

Run this command to view the current configuration:

esxcli system module parameters list -m fnic

Name                  Type  Value  Description
--------------------  ----  -----  -------------------------------------------------------------------------
fnic_max_qdepth       uint         Queue depth to report for each LUN
fnic_trace_max_pages  uint         Total allocated memory pages for fnic trace buffer
heap_initial          int          Initial heap size allocated for the driver.
heap_max              int          Maximum attainable heap size for the driver.
skb_mpool_initial     int          Driver's minimum private socket buffer memory pool size.
skb_mpool_max         int          Maximum attainable private socket buffer memory pool size for the driver.

Run this command to change the queue depth to 64:

esxcli system module parameters set -p fnic_max_qdepth=64 -m fnic

Run this command to check it:

esxcli system module parameters list -m fnic

Name                  Type  Value  Description
--------------------  ----  -----  -------------------------------------------------------------------------
fnic_max_qdepth       uint  64     Queue depth to report for each LUN
fnic_trace_max_pages  uint         Total allocated memory pages for fnic trace buffer
heap_initial          int          Initial heap size allocated for the driver.
heap_max              int          Maximum attainable heap size for the driver.
skb_mpool_initial     int          Driver's minimum private socket buffer memory pool size.
skb_mpool_max         int          Maximum attainable private socket buffer memory pool size for the driver.
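One caveat worth noting (general behaviour of ESXi driver module options, not something the release notes spell out): the value shown by `parameters list` is the configured value, and it is applied when the module loads, so in practice a host reboot is needed before devices report the new depth. A sketch of the full sequence:

```shell
# Configure the new per-LUN queue depth for the fnic driver
esxcli system module parameters set -p fnic_max_qdepth=64 -m fnic
# The module reloads at boot, so restart the host (after evacuating VMs)
reboot
# After the host is back up, confirm the active value per device
esxcli storage core device list | grep "Device Max Queue Depth"
```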

Thanks Jeremy, I was able to use this and increase fnic_max_qdepth to 128. I also put four vHBAs in the Service Profile. Makes for a nice screenshot.
