I’m most of the way through rebuilding my home lab: I’ve swapped to Cisco UCS M4-generation gear and am also using UCS Manager.
I’m continuing to run VMware vSAN, so I tracked down the required UCSB-MRAID12G-HE controllers for my B200 M4 blades.
The environment is up and running UCS-wise, and before starting the VMware config I decided to check the controller and disk queue depths.
The controller is listed in esxtop with a depth of 896, which is correct, but the Cisco SAS SSDs attached to it are only reporting a depth of 64.
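For reference, besides esxtop I also pulled the per-device value with `esxcli storage core device list`, which reports a "Device Max Queue Depth" field per device. A rough sketch of how I grabbed it (shown against sample output with the device identifier redacted; on the host you would pipe the real command output instead):

```shell
# Illustrative `esxcli storage core device list` output for one device
# (identifier redacted, values as observed on my host).
sample='naa.XXXXXXXXXXXXXXXX
   Device Max Queue Depth: 64
   No of outstanding IOs with competing worlds: 32'

# Extract the per-device max queue depth
printf '%s\n' "$sample" | awk -F': ' '/Device Max Queue Depth/ {print $2}'
```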
I’m using the Cisco ESXi 6.7 U3 custom ISO, and my firmware versions are all on the latest 4g release.
The Cisco SAS SSDs are part numbers:
All the hardware is on the vSAN HCL, and with the exception of the “capacity” drive my setup essentially mirrors the B200 M4 AF4 vSAN ready node setup.
Does the LSI controller in the “HE” only allow a depth of 64 to the drives (instead of the expected 256), or are these drives limited somehow?
Any suggestions welcome!
Do you have the 2GB flash-backed cache module present? The queue depth should be around 975 if the cache module is healthy.
What specific driver are you using: megaraid_sas or lsi_mr3?
Thanks for the reply!
This controller is for the B200 M4/M5, and the cache is not removable (so the 2GB is there).
The controller queue depth is showing as 916 (the value in my original post was a typo).
The driver in use is the lsi_mr3.
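To double-check which driver had claimed the controller, I used `esxcli storage core adapter list` (which shows the bound driver per vmhba) and `esxcli system module list` (which shows which modules are loaded/enabled). A quick sketch of filtering the module list, run here against sample rows since the exact output depends on the build:

```shell
# Sample `esxcli system module list` rows (Module / Is Loaded / Is Enabled);
# on the host, replace the sample with the real command output.
modules='lsi_mr3       true   true
megaraid_sas  false  false'

# Print only the module that is actually loaded
printf '%s\n' "$modules" | awk '$2 == "true" {print $1}'
```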
I've taken some pictures which hopefully help show the details.
Is it just a case of the firmware on the drives limiting the drive queue depth? (i.e., is it by design?)
My last 12Gb SAS3 vSAN setup was on lower-end HP G9 servers with H240 controllers; the HP SAS3 SSDs (HGST HUSMMs) all showed a queue depth of 256, so I assumed all 12Gb SAS3 products would be the same.
I tried the same disks in one of my C220 M4 servers, which has an MRAID12G-1GB controller, and both that controller and the drives report the same values.
I will try the drives in my Dell R630 when I have time, just to see if the queue depth is the same.
Perhaps it’s just the queue depth that these drives run at?
Maybe try updating the megaraid_sas driver/VIB (or confirm it matches the Cisco HCL), and then disable the lsi_mr3 driver that the system defaults to. I believe I have seen some cases where different drivers seemed to affect the reported queue depth on the drives.
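If you want to try the swap, the usual approach is to disable one module and enable the other with `esxcli system module set`, then reboot so the other driver can claim the adapter. Shown here as a dry run that only prints the commands; verify the module names against your build before running them for real:

```shell
# Dry run: print the esxcli steps to disable lsi_mr3 and fall back to
# megaraid_sas. Drop the echo/loop to execute directly on the host;
# a reboot is required before the other driver claims the controller.
for cmd in \
    'esxcli system module set --module=lsi_mr3 --enabled=false' \
    'esxcli system module set --module=megaraid_sas --enabled=true'
do
    echo "$cmd"
done
```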
I've played with some different versions of drivers (and lsi_mr3 vs megaraid_sas) and it doesn't appear to have made a difference.
It's very hard to find information on this, but I found two other posts referring to similar controllers or the same driver stack, all showing a max queue depth of 64.
I've also checked on my Dell R630 which uses the H730P mini and also the lsi_mr3 driver.
It's running SATA SSDs at a depth of 32, and SAS SSDs and SAS HDDs at a depth of 64.
Perhaps it's a limitation (or design choice) of the driver stack?
Since it's only a lab, I'd say the performance will be fine, and this mystery will remain unsolved for now.
Thanks for the input/help though!