09-11-2013 12:17 PM - edited 03-01-2019 11:14 AM
I recently migrated our ESXi hosts from HP servers to UCS B230 blades. As part of this configuration I set up the fabric interconnects in FC switch mode, since the fiber from the fabric interconnects runs directly into the VNX storage array. On the HP hardware I had 2 HBAs plugged into a fiber switch with 4 Gb ports. Now that the UCS FIs are plugged into the VNX directly, I have 8 Gb connections.
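As a sanity check that the ports really negotiated 8 Gb (a sketch, assuming UCS Manager CLI access; the port numbering is a placeholder for whatever is cabled to the VNX):

```shell
# From the UCS Manager CLI, drop into the NX-OS shell on the fabric interconnect
connect nxos

# List FC interfaces; the Oper Speed column shows the negotiated rate.
# Filter to fibre channel ports only.
show interface brief | include fc
```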
For the most part we haven't seen any performance drop-off. However, we have one monthly job that is running about an hour longer than it did on the HP hardware. I have looked at the VNX side and there isn't an issue there. I suspect there's something between VMware and the UCS now that the fiber connections go through the FIs. I've played with the IOPS settings on the LUNs for this VM and gained a bit of performance back, but not enough.
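For reference, the IOPS tuning I mean is the Round Robin path-switching frequency, set per device with esxcli (a sketch for ESXi 5.x; the naa device ID below is a placeholder for the actual LUN identifier):

```shell
# Confirm the LUN is using the Round Robin path selection policy.
# naa.xxxxxxxxxxxxxxxx is a placeholder -- substitute your LUN's device ID.
esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx

# Switch paths after every I/O instead of the default 1000 IOPS,
# which can improve throughput on some arrays.
esxcli storage nmp psp roundrobin deviceconfig set \
  --type=iops --iops=1 --device=naa.xxxxxxxxxxxxxxxx
```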
At this point the only thing I haven't looked at is whether there's a potential issue with running the fiber through the FIs instead of plugging everything into a separate fiber switch. Does anyone know if there's a performance drop-off with VMware (or the UCS fiber) when running fiber through the FIs in FC switch mode? Would I get better performance with separate fiber switches?
Thanks.
09-11-2013 03:20 PM
I've yet to see anything that makes me think there is any performance difference between uplinking through a switch and direct connection to the FI, but most of our disk is HDS, and anywhere we aren't using switches is a pretty small environment.
How large is your environment? Same number of fiber links from the FIs to the VNX as before? How many uplinks per enclosure are you bringing to your FIs, and how many UCS blades are in each enclosure? It's very easy with UCS to wind up with a much higher fan-in ratio than with traditional blades or rack mounts.
Just a thought, have you looked at error counters for the ports in question?
09-12-2013 05:48 AM
We're a pretty small shop. There were 4 fiber links to the VNX before the migration, and we didn't increase that at all. There are only 5 blades in the enclosure. I did look at the error counters for the ports and there aren't any errors. It just seems like we aren't getting the throughput we need for this job. I know part of it is the inefficiency of the program, but unfortunately I can't do anything about that issue. The fact remains that this job runs about an hour longer on the UCS than it did on the standalone HP equipment.
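For anyone digging into the same thing: beyond the basic error counters, the full per-port counters on the FI can show buffer-to-buffer credit starvation even when error counts are clean. A sketch from the NX-OS shell, where fc1/29 is a placeholder for whichever ports are cabled to the VNX:

```shell
# From the UCS Manager CLI, drop into the NX-OS shell on the FI
connect nxos

# Port status, negotiated speed, and error summary for one FC port.
# fc1/29 is a placeholder -- substitute the ports cabled to the VNX.
show interface fc1/29

# Detailed counters, including B2B credit statistics; frequent
# "transmit B2B credit transitions to zero" suggests the array or link
# is throttling the port even with zero CRC/link errors.
show interface fc1/29 counters
```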
I'm just trying to look at all angles to see if I can come up with anything we can change.
Thanks.