I recently migrated our ESXi hosts from HP servers to UCS B230 blades. As part of this configuration I set up the fabric interconnects in FC switch mode, since the fiber from the fabric interconnects runs directly into the VNX storage array. On the HP hardware I had 2 HBAs plugged into a fiber switch with 4 Gb ports. Now that the UCS FIs are plugged into the VNX directly, I have 8 Gb connections.
For the most part we haven't seen any performance drop-off. However, we have one monthly job that is running about an hour longer than it did on the HP hardware. I have looked at the VNX side and there isn't an issue there. I suspect there's something between VMware and the UCS now that storage traffic goes through the FIs for the fiber connections. I've played with the IOPS settings on the LUNs for this VM and gained a bit of performance back, but not enough.
At this point the only thing I haven't looked at is whether there's a potential issue running the fiber through the FIs instead of plugging everything into a separate fiber switch. Does anyone know if there's a performance drop-off with VMware (or the UCS fabric) when running fiber through the FIs in FC switch mode? Would I get better performance with separate fiber switches?
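For reference, the IOPS tweak I mentioned was done with the standard esxcli round-robin path-policy commands, roughly like this (the device ID below is just a placeholder for one of our VNX LUNs, not the real value):

```shell
# List NMP devices and their current path selection policy / config
esxcli storage nmp device list

# Check the current round-robin settings for a given LUN
# (naa.xxxxxxxxxxxxxxxx is a placeholder device ID)
esxcli storage nmp psp roundrobin deviceconfig get -d naa.xxxxxxxxxxxxxxxx

# Switch the path after every 1 I/O instead of the default 1000,
# which is the common tuning to spread load across paths
esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxxxxxxxxxx --type iops --iops 1
```

That change helped a bit, but as I said, not enough to close the hour gap.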
I've yet to see anything that makes me think there is any performance difference between uplinking through a switch and direct connection to the FI, but most of our disk is HDS, and anywhere we aren't using switches is a pretty small environment.
How large an environment? Same number of fiber links from the FIs to the VNX as before? How many uplinks are you bringing out per enclosure, and how many UCS blades are in each enclosure? It's very easy with UCS to wind up with a much higher fan-in ratio than with traditional blades or rack mounts.
Just a thought, have you looked at error counters for the ports in question?
We're a pretty small shop. There were 4 fiber links to the VNX before the migration, so we didn't increase that at all. There are only 5 blades in the enclosure. I did look at the error counters for the ports and there aren't any errors. It just seems like we aren't getting the throughput we need for this job. I know part of it is the inefficiency of the program, but unfortunately I can't do anything about that. The fact remains that this job is running about an hour longer on the UCS than it did on the standalone HP equipment.
I'm just trying to look at all angles to see if I can come up with anything we can change.