Hi everyone, I have installed a FlexPod environment with 2 Nexus 7Ks, 2 Nexus 5Ks, 2 FI 6248UPs, 7 5108 chassis, and approximately 28 B200 servers.
All blades are virtualized with ESXi, but my customer is seeing slow connections when the virtual machines on these ESXi hosts try to connect to a NAS storage.
Each B200 has a VIC 1280 adapter and a service profile template with 4 vNICs. The IOM in each chassis is a 2208, and I have 4 links from each IOM to the Fabric Interconnect (FI). Each FI has 4 LAN uplinks to the two Nexus 5Ks with vPC, and the NAS storage is a NetApp connected to the same pair of Nexus 5Ks with 4 links from each Nexus 5K.
Do you think I have congestion or saturation on the links, or is there maybe an internal configuration in VMware that would improve the performance?
The customer ran a test, pinging from the virtualized servers in UCS to the NAS storage: with a packet size of 3000 bytes the ping fails. They also have older virtualized servers on an IBM blade, and those servers don't have this problem.
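For what it's worth, a quick way to confirm where large frames are being dropped is to ping from the ESXi host itself with the don't-fragment bit set, so the packet cannot be silently fragmented along the path. A sketch, assuming a jumbo MTU of 9000 and a placeholder NAS IP of 192.168.10.50 (substitute your own):

```
# From the ESXi shell: -d sets the don't-fragment bit, -s is the payload size.
# 8972 = 9000 MTU minus 20 bytes IP header and 8 bytes ICMP header.
vmkping -d -s 8972 192.168.10.50

# If that fails, step down to the standard-MTU payload (1500 - 28 = 1472)
# to confirm basic connectivity and isolate the failure to jumbo frames.
vmkping -d -s 1472 192.168.10.50
```

If the 1472-byte ping succeeds but the 8972-byte one fails, some device in the path (vSwitch, vNIC/QoS class, uplink, Nexus, or the NetApp interface) is still at the default 1500 MTU.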
I have changed the MTU in the vNIC templates, but the problem persists. Do I also need to change the MTU on the server ports of the Fabric Interconnect, on the uplink ports, and on the ports of the switch that connects to the storage?
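In case it helps others, jumbo frames generally have to be enabled end to end, and on this kind of setup the vNIC MTU alone is not enough. A sketch of the remaining pieces, assuming a target MTU of 9000/9216 (policy names are my own placeholders):

```
! UCS: besides the vNIC MTU, raise the MTU of the QoS System Class the
! vNICs use (LAN > LAN Cloud > QoS System Class, e.g. Best Effort -> 9216).

! Nexus 5000: jumbo MTU is set switch-wide via a network-qos policy,
! not per interface.
policy-map type network-qos jumbo-policy
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo-policy

! Nexus 7000: MTU is configured per interface.
interface Ethernet1/1
  mtu 9216
```

On the ESXi side, the vSwitch/vDS and the VMkernel port used for NAS traffic also need MTU 9000, and the NetApp interface must be set to match; otherwise large frames are dropped even if the switching path supports them.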
Update: I've configured peer-gateway on both of my HSRP gateways (Nexus 7Ks) for the VLAN of the NAS network, and now the servers can access both NAS controllers without problems. I found in the Nexus 7000 configuration guide that HSRP needs the peer-gateway configuration in the vPC domain to resolve incompatibilities with some third-party NAS and load-balancer devices.
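For anyone hitting the same issue: some NAS devices reply to the MAC address of the frame that reached them (the physical router MAC) rather than the HSRP virtual MAC, and without peer-gateway the vPC peer that receives the reply will not route a frame destined to the other peer's MAC. The fix goes under the vPC domain on both Nexus 7Ks (the domain number below is just an example, use your own):

```
vpc domain 10
  peer-gateway
```

With peer-gateway enabled, each vPC peer can locally route packets addressed to the gateway MAC of its peer, which is exactly the case these NAS boxes trigger.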