UCS customer, using ESXi 5.1 and a standard vSwitch; no distributed switch nor Nexus 1000v
NFS with multiple shares on the same NetApp filer
ESXi and NetApp are on the same VLAN and IP subnet
With the standard load-balancing setup (route based on originating virtual port ID), the VMkernel interface actually sends all traffic to a single NFS share over one outbound interface (VIC 1240)
Statement from VMware:
It is also important to understand that there is only one active pipe for the connection between the ESX server and a single storage target (LUN or mountpoint). This means that although there may be alternate connections available for failover, the bandwidth for a single datastore and the underlying storage is limited to what a single connection can provide. To leverage more available bandwidth, an ESX server has multiple connections from server to storage targets. One would need to configure multiple datastores with each datastore using separate connections between the server and the storage. This is where one often runs into the distinction between load balancing and load sharing. The configuration of traffic spread across two or more datastores configured on separate connections between the ESX server and the storage array is load sharing.
Question from customer
With e.g. 2 NFS shares on the same NetApp filer, how can load balancing be changed to make use of the second uplink interface? If the same VMkernel interface is used for both NFS shares, all the traffic flows over one interface.
We know that IP hash isn't possible; the Fabric Interconnect doesn't support LACP.
After much discussion, the solution is the following (special thanks to Ramses Smeyers):
One vswitch with 2 interfaces vnic0 and vnic1:
Create multiple vmk interfaces for NFS, each one in a different IP subnet, and mirror the same layout on the NetApp side:
Int 1: 10.0.0.0/24 for NFS share no. 1
Int 2: 10.0.1.0/24 for NFS share no. 2
Int 3: 10.0.2.0/24 for NFS share no. 3
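As a sketch, the per-subnet vmk interfaces could be created on the ESXi 5.1 host with esxcli like this; the port-group names and host IP addresses are assumptions and must be adapted to your environment:

```shell
# Assumed port groups NFS-A / NFS-B on the vSwitch with uplinks vnic0/vnic1,
# and assumed host addresses .11 in each NFS subnet.
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS-A
esxcli network ip interface ipv4 set --interface-name=vmk1 \
  --ipv4=10.0.0.11 --netmask=255.255.255.0 --type=static

esxcli network ip interface add --interface-name=vmk2 --portgroup-name=NFS-B
esxcli network ip interface ipv4 set --interface-name=vmk2 \
  --ipv4=10.0.1.11 --netmask=255.255.255.0 --type=static
```

Each vmk interface then occupies its own virtual port on the vSwitch, which is what the originating-virtual-port-ID policy uses to pin it to an uplink.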
On UCS-B: source-port-based distribution
On NetApp: LACP/port-channel
In the NFS mount command on the ESX server, you can only specify the destination IP address. ESX automatically chooses the vmk interface that is in the same subnet as the destination (this assumes L2 adjacency between server and storage, which was a VMware requirement up to version 5). Because the different vmk interfaces map to different source virtual ports, traffic is distributed evenly over the outbound interfaces vnic0 and vnic1.
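A minimal sketch of the resulting mounts, one datastore per NFS subnet; the filer IPs, export paths, and datastore names here are assumptions for illustration:

```shell
# Assumed NetApp interface addresses .50 in each subnet and example export paths.
# ESXi picks vmk1 for the first mount and vmk2 for the second, since each vmk
# shares a subnet with its respective destination address.
esxcli storage nfs add --host=10.0.0.50 --share=/vol/nfs_share1 --volume-name=NFS-DS1
esxcli storage nfs add --host=10.0.1.50 --share=/vol/nfs_share2 --volume-name=NFS-DS2
```

With one datastore per subnet, each datastore's traffic leaves via a different vmk interface and thus a different uplink, giving load sharing rather than per-connection load balancing.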