
iSCSI MPIO (Multipath) with Nexus 1000v

Level 1

Anyone out there have iSCSI MPIO successfully working with Nexus 1000v? I have followed the Cisco Guide to the best of my understanding and I have tried a number of other configurations without success - vSphere still shows the same number of Paths as it shows Targets.

The Cisco document states the following:

Before starting the procedures in this section you must know or do the following:

You have already configured the host with one port channel that includes two or more physical NICs.

You have already created VMware kernel NICs to access the SAN external storage.

A VMware kernel NIC can only be pinned or assigned to one physical NIC.

A physical NIC can have multiple VMware Kernel NICs pinned or assigned to it.

What does "A VMware kernel NIC can only be pinned or assigned to one physical NIC" mean in regard to the Nexus 1000v? I know how to pin a physical NIC with a standard vDS, but how does that work with the 1000v? The only thing related to "pinning" I could find inside the 1000v was port channel sub-groups. I tried creating a port channel with manual sub-groups, assigning sub-group-id values to each uplink, then assigning a pinning id to my two VMkernel port profiles (and directly to the vEthernet ports as well). But that didn't seem to work for me.
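For reference, here is a sketch of the sub-group/pinning configuration I tried. The interface names, profile names, and VLAN ID below are placeholders, not my exact config:

```
! Ethernet uplink profile with manual sub-groups (names/VLAN are examples)
port-profile type ethernet iscsi-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100
  channel-group auto mode on sub-group manual
  no shutdown
  state enabled

! Assign each physical uplink to a sub-group
interface Ethernet3/1
  sub-group-id 0
interface Ethernet3/2
  sub-group-id 1

! vEthernet profile for the first iSCSI VMkernel NIC
port-profile type vethernet iscsi-vmk-a
  vmware port-group
  switchport mode access
  switchport access vlan 100
  pinning id 0
  capability iscsi-multipath
  no shutdown
  state enabled

! Second VMkernel profile is the same, but with: pinning id 1
```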

I can ping both of the iSCSI VMkernel ports from the upstream switch and from inside the VSM, so I know Layer 3 connectivity is there. One odd thing, however, is that I only see one of the two VMkernel MAC addresses bound on the upstream switch. Both addresses show bound from inside the VSM.

What am I missing here?

2 Replies

Level 1

Hi Josh,

In regard to the uplink pinning (manual sub-groups): are your VMKs for each iSCSI profile on different VLANs?

What you'll find is that it will only pin to separate uplinks if both iSCSI VMKs are in the same VLAN, i.e. using the same port-profile for the two different iSCSI VMKs. I ran into this problem when I had to use two iSCSI VLANs / two port-profiles for my MPIO setup with a CLARiiON. It's actually a problem in the FLARE code that forced us into the setup we're in today, but it just so happens to work against the logic the 1000v uses for its iSCSI pinning.
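To illustrate the difference (profile names and VLAN IDs here are made up, not our production config):

```
! Layout that pinned correctly for us: both iSCSI VMKs share one
! port-profile, and therefore one VLAN
port-profile type vethernet iscsi-vmk
  switchport mode access
  switchport access vlan 100
  capability iscsi-multipath
  no shutdown
  state enabled

! Layout that did NOT pin to separate uplinks: two VLANs means two
! port-profiles, which works against the 1000v's iSCSI pinning logic
port-profile type vethernet iscsi-vmk-vlan100
  switchport access vlan 100
  capability iscsi-multipath

port-profile type vethernet iscsi-vmk-vlan200
  switchport access vlan 200
  capability iscsi-multipath
```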

Link to a thread:    


Thank you for your response. I believe my configuration matches what you mention in your posting. I have a few more details over on the VMware community site. I think the issue may be related to the fact that our iSCSI storage (EqualLogic) only has a single IP address. For some reason, that is not an issue for iSCSI multipath with VMware Standard Switches, but it may be for iSCSI multipath with the Nexus 1000v.