Has anyone out there gotten iSCSI MPIO working successfully with the Nexus 1000v? I have followed the Cisco guide to the best of my understanding and have tried a number of other configurations without success: vSphere still shows the same number of paths as it shows targets, i.e. only one path per target instead of multiple.
The Cisco document states the following:
Before starting the procedures in this section you must know or do the following:
•You have already configured the host with one port channel that includes two or more physical NICs.
•You have already created VMware kernel NICs to access the SAN external storage.
•A VMware kernel NIC can only be pinned or assigned to one physical NIC.
•A physical NIC can have multiple VMware kernel NICs pinned or assigned to it.
What does "A VMware kernel NIC can only be pinned or assigned to one physical NIC" mean with regard to the Nexus 1000v? I know how to pin to a physical NIC with a standard vSwitch or vDS (via the port group's NIC teaming override), but how does that work with the 1000v? The only thing related to "pinning" I could find in the 1000v was port channel sub-groups. I tried creating a port channel with manual sub-groups, assigning a sub-group-id value to each uplink, then assigning a pinning id to each of my two VMkernel port profiles (and directly to the vEthernet ports as well). That didn't seem to work for me either.
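For reference, here is roughly what my sub-group attempt looked like (port-profile names, VLAN, and interface numbers are placeholders from my lab, not from the Cisco guide):

```
! Uplink port-profile with a manual sub-group port channel
port-profile type ethernet iscsi-uplink
  vmware port-group
  switchport mode access
  channel-group auto mode on sub-group manual
  no shutdown
  state enabled

! Each physical uplink gets its own sub-group id
interface Ethernet3/1
  sub-group-id 0
interface Ethernet3/2
  sub-group-id 1

! Each iSCSI VMkernel port-profile pinned to one sub-group
port-profile type vethernet iscsi-vmk1
  vmware port-group
  switchport access vlan 100
  pinning id 0
  no shutdown
  state enabled
port-profile type vethernet iscsi-vmk2
  vmware port-group
  switchport access vlan 100
  pinning id 1
  no shutdown
  state enabled
```

If "pinning" in the Cisco doc means something other than this sub-group mechanism, that may be exactly what I'm misunderstanding.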
I can ping both of the iSCSI VMkernel ports from the upstream switch and from inside the VSM, so I know Layer 3 connectivity is there. One odd thing, however: the upstream switch only shows one of the two VMkernel MAC addresses in its MAC address table, while both addresses show as bound from inside the VSM.
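For completeness, the host-side port binding step I followed from the guide looked like this (vSphere 4.x-era esxcli syntax; vmhba33 and the vmk numbering are just what my host happens to use):

```
# Bind each iSCSI VMkernel NIC to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Confirm both vmknics show as bound
esxcli swiscsi nic list -d vmhba33
```

Both vmknics appear in the list output, so I don't think the binding itself is the problem.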
What am I missing here?