This is not so much a post looking for assistance (maybe it will turn into that, I'm not sure). Rather, I thought I'd share what I think is happening with the VIC 1340 cards so others can avoid some headaches.
It looks like Cisco made some changes with the VIC 1340 card installed in B200 M4 blades. These cards now have two PCI paths, which effectively makes them behave like two adapters without actually showing up as two (you won't see a second adapter listed in server inventory, but you do now have two distinct PCI ID trees). Cisco bug ID CSCut78943 has more information on this. It appears that if you have a VIC 1340 card you now have the ability to specify a host port (controlled in the PCI Placement screen) to tell UCS which port of the 1340 a virtual adapter should be using.
Perhaps someone else could validate my thinking here (I've opened a case with TAC for validation as well), but that seems to be what's happening.
More details on my specific situation:
This is causing a massive headache for me now with updating templates running ESXi. We just recently added a second pair of NICs to support a new disjoint layer 2 uplink. When we did that, the PCI order for the B200 M3s with the VIC 1240 cards added two more PCI devices to the end of my order -- PCI IDs 11 and 12. This bumped my vHBAs from 11 and 12 to 13 and 14. This had no effect on my B200 M3s -- ESXi couldn't care less. It saw two new NICs, and it still saw the same PCI HBAs even though they had new IDs -- no problem. On the B200 M4s with the VIC 1340, however, chaos ensued. Currently my VIC 1340s place the PCI devices with IDs 1-5 as vNICs-A, 6 as vHBA-A, 7-11 as vNICs-B, and 12 as vHBA-B. When we added two additional vNICs to support the new disjoint layer 2 connection, though, it added them in the middle. I now had IDs 1-6 for vNICs, 7 for vHBA-A, 8-12 as vNICs, and 13 as vHBA-B. This caused massive chaos within ESXi -- adapters that were configured to specific port groups were no longer associated with the right port groups. VMs that vMotioned to the B200 M4s lost connectivity, and all sorts of bad stuff...
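To illustrate the failure mode, here's a toy Python sketch of how ESXi hands out vmnic aliases in ascending PCI enumeration order. All device names and PCI IDs below are made up for illustration; this is my model of the behavior, not anything pulled from UCSM:

```python
# Toy model: ESXi assigns vmnicN aliases to NICs in ascending PCI-ID order.
# All device names and PCI IDs here are hypothetical, for illustration only.

def alias_nics(devices):
    """devices: list of (pci_id, name, kind). Returns {name: vmnic alias}."""
    nics = [name for _, name, kind in sorted(devices) if kind == "nic"]
    return {name: f"vmnic{i}" for i, name in enumerate(nics)}

# VIC 1240 behavior: the two new backup vNICs land at the END of the order,
# so existing NICs keep their aliases.
m3 = [(1, "eth0", "nic"), (2, "eth1", "nic"), (3, "fc0", "hba")]
m3_new = m3 + [(4, "backup-a", "nic"), (5, "backup-b", "nic")]

# VIC 1340 behavior described above: the new vNICs get inserted in the
# MIDDLE, ahead of existing devices, shifting every alias after them.
m4 = [(1, "eth0", "nic"), (2, "fc0", "hba"), (3, "eth1", "nic"), (4, "fc1", "hba")]
m4_new = [(1, "eth0", "nic"), (2, "backup-a", "nic"), (3, "fc0", "hba"),
          (4, "eth1", "nic"), (5, "backup-b", "nic"), (6, "fc1", "hba")]

print(alias_nics(m3_new))  # eth1 keeps vmnic1
print(alias_nics(m4_new))  # backup-a steals vmnic1; eth1 slides to vmnic2
```

Once eth1 slides from vmnic1 to vmnic2, every port group mapping that referenced vmnic1 now points at the wrong physical path, which is exactly the chaos described above.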
What seems to make things better right now is to specify in the PCI Placement screen that all virtual interfaces are only using admin host-port 1. This makes the B200 M4s act exactly like the B200 M3s. Ideally I'd like to use both ports, but I'm not sure how I can do that and keep ESXi happy right now.
Any validation I can get here, plus from TAC, would be helpful before we schedule a window to change this.
I really wanted to get this information out there, as up until now all the blogs and posts I've read speak about how the half-width blades only have one VCON/PCI tree/adapter port. It appears Cisco has muddied the waters a bit by allowing the 1340 to have multiple ports, and thus muck with the PCI order more.
Steve, I have this exact same problem. I have 2 vHBAs and 2 vNICs. I went to add 2 more vNICs to run a disjoint L2 VLAN dedicated for backup, and it took VMs down. The vHBAs and vNICs silently reordered, and they were added to the dvSwitches the way our M3s are set up. Basically vmnic1 became vmnic2. Well... vmnic1 now only had the 1 backup VLAN and not all the production VLANs, so when a VM decided to use that path, down it went -- meaning it lost IP connectivity. I am surprised this hasn't caused more issues than it has. Looks like it's fixed in 6c, so I will be upgrading soon.
It's not really fixed in 6c, they just provide a warning that you may have PCI enumeration issues.
The solution TAC has suggested is as follows:
Place all vNICs onto the appropriate VCON (in my case only VCON 1, since I only have half-width blades). Let them use ANY admin host port.
Then place all vHBAs on the same VCON and set their admin host port to 2.
This will force the HBAs to always be the absolute last devices to enumerate, and therefore they will not mess with PCI ordering -- again, provided you always add NICs to the end of the PCI order, not in the middle.
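To see why pinning the vHBAs to admin host port 2 keeps them stable, here's a toy Python model. The assumption (mine, inferred from this thread rather than from Cisco documentation) is that all host-port-1 devices enumerate before any host-port-2 device, with placement order deciding ties within a port:

```python
# Toy model of VIC 1340 enumeration under the TAC workaround. Assumption
# (inferred from this thread, not Cisco docs): all admin-host-port-1
# devices enumerate first, then all host-port-2 devices, each group in
# placement order. Device names are hypothetical.

def enumerate_pci(devices):
    """devices: list of (placement_order, name, host_port).
    Returns names in enumeration order: port 1 first, then port 2."""
    return [name for _, name, _ in sorted(devices, key=lambda d: (d[2], d[0]))]

# Workaround applied: every vNIC on host port 1, every vHBA on host port 2.
before = [(1, "vnic-a", 1), (2, "vnic-b", 1), (3, "vhba-a", 2), (4, "vhba-b", 2)]
# Add the two new disjoint-L2 vNICs at the end of the vNIC placement order:
after = before[:2] + [(5, "vnic-backup-a", 1), (6, "vnic-backup-b", 1)] + before[2:]

print(enumerate_pci(before))  # vHBAs last
print(enumerate_pci(after))   # new vNICs slot in before the vHBAs;
                              # existing vNICs keep their positions
```

In this model the vHBAs can never land in the middle of the NIC list, so adding vNICs (appended to the end of the order) never shifts an existing vmnic alias.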
There is nothing "fixed" in 2.2(6c) other than Cisco providing a warning now when you make a change to vHBAs/vNICs placement. There is nothing that can be fixed since this is just how PCI enumeration works.
Here is the release note on what was changed for that bug report:
"When making changes to vNICs or vHBAs that will be provisioned on Cisco Virtual Interface Cards (VICs) 1340 and 1380 adapters, a warning on the placement order impact appears."
That is the fix. Cisco warns you that changing the order will impact placement on 1340/1380 cards. You still have to follow the workaround. I am 100% certain on this -- we are on 2.2(6d) with a slew of 1340 VICs now. Nothing changed to force the ordering. You must follow the workaround in the bug report and list the vHBAs as the last two devices, on the last VCON, and on the last admin host port available.
Just to make sure: do I put all the vNICs on vCON1 with ANY admin port and place all the vHBAs on vCON1 with admin port 2 specified?
Or put all the vNICs on vCON1 with ANY admin port and place all the vHBAs on vCON2 with admin port 2 specified?
OK... I got it fixed. I put all the vNICs on vCON 1 and specified Admin Host Port 1. I put all the vHBAs on vCON 1 and specified Admin Host Port 2. This put the vNICs in PCI slots 6, 7, 8, and 9. The vHBAs are in PCI slots 14 and 15. Now... the ESXi host I was messing with got so messed up that I had to go in and re-alias all the vmnics following the commands in Cisco bug ID CSCuv34051.
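For anyone else who has to do the re-aliasing step: the commands are along these lines (a sketch from memory of VMware's internal alias tooling -- the alias names and bus address below are placeholders, so copy the real bus addresses from the list command on your own host before storing anything):

```shell
# On the ESXi host shell: list current device aliases and their PCI bus
# addresses so you know what is currently mapped where.
localcli --plugin-dir /usr/lib/vmware/esxcli/int deviceInternal alias list

# Re-pin a vmnic alias to a specific PCI bus address. The address here is
# a placeholder -- use the one shown by the list command for your NIC.
localcli --plugin-dir /usr/lib/vmware/esxcli/int deviceInternal alias store \
    --bus-type pci --alias vmnic2 --bus-address p0000:0b:00.0

# Reboot the host for the new alias mapping to take effect.
reboot
```

These are host-specific configuration commands, so run them one vmnic at a time and double-check the alias list output before rebooting.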
What a mess. All is good now.
A bit late to the party on this one, but I've just run into this issue. I have associated all of the vNICs with vCON1 and restarted the server, but I'm still running into this issue.
Any other steps i can take for this?
Have you verified that your placement put your adapters in the right order? Did you set your FC adapters to the highest PCI IDs?
Need some more information. Check what ESXi is showing as the MAC address for every vmnic and verify they match the vNIC MACs defined in the service profile.
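The check above can be done from the ESXi host shell. These are standard ESXi commands as far as I know, but output columns vary a bit by ESXi version:

```shell
# List all vmnics with their PCI device address, driver, link state, and
# MAC address -- compare the MACs against the vNICs in the UCS service
# profile to confirm which vmnic landed on which vNIC.
esxcli network nic list

# Map PCI bus addresses to vmnic/vmhba aliases directly, to spot any
# enumeration shifts.
vmkchdev -l | grep vmnic
```

If a vmnic's MAC no longer matches the vNIC it used to correspond to, the PCI order has shifted and the aliases need to be fixed per CSCuv34051.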