VIC 1340 PCI placement issues

ssumichrast
Level 1

This is not so much a post looking for assistance (though maybe it will turn into that, I'm not sure). Rather, I thought I'd share what I think is happening with the VIC 1340 cards so others can avoid some headaches.

 

It looks like Cisco made some changes with the VIC 1340 card installed in B200 M4 blades. These cards now have two PCI paths, which effectively makes them behave like two adapters without actually appearing as two (you won't see a second adapter listed in server inventory, but you do now have two distinct PCI ID trees). Cisco bug ID CSCut78943 has more information on this. It appears that with a VIC 1340 card you now have the ability to specify an admin host port (controlled in the PCI Placement screen) to tell UCS which port of the 1340 a virtual adapter should use.
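If my read is right, the enumeration can be modeled roughly like this: devices sort by admin host port first, then by placement order, which is what produces the two PCI ID trees. This is an illustrative sketch of assumed behavior, not Cisco's documented algorithm, and the device names are made up:

```python
# Illustrative model (assumed behavior, not Cisco's documented algorithm):
# devices sort by admin host port first, then by placement order, so a
# single vCon effectively yields two PCI ID trees, one per host port.

def enumerate_pci(devices):
    """devices: list of (name, host_port, placement_order) tuples.
    Returns {name: pci_id} with IDs assigned across both host-port groups."""
    ordered = sorted(devices, key=lambda d: (d[1], d[2]))
    return {name: pci_id for pci_id, (name, _, _) in enumerate(ordered, start=1)}

# Hypothetical placement: one vNIC and one vHBA on each host port.
devices = [
    ("vNIC-A", 1, 1), ("vHBA-A", 1, 2),   # first PCI tree (host port 1)
    ("vNIC-B", 2, 1), ("vHBA-B", 2, 2),   # second PCI tree (host port 2)
]
print(enumerate_pci(devices))
# {'vNIC-A': 1, 'vHBA-A': 2, 'vNIC-B': 3, 'vHBA-B': 4}
```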

Perhaps someone else could validate my thinking here (I've opened a case with TAC for validation as well), but that seems to be what's happening.

 

More details on my specific situation:

This is causing a massive headache for me now with updating templates running ESXi. We recently added a second pair of NICs to support a new disjoint layer 2 uplink. When we did that, the PCI order for the B200 M3s with the VIC 1240 cards added two more PCI devices to the end of my order (PCI IDs 11 and 12), which bumped my vHBAs from 11 and 12 to 13 and 14. This had no effect on the B200 M3s: ESXi couldn't care less. It saw two new NICs, and it still saw the same PCI HBAs even though they had new IDs. No problem.

On the B200 M4s with the VIC 1340, however, chaos ensued. Currently my VIC 1340s place the PCI devices as 1-5 for vNICs-A, 6 for vHBA-A, 7-11 for vNICs-B, and 12 for vHBA-B. When we added the two additional vNICs to support the new disjoint layer 2 connection, though, they were inserted in the middle. I now had IDs 1-6 for vNICs, 7 for vHBA-A, 8-12 for vNICs, and 13 for vHBA-B. This caused massive chaos within ESXi: adapters that were configured to specific port groups were now not associated with the right port groups, VMs that vMotioned to the B200 M4s lost connectivity, and all sorts of bad stuff...
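To illustrate why the M3s shrugged this off while the M4s broke, here's a toy Python model of ESXi assigning vmnicN aliases to NICs in ascending PCI order. The helper and device names are hypothetical, and real ESXi aliasing is more involved (it also persists aliases across reboots), but it captures the relative-order problem:

```python
# Toy model (device names are made up; real ESXi aliasing is more
# involved and persists aliases across reboots). ESXi assigns vmnicN to
# NICs in ascending PCI order, so the *relative* order of the NICs is
# what matters, not their absolute PCI IDs.

def vmnic_aliases(pci_order):
    nics = [d for d in pci_order if d.startswith("nic")]
    return {f"vmnic{i}": d for i, d in enumerate(nics)}

# B200 M3 / VIC 1240: the two new vNICs land after the existing NICs,
# so only the vHBA PCI IDs shift and every existing vmnic alias is stable.
m3_before = ["nicA1", "nicA2", "hbaA", "hbaB"]
m3_after  = ["nicA1", "nicA2", "nicNew1", "nicNew2", "hbaA", "hbaB"]

# B200 M4 / VIC 1340: the new vNICs are inserted into the middle of each
# host-port group, so every NIC after an insertion point changes alias.
m4_before = ["nicA1", "nicA2", "hbaA", "nicB1", "nicB2", "hbaB"]
m4_after  = ["nicA1", "nicA2", "nicNewA", "hbaA", "nicB1", "nicNew2", "nicB2", "hbaB"]

print(vmnic_aliases(m4_before)["vmnic2"])  # nicB1
print(vmnic_aliases(m4_after)["vmnic2"])   # nicNewA -- the port-group mismatch
```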

What seems to make things better right now is to specify in the PCI Placement screen that all virtual interfaces are only using admin host-port 1.  This makes the B200 M4s act exactly like the B200 M3s.  Ideally I'd like to use both ports, but I'm not sure how I can do that and keep ESXi happy right now.

Any validation I can get, plus confirmation from TAC, would be helpful before we schedule a window to change this.

I really wanted to get this information out there, as until now all the blogs and posts I'm reading here speak about how the half-width blades only have one vCon/PCI tree/adapter port. It appears Cisco has muddied the waters a bit by allowing the 1340 to have multiple ports, and thus muck with the PCI order more.

 

 

18 Replies

davidoliver1
Level 1

Steve, I have this exact same problem. I have 2 vHBAs and 2 vNICs. I went to add 2 more vNICs to run a disjoint L2 VLAN dedicated to backup, and it took VMs down. The vHBAs and vNICs reordered without our knowledge and were added to the dvSwitches the way our M3s are set up. Basically, vmnic1 became vmnic2. Well... vmnic1 now only had the one backup VLAN, not all the production VLANs, so when a VM decided to use that path, down it went, meaning it lost IP connectivity. I am surprised this hasn't caused more issues than it has. Looks like it's fixed in 6c, so I will be upgrading soon.

It's not really fixed in 6c, they just provide a warning that you may have PCI enumeration issues. 

The solution TAC has suggested is as follows:

Place all vNICs onto the appropriate vCon (in my case only vCon 1, since I only have half-width blades) and let them use ANY admin host port.

Then place all vHBAs on the same vCon and set their admin host port to 2.

This will force the HBAs to always be the absolute last devices to enumerate and therefore will not mess with the PCI ordering, provided you always add NICs to the end of the PCI order, not in the middle.
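A quick sketch of why this workaround should hold, under the same assumed sort-by-host-port model (illustrative only, not Cisco's documented behavior; the device names are hypothetical):

```python
# Sketch of the workaround under an assumed model: devices enumerate
# sorted by (admin host port, placement order), so anything pinned to
# host port 2 always lands after everything on host port 1.

def placement(devs):
    """devs: list of (name, admin_host_port, order). Returns names in
    assumed PCI enumeration order."""
    return [name for name, _, _ in sorted(devs, key=lambda d: (d[1], d[2]))]

# All vNICs on host port 1, vHBAs pinned to host port 2.
before = [("nic0", 1, 1), ("nic1", 1, 2), ("hba0", 2, 1), ("hba1", 2, 2)]
after = before + [("nic2", 1, 3)]   # a new vNIC added later

print(placement(before))  # ['nic0', 'nic1', 'hba0', 'hba1']
print(placement(after))   # ['nic0', 'nic1', 'nic2', 'hba0', 'hba1']
# The existing NIC order never changes, so the vmnic aliases stay put.
```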

Thanks, Steven, for sharing!

According to CSCut78943 this is fixed in 2.2(6c).

Sorry, I didn't see the comment below!

There is nothing "fixed" in 2.2(6c) other than Cisco now providing a warning when you make a change to vHBA/vNIC placement. There is nothing that can be fixed, since this is just how PCI enumeration works.

Here is the release note on what was changed for that bug report:

"When making changes to vNICs or vHBAs that will be provisioned on Cisco Virtual Interface Cards (VICs) 1340 and 1380 adapters, a warning on the placement order impact appears."

That is the fix: Cisco warns you that changing the order will impact placement on 1340/1380 cards. You still have to follow the workaround. I am 100% certain on this -- we are on 2.2(6d) with a slew of 1340 VICs now. Nothing has changed to force the ordering. You must follow the workaround in the bug report and list the vHBAs as the last two devices, on the last vCon, and on the last admin host port available.

Steven,

Just to make sure: do I put all the vNICs on vCon1 with ANY admin port and place all the vHBAs on vCon1 with admin port 2 specified?

Or

Put all the vNICs on vCon1 with ANY admin port and place all the vHBAs on vCon2 with admin port 2 specified?

Thanks.

OK... I got it fixed. I put all the vNICs on vCon 1 and specified Admin Host Port 1. I put all the vHBAs on vCon 1 and specified Admin Host Port 2. This put the vNICs in PCI slots 6, 7, 8, and 9. The vHBAs are in PCI slots 14 and 15. Now... the ESXi host I was messing with got so messed up that I had to go in and re-alias all the vmnics following the commands in this Cisco bug: CSCuv34051.
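For anyone facing the same re-aliasing step, it boils down to storing a vmnic-to-PCI-address alias with `localcli` on the host and rebooting. Here's a small helper that generates those commands; the exact `localcli` syntax is recalled from VMware KB 2091560 and the PCI addresses below are placeholders, so verify both against the KB and your own `esxcli hardware pci list` output before running anything:

```python
# Builds the re-alias commands for a desired vmnic -> PCI address map.
# NOTE: the localcli syntax is recalled from VMware KB 2091560 and the
# bus addresses are placeholders -- verify both before running anything.

def realias_commands(mapping):
    """mapping: desired {vmnic alias: PCI bus address}."""
    base = ("localcli --plugin-dir /usr/lib/vmware/esxcli/int/ "
            "deviceInternal alias store --bus-type pci")
    return [f"{base} --alias {alias} --bus-address {addr}"
            for alias, addr in sorted(mapping.items())]

wanted = {"vmnic0": "p0000:06:00.0", "vmnic1": "p0000:07:00.0"}  # placeholders
for cmd in realias_commands(wanted):
    print(cmd)
# A reboot is needed afterwards so ESXi picks up the stored aliases.
```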

What a mess.  All is good now.

Thanks. 

Hi,

Bit late to the party on this one, but I've just run into this issue. I have associated all of the vNICs with vCon1 and restarted the server, but I'm still running into this issue.

Any other steps I can take for this?

Thanks,

Richard 

Have you verified that your placement put your adapters in the right order? You set your FC adapters to the highest PCI ID?

Need some more information. Check what ESXi is showing for MAC address for every vmnic and verify they match. 
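One low-effort way to do that check is to capture `esxcli network nic list` from the host and pull out the vmnic-to-MAC mapping to compare against the MACs in the UCS service profile. The sample output below is an assumed, trimmed example of that command's format (the real output has more columns):

```python
import re

# Trimmed, assumed sample of `esxcli network nic list` output; the real
# command prints more columns (admin status, MTU, description, ...).
SAMPLE = """\
Name    PCI Device    Driver  Link  Speed  Duplex  MAC Address
vmnic0  0000:06:00.0  nenic   Up    10000  Full    00:25:b5:00:00:1a
vmnic1  0000:07:00.0  nenic   Up    10000  Full    00:25:b5:00:00:1b
"""

def nic_macs(text):
    """Map vmnic name -> MAC address from `esxcli network nic list` output."""
    macs = {}
    for line in text.splitlines():
        m = re.match(r"(vmnic\d+)\s.*?([0-9a-f]{2}(?::[0-9a-f]{2}){5})", line)
        if m:
            macs[m.group(1)] = m.group(2)
    return macs

print(nic_macs(SAMPLE))
# {'vmnic0': '00:25:b5:00:00:1a', 'vmnic1': '00:25:b5:00:00:1b'}
```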

gman66
Level 1

I just set the host port for the vHBAs to 2 and rebooted. Nothing is working; the MAC addresses are still not properly assigned to the vmnics.

This is absolutely awful!

 

 

gman66
Level 1

Does anyone have a resolution for this issue? We can't get the disjoint network working unless we have stability with the MAC addresses. Do I have to hardcode the MAC addresses to each blade??? WTF

 

This caught me out too, and this is not a great answer, but...

The solution I came up with involves manually reconfiguring the VMware vmnic to the PCI address I wanted it on, and then building host profiles in vCenter to enforce this configuration on the cluster. It also results in having to reset the host to defaults and reconfigure IP addressing.

https://kb.vmware.com/s/article/2091560


gman66
Level 1

rekdal163 - can you provide some more details about how you did this? In my config, 4 of the vNICs all see the same VLANs; only 2 vNICs are configured differently. Therefore, I only needed to swap 2 of the vmnics around, but after doing so using the KB you referenced, the results were completely different from what I attempted, and the MAC addresses were not assigned to the appropriate vmnics as planned. I have no idea why they got assigned the way they did, which is why I ask if you can share more details.

Yep, it's a mess to get sorted, and there are likely better ways to get it done...

Working from memory here.

I built a new service profile in UCS Manager with the additional vNICs.

Pick a host and remove it from the cluster in vCenter.

Remove the existing profile from your host and assign the new profile; before acknowledging the reboot to the new profile, reset the host to default. (You can reboot after too; this just saves a reboot cycle.)

Once the host is back up, reconfigure the management network.

Add it back to vCenter.

Then look at the vmnic/PCI assignments and decide what needs to be moved. Hopefully your management NICs are in a good spot and can stay where they are. Make a single change, reload, and check assignments.

In vCenter, Host > Configure > Physical adapters shows the PCI assignment for each vmnic.

Host profiles are pretty well documented. I think the thing that has to be set for the PCI assignments is "Device Alias Configuration"...

 

 

rekdal163
Level 1

Forgot to mention: I also built a vNIC/vHBA placement policy for the service profile. This was done to be sure our vHBAs were always at the top for boot from SAN. Everything else is spread as equally as possible across the four vCons.
