VIC 1340 PCI placement issues

ssumichrast
Level 1

This is not so much a post looking for assistance (maybe it will turn into that, I'm not sure). Rather, I thought I'd share what I think is happening with the VIC 1340 cards so others can avoid some headaches.

 

It looks like Cisco made some changes with the VIC 1340 card installed in B200 M4 blades. These cards now have two PCI paths, which makes them effectively appear as two adapters without actually showing up as two adapters (you won't see a second adapter listed in the server inventory, but you do now have two distinct PCI ID trees). Cisco bug ID CSCut78943 has more information on this. It appears that if you have a VIC 1340 card, you now have the ability to specify a host port (controlled on the PCI Placement screen) to tell UCS which port of the 1340 a virtual adapter should use.

Perhaps someone else could validate my thinking here (I've opened a case with TAC for validation as well), but that seems to be what's happening.

 

More details on my specific situation:

This is causing a massive headache for me now with updating templates running ESXi. We recently added a second pair of NICs to support a new disjoint layer 2 uplink. When we did that, the B200 M3s with VIC 1240 cards added the two new PCI devices at the end of my order, as PCI IDs 11 and 12, which bumped my vHBAs from 11 and 12 to 13 and 14. This had no effect on the B200 M3s; ESXi couldn't care less. It saw two new NICs and still saw the same PCI HBAs even though they had new IDs. No problem.

On the B200 M4s with the VIC 1340, however, chaos ensued. Currently my VIC 1340s place the PCI devices as 1-5 for vNICs-A, 6 for vHBA-A, 7-11 for vNICs-B, and 12 for vHBA-B. When we added the two additional vNICs to support the new disjoint layer 2 connection, though, they were inserted in the middle: I now had IDs 1-6 for vNICs, 7 for vHBA-A, 8-12 for vNICs, and 13 for vHBA-B. This caused massive chaos within ESXi: adapters that were configured for specific port groups were no longer associated with the right port groups, VMs that vMotioned to the B200 M4s lost connectivity, and all sorts of bad stuff...

What seems to make things better right now is to specify, on the PCI Placement screen, that all virtual interfaces use only admin host port 1. This makes the B200 M4s act exactly like the B200 M3s. Ideally I'd like to use both ports, but I'm not sure how I can do that and keep ESXi happy right now.

Any validation I can get, in addition to TAC's, would be helpful before we schedule a window to change this.

I really wanted to get this information out there, as until now all the blogs and posts I'm reading speak about how the half-width blades only have one vCon/PCI tree/adapter port. It appears Cisco has muddied the waters a bit by now allowing the 1340 to have multiple ports, and thus mucking with the PCI order more.

 

 

20 Replies

gman66
Level 1

@REkdal163 - thanks for this; however, the web client doesn't have an area that shows PCI assignments. You may have been looking at the old thick client that used Flash. This is where I'm stumped, because the output of the command "localcli --plugin-dir /usr/lib/vmware/esxcli/int/ deviceInternal alias list" is difficult to interpret. It doesn't show you PCI 1 through whatever; it looks like this:

pci p0000:67:00.3 vmhba1
pci p0000:67:00.2 vmnic2
pci s00000001:03.02 vmnic1
pci p0000:67:00.4 vmhba2
pci p0000:67:00.0 vmnic4
pci s00000001:03.00 vmnic0
pci p0000:67:00.1 vmnic5
pci s00000001:03.01 vmnic3
pci s00000004.00 vmhba3
pci p0000:00:11.5 vmhba0

Some entries start with "p", some with "s", and then there's the numbering. I do see some semblance of ordering, but I need some clarification.
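I can't say authoritatively what the "s" and "p" address prefixes mean, but for what it's worth, grouping the entries by that prefix and sorting each group by address makes the ordering easier to eyeball. A quick Python sketch over the exact output above (the grouping and sort are just my guess at a useful view, not an official interpretation):

```python
# Sketch: parse the "pci <address> <alias>" lines from
# "localcli ... deviceInternal alias list" and sort within each
# address-prefix group so the bus ordering is easier to see.
raw = """\
pci p0000:67:00.3 vmhba1
pci p0000:67:00.2 vmnic2
pci s00000001:03.02 vmnic1
pci p0000:67:00.4 vmhba2
pci p0000:67:00.0 vmnic4
pci s00000001:03.00 vmnic0
pci p0000:67:00.1 vmnic5
pci s00000001:03.01 vmnic3
pci s00000004.00 vmhba3
pci p0000:00:11.5 vmhba0
"""

entries = []
for line in raw.splitlines():
    _, address, alias = line.split()
    # First character of the address ("s" or "p") looks like a marker for
    # two different address styles; the rest is the address itself.
    entries.append((address[0], address[1:], alias))

ordered = []
for prefix in ("s", "p"):  # show the "s" entries first, then the "p" entries
    group = sorted(e for e in entries if e[0] == prefix)
    ordered.extend(alias for _, _, alias in group)
    for kind, address, alias in group:
        print(kind, address, "->", alias)
```

Sorting this way at least shows that within the "s" group the vmnic aliases do not follow the address order (03.00 is vmnic0 but 03.01 is vmnic3), which matches the confusion above.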

 

 

gman66
Level 1

I have stability when configuring the admin host port to 1 for the vNICs and 2 for the vHBAs, but my concern is how this affects traffic. I recognize that setting the admin host port to 1 vs. 2 affects enumeration, but does it affect IO at all? Do these settings have any impact on IO and bandwidth?

Apologies for the late reply; I was OOO for a few days.

Looks as though you have the PCI/vmnic association sorted. I just had to do some stare-and-compare work to make the associations between what the CLI and vCenter show.

Screenshot attached to show what I used...

As for IO, I have no doc indicating how B/W is allocated across the virtual ports within the VIC.

gman66
Level 1

@Rekdal - thanks for helping.

Anyone else want to help explain what impact the admin host port setting (manually defining the PCI channel within the 1340 VIC) has on throughput, if any at all?

From my perspective, the Admin Host Port has no effect on the way the data travels; it's only used to place the vNIC at a specific PCI address. Note that the vNIC template is where we choose which fabric the data flows over, and this is handled automatically by the UCS hardware.

JDMils
Level 1

I don't see any good solutions here. I've been working with B200 M5 servers and found that most of us may be using the VIC cards incorrectly. I bet you have all your vNICs spread across both the VIC1340 & VIC1380 adapters!? If so, then you have set up your servers' networking incorrectly. Let me explain.

The VIC1340 is a 40Gbps adapter.
The VIC1380 is an 80Gbps adapter.

Thus, it would be beneficial, and per Cisco specifications, to put only the management vNICs on the VIC1340 while putting the heavy-traffic vNICs (for ESXi hosts in this case) on the VIC1380.

To make things more complicated, you then consider using NSX-T. Each cluster needs a Host Transport Node Profile defined in NSX-T where you specify which ESXi vmnic is to be the NVDS uplink. I chose vmnic4 & vmnic5 as my ESXi hosts have management on vmnic0 & vmnic1, ESXi port groups on vmnic2 & vmnic3 and thus NVDS on vmnic4 & vmnic5.

Now, to get this ordering recognised by ESXi, you should note the way ESXi discovers vNICs, which is like this:

Scan Adapter #1.
- Scan Admin Host Port #1.
  - Enumerate vNICs by (Desired) Order # sequence.
- Scan Admin Host Port #2.
  - Enumerate vNICs by (Desired) Order # sequence.

Scan Adapter #2.
- Scan Admin Host Port #1.
  - Enumerate vNICs by (Desired) Order # sequence.
- Scan Admin Host Port #2.
  - Enumerate vNICs by (Desired) Order # sequence.

So, I would set my Service Profile Template vNIC Placement like this:

vnic-mgt-a on vCon1/adapter 1 on Admin Host Port 1 with Order 1.
vnic-mgt-b on vCon1/adapter 1 on Admin Host Port 2 with Order 2.
vnic-esxi-a on vCon2/adapter 2 on Admin Host Port 1 with Order 3.
vnic-esxi-b on vCon2/adapter 2 on Admin Host Port 1 with Order 4.
vnic-nvds-a on vCon2/adapter 2 on Admin Host Port 2 with Order 5.
vnic-nvds-b on vCon2/adapter 2 on Admin Host Port 2 with Order 6.

See, ESXi scans adapter #1 first. It then scans adapter 1/Admin Host Port #1; if more than one vNIC is found there, it arranges the vNICs by the Order set. ESXi then scans adapter 1/Admin Host Port #2 and does the same. This places my management vNICs on vmnic0 & vmnic1.

ESXi then scans adapter #2 in the same way thus giving me, in the order I want, ESXi port groups on vmnic2 & vmnic3 and NVDS uplinks on vmnic4 & vmnic5.

This setup ensures low-volume data is sent to the low bandwidth VIC card and high-volume data is sent to the high-bandwidth VIC card. It also ensures a static vmnic placement in the ESXi OS.
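The scan sequence described above can be sketched as a toy model: sort the placements by (adapter, admin host port, desired order) and number the results vmnic0, vmnic1, and so on. The vNIC names and placements mirror the Service Profile example given; the model is just an illustration of the described behaviour, not how ESXi is actually implemented.

```python
# Toy model of the described enumeration: ESXi walks adapter ->
# admin host port -> desired order, assigning vmnic numbers in sequence.
placements = [
    # (vnic_name, adapter, admin_host_port, desired_order)
    ("vnic-mgt-a",  1, 1, 1),
    ("vnic-mgt-b",  1, 2, 2),
    ("vnic-esxi-a", 2, 1, 3),
    ("vnic-esxi-b", 2, 1, 4),
    ("vnic-nvds-a", 2, 2, 5),
    ("vnic-nvds-b", 2, 2, 6),
]

# Sort by (adapter, host port, order), i.e. the scan sequence above,
# then hand out vmnic numbers in that order.
scan_order = sorted(placements, key=lambda p: (p[1], p[2], p[3]))
vmnics = {f"vmnic{i}": name for i, (name, *_rest) in enumerate(scan_order)}
for vmnic, vnic in vmnics.items():
    print(vmnic, "->", vnic)
```

Running this yields management on vmnic0/vmnic1, ESXi port groups on vmnic2/vmnic3, and NVDS on vmnic4/vmnic5, matching the placement goal described above.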
