I got caught in a weird situation after upgrading VMware ESXi 5.0 to 5.1 on a UCS C200M2. Before the upgrade I had the server working perfectly with configured static Adapter-FEX vNICs and floating/dynamic VM-FEX interfaces in DirectPath I/O mode for the VMs. The initial deployment was done according to the Cisco N5K VM-FEX operation guide for NX-OS 5.1(3)N1(1). The setup was vCenter 5.1, 2x N5K, and 2x C200M2 with ESXi 5.0, integrated so that the N5K DVS was brought into vCenter and the hosts could be added to the DVS. One of the C200M2s (still on ESXi 5.0) has no problem operating VM-FEX and Adapter-FEX interfaces, with part of its uplinks on a VSS and the other part on the DVS (in fact there are no VMs on the VSS, so all the workloads use the DVS).
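For context, the N5K side of the deployment follows the guide's pattern of an SVS connection to vCenter plus vethernet port-profiles exported to the DVS. A rough sketch of what that config looks like (the names, VLAN, and vCenter address below are placeholders, not my real values):

install feature-set virtualization
feature-set virtualization
!
! SVS connection: registers the N5K DVS with vCenter
svs connection N5KDVS-1
  protocol vmware-vim
  remote ip address 192.0.2.10 port 80 vrf management
  dvs-name N5KDVS-1
  vmware dc-name DC1
  connect
!
! vethernet port-profile exported to vCenter as a DVS port group
port-profile type vethernet VM-DATA
  switchport mode access
  switchport access vlan 100
  state enabled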
But the upgraded C200M2 cannot operate with the DVS correctly: any vmkernel ports, VM networks, and VMs connected to the DVS lose network connectivity.
The attached picture illustrates the connectivity status diagram (the uplinks/vEths are blocked).
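For anyone checking the same thing, the blocked state is visible from the host with the standard VEM tooling (just the commands, nothing here is specific to my setup):

~ # vemcmd show card         # control-plane state between the VEM and the switch
~ # vemcmd show port         # per-port admin/link/state; blocked ports show up here
~ # vemcmd show port vlans   # VLAN programming actually applied to each port
~ # esxcfg-vswitch -l        # how ESXi maps vmnics and ports to vSwitch0 and the DVS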
The VEM is fresh and running:
~ # vem status -v
Date Sat Sep 28 18:47:14 PDT 2013
VEM modules are loaded
Switch Name    Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0       128         6           128               1500    vmnic0,vmnic1

DVS Name       Num Ports   Used Ports  Configured Ports  MTU     Uplinks
N5KDVS-1       256         14          256               1500    vmnic3,vmnic2
VEM Agent (vemdpa) is running
~ # esxcli software vib list | grep cisco
cisco-vem-v162-esx 220.127.116.11.2.1a.0-3.1.1 Cisco PartnerSupported 2013-11-11
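Since the ESXi 5.0 and 5.1 VEM VIBs are not interchangeable, one thing worth verifying is that the installed VIB really targets the running build (standard esxcli/VEM commands, output omitted):

~ # vmware -v                                      # running ESXi version and build number
~ # esxcli software vib get -n cisco-vem-v162-esx  # full VIB details, including dependencies
~ # vem version                                    # version the loaded VEM module reports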
Does anybody have an idea why it is going wrong?
Of course they exist. There is an unpatched server with ESXi 5.0 sitting next to the upgraded one, and it works perfectly.
Finally, I downgraded the server to ESXi 5.0 again and the VM-FEX functionality was restored.