
Nexus 1000v: 2 of 4 VEMs connecting

VirtualEB
Level 1

Team,

I have a UCS deployment with four ESXi 5.0 Update 1 hosts, all built from the same service profile template. We have provisioned 8 NICs per host, two of which are dedicated to the 1000v. The UCS uplinks are port-channeled to a VSS core port-channel, connected and trunking all VLANs (including the control and packet VLANs, 901 and 902 respectively):

 

CORE-R1#sh int statu | i UCS
Te1/1/4      UCS-FI-A-ETH1/1    connected    trunk        full    10G 10Gbase-SR
Te1/1/5      UCS-FI-B-ETH1/1    connected    trunk        full    10G 10Gbase-SR
Te1/1/6      UCS-FI-A-ETH1/3 vM connected    1000         full    10G 10Gbase-SR
Gi1/8/44     UCS FIBER CONNECT  connected    101        a-full a-1000 10/100/1000BaseT
Po17         UCS-FI-A           connected    trunk      a-full    10G
Po18         UCS-FI-B           connected    trunk      a-full    10G
Te2/1/4      UCS-FI-A-ETH1/2    connected    trunk        full    10G 10Gbase-SR
Te2/1/5      UCS-FI-B-ETH1/2    connected    trunk        full    10G 10Gbase-SR
Te2/1/6      UCS-FI-B-ETH1/3 vM connected    1000         full    10G 10Gbase-SR
Gi2/8/44     UCS FIBER CONNECT  connected    101        a-full a-1000 10/100/1000BaseT

CORE-R1#sh int trunk | i Po17|Po18
Port                Mode         Encapsulation  Status        Native vlan
Po17                on           802.1q         trunking      1
Po18                on           802.1q         trunking      1

Port                Vlans allowed on trunk
Po17                1-4094
Po18                1-4094

Port                Vlans allowed and active in management domain
Po17                1,10-11,16,69,89-99,101,190-192,194-195,197-203,888,901-903,999-1000
Po18                1,10-11,16,69,89-99,101,190-192,194-195,197-203,888,901-903,999-1000

Port                Vlans in spanning tree forwarding state and not pruned
Po17                1,10-11,16,69,89-99,101,190-192,194-195,197-203,888,901-903,999-1000
Po18                1,10-11,16,69,89-99,101,190-192,194-195,197-203,888,901-903,999-1000

What is occurring is that two of the four VEM heartbeats are not reaching the VSM, yet we've captured the initial communication between the VEM and VSM before it goes offline:

1000v-VSM01# sho mod
2012 Aug 24 22:06:30 1000v-VSM01 %VEM_MGR-2-VEM_MGR_DETECTED: Host esx01 detected as module 6
2012 Aug 24 22:06:30 1000v-VSM01 %VEM_MGR-2-MOD_ONLINE: Module 6 is online

Mod  Ports  Module-Type                      Model          Status
1    0      Virtual Supervisor Module        Nexus 1000V    active
4    248    Virtual Ethernet Module          NA             ok
5    248    Virtual Ethernet Module          NA             ok
6    248    Virtual Ethernet Module          NA             ok

Mod  Sw                 Hw
1    4.2(1)SV1(5.1a)    0.0
4    4.2(1)SV1(5.1a)    VMware ESXi 5.0.0 Releasebuild-623860 (3.0)
5    4.2(1)SV1(5.1a)    VMware ESXi 5.0.0 Releasebuild-623860 (3.0)
6    4.2(1)SV1(5.1a)    VMware ESXi 5.0.0 Releasebuild-623860 (3.0)

Mod  MAC-Address(es)                          Serial-Num
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80   NA
5    02-00-0c-00-05-00 to 02-00-0c-00-05-60   NA
6    02-00-0c-00-06-00 to 02-00-0c-00-06-80   NA

Mod  Server-IP       Server-UUID                           Server-Name
1    xx.xx.100.101   NA                                    NA
4    xx.xx.100.93    9bb84276-a385-11e1-0000-000000000005  esx04.xxxxxx.com
5    xx.xx.100.92    9bb84276-a385-11e1-0000-000000000006  esx03.xxxxxx.com
6    xx.xx.100.90    9bb84276-a385-11e1-0000-000000000008  esx01

1000v-VSM01# 2012 Aug 24 22:06:37 1000v-VSM01 %VEM_MGR-2-VEM_MGR_REMOVE_NO_HB: Removing VEM 6 (heartbeats lost)
2012 Aug 24 22:06:38 HERC-1000v-VSM01 %VEM_MGR-2-MOD_OFFLINE: Module 6 is offline
1000v-VSM01#
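For reference, the relevant VSM-side configuration looks roughly like this. This is a trimmed sketch, not a paste of our running config; the channel-group mode and exact allowed-VLAN list are assumptions, but the domain ID, control/packet VLANs, L2 mode, and port-profile name match what's shown elsewhere in this post:

svs-domain
  domain id 1
  control vlan 901
  packet vlan 902
  svs mode L2
!
port-profile type ethernet 1000v-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 90,92,94-96,98-99,194,200-201,901-902
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 901-902
  state enabled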

The "vemcmd show card" output on the missing VEMs shows populated card slots and the correct VSM MACs, and the card UUID matches the UCS service profile UUID:

ESX01

~ # vemcmd show card

Card UUID type 2: 9bb84276-a385-11e1-0000-000000000008

Card name: esx01

Switch name: 1000v-VSM01

Switch alias: DvsPortset-0

Switch uuid: 44 62 05 50 1a e5 91 2b-94 45 7a 25 cd 07 25 56

Card domain: 1

Card slot: 6

VEM Tunnel Mode: L2 Mode

VEM Control (AIPC) MAC: 00:02:3d:10:01:05

VEM Packet (Inband) MAC: 00:02:3d:20:01:05

VEM Control Agent (DPA) MAC: 00:02:3d:40:01:05

VEM SPAN MAC: 00:02:3d:30:01:05

Primary VSM MAC : 00:50:56:85:31:85

Primary VSM PKT MAC : 00:50:56:85:31:87

Primary VSM MGMT MAC : 00:50:56:85:31:86

Standby VSM CTRL MAC : 00:50:56:85:31:82

Management IPv4 address: xx.xx.100.90

Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000

Secondary VSM MAC : 00:00:00:00:00:00

Secondary L3 Control IPv4 address: 0.0.0.0

Upgrade : Default

Max physical ports: 32

Max virtual ports: 216

Card control VLAN: 901

Card packet VLAN: 902

Card Headless Mode : Yes

       Processors: 24

Processor Cores: 12

Processor Sockets: 2

Kernel Memory:   100598580

Port link-up delay: 5s

Global UUFB: DISABLED

Heartbeat Set: False

PC LB Algo: source-mac

Datapath portset event in progress : no

Licensed: No

~ #

~ # vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
   21     Eth6/5     UP   UP    FWD     305     4    vmnic4
   22     Eth6/6     UP   UP    FWD     305     5    vmnic5
  305        Po4     UP   UP    FWD       0
~ #

~ # vemcmd show trunk
Trunk port 6 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(3970) cbl 1, vlan(3969) cbl 1, vlan(3968) cbl 1, vlan(3971) cbl 1, vlan(901) cbl 1, vlan(902) cbl 1, vlan(90) cbl 1, vlan(92) cbl 1, vlan(94) cbl 1, vlan(95) cbl 1, vlan(96) cbl 1, vlan(98) cbl 1, vlan(99) cbl 1, vlan(194) cbl 1, vlan(200) cbl 1, vlan(201) cbl 1,
Trunk port 16 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(3970) cbl 1, vlan(3969) cbl 1, vlan(3968) cbl 1, vlan(3971) cbl 1, vlan(901) cbl 1, vlan(902) cbl 1, vlan(90) cbl 1, vlan(92) cbl 1, vlan(94) cbl 1, vlan(95) cbl 1, vlan(96) cbl 1, vlan(98) cbl 1, vlan(99) cbl 1, vlan(194) cbl 1, vlan(200) cbl 1, vlan(201) cbl 1,
Trunk port 21 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(901) cbl 1, vlan(902) cbl 1, vlan(90) cbl 1, vlan(92) cbl 1, vlan(94) cbl 1, vlan(95) cbl 1, vlan(96) cbl 1, vlan(98) cbl 1, vlan(99) cbl 1, vlan(194) cbl 1, vlan(200) cbl 1, vlan(201) cbl 1,
Trunk port 22 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(901) cbl 1, vlan(902) cbl 1, vlan(90) cbl 1, vlan(92) cbl 1, vlan(94) cbl 1, vlan(95) cbl 1, vlan(96) cbl 1, vlan(98) cbl 1, vlan(99) cbl 1, vlan(194) cbl 1, vlan(200) cbl 1, vlan(201) cbl 1,
Trunk port 305 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(901) cbl 1, vlan(902) cbl 1, vlan(90) cbl 1, vlan(92) cbl 1, vlan(94) cbl 1, vlan(95) cbl 1, vlan(96) cbl 1, vlan(98) cbl 1, vlan(99) cbl 1, vlan(194) cbl 1, vlan(200) cbl 1, vlan(201) cbl 1,
~ #

ESX02

~ # vemcmd show card

Card UUID type 2: 9bb84276-a385-11e1-0000-000000000007

Card name: esx02

Switch name: 1000v-VSM01

Switch alias: DvsPortset-0

Switch uuid: 44 62 05 50 1a e5 91 2b-94 45 7a 25 cd 07 25 56

Card domain: 1

Card slot: 3

VEM Tunnel Mode: L2 Mode

VEM Control (AIPC) MAC: 00:02:3d:10:01:02

VEM Packet (Inband) MAC: 00:02:3d:20:01:02

VEM Control Agent (DPA) MAC: 00:02:3d:40:01:02

VEM SPAN MAC: 00:02:3d:30:01:02

Primary VSM MAC : 00:50:56:85:31:85

Primary VSM PKT MAC : 00:50:56:85:31:87

Primary VSM MGMT MAC : 00:50:56:85:31:86

Standby VSM CTRL MAC : 00:50:56:85:31:82

Management IPv4 address: xx.xx.100.91

Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000

Secondary VSM MAC : 00:00:00:00:00:00

Secondary L3 Control IPv4 address: 0.0.0.0

Upgrade : Default

Max physical ports: 32

Max virtual ports: 216

Card control VLAN: 901

Card packet VLAN: 902

Card Headless Mode : Yes

       Processors: 24

Processor Cores: 12

Processor Sockets: 2

Kernel Memory:   100598580

Port link-up delay: 5s

Global UUFB: DISABLED

Heartbeat Set: False

PC LB Algo: source-mac

Datapath portset event in progress : no

Licensed: No

~ #

~ # vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
   21                UP   UP   F/B*       0          vmnic4
   22                UP   UP   F/B*       0          vmnic5

* F/B: Port is BLOCKED on some of the vlans.
  Please run "vemcmd show port vlans" to see the details.
~ #

~ # vemcmd show trunk
Trunk port 6 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(3970) cbl 1, vlan(3969) cbl 1, vlan(3968) cbl 1, vlan(3971) cbl 1, vlan(901) cbl 1, vlan(902) cbl 1,
Trunk port 16 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(3970) cbl 1, vlan(3969) cbl 1, vlan(3968) cbl 1, vlan(3971) cbl 1, vlan(901) cbl 1, vlan(902) cbl 1,
Trunk port 21 native_vlan 1 CBL 0
vlan(901) cbl 1, vlan(902) cbl 1,
Trunk port 22 native_vlan 1 CBL 0
vlan(901) cbl 1, vlan(902) cbl 1,
~ #

We have verified that the vCenter dVS uplinks are active and on the correct 1000v port group. We've even removed the hosts from the dVS, removed the VEM component, and added the hosts back to the dVS, which triggered a fresh install of the VEM, all to no avail. The configuration looks solid end-to-end, but something is being missed. Why would the other two VEMs communicate just fine over the same network? And why do we see the missing VEMs recognized by the VSM, only to disappear?

As a side note, I noticed that the startup config of the VSM shows four Ethernet interfaces that inherit the 1000v-uplink port-profile, but they are not present in the running config. Clearly these are the uplink interfaces for the missing VEMs; they just don't seem to stay active. Here's some additional info, but let me know if I can provide anything else:

1000v-VSM01# sho svs conn

connection vcenter:

  ip address: xx.xx.99.175

  remote port: 80

  protocol: vmware-vim https

  certificate: default

  datacenter name: xxxx

  admin:

  max-ports: 8192

  DVS uuid: 44 62 05 50 1a e5 91 2b-94 45 7a 25 cd 07 25 56

  config status: Enabled

  operational status: Connected

  sync status: Complete

  version: VMware vCenter Server 5.0.0 build-455964

  vc-uuid: 54FA1653-9E82-440B-8F0A-64AFE1B01D90
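Regarding the side note above about the Ethernet interfaces in the startup config, this is roughly how they can be checked on the VSM (a sketch; the Eth6/x numbering corresponds to the module slot from "show module", so 6/5 would map to vmnic4 on esx01):

1000v-VSM01# show port-profile name 1000v-uplink
1000v-VSM01# show running-config interface ethernet 6/5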

esx01

~ # vem restart

stopDpa

VEM SwISCSI PID is 5425

Killing vem-swiscsi PID 5425

Warn: DPA running host/vim/vimuser/cisco/vem/vemdpa.5423

Warn: DPA running host/vim/vimuser/cisco/vem/vemdpa.5423

watchdog-vemdpa: Terminating watchdog process with PID 5416

grep: 5424: No such file or directory

startDpa

~ # vem status
VEM modules are loaded

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         128         4           128               1500    vmnic0,vmnic1
vSwitch1         128         1           128               1500
vSwitch2         128         5           128               9000    vmnic6,vmnic7
vSwitch3         128         4           128               9000    vmnic2,vmnic3

DVS Name         Num Ports   Used Ports  Configured Ports  MTU     Uplinks
1000v-VSM01      256         13          256               1500    vmnic5,vmnic4

VEM Agent (vemdpa) is running

esx02

~ # vem restart

stopDpa

VEM SwISCSI PID is 5517

Killing vem-swiscsi PID 5517

Warn: DPA running host/vim/vimuser/cisco/vem/vemdpa.5505

Warn: DPA running host/vim/vimuser/cisco/vem/vemdpa.5505

watchdog-vemdpa: Terminating watchdog process with PID 5498

grep: 5506: No such file or directory

startDpa

~ # vem status
VEM modules are loaded

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         128         4           128               1500    vmnic0,vmnic1
vSwitch1         128         1           128               1500
vSwitch2         128         5           128               9000    vmnic6,vmnic7
vSwitch3         128         4           128               9000    vmnic2,vmnic3

DVS Name         Num Ports   Used Ports  Configured Ports  MTU     Uplinks
1000v-VSM01      256         13          256               1500    vmnic5,vmnic4

VEM Agent (vemdpa) is running

Thanks in advance - any help getting these connected would be appreciated!

3 Replies

Robert Burns
Cisco Employee

Your config looks OK.  What I'd like to know is whether the "working" and "flapping" VEMs are reaching the VSM through the same path.

Assuming your two DVS uplinks go to Fabric A and Fabric B respectively, check the following:

1. Find out which host has the primary VSM. 

     show int virtual

2. Next, find out which uplink path the VSM's control interface uses (Fabric A or B), where X is the VEM module number from the previous command:

     module vem X execute vemcmd show port 

You'll need to find out which vmnic the VSM's control interface LTL maps to. 

3. Then, using CDP in vCenter, find out which Fabric Interconnect that vmnic connects to.

4. Next, do the same for your "problem" ESX hosts.  If they're not showing up as modules on the VSM, you'll need to open KVM/SSH and execute the following from the CLI:

     vemcmd show port-old

Look for LTL 10 (control interface of the VEM) and look up which vmnic LTL 10 maps to. 

Ex.

# vemcmd show port-old
  LTL    IfIndex   Vlan/    Bndl  SG_ID Pinned_SGID  Type  Admin State      CBL Mode   Name
                   SegId
    6          0      1 T     0     32           3  VIRT     UP    UP    1  Trunk vns
    8          0   3969       0     32          32  VIRT     UP    UP    1 Access
    9          0   3969       0     32          32  VIRT     UP    UP    1 Access
   10          0    177       0     32           4  VIRT     UP    UP    1 Access   << This VEM Control Interface maps to Pinned_SGID 4 (which happens to be vmnic4)
   11          0   3968       0     32          32  VIRT     UP    UP    1 Access
   12          0    177       0     32           4  VIRT     UP    UP    1 Access
   13          0      1       0     32          32  VIRT     UP    UP    0 Access
   14          0   3971       0     32          32  VIRT     UP    UP    1 Access
   15          0   3971       0     32          32  VIRT     UP    UP    1 Access
   16          0      1 T     0     32          32  VIRT     UP    UP    1  Trunk ar
   20   2500c0c0      1 T   305      3          32  PHYS     UP    UP    1  Trunk vmnic3
   21   2500c100      1 T   305      4          32  PHYS     UP    UP    1  Trunk vmnic4   << vmnic4 = SG 4
   49          0      1       0     32          32  VIRT   DOWN    UP    0 Access VM-Web-1 ethernet0
   50   1c000060    177       0     32           4  VIRT     UP    UP    1 Access n1000v-sec.eth2
   51   1c000050    177       0     32           4  VIRT     UP    UP    1 Access n1000v-sec.eth1
   52   1c000040    177       0     32           4  VIRT     UP    UP    1 Access n1000v-sec.eth0
   53   1c000110      1       0     32          32  VIRT   DOWN    UP    0 Access VM-App-1.eth0
   54   1c0000e0      1       0     32          32  VIRT   DOWN    UP    0 Access VSG-1 ethernet0
   55          0      1       0     32          32  VIRT   DOWN    UP    0 Access DCNM ethernet0
   56   1c000000    177       0     32           4  VIRT     UP    UP    1 Access vCenter ethernet0
   57   1c0000d0      1       0     32          32  VIRT   DOWN    UP    0 Access VSG-1.eth2
   58   1c0000c0    177       0     32           3  VIRT     UP    UP    1 Access VSG-1.eth1
   59   1c000120      1       0     32          32  VIRT   DOWN    UP    0 Access VM-App-2.eth0
   60   1c000030    177       0     32           3  VIRT     UP    UP    1 Access n1000v-pri.eth2
   61   1c000020    177       0     32           3  VIRT     UP    UP    1 Access n1000v-pri.eth1
   62   1c000010    177       0     32           3  VIRT     UP    UP    1 Access n1000v-pri.eth0   << My VSM's Control Interface is using SG_ID 3
  305   16000001      1 T     0     32          32  CHAN     UP    UP    1  Trunk

**In my example above, even though the VSM is hosted on that VEM, its control traffic has to traverse the northbound network (north of UCS) and come back down to reach it.

My hunch is that the "good" hosts are UCS blades where the DVS uplink carrying the VEM's control interface and the uplink carrying the VSM both go to the same Fabric Interconnect.  That would be all local switching.  If the VEM's control interface and the VSM are taking different paths, then the traffic goes upstream of UCS to your VSS and back down the other side.

Let's first isolate whether there's a correlation with the uplink paths as I suspect, then we'll go from there.
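One quick way to confirm the path from the upstream side (just a sketch; substitute the actual MACs from your "vemcmd show card" output) is to check on the VSS core which port-channel the VSM control MAC and a problem VEM's control (AIPC) MAC are learned on:

CORE-R1#show mac address-table address 0050.5685.3185
CORE-R1#show mac address-table address 0002.3d10.0105

If they show up behind different port-channels (Po17 vs. Po18), the control traffic is hairpinning through the VSS rather than switching locally within a single Fabric Interconnect.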

Regards,

Robert

Hi Robert,

Thanks for the reply. Here's the info you requested:

1000v-VSM01# sho int virtual

-------------------------------------------------------------------------------
Port        Adapter        Owner                    Mod Host
-------------------------------------------------------------------------------
Veth1       Net Adapter 1  1000v-VSM01-1            4   esx04.xxxx.c
Veth2       Net Adapter 1  1000v-VSM01-2            4   esx04.xxxx.c
Veth3       Net Adapter 3  1000v-VSM01-2            4   esx04.xxxx.c
Veth4       Net Adapter 3  1000v-VSM01-1            4   esx04.xxxx.c
Veth5       Net Adapter 1  xxxx1                    5   esx03.xxxx.c
Veth7       Net Adapter 1  xxxxVM01                 4   esx04.xxxx.c

1000v-VSM01# mod vem 4 execute vemcmd show port
                    ^
% Invalid command at '^' marker.
1000v-VSM01#
1000v-VSM01# mod vem 4 execute vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port  Type
   21     Eth4/5     UP   UP    FWD     305     4    vmnic4
   22     Eth4/6     UP   UP    FWD     305     5    vmnic5
   49      Veth3     UP   UP    FWD       0     4  1000v-VSM01-2.eth2
   50      Veth2     UP   UP    FWD       0     5  1000v-VSM01-2.eth0
   51      Veth7     UP   UP    FWD       0     4  xxxxVM01 ethernet0
   52      Veth4     UP   UP    FWD       0     4  1000v-VSM01-1.eth2
   53      Veth1     UP   UP    FWD       0     5  1000v-VSM01-1.eth0
  305        Po2     UP   UP    FWD       0
* F/B: Port is BLOCKED on some of the vlans.
  Please run "vemcmd show port vlans" to see the details.

esx01

~ # vemcmd show port-old
   LTL    IfIndex   Vlan/    Bndl  SG_ID Pinned_SGID  Type  Admin State      CBL Mode   Name
                   SegId
     6          0      1 T     0     32          32  VIRT     UP    UP    1  Trunk vns
     8          0   3969       0     32          32  VIRT     UP    UP    1 Access
     9          0   3969       0     32          32  VIRT     UP    UP    1 Access
    10          0    901       0     32           4  VIRT     UP    UP    1 Access
    11          0   3968       0     32          32  VIRT     UP    UP    1 Access
    12          0    902       0     32           5  VIRT     UP    UP    1 Access
    13          0      1       0     32          32  VIRT     UP    UP    0 Access
    14          0   3971       0     32          32  VIRT     UP    UP    1 Access
    15          0   3971       0     32          32  VIRT     UP    UP    1 Access
    16          0      1 T     0     32          32  VIRT     UP    UP    1  Trunk ar
    21   25014100      1 T   305      4          32  PHYS     UP    UP    1  Trunk vmnic4
    22   25014140      1 T   305      5          32  PHYS     UP    UP    1  Trunk vmnic5
   305   16000003      1 T     0     32          32  CHAN     UP    UP    1  Trunk
~ #

esx02

~ # vemcmd show port-old
   LTL    IfIndex   Vlan/    Bndl  SG_ID Pinned_SGID  Type  Admin State      CBL Mode   Name
                   SegId
     6          0      1 T     0     32          32  VIRT     UP    UP    1  Trunk vns
     8          0   3969       0     32          32  VIRT     UP    UP    1 Access
     9          0   3969       0     32          32  VIRT     UP    UP    1 Access
    10          0    901       0     32          32  VIRT     UP    UP    1 Access
    11          0   3968       0     32          32  VIRT     UP    UP    1 Access
    12          0    902       0     32          32  VIRT     UP    UP    1 Access
    13          0      1       0     32          32  VIRT     UP    UP    0 Access
    14          0   3971       0     32          32  VIRT     UP    UP    1 Access
    15          0   3971       0     32          32  VIRT     UP    UP    1 Access
    16          0      1 T     0     32          32  VIRT     UP    UP    1  Trunk ar
    21          0      1 T     0     32          32  PHYS     UP    UP    0  Trunk vmnic4
    22          0      1 T     0     32          32  PHYS     UP    UP    0  Trunk vmnic5

The UCS configuration pins all even-numbered vNICs to Fabric A and all odd-numbered vNICs to Fabric B.  The port-old output on ESX01 shows control (VLAN 901) pinned to vmnic4 and packet (VLAN 902) pinned to vmnic5; ESX02, however, doesn't show either vmnic pinned on those LTLs even though the vmnics are trunked.  I validated that all UCS VIFs for all service profiles are up and show no errors.  Here's the CDP info from ESXi:

esx01

   (vim.host.PhysicalNic.NetworkHint) {
       dynamicType = ,
       device = "vmnic4",
       connectedSwitchPort = (vim.host.PhysicalNic.CdpInfo) {
          dynamicType = ,
          cdpVersion = 2,
          timeout = 0,
          ttl = 162,
          samples = 4399,
          devId = "UCS01-A(SSI15510PYV)",
          address = "xx.xx.100.111",
          portId = "Vethernet801",
          deviceCapability = (vim.host.PhysicalNic.CdpDeviceCapability) {
             dynamicType = ,
             router = false,
             transparentBridge = false,
             sourceRouteBridge = false,
             networkSwitch = true,
             host = false,
             igmpEnabled = true,
             repeater = false,
          },
          softwareVersion = "Cisco Nexus Operating System (N",
          hardwarePlatform = "UCS-FI-6248UP",
          ipPrefix = "0.0.0.0",
          ipPrefixLen = 0,
          vlan = 1,
          fullDuplex = false,
          mtu = 1500,
          systemName = "UCS01-A",
          systemOID = "1.3.6.1.4.1.9.12.3.1.3.1062",
          mgmtAddr = "xx.xx.100.111",
          location = "snmplocation",
       },
       lldpInfo = (vim.host.PhysicalNic.LldpInfo) null,
    },
    (vim.host.PhysicalNic.NetworkHint) {
       dynamicType = ,
       device = "vmnic5",
       connectedSwitchPort = (vim.host.PhysicalNic.CdpInfo) {
          dynamicType = ,
          cdpVersion = 2,
          timeout = 0,
          ttl = 165,
          samples = 4399,
          devId = "UCS01-B(SSI15450F17)",
          address = "xx.xx.100.112",
          portId = "Vethernet802",
          deviceCapability = (vim.host.PhysicalNic.CdpDeviceCapability) {
             dynamicType = ,
             router = false,
             transparentBridge = false,
             sourceRouteBridge = false,
             networkSwitch = true,
             host = false,
             igmpEnabled = true,
             repeater = false,
          },
          softwareVersion = "Cisco Nexus Operating System (N",
          hardwarePlatform = "UCS-FI-6248UP",
          ipPrefix = "0.0.0.0",
          ipPrefixLen = 0,
          vlan = 1,
          fullDuplex = false,
          mtu = 1500,
          systemName = "UCS01-B",
          systemOID = "1.3.6.1.4.1.9.12.3.1.3.1062",
          mgmtAddr = "xx.xx.100.112",
          location = "snmplocation",
       },
       lldpInfo = (vim.host.PhysicalNic.LldpInfo) null,
    },

esx02

   (vim.host.PhysicalNic.NetworkHint) {
       dynamicType = ,
       device = "vmnic4",
       connectedSwitchPort = (vim.host.PhysicalNic.CdpInfo) {
          dynamicType = ,
          cdpVersion = 2,
          timeout = 0,
          ttl = 128,
          samples = 4399,
          devId = "UCS01-A(SSI15510PYV)",
          address = "xx.xx.100.111",
          portId = "Vethernet705",
          deviceCapability = (vim.host.PhysicalNic.CdpDeviceCapability) {
             dynamicType = ,
             router = false,
             transparentBridge = false,
             sourceRouteBridge = false,
             networkSwitch = true,
             host = false,
             igmpEnabled = true,
             repeater = false,
          },
          softwareVersion = "Cisco Nexus Operating System (N",
          hardwarePlatform = "UCS-FI-6248UP",
          ipPrefix = "0.0.0.0",
          ipPrefixLen = 0,
          vlan = 1,
          fullDuplex = false,
          mtu = 1500,
          systemName = "UCS01-A",
          systemOID = "1.3.6.1.4.1.9.12.3.1.3.1062",
          mgmtAddr = "xx.xx.100.111",
          location = "snmplocation",
       },
       lldpInfo = (vim.host.PhysicalNic.LldpInfo) null,
    },
    (vim.host.PhysicalNic.NetworkHint) {
       dynamicType = ,
       device = "vmnic5",
       connectedSwitchPort = (vim.host.PhysicalNic.CdpInfo) {
          dynamicType = ,
          cdpVersion = 2,
          timeout = 0,
          ttl = 133,
          samples = 4399,
          devId = "UCS01-B(SSI15450F17)",
          address = "xx.xx.100.112",
          portId = "Vethernet706",
          deviceCapability = (vim.host.PhysicalNic.CdpDeviceCapability) {
             dynamicType = ,
             router = false,
             transparentBridge = false,
             sourceRouteBridge = false,
             networkSwitch = true,
             host = false,
             igmpEnabled = true,
             repeater = false,
          },
          softwareVersion = "Cisco Nexus Operating System (N",
          hardwarePlatform = "UCS-FI-6248UP",
          ipPrefix = "0.0.0.0",
          ipPrefixLen = 0,
          vlan = 1,
          fullDuplex = false,
          mtu = 1500,
          systemName = "UCS01-B",
          systemOID = "1.3.6.1.4.1.9.12.3.1.3.1062",
          mgmtAddr = "xx.xx.100.112",
          location = "snmplocation",
       },
       lldpInfo = (vim.host.PhysicalNic.LldpInfo) null,
    },

Thanks!

All,

It turns out that the UCS VLANs needed to be defined in the LAN Uplinks Manager because we have two separate uplink pin groups per fabric. Having the VLANs defined on the vNICs, with the vNICs associated to uplink paths, wasn't enough.  The connectivity inconsistencies we witnessed (the VEMs' initial connection to the VSM and MAC addresses not being learned in UCS or the core) were most likely caused by ARP requests being sent over the wrong broadcast link and thus being discarded.  And while half the VEMs were registered and working with the VSM, they could eventually have stopped working if their ARP requests had been sent up the wrong path.
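For anyone hitting the same issue, one way to verify the fix from the fabric interconnect side (a rough sketch; 00:02:3d is the VEM control/AIPC MAC range shown in the "vemcmd show card" output above) is to confirm the control VLAN exists on the FI and that the VEM control MACs are now being learned on it:

UCS01-A# connect nxos
UCS01-A(nxos)# show vlan id 901
UCS01-A(nxos)# show mac address-table vlan 901 | include 0002.3d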

Thank you, Robert, for your continued contributions to this forum!  Your help is much appreciated!
