
ACI - VMM integration

Hi All,

I'm having some difficulty understanding why we need to turn on CDP/LLDP on the VDS.

What I understand is that the leaf can detect the ESXi host running CDP/LLDP, and the ESXi host can do the same to detect the leaf switch. Once this is done, vCenter sends the information back to the APIC, and the APIC can work out which leaf interfaces the hosts are connected to.

So what is the importance of turning on LLDP/CDP at the VDS layer?

Sorry, I'm an ACI newbie.

Thanks for your help.


8 Replies

lpember
Level 1

Hi Munesh,

The reason you need to turn on LLDP/CDP at the VDS layer is that the LLDP/CDP information needs to pass through the ESXi host's vmnics to the actual ESXi host OS so that vCenter can see it. If the NIC itself were sending, receiving, and processing all of the LLDP/CDP frames, vCenter would never know about them. So it's important that the vmnics on that host have LLDP/CDP turned on at the DVS level. Here's a rough diagram of the path:

leaf -> leaf interface -> cable -> server NIC -> ESXi OS -> vCenter
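
If you want to double-check the leaf end of that path, one option is to query the LLDP/CDP adjacency objects through the APIC REST API. This is only a rough sketch; the APIC address, credentials, and verify=False are placeholders for a lab setup:

import requests

APIC = "https://apic.example.com"  # placeholder APIC address
session = requests.Session()

# Log in with the standard aaaLogin call
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)

# List LLDP neighbors seen by the leaves (CDP neighbors live in cdpAdjEp)
resp = session.get(f"{APIC}/api/node/class/lldpAdjEp.json", verify=False)
for obj in resp.json()["imdata"]:
    attrs = obj["lldpAdjEp"]["attributes"]
    print(attrs["dn"], "->", attrs.get("sysName", ""))

If the ESXi host shows up there but vCenter still reports nothing, the frames are being consumed before they reach the host OS, which is exactly the situation described above.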

Hope that helps,

Leigh

Thanks lpember. That helped, and I managed to verify it in my lab. As you mentioned, ACI detects the vDS via CDP/LLDP. But I also realized through my testing that another reason it is required is EPG port group creation on the vDS. Apparently, if ACI does not detect the CDP/LLDP neighborship, it will not create the port group on the vDS, even though the vDS is up.

Hello Munesh,

What you are referring to is controlled by the Resolution Immediacy attribute, specifically the "Immediate" and "On Demand" settings. This is set upon domain association to the EPG. From the ACI Virtualization Guide, these are the three options:

  • Pre-provision—Specifies that a policy (for example, VLAN, VXLAN binding, contracts, or filters) is downloaded to a leaf switch even before a VM controller is attached to the virtual switch (for example, VMware VDS, or Cisco AVS) thereby pre-provisioning the configuration on the switch.

When using pre-provision immediacy, the policy is downloaded to the leaf regardless of CDP/LLDP neighborship, without a host even being connected to the VMM switch.

  • Immediate—Specifies that EPG policies (including contracts and filters) are downloaded to the associated leaf switch software upon VM controller attachment to a virtual switch. LLDP or OpFlex permissions are used to resolve the VM controller to leaf node attachments.

The policy will be downloaded to the leaf when you add a host to the VMM switch. CDP/LLDP neighborship from host to leaf is required.

  • On Demand—Specifies that a policy (for example, VLAN, VXLAN bindings, contracts, or filters) is pushed to the leaf node only when a VM controller is attached to a virtual switch and a VM is placed in the port group (EPG).

The policy will be downloaded to the leaf when a host is added to the VMM switch and a virtual machine needs to be placed into the port group (EPG). CDP/LLDP neighborship from host to leaf is required.

With both Immediate and On Demand, if the host and leaf lose LLDP/CDP neighborship, the policies are removed.
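
If it helps to see where this lives in the object model, here is a hedged sketch of setting Resolution Immediacy when associating the VMM domain to an EPG through the APIC REST API. The tenant, EPG, and domain names are hypothetical; "lazy" is the API value for what the GUI calls On Demand:

import requests

APIC = "https://apic.example.com"           # placeholder
EPG_DN = "uni/tn-Tenant1/ap-App1/epg-Web"   # hypothetical EPG
DOM_DN = "uni/vmmp-VMware/dom-Tenant1-VMM"  # hypothetical VMM domain

session = requests.Session()
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)

# fvRsDomAtt is the EPG-to-domain association; resImedcy is Resolution
# Immediacy ("pre-provision", "immediate", or "lazy" for On Demand)
payload = {"fvRsDomAtt": {"attributes": {
    "tDn": DOM_DN,
    "resImedcy": "lazy",
    "instrImedcy": "lazy",  # Deployment Immediacy, a separate knob
}}}
session.post(f"{APIC}/api/mo/{EPG_DN}.json", json=payload, verify=False)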

Cheers,

-Gabriel

Hello All,

Our VMware infrastructure is divided into multiple clusters within a single VMware datacenter. We plan to use one DVS per cluster, which means multiple DVSes under one datacenter.

When VMM integration is done between vCenter and the APIC:

Q1) Do we have to add the same vCenter as a VMM domain for each DVS? I don't see any option to add multiple DVSes under one domain.

Q2) When multiple domains are added for the same vCenter, the same set of ESXi hypervisors (under the VMware datacenter) is visible under different VMM domains in ACI. Will there be an issue in applying port groups and programming VLANs?

What's the right way to implement multiple DVSes under one VMware datacenter?

Hello, and thanks for using the Support Forums.

Q1) Use the same vCenter IP and Datacenter name for all VMM domains.

Q2) There will not be an issue. Just make sure you add the correct hosts in a particular cluster to the appropriate DVS that was created by the APIC.
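
To make Q1 concrete, here is a rough sketch of two VMM domain payloads that point at the same vCenter and datacenter, one per DVS. The names, IP, and datacenter are hypothetical, and a real domain also needs vCenter credentials and a VLAN pool, which are omitted here:

# One VMM domain per DVS; both controllers reference the same vCenter
payload_dvs1 = {"vmmDomP": {"attributes": {"name": "Cluster1-DVS"},
    "children": [{"vmmCtrlrP": {"attributes": {
        "name": "vcenter1",
        "hostOrIp": "192.0.2.10",       # same vCenter IP in both domains
        "rootContName": "Datacenter1",  # same vCenter datacenter name
    }}}]}}

payload_dvs2 = {"vmmDomP": {"attributes": {"name": "Cluster2-DVS"},
    "children": [{"vmmCtrlrP": {"attributes": {
        "name": "vcenter1",
        "hostOrIp": "192.0.2.10",
        "rootContName": "Datacenter1",
    }}}]}}

# Each would be POSTed to {APIC}/api/mo/uni/vmmp-VMware.json, and each
# domain then creates its own DVS under the same vCenter datacenter.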

You are on the right track for implementing multiple DVSes under one datacenter!

What other questions do you have?

paul.mancuso
Level 1

I have an issue whereby, when the VDS is created by the VMM domain within ACI, the vCenter environment receives the VDS with the proper settings, etc., but the NICs state that CDP and LLDP are not available, even though the VDS that was created shows LLDP is enabled.

The vCenter environment has hosts within 3 clusters. Each host in each cluster was already configured to use two of the four NICs (Dell server with four identical 10Gb NICs, same driver), attached to a pre-existing VDS that is using CDP. On those two NICs (VMNIC2 and VMNIC3), CDP is available and functioning; they are performing all of the management functions for the vSphere environment on each host.

The remaining two NICs (VMNIC0 and VMNIC1) were attached to the ACI-created vDS on each host. When viewing the Physical Adapters property in the Web Client and selecting either of these two NICs on any host, the same message appears for the discovery protocols: "Link Layer Discovery Protocol is not available on this physical network adapter".

As I stated, the VDS that was created shows that LLDP is enabled. Also, everything that is created functions.

When creating EPGs within ACI, the dvPortGroups appear on the VDS and they are functional. But I have 6 faults, one for each of the six hosts' VMNICs (2 hosts per cluster, 3 clusters). The fault notes a failure to get CDP/LLDP adjacency information for the physical adapters on the host.

"[FSM:FAILED]: Get LLDP/CDP adjacency information for the physical adapters on the host: host1.tenant1.local (TASK:ifc:vmmmgr:CompHvGetHpNicAdj)" dn="uni/vmmp-VMware/dom-Tenant1-VMM/ctrlr-vcsa02.corp.local/fd-[comp/prov-VMware/ctrlr-[Tenant1-VMM]-host1.tenant1.local/hv-host-25]-fault-F606391" domain="infra" highestSeverity="major" lastTransition="2016-09-13T15:39:17.640+00:00" lc="raised" occur="1" origSeverity="major" prevSeverity="major" rule="fsm-get-hp-nic-adj-fsm-fail" severity="major" status="" subject="task-ifc-vmmmgr-comp-hv-get-hp-nic-adj" type="config"/></imdata>

Any help with this? 

Paul

Take a look at the Interface Policy Group for those ESXi hosts and check what the LLDP and CDP settings are. By default, LLDP is on and CDP is off.

If your servers only support CDP, then you will get those adjacency faults. The Interface Policy Group is what programs the VDS settings (if there is no vSwitch override).

The policy that affects the vDS uplinks is the vSwitch policy configured on the VMM domain. Be sure these are set to the appropriate CDP/LLDP values.

Also, if you want to use CDP, ensure that LLDP is disabled. LLDP is enabled by default and takes precedence over CDP. If your host isn't configured for LLDP but the VMM domain is, you will not be able to build adjacencies.
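
For reference, the interface-level policies involved look roughly like this as REST payloads (the policy names are hypothetical); they are what the Interface Policy Group, and the VMM domain's vSwitch policy, reference for the host uplinks:

# LLDP off so it cannot take precedence, CDP on for the host uplinks
lldp_off = {"lldpIfPol": {"attributes": {
    "name": "LLDP-Disabled",
    "adminRxSt": "disabled",  # stop receiving LLDP on the port
    "adminTxSt": "disabled",  # stop transmitting LLDP on the port
}}}

cdp_on = {"cdpIfPol": {"attributes": {
    "name": "CDP-Enabled",
    "adminSt": "enabled",
}}}

# POST each to {APIC}/api/mo/uni/infra.json, then reference the policies
# from the Interface Policy Group (and vSwitch policy) used by the hosts.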

Robert
