Robert Burns
Cisco Employee

This document provides a high-level checklist for setting up VMM integration, followed by common mistakes, errors, and faults to look for. Finally, we cover additional troubleshooting commands used to root-cause common VMM domain issues. This article is a work in progress and a living document; please add comments to help improve the content for others.

 

Overview

 

VMM Integration allows a Virtual Machine Manager (vCenter, SCVMM, etc.) to be linked to ACI so that policies can be made available to virtual machines in the same way as to bare metal hosts. ACI supports multiple VMM domains, which can be a mix of hypervisor managers. At FCS only vCenter will be supported, but expect Hyper-V and other hypervisors to be added not long after.

 

Endpoint Groups (EPGs) are used in the same way with virtual machines as they are with bare metal servers. The only difference is that with bare metal endpoints we normally statically bind an EPG to a leaf/interface, whereas with virtual machines we bind the VMM domain to the EPG. Doing so allows the APIC to create a DVS (distributed virtual switch) within vCenter, to which hosts can be added. Once the hypervisor hosts (ESX) are added to the DVS, the EPG becomes available to the virtual machines as a network binding (aka Port Group).

 

Fig.1: ACI EPG shown in vCenter as Virtual Machine Network Port Group.

vm-dvs-binding.JPG

 

VMM Integration Configuration

 

Configuring VMM integration involves a number of steps. Missing any step will result in the configuration not being applied to vCenter, or in VMs being unable to pass traffic through the fabric. Below are the high-level steps with explanations of what each step enables. For full details and procedures, please refer to the configuration guides and/or training NPI.

 

High Level Procedure

 

Pre-Req Tasks

 

- Create Tenant

- Create BD

- Assign appropriate IP subnets to BD

- Create Attachable Access Entity Profile (AEP)

- Create Switch Profile

- Create Interface Policy Group

- Create Interface Profile

 

VMM Specific Tasks

 

1. Create vCenter Domain.  VM Networking - VM Provider VMware - Create VM Provider. 

 

Here we configure the logical VMM domain, which includes defining the vCenter credentials and the vCenter host details, then binding them together. Here we also create/assign the VLAN pool that will be used by this VMM domain; the VLAN pool should include all VLANs your VMs utilize. The last step is to assign this VMM domain to the Attachable Access Entity Profile (AEP) previously created. The AEP should have been previously linked to the Interface Policy Group and Interface Profile, respectively. This allows the VMM domain to be accessible on the defined leaf interfaces; essentially we're telling ACI where hypervisors for this VMM domain connect to the fabric. If you fail to associate the AEP, the leaf will never program itself with the related EPGs. Be sure the vCenter Datacenter name matches exactly - see Fig. 2.
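For readers who prefer the API view, the sketch below builds the REST payload this step roughly corresponds to. The class names (vmmDomP, vmmCtrlrP, infraRsVlanNs), the DN formats, and the controller name 'vcenter1' are assumptions based on the ACI object model as I understand it; verify the exact classes and attributes in Visore before posting anything to a live APIC.

```python
import json

def build_vmm_domain_payload(dom_name, dc_name, vcenter_ip, vlan_pool):
    """Build a (hypothetical) REST payload for a VMware VMM domain.

    Class names and DN formats follow the ACI object model
    (vmmDomP / vmmCtrlrP / infraRsVlanNs) -- confirm in Visore.
    """
    dom_dn = "uni/vmmp-VMware/dom-%s" % dom_name
    return {
        "vmmDomP": {
            "attributes": {"dn": dom_dn, "name": dom_name},
            "children": [
                # Controller: rootContName must exactly match the
                # vCenter Datacenter name (see Fig. 2).
                {"vmmCtrlrP": {"attributes": {
                    "name": "vcenter1",           # placeholder controller name
                    "hostOrIp": vcenter_ip,
                    "rootContName": dc_name}}},
                # Bind the dynamic VLAN pool to the domain.
                {"infraRsVlanNs": {"attributes": {
                    "tDn": "uni/infra/vlanns-[%s]-dynamic" % vlan_pool}}},
            ],
        }
    }

payload = build_vmm_domain_payload("VMM-Test", "DC1", "10.0.0.10", "VMM-Pool")
print(json.dumps(payload, indent=2))
```

This payload would be POSTed to /api/node/mo/uni.json after authenticating; the point here is only to show which pieces (credentials/controller, datacenter name, VLAN pool) the GUI wizard is stitching together.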

 

Fig.2: VMM Controller Datacenter Name - APIC vs. vCenter

 

vmm-vc-dc-name.JPG

 

 

2.  Bind EPG to VMM Domain.  Tenants - Tenant X - Application Profiles - Application X -  Application EPGs - EPG X - Domains (VMs and Baremetal).

 

This task makes the EPG available to the VMM domain (including all VMs on the hosts attached to the associated DVS). The only option other than selecting the VMM domain here is setting the Deploy and Resolution Immediacy. This tells the APIC either to push the EPG and related config to the AEP-associated leafs immediately, or only when a VM associated with that EPG/Port Group comes online (On Demand). On Demand is the default and preferred choice for resource scaling.
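In the object model this binding is a single relation object under the EPG. The sketch below builds that payload; the class name fvRsDomAtt, the DN format, and the immediacy values "lazy" (On Demand) and "immediate" are assumptions from the ACI object model - check them in Visore before relying on them.

```python
def build_epg_domain_binding(tenant, ap, epg, dom_name, on_demand=True):
    """Build a (hypothetical) payload binding an EPG to a VMM domain.

    fvRsDomAtt is the EPG-to-domain relation; resImedcy/instrImedcy
    accept "lazy" (On Demand) or "immediate" -- verify in Visore.
    """
    dom_dn = "uni/vmmp-VMware/dom-%s" % dom_name
    imedcy = "lazy" if on_demand else "immediate"
    return {
        "fvRsDomAtt": {
            "attributes": {
                "dn": "uni/tn-%s/ap-%s/epg-%s/rsdomAtt-[%s]"
                      % (tenant, ap, epg, dom_dn),
                # "lazy" == On Demand, the default and preferred choice
                "resImedcy": imedcy,
                "instrImedcy": imedcy,
            }
        }
    }

binding = build_epg_domain_binding("VMM-Test", "VMM-Test-App", "Test_DB", "VMM-Test")
```

The tenant/app/EPG names above match the Test_DB example used later in the verification section.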

 

Fig.3 - Adding VMM Domain Association to EPG

 

vmm-association.JPG

 

Assuming all the pre-req tasks were completed, you should be all set.  Next on to verification.

 

VMM Integration Verification

 

1. DVS is created in vCenter.  As soon as the VMM domain is created, the DVS should be created in vCenter.  To verify, from the VI Client navigate to Home - Inventory - Networking.  The DVS should be present with the name given to the VMM provider.

 

DVS Created.JPG

 

Troubleshooting

 

If you do not see the DVS created in vCenter, check the Faults within the VM Networking - VMM Domain section.  The likely culprit is simple L2 connectivity.  Ensure the management EPG associated with the vCenter host is using the correct Bridge Domain - typically this will be the inband BD.
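Those same faults can also be pulled programmatically by querying the fault instances under the VMM domain subtree via the APIC REST API. The sketch below only builds the query URL (no login/session handling); the query-target parameters follow standard APIC REST syntax, and 'apic1' is a placeholder hostname.

```python
def vmm_domain_fault_url(apic_host, dom_name):
    """Build a REST URL for all faultInst objects under a VMware VMM domain.

    Uses query-target=subtree with target-subtree-class=faultInst,
    the usual APIC REST pattern for scoped fault queries.
    """
    dn = "uni/vmmp-VMware/dom-%s" % dom_name
    return ("https://%s/api/node/mo/%s.json"
            "?query-target=subtree&target-subtree-class=faultInst"
            % (apic_host, dn))

print(vmm_domain_fault_url("apic1", "VMM-Test"))
```

An authenticated GET against that URL returns the same fault records the GUI shows under the VMM domain, which is handy when scripting health checks.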

 

2. EPGs programmed on the leaf.  Assuming the DVS is created, you've assigned VMs to the correct EPG/Port Group, and the VMs are powered up, you should see both the BD and the EPG programmed on the hypervisor-connected leaf switches.

 

Verification.

 

- Connect to the leaf via SSH.  You can do this directly or from the APIC.  Connecting from the APIC allows you to reference the DNS name rather than determining the leaf IP, and to use 'tab' to autocomplete the leaf name.

 

admin@apic2:~> ssh admin@leaf101
Password:

leaf101# show vlan extended

 VLAN Name                             Status    Ports
 ---- -------------------------------- --------- -------------------------------
 13   --                               active    Eth1/1, Eth1/3
 21   VMM-Test:VMM-Test-BD             active    Eth1/25
 22   VMM-Test:VMM-Test-App:Test_DB    active    Eth1/25

 VLAN Type  Vlan-mode  Encap
 ---- ----- ---------- -------------------------------
 13   enet  CE         vxlan-16777209, vlan-4093
 21   enet  CE         vxlan-16646014
 22   enet  CE         vlan-305


leaf101#
 

From here we can see the BD is correctly programmed on the leaf using internal VLAN 21.  For intra-fabric transport across this BD, the system uses VXLAN 16646014.  The encapsulation VLAN (aka wire VLAN) is 305; this is the VLAN the host will see on the DVS Port Group, and it is one of the VLANs pulled from the attached VLAN pool.
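If you need to check this internal-VLAN-to-encap mapping across many leafs, the second table of 'show vlan extended' is easy to parse. A minimal sketch, using the sample output above:

```python
import re

# Sample taken from the 'show vlan extended' output shown earlier.
SAMPLE = """\
 VLAN Type  Vlan-mode  Encap
 ---- ----- ---------- -------------------------------
 13   enet  CE         vxlan-16777209, vlan-4093
 21   enet  CE         vxlan-16646014
 22   enet  CE         vlan-305
"""

def parse_encaps(output):
    """Map internal VLAN id -> list of encaps from 'show vlan extended'."""
    encaps = {}
    for line in output.splitlines():
        # Rows look like: "<vlan-id>  enet  <mode>  <encap>[, <encap>...]"
        m = re.match(r"\s*(\d+)\s+enet\s+\S+\s+(.+)", line)
        if m:
            encaps[int(m.group(1))] = [e.strip() for e in m.group(2).split(",")]
    return encaps

print(parse_encaps(SAMPLE))
# -> {13: ['vxlan-16777209', 'vlan-4093'], 21: ['vxlan-16646014'], 22: ['vlan-305']}
```

Here VLAN 22 maps to wire VLAN 305 (the Test_DB EPG) and VLAN 21 carries the BD's VXLAN, matching the explanation above.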

 

vlan 305.JPG

 

- Check Visore for the expected config.  In this example, the EPG name is 'Test_DB'.

 

visore-1.JPG

visore-2.JPG

visore-3.JPG 

 


 

Workflow and troubleshooting checklist

 

The following figure may be used as a pictorial representation as well as a checklist for VMM integration.

 

checklist-workflow.png

Comments
swapnendum
Level 1

Hello All

Our VMware infra is divided into multiple clusters within a single VMware Datacenter. We plan to use one DVS per cluster, which means multiple DVSs under one datacenter.

When VMM integration is done between vCenter and APIC -

Q1) Do we have to add the same vCenter as a VMM domain for each DVS? I don't see any option to add multiple DVSs under one domain.

Q2) When multiple domains are added for the same vCenter, the same set of ESXi hypervisors (under the VMware datacenter) is visible under the different VMM domains in ACI. Will there be an issue in applying port groups and programming VLANs?

What's the right way to implement multiple DVS under one VMware datacenter ?

michgri
Level 1

Robert, your images are pointing at techzone.cisco.com, which appears to be no longer publicly accessible.
