
Cisco ACI and AVS Recommended Topologies and Solution Guide




This document is intended as a solution-level reference for technical professionals responsible for planning, designing, and implementing the Cisco Application Virtual Switch (AVS) for data center customers.

This document provides AVS planning considerations and topology recommendations, but does not discuss all of the foundational technologies, procedures, and best practices for deploying the routing, switching, and data center setup required by the solution. Instead, it refers to detailed documents that discuss those technologies and implementation methods, while focusing on specific configurations and topologies for deploying AVS within the ACI (Application Centric Infrastructure) solution.



This document assumes that readers have a thorough understanding of Cisco ACI (Application Centric Infrastructure) and other Cisco Data Center technologies. Refer to the following links to understand these concepts.


Cisco Application Virtual Switch (AVS) Introduction

Cisco Application Virtual Switch (AVS) is a hypervisor-resident distributed virtual switch that is specifically designed for the Cisco Application Centric Infrastructure (ACI) and managed by Cisco APIC (Application Policy Infrastructure Controller).


AVS for Application Centric Infrastructure

The Cisco AVS is integrated with the Cisco Application Centric Infrastructure (ACI). Unlike the Nexus 1000V, which is managed by a dedicated Virtual Supervisor Module (VSM), the Cisco AVS is managed by the Cisco APIC. Cisco AVS implements the OpFlex protocol for control plane communication.


For more details about Cisco AVS, refer to the following URL:


Cisco AVS is the Cisco distributed vSwitch for ACI mode. If you are running Nexus 9000 switches in standalone mode, you can use the Cisco Nexus 1000V as the distributed vSwitch.


Before we dive into the specifics of AVS, it is important to understand the basic concepts about Cisco Application Centric Infrastructure fabric.


Cisco ACI Fabric Overview

The Cisco Application Centric Infrastructure (ACI) fabric includes Cisco Nexus 9000 Series switches with the APIC (Application Policy Infrastructure Controller) running in the leaf/spine ACI fabric mode. A recommended minimum configuration includes:


  • Three Cisco Nexus 9K (9500 series or 9336PQ) switches deployed as spines.
  • Only Cisco Nexus 9K switches (9300 Series) can connect to the spine switches as leaf switches or nodes. (All other devices, appliances, and switches connect to the leaf nodes.)
  • The APIC is an appliance running on a Cisco UCS server that connects to one of the leaf nodes and manages the ACI fabric. The recommended minimum configuration for the APIC is a cluster of three replicated hosts. Each APIC can be connected to two different leaf switches for redundancy.


The APIC fabric management functions do not operate in the data path of the fabric. Management is done via an out-of-band management network.


The following figure illustrates an overview of the leaf/spine ACI fabric.




The ACI fabric provides low-latency forwarding across high-bandwidth links (40 Gbps, with a 100-Gbps future capability). Traffic with the source and destination on the same leaf switch/node is handled locally. All other traffic traveling from the ingress leaf to the egress leaf goes through a spine switch. Although this architecture appears as two hops from a physical perspective, it is actually a single Layer 3 hop because the fabric operates as a single Layer 3 switch.


End Point Groups (EPGs)

For security and policy enforcement, admins typically group endpoints (EPs) with identical semantics into endpoint groups (EPGs) and then write policies that regulate how such groups can interact with each other.


OpFlex Protocol

OpFlex, the southbound API, is an open and extensible policy protocol used to transfer abstract policy in XML or JavaScript Object Notation (JSON) between the Cisco APIC and the AVS.


AVS Switching Modes


Cisco AVS supports two modes of traffic forwarding:


  1. Local Switch Mode
    1. With VLAN encapsulation or
    2. With VXLAN encapsulation
  2. No Local Switching Mode
    1. With VXLAN encapsulation


The following two figures show the Cisco AVS modes and the different options.






The forwarding mode is selected during Cisco AVS configuration in the APIC, at the time the VMware vCenter Domain is created. VMware vCenter Domain creation is the step in which the APIC communicates with vCenter and dynamically creates the Cisco AVS Distributed Virtual Switch in vCenter.


No Local Switching Mode

“No Local Switching” mode was formerly known as FEX enable mode. In “No Local Switching” mode, all traffic (intra-EPG and/or inter-EPG) is forwarded by the physical leaf. In this mode, VXLAN is the only allowed encapsulation type.






Local Switching

“Local Switching” was formerly known as FEX Disable mode. In this mode, all intra-EPG traffic is locally forwarded by the Cisco AVS, without involving the physical leaf, if the traffic is bound for the same host. All inter-EPG traffic is forwarded via the physical leaf.



In “Local Switching” mode, the Cisco AVS can use either VLAN or VXLAN encapsulation for forwarding traffic to the leaf and back. The encapsulation type is selected during Cisco AVS installation.


  • If VLAN encapsulation mode is used, a range of VLANs must be available for use by the Cisco AVS. These VLANs have local scope: they have significance only within the Layer 2 network between the Cisco AVS and the leaf.


  • If VXLAN encapsulation mode is used, only the infra-VLAN needs to be available between the Cisco AVS and the leaf. This results in a simplified configuration and is the recommended encapsulation mode if there are one or more switches between the Cisco AVS and the leaf.


Cisco recommends Local Switching mode. This mode provides the best performance and forwarding by keeping traffic local to the AVS host itself when the endpoints are part of the same EPG and reside on the same host.


Switch Failover and Link Aggregation


Network architects can use different approaches for link aggregation and for protection against switch or link failure. The most common design approaches with Cisco AVS are virtual PortChannel (vPC) and MAC pinning. Both design approaches provide protection against single-link and physical-switch failures, but they differ in the way that the virtual and physical switches are coupled and the way that the VMware ESX or ESXi server traffic is distributed over the 10 Gigabit Ethernet links. The essence of all these approaches is Port-Channel technology.



Port-Channel Technology

A Port-Channel (also referred to as Ether-Channel) on the Cisco AVS implements the standards-based IEEE 802.3ad or 802.1AX link aggregation protocol that incorporates the Link Aggregation Control Protocol (LACP) for automatic negotiation.




LACP dynamically bundles several physical ports together to form a single port channel. LACP enables a node to negotiate an automatic bundling of links by sending LACP packets to the peer node. LACP is simply a way to dynamically build a Port-Channel. Essentially, the “active” end of the LACP group sends out special frames advertising the ability and desire to form a Port-Channel.
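As a hedged illustration of the LACP negotiation described above, an LACP Port-Channel on an upstream Cisco Nexus switch would look like the following sketch (interface and channel-group numbers are arbitrary examples, not values from this solution):

switch(config)# feature lacp
switch(config)# interface ethernet 1/1-2
switch(config-if-range)# channel-group 10 mode active

With "mode active", the switch initiates LACP negotiation; "mode passive" would only respond to an active peer.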



Standard Port Channel

A Standard Port-Channel requires that all uplinks from one ESXi host in the Port-Channel group be connected to the same single upstream physical switch.



Virtual Port Channel

When the ESXi host's uplinks are spread across more than one upstream physical switch, the upstream switches are clustered using Virtual Port-Channel (vPC).



Static Port-Channel


When the LACP protocol is not running on the links, the configuration is called Static Port-Channel mode. In Cisco IOS and Cisco NX-OS, Static Port-Channel mode is also called “Mode ON”. The following configuration creates a Static Port-Channel on a Nexus 5000 switch.


switch(config)# interface ethernet 1/1-2
switch(config-if-range)# channel-group 1 mode on


In contrast, in the APIC controller, LACP mode “Off” represents a Static Port-Channel configuration.




A maximum of 16 links can be configured to form a Port-Channel group.


You can configure one of several types of port channel policies on the Cisco AVS: Link Aggregation Control Protocol (LACP) in active or passive mode, MAC pinning, or static. You can configure port channel policies through the Cisco APIC GUI, the REST API, or the CLI.


  • MAC Pinning: pins each virtual Ethernet port to a single uplink
  • Active: LACP active mode
  • Passive: LACP passive mode
  • Off: LACP off (i.e., Static Port-Channel)
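As a sketch of the REST API approach, a port channel policy can be created by posting a policy object to the APIC. The object class name (lacpLagPol), its distinguished name, and its attributes below are taken from the APIC management information model and should be verified against your APIC version; the policy name is an arbitrary example:

POST https://<apic>/api/node/mo/uni/infra/lacplagp-LacpActivePol.xml

<lacpLagPol name="LacpActivePol" mode="active" minLinks="1" maxLinks="16"/>

Here "mode" would take the values "off", "active", "passive", or "mac-pin", corresponding to the options listed above.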


MAC Pinning

In MAC Pinning mode, the Gigabit Ethernet uplinks from the Cisco AVS are treated as stand-alone links. In a two-uplink scenario, each Gigabit Ethernet interface is connected to a separate physical switch with Layer 2 continuity on all IEEE 802.1Q trunked VLANs between the two switches. Virtual Ethernet ports supporting virtual machines and vmkernel ports are allocated in round-robin fashion over the available Gigabit Ethernet uplinks. Each MAC address is pinned to one of the uplinks until a failover event occurs. MAC Pinning does not rely on any protocol to distinguish the different upstream switches, making the deployment independent of any hardware or design. This independence enables consistent and easy deployment of the Cisco AVS, and it is the preferred method for deploying the Cisco AVS when the upstream switches cannot be clustered using Cisco vPC.
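Because MAC Pinning does not rely on any channel protocol, the corresponding upstream switch ports need only plain trunk configuration, with no channel-group at all. A minimal sketch for each of the two upstream switches (interface and VLAN numbers are illustrative only):

switch(config)# interface ethernet 1/10
switch(config-if)# switchport mode trunk
switch(config-if)# switchport trunk allowed vlan 100-199

The only requirement is that both switches trunk the same VLANs, preserving the Layer 2 continuity described above.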


Virtual Port Channel (vPC)

vPC is required on the upstream physical switches to enable the Port-Channel to span both upstream physical switches and still maintain availability for the VMware ESXi host should one switch fail or lose connectivity. This vPC clustering is transparent to the Cisco AVS. From the AVS host's point of view, the vPC cluster appears as one single switch.



For vPC to work, the Cisco Application Virtual Switch should be configured with LACP Port-Channel (configuration is done via APIC) with the two Gigabit Ethernet uplinks defined by one port profile.



The two upstream physical switches should be configured with vPC. The upstream switch (for example a Cisco Nexus 5000 or 7000 series switch) will appear as a single logical switch distributed over two physical chassis.
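A minimal vPC sketch for each upstream Nexus 5000 follows (domain ID, keepalive addresses, interface numbers, and port-channel numbers are illustrative only; the mirror configuration, with keepalive source and destination swapped, is applied on the peer switch):

switch(config)# feature vpc
switch(config)# feature lacp
switch(config)# vpc domain 10
switch(config-vpc-domain)# peer-keepalive destination 10.0.0.2 source 10.0.0.1
switch(config)# interface port-channel 1
switch(config-if)# switchport mode trunk
switch(config-if)# vpc peer-link
switch(config)# interface port-channel 20
switch(config-if)# switchport mode trunk
switch(config-if)# vpc 20
switch(config)# interface ethernet 1/1
switch(config-if)# channel-group 20 mode active

Port-channel 1 is the vPC peer link between the two switches; port-channel 20 carries the vPC-numbered member link toward the ESXi host.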



Differences Between vPC and MAC Pinning





Physical-Switch Requirements

  • Single logical Port-Channel (vPC): requires clustered physical switches using a multichassis EtherChannel (MEC) implementation such as Cisco vPC, Virtual Switching System (VSS), or Virtual Blade Switch (VBS) technologies.
  • MAC Pinning: requires all teamed uplinks in the same Layer 2 domain, with no special configuration other than Layer 2 continuity between both switches on all VLANs trunked to the VMware ESX or ESXi server.



vPC is the recommended approach when vPC or clustered physical switches are available at the physical access layer. MAC pinning should be chosen when these options are not available.



AVS Recommended and Tested Topologies


At a very high level, there are seven commonly deployed topologies supported by AVS. For ease of understanding, these are divided into two groups:



  1. Standard Topologies
    • Topology#1 AVS host directly connected to N9K leaf switch
    • Topology#2 AVS host connected to N9K leaf switch via FEX
    • Topology#3 AVS host connected to N9K leaf switch via UCS FI





  2. Extended Topologies
    • Topology#4 - AVS host connected to N9K leaf via a single physical switch
      1. Double-Sided VPC with Nexus 5000 and AVS with MAC Pinning
      2. Double-Sided VPC with Nexus 5000 and AVS with VPC
    • Topology#5 - AVS host connected to N9K leaf via a switch-FEX
    • Topology#6 - AVS host connected to N9K leaf via multiple switches
    • Topology#7 - AVS host connected to N9K leaf via UCS FI and a Switch
      1. Single-Side VPC with Nexus 5000/UCS FI and AVS with MAC Pinning



Topology#1 AVS Host Directly Connected to ACI Fabric

In this topology, the ESXi host is directly connected to an ACI leaf switch. This is the typical scenario in which a rack-mount server (for example, a Cisco UCS C-Series server) runs the ESXi hypervisor and AVS runs on it as the distributed virtual switch. It is recommended to use VLAN encapsulation in this topology.


For link aggregation, AVS supports both MAC Pinning and LACP active policies.




Topology#2 AVS Host Connected to ACI Fabric via FEX


In this topology, the AVS ESXi host (for example, a rack-mount server such as a Cisco UCS C-Series server) is connected to a FEX, which in turn is directly connected to an ACI leaf switch. VLAN local switching mode, VXLAN local switching mode, and VXLAN no local switching mode are supported with this topology. It is recommended to use VLAN encapsulation with this topology.




There are some limitations with this topology that you should be aware of:

  • vPC is not supported between the FEX and the leaf, or for AVS (ESXi) hosts directly connected to a FEX.
  • Only a single physical link is supported between the FEX and an AVS (ESXi) host connected to that FEX. This means that LACP and MAC Pinning are not supported for AVS (ESXi) hosts connected directly to a FEX that is connected directly to a leaf.
  • For the single physical link connected between the FEX and an AVS (ESXi) host, when you choose an LACP mode for Cisco AVS, you should choose either MAC Pinning or Off on the APIC controller, as shown in the following diagram.





  • These limitations do not apply to a fabric extender that is connected to a Nexus 5000 or 7000 switch that is connected to a leaf (as shown in Topology#5).


Topology#3 AVS Host Connected to ACI Fabric via UCS FI

In Topology#3, the ESXi host runs on a Cisco UCS B-Series blade server. The B-Series server chassis is connected to the UCS Fabric Interconnect (FI), and the FI is directly connected to an ACI leaf switch. This topology connects the ESXi hypervisor to the ACI fabric via the fabric interconnect and supports vPC, LACP, and MAC Pinning. In Topology#3, one can use either VLAN or VXLAN encapsulation. The main concept is to use VXLAN when there is more than one hop between the ESXi host and the leaf switch. The following figure shows the logical diagram of Topology#3.