This document is intended as a solution-level reference for technical professionals responsible for planning, designing, and implementing the Cisco Application Virtual Switch (AVS) for data center customers.
This document provides AVS planning considerations and topology recommendations, but does not discuss all the foundational technologies, procedures and best practices for deploying the routing, switching and data center setup required by the solution. Instead, it refers to detailed documents that discuss those technologies and implementation methods, while focusing on specific configuration and topologies for deploying AVS within the ACI (Application Centric Infrastructure) Solution.
This document assumes that readers have a thorough understanding of Cisco ACI (Application Centric Infrastructure) and other Cisco data center technologies. Please refer to the following links to understand these concepts.
http://www.cisco.com/go/aci
http://www.cisco.com/go/datacenter
Cisco Application Virtual Switch (AVS) is a hypervisor-resident distributed virtual switch that is specifically designed for the Cisco Application Centric Infrastructure (ACI) and managed by Cisco APIC (Application Policy Infrastructure Controller).
The Cisco AVS is integrated with the Cisco Application Centric Infrastructure (ACI). Unlike Nexus1000V where management is done by a dedicated Virtual Supervisor Module, Cisco AVS is managed by the Cisco APIC. Cisco AVS implements the OpFlex protocol for control plane communication.
For more details about Cisco AVS, refer to the following URL:
http://www.cisco.com/c/en/us/products/switches/application-virtual-switch/index.html
Cisco AVS is the Cisco distributed vSwitch for ACI mode. If you are running Nexus 9000 switches in standalone mode, you can use the Cisco Nexus 1000V as the distributed vSwitch.
Before we dive into the specifics of AVS, it is important to understand the basic concepts about Cisco Application Centric Infrastructure fabric.
The Cisco Application Centric Infrastructure (ACI) fabric consists of Cisco Nexus 9000 Series switches running in leaf/spine ACI fabric mode, managed by the APIC (Application Policy Infrastructure Controller). In the recommended minimum configuration, the APIC is deployed as a cluster of at least three controllers.
The APIC fabric management functions do not operate in the data path of the fabric. Management is performed over an out-of-band management network.
The following figure illustrates an overview of the leaf/spine ACI fabric.
The ACI fabric provides low-latency forwarding across high-bandwidth links (40 Gbps, with a 100-Gbps future capability). Traffic with the source and destination on the same leaf switch/node is handled locally. All other traffic traveling from the ingress leaf to the egress leaf goes through a spine switch. Although this architecture appears as two hops from a physical perspective, it is actually a single Layer 3 hop because the fabric operates as a single Layer 3 switch.
For security and policy enforcement, admins typically group endpoints (EPs) with identical semantics into endpoint groups (EPGs) and then write policies that regulate how such groups can interact with each other.
OpFlex, the southbound API, is an open and extensible policy protocol used to transfer abstract policy in XML or JavaScript Object Notation (JSON) between the Cisco APIC and the Cisco AVS.
http://tools.ietf.org/html/draft-smith-opflex-00
Cisco AVS supports two modes of traffic forwarding: Local Switching and No Local Switching.
The following two figures show the Cisco AVS modes and the different options.
The forwarding mode is selected during Cisco AVS configuration in the APIC, at the time the VMware vCenter domain is created. VMware vCenter domain creation is the step in which the APIC communicates with vCenter and dynamically creates the Cisco AVS distributed virtual switch in vCenter.
“No Local Switching” mode was formerly known as FEX Enable mode. In “No Local Switching” mode, all traffic (both intra-EPG and inter-EPG) is forwarded by the physical leaf. In this mode, VXLAN is the only allowed encapsulation type.
“Local Switching” mode was formerly known as FEX Disable mode. In this mode, all intra-EPG traffic is forwarded locally by the Cisco AVS, without the involvement of the physical leaf, provided the traffic is bound for the same host. All inter-EPG traffic is forwarded via the physical leaf.
In “Local Switching” mode, the Cisco AVS can either use VLAN or VXLAN encapsulation for forwarding traffic to the leaf and back. The encapsulation type is selected during Cisco AVS installation.
Cisco recommends Local Switching mode. This mode provides the best performance and forwarding by keeping traffic local to the AVS host itself when the endpoints are part of the same EPG and reside on the same host.
Network architects can use different approaches for link aggregation and for protection against switch or link failure. The most common design approaches with Cisco AVS are virtual PortChannel (vPC) and MAC pinning. Both design approaches provide protection against single-link and physical-switch failures, but they differ in the way that the virtual and physical switches are coupled and the way that the VMware ESX or ESXi server traffic is distributed over the 10 Gigabit Ethernet links. The essence of all these approaches is Port-Channel technology.
A Port-Channel (also referred to as Ether-Channel) on the Cisco AVS implements the standards-based IEEE 802.3ad or 802.1AX link aggregation protocol that incorporates the Link Aggregation Control Protocol (LACP) for automatic negotiation.
LACP dynamically bundles several physical ports together to form a single port channel. LACP enables a node to negotiate automatic bundling of links by sending LACP packets to the peer node. Essentially, the “active” end of the LACP group sends out special frames advertising its ability and desire to form a Port-Channel.
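For illustration, a minimal sketch of an LACP port channel on an upstream standalone Cisco Nexus switch is shown below; the interface and channel-group numbers are hypothetical, and the AVS side of the bundle is configured through an APIC port channel policy rather than the switch CLI.

switch(config)# interface ethernet 1/1-2
switch(config-if-range)# channel-group 10 mode active
switch(config-if-range)# exit
switch(config)# interface port-channel 10
switch(config-if)# switchport mode trunk

With “mode active”, the switch actively sends LACP frames to negotiate the bundle; with “mode passive”, it only responds to LACP frames received from the peer.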
A standard Port-Channel requires that all uplinks from one ESXi host in the Port-Channel group be connected to the same single upstream physical switch.
When the ESXi host uplinks are spread across more than one upstream physical switch, the upstream switches are clustered using virtual Port-Channel (vPC).
When the LACP protocol is not running on the links, the bundle is called a static Port-Channel. In Cisco IOS and Cisco NX-OS, static Port-Channel mode is also called “mode on”. The following commands configure a static Port-Channel on a Nexus 5000 switch:
switch(config)# interface ethernet 1/1-2
switch(config-if-range)# channel-group 1 mode on
In contrast, in the APIC controller, LACP mode “Off” represents a static Port-Channel configuration.
A maximum of 16 links can be configured to form a Port-Channel group.
You can configure one of several types of port channel policies on the Cisco AVS: Link Aggregation Control Protocol (LACP) in active or passive mode, MAC pinning, or static. You can configure port channel policies through the Cisco APIC GUI, the REST API, or the CLI.
In MAC pinning mode, the Gigabit Ethernet uplinks from the Cisco AVS are treated as stand-alone links. In a two-uplink scenario, each Gigabit Ethernet interface is connected to a separate physical switch, with Layer 2 continuity on all IEEE 802.1Q trunked VLANs between the two switches. Virtual Ethernet ports supporting virtual machines and VMkernel ports are allocated in a round-robin fashion over the available Gigabit Ethernet uplinks. Each MAC address is pinned to one of the uplinks until a failover event occurs. MAC pinning does not rely on any protocol to distinguish the different upstream switches, making the deployment independent of any hardware or design. This independence enables consistent and easy deployment of the Cisco AVS, and it is the preferred method for deploying the Cisco AVS when the upstream switches cannot be clustered using Cisco vPC.
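As a sketch of the upstream requirement for MAC pinning (interface and VLAN numbers are hypothetical), the only configuration needed on standalone upstream switches such as a pair of Nexus 5000s is a matching 802.1Q trunk toward each ESXi uplink, with no port channel:

switch-A(config)# interface ethernet 1/10
switch-A(config-if)# switchport mode trunk
switch-A(config-if)# switchport trunk allowed vlan 100-110

switch-B(config)# interface ethernet 1/10
switch-B(config-if)# switchport mode trunk
switch-B(config-if)# switchport trunk allowed vlan 100-110

When the uplinks connect directly to ACI leaf switches, the equivalent VLAN scope is defined through APIC access policies instead of the switch CLI.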
vPC is required on the upstream physical switches to enable the Port-Channel to span both upstream physical switches and still maintain availability for the VMware ESXi host should one switch fail or lose connectivity. This vPC clustering is transparent to the Cisco AVS; from the AVS host's point of view, the vPC cluster appears as a single switch.
For vPC to work, the Cisco Application Virtual Switch should be configured with an LACP Port-Channel (the configuration is done via the APIC), with the two Gigabit Ethernet uplinks defined by one port profile.
The two upstream physical switches should be configured with vPC. The upstream switches (for example, Cisco Nexus 5000 or 7000 Series switches) will appear as a single logical switch distributed over two physical chassis.
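As a minimal sketch only, the vPC side of a pair of upstream Nexus 5000 switches could look like the following; the domain ID, keepalive addresses, and interface and port-channel numbers are hypothetical, and a mirror of switch-A's configuration (with the keepalive addresses reversed) is applied on switch-B:

switch-A(config)# feature vpc
switch-A(config)# feature lacp
switch-A(config)# vpc domain 10
switch-A(config-vpc-domain)# peer-keepalive destination 10.0.0.2 source 10.0.0.1
switch-A(config)# interface port-channel 1
switch-A(config-if)# switchport mode trunk
switch-A(config-if)# vpc peer-link
switch-A(config)# interface port-channel 20
switch-A(config-if)# switchport mode trunk
switch-A(config-if)# vpc 20
switch-A(config)# interface ethernet 1/10
switch-A(config-if)# channel-group 20 mode active

Port-channel 20 is the bundle facing the ESXi host uplinks; the physical member ports for the peer link (port-channel 1) are omitted here for brevity.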
Design | Uplinks | Physical-Switch Requirements
---|---|---
vPC | Single logical Port-Channel | Clustered physical switches using a multichassis EtherChannel (MEC) implementation such as Cisco vPC, Virtual Switching System (VSS), or Virtual Blade Switch (VBS) technologies
MAC pinning | All teamed uplinks in the same Layer 2 domain | No special configuration other than Layer 2 continuity between both switches on all VLANs trunked to the VMware ESX or ESXi server
vPC is the recommended approach when vPC or clustered physical switches are available at the physical access layer. MAC pinning should be chosen when these options are not available.
At a very high level, there are seven commonly deployed topologies supported by AVS. For ease of understanding, these are divided into two groups.
In topology#1, the ESXi host is directly connected to an ACI leaf switch. This is the typical scenario where a rack-mount server (for example, a Cisco UCS C-Series server) runs the ESXi hypervisor with AVS as its distributed virtual switch. It is recommended to use VLAN encapsulation in this topology.
For link aggregation, AVS supports both MAC pinning and LACP active policies.
In topology#2, the AVS ESXi host (for example, a rack-mount server such as a Cisco UCS C-Series server) is connected to a FEX, and the FEX is directly connected to an ACI leaf switch. VLAN local switching mode, VXLAN local switching mode, and VXLAN no local switching mode are all supported with this topology. It is recommended to use VLAN encapsulation with this topology.
There are some limitations with this topology that you should be aware of
In topology#3, the ESXi host is running on a Cisco UCS B-Series blade server. The B-Series server chassis is connected to the UCS Fabric Interconnect (FI), and the FI is then directly connected to an ACI leaf switch. This topology connects the ESXi hypervisor to the ACI fabric via the fabric interconnect, using vPC, LACP, and MAC pinning. In topology#3 you can use either VLAN or VXLAN encapsulation; the general guideline is to use VXLAN when there is more than one hop between the ESXi host and the leaf switch. The following figure shows the logical diagram of topology#3.
Topology#4 is a very common use case where, for example, a customer has already deployed Cisco Nexus 5000, 6000, or 7000 Series switches and wants to insert the Cisco ACI fabric into the existing architecture. This topology connects the ESXi hypervisor to the ACI fabric through a Cisco Nexus 5000 switch, using virtual port channels and MAC pinning. VXLAN is used in this topology.
Topology#5 is not very different from topology#4. The only difference is that a Cisco Nexus 2000 Fabric Extender (FEX) is attached to the Layer 2 switch sitting between the AVS and the leaf switch, and the ESXi host running AVS is connected to the FEX. This is the most common scenario where a FEX is deployed as a top-of-rack (ToR) switch. VXLAN is used in this topology. Topologies #4 and #5 work almost the same in terms of multicast.
Topology#6 represents a customer running a data center with a core and aggregation architecture based on Cisco Nexus 5000 or 7000 Series switches. The customer wants to migrate to an ACI-based architecture in phases. The leaf switch can be connected to the Nexus 5000 or 7000 switch at the aggregation layer. VXLAN is recommended in this topology.
It is highly recommended to use VXLAN here because there is more than one hop between the AVS and the leaf switch.
When you are working with Cisco UCS B-Series servers and using an APIC policy, Link Layer Discovery Protocol (LLDP) is not supported and must be disabled.
Cisco Discovery Protocol (CDP) is disabled by default on the Cisco UCS Manager fabric interconnect. In Cisco UCS Manager, you must enable CDP by creating a policy under Network Control Policies > CDP.
Do not enable Fabric failover on the adapters in the UCS server service profiles.
Refer to the following documentation for detailed information about this limitation.
Topology#7 connects the AVS ESXi host to the leaf switches using MAC pinning, either directly or via Cisco Nexus 5000 switches and Cisco UCS 62xx Series Fabric Interconnects.
The following configurations are recommended when deploying Cisco AVS with the Cisco APIC:
The UCS port channel configuration is statically set to Link Aggregation Control Protocol (LACP) mode active. This configuration cannot be modified; therefore, all upstream port-channel configurations must adhere to LACP mode active as well. Alternatively, you can configure the upstream switch ports for LACP mode passive.
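As a hypothetical example for topologies where the fabric interconnect uplinks terminate on a standalone Nexus switch (interface and channel-group numbers assumed), the upstream ports would be bundled in LACP active or passive mode to match the fabric interconnect's always-active LACP behavior:

switch(config)# interface ethernet 1/20-21
switch(config-if-range)# channel-group 30 mode active

When the fabric interconnect uplinks connect directly to ACI leaf ports, the same LACP active or passive setting is applied through an APIC port channel interface policy rather than the switch CLI.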
The following table lists different switches and their supported port-channel modes:
Switch | LACP | MAC Pinning | Static Port-Channel | vPC
---|---|---|---|---
AVS | Yes | Yes | Yes | Not applicable
Nexus 9000 Leaf | Yes | Yes | Yes | Yes
UCS Fabric Interconnect | Yes (northbound ports); No (southbound ports) | Yes | No | No
Nexus 5000 | Yes | Yes | Yes | Yes
Please refer to the following supporting documents for more detailed information.
Cisco Application Virtual Switch
http://www.cisco.com/c/en/us/products/switches/application-virtual-switch/index.html
Cisco Application Centric Infrastructure Fundamentals
The author of this document is Shahzad Ali. Shahzad is part of the Cloud Networking and Virtualization Product Management team at Cisco Systems, Inc.
Special thanks to Balaji Sivasubramanian (Director, Product Management), Srinivas Sardar (Senior Manager, Software Development) and Krishna Nekkanti (Software Engineer) for providing content and feedback.