
{toc}

Purpose

This document is intended as a solution-level reference for technical professionals responsible for planning, designing, and implementing the Cisco Application Virtual Switch (AVS) for Data Center customers.

This document provides AVS planning considerations and topology recommendations, but it does not discuss all the foundational technologies, procedures, and best practices for deploying the routing, switching, and data center setup required by the solution. Instead, it refers to detailed documents that cover those technologies and implementation methods, while focusing on the specific configurations and topologies for deploying AVS within the ACI (Application Centric Infrastructure) solution.

 

Prerequisites

This document assumes that readers have a thorough understanding of Cisco ACI (Application Centric Infrastructure) and other Cisco Data Center technologies. Please refer to the following links to understand these concepts.

http://www.cisco.com/go/aci
http://www.cisco.com/go/datacenter

 

Cisco Application Virtual Switch (AVS) Introduction

Cisco Application Virtual Switch (AVS) is a hypervisor-resident distributed virtual switch that is specifically designed for the Cisco Application Centric Infrastructure (ACI) and managed by Cisco APIC (Application Policy Infrastructure Controller).

 

AVS for Application Centric Infrastructure

The Cisco AVS is integrated with the Cisco Application Centric Infrastructure (ACI). Unlike the Nexus 1000V, where management is done by a dedicated Virtual Supervisor Module, the Cisco AVS is managed by the Cisco APIC. Cisco AVS implements the OpFlex protocol for control-plane communication.

 

For more details about Cisco AVS, refer to the following URL:
http://www.cisco.com/c/en/us/products/switches/application-virtual-switch/index.html

 

Cisco AVS is the Cisco distributed vSwitch for ACI mode. If you are running the Nexus 9000 in standalone mode, you can use the Cisco Nexus 1000V as the distributed vSwitch.

 

Before we dive into the specifics of AVS, it is important to understand the basic concepts of the Cisco Application Centric Infrastructure fabric.

 

Cisco ACI Fabric Overview

The Cisco Application Centric Infrastructure (ACI) fabric includes Cisco Nexus 9000 Series switches with the APIC (Application Policy Infrastructure Controller) running in the leaf/spine ACI fabric mode. A recommended minimum configuration includes:

 

  • Three Cisco Nexus 9K switches (9500 Series or 9336PQ) deployed as spines.
  • Only Cisco Nexus 9K switches (9300 Series) can connect to the spine switches as leaf switches or nodes. (All other devices, appliances, and switches connect to the leaf nodes.)
  • The APIC is an appliance that runs on a Cisco UCS server and connects to one of the leaf nodes; it manages the ACI fabric. The recommended minimum configuration for the APIC is a cluster of three replicated hosts. Each APIC can be connected to two different leaf switches for redundancy.

 

The APIC fabric management functions do not operate in the data path of the fabric; management is done through an out-of-band management network.

 

The following figure illustrates an overview of the leaf/spine ACI fabric.

The ACI fabric provides low-latency forwarding across high-bandwidth links (40 Gbps, with a 100-Gbps future capability). Traffic with the source and destination on the same leaf switch/node is handled locally. All other traffic traveling from the ingress leaf to the egress leaf goes through a spine switch. Although this architecture appears as two hops from a physical perspective, it is actually a single Layer 3 hop because the fabric operates as a single Layer 3 switch.

 

Endpoint Groups (EPGs)

For security and policy enforcement, administrators typically group endpoints (EPs) with identical semantics into endpoint groups (EPGs) and then write policies that regulate how those groups can interact with each other.

 

OpFlex Protocol

OpFlex, the southbound API, is an open and extensible policy protocol used to transfer abstract policy, in XML or JavaScript Object Notation (JSON), between the Cisco APIC and the AVS.

http://tools.ietf.org/html/draft-smith-opflex-00

 

AVS Switching Modes

 

Cisco AVS supports two modes of traffic forwarding:

 

  1. Local Switching Mode
    a. With VLAN encapsulation, or
    b. With VXLAN encapsulation
  2. No Local Switching Mode
    a. With VXLAN encapsulation

 

The following two figures show the Cisco AVS modes and the different options.

The forwarding mode is selected during Cisco AVS configuration in the APIC, at the time the VMware vCenter Domain is created. VMware vCenter Domain creation is the step in which the APIC communicates with vCenter and dynamically creates the Cisco AVS Distributed Virtual Switch in vCenter.

 

No Local Switching Mode

“No Local Switching” mode was formerly known as FEX enable mode. In “No Local Switching” mode, all traffic (both intra-EPG and inter-EPG) is forwarded by the physical leaf. In this mode, VXLAN is the only allowed encapsulation type.

Local Switching

“Local Switching” mode was formerly known as FEX disable mode. In this mode, all intra-EPG traffic is forwarded locally by the Cisco AVS, without involving the physical leaf, when the traffic is bound for the same host. All inter-EPG traffic is forwarded through the physical leaf.

 

 

In “Local Switching” mode, the Cisco AVS can use either VLAN or VXLAN encapsulation for forwarding traffic to the leaf and back. The encapsulation type is selected during Cisco AVS installation.

 

  • If VLAN encapsulation mode is used, a range of VLANs must be available for use by the Cisco AVS. These VLANs have local scope: they have significance only within the Layer 2 network between the Cisco AVS and the leaf.

 

  • If VXLAN encapsulation mode is used, only the infra VLAN needs to be available between the Cisco AVS and the leaf. This results in a simplified configuration, and VXLAN is the recommended encapsulation mode if there are one or more switches between the Cisco AVS and the leaf.

 

Cisco recommends Local Switching mode. This mode provides the best performance and forwarding by keeping traffic local to the AVS host itself when the endpoints are part of the same EPG and reside on the same host.

 

Switch Failover and Link Aggregation

 

Network architects can use different approaches for protection against switch or link failure and for link aggregation. The most common design approaches with Cisco AVS are virtual PortChannel (vPC) and MAC Pinning. Both design approaches protect against single-link and physical-switch failures, but they differ in the way the virtual and physical switches are coupled and in the way the VMware ESX or ESXi server traffic is distributed over the 10 Gigabit Ethernet links. The essence of all these approaches is Port-Channel technology.

 

 

Port-Channel Technology

A Port-Channel (also referred to as an EtherChannel) on the Cisco AVS implements the standards-based IEEE 802.3ad (now IEEE 802.1AX) link aggregation standard, which incorporates the Link Aggregation Control Protocol (LACP) for automatic negotiation.

 

 

LACP

LACP dynamically bundles several physical ports together to form a single port channel. LACP enables a node to negotiate automatic bundling of links by sending LACP packets to the peer node; it is simply a way to build a Port-Channel dynamically. Essentially, the “active” end of the LACP group sends out special frames advertising the ability and desire to form a Port-Channel.
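
For illustration, the following is a minimal NX-OS sketch of an LACP-negotiated Port-Channel on an upstream switch; the interface and channel-group numbers are arbitrary examples. At least one end of the bundle must run LACP in active mode (“mode passive” on the other end is then acceptable).

feature lacp
interface ethernet 1/1-2
    ! Negotiate the bundle with LACP instead of forcing it on
    channel-group 1 mode active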

 

 

Standard Port Channel

A standard Port-Channel requires that all uplinks from one ESXi host in the Port-Channel group be connected to the same single upstream physical switch.

 

 

Virtual Port Channel

When ESXi host uplinks are spread across more than one upstream physical switch, the upstream switches are clustered using Virtual Port-Channel (vPC).

 

 

Static Port-Channel

 

When the LACP protocol is not running on the links, the bundle is called a Static Port-Channel. In Cisco IOS and Cisco NX-OS, Static Port-Channel mode is also called “mode on”. The following commands show a Static Port-Channel configuration on a Nexus 5000 switch:

 

interface ethernet 1/1-2
    channel-group 1 mode on

 

In contrast, on the APIC controller, LACP mode “Off” represents a Static Port-Channel configuration.

A maximum of 16 links can be configured to form a Port-Channel group.

 

You can configure one of several types of port channel policies on the Cisco AVS: Link Aggregation Control Protocol (LACP) in active or passive mode, MAC Pinning, or static. You can configure port channel policies through the Cisco APIC GUI, the REST API, or the CLI. The APIC port channel modes are:

 

  • MAC Pinning - MAC Pinning
  • Active - LACP active
  • Passive - LACP passive
  • Off - LACP off (i.e., Static Port-Channel)

 

MAC Pinning

In MAC Pinning mode, the Gigabit Ethernet uplinks from the Cisco AVS are treated as stand-alone links. In a two-uplink scenario, each Gigabit Ethernet interface is connected to a separate physical switch, with Layer 2 continuity on all IEEE 802.1Q trunked VLANs between the two switches. Virtual Ethernet ports supporting virtual machines and VMkernel ports are allocated in a round-robin fashion over the available Gigabit Ethernet uplinks, and each MAC address is pinned to one of the uplinks until a failover event occurs. MAC Pinning does not rely on any protocol to distinguish the different upstream switches, making the deployment independent of any hardware or design. This independence enables consistent and easy deployment of the Cisco AVS, and it is the preferred method for deploying the Cisco AVS when the upstream switches cannot be clustered using Cisco vPC.
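
Because MAC Pinning treats each uplink as an independent link, no Port-Channel is configured on the upstream switches. As a minimal sketch (the interface and VLAN numbers are illustrative), each upstream-switch port facing the host is simply an IEEE 802.1Q trunk, with the same VLANs allowed on both switches:

interface ethernet 1/10
    ! Plain trunk; no channel-group is needed for MAC Pinning
    switchport mode trunk
    switchport trunk allowed vlan 100-110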

 

Virtual Port Channel (vPC)

vPC is required on the upstream physical switches to enable the Port-Channel to span both upstream physical switches and still maintain availability for the VMware ESXi host should one switch fail or lose connectivity. This vPC clustering is transparent to the Cisco AVS: from the AVS host's point of view, the vPC cluster appears as one single switch.

 

 

For vPC to work, the Cisco Application Virtual Switch should be configured with an LACP Port-Channel (the configuration is done through the APIC), with the two Gigabit Ethernet uplinks defined by one port profile.

 

 

The two upstream physical switches should be configured with vPC. The upstream switches (for example, Cisco Nexus 5000 or 7000 Series switches) will appear as a single logical switch distributed over two physical chassis.
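
The following is an illustrative NX-OS sketch of the upstream side; the domain ID, keepalive addresses, and interface numbers are examples only, and the same structure is mirrored on the second vPC peer:

feature vpc
feature lacp

! vPC domain and peer-keepalive (addresses are examples)
vpc domain 10
    peer-keepalive destination 192.168.1.2 source 192.168.1.1

! Peer-link between the two vPC peer switches
interface port-channel 1
    switchport mode trunk
    vpc peer-link

! Member Port-Channel toward the AVS/ESXi host
interface port-channel 20
    switchport mode trunk
    vpc 20

interface ethernet 1/10
    channel-group 20 mode active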

 

 

Differences Between vPC and MAC Pinning

 

 

Design: vPC
Uplinks: Single logical Port-Channel
Physical-switch requirements: Clustered physical switches using a multichassis EtherChannel (MEC) implementation such as Cisco vPC, Virtual Switching System (VSS), or Virtual Blade Switch (VBS) technologies

Design: MAC Pinning
Uplinks: All teamed uplinks in the same Layer 2 domain
Physical-switch requirements: No special configuration other than Layer 2 continuity between both switches on all VLANs trunked to the VMware ESX or ESXi server

 

 

vPC is the recommended approach when vPC or clustered physical switches are available at the physical access layer. MAC Pinning should be chosen when these options are not available.

 

 

AVS Recommended and Tested Topologies

 

At a very high level, there are seven commonly deployed topologies supported by AVS. For ease of understanding, these are divided into two groups:

 

 

  1. Standard Topologies
    • Topology #1 - AVS host directly connected to N9K leaf switch
    • Topology #2 - AVS host connected to N9K leaf switch via FEX
    • Topology #3 - AVS host connected to N9K leaf switch via UCS FI

  2. Extended Topologies
    • Topology #4 - AVS host connected to N9K leaf via a single physical switch
      a. Double-Sided vPC with Nexus 5000 and AVS with MAC Pinning
      b. Double-Sided vPC with Nexus 5000 and AVS with vPC
    • Topology #5 - AVS host connected to N9K leaf via a switch-FEX
    • Topology #6 - AVS host connected to N9K leaf via multiple switches
    • Topology #7 - AVS host connected to N9K leaf via UCS FI and a switch
      a. Single-Sided vPC with Nexus 5000/UCS FI and AVS with MAC Pinning

 

 

Topology #1 - AVS Host Directly Connected to ACI Fabric

In this topology, the ESXi host is directly connected to an ACI leaf switch. This is the typical scenario where a rack-mount server (for example, a Cisco UCS C-Series server) runs the ESXi hypervisor with AVS as its distributed virtual switch. VLAN encapsulation is recommended in this topology.

 

For link aggregation, AVS supports both the MAC Pinning and LACP active policies.

Topology #2 - AVS Host Connected to ACI Fabric via FEX

 

In this topology, the AVS ESXi host (for example, a rack-mount server such as a Cisco UCS C-Series server) is connected to a FEX, which is in turn directly connected to an ACI leaf switch. VLAN local switching mode, VXLAN local switching mode, and VXLAN no local switching mode are all supported with this topology, and VLAN is the recommended encapsulation.

There are some limitations with this topology that you should be aware of:

  • vPC is not supported between the FEX and the leaf, or for AVS (ESXi) hosts directly connected to the FEX.
  • Only a single physical link is supported between the FEX and an AVS (ESXi) host connected to that FEX. This means that LACP or MAC Pinning is not supported for AVS (ESXi) hosts connected directly to a FEX that is connected directly to a leaf.
  • For the single physical link between the FEX and an AVS (ESXi) host, when you choose an LACP mode for the Cisco AVS, you should choose either MAC Pinning or Off on the APIC controller, as shown in the following diagram.
  • These limitations do not apply to a fabric extender that is connected to a Nexus 5000 or 7000 switch that is in turn connected to a leaf (as shown in Topology #5).

 

Topology #3 - AVS Host Connected to ACI Fabric via UCS FI

In Topology #3, the ESXi host runs on a Cisco UCS B-Series blade server. The B-Series server chassis is connected to a UCS Fabric Interconnect (FI), which is then directly connected to an ACI leaf switch. This topology connects the ESXi hypervisor to the Cisco APIC through the Fabric Interconnect, using vPCs, LACP, and MAC Pinning. In Topology #3, one can use either VLAN or VXLAN; the main concept is to use VXLAN when there is more than one hop between the ESXi host and the leaf switch. The following figure shows the logical diagram of Topology #3.

Topology #4 - AVS Host Connected to Leaf via Switch

This topology addresses a very common use case where, for example, a customer has already deployed Cisco Nexus 5000, 6000, or 7000 Series switches and wants to insert the Cisco ACI fabric into the current architecture. This topology connects the ESXi hypervisor to the Cisco APIC through a Cisco Nexus 5000 switch, using virtual port channels and MAC Pinning. VXLAN is used in this topology.

 

Double-Sided vPC with Nexus 5000 and AVS with MAC Pinning

Double-Sided vPC with Nexus 5000 and AVS with vPC

Topology #5 - AVS Host Connected to Leaf via Switch-FEX

This topology is not very different from Topology #4. The only difference is that the Layer 2 switch between the AVS and the leaf switch has a Cisco Nexus 2000 Fabric Extender (FEX) attached, and the ESXi host running AVS is connected to the FEX. This is the most common scenario where the FEX is deployed as a top-of-rack (ToR) switch. VXLAN is used in this topology. Topologies #4 and #5 work almost the same in terms of multicast. A configuration sketch for the FEX attachment follows.
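
For illustration only (the FEX number and interface are arbitrary examples), a Nexus 2000 FEX is associated with its parent Nexus 5000 along these lines:

feature fex

! Uplink from the Nexus 5000 down to the FEX
interface ethernet 1/1
    switchport mode fex-fabric
    fex associate 100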


Topology #6 - AVS Host Connected to Leaf via Multiple Switches

This topology represents a customer data center running a core and aggregation architecture with Cisco Nexus 5000 or 7000 Series switches, where the customer wants to migrate to an ACI-based architecture in phases. The leaf switch can be connected to a Nexus 5000 or 7000 switch at the aggregation layer. VXLAN is recommended in this topology.

Topology #7 - AVS Host Connected to Leaf via UCS FI and Switch

It is highly recommended to use VXLAN here because there is more than one hop between the AVS and the leaf switch.

When you are working with Cisco UCS B-Series servers and using an APIC policy, Link Layer Discovery Protocol (LLDP) is not supported and must be disabled.

Cisco Discovery Protocol (CDP) is disabled by default in the Cisco UCS Manager Fabric Interconnect. In Cisco UCS Manager, you must enable CDP by creating a policy under Network Control Policies > CDP.

Do not enable fabric failover on the adapters in the UCS server service profiles.

Refer to the following documentation for detailed information about these limitations:

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/getting-started/b_APIC_Getting_Started_Guide/b_APIC_Getting_Started_Guide_chapter_010.html

 

 

Single-Sided vPC with Nexus 5000/UCS FI and AVS with MAC Pinning

This topology connects the AVS ESXi host to the leaf switches using MAC Pinning, either directly or through Cisco Nexus 5000 switches and Cisco UCS 62xx Series Fabric Interconnects.

The following configurations are recommended when deploying Cisco AVS with the Cisco APIC (an illustrative switch-side snippet follows this list):

  • The infra VLAN must be configured in the Layer 2 network to establish the OpFlex connection between the N9K leaf and the AVS.
  • The infra VLAN must be configured on the leaf-side and AVS-side ports of the Cisco Fabric Interconnect.
  • A DHCP relay policy must be configured for AVS so that the APIC can assign the IP address for the AVS VTEP VMkernel NIC on the ESXi host.
  • Configure vMotion on a separate VMkernel NIC with a dedicated EPG. Do not configure vMotion on the VMkernel NIC created for the OpFlex channel.
  • Do not delete or change any parameters of the VMkernel NIC created for the OpFlex channel.
  • If the VMkernel NIC created for the OpFlex channel is deleted by mistake, recreate it, attach it to the vtep port group, and configure it with a dynamic IP address. Never configure a static IP address for the OpFlex VMkernel NIC.
  • The Cisco Fabric Interconnect does not support LACP on its southbound ports, so it is recommended to configure AVS with the MAC Pinning policy (as shown in the figure).
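
As a minimal, hypothetical sketch for the first two bullets (the VLAN and interface numbers are examples; use the infra VLAN chosen during APIC setup), any intermediate Layer 2 switch must carry the infra VLAN on its trunks toward both the leaf and the AVS host:

vlan 3967

! Trunk toward the leaf (repeat for the trunk toward the AVS host / FI)
interface ethernet 1/10
    switchport mode trunk
    switchport trunk allowed vlan add 3967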


Support Table

 

The UCS port channel configuration is statically set to Link Aggregation Control Protocol (LACP) mode active. This configuration cannot be modified; therefore, all upstream port-channel configurations must adhere to LACP mode active as well (alternatively, you can configure the upstream switch ports for LACP mode passive).
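
As an illustrative sketch on a Nexus 5000/7000 upstream of the Fabric Interconnect (the interface and Port-Channel numbers are examples; on an ACI leaf, the equivalent is set through an APIC port channel policy), the ports facing the FI uplinks are bundled with LACP active:

feature lacp

interface port-channel 10
    switchport mode trunk

! Ports facing the Fabric Interconnect uplinks
interface ethernet 1/1-2
    channel-group 10 mode active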

The following table lists the different switches and their supported port-channel modes:

 

                          LACP                     MAC Pinning   Static Port-Channel   vPC

AVS                       Yes                      Yes           Yes                   Not Applicable

Nexus 9000 Leaf           Yes                      Yes           Yes                   Yes

UCS Fabric Interconnect   Yes (northbound ports)   Yes           No                    No
                          No (southbound ports)

Nexus 5000 /              Yes                      Yes           Yes                   Yes
Nexus 7000


Please refer to the following supporting documents for more detailed information.

Cisco Application Virtual Switch

http://www.cisco.com/c/en/us/products/switches/application-virtual-switch/index.html

Cisco Application Centric Infrastructure Fundamentals

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/1-x/aci-fundamentals/b_ACI-Fundamentals.html


Credits

The author of this document is Shahzad Ali. Shahzad is part of the Cloud Networking and Virtualization Product Management team at Cisco Systems, Inc.

Special thanks to Balaji Sivasubramanian (Director, Product Management), Srinivas Sardar (Senior Manager, Software Development), and Krishna Nekkanti (Software Engineer) for providing content and feedback.

 

 

