Shahzad Ali is part of the Cloud and Networking Services Product Management team. He is currently involved in marketing, managing, and selling the Cisco Application Centric Infrastructure (ACI) and Cisco Application Virtual Switch (AVS) products in the SDN and virtual networking areas.
In the past, he worked as a Technical Marketing Engineer for Unified Communications systems and for Data Center automation and virtualization solutions (CIAC and desktop virtualization). He has been instrumental in large system-level network designs and in driving strategy for cloud automation and orchestration, desktop virtualization, Unified Communications, and Unified Contact Center Enterprise.
He has authored many documents, including SRNDs/CVDs, network architecture white papers, and implementation guides for large enterprise and service provider customers.
Biography from communities:
Shahzad Ali is a Technical Marketing Engineer at Cisco. He is responsible for system-level network designs and strategy. He defines best practices for virtualized and non-virtualized Data Center Solutions.
Previously, he worked as a QA team lead and also in the Cisco Technical Assistance Center (TAC), supporting complex IP networks.
Shahzad holds a double CCIE certification in Voice and Routing & Switching tracks. He has a bachelor’s degree in Computer Systems Engineering and a master’s degree in Electrical Engineering.
This document is intended as a solution-level reference for technical professionals responsible for planning, designing, and implementing the Cisco Application Virtual Switch (AVS) for Data Center customers.
This document provides AVS planning considerations and topology recommendations, but does not discuss all the foundational technologies, procedures and best practices for deploying the routing, switching and data center setup required by the solution. Instead, it refers to detailed documents that discuss those technologies and implementation methods, while focusing on specific configuration and topologies for deploying AVS within the ACI (Application Centric Infrastructure) Solution.
This document assumes that readers have a thorough understanding of Cisco ACI (Application Centric Infrastructure) and other Cisco Data Center technologies. Please refer to the following links to understand these concepts.
Cisco Application Virtual Switch (AVS) Introduction
Cisco Application Virtual Switch (AVS) is a hypervisor-resident distributed virtual switch that is specifically designed for the Cisco Application Centric Infrastructure (ACI) and managed by Cisco APIC (Application Policy Infrastructure Controller).
AVS for Application Centric Infrastructure
The Cisco AVS is integrated with the Cisco Application Centric Infrastructure (ACI). Unlike the Nexus 1000V, which is managed by a dedicated Virtual Supervisor Module, the Cisco AVS is managed by the Cisco APIC. Cisco AVS implements the OpFlex protocol for control-plane communication.
For more details about Cisco AVS, refer to the following URL: http://www.cisco.com/c/en/us/products/switches/application-virtual-switch/index.html
Cisco AVS is the Cisco distributed vSwitch for ACI mode. If you are running the Nexus 9000 in standalone mode, you can use the Cisco Nexus 1000V as the distributed vSwitch.
Before we dive into the specifics of AVS, it is important to understand the basic concepts about Cisco Application Centric Infrastructure fabric.
Cisco ACI Fabric Overview
The Cisco Application Centric Infrastructure (ACI) fabric includes Cisco Nexus 9000 Series switches with the APIC (Application Policy Infrastructure Controller) running in the leaf/spine ACI fabric mode. A recommended minimum configuration includes:
- Three Cisco Nexus 9K switches (9500 Series or 9336PQ) deployed as spines.
- Cisco Nexus 9K switches (9300 Series) as leaf switches or nodes; only these can connect to the spine switches, and all other devices, appliances, and switches connect to the leaf nodes.
- The APIC, an appliance running on a Cisco UCS server that connects to one of the leaf nodes and manages the ACI fabric. The recommended minimum configuration for the APIC is a cluster of three replicated hosts. Each APIC can be connected to two different leaf switches for redundancy.
The APIC fabric management functions do not operate in the data path of the fabric. Management is done via an out-of-band management network.
The following figure illustrates an overview of the leaf/spine ACI fabric.
The ACI fabric provides low-latency forwarding across high-bandwidth links (40 Gbps, with a 100-Gbps future capability). Traffic with the source and destination on the same leaf switch/node is handled locally. All other traffic traveling from the ingress leaf to the egress leaf goes through a spine switch. Although this architecture appears as two hops from a physical perspective, it is actually a single Layer 3 hop because the fabric operates as a single Layer 3 switch.
End Point Groups (EPGs)
For security and policy enforcement, admins typically group endpoints (EPs) with identical semantics into endpoint groups (EPGs) and then write policies that regulate how such groups can interact with each other.
AVS Switching Modes
Cisco AVS supports two modes of traffic forwarding:
- Local Switching Mode, with VLAN or VXLAN encapsulation
- No Local Switching Mode, with VXLAN encapsulation
The following two figures show the Cisco AVS modes and the different options.
The forwarding mode is selected during Cisco AVS configuration in the APIC, at the time the VMware vCenter domain is created. VMware vCenter domain creation is the step in which the APIC communicates with vCenter and dynamically creates the Cisco AVS distributed virtual switch in vCenter.
No Local Switching Mode
“No Local Switching” mode was formerly known as FEX enable mode. In “No Local Switching” mode, all traffic (intra-EPG and/or inter-EPG) is forwarded by the physical leaf. In this mode, VXLAN is the only allowed encapsulation type.
Local Switching Mode
“Local Switching” mode was formerly known as FEX Disable mode. In this mode, all intra-EPG traffic is locally forwarded by the Cisco AVS without the involvement of the physical leaf, provided the traffic is bound for the same host. All inter-EPG traffic is forwarded via the physical leaf.
In “Local Switching” mode, the Cisco AVS can either use VLAN or VXLAN encapsulation for forwarding traffic to the leaf and back. The encapsulation type is selected during Cisco AVS installation.
If VLAN encapsulation mode is used, a range of VLANs must be available for use by the Cisco AVS. These VLANs have local scope in that they have significance only within the Layer 2 network between the Cisco AVS and the leaf.
If VXLAN encapsulation mode is used, only the infra-VLAN needs to be available between the Cisco AVS and the leaf. This results in a simplified configuration, and it is the recommended encapsulation mode if there are one or more switches between the Cisco AVS and the leaf.
Cisco recommends Local Switching mode. This mode provides the best performance and forwarding by keeping traffic local to the AVS host itself when the endpoints are part of the same EPG and reside on the same host.
Switch Failover and Link Aggregation
Network architects can use different approaches for protection against switch or link failure and for link aggregation. The most common design approaches with Cisco AVS are virtual PortChannel (vPC) and MAC pinning. Both design approaches provide protection against single-link and physical-switch failures, but they differ in the way the virtual and physical switches are coupled and in the way VMware ESX or ESXi server traffic is distributed over the 10 Gigabit Ethernet links. The essence of all these approaches is Port-Channel technology.
A Port-Channel (also referred to as Ether-Channel) on the Cisco AVS implements the standards-based IEEE 802.3ad or 802.1AX link aggregation protocol that incorporates the Link Aggregation Control Protocol (LACP) for automatic negotiation.
LACP dynamically bundles several physical ports together to form a single port channel. LACP enables a node to negotiate automatic bundling of links by sending LACP packets to the peer node; it is simply a way to dynamically build a Port-Channel. Essentially, the “active” end of the LACP group sends out special frames advertising the ability and desire to form a Port-Channel.
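For example, a minimal NX-OS sketch of enabling an LACP-negotiated Port-Channel on two upstream switch ports (the interface and channel-group numbers are illustrative):
# interface ethernet 1/1-2
channel-group 1 mode active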
Standard Port Channel
A standard Port-Channel requires that all uplinks from one ESXi host in the Port-Channel group be connected to the same single upstream physical switch.
Virtual Port Channel
When ESXi host uplinks are spread across more than one upstream physical switch, the upstream switches are clustered using Virtual Port-Channel (vPC).
When the LACP protocol is not running on the links, it is called static Port-Channel mode. In Cisco IOS and Cisco NX-OS, static Port-Channel mode is also called “Mode ON”. The following commands configure a static Port-Channel on a Nexus 5000 switch.
# interface ethernet 1/1-2
channel-group 1 mode on
On the contrary, in the APIC controller, LACP mode “Off” represents a static Port-Channel configuration.
A maximum of 16 links can be configured to form a Port-Channel group.
You can configure one of several types of port channel policies on the Cisco AVS: Link Aggregation Control Protocol (LACP) in active or passive mode, MAC pinning, or static. You can configure port channel policies through the Cisco APIC GUI, the REST API, or the CLI (a REST sketch follows the list below).
- MAC Pinning — MAC pinning
- Active — LACP active
- Passive — LACP passive
- Off — LACP off (i.e., static Port-Channel)
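As a reference, a port channel policy pushed through the REST API might look like the following (a hedged sketch: the policy name "AVS-MacPin" is illustrative, and the exact class placement and attribute values should be verified against your APIC release):
POST https://<apic>/api/mo/uni/infra.xml
<lacpLagPol name="AVS-MacPin" mode="mac-pin"/>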
In MAC Pinning mode, the Gigabit Ethernet uplinks from the Cisco AVS are treated as stand-alone links. In a two-uplink scenario, each Gigabit Ethernet interface is connected to a separate physical switch with Layer 2 continuity on all IEEE 802.1Q trunked VLANs between the two switches. Virtual Ethernet ports supporting virtual machines and vmkernel ports are allocated in a round-robin fashion over the available Gigabit Ethernet uplinks. Each MAC address is pinned to one of the uplinks until a failover event occurs. MAC pinning does not rely on any protocol to distinguish the different upstream switches, making the deployment independent of any hardware or design. This independence enables consistent and easy deployment of the Cisco AVS, and it is the preferred method for deploying the Cisco AVS when the upstream switches cannot be clustered using Cisco vPC.
Virtual Port Channel (vPC)
vPC is required on the upstream physical switches to enable the Port-Channel to span both upstream physical switches and still maintain availability for the VMware ESXi host should one switch fail or lose connectivity. This vPC clustering is transparent to the Cisco AVS: from the AVS host's point of view, the vPC cluster appears as one single switch.
For vPC to work, the Cisco Application Virtual Switch should be configured with an LACP Port-Channel (configuration is done via the APIC), with the two Gigabit Ethernet uplinks defined by one port profile.
The two upstream physical switches should be configured with vPC. The upstream switches (for example, Cisco Nexus 5000 or 7000 Series switches) will appear as a single logical switch distributed over two physical chassis.
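A minimal NX-OS sketch of the upstream vPC configuration, applied on both peer switches (the domain ID, keepalive addresses, and interface numbers are illustrative assumptions):
feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
interface port-channel 10
  switchport mode trunk
  vpc peer-link
interface port-channel 1
  switchport mode trunk
  vpc 1
interface ethernet 1/3-4
  channel-group 1 mode active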
Differences Between vPC and MAC Pinning
vPC:
- Uplinks: single logical Port-Channel
- Physical-switch requirements: clustered physical switches using a multichassis EtherChannel (MEC) implementation such as Cisco vPC, Virtual Switching System (VSS), or Virtual Blade Switch (VBS) technologies
MAC Pinning:
- Uplinks: all teamed uplinks in the same Layer 2 domain
- Physical-switch requirements: no special configuration other than Layer 2 continuity between both switches on all VLANs trunked to the VMware ESX or ESXi server
vPC is the recommended approach when vPC or clustered physical switches are available at the physical access layer. MAC pinning should be chosen when these options are not available.
AVS Recommended and Tested Topologies
At a very high level, there are seven commonly deployed topologies supported by AVS. For ease of understanding, these are divided into two groups.
Standard Topologies
- Topology#1: AVS host directly connected to N9K leaf switch
- Topology#2: AVS host connected to N9K leaf switch via FEX
- Topology#3: AVS host connected to N9K leaf switch via UCS FI
Extended Topologies
- Topology#4: AVS host connected to N9K leaf via a single physical switch
  - Double-Sided vPC with Nexus 5000 and AVS with MAC Pinning
  - Double-Sided vPC with Nexus 5000 and AVS with vPC
- Topology#5: AVS host connected to N9K leaf via a switch-FEX
- Topology#6: AVS host connected to N9K leaf via multiple switches
- Topology#7: AVS host connected to N9K leaf via UCS FI and a switch
  - Single-Sided vPC with Nexus 5000/UCS FI and AVS with MAC Pinning
Topology #1 AVS Host Directly Connected to ACI Fabric
In this topology, the ESXi host is directly connected to the ACI leaf switch. This is a typical scenario in which a rack-mount server (for example, a Cisco UCS C-Series server) runs the ESXi hypervisor with AVS as its distributed virtual switch. VLAN encapsulation is recommended in this topology.
For link aggregation, AVS supports both MAC pinning and the LACP active policy.
Topology#2 AVS Host Connected to ACI Fabric via FEX
In this topology, the AVS ESXi host (for example, a rack-mount server such as a Cisco UCS C-Series server) is connected to the FEX, and the FEX is directly connected to the ACI leaf switch. VLAN local switching mode, VXLAN local switching mode, and VXLAN no local switching mode are supported with this topology. VLAN encapsulation is recommended with this topology.
There are some limitations with this topology that you should be aware of:
- vPC is not supported between the FEX and the leaf, or for AVS (ESXi) hosts directly connected to the FEX.
- Only a single physical link is supported between the FEX and an AVS (ESXi) host connected to that FEX. This means that LACP and MAC pinning are not supported for AVS (ESXi) hosts connected directly to a FEX that is connected directly to a leaf.
- For the single physical link between the FEX and an AVS (ESXi) host, when you choose an LACP mode for Cisco AVS, choose either MAC Pinning or Off on the APIC controller, as shown in the following diagram.
These limitations do not apply to a fabric extender that is connected to a Nexus 5000 or 7000 switch that is in turn connected to a leaf (as shown in Topology#5).
Topology#3 AVS Host Connected to ACI Fabric via UCS FI
In Topology#3, the ESXi host runs on a Cisco UCS B-Series blade server. The B-Series server, or its chassis, is connected to the UCS Fabric Interconnect, and the FI is directly connected to the ACI leaf switch. This topology connects the ESXi hypervisor to the ACI fabric via the fabric interconnect, using vPCs, LACP, and MAC pinning. In Topology#3 you can use either VLAN or VXLAN; the main concept is to use VXLAN when there is more than one hop between the ESXi host and the leaf switch. The following figure shows the logical diagram of Topology#3.
Topology#4 AVS Host Connected to Leaf via Switch
This topology is a very common use case in scenarios where, for example, a customer has already deployed Cisco Nexus 5000, 6000, or 7000 switches and wants to insert the Cisco ACI fabric into the current architecture. This topology connects the ESXi hypervisor to the ACI fabric through the Cisco Nexus 5000 switch, using virtual port channels and MAC pinning. VXLAN is used in this topology.
Double-Sided vPC with Nexus 5000 and AVS with MAC Pinning
Double-Sided vPC with Nexus 5000 and AVS with vPC
Topology#5 AVS Host Connected to Leaf via Switch-FEX
This topology is not very different from Topology#4. The only difference is that the Layer 2 switch between the AVS and the leaf switch has a Cisco Nexus 2000 Fabric Extender (FEX) attached, and the ESXi host running AVS is connected to the FEX. This is the most common scenario where a FEX is deployed as a top-of-rack (ToR) switch. VXLAN is used in this topology. Topology#4 and Topology#5 work almost the same in terms of multicast.
Topology#6 AVS Host Connected to Leaf via Multiple Switches
This topology represents a customer running a data center with a core and aggregation architecture using Cisco Nexus 5000 or 7000 Series switches. The customer wants to migrate to an ACI-based architecture in phases. The leaf switch could be connected to a Nexus 5000 or 7000 switch at the aggregation layer. VXLAN is recommended in this topology.
Topology#7 AVS Host Connected to Leaf via UCS FI and Switch
It is highly recommended to use VXLAN here because there is more than one hop between the AVS and the leaf switch.
When you are working with the Cisco UCS B-series server and using an APIC policy, Link Layer Discovery Protocol (LLDP) is not supported and must be disabled.
Cisco Discovery Protocol (CDP) is disabled by default in the Cisco UCS Manager fabric interconnect. In Cisco UCS Manager, you must enable CDP by creating a policy under Network Control Policies > CDP.
Do not enable Fabric failover on the adapters in the UCS server service profiles.
Refer to the following documentation for detailed information about these limitations.
Single-Sided vPC with Nexus 5000/UCS FI and AVS with MAC Pinning
This topology connects the AVS ESXi host to the leaf switches using MAC pinning, directly or via Cisco Nexus 5000 switches and Cisco UCS 62xx Series Fabric Interconnects.
AVS Implementation Best Practices
The following configurations are recommended when deploying Cisco AVS with the Cisco APIC:
- The infra VLAN must be configured in the Layer 2 network to establish the OpFlex connection between the N9K leaf and the AVS.
- The infra VLAN must be configured on the leaf-side and AVS-side ports of the Cisco Fabric Interconnect.
- A DHCP relay policy must be configured for AVS so that the APIC can assign the IP address for the AVS VTEP vmknic in the ESXi host (see the verification command after this list).
- Configure vMotion on a separate VMkernel NIC with a dedicated EPG. Do not configure vMotion on the VMkernel NIC created for the OpFlex channel.
- Do not delete or change any parameters of the VMkernel NIC created for the OpFlex channel.
- If the VMkernel NIC created for the OpFlex channel is deleted by mistake, recreate it attached to the vtep port group and configure it with a dynamic IP address. Never configure a static IP address for the OpFlex vmknic.
- The Cisco Fabric Interconnect does not support LACP on its southbound ports, so it is recommended to configure AVS with the MAC Pinning policy (as shown in the figure).
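To verify on the ESXi host that the OpFlex/VTEP VMkernel NIC exists and received its DHCP-assigned address, you can list the VMkernel NICs from the ESXi shell (a standard ESXi command; the vtep vmk should show a dynamic IP):
# esxcfg-vmknic -l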
UCS port channel configuration is statically set to Link Aggregation Control Protocol (LACP) mode active. This configuration cannot be modified; therefore, all upstream port-channel configurations must adhere to LACP mode active as well. Alternatively, you can configure the upstream switch ports for LACP mode passive.
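For example, on an upstream Nexus switch facing the UCS FI northbound ports, the member interfaces would run LACP active (or passive); a minimal sketch with illustrative interface and channel-group numbers:
# interface ethernet 1/1-2
channel-group 10 mode active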
The following table lists different switches and their supported port-channeling modes.
AVS: LACP: Yes; MAC Pinning: Yes; Static Port Channel: Yes; vPC: Not Applicable
Nexus 9000 Leaf: LACP: Yes; MAC Pinning: Yes; Static Port Channel: Yes; vPC: Yes
UCS Fabric Interconnect: LACP: Yes (northbound ports), No (southbound ports); MAC Pinning: Yes; Static Port Channel: No; vPC: No
Nexus 5000 / Nexus 7000: LACP: Yes; MAC Pinning: Yes; Static Port Channel: Yes; vPC: Yes
Please refer to the following supporting documents for more detailed information.
Cisco Application Virtual Switch
Cisco Application Centric Infrastructure Fundamentals
The author of this document is Shahzad Ali. Shahzad is part of the Cloud Networking and Virtualization Product Management team at Cisco Systems, Inc.
Special thanks to Balaji Sivasubramanian (Director, Product Management), Srinivas Sardar (Senior Manager, Software Development) and Krishna Nekkanti (Software Engineer) for providing content and feedback.
This document is intended as a reference for technical professionals managing and operating an OpenStack cloud-based Infrastructure as a Service (IaaS) using Cisco Nexus 1000V for OpenStack as the virtual switch. It describes how an OpenStack admin can offer network, storage, and compute services to tenant admins, and how tenant admins consume those services and offer them to customers within their organizations.
Nexus 1000V For OpenStack Setup
The following versions are used in this document:
- Compute Node 1 (KVM, Ubuntu 12.04 LTS)
- Compute Node 2 (KVM, Ubuntu 12.04 LTS)
- Network Node (Ubuntu 12.04 LTS)
- OpenStack Controller (Grizzly, Ubuntu 12.04 LTS)
- Build Server (Ubuntu 12.04 LTS)
The following prerequisites are assumed:
- Base OpenStack installation and configuration is complete, and the admin can access the OpenStack Horizon dashboard
- The Nexus 1000V VEM is installed
- The VSM is installed and configured with a basic configuration
For detailed installation instructions, refer to Cisco Nexus 1000V For Ubuntu documentation.
OpenStack Cloud Admin Horizon Dashboard
The following is what the admin sees after logging in to the OpenStack Horizon dashboard, the browser-based GUI.
Connecting to the Nexus 1000V VSM
SSH in to the Nexus 1000V VSM (Virtual Supervisor Module) and run the show version command on the CLI:
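# show version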
The NX-OS version is 5.2(1)SK1(2.1). The "K" in the name indicates the KVM hypervisor version.
Verify the VEM modules on the VSM
Run the show module command to display the various components:
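# show module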
Modules 1 and 2 are reserved for the Virtual Supervisor Module. The Cisco Nexus 1000V supports VSM high availability, so the VSM virtual machine can run in an active/standby high-availability pair. The output shows only one VSM, which means this is a standalone deployment.
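You can confirm the VSM redundancy role with the standard NX-OS command:
# show system redundancy status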
The rest of the modules (6 through 8) are Virtual Ethernet Modules (VEMs). As per the network diagram above, the VSM is managing the network node and the two compute nodes in this setup. As shown at the bottom of the screen, each VEM corresponds to a node, identified by the server IP address and name. This mapping of virtual line card to physical server eases communication between the network and server teams.
Verify service-policy configuration
Verify the default service-policy, created on this VSM, that will be used for instances. The Nexus 1000V allows for the segregation of network policy and service policy (ACL or firewall services, for example). If services such as ACLs or a firewall were to be applied to an instance, the service-policy would be configured accordingly.
To view the default service-policy used in this lab, enter the following command:
# show run port-profile Client-Policy
To check the restricted service-policy configuration, run the command:
# show running-config port-profile restricted-client-policy
Creating a Network-Profile
The cloud admin can create a network profile using the OpenStack Horizon dashboard.
Also notice that the list of Policy Profiles is reflected in the bottom part of the page. This information is retrieved by the OpenStack Controller from the Nexus 1000V VSM and displayed here. We will consume this Policy Profile when we create a new tenant network segment.
Now the admin completes the creation of the Network Profile by entering the Name, Segment Type, and other information as shown below.
The following is what you see on the OpenStack Horizon dashboard after it is done.
The Sub-Type says “Unicast”, although we selected “Enhanced” when creating the network profile. Enhanced mode is in fact unicast; the normal mode of operation is multicast.
The Nexus 1000V supports an enhanced mode for VXLAN which does not require multicast in the underlay network.
You can also verify this on the Cisco Nexus 1000V VSM (Virtual Supervisor Module) by SSHing in to the VSM virtual machine, as shown below:
# show nsm logical network
Create Networks in the Silver-Corp project
Now create networks in the silver-corp project.
We will log in as the silver-corp admin to create networks for the silver-corp company. This is important because it shows that you can hand control to the tenant admin, who can then do the rest from here and manage their own department or organization.
Now we log in as silver-admin; the following is how it appears on the OpenStack dashboard.
Create overlay network segment in the Cloud
In this section, we will perform the following steps:
- Create two VXLAN-backed overlay networks
- Create a logical router and attach the two overlay networks to this router
- Instantiate one virtual machine on each of the two overlay networks
- Test network connectivity between these VMs over the overlay network through the virtual router
After the above step, submit by clicking “Create this network”.
The following is what you see after it is successful.
Now create another network in the same silver-corp project.
After the above steps are done, you can click the Create Subnet button.
Now you see the following screen on the OpenStack Horizon dashboard.
That means it is done successfully.
If you click on either network-1 or network-2, you will see the following type of information.
Now click on net-2; you will see the following screen.
Notice the Segment ID in the above networks. These are your VXLAN segment IDs, assigned automatically.
You can also verify this on the Nexus 1000V VSM. The creation of the networks Silver-Net-1 and Silver-Net-2 triggers the allocation of network segments. The segment configuration can be viewed by entering the command:
# show nsm network segment
These network segments are also configured with IP pool information such as the subnet range and default gateway. This configuration can be verified by entering the command:
# show nsm ip pool template
Corresponding to each network segment, a bridge-domain is created, which corresponds to a single virtual network. The segment ID of a bridge-domain matches the segment ID of the network in the OpenStack dashboard. To view the bridge-domains, enter the command:
# show bridge-domain
The key points to note in this output are:
1. The segment mode is unicast-only. The Nexus 1000V supports an enhanced mode for VXLAN which does not require multicast in the underlay network
2. The segment ID matches the ID for the networks in the OpenStack dashboard
3. The virtual interfaces correspond to each bridge-domain
VXLAN requires each node in the network to have a Virtual Tunnel Endpoint (VTEP) configured. A VTEP is used as the source or destination of the encapsulated VXLAN packet. To view the bridge-domains associated with the VTEPs, enter the command:
# show bridge-domain vteps
Configuring a Router to connect the networks
Then click Create Router; the following is what you will see.
You see the following:
Now add an interface to this router.
You will see the following interface created:
Now if you click on the interface name (18088510), you will see the following info.
Now repeat the steps above and create the second interface on the same logical router, as shown in the following diagram.
Click on the second interface ID.
Verifying the Network Topology
Click on the network topology.
Launching Virtual Machine Instances
Leave Access & Security at the defaults, as shown below.
But change the Networking tab options as follows:
Leave the volume settings at the defaults, as shown below.
Post-Creation is left at the defaults too, as shown below.
Now click Launch on the above diagram.
Now do the same for the other network, silver-net-2.
Verify Network Connectivity between VM-1 and VM-2
This shows how Nexus 1000V for OpenStack will be consumed and used by tenant administrators.
The Nexus 1000V is based on two important pillars:
- Mobility of the network
- A non-disruptive operational model
For more information about the Cisco Nexus 1000V, visit http://www.cisco.com/go/nexus1000v
Most Frequently Used Terminologies and Acronyms
The Cisco Nexus 1000V is a distributed virtual switch. It is a multi-hypervisor switch supported on VMware ESXi, Microsoft Hyper-V, and OpenStack KVM hypervisors.
OpenStack is open-source software for creating private and public clouds. OpenStack software controls large pools of compute, storage, and networking resources throughout a datacenter, managed through a dashboard or via the OpenStack API.
KVM (Kernel-based Virtual Machine) is a Linux kernel module that allows a user-space program to utilize the hardware virtualization features of various processors. It can run multiple virtual machines using unmodified Linux or Windows images.
Virtual Tunnel Endpoint
These interfaces represent the VTEPs on each host that function as the VXLAN tunnel endpoint for the overlay network. A VTEP is required on every node that will participate in the overlay. Both compute nodes and the network node have a VTEP configured.
The author of this document is Shahzad Ali, Cisco Systems Cloud and Network Virtualization Group. This document was written using Cisco's demonstration lab equipment and resources.
Deploying the Cisco AVS on ACI using VSUM - VoD 2/2
This video covers the following topics:
- VSUM Download
- VSUM Installation
- AVS VEM install on the ESXi host using VSUM
- Adding the host to the AVS Distributed Switch
This video series shows how to configure and deploy the Cisco Application Virtual Switch on ACI using the Cisco Virtual Switch Update Manager. The following topics are covered in this video:
- AVS and VSUM Introduction
- APIC VMM Domain creation for Cisco AVS
Cisco Application Virtual Switch
The Application Virtual Switch is a purpose-built, hypervisor-resident virtual switch designed for the ACI fabric. AVS is integrated into the ACI architecture and supports Application Network Profile (ANP) enforcement at the virtual host layer, consistent with the Nexus 9000 Series physical switches. AVS is managed centrally, along with the rest of the ACI fabric components, through the Application Policy Infrastructure Controller (APIC).
Virtual Machine Manager Domains
The APIC is a single pane of glass that automates the entire networking for all virtual and physical workloads, including access policies and Layer 4 to Layer 7 services. In the case of the Cisco Application Virtual Switch (AVS) on ESXi, all the networking functionality and port group (EPG) creation on vCenter is performed using the APIC. A Virtual Machine Manager domain groups vCenter servers with similar networking policy requirements. Multiple vCenters can share VLAN or VXLAN space and application endpoint groups (EPGs). For Cisco AVS, the APIC communicates with vCenter using the OpFlex protocol to publish network configurations, such as port groups, that are then applied to the virtual workloads.
Provisioning of EPGs in a VMM Domain
The APIC pushes EPGs as port groups in vCenter. The vCenter administrator then places vNICs into these port groups. An EPG can span multiple VMM domains, and a VMM domain can contain multiple EPGs.
Policy Enforcement Options for a VMM Domain
When you associate an EPG with a VMM domain, there are different ways to enforce and push the policy into the physical and virtual fabric. These options are shown in the following picture. It is important to understand what those options are and how they will impact deployment. The following table lists these options, their descriptions, and the recommendation for Cisco AVS.
Property: Deploy Immediacy
Description: This property controls deployment of policy from the APIC controller to the physical leaf switch. It specifies whether policies are applied immediately or when needed. The recommended option for Cisco AVS is On Demand, which is also the default.
Property: Resolution Immediacy
Description: This property controls deployment of policy from the physical leaf switch to the virtual leaf (AVS is a virtual leaf). It specifies whether policies are resolved immediately or when needed. The recommended value for Cisco AVS is On Demand, which is also the default.
Deployment Immediacy
Once the policies are downloaded to the physical leaf software, deployment immediacy specifies when the policy is pushed into the hardware policy CAM.
- Immediate — Specifies that the policy is programmed in the hardware policy CAM as soon as the policy is downloaded in the leaf software.
- On Demand — Specifies that the policy is programmed in the hardware policy CAM only when the first packet is received through the data path. This process helps to optimize the hardware space.
Resolution Immediacy
- Immediate — Specifies that EPG policies (for example, VLAN and VXLAN bindings, contracts, or filters) are downloaded to the associated virtual leaf switch (AVS) software upon hypervisor attachment to the Cisco AVS.
- On Demand — Specifies that EPG policies (for example, VLAN and VXLAN bindings, contracts, or filters) are pushed to the virtual leaf node (Cisco AVS) only when a physical NIC (pNIC) attaches to the hypervisor and a VM is placed in the port group (EPG).
This policy enforcement is also summarized in the following diagram:
GUI Reference
The following diagram is just for reference.
It shows a sample tenant (te1), its application profile (ap1), an EPG (epg1), and the VMM domain association for the EPG.
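As a reference, associating that EPG with a VMM domain through the REST API can be sketched as follows (a hedged example reusing the sample names above; the VMM domain name "myVMMDom" is hypothetical, and the immediacy attribute values, where "lazy" corresponds to On Demand, should be verified against your APIC release):
POST https://<apic>/api/mo/uni/tn-te1/ap-ap1/epg-epg1.xml
<fvRsDomAtt tDn="uni/vmmp-VMware/dom-myVMMDom" resImedcy="lazy" instrImedcy="lazy"/>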
Purpose
This document is intended as a reference for technical professionals responsible for installing the Cisco Application Virtual Switch (AVS) for VMware ESXi hosts. It covers the download instructions for AVS and provides a reference to installation instructions towards the end.
Cisco Application Virtual Switch (AVS)
Cisco AVS is the distributed virtual switch for the Cisco ACI (Application Centric Infrastructure) architecture. The Application Virtual Switch is a purpose-built, hypervisor-resident virtual network edge switch designed for the ACI fabric. AVS is robustly integrated into the ACI architecture and supports Application Network Profile (ANP) enforcement at the virtual host layer, consistent with the Nexus 9000 Series physical switches. AVS is managed centrally, along with the rest of the ACI fabric components, through the Application Policy Infrastructure Controller (APIC).
AVS enables optimal traffic steering between the virtual and physical layers of the fabric to maximize performance and resource utilization. For example, if the web and app tiers are located on the same host, AVS can route traffic or apply security policies between these endpoint groups within the hypervisor itself.
AVS Download
You can obtain the Cisco Application Virtual Switch software from the Cisco Application Virtual Switch Series web page. The web page can be accessed using the following steps:
- Browse to http://www.cisco.com
- Click on Products & Services and then Switches, as shown below
- Click on Virtual Networking at the bottom of the page
- Then click on Cisco Application Virtual Switch, as shown below
- On the Cisco AVS page, you will see the Download Software for this Product link
- Download and extract the AVS zip file from Cisco.com
Multiple files and folders are created once the zip file is extracted. Click on the Cisco AVS folder, as shown below. Inside the AVS folder there are vib and zip files; these files are used based on the type of installation and the version of the ESXi hypervisor. Note that Cisco AVS is controlled from the Cisco APIC controller; hence there is no VSM (Virtual Supervisor Module) software included with the Cisco AVS software bundle.
AVS Installation Options
There are three options available to install the AVS software on ESXi hosts:
- AVS manual installation using SSH/CLI
- AVS automated installation using VMware Update Manager
- AVS automated installation using Cisco Virtual Switch Update Manager (VSUM)
Option#1 and Option#2 are discussed in this document, using the file that you just downloaded in the steps above. The third option is the recommended one for AVS installation; it is done using the Cisco Virtual Switch Update Manager (Cisco VSUM) appliance. The following videos show detailed instructions on downloading and installing the Cisco AVS software using VSUM.
Deploying Cisco Application Virtual Switch For ACI Part 1 of 2
Deploying Cisco Application Virtual Switch For ACI Part 2 of 2
Manual Installation Using SSH
This option allows you to install AVS using CLI commands via SSH. In order to use this option, make sure you have SSH access to the ESXi host. For this option, download the AVS file with the VIB extension.
Example: cross_cisco-vem-v170-18.104.22.168.1.1.0-3.1.1.vib
For detailed instructions on installing AVS using the manual method, refer to the AVS Installation Guide.
Automated Installation Using VMware Update Manager
This option allows you to install AVS using VMware Update Manager. This type of installation can be scheduled and automatically done at a specific time.
For this installation, download the ZIP file located on the AVS download page.
Example: VEM510-201401164101-BG-release.zip
For detailed instructions on installing AVS using VMware Update Manager, refer to the AVS Installation Guide.
AVS Files and Supported ESXi Versions
The following table lists the files contained inside the AVS folder, their installation method, and the ESXi version they apply to.
File Name | Installation Method | ESXi Version
cross_cisco-vem-v170-22.214.171.124.1.1.0-3.1.1.vib | Manual (CLI) | 5.1
cross_cisco-vem-v170-126.96.36.199.1.1.0-3.2.1.vib | Manual (CLI) | 5.5
VEM510-201401164101-BG-release.zip | Automated (VUM) | 5.1
VEM550-201401164104-BG-release.zip | Automated (VUM) | 5.5
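For reference, a manual VIB installation from the ESXi shell typically looks like the following (a hedged sketch: the VIB path and version are placeholders, and the host normally needs to be in maintenance mode first, per the AVS Installation Guide):
# esxcli software vib install -v /tmp/cross_cisco-vem-<version>.vib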
Make sure to install SQL Server with the Latin collation with Binary enabled; I discussed it in the previous doc. Also, ICMDBA must be there. I don't know why you don't find it in your directory.
Open CAD Admin Desktop and then start configuring parameters for your installation. The following screenshots are sample configuration options.
Step#1
Step#2
- Highlight <Default> and then click Edit
- Add Ringing, Answered, Dropped, and Work events with the default rules and actions
Introduction
In cloud automation, it is important that the life cycle and state of the virtual machine be monitored. Cisco Cloud Portal, or any other portal for that matter, should have knowledge of the actual state of the VM, even if a cloud user shuts down or powers on the VM directly through the VNC console or through RDP. This can be achieved by using SNMP. An SNMP trap is a powerful mechanism to track not only the state of the VM but also any other object within vCenter, and this is true for any element manager that supports SNMP traps. CPO (Cisco Process Orchestrator) can trigger a workflow based on the SNMP trap it receives and can inform Cisco Cloud Portal (CCP) via API.
Let's assume a user doesn't use the portal but goes directly into the VM via RDP and powers it off. A vCenter trap will be sent to CPO, and CPO will send the status back to CCP. CCP will then update the VM property and show on the portal that the VM is actually powered off. This way, the portal user will always get real-time status from the VM. This is valid for other scenarios as well; for example, if a VM gets deleted, you may want a trap received in CPO to run scripts that delete the DHCP reservation or clean up other database tables.
Prerequisite
Make sure UDP ports 161 and 162 are open so SNMP traps can be received by the orchestrator.
VMware Configuration Steps
Global SNMP Configuration
172.21.54.228 is the IP address of the CPO (Cisco Process Orchestrator) server. It will receive SNMP traps based on the alert criteria that we will see in a later section. Up to 5 SNMP receiver systems can be defined in vCenter.
vCenter Alarm Setting To Generate SNMP Traps
Alarms can be configured and defined on various objects in vCenter. It is the responsibility of the vCenter admin to define these alarm conditions; it will depend on the use case and on which traps the vCenter admin wants to send to the CPO server.
- Alarms should be carefully configured for all the objects in vCenter; otherwise, a wrong configuration might create an SNMP traffic storm on the network.
- Alarm definitions must be very specific and not too generic.
We will look at the example of defining an alarm so that we can trap the Power On and Power Off state of a VM.
Step#1: In vCenter, go to the folder object that contains the VM that you want to monitor, and create a new alarm.
Step#2
Step#3
Step#4
Step#5
This new alarm will now show under the folder object, as shown below.
The following screenshot is taken from the CPO server for a power-on event. CPO will receive the trigger, and based on the trigger, the CPO workflow will be started. The CPO configuration is shown in the following sections.
Repeat Steps for PowerOff
Similarly, you can create SNMP alerts for many actions available in the vCenter alarm configuration section.
Process Orchestrator Workflow Development
Create Target
In the following screenshot, notice that the community string must match the string that was configured in the vCenter SNMP properties.
Create Workflow
The first part of the workflow will read the SNMP trap, save it in XML format, read the table from the XML, and put it in a local table. It will then run a SQL query to find the state of the VM. In the second part, it will collect the name of the VM and other properties.
Workflow SNMP Trigger
The workflow will be triggered based on a specific OID (188.8.131.52.4.1.68184.108.40.206.203) in the trap that it receives from vCenter. Let's take a look at the properties and logic to build this workflow.
Breakdown of Various Activities and Their Properties
Step#1: CPO automatically captures the SNMP trap data in XML. We will save it in a local variable for further processing.
Step#2: Now select the "Row" XML elements from the saved value and put them in a table format.
Step#3: Select the value for the OID (220.127.116.11.4.1.6818.104.22.1686.0). This holds information on whether the event was a VM Power On or VM Power Off.
Step#4: Now use the condition branch to check the value and set the variables accordingly.
Step#5: Now select a different OID ('22.214.171.124.4.1.68126.96.36.1997.0') to pick up the name of the VM and its associated properties.
Workflow Execution
The workflow will be executed automatically based on the SNMP trigger.
SNMP Trap XML
<DocumentElement>
  <Row>
    <OID>188.8.131.52.4.1.68184.108.40.2068.0</OID>
    <Value>3</Value>
  </Row>
  <Row>
    <OID>220.127.116.11.4.1.6818.104.22.1684.0</OID>
    <Value>Gray</Value>
  </Row>
  <Row>
    <OID>22.214.171.124.4.1.68126.96.36.1995.0</OID>
    <Value>Yellow</Value>
  </Row>
  <Row>
    <OID>188.8.131.52.4.1.68184.108.40.2066.0</OID>
    <Value>Power State - State = Powered on</Value>
  </Row>
  <Row>
    <OID>220.127.116.11.4.1.6818.104.22.1687.0</OID>
    <Value>Tidal-N1KV-PRI</Value>
  </Row>
  <Row>
    <OID>22.214.171.124.126.96.36.199.4.1.0</OID>
    <Value>188.8.131.52.4.1.68184.108.40.206.203</Value>
  </Row>
</DocumentElement>
Output Saved In Table
Sorry folks for the inconvenience. The document was originally published on the supportwiki.cisco.com site, and since then it has moved to two different publishing systems. The migration team did a fantastic job moving a large number of documents, but there are still issues left here and there. I am going to fix the images in this document. Thanks for your feedback and pointers. Regards, Shahzad