The Cisco Nexus 1000V cloud switch is a virtual appliance that integrates physical and virtualized network infrastructure. The Cisco Nexus 1000V switch is compatible with the VMware ESX and vSphere (ESXi) hypervisors. Versions are also available for Microsoft Hyper-V and Open KVM for additional public cloud and platform support.

The virtual switch is based on the Cisco NX-OS switching platform and manages both control plane and data plane forwarding. In addition, it provides cloud tunnel connectivity and policy-based orchestration for the virtualized platform, which includes server virtual machines and virtual service nodes.

Figure 1  Network Virtualization Platforms

architecture.png

The virtual service nodes include the Virtual Security Gateway (VSG), ASAv firewall, 1000V InterCloud, NetScaler load balancer and vWAAS. The Nexus 1000V is part of a larger software-defined networking architecture that enables physical transport layer abstraction and device programmability for cloud-based applications.

Feature Support

The software for any cloud deployment is selected based on requirements and feature support. The primary considerations for deploying applications include hypervisor and operating system support provided by the cloud provider. Table 1 describes hypervisor and operating system support for the most common virtual switches.

Table 1  Cloud Platform Support for Hypervisors 

hypervisors.png

Cloud Switching

The Cisco Nexus 1000V virtual switch provides Layer 2 switching between virtual machines. The virtual switch supports 802.1q VLAN trunking to virtual and physical switches. Traffic between server VMs and virtual service nodes is redirected by vPath; the purpose of the virtual service nodes is to provide networking services to cloud traffic.

The Nexus 1000V virtual switch forwards packets to the CSR 1000V for all external destination subnets and VLANs, including cloud instances and the enterprise network. The primary components of the Cisco Nexus 1000V switch are the Virtual Ethernet Module (VEM) and the Virtual Supervisor Module (VSM). The VEM is part of the hypervisor, while the VSM is a virtual appliance.

Virtual Ethernet Module (VEM)

The Cisco Nexus 1000V cloud switch manages multiple Virtual Ethernet Modules (VEMs). The VEM is embedded in the VMware ESX or ESXi hypervisor and is similar to a traditional switch line card. The VEM is configured from the VSM and provides switching between servers. In addition, it forwards packets between virtualized servers and virtual service nodes (appliances).

Figure 2  Cisco Nexus 1000V Cloud Switch Architecture

nexus 1000v switch.png

The VEM includes Cisco vPath for intelligent traffic steering. Each VM is assigned a dedicated virtual switch port on the VEM. The architecture bundles multiple VEMs into a single virtual distributed switch, with support for a maximum of 64 VEMs per virtual switch and most standard switching features. The VMware VEM also supports nonstop forwarding (NSF) of packets: if the primary VSM fails, the VEM continues data plane forwarding during the switchover to the standby VSM.

Cisco Nexus 1000V Switch Interfaces

The Cisco Nexus 1000V switch comprises multiple interface types used for virtual and physical connectivity. The virtual side interfaces connect to server VMs and virtual appliances, while the physical side interfaces connect to external destinations. Cisco Nexus 1000V switch ports are configured as virtual Ethernet (vEth), Ethernet or port channel interfaces, and each interface type is assigned to a switch port type.

Virtual Side Interfaces

The VEM comprises multiple virtual Ethernet (vEth) interfaces that connect (map) to virtual machines. Each virtual Ethernet interface is assigned an access port or trunk port type. The server virtual machine vNIC connects to the virtual Ethernet interface, and the VEM switches traffic between the connected server VMs. The virtual service nodes (virtual appliances) also use a vNIC to connect to the virtual switch VEM.

  • vEth is a logical Ethernet interface created on a VEM that connects (maps) to a vNIC adapter on the VM.
  • vNIC is a logical adapter (NIC) created on the VM that connects to a vEth interface on the VEM. The virtual switch link between them is called VN-Link.
  • vmnic is a physical interface (pNIC) on the VMware ESXi server that connects via an uplink to the physical switch.
  • vmknic is a virtual NIC configured on the hypervisor for VSM to VEM communication such as Layer 3 control and vMotion traffic.
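These interface roles can be verified from the VSM command line. The following is a minimal sketch; the vEthernet number is only an example and output fields vary by release.

show interface virtual          ! lists each vEth interface and the VM vNIC it maps to
show interface vethernet 5      ! detail for a single vEth interface (5 is an example)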

Figure 3  Cisco Nexus 1000V for VMware Switch Interfaces

nexus 1000v switch interfaces 2.png

Physical Side Interfaces

The Ethernet interface on the virtual switch is similar to the standard Ethernet port designation on a physical switch. It is assigned to an uplink port (uplink0). The uplink port is mapped to a physical NIC (vmnic) defined on the hypervisor, and the vmnic is the actual physical NIC on the server.

Figure 4  Virtual Switch Uplink Port Configuration

port uplink.png

The uplink port profile is configured as an Ethernet interface that forwards tenant traffic to the physical switch. The Ethernet interface can be assigned to an access port or trunk port as well. A port channel is an aggregate, or bundle, of multiple Ethernet interfaces on the same VEM that creates a single logical connection between the virtual switch and the upstream physical switch.
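The uplink configuration is expressed as an Ethernet-type port profile on the VSM. The sketch below is illustrative only; the profile name, VLAN range and MAC pinning channel mode are assumptions that would be adapted to the actual design.

port-profile type ethernet DATA-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100-110
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled

When the profile is enabled it appears in vCenter as an uplink port group, and the vmnic physical adapters on each host are attached to it.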

Management Interface

VMware vSphere is the server virtualization platform and ESXi is the hypervisor component. There are additional VEM interfaces, including the service console (vswif) and vmkernel (vmknic) interfaces, used for management purposes. The VMware ESX hypervisor has a service console interface to communicate with vCenter. VMware ESXi is a newer hypervisor with a management architecture that does not use a service console.

ESXi is managed entirely through VMware vCenter and vMotion, which is accomplished with a vmknic interface. There is also a vmkernel interface used for Layer 3 control traffic from the VSM to the VEM. The ESX hypervisor uses a vmknic as well for vMotion to move VMs between servers. The management interface is assigned to a VLAN with a port profile.

Inter-VEM Connectivity

VEMs can be part of the same switch or assigned to separate 1000V switches. Inter-VEM connectivity is across a physical switch: traffic destined for a different VEM is forwarded on a virtual switch uplink. The 1000V switch uplink is configured as a trunk port with allowed VLANs. The IP address for the remote VEM is obtained from a DNS request.

MAC Address Learning

The Cisco Nexus 1000V switch uses the standard Ethernet MAC address-to-port mapping table for packet forwarding. Each VEM maintains a separate forwarding table with MAC address-to-port mappings for each VLAN, and the VEMs do not synchronize their MAC forwarding tables. Inter-VEM traffic, as mentioned, requires an uplink to a physical Ethernet switch in the cloud. A forwarding table entry is created when each VM attaches to the switch: the MAC address of the VM is learned and added to the switch table. Unlike on physical switches, the statically learned MAC address has no timeout value.

The MAC address of a VM connected to a remote VEM is learned dynamically from traffic arriving on the server pNIC (uplink). The VEM floods a frame to all ports in the same VLAN when the destination MAC address is not yet known, and the address is then learned from the return traffic; this is standard MAC address learning behavior. A server VM sends an ARP request to resolve the next-hop MAC address when the destination VM is assigned to a different VLAN, since inter-VLAN traffic must be routed externally.
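The forwarding tables can be inspected from the VSM. The commands below are a minimal sketch; the VLAN number is an example and exact options vary by release.

show mac address-table            ! MAC address-to-port mappings per VLAN
show mac address-table static     ! statically programmed entries for locally attached VMs
show mac address-table vlan 100   ! entries for a single VLAN (100 is an example)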

Virtual Supervisor Module (VSM)

The Virtual Supervisor Module (VSM) is the control plane that can manage up to 64 Virtual Ethernet Modules (VEMs) as a single logical switch. The VSM acts as an SDN controller for the Cisco virtualized compute platform, and the VEMs it manages can be distributed across multiple servers. The VSM is a virtual appliance similar to a Supervisor Engine and is based on the Cisco NX-OS platform. The VSM also manages VEM configuration, including switch port profiles. The Cisco 1000V virtual switch supports VSM redundancy: a primary and a standby VSM are configured in active/standby mode and maintain synchronized state tables for fast stateful failover to the standby VSM.
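From the active VSM, the redundancy state and the modules of the logical switch can be checked with standard NX-OS commands, for example:

show system redundancy status     ! active/standby role and HA state of the VSM pair
show module                       ! lists both VSMs and every VEM as modules of one switch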

VSM to VEM Traffic Control

Two VSMs managing one or more VEMs comprise a single Cisco Nexus 1000V Series switch instance. The VSM has a control interface for monitoring the active and standby VSMs. In addition, there is a management interface that connects to the VEM vmkernel interface for configuring the VEM modules. The packet interface is used for CDP and multicast traffic between the VSM and the VEM.

There are Layer 2 and Layer 3 control modes available for connecting VSM to VEM modules. Layer 2 mode is available if the VSM control interface and all managed VEM modules are in the same Layer 2 VLAN. Layer 3 mode is required when the VSM control, management and packet interfaces are on a different subnet than the VEM. The Layer 3 mode encapsulates control and packet frames with a UDP tunnel for VSM to VEM connectivity.
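The control mode is selected under the svs-domain configuration on the VSM. A minimal sketch for Layer 3 mode is shown below; the domain ID and the choice of the mgmt0 interface are assumptions for illustration.

svs-domain
  domain id 100
  svs mode L3 interface mgmt0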

The tenant would configure a VMware vmkernel interface on the VMware ESX host. In addition, the port profile for the vmkernel interface is configured with capability l3control enabled. Layer 3 control mode is recommended because the Nexus 1000V is a distributed switch with multiple VEMs, and complex cloud designs often comprise multiple VEMs spread across multiple subnets.
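The vmkernel port profile used for Layer 3 control might look like the following sketch; the profile name and VLAN number are hypothetical.

port-profile type vethernet L3-CONTROL
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 10
  system vlan 10
  no shutdown
  state enabled

The system vlan entry keeps the control path forwarding even before the VEM is fully programmed by the VSM.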

Layer 2 Loop Prevention

The Nexus 1000V switch does not require the spanning tree protocol. The switch does not send BPDU packets and drops any BPDUs that arrive at its interfaces. Loop prevention is instead provided by inspecting the source and destination MAC addresses on ingress Ethernet (uplink) interfaces. A packet is dropped if its source MAC address is internal to the local VEM, since that indicates the frame has looped back. A packet is also dropped if its destination MAC address is external to the local VEM, so traffic is never switched from one uplink to another.

Switch Port Profiles

The Nexus 1000V switch assigns port profiles to virtual machines. A port profile defines the network attributes for a particular type of virtual machine. Port profiles are defined at the VSM and propagated to VMware vCenter as port groups, and a port group is assigned to a server vNIC through VMware vCenter.

Figure 5  Port Profile

nexus 1000v switch port profile 2.png

Port profiles are defined with many of the same settings as a physical switch port, including interface type, port type (access or trunk), VLAN, MTU, QoS, port security and channel group. A port profile can be assigned to a single server vNIC interface or to multiple vNIC interfaces.
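As an illustration, a vEthernet port profile for a group of web server VMs might look like the sketch below; the profile name and VLAN number are assumptions.

port-profile type vethernet WEB-SERVERS
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled

Once enabled, the profile is pushed to vCenter as a port group of the same name and can be selected for each server vNIC.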

SLAs can be defined and managed with port profiles that include QoS and security policies for each VM. Virtual servers use the same network configuration, security policy and diagnostic tools as physical servers attached to switches. Any port profile change is detected by the Cisco Nexus 1000V, which sends automatic real-time updates to all the vNIC interfaces assigned that port profile. The security attributes are key to enforcing security policies and compliance regulations, providing a unified security posture across virtual and physical servers.

VN-Link Server Technology

The Cisco VN-Link technology is a set of features integrated with the Cisco Nexus 1000V switch. It enables policy-based VM connectivity, policy mobility and a non-disruptive operational model. The VN-Link is the logical connection between the server vNIC and the virtual switch vEth interface on the VEM.

VMware vMotion

The virtual machine (VM) mobility feature is managed by VMware vMotion for virtualized platforms using the VMware hypervisor. The purpose of VMware vMotion is to manage the migration of VMs between servers or data centers. The port profile defines network and security policies that move with the virtual machine, which allows cloud providers to deploy VM mobility.

The VM can be moved dynamically based on performance, scalability and availability requirements. It allows for easier security auditing from server to server as well. In addition to policy migration, the virtual switch VSM provides network stateful awareness that moves with the virtual machine, which increases network availability and avoids session resets. The Cisco Nexus 1000V switch allows network interface cards (vNICs) to be bundled as a single logical channel with QoS assigned to each traffic type. For instance, vMotion traffic could be prioritized with QoS for improved performance.

Table 2  Cisco Nexus 1000V VMware vSphere Feature Support 

switch features.png

Scalability (Essential Edition)

  • 128 VMware ESX or ESXi hosts per VSM
  • 4096 virtual Ethernet ports per VMware vDS
  • 300 virtual Ethernet ports per physical host
  • 2048 active VLANs
  • 2048 active VXLANs
  • 2048 port profiles
  • 32 physical NICs per physical host
  • 256 Port Channels per VMware vDS
  • 8 Port Channels per physical host 

Copyright CiscoNet Solutions All Rights Reserved 
