
Installing multiple AVS switches on the same host

Ben Jenkins
Level 1

I have hosts with multiple uplinks that connect to separate, disjoint L2 networks.

 

Each AVS instance would have uplinks that connect to a different network. Is it possible to install multiple AVS switches on the same host?

 

Thanks

5 Replies

Robert Burns
Cisco Employee

Ben,

Each host can be connected to a single AVS instance at a time.

If you have multiple disjoint L2 networks that various AVS endpoints need to reach, you can simply connect your hosts to any ACI leaf, create a single AVS domain, and connect the disjoint L2 networks to ACI as an L2Out or an EPG static path mapping.
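For anyone who wants to script that piece, here's a rough sketch of what the EPG static path binding looks like against the APIC REST API. The tenant, application profile, EPG, leaf port and VLAN names are placeholders, so treat it as a starting point rather than a tested config:

```python
# Rough sketch only: bind a leaf port + VLAN (the disjoint L2 attach) to an EPG
# via the APIC REST API.  Tenant/AP/EPG names, leaf, port and VLAN are placeholders.
import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()
session.verify = False              # lab only; use proper certificates in production

# Authenticate to APIC
session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
)

# Static path binding: leaf 101, port eth1/10, VLAN 100 on the EPG
path_binding = {
    "fvRsPathAtt": {
        "attributes": {
            "tDn": "topology/pod-1/paths-101/pathep-[eth1/10]",
            "encap": "vlan-100",
            "mode": "regular",      # 802.1Q tagged; "untagged" is also possible
        }
    }
}
session.post(
    f"{APIC}/api/node/mo/uni/tn-ExampleTenant/ap-ExampleAP/epg-DisjointL2_EPG.json",
    json=path_binding,
)
```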

Only if you require each host to be able to connect to specific L2 networks (physical separation) would you need multiple AVS domains. Otherwise you can easily accomplish this with one VMM domain - many customers have done this.

Robert

Hi Robert,

What about using a VDS with ACI instead of AVS? I was told that with ACI it is possible to install up to three VDSs per host. That way ACI would give the same level of VM separation on the ESX hosts as is possible with VMware alone, right?

Thanks!

Thorsten

You can indeed install multiple vDSs on a single ESX host.  I'd probably need to hear more about your requirements to understand the 'need' for multiple vDS/AVS instances on a single host.  The same level of separation, including RBAC, can be accomplished with one in most cases.
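For reference, adding a second vDS and joining an existing host to it can also be scripted. A rough pyVmomi sketch follows; the vCenter, datacenter, host and vmnic names are placeholders, and task waits/error handling are omitted:

```python
# Rough pyVmomi sketch (untested): create a second vDS and join an existing
# ESX host to it with a dedicated uplink.  vCenter/host/vmnic names are
# placeholders; task waits and error handling are omitted.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()      # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

dc = content.rootFolder.childEntity[0]      # assumes the first datacenter
host = content.searchIndex.FindByDnsName(dnsName="esx01.example.com",
                                         vmSearch=False)

# Join the host to the new switch using vmnic3 as its uplink
host_member = vim.dvs.HostMember.ConfigSpec(
    host=host,
    operation="add",
    backing=vim.dvs.HostMember.PnicBacking(
        pnicSpec=[vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic3")]
    ),
)

# Second vDS alongside whatever switch the host already has
spec = vim.DistributedVirtualSwitch.CreateSpec(
    configSpec=vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        name="dvs-zone-b", host=[host_member]
    )
)
dc.networkFolder.CreateDVS_Task(spec)
```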

Can you elaborate on your technical/business requirements Thorsten?

Robert

Today (without ACI) the VMware team separates VMs on the same ESXi host by using separate vDSs, each with its own uplinks. When using ACI and AVS, a single AVS would have to be used, separating the VMs only by placing them in different port groups. It is a security discussion comparing the two scenarios, "before ACI" and "after introducing ACI".

There's going to be zone separation at some point - at the server or in the network.  Personally, I believe network security zones should be managed by ACI/network admins, not the server team, and the fewer devices/domains to manage, the better.  I'm an advocate of the keep-it-simple mantra, as long as all your business & technical requirements are met.

From what I understand above, your VMware admins use multiple vDSs for 'network separation'.  Couldn't a VMware admin just as easily attach a VM to the "wrong" vDS by mistake?

Security by vDS is not much safer than security by port group, IMO - unless each host runs only a single vDS, in which case the VMs on that host couldn't possibly connect to the wrong network.  But doing that restricts which hosts can support certain VMs.  Here's my two cents on design options.

**Before ACI**

Each host has a separate vDS for each disjoint network.

Pros: Works today

Cons: Lack of "full network" visibility.  No policy or health monitoring visibility between endpoints. Multiple devices to manage & configure each time a new ESX host is added.

**After ACI**

Option 1 - Each host uses a single AVS/vDS managed by ACI. Disjoint networks connect to ACI leaf switches.

Pros: Disjoint L2 connections configured once.  A single VMM domain to manage & a single uplink set from each host to ACI. Full visibility into all endpoint traffic regardless of disjoint network.  Health monitoring & endpoint traffic stats from a centralized UI. Easy to add additional ESX hosts without having to reconfigure legacy disjoint-network switches.  A single pane of glass to operate & manage all virtual endpoints' network aspects.

Cons: Possible operations/monitoring behavior changes.

Option 2 - Each host leverages multiple ACI-managed VMware vDSs.  Disjoint networks connect to ACI leaf switches.

Pros: Similar to the legacy implementation. Full visibility into all endpoint traffic regardless of disjoint network.  Health monitoring & endpoint traffic stats from a centralized UI. Easy to add additional ESX hosts without having to reconfigure legacy disjoint-network switches.  A single pane of glass to operate & manage all virtual endpoints' network aspects. ACI RBAC can be restricted on a per-VMM-domain basis.

Cons: Multiple VMM Domains to manage.
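With either option, the EPG-to-port-group mapping is driven from APIC by associating the EPG with the VMware VMM domain. Continuing the rough REST sketch from earlier (placeholder tenant/AP/EPG and domain names again, not a tested config):

```python
# Rough sketch (same placeholder names as the earlier REST example): associate
# the EPG with the VMware VMM domain so APIC pushes the port group to the
# ACI-managed vDS/AVS.
import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()
session.verify = False              # lab only
session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
)

vmm_assoc = {
    "fvRsDomAtt": {
        "attributes": {
            "tDn": "uni/vmmp-VMware/dom-Example_VMM",   # placeholder VMM domain
            "resImedcy": "immediate",                   # or "lazy" for on-demand
        }
    }
}
session.post(
    f"{APIC}/api/node/mo/uni/tn-ExampleTenant/ap-ExampleAP/epg-DisjointL2_EPG.json",
    json=vmm_assoc,
)
```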

The big selling point you can make to your VMware team is around visibility.  Today, if a host connecting to your legacy (non-ACI) network fails, are they going to instantly know which applications/services are affected? I'm going to guess no, unless they know which specific hosts or VM names were affected.  With the ACI-managed vDS/AVS, the health scores ACI provides give you a picture of which applications are affected.  If I'm a server admin, knowing that my Exchange_Web_Farm's health score has collectively dropped by 50% is far more valuable information than knowing that virtual machine "vm11-abc12" is inaccessible.

If you need to counter the learning curve ACI may bring, you can look at the vCenter ACI Plugin - which brings some of the ACI Configuration ability right into vCenter.  All the main monitoring & visibility is still best provided by the APIC UI, but this is another option for VM admins.

Robert
