
Cisco Nexus 1000V Series Virtual Switch Module Placement in the Cisco Unified Computing System

pmajumder
Level 3

Hi All,
I am reading a Cisco article titled "Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers" http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html

Lots of excellent information, but the section that puzzles me has to do with the VSM's module placement in the UCS. The article lists four options in order of preference, but does not provide any clarification or the reasoning behind the recommendations. The options are as follows:


============================================================================================================================================================
Option 1: VSM External to the Cisco Unified Computing System on the Cisco Nexus 1010

In this scenario, management operations of the virtual environment are accomplished in a manner identical to existing non-virtualized environments. With multiple VSM instances on the Nexus 1010, multiple vCenter data centers can be supported.
============================================================================================================================================================

Option 2: VSM Outside the Cisco Unified Computing System on the Cisco Nexus 1000V Series VEM

This model allows for centralized management of the virtual infrastructure and has proven very stable.
============================================================================================================================================================

Option 3: VSM Outside the Cisco Unified Computing System on the VMware vSwitch

This model provides isolation of the managed devices, and it migrates well to the appliance model of the Cisco Nexus 1010 Virtual Services Appliance. A possible concern here is the management and operational model of the networking links between the VSM and the VEM devices.
============================================================================================================================================================

Option 4: VSM Inside the Cisco Unified Computing System on the VMware vSwitch

This model also was stable in the test deployments. A possible concern here is the management and operational model of the networking links between the VSM and the VEM devices, and having duplicate switching infrastructures within your Cisco Unified Computing System.
============================================================================================================================================================


As a newbie to both the Nexus 1000V and UCS, I am hoping that someone can help me understand the physical setup of these options and, equally importantly, provide a more detailed explanation of each option and the reasoning behind the preferences (pros and cons).

Thank you,
Pradeep  


pmajumder
Level 3

Hi,

Maybe I should have also asked, How did you implement it, and why?

Thanks,

Pradeep

Pradeep,

The various options all accomplish exactly the same functionality; they just accommodate the "personal preference" of each user.

The three main options for consideration are:

1. 1000v vs. 1010.

2. Deploying Nexus 1000v within UCS or externally.

3. Deploying Nexus 1000v Interfaces on the vSwitch or 1000v DVS.

1000v vs. 1010

The 1000v vs. the 1010 is simply a physical appliance vs. a virtual machine.  The 1010 physical appliance is a standalone server (similar to a C200) which is normally deployed in a pair for HA.  The 1010 has the additional functionality of supporting VSBs - Virtual Service Blades.  Two of the current VSB offerings include a NAM (Network Analysis Module) and a VSG (Virtual Security Gateway).  Outside of the VSBs that are available to the 1010, you're also running a physical box vs. a VM, which some customers prefer.  You don't have the mobility of vMotion as you would with a VM, but you have a device which lives outside of your virtual environment, safe from any potential disruption to the virtual infrastructure.

See here for more detail about the Nexus 1010 physical appliance option.

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/ps10785/data_sheet_c78-593960.html

Deploying Nexus 1000v within UCS or externally.

The next decision deals with where to host the VSM: either on a UCS blade, or on another device outside of UCS.  Again this comes down to customer requirements & preference.  Hosting it within a UCS blade is perfectly fine, and the most common method customers use.  Some customers, however, prefer to run any management devices, including the VSM and perhaps vCenter, on dedicated vSphere clusters.

Deploying Nexus 1000v Interfaces on the vSwitch or DVS

The last consideration is whether to deploy the 1000v's three interfaces (Control, Management & Packet) on a standard vSwitch (vSS) or on the Distributed Virtual Switch (DVS) itself.  Set up correctly, there's absolutely no issue running the 1000v on its own managed DVS.  The interfaces used for 1000v management are required to be "system vlan" interfaces, which ensures they're always forwarding even in the event the 1000v becomes inaccessible.  This method of running the 1000v on itself offers simplified management, since you don't have to manage an additional vSwitch.  Conversely, some customers prefer to dedicate a vSwitch (and uplink interfaces) to the 1000v, just so it's not running on itself.  This is purely a comfort-level decision, but speaking from experience, when set up correctly, my preference is to run the 1000v on itself rather than lose an interface and add the management overhead of a dedicated vSwitch just for the 1000v.
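For reference, here is a minimal sketch of the kind of "system vlan" port-profile involved when the 1000v runs on itself (the profile name and VLAN ID are examples of mine, not from the white paper; you would create similar profiles for the management and packet interfaces):

! control interface port-profile with a system vlan, so it
! keeps forwarding even if the VSM itself becomes unreachable
port-profile type vethernet n1kv-control
  vmware port-group
  switchport mode access
  switchport access vlan 900
  system vlan 900
  no shutdown
  state enabled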

Hopefully a few other customers can chime in here and give their opinions.

If you have any other questions, let me know.

Regards,

Robert

I prefer to have the VSM running on a Nexus 1010; it helps the network team feel more at home.  Then have the VSM use Layer 3 rather than the default of Layer 2 to communicate with the VEMs. This has the added advantage of putting the ESX hosts in their own management VLAN and leaving all the network devices in their own VLAN.

Then have all NICs on the ESX host set to use the Cisco switch; just remember to use system VLANs for SC, vMotion and IP storage if used.
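For anyone setting this up, the Layer 3 piece looks something like the following on the VSM (the domain ID, VLAN and profile name are just examples):

! switch VSM-to-VEM communication from Layer 2 to Layer 3
svs-domain
  domain id 100
  svs mode L3 interface mgmt0

! the vmkernel port-profile each VEM uses for L3 control
! traffic needs the l3control capability and a system vlan
port-profile type vethernet l3-vmk
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 10
  system vlan 10
  no shutdown
  state enabled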

Simon.

pmajumder
Level 3

Thank you for the insight. I have decided to run the VSM within the UCS cluster. I still haven't decided whether to run the VSM and VEM on the same host or have a dedicated vSwitch for it.

Thanks,

Pradeep

I have built numerous Nexus 1000v VSMs that run within the UCS cluster and haven't had any issues.

My personal preference is to create 6 vNICs on the Service Profile template for ESXi:

2 for management; 1 on Fabric A and 1 on Fabric B

2 for vMotion; 1 on Fabric A and 1 on Fabric B

2 for VM Networking; 1 on Fabric A and 1 on Fabric B

My personal preference is to run the VSM VM on standard vSwitch0 along with the ESXi management, and to create VM port groups for n1kv-mgmt, n1kv-packet and n1kv-control.

The 2 vMotion vNICs are created on standard vSwitch1, with NIC teaming and failover set so that vmnic3 is active and vmnic2 is standby, so that the vMotion traffic stays on one fabric and only moves to the other on a failover. You could also just have 1 vNIC for vMotion and use UCS hardware failover, but my preference is to have 2 and let ESXi handle the failover.

The 2 vNICs for VM networking are then uplinked to the VSM DVS.
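A rough sketch of the uplink port-profile those two VM networking vNICs attach to (the name and VLAN range are examples; mac-pinning is the usual channel mode inside UCS, since the fabric interconnects don't form a port channel down to the blade):

! ethernet (uplink) port-profile for the VM networking vNICs
port-profile type ethernet vm-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100-110
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled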

This configuration has been rock solid for us and is very easy to implement and troubleshoot, without worrying about losing access to the ESXi host management IP.

In my opinion the Nexus 1000v is a VM access switch meant for VM networking, not for host vmkernel networking. It can be used for vmkernel traffic, but I see the most value and use of the n1kv features at the VM networking level.

The host vmkernel vNICs are already uplinked to a Nexus switch (the 6100s) and can be managed from UCSM. You can also SSH into the Fabric Interconnects, connect to the NX-OS instance and view detailed interface statistics, or use an SNMP solution like SolarWinds.

Hi Jeremy,

Thank you for sharing. This is very helpful.

Thanks,

Pradeep

You can also install ASA 1000V into the Nexus 1000V.

We've announced vASA but it's not a shipping product yet, so stay tuned.

I am assuming the vASA is the same as the Cisco Virtual Security Gateway for the Nexus 1000V, or is that a different product?

No, they are different products. vASA will be a virtual version of our ASA device.

ASA is a full featured firewall.

Thanks Iwatta.

Re: the placement of the VSM and VEM, I have a couple more questions, please.

1. Do people run other VMs (excluding vCenter) on the hosts that have the VSMs?

2. Is it best practice to colocate the Virtual Center VM on the same host as the VSM, or should it be kept separate if possible?

Thanks Again,

Pradeep

I usually run the VSMs on hosts that also run other VMs. The VSMs are not very resource intensive.

The only thing special I do is create a VMware vCenter DRS rule to keep the 2 VSMs on separate ESXi hosts.

Remember that if the VSM is down, frames are still switched locally on each host's VEM. The only things that would not work are adding a new VM, changing a VM's port group, or adding a new port-profile.

If the customer has the budget we always try to push them to the Nexus 1010s, so that the VSM is managed like a traditional network switch where the network team has complete control.

We are running vCenter on the hosts that have the VSM. The only thing is we are using a system VLAN for the vCenter IP so it does not depend on the Nexus 1000v.
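A sketch of what that looks like as a port-profile (the VLAN ID and name are just examples):

! vCenter's port group carried as a system vlan, so it keeps
! forwarding even when the VSM is down or unreachable
port-profile type vethernet vcenter-mgmt
  vmware port-group
  switchport mode access
  switchport access vlan 10
  system vlan 10
  no shutdown
  state enabled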

Jeremy:

Why aren't you passing the Management and vMotion traffic through the same vNICs? I was planning on creating 4 vNICs, with 2 used for Management and vMotion, and the other 2 for VM data traffic.

Thanks,

Pradeep
