Nexus 1000V - what is a DVS really?

robert.horrigan
Level 2

Hello 1000V Experts,

I'm hoping I can get some clarification on the functions of a 1000V and what happens if it is shut down. I've had a few 1KVs in existence for some time, and after running them on active servers I've come under the impression that a 1KV is basically a pre-configurator. It has a predefined configuration that is uploaded to vCenter, and the vCenter admins can then use that config to modify their servers' uplinks.

It does not do anything for the servers in real time, i.e. no inspection, no ACLs (other than those uploaded to vCenter), etc., apart from CDP.

If I shut down a 1KV live, nothing will happen and services will go on as normal, since the config was already uploaded to vCenter?

I have a feeling I may be missing something and would really appreciate any clarification.

/r

Rob

7 Replies

Mathew Lewit
Cisco Employee

I think this would be a good resource to start answering your questions on the N1K, specifically the diagram breaking out the configuration.


http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html


Vincent La Bua
Cisco Employee

A DVS is a distributed virtual switch; VMware introduced the concept. On a VMware DVS, vCenter is used as the control plane and the host is used as the data plane. Each ESX host gets a copy of the switch config for when vCenter goes down, which ensures traffic is still forwarded. The file lives under /etc; the exact name slips my mind, but `ls` the directory and you should see it.

Now, for the N1k, the VSM is used as the control plane and the VEM is the data plane. vCenter is used to manage which VEMs connect to which VSMs.

The VEM is a line card. However, just like some real-world equipment, a line card can still forward traffic for what it is already programmed for if it has enough brains in it.

An important item to note in the port profile is the system VLAN setting. You are limited in the number of VLANs that can be configured as system VLANs. The system VLANs will still forward traffic when the VEM loses connectivity to the VSM.

HTH
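
To make that concrete, here is a minimal sketch of a vethernet port profile with a system VLAN. The profile name "VMK-Mgmt" and VLAN 10 are made up for illustration; adjust to your own environment.

! Hypothetical example: profile name and VLAN ID are placeholders
port-profile type vethernet VMK-Mgmt
  vmware port-group
  switchport mode access
  switchport access vlan 10
  ! Marking VLAN 10 as a system VLAN lets ports in this profile keep
  ! forwarding (and come up after a host reboot) even with no VSM reachable
  system vlan 10
  no shutdown
  state enabled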

Vincent,

Thanks a lot. Just to be sure, even though:

Each ESX host gets a copy of the switch config for when vCenter goes down, which ensures traffic is still forwarded. The file lives under /etc; the exact name slips my mind, but `ls` the directory and you should see it.

If I do not configure the server data VLANs as system VLANs in the port profiles, then the VEMs will be dead in the water if the 1KV VSMs go offline?

/r

Rob

Correct: all required VLANs (control, packet, etc.) must be system VLANs. That should be in the config guide. Also, I think the max is around 16 system VLANs, but do not quote me on that; it could have changed in the most recent release.

If there is a disconnect between vCenter and the VSM, the VEM would need to be able to get what is called the opaque data from vCenter. These are config details that come over the vCenter-to-VEM link.

It is a triangle basically..

VSM---------VEM
  \         /
   \       /
    vCenter

The VEM will forward system VLANs if the triangle breaks. Note that the file is /etc/vmware/dvsdata.db (you can `cat` it).
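
If you want to see those pieces on a host, something along these lines should do it. Treat this as a sketch; exact commands and output can vary by ESX and N1k release.

# On the ESX/ESXi host: the locally cached DVS data the VEM falls back on
ls -l /etc/vmware/dvsdata.db

# Basic VEM health; vem and vemcmd are installed with the N1k VEM software
vem status
vemcmd show card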

The N1k is possible because VMware has a third-party API that other vendors can build on.

This is very similar to how NetApp has the Virtual Storage Console (VSC) in vCenter AND its SnapManager for Virtual Infrastructure (SMVI).

They both use the same API library, but obviously different specific library files, respectively.

Sorry to jump in on this, but are you sure that system VLANs are the only ones that continue to forward traffic if the VEM and VSM lose contact with each other? I have been told that when the VSM goes down or is unavailable, the VEMs will continue to work and you just lose the ability to change anything on the VEM. I don't think forwarding only the system VLANs in that case would be a good thing.

Tom


Let's set the record straight here - to avoid confusion.

1. VEMs will continue to forward traffic in the event that one or both VSMs are unavailable. This requires the VEM to remain online and not reboot while both VSMs are offline. VSM communication is only required for config changes (and LACP negotiation prior to 1.4).

2. If there is no VSM reachable and a VEM is rebooted, only then will the system VLANs go into a forwarding state; all other non-system VLANs will remain down. This addresses the chicken-and-egg problem of a VEM needing to communicate with a VSM in the first place to obtain its programming.
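
If you want to confirm the state of things from the VSM side once it is reachable again, these are the usual starting points (output details vary by release, so take this as a rough sketch):

show module            (each VEM shows up as a module alongside the VSM supervisors)
show svs connections   (status of the connection to vCenter)
show svs domain        (domain ID and control/packet transport settings)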


The ONLY VLANs and vEth profiles that should be set as system VLANs are:

1000v-Control

1000v-Packet

Service Console/VMkernel for Mgmt

IP Storage (iSCSI or NFS)

Everything else, including vMotion, should not be defined as a system VLAN; making vMotion a system VLAN is a common mistake.

**Remember that for a vEth port profile to behave like a system profile, the system VLAN must be defined on BOTH the vEth and Eth port profiles. It is a two-factor check. This allows port profiles that are perhaps not critical, yet share the same VLAN ID, to behave differently.

There are a total of 16 profiles that can include system VLANs. If you exceed this, you can run into issues where the opaque data pushed from vCenter is truncated, causing programming errors on your VEMs. Adhering to the limitations above should never lead to this situation.
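
As a minimal sketch of the two-factor check (profile names and VLAN IDs below are invented for illustration): the critical VLAN is listed as a system VLAN on the ethernet uplink profile that carries it, and the matching vethernet profile (like the VMK-Mgmt sketch earlier in the thread) needs the same system vlan line, while an ordinary VM data VLAN on the same uplink is deliberately left off.

! Hypothetical uplink profile; VLAN 10 = mgmt VMkernel, VLAN 20 = VM data
port-profile type ethernet System-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  ! Only the critical VLAN is a system VLAN on the uplink
  system vlan 10
  no shutdown
  state enabled

! Ordinary VM data profile carried by the same uplink;
! deliberately NOT a system VLAN, per the list above
port-profile type vethernet VM-Data
  vmware port-group
  switchport mode access
  switchport access vlan 20
  no shutdown
  state enabled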

Regards,

Robert

Robert,

Thanks much.  Exactly what I was looking for.

/r

Rob