Questions about 1kv design

bnoeldner
Level 1

I have been reading through the configuration guides and have come away with a few questions about possible design choices on the Nexus 1000v.

1. In a pure Nexus 1kv environment, with all vmknics assigned to the 1kv and the VSMs running on the same hosts as the VEMs they manage, what configuration guidelines must be followed so that the service console is accessible when both VSMs are down due to a site reboot (power failure, etc.)?

2. In a large environment, can multiple 1kvs, each managed by its own VSMs, exist in the same vCenter datacenter? This could be a requirement due to segregated security zones in the same logical datacenter.

3. In a hybrid design (service console and vMotion on a vSwitch), where should the packet and control VLANs exist? Are they required to be in the Nexus uplink port groups?

4. When using Layer 3 control mode, the VSM cannot use trunked ports. How does this fit with the best practice of having different VLANs for packet and control? Also, the documentation states that the uplink on the VEM must be dedicated and can only have one link (causing failover problems).

I would love it if someone could provide some clarification. This product is still very new, and I'm having trouble finding information out there on different design scenarios.

Thanks

7 Replies

nsundar
Level 1

bnoeldner wrote:

I have been reading through the configuration guides and have come away with a few questions about possible design choices on the Nexus 1000v.

1. In a pure Nexus 1kv environment, with all vmknics assigned to the 1kv and the VSMs running on the same hosts as the VEMs they manage, what configuration guidelines must be followed so that the service console is accessible when both VSMs are down due to a site reboot (power failure, etc.)?

ANSWER: Say that, after a site power failure, the VEMs and the rest of the site come back up, but the VSMs do not. The main configuration guideline is that the VLANs used for management and VSM storage must all be system VLANs, both on the appropriate vmknics/vswifs and on the relevant uplinks. (While regular VLANs are configured on a VEM by the VSM, system VLANs are programmed on the VEM even before the VEM has contacted the VSM. A VLAN can be a system VLAN on one port but a regular VLAN on another, depending on the port profile applied. It is best to use system VLANs sparingly; storage VLANs for VMs other than the VSM don't need to be system VLANs.)
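
For illustration only (the VLAN IDs and profile names here are placeholders, not from your environment), a management vmknic VLAN carried as a system VLAN on both the vEthernet profile and the uplink would look something like this:

port-profile type vethernet vmk-mgmt
  vmware port-group
  switchport mode access
  switchport access vlan 10
  ! system vlan: programmed on the VEM even when no VSM is reachable
  system vlan 10
  no shutdown
  state enabled

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,11,100-110
  ! only the mgmt/VSM-storage VLANs need to be system VLANs here
  system vlan 10,11
  no shutdown
  state enabled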

4. When using Layer 3 control mode, the VSM cannot use trunked ports. How does this fit with the best practice of having different VLANs for packet and control? Also, the documentation states that the uplink on the VEM must be dedicated and can only have one link (causing failover problems).

ANSWER: In Layer 3 control mode, the Control and Packet VLANs are irrelevant; they are used only in Layer 2 control mode.


I'm not sure exactly what the question about the documentation is. An uplink (pnic) can be assigned either to the N1k or to the vSwitch, but cannot be shared between both. If you have enough pnics, it is better to segregate the Control/Packet/Mgmt VLANs onto one set of uplinks (with a port channel) and the data VLANs onto another port channel, but that is not required. You can have the Control/Packet/Mgmt VLANs as system VLANs and the data VLANs as non-system VLANs on the same port channel (with multiple uplinks).


It is recommended that you use more than one pnic (in a port channel) for redundancy. The documentation says that you should not have more than one uplink on the same VLAN without a port channel; with a port channel, you can certainly have multiple uplinks. Please feel free to follow up if this doesn't address your question.
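
As a rough sketch of the segregated layout described above (profile names and VLAN IDs are assumptions, not from this thread):

port-profile type ethernet control-uplink
  vmware port-group
  switchport mode trunk
  ! control/packet/mgmt VLANs, all system VLANs
  switchport trunk allowed vlan 10,11,12
  system vlan 10,11,12
  ! bundle multiple pnics into a port channel for redundancy
  channel-group auto mode on
  no shutdown
  state enabled

port-profile type ethernet data-uplink
  vmware port-group
  switchport mode trunk
  ! VM data VLANs; no system VLANs needed
  switchport trunk allowed vlan 100-110
  channel-group auto mode on
  no shutdown
  state enabled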


nsundar
Level 1

bnoeldner wrote:

2. In a large environment, can multiple 1kvs, each managed by its own VSMs, exist in the same vCenter datacenter? This could be a requirement due to segregated security zones in the same logical datacenter.

ANSWER: Yes.

3. In a hybrid design (service console and vMotion on a vSwitch), where should the packet and control VLANs exist? Are they required to be in the Nexus uplink port groups?

ANSWER: In a Layer 2 control mode configuration, the Control and Packet VLANs must be configured as system VLANs on one or more uplinks. The Control/Packet VLANs are used for VSM-VEM communication, and so have nothing to do with the management interface or vMotion.
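
A minimal sketch of what that looks like on the VSM (the domain ID and VLAN numbers are placeholders):

svs-domain
  domain id 100
  control vlan 10
  packet vlan 11
  svs mode L2

port-profile type ethernet n1kv-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,11,100-110
  ! control and packet must be system VLANs on the uplink
  system vlan 10,11
  no shutdown
  state enabled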


Others may expand on this answer as needed.


Have a look through the attached guide.  Might help you with your design questions.

Cheers,

Robert

Robert,

Neither the guide nor any current documentation I can find on the 1000v addresses question #4. From the guide:

"The control and packet interfaces should be deployed on discreet VLANs, even though no configuration requirement

or CLI enforcement prevents the two interfaces from sharing the same VLAN. Deployment on discreet VLANs is

recommended for several reasons.

The control interface is much chattier and broadcast based than the packet interface. If the two interfaces were in the

same VLAN, the packet interface would receive unnecessary broadcast traffic. Also, although the packet interface is

important, the control interface is much more crucial to overall system function. Keeping the control interface isolated

from any external influence helps ensure reliable system operation.

Multiple Cisco Nexus 1000V Series Switches can share common VLANs for their respective control and packet

interfaces, but ideally each Cisco Nexus 1000V Series has separate VLANs. When the same control and packet

VLANs are shared across multiple Cisco Nexus 1000V Series Switches, take care to help ensure the uniqueness of

domain IDs."

This guide was written before the 4.0(4)SV1(2) update, which added Layer 3 control as an option, and the documentation of the new Layer 3 settings is very vague. How are the concerns documented in the section of the guide above, which led to the three-interface design of the VSM, addressed in Layer 3 control mode? Also, what is best practice: Layer 3 or Layer 2? The Nexus 1000v TAC engineers I've talked to seem to prefer Layer 3, and I would assume the Layer 3 configuration would be easier to troubleshoot. What considerations led to making this option available? Initially I was thinking it would allow me to manage VEMs on multiple networks with one pair of VSMs; however, there is still a requirement that all managed VEMs must exist in the same network.

More information from Cisco on the two configuration options would certainly help clear up some of these questions that come up during implementation.

Jason Masker

Hello,

To clarify my earlier statement: the Control VLAN is not relevant for VSM-VEM communication in Layer 3 control mode, but it is still needed for communication between the active and standby VSMs.
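
For reference, a minimal sketch of the Layer 3 control configuration (the interface choice, domain ID, and VLAN ID are assumptions): the VSM is switched to L3 mode, and the VEM side uses a vmknic port profile flagged for L3 control.

svs-domain
  domain id 100
  ! no control/packet VLAN needed for VSM-VEM traffic in L3 mode
  svs mode L3 interface mgmt0

port-profile type vethernet l3-control
  ! marks the vmknic attached to this profile as the L3 control interface
  capability l3control
  vmware port-group
  switchport mode access
  switchport access vlan 20
  system vlan 20
  no shutdown
  state enabled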

I will follow up on your questions and concerns.

Sundar.

"Initially I was thinking it would allow me to manage VEMs on multiple networks with one pair of VSMs, however, there is still a requirement that all managed VEMs must exist in the same network."

The requirement for VEM hosts to be on the same Layer 2 domain follows the same topology requirements as vMotion and FT, so this shouldn't be a problem unless you have vSphere hosts in your DMZ, in which case I would hope they are managed separately from your production clusters.

These requirements may or may not change as the product matures; new features are added with each release. If there is a valid business case for managing VEM hosts on disparate networks, I'm sure it will be on the roadmap. I will see if I can find more info to confirm this.

Robert

I liken the VSMs more to vCenter than to the requirements for vMotion, and vCenter does allow Layer 3 control of all hosts. My business case is a multiple-site scenario. I have a remote site with very little infrastructure, but they do have one VMware host. The resources required to run two redundant VSMs on that host will probably push us towards just leaving that host outside of the 1000v infrastructure on its own vSwitch. If I could control the VEM on that host with the VSMs at our location, deploying the 1000v there may be a possibility.

I am seeing one potential use I would have for Layer 3 control, but it does not fit the implementation. What I am wondering is what this functionality was added for, and what are the business cases where I would want to use Layer 3 versus Layer 2? What are the trade-offs, pros, cons, etc.?