Nexus1000v VLANs_Best Practice

mhasabal
Level 1

Hello All,

Is the best practice right now to use the same VLAN for mgmt, packet and control?

thanks in advance.

5 Replies

abbharga
Level 4

Hi,

You can certainly use the same VLAN for all three, but if you expect any of them to carry heavy traffic, it is always good and safe to separate them out. For example, if your management VLAN is shared by many devices, it may be heavily utilised, which could result in control/packet traffic being dropped.

You can refer to the config guide: http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4/getting_started/configuration/guide/n1000v_gsg_2setup.html#wp1057097

Extract from the guide:

You can use the same VLAN for control, packet, and management, but if needed for flexibility, you can use separate VLANs. Make sure that the network segment has adequate bandwidth and latency.
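For reference, here is a minimal sketch of where the control and packet VLANs are declared on the VSM (the domain ID and VLAN numbers below are placeholders for illustration, not recommendations); management simply follows the VLAN of the VSM's mgmt0 interface:

svs-domain
  domain id 100
  control vlan 260
  packet vlan 261
  svs mode L2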

Hope this helps!

./abhinav

Robert Burns
Cisco Employee

I agree with Abhinav's recommendation, with a couple of additional points.

Just as best practice recommends for any physical switch, it's always a good idea to segment your management network from all others, even if only by VLAN, since there's no OOB management interface.  With control and packet traffic being relatively light, these can safely share the same VLAN.  Separating these networks also becomes relevant if you ever need to troubleshoot certain types of control traffic (LACP, IGMP, CDP, etc.).  If you ever need to sniff either of these protocols, it's a little cleaner without all your devices' management traffic also being captured.

Regarding congestion concerns, in the latest version (SV1.4) you can also take advantage of QoS WFQ (Weighted Fair Queueing).  Integrating with the VMware enhancements in 4.1, the 1000v can automatically identify certain traffic types, including Control, Management, and Packet, allowing you to easily create a policy that guarantees a minimum bandwidth percentage to these vital traffic types.  See my comments in this post regarding the new WFQ feature in SV1.4: https://communities.cisco.com/thread/17120?tstart=0
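As a rough illustration of that queuing feature, the policy would look something along these lines (the class and policy names are made up, the percentages are arbitrary, and the exact match keywords may differ by release, so check the SV1.4 QoS configuration guide):

class-map type queuing match-any control-class
  match protocol n1k_control
class-map type queuing match-any mgmt-class
  match protocol n1k_mgmt

policy-map type queuing uplink-queuing
  class type queuing control-class
    bandwidth percent 10
  class type queuing mgmt-class
    bandwidth percent 10

The policy would then be attached to the uplink port profile with "service-policy type queuing output uplink-queuing".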

Regards,

Robert

Gentlemen,

Thank you so much for the info. Great help.

If you don't mind, I have one more question regarding the Nexus 1000v with 10G uplinks.

I am using the attached PDF as a guideline.

I have 4x 10G uplinks to two different Nexus 7000s (configured with vPC).

My question is: do I need to configure MAC pinning on the Ethernet port profile (facing the 7Ks)?

thanks in advance.

When using a host-facing vPC from the N7Ks, you need to configure an EtherChannel on the host/VEM side.

From the N7K side:  Each N7K would have a two-member vPC port channel.  This can be either active (LACP) or static ("mode active" vs. "mode on").

From the host/VEM side:  The four incoming links need to be treated logically as if they all came from a single switch.  In this case you would configure your uplink port profile with a matching configuration: "channel-group auto mode [active|on]".

Below is a logical topology with the associated config for each device.  This example shows an active Port Channel using LACP.

[Attached topology diagram: Capture.JPG]

SAMPLE N7K CONFIG

=================

interface Ethernet9/1
  switchport
  switchport trunk allowed vlan 19,25,3001-3003
  spanning-tree port type edge trunk
  spanning-tree bpdufilter enable
  channel-group 100 mode active

interface Ethernet9/2
  switchport
  switchport trunk allowed vlan 19,25,3001-3003
  spanning-tree port type edge trunk
  spanning-tree bpdufilter enable
  channel-group 100 mode active

interface port-channel100
  switchport
  switchport trunk allowed vlan 19,25,3001-3003
  vpc 100
  spanning-tree port type edge trunk
  spanning-tree bpdufilter enable

SAMPLE N1K CONFIG

=================

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 19,25,3001-3003
  channel-group auto mode active
  no shutdown
  system vlan 3001-3002
  state enabled

MAC pinning assumes all uplink ports are individual members, not channeled together.  If you tried configuring your VEM side with MAC pinning and the N7Ks in a vPC, you'd get asymmetric switching, duplicate packets, and a big headache.
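For contrast, in a topology where the upstream switches are not clustered (no vPC/VSS), the uplink port profile would use MAC pinning instead of a channel, roughly like this (the profile name is just an example):

port-profile type ethernet mac-pin-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 19,25,3001-3003
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 3001-3002
  state enabled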

Let us know if you have any other questions.

Regards,

Robert

Robert,

Can't thank you enough. Great info.