Does this config make sense?

riplifer1000v
Level 1

Hello there,

I started deploying the Nexus 1000v to a 6-host cluster, all running vSphere 4.1 (vCenter and ESXi). The basic configuration, licensing, etc. is already complete, and so far no problems.

My doubts are regarding the actual creation of system uplinks, port profiles, etc. Basically, I want to make sure I'm not making any mistakes in how I plan to set this up.

My current setup per host is like this with standard vSwitches:

vSwitch0: 2 pNICs active/active, with Management and vMotion vmkernel ports.

vSwitch1: 2 pNICs active/active, dedicated to a Storage vmkernel port.

vSwitch2: 2 pNICs active/active for virtual machine traffic.

I was thinking of translating that to the Nexus 1000v like this:

system-uplink1 with 2 pNICs where I'll put Management and vMotion vmk ports

system-uplink2 with 2 pNICs for Storage vmk

system-uplink3 with 2 pNICs for VM traffic.

These three system-uplinks are global, right? Or do I need to set up three unique system-uplinks per host? I thought that using 3 global uplinks would make things a lot easier, since if I change something in an uplink it will be pushed to all 6 hosts.

Also, I read somewhere that if I use 2 pNICs per system-uplink, then I need to set up a port-channel on our physical switches?

Right now the VSM has 3 different VLANs for mgmt, control and packet. I'd like to migrate those 3 port groups from the standard vSwitch to the N1KV itself.

So what do you guys think? Also, any other recommended best practices are much appreciated.

Thanks in advance,

3 Replies

Robert Burns
Cisco Employee

Your mapping from the vSwitches to N1K uplink Port Profiles looks good.  That will translate exactly what you have currently to the DVS.  The uplink Port Profiles are global, but they will not automagically "push" themselves to all hosts; you still need to assign the appropriate vmnics to each Uplink Port Profile when adding each host to the DVS.

When using multiple pNICs in the same Port Profile on the same host - yes, you need to use a port channel.  You have options here.

Assuming you want to carry forward the same type of load balancing the vSwitch uses by default, which is hashing based on virtual port ID, the equivalent on the 1000v is "mac pinning": "channel-group auto mode on mac-pinning".  No other config on your upstream switches is required.  This is also known as Virtual Port Channel Host Mode, or vPC-HM.
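
As a rough sketch, a mac-pinning uplink profile would look something like the below. The profile name matches your plan; the VLAN IDs (10 for mgmt, 11 for vMotion here) are just placeholders for your own numbering:

port-profile type ethernet system-uplink1
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-11
  ! mac pinning - no port-channel config needed on the upstream switches
  channel-group auto mode on mac-pinning
  no shutdown
  ! management VLAN as a system vlan (more on system vlans below)
  system vlan 10
  state enabled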

No problem migrating your VSM's port groups to the DVS.  There are heaps of postings on here with the details, but essentially you'll want to create matching port profiles on your DVS, make sure the VLANs are set as "system vlans", and then rebind the VSM's network interfaces from the vSwitch port groups to the DVS port profiles.

Another tip:

- The IP storage VLAN in your storage port profile on the DVS should also be set as a "system vlan".  The only VLANs/port profiles that should be assigned as "system vlans" are Control, Packet, Management/VMkernel and IP Storage.  vMotion should NOT be set as a "system vlan" - it's not required.
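
For example, matching vethernet profiles for the VSM's control interface and the storage VMkernel could look like the below (names and VLAN IDs are placeholders; repeat the same pattern for packet and management, and leave vMotion without the system vlan line). Keep in mind the same system VLANs also need to appear as system vlans on the ethernet uplink profile that carries them.

port-profile type vethernet n1kv-control
  vmware port-group
  switchport mode access
  switchport access vlan 20
  ! control traffic must be a system vlan
  system vlan 20
  no shutdown
  state enabled

port-profile type vethernet vmk-storage
  vmware port-group
  switchport mode access
  switchport access vlan 40
  ! IP storage should also be a system vlan
  system vlan 40
  no shutdown
  state enabled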

Regards,

Robert

Hey Robert, thanks for the help.

So, if I want to keep the load balancing policy that we use (virtual port ID), I need to set up MAC pinning on the physical switches for all the vmnics. If I want to use Etherchannel, I'll need to use ip hash policy.

What's the preferred way of doing NIC teaming in a typical environment with N1KV?

Thanks,

If you want to use mac pinning, you configure it inside the N1K uplink port profile.  You do NOT configure anything in this regard on the upstream switch ports.  Upstream physical switch ports should just be set as trunks, forwarding the appropriate VLANs, with PortFast and BPDU Filter enabled.
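
On the upstream side that works out to roughly this per physical switch port (Catalyst IOS-style syntax shown; the interface name and VLAN list are examples to adapt to your environment):

interface GigabitEthernet1/0/1
  description ESXi vmnic uplink to N1KV (mac pinning - no channel-group)
  switchport mode trunk
  switchport trunk allowed vlan 10,11,20,21,30,40
  spanning-tree portfast trunk
  spanning-tree bpdufilter enable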

If you want to use an Etherchannel, you need to configure both the N1K uplink Port Profile and the upstream switch.

The simplest way is to go with mac pinning.  This way you can add additional physical uplinks without having to do any serious config changes on your upstream switches (no need to add members to the etherchannel etc).

The only consideration is bandwidth.  Mac pinning will "pin" VMs to a particular uplink in a round-robin fashion.  This gives a VM the opportunity to monopolize the bandwidth of that link, making other VMs pinned to the same link contend for bandwidth.  With an etherchannel you get a larger aggregated pipe, but there's the overhead of the upstream config as well.
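
If you do go the etherchannel route, both ends have to agree. A rough sketch, assuming both vmnics from a host land on the same upstream switch (or a VSS/vPC pair) and using a static channel; interface numbers, the VLAN and the load-balance method are examples only:

On the N1KV uplink profile:

port-profile type ethernet vm-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 30
  ! static etherchannel; "mode active" would run LACP instead if both sides support it
  channel-group auto mode on
  no shutdown
  state enabled

On the upstream switch (IOS-style):

port-channel load-balance src-dst-ip
interface range GigabitEthernet1/0/10 - 11
  switchport mode trunk
  switchport trunk allowed vlan 30
  channel-group 10 mode on
  spanning-tree portfast trunk
  spanning-tree bpdufilter enable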

My recommendation:

Use mac pinning - if using 10G uplinks, or if you don't have any exorbitantly bandwidth-intensive VMs.

Use etherchannel - if bandwidth aggregation is a requirement, or you have particularly bandwidth-hungry VMs.

Regards,

Robert
