Nexus 1000v, port-channel breakdown?

1gtopf
Level 1

This is my first foray into the Nexus 1000v, and I'm having some issues.

So my understanding is that each server has one NIC assigned as the system uplink, which is the equivalent of the backplane in the Nexus.

If I have 4 servers, each with a system-uplink NIC, is there a need to dump them each into a port-channel?

Also, should they be trunk ports if all three of my Nexus VLANs (control, management, and packet) are using the same VLAN ID?

Second question... the vm-uplink ports... those are, from my understanding, for VM traffic traversing from the VMs to the physical NICs.

That said, again, if I have 4 servers, each with 2 NICs to use for this... how would I port-channel them?  I originally tested without port-channeling, and my other switch (a 3750G stack) logged several errors about hosts/MACs flapping between ports on a VLAN...

So how would I port-channel them?  Or is there a better way to aggregate the bandwidth between the NICs?  It would seem, with a distributed switch, that there would be some bandwidth aggregation available.

Also, concept-wise... if I have my 4 hosts, each with 2 NICs... since it's a distributed switch, in theory, I could disable 7 of the NICs and the traffic from all 4 hosts would still use the backplane NICs and use that last NIC for egress, right?


7 Replies

Marwan ALshawi
VIP Alumni

Hi There

The Nexus 1000v is logically just like a physical switch, but it is software only. Server NICs (vmnics, in VMware terms) connect to N1K ports: you create a vm-type port-profile, which appears as a port-group, and at the VM level you allocate that port-group to your VMs.

Again, from the N1K point of view you create a port-profile (to be used as a port-group in VM/ESXi); once you assign a VM to that port-group, it gets allocated a vEthernet port on the N1K.

For the uplinks from the N1K you create an ethernet-type port-profile with port-channeling/trunking,

and from the VM side you choose this as the uplink port-group; you can do the NIC teaming there as well.

Again, this is if you are using the N1K VEM (like an Ethernet switch module in a normal switch) and you configure the CLI and the control plane via the VSM (like the supervisor module in a physical switch such as a 6500).

The VEM is per ESXi host, while the VSM can control all of your VEMs (to some extent).

If you are using Cisco UCS B-Series and bypassing the hypervisor, the setup is a bit different: you have the N1K VEM in each host but no VSM anymore; UCS Manager manages and populates the port-profiles to the vCenter of the VM environment.

Also, there is no backplane concept; the VEMs communicate with the VSM over dedicated VLANs that need to be created for management and control traffic.

As for aggregation, it depends on what physical system you are using; with UCS you can have two levels of it, at the UCS level and at the VM level.
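
As a rough sketch of what such a vm-facing port-profile looks like on the VSM (the profile name and VLAN are placeholders, not anything from your setup), which then appears in vCenter as a port-group you can assign to a VM's vNIC:

port-profile type vethernet VM-DATA
  ! publishes to vCenter as a port-group of the same name
  vmware port-group
  switchport mode access
  ! placeholder VLAN; use your VM data VLAN
  switchport access vlan 100
  no shutdown
  state enabled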

HTH

Please rate if helpful.

So the system-uplink NICs are, as VMware support explained it to me, the backplane used for internal VEM/VSM communication.  Since each is in a different physical host, they can't be port-channeled, which I guess makes sense.

For the VM traffic...I can only port-channel NICs from the same physical host to the switch?

I had a host flapping between ports before, but I will need to go back and confirm where it was happening.  I will report back.

Daniel Laden
Level 4

Each ESX host will have a VEM module. The VEM module will have one or more uplinks.  Each uplink will have one or more physical NICs.  If an uplink has more than one NIC (recommended for redundancy), a port-channel for that uplink is required.  If you have more than one uplink, you need to ensure they do not have overlapping VLANs.

You stated you have a system uplink and a VMware uplink with one link each.  You would not port-channel these uplinks together.  You may wish to collapse the system and VM uplinks into a single uplink and create a port-channel for that single uplink.

The recommended means of creating the port-channel is LACP, where supported.  Where it is not, MAC pinning is the recommended approach.

If you have 4 hosts with 2 NICs each, you will want to use all eight NICs.  Each ESX host will need access to the network, either via two uplinks with a single NIC each or a single uplink with two NICs.
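
A minimal sketch of that collapsed uplink as an ethernet port-profile on the VSM, assuming MAC pinning (the profile name and VLAN IDs are placeholders; put your control/packet/management VLANs in the system vlan list):

port-profile type ethernet ALL-UPLINK
  vmware port-group
  switchport mode trunk
  ! placeholder VLANs: 10,20 = N1K system VLANs, 30 = VM data
  switchport trunk allowed vlan 10,20,30
  ! mac-pinning needs no port-channel config on the upstream switch
  channel-group auto mode on mac-pinning
  no shutdown
  ! system VLANs keep forwarding even before the VEM is programmed by the VSM
  system vlan 10,20
  state enabled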

The following documents may help:

Cisco Nexus 1000V Series Switches Deployment Guide Version 2

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html

Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html

I had set up both uplink port profiles based on Cisco's documentation (the getting started guide) as well as a call with a VMware engineer... it did seem a little unexpected to have to dedicate a hardware NIC in each server to just management.

So...based on what you're saying...

I could collapse the two uplink groups into one, as long as it's a trunk with all of my VLANs allowed, make sure I have the system VLANs assigned (that's what the VEM/VSM communicate on?), and put each hardware NIC into the uplink group?  Keep my existing vethernet profiles for my VLAN tagging...

And set up separate port-channels on my 3750G stack for each host...

That, to me, makes a lot of sense...

Am I on the right track?

Because VM traffic and vCenter activities (i.e. vMotion) can be bandwidth intensive, it is preferable to have a dedicated port-channel for VSM-VEM traffic to eliminate any contention for network bandwidth.  With your limitation on NICs, you may want to collapse to a single uplink.

You are on track with your uplink configuration. If you are using LACP on 4.2(1)SV1(4) or newer, you will want to review 'feature lacp' and 'lacp offload'.
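
For reference, a sketch of what that looks like on the VSM; the uplink profile itself would be the same as the collapsed one sketched above, only the channel-group mode changes, and 'lacp offload' moves LACP negotiation from the VSM down to the VEMs:

feature lacp
lacp offload
port-profile type ethernet ALL-UPLINK
  ! LACP instead of mac-pinning; the upstream switch must run LACP too
  channel-group auto mode active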

Cisco Nexus 1000V Command Reference, Release 4.2(1)SV1(4)

http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4/command/reference/n1000v_cmd_ref.html

Hi Dan

Just to be clear here regarding the uplinks: the NIC teaming and port-channeling have to be done per physical server/VEM,

and there cannot be cross-server NIC teaming/port-channeling? Is that right?

With UCS B we can do it differently; for example, using the Palo adapter we can present several vNICs and have the VM side team them, while in UCSM we pin them to different FIs.

Correct, port-channels are done per physical server/VEM.  The port-channel does not extend beyond the ESX host.  Where a port-profile requiring a port-channel is applied to two ESX hosts, two port-channels are created, one for each host.
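
On the upstream switch that means one EtherChannel per host. A sketch for a 3750G stack (interface numbers are placeholders, and this assumes LACP on the N1K side; with mac-pinning you would leave the ports as plain trunks with no channel-group, and on older 3750 IOS cross-stack channels may require mode on rather than LACP):

! host 1's two NICs -> bundle 1
interface range GigabitEthernet1/0/1 - 2
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
!
! host 2's two NICs -> a separate bundle 2
interface range GigabitEthernet1/0/3 - 4
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 2 mode active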

One's configuration options are expanded with the Palo adapter because of its ability to create multiple vNICs.
