Nexus 1000v Active/Standby Teaming Between 10G and 1G

canero
Level 1

Hello,

We are installing the Nexus 1000V for an ESXi 4.1 IBM blade server, which is connected with 1G links to two different BNT switches (Bays 3-4) and with 10G links to two different Nexus 4001 switches (Bays 7 and 9). The design requires that one of the 10G connections be used for data connectivity, while the other 10G connection is used for the VMkernel (VMotion) network.

If one of the 10G connections goes down, we want to fall back to 1G for both VMotion and data traffic.

We tried configuring vPC-HM (vPC Host Mode) and MAC pinning. vPC-HM distributes the traffic across both the 1G and 10G connections, which does not meet the requirement that the 1G links stay in backup mode.

The result is similar with MAC pinning: a VM always stays on the 10G connection, and if that 10G connection goes down, it does not move to the 1G connection. In other words, we could not find a configuration in which the 1G connections stay as backup.

Is there a solution to this, or any recommendations?

Thanks in Advance,

Best Regards,

7 Replies

lwatta
Cisco Employee

Just to confirm.

You have one blade with 4 connections, 2 x 1G and 2 x 10G? I'm sorry, I'm not very familiar with IBM's blade servers, so let me know if I'm wrong.

How many uplink port-profiles do you have?

If you use vPC-HM MAC pinning, you should be able to create an uplink port-profile and add a 10G and a 1G uplink to it. It's recommended that the links all be the same speed, but it should work. You should then be able to pin all your traffic to the 10G link. If that link fails, traffic should get rerouted to the 1G link.
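A rough sketch of such a mixed uplink port-profile might look like this (the profile name and VLAN list are placeholders, not from this setup; adjust to your environment):

port-profile type ethernet MIXED-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled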

Assuming your uplink port-profiles are correct, how are you failing the 10G link? Does it look like it's down on the Nexus 1000V, or are you failing a link upstream of the Nexus 4K?

Can we see your running-config from the Nexus 1000V?

louis

Hello Louis,

In fact we have 14 blades in an IBM H chassis; each of them connects separately to Nexus 4001 downlink ports through Bays 7 and 9. The port numbers x/2 and x/4 refer to the uplink ports of one ESX host.

The configuration is attached. We could not figure out how to combine the 1G and 10G ports into one uplink port-profile, or how to pin all traffic to the 10G link rather than the 1G link. From the documentation, we understand that even if one of the links is 1G, traffic is load-balanced dynamically across both links without regard to bandwidth. If one link is 1G and the other is 10G, how can we make all traffic use the 10G link and keep the slower 1G link unused as long as the 10G link stays up?

Best Regards,

Hello Louis,

If we configure the following:

(config)# interface vethernet 1

(config-if)# pinning id 1   (1 is the sub-group ID that points to the 10G interface of the VEM)

we observe that the packets go through the 10G link even when the 1G interface is down, so we believe the vEthernet port is statically pinned to the 10G interface. This is the expected behaviour. However, if the 10G Ethernet interface (Eth3/4) is shut down, the VM traffic does not start using the 1G uplink, because of the Vethernet 1 configuration.

If we don't configure a pinning ID under the Vethernet 1 interface, redundancy is achieved: if we shut down Eth3/2 or Eth3/4, traffic continues to flow. But in that case, since we don't configure anything under the vEthernet interface, we have no control over which link the VM chooses. We think there may be some kind of priority or cost mechanism, similar to Spanning Tree priority or cost.

Best Regards,

lwatta
Cisco Employee

Your config looks good but you need to make a few changes.

With "channel-group auto mode on sub-group cdp" in your uplink port-profile you will not be able to pin traffic.

Change the channel-group command to

n1000v-test(config-port-prof)# channel-group auto mode on mac-pinning

To be on the safe side, I would remove the hosts from the Nexus 1000V first, change the above channel-group command, and then add them back.
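A rough sketch of the change (assuming the uplink port-profile is named SYSTEM-UPLINK, as in the attached config; the exact "no" syntax may differ slightly by release):

n1000v-test(config)# port-profile type ethernet SYSTEM-UPLINK
n1000v-test(config-port-prof)# no channel-group auto
n1000v-test(config-port-prof)# channel-group auto mode on mac-pinning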


Once everything is added back, your port-channels should look like the following:

n1kv-star# show int brief

--------------------------------------------------------------------------------

Ethernet      VLAN   Type Mode   Status  Reason                   Speed     Port

Interface                                                                   Ch #

--------------------------------------------------------------------------------

Eth4/2        1      eth  trunk  up      none                       1000      1

Eth4/3        1      eth  trunk  up      none                       1000      1

You can see 4/2 and 4/3 are port-channel 1.

Now you need to figure out which Eth interface is in which sub-group. We pin based on the SG-ID.

run "n1kv-star# show port-channel internal info all" It will be a lot of output but look for the interfaces in question and then check the subgroup ID line.

Ethernet4/3
state         : up
update        : none
mode          : on
flags         : 2
cfg flags     :
up_time       : 66101 usecs after Fri Dec  3 16:46:19 2010
auto pc       : none
auto retry    : 0
No auto create compat failure
sub-group id  : cli=32 oper=2
device id     :

From the above output you can see that sub-group id oper=2. That means 4/3 is SG-ID 2. Let's say 4/3 is your 10G interface.

Now, on each port-profile where you want 4/3 to be the primary interface, add the following line:

n1kv-star(config-port-prof)# pinning id 2

That will pin all traffic to 4/3.
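The same pinning line can also go into a vethernet-type port-profile so that every VM inheriting it is pinned; a minimal sketch (the profile name VM-DATA is just an example):

n1kv-star(config)# port-profile type vethernet VM-DATA
n1kv-star(config-port-prof)# pinning id 2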

To verify that pinning is working, run:

n1kv-star# module vem 4 execute vemcmd show port
  LTL    IfIndex   Vlan    Bndl  SG_ID Pinned_SGID  Type  Admin State  CBL Mode   Name
    6          0      1 T     0     32          32  VIRT     UP    UP    0  Trunk vns
    8          0   3969       0     32          32  VIRT     UP    UP    1 Access
    9          0   3969       0     32          32  VIRT     UP    UP    1 Access
   10          0      2       0     32           1  VIRT     UP    UP    1 Access
   11          0   3968       0     32          32  VIRT     UP    UP    1 Access
   12          0      2       0     32           2  VIRT     UP    UP    1 Access
   13          0      1       0     32          32  VIRT     UP    UP    0 Access
   14          0   3967       0     32          32  VIRT     UP    UP    1 Access
   15          0   3967       0     32          32  VIRT     UP    UP    1 Access
   16          0      1 T     0     32          32  VIRT     UP    UP    1  Trunk arp
   18   2500c040      1 T   305      1          32  PHYS     UP    UP    1  Trunk vmnic1
   19   2500c080      1 T   305      2          32  PHYS     UP    UP    1  Trunk vmnic2
  305   16000000      1 T     0     32          32  CHAN     UP    UP    1  Trunk

Make sure all the VMs show up as 2 under the Pinned_SGID column. Then test failover; it should work.
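One way to exercise the failover (a sketch using the module and interface numbers from the output above) is to shut the pinned uplink on the VSM and re-check the pinning table:

n1kv-star(config)# interface ethernet 4/3
n1kv-star(config-if)# shutdown
n1kv-star(config-if)# end
n1kv-star# module vem 4 execute vemcmd show port

After the shutdown, the Pinned_SGID column for the VMs should move to the sub-group of the surviving uplink; a "no shutdown" should return them to SG-ID 2.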

louis

Hello Louis,

Our 10G Ethernet interface is Eth3/4 and our 1G interface is Eth3/2. We changed the port-profile SYSTEM-UPLINK as below:

port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 12,16,20,24,28,32,36,48,76
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 76
  state enabled

After that, we configured the oper sub-group ID (3) as the pinning ID on the Vethernet 1 interface:

interface Vethernet1
  inherit port-profile VLAN32_172.20.32
  description TEST01, Network Adapter 1
  vmware dvport 32
  pinning id 3

Ethernet3/4
state         : up
channel_status: unknown
update        : none
mode          : on
flags         : 2
cfg flags     :
up_time       : 424898 usecs after Fri Dec  3 19:06:17 2010
auto pc       : none
auto retry    : 0
No auto create compat failure
sub-group id  : cli=4 oper=3
device id     :

But unfortunately, the "show port-channel summary" command never shows the 1G and 10G interfaces bundled together. We observe that if the 10G interface joins the port-channel bundle, the 1G interface stays down, and vice versa.

If we shut down the port that is up, traffic does not switch to the 1G interface.

Do we need a port-channel interface even if we're doing MAC pinning? If we do need a port-channel, can its members be of two different speeds? Having a port-channel makes sense if we are configuring vPC-HM, but we don't think a port-channel mechanism should be required with MAC pinning.

Best Regards,

Hi Louis,

Another option would be to create a separate port-profile for the 10G link and split the VLANs according to application priority. Of course, having said that, there isn't HA, but at least you can guarantee where certain traffic goes.
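For example, a dedicated uplink port-profile that trunks only the high-priority VLANs over the 10G link might look roughly like this (the name and VLAN numbers are placeholders):

port-profile type ethernet UPLINK-10G-ONLY
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 32,36
  no shutdown
  state enabled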

Just a thought.

Thanks,

Andrew

canero
Level 1

Hello,

After a few more hours of troubleshooting, our problem was resolved as follows:

1. The well-known port-channel rules of physical switches also apply to the Nexus 1000V. The Cisco Nexus 1000V Interface Configuration Guide, Release 4.0(4)SV1(1) (http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0/interface/configuration/guide/if_5portchannel.html#wp1196432) has a section named "Compatibility Checks" that lists the limitations of port channels, and these apply to vPC, vPC-HM with sub-groups, and vPC-HM with MAC pinning. In our case, since we tried to aggregate a 10G interface with a 1G interface, one of them always stayed in a link-up "suspended" state, and the output of "show port-channel summary" showed one of them as down. This is due to that compatibility failure.

2. Because of this, and since each VEM (corresponding to a separate blade server) has 4 separate links to 4 physically separate upstream switches, we decided to use MAC pinning. With MAC pinning, the sub-group IDs were assigned to the Eth ports automatically; each physical uplink port gets a unique sub-group ID.

3. Under the vEth ports we did not configure any pinning ID, leaving the round-robin load-balancing mechanism to decide, so the 10G interfaces are used for the VMotion and VM data traffic and the two 1G interfaces are used for control and management traffic. On the uplink port-profiles we separate these two groups with the "allowed vlan" list in the dot1q trunk configuration of the Eth ports. This way we have a total of 20G for VMotion and data traffic and 2G for management and control traffic. If an upstream link or switch goes down, the other link takes over in about one second.

A possible disadvantage of using MAC pinning is that a single vEth port always uses a single 10G or 1G port, so we lose load balancing within a sub-group; but this is not important, since we don't have dual uplinks to a single upstream switch that would let us aggregate more than one Eth port.
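For reference, the working design described above corresponds roughly to two uplink port-profiles like the following sketch. The profile names are illustrative, the VLAN numbers are reused from the attached config, and the assignment of each VLAN to the data or management group is an assumption for the example:

port-profile type ethernet UPLINK-10G-DATA
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 16,20,24,28,32,36,48
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled

port-profile type ethernet UPLINK-1G-MGMT
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 12,76
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 76
  state enabled

No pinning ID is configured under the vEthernet ports or vethernet port-profiles, so each vEth is pinned automatically to one of the uplinks that carries its VLAN.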

Best Regards,
