1000v vPC & HP Flex-10

Robert Burns
Cisco Employee

Greetings All,

We have been getting an increasing number of support questions regarding the use of the 1000v in vPC mode with HP Flex-10.  I figured it was time to detail some information here for those interested in the community.

Q.  Does 1000v/vPC work with Flex-10?

A. Yes.  vPC-HM, as the name implies, manages the redundant paths from the host side.  It does not rely on the upstream switch or device to handle load-balancing functions, so vPC-HM will work with any upstream device.  The only thing you need to be wary of is the "mode" you use.

Let's discuss vPC first

vPC-HM (Virtual Port Channel - Host Mode) was created so hosts could connect to multiple upstream devices and treat them like a single Port Channel.  The upstream devices must be Layer 2 connected.  With the original release of the 1000v - 4.0.4.SV1.1 - there were two ways to set up vPC-HM: with or without CDP.  The command would look like:

channel-group auto mode on sub-group cdp

Whether or not your upstream device supported CDP dictated whether you had to configure manual sub-group IDs.  If your upstream device was CDP capable, you were done.  If your device did not support CDP, or you wanted to override CDP, you could manually assign the sub-group IDs by setting them at the Ethernet interface level (Note: Flex-10 does NOT support CDP).  If you didn't have CDP support upstream you would add config such as:

interface Ethernet 3/1
    sub-group 0
interface Ethernet 3/2
    sub-group 1

In this case a "sub-group" refers to a unique upstream device.  In the original release, the 1000v supported only 2 sub-groups: 0 and 1.  Having to set the vPC-HM mode to "sub-group cdp" and then manually assign the sub-groups caused confusion for some users.
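To put those two pieces together, here's a rough sketch of how the original-release config might fit end to end.  The profile name and VLANs are made up, and the exact keywords (e.g. capability uplink vs. the newer port-profile types) differ between releases, so treat this as an illustration rather than a copy/paste config:

! hypothetical uplink profile - the channel-group command lives here
port-profile Uplink-vPC-HM
    capability uplink
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 10,20
    channel-group auto mode on sub-group cdp
    no shutdown
    state enabled

! the sub-group IDs live on the physical interfaces (needed when CDP isn't available upstream)
interface Ethernet 3/1
    sub-group 0
interface Ethernet 3/2
    sub-group 1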

Then comes the latest release of the 1000v - 4.0.4.SV1.2 - where we've changed things slightly.  We've extended the sub-group IDs up to 32, cleaned up the manual vs. CDP config, and added a new mode called "mac-pinning".  So all in all we now have:

channel-group auto mode on sub-group cdp

channel-group auto mode on sub-group manual

channel-group auto mode on mac-pinning

The first two should be obvious.  The new one, "mac-pinning", will pin vEthernet ports to a particular uplink interface in a round-robin fashion.  In the event of a link failure, vEthernet interfaces will be re-pinned to another uplink.  With mac-pinning there is no need to have the uplinks in a Port Channel; they will be treated as separate uplinks.  Mac-pinning should be used when a Port Channel can't be configured on the upstream switch.  Of course, if the upstream switch supports Port Channel - do it.  You'll get better bandwidth utilisation.

Additionally, with mac-pinning you can do "Dynamic Pinning", where certain traffic such as control and system traffic is pinned to a particular Port Channel member (while your data traffic pins to the other), and they still have the ability to fail over to each other in the event of a failure.  An example of when this is useful is when you only have 2 uplinks and you want to pin your vMotion traffic to one particular link, keeping the rest of your uplink members open for data traffic.  Think of it as traffic engineering + redundancy.

With any Port Channel there are 17 different load-balancing algorithms you can choose from, the default being source-mac.
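To illustrate the vMotion example above, here's a minimal sketch of a mac-pinning uplink profile plus a vEthernet profile pinned to one member.  The profile names and VLAN numbers are made up, and the pinning id syntax is from memory, so verify it against the configuration guide for your release:

! hypothetical uplink profile using mac-pinning - no upstream Port Channel required
port-profile type ethernet Uplink-MacPin
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 10,20,30
    channel-group auto mode on mac-pinning
    no shutdown
    state enabled

! hypothetical vMotion profile pinned to sub-group/member 0
! profiles left unpinned will round-robin across the remaining members
port-profile type vethernet vMotion
    vmware port-group
    switchport mode access
    switchport access vlan 20
    pinning id 0
    no shutdown
    state enabled

If source-mac isn't what you want, the load-balancing algorithm is changed with the port-channel load-balance ethernet command.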

Now let's discuss Flex-10

Flex-10 is an HP proprietary technology that can present 1 to 4 physical adapters to each blade server for EACH Flex-10 module installed in the chassis.  The total bandwidth of each Flex-10-to-server connection must equal 10G.  You can present a single 10G adapter, or a 2G, 2G, 4.5G and 1.5G adapter, for example.  The only drawback to "dividing" the 10G pipe into multiple lanes is that these become hard "limits".  If you max out your 1.5G lane and the rest of the lanes are under-utilised - tough beans.  In my own personal opinion I'd prefer to use QoS and guarantee a minimum bandwidth allocation.  This way if there's available bandwidth I could use, great - it gets used.  HP believes bandwidth-to-the-server responsibilities belong to the server admin, whereas Cisco believes this is a network admin duty.  Problems arise when responsibility boundaries turn into configuration responsibilities and they're not fully understood by those implementing them.

Flex-10 is not a switch.  It behaves more like a network MUX.  It does support LACP, which is how you'll want to connect your Flex-10 to your upstream distribution switches.  Flex-10 also provides an option called "Smart Link".  Smart Link will put the downstream interfaces in a Down state if all of the uplink interfaces go down.  This is important when vPC needs to fail over or re-pin traffic; it can't do that if the uplink interface (Flex-10) is still Up.  You can team uplink adapters together in either Active/Active (Auto-LACP) or Active/Standby (Failover mode).

1000v & Flex-10 together

What is the best way to configure the two?  Well, it comes down to design.  Remember that Flex-10 doesn't support CDP, so if you want to use vPC-HM you'll be using manual sub-groups or mac-pinning.  With mac-pinning you may not get the most efficient use of each uplink when compared to an EtherChannel, but you will be protected from mis-configuration of sub-groups and the issues that would arise if someone moves a cable from one upstream switch to another.  Mac-pinning will provide all the failover redundancy offered by EtherChannel.  If you want the best bandwidth utilisation for your buck, then you'll want to use manual sub-group IDs - just be sure you don't have datacenter gremlins that move your cables around!
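As a rough sketch of the manual option with two Flex-10 modules (one uplink to each), using the SV1.2 manual mode - again, the profile name and VLANs are made up and the exact sub-group keyword can vary by release:

port-profile type ethernet Uplink-Flex10
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 10,20,30
    channel-group auto mode on sub-group manual
    no shutdown
    state enabled

! each sub-group maps to one Flex-10 module (i.e. one upstream device)
interface Ethernet 3/1
    sub-group 0
interface Ethernet 3/2
    sub-group 1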

Depending on your traffic requirements, the simplest option is to use a single 10G interface to the host.  You'll more than likely have at least 2 x Flex-10 modules in your chassis, so this would give each host 2 x 10G adapters.  There are many best-practice documents being re-written with the advent of 10G.  One example is vMotion's best practice of dedicating a single NIC to vMotion; the new guidance is 1G of guaranteed bandwidth, since the NICs are all virtual now anyway.  Configuring your Flex-10 as a single 10G adapter is fine as long as you plan on utilising both 10G adapters with the 1000v (no vSwitches).  To provide redundancy you would need to configure a single Uplink Port Profile carrying all your System, Data, vMotion and IP Storage traffic over a single profile.  In the case where you don't require a separate vSwitch for out-of-band management, this is the preferred option.
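A consolidated uplink profile along those lines might look roughly like the following (using mac-pinning here, though manual sub-groups would work the same way).  The VLAN numbers are invented; the point to note is marking your control/packet/management VLANs as system VLANs so the VEM can always forward them, even before it has talked to the VSM:

! hypothetical single uplink profile carrying system, data, vMotion and IP storage traffic
port-profile type ethernet Uplink-All-Traffic
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 10,20,30,40,50
    channel-group auto mode on mac-pinning
    no shutdown
    ! made-up management and control/packet VLANs
    system vlan 10,20
    state enabled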

For those who wish to segment the single 10G adapter into 1-4 adapters, you can assign some uplinks to the DVS while keeping some on the vSwitch if you so choose.

We're far from experts on Flex-10, so if anyone has any comments or questions, feel free to post and we'll do our best to find answers if we're not sure.

Cheers,

Robert
