
LACP and vPC or MAC pinning?

Patrick Colbeck
Level 3

Hi

With a blade server (in this case an HP C7000 with B22 FEX modules) connected via four 10Gb uplinks (two per FEX) to a pair of Nexus switches, what is preferable: using vPC and LACP to bind the whole lot into a single EtherChannel, or using MAC pinning?

I am leaning towards vPC and LACP as it feels logically the simplest to me, but the VMware guys would like MAC pinning as it allows them to use multi-NIC vMotion. I guess you also need MAC pinning if you want iSCSI multipathing, but the storage here is traditional FC, so that's not an issue.

Are there any pros and cons for either option? I would have thought vMotion would be pretty quick over 10Gb anyway.

1 Accepted Solution

Accepted Solutions

sprasath
Level 1

Hello,

If the upstream switches can be clustered, then vPC (with LACP) is the preferred option. This allows better load-balancing options from both the Nexus 1000V and the upstream switch perspective.
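
As a rough sketch only (the vPC domain, port-channel number, VLAN range, and interface names below are placeholders, not taken from this thread; with a FEX the host-facing ports would be numbered like Ethernet101/1/x), the vPC/LACP design looks roughly like this on each upstream Nexus and on the 1000V uplink port-profile:

! On each upstream Nexus vPC peer (placeholder numbering)
feature vpc
feature lacp
vpc domain 10
  ! peer's mgmt address is a placeholder
  peer-keepalive destination 192.0.2.2
interface port-channel 20
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  vpc 20
interface Ethernet1/1-2
  switchport mode trunk
  ! "active" = LACP
  channel-group 20 mode active

! On the Nexus 1000V: LACP uplink port-profile for the ESXi vmnics
feature lacp
port-profile type ethernet UPLINK-LACP
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  channel-group auto mode active
  no shutdown
  state enabled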

Thanks,

Shankar


3 Replies


paul.tatum
Level 1

Since you stated you are using traditional FC, I wanted to let you know that if you are using FCoE on the B22 FEX for the four 10G links to your ESXi servers, you will be unable to bundle all the links in a single vPC. Currently, FCoE-over-vPC only allows two-link vPCs.

Your two best options with the Nexus 1000V are to either use MAC pinning (simple) or create two vPCs per server and use CDP sub-groups (complex). I deployed a similar solution and decided to use MAC pinning. The solution is very clean and straightforward, and since every link was identical I was able to use a single uplink Ethernet port-profile on the N1KV and a single downlink port-profile on the parent Nexus switch for all ESXi servers. I also enabled support for multi-NIC vMotion, which significantly decreased migration time.
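
For comparison, a MAC-pinning uplink port-profile on the 1000V is only a few lines; the profile name and VLAN range below are placeholders rather than the exact configuration described above:

! Nexus 1000V uplink port-profile using MAC pinning (placeholder name/VLANs)
port-profile type ethernet UPLINK-MAC-PIN
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  ! pin each vEthernet/vmknic MAC to one physical uplink;
  ! no port-channel or vPC is required on the parent switch ports
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled

Because the parent switch ports stay as independent trunks, each vMotion vmknic can be pinned to a different uplink, which is what makes multi-NIC vMotion straightforward with this design.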

If you have any additional questions please let me know.

Paul Tatum

CCDE #20140022, CCIE #26866 (DC/R&S/SP)