03-14-2014 07:10 AM
Hi
With a blade enclosure (in this case an HP C7000 with B22 FEX modules) connected via four 10Gb uplinks (two per FEX) to a pair of Nexus switches, what is preferable: using vPC and LACP to bind the whole lot into a single EtherChannel, or using MAC pinning?
I am leaning towards vPC and LACP as it feels logically simplest to me, but the VMware team would prefer MAC pinning because it allows them to use multi-NIC vMotion. I guess you need MAC pinning if you want iSCSI multipathing, but the storage here is traditional FC, so that's not an issue.
Are there any pros and cons for either option? I would have thought vMotion would be pretty quick over 10Gb anyway.
03-17-2014 03:30 AM
Hello,
If the upstream switches can be clustered, then vPC (with LACP) is the preferred option. This allows better load balancing options from both the Nexus 1000v and the upstream switch perspective.
Thanks,
Shankar
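For illustration, a minimal sketch of the vPC/LACP approach on both sides, assuming the vPC peer link is already configured between the upstream Nexus pair (interface numbers, VLAN ranges, and profile names below are hypothetical):

```
! Upstream Nexus switch (repeat on the vPC peer)
interface Ethernet1/10
  description To-blade-FEX-uplink
  channel-group 100 mode active      ! LACP
interface port-channel100
  switchport mode trunk
  vpc 100

! Nexus 1000V uplink port-profile forming an LACP bundle
port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  channel-group auto mode active     ! LACP toward the upstream vPC
  no shutdown
  state enabled
```

With this in place the N1KV sees one logical uplink, so any flow can use any member link and failover is handled by the channel rather than by per-MAC repinning.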
03-17-2014 07:51 AM
Hi,
You can find a working config for your scenario here:
N1KV on UCS C-Series Servers or Rack Mount Servers Connected to Nexus 5000/7000 Switches with vPC
Additionally,
Cisco Nexus 1000V Interface Configuration Guide, Release 4.2(1)SV2(1.1)
Thanks,
Joe
03-20-2014 06:22 AM
Since you stated you are using traditional FC, I wanted to let you know that if you are instead running FCoE on the B22 FEX for the four 10G links to your ESXi servers, you will be unable to bundle all four links into a single vPC; currently FCoE over vPC only supports two-link vPCs.
Your two best options with the Nexus 1000V are to either use MAC pinning (simple) or create two port channels per server using vPC host mode with CDP sub-groups (complex). I deployed a similar solution and decided to use MAC pinning. The result is very clean and straightforward: since every link was identical, I was able to use a single uplink Ethernet port-profile on the N1KV and a single downlink port-profile on the parent Nexus switch for all ESXi servers. I also enabled multi-NIC vMotion, which significantly decreased migration time.
If you have any additional questions please let me know.
Paul Tatum
CCDE #20140022, CCIE #26866 (DC/R&S/SP)