03-12-2013 05:45 AM
Hi
I have a two-Nexus-2248 FEX top-of-rack design, and I need to connect my four ESX 5.1 servers. My initial design used the Nexus 1000v, two EtherChannels up to the Nexus 2248s, and a vPC link for redundancy.
 N2k ---- vPC ---- N2k
  |                 |
  |                 |
 ---------------------
         N1K
 ---------------------
I'm not sure if this is possible, because the BPDUs received on the 2248 would stop me attaching another switch (the Nexus 1000v) underneath it. Is this correct?
Also, if I can not use vPC to uplink to the N2Ks, does this mean I need two EtherChannels, with STP blocking one link?
How have other people overcome this issue? I'm sure I'm not the only one.
Regards
Steve
03-12-2013 06:29 AM
Hello Steve,
This is the recommended topology for N1k hosts. The N1k drops all BPDUs on its ingress interfaces and does not transmit any - we don't run STP. Use LACP for the port-channels.
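As a rough sketch, an N1k uplink port-profile using LACP might look like the below. The profile name, VLAN range, and system VLAN are just example values, not from this thread:

```
! Hypothetical N1k Ethernet (uplink) port-profile using LACP
port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  ! "mode active" makes the N1k negotiate LACP with the upstream switch
  channel-group auto mode active
  no shutdown
  system vlan 10
  state enabled
```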
Matthew
03-13-2013 04:02 AM
Hi Matthew
Thanks for the quick reply. Am I right in assuming that this means I will have to implement MAC pinning on the EtherChannels?
Steve
03-13-2013 06:15 AM
Hello Steve,
No. I would run vPC across the N2ks with LACP port-channels.
Follow page 2 "N1k on UCS-C or Rack Mount servers connected to Nexus 5k/7k switches with vPC" in this doc:
https://supportforums.cisco.com/docs/DOC-18222
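On the upstream side, since the 2248 FEX host ports are configured on the parent switches, a host vPC might be sketched like this on each Nexus 5k parent. The interface, port-channel, and vPC numbers here are assumptions for illustration; the linked doc has the authoritative steps:

```
! Hypothetical config on each Nexus 5k parent (numbers are examples)
interface Ethernet101/1/1
  description ESX host uplink via FEX 101
  switchport mode trunk
  ! "mode active" matches the LACP setting on the N1k uplink
  channel-group 10 mode active

interface port-channel10
  switchport mode trunk
  ! same vPC number on both parents ties the two legs together
  vpc 10
```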
Matthew