05-29-2013 04:03 AM - edited 03-07-2019 01:37 PM
Hi all,
We have a setup of 2 Nexus 7000 chassis and several fexes (N2K-C2248TP-1GE).
The fexes are connected through a port-channel to a single nexus 7000 (no vpc). (Fex 1 to Nexus 1, fex 2 to Nexus 2, fex 3 to nexus 1 etc).
Are there guidelines on how to connect a server to those fexes?
I can see several possible scenarios at our site. I have drawn some scenarios in a design.
Can anyone tell me if these are all possible? I can't find detailed information on which setup is possible and which is not. The goal is to have as much redundancy as we can.
When using scenario 1, do I configure an orphan port on the uplink to this server?
Thanks,
best Regards,
Joris
05-29-2013 05:56 AM
Hi,
Currently, you can connect a FEX to only one 7K. As for your server connectivity options, have a look at this link for the different scenarios.
look under Nexus 7000 topologies
http://jeremywaldrop.wordpress.com/2011/06/30/cisco-nexus-50007000-fex-topologies/
Also, option 4 in your design seems to be the most redundant topology.
HTH
05-29-2013 06:02 AM
Thanks Reza,
So it is possible to create a vPC to the server in scenario 4? That is, putting 2 ports on fex 1 and 2 ports on fex 2 into one vPC, even though the fexes themselves can't be connected in vPC.
Is that correct?
Thanks,
Joris
05-29-2013 06:09 AM
Hi,
I would put all the 4 ports connecting to both 2ks in the same Fex. Remember, you also need vPC peer link and vPC keep alive between the 7ks. I don't see these in your diagram currently.
HTH
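For reference, a minimal sketch of the vPC peer-link and peer-keepalive configuration Reza mentions, on one Nexus 7000 (the domain ID, interface numbers, and keepalive addresses are assumptions; mirror on the second 7K with its own source/destination):

```
! N7K-1 -- vPC domain with peer-keepalive and peer-link
feature vpc
feature lacp

vpc domain 10
  ! keepalive over mgmt0 here; a dedicated link also works
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

interface port-channel1
  switchport
  switchport mode trunk
  vpc peer-link

interface Ethernet1/1-2
  switchport
  switchport mode trunk
  channel-group 1 mode active
```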
05-29-2013 06:22 AM
Hi,
The VPC peer link and keepalive is there, it's not on the drawing.
I would like to avoid putting all 4 ports on the same fex; a single fex would be a single point of failure.
I would like to create e.g. Po1 on both Nexuses, with the member ports being the uplinks to the server, and then make them members of the same vPC.
Would that be a possibility?
Thanks,
Joris
05-29-2013 03:13 PM
Hi Joris,
Creating a vPC with 4 physical ports in it is not a single point of failure. If one physical link fails you have 3 more; if 2 fail, you have 2 more, and so on.
I would like to create e.g. Po1 on both Nexuses, with the member ports being the uplinks to the server, and then make them members of the same vPC.
The one I also recommend is one vPC/Po for all links.
HTH
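What Joris describes, one port-channel whose member ports are FEX host interfaces on both chassis bound into a single vPC, might look like the sketch below. The FEX numbers, port-channel/vPC IDs, and VLAN are assumptions; verify that vPC on FEX host ports is supported on your NX-OS release before deploying.

```
! N7K-1 (FEX 101 attached here); mirror on N7K-2 with FEX 102
feature-set fex

! Fabric port-channel up to the FEX
interface port-channel101
  switchport
  switchport mode fex-fabric
  fex associate 101

! FEX host ports facing the server NICs
interface Ethernet101/1/1-2
  switchport
  switchport access vlan 10
  channel-group 20 mode active

! Same port-channel and vPC number on both chassis
interface port-channel20
  switchport
  switchport access vlan 10
  vpc 20
```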
05-29-2013 04:18 PM
Hi,
Sorry.... had to chip in
put all the 4 ports connecting to both 2ks in the same Fex
Reza, I think this is a typo and that you meant: put all the 4 ports connecting to both 2ks in the same vPC. If not, Joris is correct that a single FEX is a single point of failure.
Otherwise I would agree with Reza, that option 4 looks to be the most resilient.
Regards
05-29-2013 05:53 PM
Guys,
Sorry about that typo. I meant same vPC.
put all the 4 ports connecting to both 2ks in the same vPC
Reza
05-29-2013 11:21 PM
Thanks Guys,
So the basic guideline is: connect a server with multiple NICs to different fexes.
When active-passive --> connect like access ports.
When NIC teaming --> use vPC over multiple fexes.
I would not use 2 active standard port-channels to 2 different fexes; I suppose you would get MAC flapping in that case.
Best Regards,
Joris
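For the active-passive case in the summary above, each NIC simply lands on a plain access (or trunk) port on its FEX, with no port-channel or vPC involved. A sketch (VLAN and interface numbers are assumptions):

```
! N7K-1, FEX 101 host port for the active NIC
interface Ethernet101/1/1
  switchport
  switchport access vlan 10
  spanning-tree port type edge

! N7K-2, FEX 102 host port for the standby NIC (same VLAN)
interface Ethernet102/1/1
  switchport
  switchport access vlan 10
  spanning-tree port type edge
```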
05-30-2013 05:19 AM
Correct. Multiple NICs to 2 different FEXes will give you the most redundancy. Also, use one port-channel for all links connecting to both FEXes, using vPC.
When active-passive --> connect like access ports.
Correct, you can do access or trunk if you have multiple vlans.
HTH
05-30-2013 09:17 AM
Thanks guys,
05-30-2013 11:43 AM
Joris,
If the answers were helpful to you, can you rate and mark the post as answered, so others can benefit from it.
Thanks