Nexus VPC extension

tsd
Level 1

Hi,

 

First time poster so please forgive any mistakes. 

I need some advice regarding a design question. We want to connect Nexus server (access) switches to our core devices (Nexus 9k), and the core devices will be vPC peers. The end servers will be connected to the server switches. Is there any way we can extend the vPC from the core all the way to the end servers, so that in effect the server switch behaves similarly to a FEX (using FEX is not an option)? I have attached a diagram of the setup we're trying to build.

 

We can also have VPC peering from the cores to each server switch. What would be the pros/cons in that case?

 

Alternatively, we could create the vPCs on the switches connecting to the cores. But that would complicate things further, and we want to avoid going that route: most of the end servers in the virtualized environment need access to the same VLANs, since they will be clustered together and must be able to move VMs seamlessly between physical servers as needed.

 

 

Any advice would be greatly appreciated!

 

Thanks! 

 

 

1 Accepted Solution


Sergiu.Daniluk
VIP Alumni

Hi @tsd 

I would assume that the gateway of your servers resides on the Nexus switches marked as CORE, right?

Presuming that is the case and all the access switches are also Nexus switches, the design I would recommend is the following:

[Attached diagram: Screenshot 2022-02-06 091811.png]

 

This is what is called back-to-back vPC, between two VPC domains.

If the L2 switches (access layer) are not Nexus, the only option you have is to configure a vPC port-channel to each L2 switch.
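
As a rough sketch of what the back-to-back link could look like (the domain IDs, port-channel numbers, interfaces and keepalive addresses below are only examples, adjust them to your environment), each core switch would have something like:

feature vpc
feature lacp
!
vpc domain 1
  ! keepalive typically runs over mgmt0 (vrf management); example addresses
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
!
! vPC peer link between the two core switches
interface port-channel1
  switchport mode trunk
  vpc peer-link
!
! back-to-back port-channel toward the access vPC pair
interface port-channel100
  switchport mode trunk
  vpc 100
!
interface Ethernet1/1-2
  switchport mode trunk
  channel-group 100 mode active

The access pair would mirror this with its own vPC domain (e.g. domain 12) and its own vPC port-channel pointing back at the cores, so each side sees the other as a single logical switch behind one port-channel.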

 

Stay safe,

Sergiu


12 Replies

Hi Sergiu,

 

You're right, the gateways of the servers will reside on the switches marked as Core.

 

I'm relatively new to Nexus, so I will look into back-to-back vPCs as you suggested (this definitely looks like the way we want to go). We'll have 2 core switches and 8 access switches connecting to those 2 cores. I'm assuming we can have 4 pairs of 2 access switches, each pair connected to the cores in a back-to-back configuration, without issues, so 5 vPC domains in total compared to the 3 in the setup you demonstrated.

 

Thanks a lot, really appreciate it!

 

Yes, you can connect as many back-to-back vPC pairs as you want directly to the CORE switches. The only limitation is the number of domains chained back-to-back, which is a maximum of 3. So, for example, you cannot connect another vPC pair back-to-back to domain 34, because there would then be 4 domains in the chain:

 

vpc domain 12 =0= vpc domain 1 =0= vpc domain 34 =0= vpc domain 56

 

Stay safe,

Sergiu

Thanks a lot for that info, Sergiu! Understood regarding the domains chaining back-to-back; we will be at 3 domains since each pair will be directly connected to the core switches, so within the permissible limit.

 

Best regards

Hi Sergiu,

I have tried a similar design in our data center. Compared to the design above, my newly added vPC domain is 34. Whenever I bring up the ports from NX 3 and 4 (connected to the core switches), an STP loop occurs between vPC domains 1 and 12. I have attached the log from the N9K-1 switch.

Can you please help with any possible suggestion to avoid this loop?

 

If you see STP loops, the problem is usually related to a misconfiguration or to unidirectional ports, and the first is the most common. It's hard to pinpoint the cause of the problem you're experiencing without looking at the config. Also, the best way to troubleshoot this is live (with TAC support). If that is not an option, you should open a new topic about the problem and attach the physical topology along with your config and logs.
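
As a starting point (assuming you have CLI access to both vPC domains), the usual checks on each vPC pair would be along these lines:

! peer status, consistency and per-vPC state
show vpc
show vpc consistency-parameters global
! use your own port-channel number here
show vpc consistency-parameters interface port-channel 100
! are all members bundled on both sides?
show port-channel summary
! root changes, blocked ports, BPDU guard events
show spanning-tree summary
! unidirectional links, if UDLD is enabled
show udld neighbors

Anything that is inconsistent between the peers, or up on only one side, is worth including in the new topic.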

Take care,

Sergiu

Thanks for your response Sergiu!

I have opened a new topic as you suggested: https://community.cisco.com/t5/data-center-switches/nexus-vpc-extension-facing-stp-loop/m-p/4697350#M8600

Would you please have a look at it and lend a hand?

 

tsd
Level 1

Hi Sergiu,

 

Question regarding your comment: "If the L2 switches (access layer) are not Nexus, the only option you have is to configure a vPC port-channel to each L2 switch." So in this case we'd have a vPC port-channel from the core pair to each L2 switch? Can the access layer switches be made to behave like transparent switches?

As I understand it, back-to-back vPC runs between the access switches and the core/aggregation switches, with both pairs supporting vPC. With back-to-back vPC, the server connects to both access switches, which appear virtually as one switch (a "smart" switch pairing); the server never notices it is connected to two different switches.

With switches that cannot be paired this way, the server will see two separate switches, and there are VMware options that can handle that case.

Hi @tsd 

If the access layer switches are not Nexus, then obviously they do not support vPC. They may have another MC-LAG type of protocol which can bundle two (or more) switches together. If they don't, then you just have to configure a vPC port-channel toward each access switch.
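
Just to illustrate that last option (the numbers and interfaces are examples only), each non-Nexus access switch would get its own vPC port-channel, configured identically on both core N9Ks:

! on both core switches: one vPC per access switch
interface port-channel11
  switchport mode trunk
  vpc 11
!
interface Ethernet1/11
  switchport mode trunk
  channel-group 11 mode active

On the access switch side it is just an ordinary LACP port-channel with one member link to each core; the access switch does not need to know vPC exists, it simply sees a single upstream switch on that bundle.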

 

Take care,

Sergiu

Thanks Sergiu!

 

How much oversubscription is recommended? Going from access to core, should I use two 100G ports (1:2.4, and buy new transceivers), or maybe three 40G ports (1:4)? If I use three 40G ports to each core (six QSFP ports in total on the server switch), then I'll have to use copper ports for the vPC peer link, and maybe the keepalive as well. Are there any concerns with using copper ports for the vPC peer link? I've read that, at a minimum, 2 x 10GE links are fine for the peer link (though ideally we won't have much traffic flowing across it).
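(For reference, those ratios work out if each access switch has roughly 480G of server-facing capacity, e.g. 48 x 10G ports: with 2 x 100G uplinks per core-facing side that is 480/200 = 2.4:1, and with 3 x 40G it is 480/120 = 4:1.)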
