07-22-2024 04:45 AM
Hi all,
I would like your opinion and guidance on a network design for two separate data centers, each built as a Nexus 9000 compact collapsed core/aggregation-layer design, where I would like DCI/LAN extension capability.
Specifically, is it possible to build a LAN extension between the two data centers using VXLAN when the design is based on a Nexus 9000 compact collapsed core/aggregation layer?
For example, each site will have a pair of core/aggregation 9Ks running vPC to the downstream ToR 9K access-layer switches, where the servers will be dual-homed with vPC.
Some VLANs/SVIs will be terminated on an FTD appliance that is dual-attached to the collapsed core 9Ks, and others will be terminated locally on the 9K core/aggregation devices.
Is there a how-to guide or a CVD for this scenario?
Is this a valid go-to solution and design, or should I consider a spine/leaf topology? There are fewer than 10 server racks on each site, so I want to keep it as simple as possible.
Thanks.
07-30-2024 11:55 PM
If your goal is to use VXLAN only between the data centers, you can build a back-to-back vPC BGW topology and enable VXLAN only on those nodes. In that case, the core switches will also perform the BGW role.
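For reference, the Multi-Site BGW role on a Nexus 9000 is enabled roughly as follows. This is a minimal sketch only; the site-ID, interface numbers, loopbacks, and VNI are placeholder values, not taken from your design:

```
! Sketch: enabling the Multi-Site BGW role on NX-OS (example values only)
nv overlay evpn
feature nv overlay

evpn multisite border-gateway 100          ! unique site-ID per DC
  delay-restore time 300

interface Ethernet1/1                      ! link toward the remote site's BGW
  evpn multisite dci-tracking

interface Ethernet1/2                      ! link toward the local fabric/access
  evpn multisite fabric-tracking

interface nve1
  host-reachability protocol bgp
  source-interface loopback1
  multisite border-gateway interface loopback100
  member vni 10100
    multisite ingress-replication
```

The key point is that the BGW role is just configuration on the same chassis; the `dci-tracking`/`fabric-tracking` statements tell the switch which links face the DCI and which face the local site.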
07-31-2024 06:33 AM
Yes, my main goal is to have L2 extension between our two data centers. There is no need to run VXLAN inside the DC because there are not many racks/servers; the collapsed core design works just fine for us.
Thanks for the info, I will have a look at it and I'll come back for any questions.
10-17-2024 12:05 PM
If I understand correctly, in the back-to-back BGW topology I would have to physically interconnect the BGWs. That creates a physical-connectivity problem, because I already have a Layer 3 core between the two DCs that I want to take advantage of. Due to high telecom costs, buying additional circuits from the local providers for the back-to-back BGW connectivity is not an option.
So in that case, I understand the BGW-to-cloud model would be the better choice.
One thing I should mention is that in one of the two DCs, the Layer 3 core link will terminate on the collapsed core/aggregation 9K switch, which will also be the BGW.
On the other side, the link will terminate on a dedicated Layer 3 router, which will have Layer 3 links back to the BGW.
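In the BGW-to-cloud model, the DCI-facing side is just a routed underlay session toward the existing Layer 3 core. A rough sketch of what that link might look like on the BGW, assuming illustrative ASNs and addressing (65001/65000 and 10.0.0.0/30 are placeholders):

```
! Sketch: BGW underlay link toward an existing L3 core (BGW-to-cloud model)
! ASNs and IP addresses below are illustrative only.
interface Ethernet1/10
  description To L3 core router (DCI underlay)
  no switchport
  ip address 10.0.0.1/30
  evpn multisite dci-tracking      ! classifies this link as DCI-facing

router bgp 65001
  neighbor 10.0.0.2
    remote-as 65000                ! core/provider ASN
    address-family ipv4 unicast    ! carries underlay loopback reachability
```

The core only needs to provide IP reachability between the BGW loopbacks; the EVPN overlay sessions ride on top of that.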
Do you see any showstoppers here?
Thanks.
10-17-2024 12:29 PM
I have also checked the VXLAN EVPN Multi-Site with vPC BGWs white paper, but it too says the BGWs should be interconnected with each other.
This topology is actually pretty close to what I want to achieve, with the difference that the Layer 3 link will terminate on the collapsed core/aggregation 9K switch and not back-to-back between BGWs. Is that valid? (Something like the OTV-on-a-stick topology.)
Thanks
10-17-2024 12:46 PM
@Pavel Tarakanov Sorry, I got confused. While reading the white papers, I was under the impression that the BGWs and the core switches had to be separate devices, but you wrote that the core switches can also perform the BGW role.
If that is possible, then I suppose it is valid to use the same chassis as both BGW and core/aggregation switch, and then use vPC BGW nodes to connect multiple legacy data center sites.
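If I have understood it right, combining the roles would mean the existing vPC domain and the BGW role simply coexist on the same pair of switches, something along these lines (a sketch with placeholder values, not a verified design):

```
! Sketch: collapsed core/aggregation 9K pair also acting as vPC BGWs.
! vPC domain and Multi-Site BGW role on the same chassis; values illustrative.
vpc domain 10
  peer-switch
  peer-gateway
  ip arp synchronize

interface port-channel1
  switchport mode trunk
  vpc peer-link                    ! existing vPC peer link stays as-is

evpn multisite border-gateway 100  ! BGW role added on the same switches
```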
Thanks