04-12-2017 03:42 PM - edited 03-01-2019 08:31 AM
We are thinking about using VXLAN to extend L2 between the DCs. The customer has N9Ks at both sites and a single point-to-point link between them. I am not able to find a design or configuration guide for using VXLAN as a DCI solution. Please note that VXLAN will not be used within the DCs; they will remain VLAN-only. We want to implement VXLAN only to extend the L2.
04-12-2017 06:27 PM
What about using OTV?
Scratch that. OTV is not available on the N9K.
04-14-2017 04:37 PM
Hi Philip,
Thanks for your response and the VXLAN for DCI document. What I see in the document is a spine-leaf type of architecture to extend Layer 2 between geographically dispersed DCs. What we have here is a simple setup: a UCS chassis connecting to a Nexus 9K acting as a Layer 2 access switch, which in turn connects to a Nexus 5K distribution switch, so we do not have any spine-leaf model. Looking at the VXLAN deployment for DCI, I think the only real option is to go for ASR1001 routers and extend the VLANs using OTV.
What I was thinking and hoping for is to use VXLAN between my Nexus 93272UP access switches in both DCs to extend the VLANs, using VXLAN only for the DCI part and not for the actual traffic within the DC. But looking at the options, it seems OTV with additional ASR1000 routers is the only way to go.
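For what it is worth, the kind of configuration I had in mind on each Nexus 9K access switch is a plain VXLAN L2 gateway with static ingress replication pointing at the peer DC over the point-to-point link. This is only a rough sketch with placeholder VLAN/VNI numbers, loopbacks and addressing, not a validated design:

feature nv overlay
feature vn-segment-vlan-based

! VLAN to be stretched, mapped to a VXLAN VNI (placeholder values)
vlan 10
  vn-segment 10010

! Loopback used as the VTEP source; must be reachable from the remote DC
interface loopback0
  ip address 10.0.0.1/32

! Routed point-to-point link towards the remote DC
interface Ethernet1/49
  no switchport
  ip address 192.168.255.1/30

! VTEP with static ingress replication towards the remote N9K loopback
interface nve1
  no shutdown
  source-interface loopback0
  member vni 10010
    ingress-replication protocol static
      peer-ip 10.0.0.2

The remote N9K would mirror this with the addresses swapped, plus a static route or IGP over the point-to-point link so the two loopbacks can reach each other.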
10-03-2017 07:49 AM - edited 10-03-2017 07:50 AM
Just a slight update on this question in case you have not yet deployed your DCI or for other readers.
Since release 7.0(3)I7(1) there is a new feature to interconnect VXLAN EVPN-based fabrics that relies on VXLAN EVPN as the transport (data plane and control plane) between sites. This feature is called VXLAN EVPN Multi-Site. This solution offers a hierarchical integration of the DCI services into the same border leaf nodes (single-box deployment), which stitch the L2 VNI between the intra-fabric and inter-fabric domains. Have a look at this CG.
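To give an idea of what this looks like on a border gateway node, here is a minimal sketch; the site-id, VNI, loopbacks and timer are placeholders, and the BGP EVPN peerings towards the local fabric and the remote sites are assumed to already be in place:

nv overlay evpn
feature nv overlay
feature vn-segment-vlan-based

! Declare this node as a border gateway for site 1 (placeholder site-id)
evpn multisite border-gateway 1
  delay-restore time 300

interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  ! Shared virtual IP used by all border gateways of this site
  multisite border-gateway interface loopback100
  member vni 10010
    ! Stitch/re-originate the L2 VNI towards the remote sites
    multisite ingress-replication
    ingress-replication protocol bgp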
hope that helps, yves
10-05-2017 09:55 AM
Hello,
Thank you for the pointer
Let me add a slight comment.
This design was introduced with the first generation of Nexus 9K, offering a more robust way to stretch a single VXLAN domain across long distances.
As of today, the CloudScale ASIC (N9K-EX/FX) supports L2 VNI stitching in hardware, at line rate, in a single box. VXLAN EVPN Multi-Site leverages this L2 VNI stitching to create a hierarchical architecture in a single-box design, reducing the failure domain to its smallest diameter. This is now the recommended solution to interconnect multiple VXLAN EVPN fabrics for L2 and L3 extension using a single transport.
Even for a VXLAN EVPN fabric built with the first generation of N9K, or for an existing VXLAN EVPN Multipod deployment, adding a single pair of new N93xx-EX border gateways per site is enough to build the full end-to-end Multi-Site architecture. It is also possible to reuse the same border gateway platform to locally attach L3 services (FW, routers, SLB). Endpoint attachment at Layer 2 (including vPC members) will be supported shortly.
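On those border gateway nodes, one piece of per-interface configuration worth mentioning is the tracking of the DCI-facing and fabric-facing links, so the BGW can isolate itself if either side fails. A small sketch with placeholder interfaces and addressing:

! Link towards the remote site (DCI side)
interface Ethernet1/1
  no switchport
  ip address 10.254.1.1/30
  evpn multisite dci-tracking

! Link towards the local spine (fabric side)
interface Ethernet1/2
  no switchport
  ip address 10.254.2.1/30
  evpn multisite fabric-tracking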
Best regards, yves