07-08-2015 10:27 PM
I'm planning the details of how to configure our new 9300 switches (two in each of two DCs, connected by two runs of dark fibre).
Is there anything that I need to be aware of that will not work? I haven't been able to find much in the way of documentation for EVPN.
We have a mixture of 4500E Sup8s and 3850 stacks that will dual-home to one pair of 9300s over point-to-point routed links.
The 4 x 9300s will form a ring from a cabling perspective, but I assume I would connect them as point-to-point /31s, run an IGP like EIGRP or OSPF, and overlay VXLAN with MP-BGP EVPN.
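For what it's worth, a minimal sketch of that underlay/overlay on one of the 9300s might look like the following, assuming NX-OS syntax; all interface names, addresses and AS numbers here are placeholders, and OSPF is shown rather than EIGRP:

```
! Hypothetical sketch - names, addresses and AS numbers are placeholders
feature ospf
feature bgp
feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn

interface Ethernet1/49
  description p2p ring link to next 9300
  no switchport
  mtu 9216                              ! allow for ~50 bytes of VXLAN encap
  ip address 10.0.0.0/31
  ip router ospf UNDERLAY area 0.0.0.0

interface loopback0
  ip address 192.168.0.1/32
  ip router ospf UNDERLAY area 0.0.0.0

router ospf UNDERLAY
  router-id 192.168.0.1

router bgp 65001
  neighbor 192.168.0.2
    remote-as 65001
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
```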
07-09-2015 12:33 AM
Hi,
For general VXLAN, MP-BGP, EVPN etc., then the VXLAN Network with MP-BGP EVPN Control Plane white paper is quite good with designs and configuration examples.
There's also a bunch of presentations on Cisco Live. Take a look at BRKDCT-3378 - Building Data Center Networks with Overlays (VXLAN/EVPN & FabricPath) and BRKDCT-2404 - VXLAN Deployment Models - A practical perspective.
While the aforementioned will give you some good material, they focus mainly on building a Clos-based leaf-spine fabric for connectivity within a single data centre. And there's a reason for that.
Cisco are not proposing MP-BGP and EVPN as a data centre interconnect at this stage, as not all the required features are implemented yet. Note in particular slide 108 in the BRKDCT-3378 presentation.
Taken from that slide:
VXLAN applicability evolves as the Control Plane evolves!
If you're watching the presentation the discussion around this is approximately 1hr. 20mins. in.
You might want to go back to your Cisco rep, go through what you're proposing, and get their view on its applicability and risk.
Regards
07-09-2015 07:05 PM
Thanks for the reply. Those slides are great - I had the larger 3378 deck but hadn't read it yet; I've just been skimming through (wow, that detail!). I hadn't seen 2404 before.
I think I understand your points about DCI; however, after looking through the slides and the limitations involved versus OTV, I don't think it will be a problem in our particular case.
To expand on my earlier info: the two DCs I'm building this on are our smaller sites that most compute is being moved out of, but there is quite a lot of L2 to contend with during the migration. The larger, main DCs currently run OTV over 7Ks (only one per DC). My plan was to trial VXLAN/EVPN and interconnect it to the 7Ks with plain dot1q trunks, since Cisco have told me the loop-free nature of both VXLAN and OTV should contain any issues (the four DCs form a square, not a mesh).
After reading that doc I'm still not fully clear on how vPC is involved (whether it's necessary or not), and what issues I might have given that we'll be using leaf-only rather than spine/leaf.
07-11-2015 06:50 AM
I'd be interested in seeing a diagram of what you're planning. I think that given there are different control planes for OTV and VXLAN you need to be very careful that you can't get a loop created somehow.
As far as vPC, this may be an obvious statement, but Cisco vPC will only be necessary if you want to connect downstream devices to the N9K via port-channels. I've got a setup where my downstream devices are not using any sort of link aggregation, so I don't have any vPC configuration on the switch pair.
If you're using vPC, then there's an additional step from a VXLAN perspective, and that's to configure a secondary IP address on the nve interface. This is documented in the Virtual Port-Channel VTEP in MP-BGP EVPN VXLAN section of the white paper.
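In practice, the shared VTEP address on a vPC pair is just a secondary IP on the NVE source loopback. A hedged sketch, with placeholder addresses (the secondary address must be identical on both vPC peers):

```
! Hypothetical example - addresses are placeholders
interface loopback1
  ip address 192.168.1.1/32               ! unique per switch
  ip address 192.168.1.100/32 secondary   ! shared anycast VTEP address

interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
```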
It's not explicitly stated in the document, or at least not that I've seen, but I think there are a number of reasons for using a shared VTEP IP address on a vPC pair.
It's from another vendor, but it may also be worth you also taking a look at the Data Center Interconnection with VXLAN guide. From what you describe, I think what's shown in this guide is similar to what you're intending to do.
Regards
02-19-2016 03:33 AM
I'm looking to do something very similar to this and wondered how you got on with it.
Thanks
03-09-2016 01:52 PM
So it works quite well, albeit in limited production use so far. I can see the benefits of the spine/leaf topology for when we grow it further, though.
One thing I found was that VRFs are mandatory if you want VXLAN gateways for your VXLANs on the same boxes, even if you have only a single tenant.
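In case it helps anyone else, the single-tenant VRF plus L3 VNI arrangement looks roughly like the sketch below; the VRF name, VLAN/VNI numbers, MAC and addresses are all placeholders:

```
! Hypothetical single-tenant example - names and numbers are placeholders
vrf context TENANT-1
  vni 50000                                 ! L3 VNI for routing between VXLANs
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn

vlan 2500
  vn-segment 50000                          ! VLAN carrying the L3 VNI
interface Vlan2500
  no shutdown
  vrf member TENANT-1
  ip forward

fabric forwarding anycast-gateway-mac 0000.2222.3333
interface Vlan100                           ! distributed gateway for one VXLAN
  no shutdown
  vrf member TENANT-1
  ip address 10.1.100.1/24
  fabric forwarding mode anycast-gateway
```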
My topology is mostly as described above - four nodes, each performing all the functions of spine/leaf/border leaf, with each pair running vPC to downstream connected hosts. This is pure unicast - no multicast is configured whatsoever, and none is required since it uses ingress replication. I was really disappointed with the lack of knowledge and documentation on this!
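The unicast-only BUM handling comes down to per-VNI ingress replication on the NVE interface; a sketch with placeholder VNI numbers:

```
! Hypothetical example - VNI number is a placeholder
interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  member vni 10100
    ingress-replication protocol bgp        ! BGP EVPN distributes the VTEP list
```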
03-20-2018 01:49 AM
Hello Brendan,
EVPN on 9300s as a collapsed core with routed access
You can also refer to the guide at the link below.
09-27-2023 05:31 PM
Any chance you can share your configs here? I'm looking to do something similar; my sites are just connected via L3 MPLS with VeloClouds in between…