EVPN on 9300's as collapsed core to routed-access


I'm planning the details of how to configure our new 9300 switches (two in each of two DCs, connected by two runs of dark fibre).


Is there anything that I need to be aware of that will not work? I haven't been able to find much in the way of documentation for EVPN.


We have a mixture of 4500E Sup8s and 3850 stacks that will dual-home to one pair of 9300s over point-to-point routed links.

The four 9300s will form a ring from a cabling perspective, but I assume I would connect them as point-to-point /31s, run an IGP such as EIGRP or OSPF, and overlay VXLAN with MP-BGP EVPN on top.
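For reference, a minimal sketch of what I have in mind on NX-OS. All interface numbers, addresses, AS number and process names below are made up for illustration:

```
! Underlay: point-to-point /31 links around the ring, OSPF as the IGP
feature ospf
feature bgp
feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn

interface Ethernet1/49
  no switchport
  ip address 10.0.0.0/31
  ip router ospf UNDERLAY area 0.0.0.0
  no shutdown

! Overlay: iBGP EVPN between loopbacks, VXLAN VTEP on nve1
router bgp 65001
  address-family l2vpn evpn
  neighbor 10.255.0.2 remote-as 65001
    update-source loopback0
    address-family l2vpn evpn
      send-community extended

interface nve1
  no shutdown
  source-interface loopback0
  host-reachability protocol bgp
```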

6 Replies

Steve Fuller


For general VXLAN, MP-BGP, EVPN etc., then the VXLAN Network with MP-BGP EVPN Control Plane white paper is quite good with designs and configuration examples.

There's also a bunch of presentations on Cisco Live. Take a look at BRKDCT-3378 - Building Data Center Networks with Overlays (VXLAN/EVPN & FabricPath)  and BRKDCT-2404 - VXLAN Deployment Models - A practical perspective.

While the aforementioned will give you some good material, they focus mainly on the building of Clos based leaf-spine fabric for connectivity within a single data centre. And there's a reason for that.

Cisco are not proposing MP-BGP and EVPN as a data centre interconnect at this stage, because not all the required features are implemented yet. Note in particular slide 108 in the BRKDCT-3378 presentation.

Taken from that slide:

VXLAN applicability evolves as the Control Plane evolves!

  • Yesterday: VXLAN, yet another Overlay
    • Data-Plane only (Multicast based Flood & Learn)
  • Today: VXLAN for the creation of scalable DC Fabrics – Intra-DC
    • Control-Plane, active VTEP discovery, Multicast and Unicast (Head-End Replication)
  • Future: VXLAN for DCI – Inter-DC
    • DCI Enhancements (ARP caching/suppress, Multi-Homing, Failure Domain isolation, Loop Protection etc.)

If you're watching the presentation the discussion around this is approximately 1hr. 20mins. in.

You might want to go back to your Cisco rep, go through what you're proposing, and get their view on its applicability and risk.


Thanks for the reply. Those slides are great. I had the larger 3378 deck but hadn't read it yet, just been skimming through (wow, that detail!), and I hadn't seen 2404 at all.

I think I understand your points about DCI, however after looking through the slides and the limitations it involves vs OTV I don't think it will be a problem in our particular case.

To expand on my earlier info, the two DCs I'm building this on are our smaller sites that most compute is being moved out of, but there is quite a lot of L2 to contend with during migration. The larger, main DCs currently run OTV over 7Ks (only one per DC). My plan was to trial VXLAN/EVPN and interconnect it to the 7Ks with plain dot1q trunks, since Cisco have told me that the loop-free nature of both VXLAN and OTV should contain any issues (the four DCs form a square, not a mesh).

After reading that doc I'm still not fully clear on how vPC is involved (whether necessary or not), and what issues I might have given that we'll be using only leaf and not spine/leaf.

I'd be interested in seeing a diagram of what you're planning. I think that given there are different control planes for OTV and VXLAN you need to be very careful that you can't get a loop created somehow.

As far as vPC, this may be an obvious statement, but Cisco vPC will only be necessary if you want to connect downstream devices to the N9K via port-channels. I've got a setup where my downstream devices are not using any sort of link aggregation, so I don't have any vPC configuration on the switch pair.

If you're using vPC, then there's an additional step from a VXLAN perspective: configuring a secondary IP address on the nve interface. This is documented in the Virtual Port-Channel VTEP in MP-BGP EVPN VXLAN section of the white paper.
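To make that concrete, the shared VTEP address is just a secondary IP on the NVE source loopback, configured identically on both vPC peers. The addresses here are hypothetical:

```
! On both switches of the vPC pair
interface loopback0
  ip address 10.255.0.1/32                ! unique per switch (e.g. 10.255.0.2 on the peer)
  ip address 10.255.0.254/32 secondary    ! shared (anycast) VTEP IP, identical on both peers

interface nve1
  source-interface loopback0
```

Remote VTEPs then learn vPC-attached hosts against the shared secondary address, so either peer can receive the traffic.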

It's not explicitly stated in the document, or at least not that I've seen, but I think there are a number of reasons for using a shared VTEP IP address:

  • In the event of a failure of one of the switches forming the vPC pair, there's no change in the control plane on any of the other VTEPs. The IP/MAC address of any device connected to the remaining operational switch of the pair will still be reachable by the same shared VTEP address so resulting in faster convergence.
  • Using a shared VTEP address avoids MAC moves. For example, let's assume we have a host h1 connected via vPC to switches s1 and s2. If the first frame from h1 goes via s1, that switch will pass the IP/MAC for h1 via the control plane (and route reflector) to all other VTEPs, including s2. When a frame is then sent from h1 via s2, s2 will see this as a MAC move and update the RR accordingly. Another frame via s1 would be seen as yet another MAC move, and so on.

It's from another vendor, but it may also be worth taking a look at the Data Center Interconnection with VXLAN guide. From what you describe, I think what's shown in that guide is similar to what you're intending to do.



I'm looking to do something very similar to this and wondered how you got on with it.


It works quite well, though only in limited production use so far. I can see the benefits of a spine/leaf topology for when we grow it further, though.

One thing I found was that VRFs are mandatory if you want VXLAN gateways for your VNIs on the same boxes, even if you have only a single tenant.
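As a rough illustration of what that looks like (VRF name, VLANs, VNIs and addresses are all hypothetical), even a single-tenant setup ends up with a tenant VRF and an L3 VNI once the gateway SVIs live on the same switches:

```
vrf context TENANT-1
  vni 50000
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn

vlan 2500
  vn-segment 50000

! Transit SVI for the L3 VNI (carries routed VXLAN traffic)
interface Vlan2500
  no shutdown
  vrf member TENANT-1
  ip forward

! Anycast gateway SVI for a tenant VLAN
interface Vlan100
  no shutdown
  vrf member TENANT-1
  ip address 192.168.100.1/24
  fabric forwarding mode anycast-gateway

interface nve1
  member vni 50000 associate-vrf
```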

My topology is mostly as described above: four nodes, each performing all the functions of spine, leaf and border leaf, with each pair forming a vPC pair to downstream connected hosts. This is pure unicast; no multicast is configured whatsoever, and none is required since it uses ingress replication. I was really disappointed with the lack of knowledge and documentation on this!
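To be explicit about the no-multicast point, each L2 VNI is configured for BGP-based ingress replication rather than a multicast group. VNI numbers here are made up:

```
interface nve1
  host-reachability protocol bgp
  member vni 10100
    ingress-replication protocol bgp   ! head-end replication via EVPN, no underlay multicast
  member vni 10200
    ingress-replication protocol bgp
```

The trade-off is that BUM traffic is unicast-replicated at the ingress VTEP, which is fine at four nodes but worth keeping in mind if the fabric grows.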
