11-01-2018 04:38 AM
I have a very simple Nexus 9000v test set up with two leaf switches and a single spine with an EVPN VXLAN configuration. A single L2 VNI is configured and is using BGP ingress-replication. The spine switch is configured with an NVE interface because I wanted to test the new multi-site capabilities available in 9.2.1 in a BGW on spine topology.
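For anyone reading along, the spine-side NVE configuration described above looks roughly like the sketch below. The VNI number and loopback are placeholders I've made up for illustration, not taken from the attached configs:

```
interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  member vni 10100
    ingress-replication protocol bgp
```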
With the spine running NX-OS 9.2.1, encapsulated traffic appears to be dropped by the spine switch when the VNI in question is configured as a member on the NVE interface with BGP ingress-replication. The same configuration appears to work perfectly when the spine is running 7.0(3)I7.
Signalling appears to be working correctly even when traffic is not being forwarded. I see the expected sets of Type 2 and Type 3 routes and I see the source leaf correctly encapsulating both unicast and BUM traffic.
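By "signalling appears to be working" I mean checks along these lines (standard NX-OS show commands, sketched from memory):

```
show nve peers             ! confirm the remote VTEPs are listed and Up
show bgp l2vpn evpn        ! expect the Type 2 (MAC/IP) and Type 3 (IMET) routes
show nve vni               ! VNI state and replication mode (IR vs multicast)
show l2route evpn mac all  ! MACs learned via BGP from the remote VTEPs
```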
Topology and configs attached.
11-22-2018 02:29 PM
I'm sorry, but your configuration is not typical - the VTEP is configured on the SPINE. I have both versions of the software and both of them work for me. I'm not sure, but I read somewhere that 9.2.1 requires PIM sparse mode, and you don't have it configured on the SPINE at all.
That is a good example of a configuration to check against.
But to start with, I would check the MTU size.
11-22-2018 05:13 PM
Thanks ulasinski.
You're right that this is not a typical configuration for either a single-site deployment or for a deployment with larger fabrics. However, it IS a supported configuration for EVPN Multi-Site in the BGW on Spine model (https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-739942.html#_Toc498025667).
I have now been able to verify that this configuration is good on real hardware, where it works as expected. My suspicion is that this is a bug in the L2fwd engine of the N9kv specifically. It would be great to get this resolved, because it would mean we could build an EVPN Multi-Site deployment with BGW on Spine in our virtual lab - without having to train engineers on customer equipment :)
Regarding 9.2.1 requiring PIM sparse mode - I'm not sure where you read that. Ingress replication works just fine, and in fact it is the only option supported over the site-external underlay in a Multi-Site deployment.
Regarding MTU size - you're right that I would need jumbo MTU configured in a proper deployment, but it doesn't fundamentally affect the operation of VXLAN EVPN unless you're trying to carry Ethernet frames that, once encapsulated, exceed the 1500-byte underlay MTU (which in my test environment I'm not - it's just ARP and ICMP echo for now).
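For reference (my arithmetic, not from the attached configs): VXLAN encapsulation adds roughly 50 bytes per frame - outer Ethernet 14 + outer IP 20 + UDP 8 + VXLAN header 8 - so carrying a full 1500-byte inner frame needs an underlay IP MTU of at least 1550. In a real deployment the safe fix is jumbo MTU end to end, along the lines of (interface name is a placeholder):

```
system jumbomtu 9216
!
interface Ethernet1/1
  mtu 9216
```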
Thanks very much for at least taking a look :)
11-22-2018 11:53 PM
You're right, you don't need multicast - ingress replication should be enough.
You don't need jumbo frames either.
I would still configure the MTU value correctly, even in a lab.
How are you checking communication? Is VXLAN forwarding working? Is signalling working?
It's an easy thing to try, but... have you tried changing the firmware on the leafs to 9.2.1?