Nexus 9kv NXOS 9.2.1 Spine not forwarding VXLAN traffic

I have a very simple Nexus 9000v test setup with two leaf switches and a single spine, running an EVPN VXLAN configuration.  A single L2 VNI is configured, using BGP ingress-replication.  The spine switch is configured with an NVE interface because I wanted to test the new multi-site capabilities available in 9.2.1 in a BGW-on-spine topology.
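For reference, the relevant NVE and VNI configuration on the leaf switches looks roughly like this (simplified from the attached configs; the VLAN, VNI and loopback numbers here are just placeholders):

feature bgp
feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn

vlan 100
  vn-segment 10100

interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  member vni 10100
    ingress-replication protocol bgp

evpn
  vni 10100 l2
    rd auto
    route-target import auto
    route-target export auto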

 

With the spine running NX-OS 9.2.1, encapsulated traffic appears to be dropped by the spine switch when the VNI in question is configured as a member of the NVE interface with BGP ingress-replication.  The same configuration appears to work perfectly when the spine is running 7.0(3)I7.

 

Signalling appears to be working correctly even when traffic is not being forwarded.  I see the expected sets of Type 2 and Type 3 routes and I see the source leaf correctly encapsulating both unicast and BUM traffic.
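For example, the control-plane state can be checked with something like the following (all standard NX-OS show commands):

show bgp l2vpn evpn            (expected Type 2 MAC/IP and Type 3 IMET routes)
show nve peers                 (learned VTEP peers and their state)
show nve vni                   (VNI state and replication mode)
show l2route evpn mac all      (MACs learned locally and via BGP EVPN)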

 

Topology and configs attached.

3 Replies

ulasinski
Level 1

I'm sorry, but your configuration is not typical: the VTEP is configured on the spine. I have both software versions and both of them work for me. I'm not sure, but I read somewhere that 9.2.1 has to be used with PIM sparse mode, and you don't have it configured on the spine at all. 

 

https://community.cisco.com/t5/data-center-blogs/vxlan-evpn-configuration-example-n9k-p2p/ba-p/3663830

That is a good configuration example. 

 

But to start with, I would check the MTU size.
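For example (the interface name is just a placeholder):

show interface ethernet1/1 | include MTU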

Thanks, ulasinski.

 

You're right that this is not a typical configuration for either a single-site deployment or for a deployment with larger fabrics; however, it IS a supported configuration for EVPN Multi-Site in the BGW-on-spine model (https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-739942.html#_Toc498025667).

 

I have now been able to verify that this configuration is good on real hardware, where it works as expected.  My suspicion is that this is a bug in the L2fwd engine of the N9Kv specifically.  It would be great to get this resolved, because it would mean we could build an EVPN Multi-Site deployment with BGW on spine in our virtual lab, without having to train engineers on customer equipment :)

 

Regarding 9.2.1 requiring PIM sparse mode: I'm not sure where you read that.  Ingress-replication works just fine, and in fact is the only option supported over the site-external underlay for a multi-site deployment.
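For reference, the multi-site portion of the spine/BGW configuration looks roughly like this (simplified from the attached configs; the site ID, interface and VNI numbers are placeholders):

evpn multisite border-gateway 1

interface nve1
  host-reachability protocol bgp
  source-interface loopback1
  multisite border-gateway interface loopback100
  member vni 10100
    ingress-replication protocol bgp
    multisite ingress-replication

interface Ethernet1/1
  description site-external (DCI) link
  evpn multisite dci-tracking

interface Ethernet1/2
  description site-internal (fabric) link
  evpn multisite fabric-tracking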

 

Regarding MTU size: you're right that I would need jumbo MTU configured in a proper deployment, but it doesn't fundamentally affect the operation of VXLAN EVPN unless you're trying to carry Ethernet frames that, once encapsulated, are larger than 1500 bytes (which in my test environment I'm not; it's just ARP and ICMP echo for now).

 

Thanks very much for at least taking a look :)

You're right that you don't need multicast; ingress replication should be enough.

 

"For BUM replication, either multicast (PIM ASM) or ingress replication can be used. EVPN Multi-Site architecture allows both modes to be configured. It also allows different BUM replication modes to be used at different sites. Thus, the local site-internal network can be configured with ingress replication while the remote site-internal network can be configured with a multicast-based underlay.

Note: BGP EVPN allows BUM replication based on either ingress replication or multicast (PIM ASM). The use of EVPN doesn’t preclude the use of a network-based BUM replication mechanism such as multicast."

 

You do not need jumbo frames. 

The EVPN Multi-Site architecture uses VXLAN encapsulation for the data plane, which requires 50 or 54 bytes of overhead on top of the standard Ethernet MTU (i.e. 1550 or 1554 bytes in total).


I would still configure the MTU value correctly, even in a lab. 
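For example, the 50/54-byte figure comes from the outer headers (14-byte outer Ethernet + 20-byte IP + 8-byte UDP + 8-byte VXLAN = 50 bytes, plus 4 bytes if the outer frame carries an 802.1Q tag), so a common approach is simply to set jumbo MTU on the underlay interfaces (interface name is a placeholder):

interface Ethernet1/1
  mtu 9216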

 

How do you check communication? Is VXLAN working? Is signaling working?
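For example, something like the following (the VLAN ID and host address are placeholders):

ping 192.168.100.2               (from a host in the L2 VNI to a host behind the other leaf)
show mac address-table vlan 100  (on the leaf: remote MACs should point at interface nve1)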

 

 

It's easy, but... have you tried changing the firmware on the leaf switches to 9.2.1?

 
