EVPN VXLAN NVE Peers only showing up on one side and are inconsistent

shannonr
Level 1

I am setting up EVPN-VXLAN as an overlay to provide L2 reachability.

I have 2 spines and 4 leaves. Spines are Cat 9500-16x and Leaves are a mix of Cat 9500-16x and Cat 9500-48y4c (2 of each).

Looking at the NVE peers on all the switches:

  • Spine1 peers with Leaf 1-2
  • Spine 2 peers with Leaf 1-2
  • Leaf 1 peers with Spine 1,2, Leaf 2
  • Leaf 2 peers with Spine 1,2, Leaf 1.
  • Leaf 3 thinks it peers with Spine 1-2, and Leaf 1-2
  • Leaf 4 thinks it peers with Spine1-2 and Leaf 1-2.

I'm not sure why leaves 1-2 peer with each other. And I'm not sure why leaves 3-4 think they are peered with everything (except with each other), when the other peer doesn't recognise the peering.
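The asymmetry in the list above can be spot-checked mechanically. A quick sketch (a hypothetical helper, using the peer lists exactly as written above) that flags one-way peerings, i.e. cases where A lists B as an NVE peer but B does not list A:

```python
# Each switch's reported NVE peers, transcribed from the list above.
peers = {
    "Spine1": {"Leaf1", "Leaf2"},
    "Spine2": {"Leaf1", "Leaf2"},
    "Leaf1":  {"Spine1", "Spine2", "Leaf2"},
    "Leaf2":  {"Spine1", "Spine2", "Leaf1"},
    "Leaf3":  {"Spine1", "Spine2", "Leaf1", "Leaf2"},
    "Leaf4":  {"Spine1", "Spine2", "Leaf1", "Leaf2"},
}

def one_way_peerings(peers):
    """Return sorted (a, b) pairs where a lists b but b does not list a."""
    return sorted(
        (a, b)
        for a, seen in peers.items()
        for b in seen
        if a not in peers.get(b, set())
    )

print(one_way_peerings(peers))
```

Running this on the lists above reports every peering claimed by leaves 3-4 as one-way, and none of the leaf 1-2 peerings, which matches the symptom described.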

I also receive these errors in the logs on the 9500-16x switches:
*Jul 9 01:42:31.759: NVE-MGR-EI: L2FIB rvtep 100102:UNKNOWN cfg type 2 for BD 102
*Jul 9 01:42:31.759: NVE-MGR-EI ERROR: Invalid peer address for bd 102

These errors are definitely related, as they stop if I disable the NVE interfaces on the 9500-48 switches (leaves 3-4).

Config is identical on all leaves.

  • Spine1 = 10.254.1.1 (NVE IP, source interface loopback 1)
  • Spine2 = 10.254.1.2
  • Leaf 1 (9500-16) = 10.254.1.3
  • Leaf 2 (9500-16) = 10.254.1.4
  • Leaf 3 (9500-48) = 10.254.1.5
  • Leaf 4 (9500-48) = 10.254.1.6

 

Output from Leaf 3 (10.254.1.5) showing it has a peering to leaf 2:

RND-9500-48X_VTEP(config-if)#do sh nve peers | i 10.254.1.4
nve1 100100 L2CP 10.254.1.4 4 100100 UP N/A 00:03:49

Output from Leaf 2 showing no peering to Leaf 3 (10.254.1.5):

S4NR-9500-16X_VTEP#sh nve peers | i 10.254.1.5
S4NR-9500-16X_VTEP#

S4NR-9500-16X_VTEP#sh nve peers | i 10.254.1.
nve1 100100 L2CP 10.254.1.1 4 100100 UP N/A 23:49:47
nve1 100100 L2CP 10.254.1.2 4 100100 UP N/A 23:49:47
nve1 100100 L2CP 10.254.1.3 4 100100 UP N/A 23:49:47

Note: the loopbacks used by the NVE interfaces can ping each other so the underlay routing is working fine (OSPF).

NVE interface config (identical on all switches):

interface nve1
no ip address
source-interface Loopback1
host-reachability protocol bgp
end

What is going on?

I'm not sure how to troubleshoot further.

Thanks!

48 Replies

Friend, it is not an OSPF issue, it is a BGP issue.

OSPF connects each leaf and spine to the others, but iBGP does not work with this topology.

Just give me some time and I will share a solution for this case.

MHM

Hello @shannonr 

Thanks for your feedback.

The " UNKNOWN cfg type 2 "suggests that the system is encountering an unexpected or unknown configuration type for the bridge domain 100.

"Invalid peer address for bd 100..." indicates that the NVE does not recognize the peer address configuration for the specified bridge domain.

So, there might be a mismatch in the VNI-to-BD mapping between the 9500-16x and 9500-48 leaf switches.

First, ensure that the VNIs and corresponding bridge domains are consistently configured across all leaf switches.

Verify that the VTEP addresses are correctly advertised and learned. The 9500-16x switches might be receiving or interpreting the VTEP addresses incorrectly.

On the 9500-16x switches, check the NVE interface configuration and ensure that the VNIs are correctly mapped to the appropriate bridge domains.

Compare the configurations with the 9500-48 switches to ensure consistency:

#show running-config interface nve1
#show nve vni
#show nve peers

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

Hi and thanks for that. I have gone through and verified and believe everything is correct - have posted output below for 2 of the leaf switches (1x 16 port and 1x 48 port). I've limited it to 2 switches for brevity but have confirmed that it is identical on all 4 leaf switches.

Note: I apologize, but my initial assessment was not correct - I have re-tested and it looks like this error occurs even between 2x 16-port switches. When I tested this originally I must not have waited long enough for the error to manifest.

Note: When the error occurs nothing breaks; host connectivity continues uninterrupted and none of the NVE peers go down. The different uptimes shown below are only from my testing different scenarios and taking them down intentionally.

Leaf 2:
sh running-config interface nve 1
interface nve1
mtu 9198
no ip address
source-interface Loopback1
host-reachability protocol bgp
end

show nve vni
Interface VNI Multicast-group VNI state Mode VLAN cfg vrf
nve1 100103 N/A Up L2CP 103 API N/A
nve1 100102 N/A Up L2CP 102 API N/A
nve1 100101 N/A Up L2CP 101 API N/A
nve1 100100 N/A Up L2CP 100 API N/A

show nve peers
Interface VNI Type Peer-IP RMAC/Num_RTs eVNI state flags UP time
nve1 100100 L2CP 10.254.1.3 4 100100 UP N/A 00:09:19
nve1 100100 L2CP 10.254.1.5 4 100100 UP N/A 00:09:37
nve1 100100 L2CP 10.254.1.6 4 100100 UP N/A 00:19:56
nve1 100101 L2CP 10.254.1.3 4 100101 UP N/A 00:09:19
nve1 100101 L2CP 10.254.1.5 4 100101 UP N/A 00:09:37
nve1 100101 L2CP 10.254.1.6 4 100101 UP N/A 00:19:56
nve1 100102 L2CP 10.254.1.3 4 100102 UP N/A 00:09:19
nve1 100102 L2CP 10.254.1.5 4 100102 UP N/A 00:09:37
nve1 100102 L2CP 10.254.1.6 4 100102 UP N/A 00:19:56
nve1 100103 L2CP 10.254.1.3 4 100103 UP N/A 00:09:19
nve1 100103 L2CP 10.254.1.5 4 100103 UP N/A 00:09:37
nve1 100103 L2CP 10.254.1.6 4 100103 UP N/A 00:19:56

show l2fib bridge-domain 100 detail
Bridge Domain : 100
Reference Count : 16
Replication ports count : 4
Unicast Address table size : 6
IP Multicast Prefix table size : 0

Flood List Information :
Olist: 1124, Ports: 4

Port Information :
BD_PORT Te1/0/9:100
VXLAN_REP PL:1562(1) T:VXLAN_REP [IR]100100:10.254.1.3
VXLAN_REP PL:1546(1) T:VXLAN_REP [IR]100100:10.254.1.5
VXLAN_REP PL:1522(1) T:VXLAN_REP [IR]100100:10.254.1.6

Unicast Address table information :
4880.02f5.3c99 VXLAN_UC PL:1538(1) T:VXLAN_UC [MAC]100100:10.254.1.5
4880.02f5.3cd1 VXLAN_UC PL:1538(1) T:VXLAN_UC [MAC]100100:10.254.1.5
f839.182a.0d0d VXLAN_UC PL:1554(1) T:VXLAN_UC [MAC]100100:10.254.1.3
f839.182a.0d51 VXLAN_UC PL:1554(1) T:VXLAN_UC [MAC]100100:10.254.1.3
f839.1849.05b1 VXLAN_UC PL:1514(1) T:VXLAN_UC [MAC]100100:10.254.1.6
f839.1849.05d1 VXLAN_UC PL:1514(1) T:VXLAN_UC [MAC]100100:10.254.1.6

Leaf 3:
sh running-config interface nve 1
interface nve1
mtu 9198
no ip address
source-interface Loopback1
host-reachability protocol bgp
end

show nve vni
Interface VNI Multicast-group VNI state Mode VLAN cfg vrf
nve1 100103 N/A Up L2CP 103 API N/A
nve1 100102 N/A Up L2CP 102 API N/A
nve1 100101 N/A Up L2CP 101 API N/A
nve1 100100 N/A Up L2CP 100 API N/A

show nve peers
Interface VNI Type Peer-IP RMAC/Num_RTs eVNI state flags UP time
nve1 100100 L2CP 10.254.1.3 4 100100 UP N/A 00:09:19
nve1 100100 L2CP 10.254.1.5 4 100100 UP N/A 00:09:37
nve1 100100 L2CP 10.254.1.6 4 100100 UP N/A 00:19:56
nve1 100101 L2CP 10.254.1.3 4 100101 UP N/A 00:09:19
nve1 100101 L2CP 10.254.1.5 4 100101 UP N/A 00:09:37
nve1 100101 L2CP 10.254.1.6 4 100101 UP N/A 00:19:56
nve1 100102 L2CP 10.254.1.3 4 100102 UP N/A 00:09:19
nve1 100102 L2CP 10.254.1.5 4 100102 UP N/A 00:09:37
nve1 100102 L2CP 10.254.1.6 4 100102 UP N/A 00:19:56
nve1 100103 L2CP 10.254.1.3 4 100103 UP N/A 00:09:19
nve1 100103 L2CP 10.254.1.5 4 100103 UP N/A 00:09:37
nve1 100103 L2CP 10.254.1.6 4 100103 UP N/A 00:19:56

show l2fib bridge-domain 100 detail

Bridge Domain : 100
Reference Count : 16
Replication ports count : 4
Unicast Address table size : 6
IP Multicast Prefix table size : 0

Flood List Information :
Olist: 1124, Ports: 4

Port Information :
BD_PORT Twe1/0/9:100
VXLAN_REP PL:846(1) T:VXLAN_REP [IR]100100:10.254.1.3
VXLAN_REP PL:777(1) T:VXLAN_REP [IR]100100:10.254.1.4
VXLAN_REP PL:778(1) T:VXLAN_REP [IR]100100:10.254.1.6

Unicast Address table information :
f839.182a.0d0d VXLAN_UC PL:734(1) T:VXLAN_UC [MAC]100100:10.254.1.3
f839.182a.0d51 VXLAN_UC PL:734(1) T:VXLAN_UC [MAC]100100:10.254.1.3
f839.182a.158d VXLAN_UC PL:751(1) T:VXLAN_UC [MAC]100100:10.254.1.4
f839.182a.15d1 VXLAN_UC PL:751(1) T:VXLAN_UC [MAC]100100:10.254.1.4
f839.1849.05b1 VXLAN_UC PL:753(1) T:VXLAN_UC [MAC]100100:10.254.1.6
f839.1849.05d1 VXLAN_UC PL:753(1) T:VXLAN_UC [MAC]100100:10.254.1.6

 

shannonr
Level 1

Attaching here a couple of diagrams in case it helps. One showing the physical connectivity and the OSPF underlay, and the other showing the BGP overlay and NVE peering.

This is intended to perform bridging of Layer 2 across 2 datacentres and several small sites, for a bunch of reasons. Full L3 is not an option; neither is traditional stretched VLANs due to scale and admin overhead, which is where EVPN comes in.

Let me check this.

This explains why the leaves do not see each other.

iBGP does not advertise updates received from one iBGP neighbour to other iBGP neighbours.

Here leaf 1 and leaf 2 send updates to RR1, and RR1 advertises them only to leaf 3.

So let me check the optimal design.

MHM

Thanks for that - just to clarify...with M02@rt37's help I did find an issue with the physical connectivity, and he pointed out that I had NVE peering enabled on the spine switches which has now been removed. With all that fixed up, functionally everything appears to be working. I have hosts connected to all 4 leaf switches and they can all ping each other across the fabric. In the BGP EVPN table I can see the MAC addressing being learned by each leaf switch.

In BGP every leaf is peered with both spines and only the spines. There is no leaf-leaf peering and no spine-spine peering.

If you can find a fault with the BGP I'm keen to hear it though as I am new to this and can easily make mistakes and miss things - please take your time

With the issues that M02@rt37 found, my concern now is what the errors I am seeing mean, and whether they point to a bigger underlying issue or not:

[07/11/24 09:21:23.267 NZST 40808 392] NVE-MGR-EI: L2FIB rvtep 100100:10.254.1.6 cfg type 1 for BD 100
[07/11/24 09:21:23.267 NZST 40809 392] NVE-MGR-DB: creating peer node for 10.254.1.6
[07/11/24 09:21:23.267 NZST 4080A 392] NVE-DB-EVT: Add peer 10.254.1.6 on VNI 100100 on interface 0x45
[07/11/24 09:21:23.267 NZST 4080B 392] NVE-DB-EVT: Added peer 10.254.1.6 on VNI 100100 on NVE 1
[07/11/24 09:21:23.267 NZST 4080C 392] NVE-DB-EVT: Add VNI 100100 to peer 10.254.1.6 source 4
[07/11/24 09:21:23.267 NZST 4080D 392] NVE-DB-EVT: Add VNI 100100 to peer 10.254.1.6 NVE 1 rc 0 flags 4 state 32
[07/11/24 09:21:23.267 NZST 4080E 392] NVE-DB-EVT: Added VNI 100100 on Peer 10.254.1.6
[07/11/24 09:21:23.267 NZST 4080F 392] NVE-MGR-DB: PEER 1/10.254.1.6 ADD oper
[07/11/24 09:21:23.268 NZST 40810 392] NVE-MGR-TUNNEL: Tunnel Endpoint 10.254.1.6 added, dport 4789
[07/11/24 09:21:23.268 NZST 40811 392] NVE-MGR-EI: L2FIB rvtep 100100:UNKNOWN cfg type 2 for BD 100
[07/11/24 09:21:23.268 NZST 40812 392] NVE-MGR-EI ERROR: Invalid peer address for bd 100

Re: these only appear on the 2x 9500-16x leaf switches, and are only generated when the 9500-48 leaf switches are participating in the fabric.

M02@rt37
VIP

@shannonr 

The primary issue you're encountering is related to the BGP EVPN configuration in a multi-pod topology rather than the OSPF underlay. While OSPF is providing the necessary IP reachability between nodes, ensuring proper BGP EVPN operation in such a topology requires careful attention to the BGP configuration, especially when direct physical connections between all leaf and spine switches are not present.

In your multi-pod setup, the typical iBGP full-mesh requirement becomes challenging since not all leaf switches can directly connect to all spine switches. This setup can indeed cause problems with establishing BGP EVPN peering correctly. 

BGP requires a full-mesh of iBGP peers by default, meaning every BGP speaker (leaf and spine) should peer with every other BGP speaker.
Without a full physical mesh, you need mechanisms like route reflectors to handle BGP EVPN advertisements.

Is your route reflector configuration OK?

Once configured, validate the BGP sessions using commands like show ip bgp summary and show bgp l2vpn evpn summary.

Check that all leaves have BGP sessions established with the spines (RRs) and that EVPN routes are being reflected correctly.
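For reference, a minimal route-reflector sketch for one spine, under assumptions visible elsewhere in this thread (AS 65550, leaf BGP sessions sourced from Loopback0 addresses in 10.254.0.x; 10.254.0.4 is the router-id shown in one leaf's config). The `route-reflector-client` statement per leaf neighbor is the key piece:

```
router bgp 65550
 neighbor 10.254.0.4 remote-as 65550
 neighbor 10.254.0.4 update-source Loopback0
 !
 address-family l2vpn evpn
  neighbor 10.254.0.4 activate
  neighbor 10.254.0.4 send-community both
  neighbor 10.254.0.4 route-reflector-client
 exit-address-family
```

Repeat the neighbor stanza for each leaf; the leaves themselves need no RR-specific configuration.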


Hi,
Thanks for the continued help. I believe the RR are working properly, and just can't find fault with the BGP. I have triple checked and can see that each leaf switch has an established BGP peering to each spine switch. Leaf switches are not peering with other leaf switches, and spine switches are not peering with other spines.

This is verified in the output of show ip bgp summary I posted for MHM yesterday.

I have validated that the RRs are working by looking at MAC addresses connected to leaf 1 (e.g. f839.182a.0d51) and confirming they are learned by leaf 3, and other permutations of connectivity across the fabric:

xxx-9500-48X_VTEP#sh bgp l2vpn evpn f839.182a.0d51
BGP routing table entry for [2][10.254.1.3:32867][0][48][F839182A0D51][0][*]/20, version 146629
Paths: (2 available, best #2, table EVPN-BGP-Table)
Not advertised to any peer
Refresh Epoch 1
Local
10.254.1.3 (metric 201) (via default) from 10.254.0.2 (10.254.0.2)
Origin incomplete, metric 0, localpref 100, valid, internal
EVPN ESI: 00000000000000000000, Label1 100100
Extended Community: RT:23456:100 ENCAP:8
Originator: 10.254.1.3, Cluster list: 10.254.0.2
rx pathid: 0, tx pathid: 0
Updated on Jul 12 2024 08:55:50 NZST
Refresh Epoch 3
Local
10.254.1.3 (metric 201) (via default) from 10.254.0.1 (10.254.0.1)
Origin incomplete, metric 0, localpref 100, valid, internal, best
EVPN ESI: 00000000000000000000, Label1 100100
Extended Community: RT:23456:100 ENCAP:8
Originator: 10.254.1.3, Cluster list: 10.254.0.1
rx pathid: 0, tx pathid: 0x0
Updated on Jul 12 2024 08:55:50 NZST
BGP routing table entry for [2][10.254.1.3:32867][0][48][F839182A0D51][32][192.168.100.103]/24, version 146626
Paths: (2 available, best #2, table EVPN-BGP-Table)
Not advertised to any peer
Refresh Epoch 1
Local
10.254.1.3 (metric 201) (via default) from 10.254.0.2 (10.254.0.2)
Origin incomplete, metric 0, localpref 100, valid, internal
EVPN ESI: 00000000000000000000, Label1 100100
Extended Community: RT:23456:100 ENCAP:8
Originator: 10.254.1.3, Cluster list: 10.254.0.2
rx pathid: 0, tx pathid: 0
Updated on Jul 12 2024 08:55:50 NZST
Refresh Epoch 3
Local
10.254.1.3 (metric 201) (via default) from 10.254.0.1 (10.254.0.1)
Origin incomplete, metric 0, localpref 100, valid, internal, best
EVPN ESI: 00000000000000000000, Label1 100100
Extended Community: RT:23456:100 ENCAP:8
Originator: 10.254.1.3, Cluster list: 10.254.0.1
rx pathid: 0, tx pathid: 0x0
Updated on Jul 12 2024 08:55:50 NZST
BGP routing table entry for [2][10.254.1.5:32867][0][48][F839182A0D51][0][*]/20, version 146631
Paths: (1 available, best #1, table evi_100)
Not advertised to any peer
Refresh Epoch 3
Local, imported path from [2][10.254.1.3:32867][0][48][F839182A0D51][0][*]/20 (global)
10.254.1.3 (metric 201) (via default) from 10.254.0.1 (10.254.0.1)
Origin incomplete, metric 0, localpref 100, valid, internal, best
EVPN ESI: 00000000000000000000, Label1 100100
Extended Community: RT:23456:100 ENCAP:8
Originator: 10.254.1.3, Cluster list: 10.254.0.1
rx pathid: 0, tx pathid: 0x0
Updated on Jul 12 2024 08:55:50 NZST
BGP routing table entry for [2][10.254.1.5:32867][0][48][F839182A0D51][32][192.168.100.103]/24, version 146628
Paths: (1 available, best #1, table evi_100)
Not advertised to any peer
Refresh Epoch 3
Local, imported path from [2][10.254.1.3:32867][0][48][F839182A0D51][32][192.168.100.103]/24 (global)
10.254.1.3 (metric 201) (via default) from 10.254.0.1 (10.254.0.1)
Origin incomplete, metric 0, localpref 100, valid, internal, best
EVPN ESI: 00000000000000000000, Label1 100100
Extended Community: RT:23456:100 ENCAP:8
Originator: 10.254.1.3, Cluster list: 10.254.0.1
rx pathid: 0, tx pathid: 0x0
Updated on Jul 12 2024 08:55:50 NZST

10.254.1.3 is leaf 1 and 10.254.0.1 is RR1 / Spine 1.

Have I mis-read this output?

By a full iBGP mesh, do you mean the spines also need to peer with other spines, and the leaf switches need to peer with other leaf switches?

Thanks

Hi friend, again.

I re-checked your leaf config and output.

It is correct; indeed it is not optimal, but each leaf connects to both spines via BGP.

So both OSPF and BGP are correct.

NVE peers form if the switches have an active VLAN in common, i.e. the same VLAN exists on both leaves.

I see you configured a range of VLANs.

Check the VLANs added to each switch, and check:

show vxlan

See which VLANs are up and which are down.

MHM

Hey, thanks for that.

I have gone through each leaf switch and confirmed that each VLAN is defined and in state "active".

I have checked the trunk ports connected to the downstream access switches and verified that only defined and active VLANs are trunked.

I have also cleaned up the VLAN configuration, previous:
vlan configuration 10-299,320-452
member evpn-instance profile pot-inter-dc

New:
vlan configuration 100-103
member evpn-instance profile pot-inter-dc

Note: VLAN 100-103 are the only VLANs in use on the downstream access switches.

Note: The error seems to reference the in-use VLANs, e.g. 100102 is the VNI for VLAN 102.
[07/15/24 07:57:39.858 NZST 49A1A 392] NVE-MGR-EI: L2FIB rvtep 100102:UNKNOWN cfg type 2 for BD 102
[07/15/24 07:57:39.858 NZST 49A1B 392] NVE-MGR-EI ERROR: Invalid peer address for bd 102
[07/15/24 08:02:39.862 NZST 49A1C 392] NVE-MGR-EI: L2FIB rvtep 100102:UNKNOWN cfg type 2 for BD 102
[07/15/24 08:02:39.863 NZST 49A1D 392] NVE-MGR-EI ERROR: Invalid peer address for bd 102
[07/15/24 08:02:40.988 NZST 49A1E 392] NVE-MGR-EI: L2FIB rvtep 100102:UNKNOWN cfg type 2 for BD 102
[07/15/24 08:02:40.988 NZST 49A1F 392] NVE-MGR-EI ERROR: Invalid peer address for bd 102

Note: All NVE peers are up for all VLANs/VNIs - it is not as though the error relates to a peering being down:

#sh nve peers
'M' - MAC entry download flag 'A' - Adjacency download flag
'4' - IPv4 flag '6' - IPv6 flag

Interface VNI Type Peer-IP RMAC/Num_RTs eVNI state flags UP time
nve1 100100 L2CP 10.254.1.4 2 100100 UP N/A 00:08:30
nve1 100100 L2CP 10.254.1.5 2 100100 UP N/A 00:09:23
nve1 100100 L2CP 10.254.1.6 2 100100 UP N/A 00:10:51
nve1 100101 L2CP 10.254.1.4 2 100101 UP N/A 00:08:30
nve1 100101 L2CP 10.254.1.5 2 100101 UP N/A 00:09:23
nve1 100101 L2CP 10.254.1.6 2 100101 UP N/A 00:10:51
nve1 100102 L2CP 10.254.1.4 2 100102 UP N/A 00:08:30
nve1 100102 L2CP 10.254.1.5 4 100102 UP N/A 00:09:23
nve1 100102 L2CP 10.254.1.6 4 100102 UP N/A 00:10:51
nve1 100103 L2CP 10.254.1.4 2 100103 UP N/A 00:08:30
nve1 100103 L2CP 10.254.1.5 2 100103 UP N/A 00:09:23
nve1 100103 L2CP 10.254.1.6 2 100103 UP N/A 00:10:51

 

It took a long week of reading to check what the issue is here, but I think I have found it. You need:

evi-base xxx

under the l2vpn profile. Add this and check.

Thanks for waiting.

MHM

Hi there,

Thanks so much for getting back to me and looking into this issue! I have tried that but unfortunately the error message is still occurring:
[07/18/24 16:22:49.082 NZST 2164 392] NVE-MGR-EI: L2FIB rvtep 100100:UNKNOWN cfg type 2 for BD 100
[07/18/24 16:22:49.082 NZST 2165 392] NVE-MGR-EI ERROR: Invalid peer address for bd 100
[07/18/24 16:22:49.742 NZST 2166 392] NVE-MGR-EI: L2FIB rvtep 100103:UNKNOWN cfg type 2 for BD 103
[07/18/24 16:22:49.742 NZST 2167 392] NVE-MGR-EI ERROR: Invalid peer address for bd 103
[07/18/24 16:22:49.751 NZST 2168 392] NVE-MGR-EI: L2FIB rvtep 100103:UNKNOWN cfg type 2 for BD 103
[07/18/24 16:22:49.751 NZST 2169 392] NVE-MGR-EI ERROR: Invalid peer address for bd 103

Here is the sample config from one of the leaf switches in case you notice anything (they are all the same):
l2vpn evpn
logging peer state
replication-type ingress
router-id Loopback1
!
l2vpn evpn profile pot-inter-dc
evi-base 10000
l2vni-base 100000
!
vlan configuration 100-103
member evpn-instance profile pot-inter-dc

interface nve1
no ip address
source-interface Loopback1
host-reachability protocol bgp

router bgp 65550
bgp router-id 10.254.0.4
bgp log-neighbor-changes
no bgp default ipv4-unicast
neighbor 10.254.0.1 remote-as 65550
neighbor 10.254.0.1 update-source Loopback0
neighbor 10.254.0.2 remote-as 65550
neighbor 10.254.0.2 update-source Loopback0
!
address-family ipv4
exit-address-family
!
address-family l2vpn evpn
neighbor 10.254.0.1 activate
neighbor 10.254.0.1 send-community both
neighbor 10.254.0.2 activate
neighbor 10.254.0.2 send-community both
exit-address-family

Also, here is the output of "sh l2vpn evpn evi" showing the new EVI base taking effect, and the corresponding L2 VNI:
EVI VLAN Ether Tag L2 VNI Multicast Pseudoport

----- ----- ---------- --------- ------------- ------------------
10100 100 0 100100 UNKNOWN Te1/0/9:100
10101 101 0 100101 UNKNOWN Te1/0/9:101
10102 102 0 100102 UNKNOWN Te1/0/9:102
10103 103 0 100103 UNKNOWN Te1/0/9:103
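The EVI and L2 VNI columns above line up with the profile bases; a quick sketch of the apparent mapping (an assumption inferred from the `evi-base 10000` / `l2vni-base 100000` values and this table, not a documented formula):

```python
# Apparent derivation: EVI and L2 VNI are the profile base plus the VLAN ID
# (inferred from evi-base 10000 / l2vni-base 100000 and the EVI table above).
EVI_BASE = 10000
L2VNI_BASE = 100000

def evi_for_vlan(vlan: int) -> int:
    return EVI_BASE + vlan

def l2vni_for_vlan(vlan: int) -> int:
    return L2VNI_BASE + vlan

for vlan in range(100, 104):
    print(f"VLAN {vlan}: EVI {evi_for_vlan(vlan)}, L2 VNI {l2vni_for_vlan(vlan)}")
```

This reproduces the table rows, e.g. VLAN 102 maps to EVI 10102 and L2 VNI 100102, the same VNI that appears in the error messages.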

 

show l2vpn evpn <peer IP>
show l2vpn evpn evi <> detail 

Check whether the route-target is the same or not.

Do this for all peers.

MHM

I believe it is - example below for one peer (10.254.1.3), but on checking all peers this is consistent across all:

# sh l2vpn evpn peers vxlan address 10.254.1.3 de
Interface: nve1
Local VNI: 100100
Peer VNI: 100100
Peer IP Address: 10.254.1.3
UP time: 00:08:33
Number of routes
EAD per-EVI: 0
MAC: 2
MAC/IP: 1
IMET: 1
Total: 4

Interface: nve1
Local VNI: 100101
Peer VNI: 100101
Peer IP Address: 10.254.1.3
UP time: 00:08:33
Number of routes
EAD per-EVI: 0
MAC: 2
MAC/IP: 1
IMET: 1
Total: 4

Interface: nve1
Local VNI: 100102
Peer VNI: 100102
Peer IP Address: 10.254.1.3
UP time: 00:08:33
Number of routes
EAD per-EVI: 0
MAC: 2
MAC/IP: 1
IMET: 1
Total: 4

Interface: nve1
Local VNI: 100103
Peer VNI: 100103
Peer IP Address: 10.254.1.3
UP time: 00:08:33
Number of routes
EAD per-EVI: 0
MAC: 2
MAC/IP: 1
IMET: 1
Total: 4

 

#show l2vpn evpn evi 10103 de
EVPN instance: 10103 (VLAN Based)
Profile: pot-inter-dc
RD: 10.254.1.6:42870 (auto)
Import-RTs: 23456:10103
Export-RTs: 23456:10103
Per-EVI Label: none
State: Established
Replication Type: Ingress (profile)
Encapsulation: vxlan (profile)
IP Local Learn: Enabled (global)
Adv. Def. Gateway: Disabled (global)
Re-originate RT5: Disabled (profile)
Adv. Multicast: Enabled (profile)
AR Flood Suppress: Enabled (global)
Vlan: 103
Protected: False
Ethernet-Tag: 0
State: Established
Flood Suppress: Attached
Core If:
Access If:
NVE If: nve1
RMAC: 0000.0000.0000
Core Vlan: 0
L2 VNI: 100103
L3 VNI: 0
VTEP IP: 10.254.1.6
Pseudoports:
TwentyFiveGigE1/0/9 service instance 103
Routes: 2 MAC, 1 MAC/IP
Peers:
10.254.1.3
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD
10.254.1.4
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD
10.254.1.5
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD

#show l2vpn evpn evi 10102 de
EVPN instance: 10102 (VLAN Based)
Profile: pot-inter-dc
RD: 10.254.1.6:42869 (auto)
Import-RTs: 23456:10102
Export-RTs: 23456:10102
Per-EVI Label: none
State: Established
Replication Type: Ingress (profile)
Encapsulation: vxlan (profile)
IP Local Learn: Enabled (global)
Adv. Def. Gateway: Disabled (global)
Re-originate RT5: Disabled (profile)
Adv. Multicast: Enabled (profile)
AR Flood Suppress: Enabled (global)
Vlan: 102
Protected: False
Ethernet-Tag: 0
State: Established
Flood Suppress: Attached
Core If:
Access If:
NVE If: nve1
RMAC: 0000.0000.0000
Core Vlan: 0
L2 VNI: 100102
L3 VNI: 0
VTEP IP: 10.254.1.6
Pseudoports:
TwentyFiveGigE1/0/9 service instance 102
Routes: 2 MAC, 1 MAC/IP
Peers:
10.254.1.3
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD
10.254.1.4
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD
10.254.1.5
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD

#show l2vpn evpn evi 10101 de
EVPN instance: 10101 (VLAN Based)
Profile: pot-inter-dc
RD: 10.254.1.6:42868 (auto)
Import-RTs: 23456:10101
Export-RTs: 23456:10101
Per-EVI Label: none
State: Established
Replication Type: Ingress (profile)
Encapsulation: vxlan (profile)
IP Local Learn: Enabled (global)
Adv. Def. Gateway: Disabled (global)
Re-originate RT5: Disabled (profile)
Adv. Multicast: Enabled (profile)
AR Flood Suppress: Enabled (global)
Vlan: 101
Protected: False
Ethernet-Tag: 0
State: Established
Flood Suppress: Attached
Core If:
Access If:
NVE If: nve1
RMAC: 0000.0000.0000
Core Vlan: 0
L2 VNI: 100101
L3 VNI: 0
VTEP IP: 10.254.1.6
Pseudoports:
TwentyFiveGigE1/0/9 service instance 101
Routes: 2 MAC, 1 MAC/IP
Peers:
10.254.1.3
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD
10.254.1.4
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD
10.254.1.5
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD

#show l2vpn evpn evi 10100 de
EVPN instance: 10100 (VLAN Based)
Profile: pot-inter-dc
RD: 10.254.1.6:42867 (auto)
Import-RTs: 23456:10100
Export-RTs: 23456:10100
Per-EVI Label: none
State: Established
Replication Type: Ingress (profile)
Encapsulation: vxlan (profile)
IP Local Learn: Enabled (global)
Adv. Def. Gateway: Disabled (global)
Re-originate RT5: Disabled (profile)
Adv. Multicast: Enabled (profile)
AR Flood Suppress: Enabled (global)
Vlan: 100
Protected: False
Ethernet-Tag: 0
State: Established
Flood Suppress: Attached
Core If:
Access If:
NVE If: nve1
RMAC: 0000.0000.0000
Core Vlan: 0
L2 VNI: 100100
L3 VNI: 0
VTEP IP: 10.254.1.6
Pseudoports:
TwentyFiveGigE1/0/9 service instance 100
Routes: 2 MAC, 1 MAC/IP
Peers:
10.254.1.3
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD
10.254.1.4
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD
10.254.1.5
Routes: 2 MAC, 1 MAC/IP, 1 IMET, 0 EAD

 

show bgp l2vpn evpn <peer IP> <- can you share this (without adding vxlan or a MAC address)?

MHM
