10-26-2017 04:44 AM - last edited on 03-25-2019 01:31 PM by ciscomoderator
Hello all.
I'm facing an issue I can't understand: the output of "show nve peers" is still empty, and I can't see what I missed.
If anyone from a consulting team or the TAC is reading this thread, your help will be extremely appreciated. Here is the full story.
My setup is very simple: two N7Ks connected through Ethernet7/8 (L3 with OSPF and PIM) on both sides, with loopback interfaces used to source the local VTEPs.
And attached to each 7K, a 5K used purely to simulate a host: it trunks VLAN 10 and has an IP address on interface VLAN 10.
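For reference, here is the topology as I read it from the description above (a sketch, not taken from the original post):

```
5K-1 (vlan 10) -- eth7/1 -- N7K-1 -- eth7/8 ---- eth7/8 -- N7K-2 -- eth7/2 -- 5K-2 (vlan 10)
                            lo0: 192.168.101.71           lo0: 192.168.101.72
```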
On the first 7K I configure a VRF and OSPF for the underlay (in this template I don't set the MTU to 1550, but doing so doesn't change the end result anyway):
vrf context underlay
exit
interface ethernet 7/8
  no switchport
  vrf member underlay
  ip address 192.168.100.71/24
  no shut
exit
interface loopback 0
  vrf member underlay
  ip address 192.168.101.71/32
  no shut
exit
feature ospf
router ospf OSPF-UNDERLAY
  vrf underlay
    router-id 0.0.0.1
    no shutdown
  exit
exit
interface loopback 0
  ip router ospf OSPF-UNDERLAY area 0
exit
interface eth7/8
  ip router ospf OSPF-UNDERLAY area 0
exit
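As a quick sanity check of the underlay at this stage, something along these lines should confirm the OSPF adjacency and loopback-to-loopback reachability (standard NX-OS commands, shown as a sketch; exact output varies by release, and these checks are not part of the original post):

```
show ip ospf neighbors vrf underlay
ping 192.168.101.72 vrf underlay source 192.168.101.71
```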
I do the same on the other 7K, just changing the IP addresses from .71 to .72 and changing the OSPF router ID. Everything is fine and I get the routes on both sides:
N7K-1-pod1# show ip route ospf-OSPF-UNDERLAY vrf underlay
IP Route Table for VRF "underlay"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

192.168.101.72/32, ubest/mbest: 1/0
    *via 192.168.100.72, Eth7/8, [110/2], 04:26:01, ospf-OSPF-UNDERLAY, intra
N7K-1-pod1#
It's fine on the other side too:
N7K-2-pod2(config)# show ip route ospf-OSPF-UNDERLAY vrf underlay
IP Route Table for VRF "underlay"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

192.168.101.71/32, ubest/mbest: 1/0
    *via 192.168.100.71, Eth7/8, [110/2], 04:27:08, ospf-OSPF-UNDERLAY, intra
N7K-2-pod2(config)#
Then I add the PIM configuration on both sides:
feature pim
interface loopback 0
  ip pim sparse-mode
exit
interface eth7/8
  ip pim sparse-mode
exit
vrf context underlay
  ip pim bsr-candidate loopback 0
exit
ip pim rp-candidate loopback 0 group-list 239.1.1.0/24
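Before moving on, it can't hurt to confirm that the PIM adjacency actually formed on the point-to-point link. A sketch using standard NX-OS commands (not part of the original post):

```
show ip pim neighbor vrf underlay
show ip pim rp vrf underlay
```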
This is also fine: the RP is elected on one 7K and the other one learns it:
N7K-2-pod2(config)# show ip pim rp vrf underlay
PIM RP Status Information for VRF "underlay"
BSR: 192.168.101.72*, next Bootstrap message in: 00:00:36,
     priority: 64, hash-length: 30
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
On the other 7K:
N7K-1-pod1# show ip pim rp vrf underlay
PIM RP Status Information for VRF "underlay"
BSR: 192.168.101.72, uptime: 04:29:46, expires: 00:01:17,
     priority: 64, hash-length: 30
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None
OK, so at this point the underlay looks fine to me. Let's work on the overlay.
On both sides I create a bridge domain and a VNI, map the VLAN (coming from the 5K) to the VNI, create the NVE interface, and then attach a service instance to the physical interface connected to each 5K:
feature nv overlay
feature vni
vni 5010
exit
system bridge-domain 1000-1999
bridge-domain 1010
  member vni 5010
exit
encapsulation profile vni vlan10-vni5010
  dot1q 10 vni 5010
exit
interface nve 1
  source-interface loopback 0
  no shut
  member vni 5010 mcast-group 239.1.1.10
exit
interface eth7/1
  no switchport
  no shut
  service instance 1 vni
    encapsulation profile vlan10-vni5010 default
    no shut
exit
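Since the MTU came up earlier: VXLAN adds roughly 50 bytes of encapsulation overhead, so if the underlay MTU ever turns out to matter, it is usually accommodated on the transit link with something like the sketch below (9216 is just a safe jumbo value for illustration, not taken from the original post):

```
interface ethernet 7/8
  mtu 9216
```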
I do the same on the other 7K, except that its downstream 5K is connected on eth7/2 instead of eth7/1.
So, on both 7Ks the mapping of the VNI to the NVE interface seems to be OK:
N7K-1-pod1# show nve vni
Codes: CP - Control Plane        DP - Data Plane
       UC - Unconfigured         SA - Suppress ARP

Interface VNI      Multicast-group   State Mode Type [BD/VRF]      Flags
--------- -------- ----------------- ----- ---- ------------------ -----
nve1      5010     239.1.1.10        Up    DP   L2 [1010]

N7K-2-pod2(config)# show nve vni
Codes: CP - Control Plane        DP - Data Plane
       UC - Unconfigured         SA - Suppress ARP

Interface VNI      Multicast-group   State Mode Type [BD/VRF]      Flags
--------- -------- ----------------- ----- ---- ------------------ -----
nve1      5010     239.1.1.10        Up    DP   L2 [1010]
I also get the correct multicast route (239.1.1.10) on both 7Ks:
N7K-2-pod2(config)# show ip mroute vrf underlay
IP Multicast Routing Table for VRF "underlay"

(*, 232.0.0.0/8), uptime: 04:56:14, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 0)

(*, 239.1.1.10/32), uptime: 04:54:17, nve ip pim
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 1)
    nve1, uptime: 04:54:17, nve

(192.168.101.72/32, 239.1.1.10/32), uptime: 04:54:17, nve mrib ip pim
  Incoming interface: loopback0, RPF nbr: 192.168.101.72
  Outgoing interface list: (count: 0)

N7K-1-pod1# show ip mroute vrf underlay
IP Multicast Routing Table for VRF "underlay"

(*, 232.0.0.0/8), uptime: 04:55:52, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 0)

(*, 239.1.1.10/32), uptime: 04:54:24, nve ip pim
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 1)
    nve1, uptime: 04:54:24, nve

(192.168.101.71/32, 239.1.1.10/32), uptime: 04:54:24, nve mrib ip pim
  Incoming interface: loopback0, RPF nbr: 192.168.101.71
  Outgoing interface list: (count: 0)
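One detail worth noting in those outputs: on each 7K the (S,G) entry only lists the local VTEP as source, and its outgoing interface list is empty (count: 0), so neither switch appears to be receiving the other's multicast traffic yet. A couple of standard NX-OS commands that could help dig into that (a sketch, not part of the original post):

```
show ip mroute 239.1.1.10 vrf underlay
show ip pim interface vrf underlay
```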
On both 7Ks, the NVE interface is up and seems to be OK:
N7K-1-pod1# show nve interface
Interface: nve1, State: Up, encapsulation: VXLAN
 VPC Capability: VPC-VIP-Only [not-notified]
 Local Router MAC: f025.72a8.bf42
 Host Learning Mode: Data-Plane
 Source-Interface: loopback0 (primary: 192.168.101.71, secondary: 0.0.0.0)

N7K-2-pod2(config)# show nve interface
Interface: nve1, State: Up, encapsulation: VXLAN
 VPC Capability: VPC-VIP-Only [not-notified]
 Local Router MAC: b414.89dc.7a42
 Host Learning Mode: Data-Plane
 Source-Interface: loopback0 (primary: 192.168.101.72, secondary: 0.0.0.0)
Let's check both service instances on the physical interfaces:
N7K-2-pod2(config)# show service instance vni detail
VSI: VSI-Ethernet7/2.1
If-index: 0x35301001
Admin Status: Up
Oper Status: Up
Auto-configuration Mode: No
encapsulation profile vni vlan10-vni5010
  dot1q 10 vni 5010
Dot1q  VNI   BD
------------------
10     5010  1010

N7K-1-pod1# show service instance vni detail
VSI: VSI-Ethernet7/1.1
If-index: 0x35300001
Admin Status: Up
Oper Status: Up
Auto-configuration Mode: No
encapsulation profile vni vlan10-vni5010
  dot1q 10 vni 5010
Dot1q  VNI   BD
------------------
10     5010  1010
Everything is fine: admin state up, operational state up.
Let's check the bridge domain:
N7K-1-pod1# show bridge-domain 1010
Bridge-domain 1010 (2 ports in all)
Name:: Bridge-Domain1010
Administrative State: UP    Operational State: UP
        vni5010
        nve1
        VSI-Eth7/1.1

N7K-2-pod2(config)# show bridge-domain 1010
Bridge-domain 1010 (2 ports in all)
Name:: Bridge-Domain1010
Administrative State: UP    Operational State: UP
        vni5010
        nve1
        VSI-Eth7/2.1
To me it all looks OK. But if I ping between my two 5Ks (same subnet, same VLAN 10, of course) through the VXLAN overlay, it doesn't work. WTF? Let's do the final check:
N7K-2-pod2(config)# show nve peers
N7K-2-pod2(config)#
For some reason, "show nve peers" stays empty. What did I miss?