Hosts within same EPG not able to ping each other in a three Multi-Pod fabric

Patrick3439
Level 1

Hello Cisco Community,

 

I have the following problem in a three Multi-Pod fabric:

I can't reach hosts in Pod 3, even though they are in the same EPG. Hosts in Pod 1 and Pod 2 are reachable, but hosts in Pod 3 are not.

 

The two hosts are in the same subnet, EPG, and VLAN:

Host 1: 10.7.0.210 (Pod 2)

Host 2: 10.7.0.211 (Pod 3)

 

It seems that the ARP request is not being resolved properly.

 

The Bridge Domain is configured with L2 Unknown Unicast -> Flood, ARP Flooding -> enabled, and a subnet of 10.7.0.0/24.

 

When I disable ARP Flooding and set L2 Unknown Unicast to Hardware Proxy, everything works.

 

The IPN configuration has already been checked.

 

What else can I check?

 

Thanks!

9 Replies

6askorobogatov
Level 1

"When I disable ARP Flooding and set L2 Unknown Unicast to Hardware Proxy everything is working." 

Could it be that switching the BD to Hardware Proxy simply cleared the ARP caches? If you switch back to Flood, does it still work?

Nope, it only works with ARP Flooding disabled. But I need that option because of silent hosts.

Robert Burns
Cisco Employee

Curious, do you have a BGP Route Reflector Spine configured in each Pod?

Robert

Yes, all Spines in each Pod are configured as a BGP Route Reflector.

The Spines know the MACs, which is why it works with Hardware Proxy, but the L2 broadcast is not reaching Pod 3.

Do you see drops under Tenant > Operational > L2 Drops?

There are no drops in the L2 Drop table under either tab, "Packets" or "Flows".

 

What can I check?

I realize you already mentioned that the IPN has been checked, but what about double-checking the IPN's Multicast PIM BIDIR configuration? Are all multicast groups taken into consideration there?

- PIM Sparse across all interfaces in the IPN, including the interfaces facing the Spines.

- PIM Adjacency UP between IPN Switches.

- PIM RP on all IPN switches.

- Correct PIM BIDIR configurations.
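The checklist above can be spot-checked from the IPN CLI; a quick NX-OS sketch (exact output and option names vary by platform and release):

```
! Run on each IPN switch
show ip pim neighbor            ! adjacencies toward the Spines and the other IPN switches should be UP
show ip pim interface brief     ! sparse-mode must be enabled on every IPN- and Spine-facing interface
show ip pim rp                  ! every switch should learn the same RP, flagged as Bidir
show ip mroute                  ! the BD GIPo groups should appear with Spine-facing interfaces in the OIF list
```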

Just in case:

https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-739714.html

 

Hi, I have checked these points:

- All interfaces on the IPN Switches have "ip pim sparse-mode" configured

- All IPN neighbors are reachable and UP

- The RP is configured on each IPN switch, but the loopback for the Phantom RP is only configured on the 2 IPN switches in Pod 1. Could this be a problem? Does the Phantom RP loopback have to be configured on every IPN switch? I have 6 IPN switches, 2 per pod.

Also, the RP address is reachable from all IPNs.

 

The IPN configuration matches the white paper exactly, except for the IP addresses.

 

What I have found out is that the leaf switch in Pod 2 doesn't know the MAC address of the destination host in Pod 3 (checked with the command "show endpoint mac XX:XX:XX:XX:XX:XX"; there is no entry).
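To narrow down where learning breaks, it can help to compare the leaf's local endpoint table with the spines' COOP database; a hedged sketch (the key format of the COOP command varies by release, and the MAC/VNID values are placeholders):

```
! On the Pod 3 leaf where the host is attached:
show endpoint mac XX:XX:XX:XX:XX:XX       ! should show a local entry on the access port

! On a Spine (COOP is the fabric-wide endpoint repository):
show coop internal info repo ep key <BD-VNID> XX:XX:XX:XX:XX:XX
```

If the Pod 3 leaf has the endpoint locally and the spines' COOP repo has it too (which Hardware Proxy working suggests), then unicast forwarding is fine and the remaining suspect is the BUM/flood path over the IPN multicast trees.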

It is OK not to have the Phantom RP loopback configured on all IPN switches, but all IPN switches need to know who the RP is and have routing to it.

For the 2 IPN switches that have the Phantom RP loopback configured, you must make sure that one is configured with a longer prefix length (a more specific route), so that only that switch's loopback is used as the primary RP for the multicast groups.

I'm assuming you're allowing the complete multicast range of 224.0.0.0/4.

Last but not least, routing reachability to the RP should be through the IPN switches and not through the Spine switches (that can happen).
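For reference, the Phantom RP pattern described above can be sketched as an NX-OS fragment (addresses, OSPF process name, and group range are hypothetical; the point is the differing prefix lengths, and the RP address itself is deliberately not assigned to any interface):

```
! IPN-1 (primary RP candidate): most specific route to the phantom RP subnet
interface loopback1
  ip address 192.168.100.2/30          ! subnet covers the phantom RP 192.168.100.1
  ip ospf network point-to-point
  ip router ospf IPN area 0.0.0.0
ip pim rp-address 192.168.100.1 group-list 224.0.0.0/4 bidir

! IPN-2 (backup): same loopback IP, shorter prefix -> less specific route
interface loopback1
  ip address 192.168.100.2/29
  ip ospf network point-to-point
  ip router ospf IPN area 0.0.0.0
ip pim rp-address 192.168.100.1 group-list 224.0.0.0/4 bidir

! All remaining IPN switches only need the rp-address statement plus a route to it
```

Because 192.168.100.1 falls inside both advertised loopback subnets but is configured on neither, every switch routes toward whichever IPN advertises the longest prefix; if IPN-1 fails, OSPF falls back to IPN-2's /29.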
