
Cannot configure IPN interface

hrtendrup
Level 1

I receive the following faults while trying to configure a routed sub-interface for an IPN in the infra tenant:
hrtendrup_0-1743791660014.png
hrtendrup_1-1743791823131.png
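(For anyone who cannot view the screenshots, the same fault records can be pulled from the APIC CLI with moquery; the grep below is just to trim the output:

    apic1# moquery -c faultInst | egrep 'code|descr|dn'
)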

I do see that the interface is configured, but I cannot ping it locally. In contrast, I can ping the IPN interface that faces the other enterprise switch:
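For reference, those ping tests were run from the spine CLI with iping against the IPN VRF, roughly as below (the node name is a placeholder, and the addresses are stand-ins for the real ones):

    spine201# iping -V overlay-1 <local-sub-interface-ip>       # fails
    spine201# iping -V overlay-1 <other-ipn-interface-ip>       # works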
hrtendrup_0-1743794928757.png

Here, we see there are no static routes configured:
hrtendrup_2-1743791897222.png

Yet, when I review the (supposedly) operational data from the node, I see a static route configured with the offending, overlapping next-hop:
hrtendrup_4-1743792043718.png

 

What I find interesting is that the route with this bad next-hop is a host route to the TEP of APIC #3 in the cluster, and its preference is very different from the default. I'm guessing this is an artifact from when this fabric was configured as a Multi-Site fabric. It has since been converted to Multi-Pod; however, this may be only tangential to the actual root cause in this case.
hrtendrup_6-1743792928251.png
During troubleshooting, I added these two other routes and then deleted them. They still show up in this view, which I believe is also incorrect: as the missing drop-down arrow on the left shows, neither of them has a next-hop attached.
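In case it helps anyone reproduce this, the same data can be checked outside the GUI, roughly like this (node and APIC names are placeholders, and I believe ipRouteP is the class backing these static routes, though that's worth verifying):

    spine201# show ip route vrf overlay-1     # operational routing table on the spine
    apic1# moquery -c ipRouteP                # static-route objects as configured on the APIC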

Lastly, I cannot delete these static routes from this screen:
hrtendrup_7-1743793454765.png

I have checked the other interfaces and validated that there are no overlapping IPs configured. I've also validated that if I assign this interface some other IP address, it does come up and is pingable. So unless there is another place to configure static routes that I haven't checked, this appears to be buggy behavior. I've not found anything in the bug search yet. I've also been trying to reach TAC on this for some days now, but so far I've not been able to connect with them. I'm thinking of trying to delete these routes via the REST API, but I haven't pulled the trigger on that yet.
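If I do try the REST route, my rough plan is the two curl calls below: authenticate, then post a status=deleted change against the route's DN. The hostname, credentials, and DN here are placeholders; the real DN would need to be looked up first (e.g., via moquery -c ipRouteP).

    # authenticate and store the session cookie
    curl -sk -c cookie.txt -X POST https://apic1/api/aaaLogin.json \
      -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"<password>"}}}'
    # mark the offending static-route object as deleted (DN is a made-up example)
    curl -sk -b cookie.txt -X POST https://apic1/api/mo/uni.json \
      -d '{"ipRouteP":{"attributes":{"dn":"<dn-of-the-route>","status":"deleted"}}}'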

Any thoughts on this?

 

Accepted Solution

hrtendrup
Level 1

So the solution here was really simple: TAC just had me do a clean reload of the switch. I had done a couple of regular reloads already, but I didn't think to do a clean reload (I really should have...). So, if a regular reload doesn't work, try a clean reload. This just means you run the setup-clean-config.sh script from the fabric node's CLI first, then reload. Afterwards, my bad routes were gone and my IPN interface came up.
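For anyone landing here later, the whole procedure was essentially just this, from the affected node's CLI (node name is an example). Be aware that a clean reload wipes the node's local configuration; the node rediscovers the fabric and re-downloads its policy from the APIC afterwards.

    spine201# setup-clean-config.sh    # wipe the local startup config; wait for it to complete
    spine201# reload                   # after the reboot, the node rejoins and pulls policy again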


5 Replies

Wassim Aouadi
Level 4

To understand the situation better, allow me to ask some questions:

- Did you provision Multi-Pod using the wizard completely from scratch, or is this an additional IPN connection to the previously existing ACI Multi-Site deployment?

- Do you have the issue only with this spine-to-IPN interconnection?

- Can you check whether the routing protocol neighborship is established between the spine and the problematic interface of the local IPN?

The inter-site network was completely torn down, and then we ran the Multi-Pod wizard to re-establish the IPN.

We only see this issue on this single interface. Another interface on the same spine is working as expected (OSPF established, etc.). The other spine in the pod has both interfaces up and working as expected. It is just this one interface.

Routing protocol neighborship is NOT established. The general IP stack seems broken, so naturally OSPF is not working either.
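For completeness, this is roughly how I checked it on the spine (names are placeholders):

    spine201# show ip ospf neighbors vrf overlay-1     # no neighbor on the bad interface
    spine201# show ip interface brief vrf overlay-1    # interface state and addressing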

 

Thanks for asking. Let me know if you can think of anything to try. 

Can you check the internal TEP pool and see whether any IP subnets or addresses overlap between the spine-to-IPN addressing and the pool?

No dice. The internal pools for each pod are defined in some 10.x.x.x/16 ranges; the IPN transits are in 172.21.x.x/31 ranges.
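For reference, the pod TEP pools can be compared against the IPN addressing from the CLI as well, along these lines (assuming fabricSetupP is the right class for the pod pools):

    apic1# moquery -c fabricSetupP | grep -E 'podId|tepPool'    # per-pod internal TEP pools
    spine201# show ip interface vrf overlay-1                   # spine-side IPN addressing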

 

