Multipod configuration did not work

Lobinho
Level 1

I have done all of the Multipod configuration and will be bringing up the leafs and spines.
Here is some information:
This is my second fabric, so I am setting it up as Fabric ID 2.
I got to the point of discovering the Pod 2 spines; I have configured the IPN and Multipod, but the spines are still stuck in "Discovering".
In this fabric, Pod 1 has only one APIC and Pod 2 has the other two, and I am starting the configuration from Pod 1. Is there a minimum number of APICs required to bring up Multipod?
Could I bring up all of Pod 2 first and then start configuring it?

5 Replies

Robert Burns
Cisco Employee

First, don't use anything but Fabric ID 1; you distinguish fabrics by name. Second, you can discover the second pod fine with a single controller. If it's stuck in "Discovering", you most likely missed some configuration on the IPN. Recheck everything, such as the DHCP relay and the other critical OSPF configuration.

 

Robert
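To make the IPN checklist above concrete, here is a minimal sketch of what the spine-facing side of an IPN switch (NX-OS) typically needs for Multipod: OSPF toward the spines, a jumbo MTU, VLAN-4 dot1q sub-interfaces, PIM Bidir for the BUM/GIPo range, and DHCP relay pointing at the APIC TEP addresses in Pod 1. The interface name, IP addresses, RP address, and OSPF process tag below are placeholders, not values from this thread:

feature ospf
feature pim
feature dhcp
service dhcp
ip dhcp relay

! Phantom RP for the BUM/GIPo range (225.0.0.0/15 by default); 12.0.0.1 is a placeholder
ip pim rp-address 12.0.0.1 group-list 225.0.0.0/15 bidir

router ospf IPN

! Sub-interface toward a spine; dot1q tag 4 and an MTU larger than the fabric MTU are required
interface Ethernet1/1.4
  description to-Pod2-Spine
  mtu 9150
  encapsulation dot1q 4
  ip address 192.168.2.1/30
  ip ospf network point-to-point
  ip router ospf IPN area 0.0.0.0
  ip pim sparse-mode
  ! one relay entry per APIC TEP address in Pod 1
  ip dhcp relay address 10.0.0.1
  no shutdown

Missing or wrong DHCP relay addresses on these sub-interfaces are one of the most common reasons remote-pod spines sit in "Discovering", since the spines bootstrap over DHCP relayed through the IPN.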

 

Hello Robert,
I have now configured it with Fabric ID 1, but I have the same problem.
I configured Multipod and registered the Pod 2 spines; I can see that the APIC has set their IPs as "Reserved", and on the IPN interfaces directly connected to the Pod 2 spines I checked LLDP and already see the spines with the right names.
But the pod still has not come up.
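For reference, the IPN-side checks described above can be done with standard NX-OS show commands; the sub-interface name is a placeholder and the comment lines are annotations, not commands:

# Pod 2 spines should appear as LLDP neighbors with their node names
show lldp neighbors
# confirm the relay addresses configured on the spine-facing sub-interfaces
show ip dhcp relay
# relay counters should increment while the spines try to discover
show ip dhcp relay statistics
# sub-interface up, dot1q 4, jumbo MTU
show interface Ethernet1/1.4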

Lobinho
Level 1

[Attachment: DHCP OK.PNG]

Do you have any Leafs connected (physically and powered up) to the Spine in Pod2?

Also, when you reconfigured the Fabric ID (which can only be done by erasing everything), did you also re-initialize the switches in Pod 2 by running the script 'setup-clean-config.sh' locally on the Pod 2 spine?

Robert
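For completeness, the factory-reset sequence referenced above is run from the spine's own CLI and followed by a reboot, so the switch comes back with no configuration and can be rediscovered; a rough sketch:

# run locally on the Pod 2 spine as admin
setup-clean-config.sh
# wait for the script to report it is done, then reboot
reload

After the reload the spine should show up again under Fabric Membership on the APIC, ready to be registered.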

I redid everything in Pod 2, but still without success.
Then I checked the IPs of the OSPF interfaces on the IPN and found they had been changed. After fixing the IPs, Multipod came up.

Thank you for your help!
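As a final sanity check after correcting the OSPF interface IPs, the adjacencies on the IPN and the node registration on the APIC can be verified roughly like this (output will vary; comment lines are annotations):

# on the IPN (NX-OS): each spine-facing sub-interface should be listed in the expected area
show ip ospf interface brief
# adjacencies toward the connected spines should reach FULL
show ip ospf neighbors
# on the APIC CLI: Pod 2 spines and leafs should be listed as 'active'
acidiag fnvread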
