Unable to log into APIC UI

cooperb01
Level 1

Hi

I have erased the configuration from the APIC servers using "eraseconfig setup". I have done this several times before and I have never had any issues.

 

Today I have followed the same process as before and have rebuilt the cluster but I am now unable to sign into the UI. I get the following error from all three servers:

 

" REST Endpoint user authorization datastore is not initialized - Check Fabric Membership Status of this fabric node"

 

Thanks

Ben

2 Accepted Solutions

Tomas de Leon
Cisco Employee

Ben,

Access the console or CIMC KVM of each APIC controller.

Please try to log in to all 3 APICs using the "rescue-user" username. There should be no password since you cleared the configuration.

After logging into each APIC, run the "acidiag avread" command and capture the output.

Please attach the outputs in your reply, or check and verify the "fabric name" on each APIC. They should all match. If not, you will need to run "eraseconfig setup" on each APIC again and configure the APICs via the initial setup script.

Thanks,

T.

 

 


dpita
Cisco Employee

Hello Ben,

It was great working with you on the Webex today! I have closed the case; here is a summary of the resolution for that fault when attempting to log in:

The problem was that the switches were still part of an old fabric. Only the APICs were cleared, so they could not form the cluster through the switches that were still part of the old fabric instance. To resolve the issue:

-cleared and rebooted all switches with "setup-clean-config.sh" and "reload"
-ran the "eraseconfig setup" command on all APICs, which rebooted them into the setup script
-completed the setup script with the newly advised infra VLAN of 3967
-the login attempt to APIC1 was successful, and fabric membership showed a leaf waiting to be registered
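Once the fabric is rebuilt, the same authentication the UI performs can be exercised directly against the documented "aaaLogin" REST endpoint, which is a quick way to confirm the authorization datastore error is gone. A minimal sketch; the hostname and credentials below are placeholders:

```python
import json

def login_url(apic_host: str) -> str:
    # Standard APIC REST login endpoint.
    return f"https://{apic_host}/api/aaaLogin.json"

def login_payload(user: str, pwd: str) -> str:
    # Documented aaaLogin body: an aaaUser object with name/pwd attributes.
    return json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": pwd}}})

# Usage sketch (placeholders, and a lab-only verify=False):
# import requests
# r = requests.post(login_url("apic1.example.com"),
#                   data=login_payload("admin", "password"), verify=False)
# An HTTP 200 with an aaaLogin object in "imdata" means authentication,
# and therefore the user authorization datastore, is working again.
```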

 

 

 


9 Replies


Thanks Daniel 

Great working with you today.

It might also be worth mentioning that APIC cluster replication is performed over the infra tenant on the fabric, not via the OOB management network.

Clustering can be done over the OOB management network to recover from certain failure scenarios, but in this case, since the APICs were never in a cluster after clearing their setup config, I'm not sure OOB management would be an option.

 

I was under the impression that our documentation states that if you clear the configuration on the APICs using "eraseconfig setup", you must also clear the configuration on the switches. If that is not the case, we need to fix that.

Thanks for your comments.

 

It would be great if Cisco could confirm that, in the event of someone removing the configuration from the APICs using "eraseconfig setup", you must also clear the configuration from the switches.

If this is the only way to restore the fabric, then it would involve a significant amount of downtime in the DC.

 

Thanks

Ben

Well, here's the thing. If you use "eraseconfig setup" on just one APIC, nothing happens. Your fabric keeps forwarding, the remaining two APICs are still there, and when APIC3 comes back online (after the setup script) it will rejoin the cluster; replication and sharding will be distributed amongst all three APICs again, and everything will be fully fit. The switches will continue forwarding without that one APIC, so you can "eraseconfig setup" all you want on individual APICs and the switches will keep forwarding. You can in fact "eraseconfig setup" on all but one APIC: so long as you have one APIC to fall back on, the others can be rebuilt from it. This is because each APIC has the FULL MIT; it is just sharded and load balanced across all three APICs for high redundancy and performance.

Having the APICs join the switches again in the previous instance of a fabric is not possible. So if you "eraseconfig setup" on all three, then yes, you need to clear the switches. But under what circumstances would that happen in a production environment? Before that is even considered, a TAC case should be opened to triage the issue and hopefully prevent that drastic step from being taken =)
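The "every APIC has the full MIT" point above can be illustrated with a toy model. This is purely illustrative, not the real APIC replication protocol: every controller holds a replica of every shard, so any single surviving controller is enough to rebuild the rest:

```python
# Toy model: full replication of MIT shards across a 3-node cluster.
# Hypothetical shard names; not actual APIC data structures.

def place_replicas(shards: dict, controllers: list[str]) -> dict:
    """Give every controller a full copy of every shard."""
    return {c: dict(shards) for c in controllers}

def rebuild_from_survivor(placement: dict, survivor: str) -> dict:
    """Recover the complete tree from one surviving controller."""
    return placement[survivor]

shards = {"shard-1": "tenant config", "shard-2": "fabric policy"}
placement = place_replicas(shards, ["apic1", "apic2", "apic3"])
# "eraseconfig setup" on apic2 and apic3; apic1 alone still has everything:
recovered = rebuild_from_survivor(placement, "apic1")
```

The load-balancing part of the real design is about which controller serves which shard, not about which controller stores it; with a 3-node cluster, each shard has a replica on every APIC.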

 

 

 

I am in the same situation as this post.

Please let me know in detail how I can solve the problem.

Progress so far:

1. Ran "eraseconfig setup" on APIC1-3.
2. Got "REST Endpoint user authorization datastore is not initialized - Check Fabric Membership Status of this fabric node".
3. How do I solve this problem?

Please walk me through it step by step.

Hello,

When you clear the configuration on the APICs and restore them to factory defaults, you also need to clear the configuration on all of your leaf & spine nodes. The leaf & spine nodes retain information from the previous cluster, so your APICs are essentially running in a different fabric.

Access each leaf & spine node via the console, run "setup-clean-config.sh", and then reload the switch. The switches will go through discovery again. Once they are added back to the fabric, the cluster and fabric will form and all things should be OK.

Cheers!

T.
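After the switches rediscover, the Fabric Membership view the error message points at can also be read programmatically via a class query on "fabricNode". A hedged sketch, assuming the usual APIC REST envelope (an "imdata" list of objects keyed by class name); the attribute names here match the documented fabricNode class, but verify against your release:

```python
import json

def fabric_node_url(apic_host: str) -> str:
    # Class query listing the switches known to the fabric.
    return f"https://{apic_host}/api/node/class/fabricNode.json"

def node_states(response_json: str) -> dict[str, str]:
    """Map switch name -> fabric state ("fabricSt") from a fabricNode
    class-query response."""
    doc = json.loads(response_json)
    return {
        item["fabricNode"]["attributes"]["name"]:
            item["fabricNode"]["attributes"]["fabricSt"]
        for item in doc.get("imdata", [])
    }
```

Every node showing an active state means registration completed; a node stuck elsewhere is the one to chase.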

Although the leaf/spine switches have been reset, the UI error is still present.

The details are recorded at the URL below.

Please reply.

https://supportforums.cisco.com/discussion/13158561/aci-unable-login-after-initialization-apic
