ISE distributed deployment - restore config backup to another node

pHz
Level 1

Hello,

 

My production environment is running a distributed deployment of ISE version 2.2. My goal here is to set up a lab environment, also with ISE 2.2 nodes, and restore the PROD configuration onto the LAB ISE primary node.

 

Both my LAB and PROD are in the same subnet and VLAN.

 

What I'm worried about is that the LAB node will try to communicate with or register the existing PROD ISE nodes once I restore the backup, and generally cause problems for my PROD deployment.

 

What would the "Deployment" tab of my LAB node look like after restoring the config backup of the PROD node? Is it going to show my other 3x ISE nodes? Would I need to deregister them from the LAB?


Basically, I'm not confident about how safe it would be to run my restore on another node while the PROD deployment is running.

7 Replies

If you use the same hostname with a different IP address on the LAB VM, I would expect the restore to show you the deployment of the production build, but I don't think the LAB VM would maintain the AD join. However, I wouldn't recommend doing it that way in a production environment because I think that would break the sync between the nodes and the production primary PAN.

The way I would probably deal with this scenario is to add your LAB VM to the production deployment as a PSN (it must already have the same software version and patch installed); then, once it has synced up with the production deployment, you deregister it. At that point, your LAB VM becomes a standalone node with all the settings, policy sets, etc. replicated from the production build. This shouldn't cause any issues because your LAB VM will have a different hostname and a different IP, and when you add it as a PSN nothing will affect your environment since no network devices will be sending any RADIUS or TACACS+ traffic to it.
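
For reference, a quick way to confirm that the LAB VM is on the same software version and patch level as production is the ISE admin CLI (the prompt below is just a placeholder hostname):

  labise01/admin# show version
  labise01/admin# show application status ise

'show version' lists the installed ISE version and patches, and 'show application status ise' confirms the application services are running before you attempt the registration.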

The hostname of my LAB is different from the PROD one. Does that make any difference? It's acceptable for me to have different hostnames.

The real goal here is ultimately to test our upgrade process. We want to have 4x LAB ISE nodes on version 2.2 (same as PROD) and 4x LAB ISE nodes on version 2.7. Once the 2.2 LAB ISEs are running fine, we want to restore the config data onto the 2.7 nodes and build a new deployment from there.
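
For reference, the config data we want to carry over is a standard ISE configuration backup taken on the PROD primary PAN; from the CLI it looks roughly like this (the backup name, repository, and encryption key are placeholders):

  backup ProdConfig repository LAB-REPO ise-config encryption-key plain MyKey123

The same can also be run from the GUI under Administration > System > Backup & Restore.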

If your LAB VM has a different hostname, then what would happen when you try the restore is that the PAN will be converted to a standalone node and the deployment will be broken. I would stick with my proposal from the previous post.

Since you just need this LAB for a short period of time, you can spin up one LAB VM, bring it to the same software and patch level, and then clone it three times to get a total of four nodes. Add only the first one to the production deployment as per my previous post; then, after you deregister it, add the other three cloned nodes to it to build your new LAB deployment with the four nodes.

I think this is the safest way to do it.

Wouldn't that be exactly what I'm trying to achieve? If the LAB node is converted into a standalone node with all the config - that sounds like exactly what I need.

Nonetheless, doing it that way seems a bit sketchy.

 

Regarding the method in your first post, my PROD setup has 3 patches installed (let's say 3, 6, 9). On my LAB, I have already installed patch 9 without 3 and 6 in between. Would that be a problem when registering the node in the deployment? I read that patches are cumulative, so I assume that should be OK?

It would, but there would still be a risk that might affect your production.

You don't have to install the previous patches; only the latest one needs to be installed because, as you said, patches are cumulative.
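
If the LAB VM still needs to be brought up to that latest patch first, the CLI install is along these lines (the bundle filename and repository name below are placeholders; use the actual 2.2 Patch 9 bundle from Cisco.com):

  patch install ise-patchbundle-2.2.0.470-Patch9.SPA.x86_64.tar.gz LAB-REPO

You can then verify it with 'show version', which lists the installed patches.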

Damien Miller
VIP Alumni

I've done this plenty of times without issue. The restore will come up with no knowledge of the other nodes in the deployment; it won't list the other nodes in the deployment view. The only way this restored node will communicate with the existing production nodes is if you make the new "lab" node primary and then join other nodes to it.

pHz
Level 1

So I've performed the operation. I ended up restoring the backup of the PROD device onto my LAB node.

 

My LAB node had the following characteristics:

  • Same VLAN & Subnet as the PROD device
  • Different hostname from the device the backup was obtained on
  • Different IP
  • Standalone node

After restoring the backup, the LAB node was still in standalone mode and had no knowledge of the existing PROD deployment. I promoted it to PRIMARY and registered my other LAB nodes into the deployment (after importing their certificates in the Trusted Certificates menu and installing my primary LAB node's certificate on the other LAB nodes), and they received the configuration.
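
For anyone repeating this: the restore can be run either from the GUI (Administration > System > Backup & Restore) or from the CLI. The CLI form looks roughly like this, where the backup filename, repository, and encryption key are placeholders (the real filename is generated when the backup is taken):

  restore ProdConfig-CFG-230101-1200.tar.gpg repository LAB-REPO encryption-key plain MyKey123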

 

The AD configuration was still there, but the connection was broken. All 4 nodes of the new LAB deployment had to rejoin the AD domain (a domain admin account was required).

 

Finally, after all of this I had some minor issues with the "Context Visibility" menu, which would refuse to display anything.

The error displayed the following:

Unable to load Context Visibility page. Ensure that full certificate chain of admin certificate is installed on Administration->System->Certificates->Trusted Certificates. If not, install them and restart application services. 

 

I ran the following commands in the CLI:

  1. On the SAN (start with the SAN before the PAN):
    • application configure ise
    • 19 ("Reset Context Visibility")
  2. On the PAN:
    • application configure ise
    • 19 ("Reset Context Visibility")
  3. On the PAN:
    • (still in 'application configure ise' context menu): 20 ("Synchronize Context Visibility with Database") 

After that, I could access Context Visibility again.

I will update this post one more time after performing a restore operation on another ISE node with the same hostname as the original node (but a different IP), to report whether or not it causes issues for the original deployment.