2.2 to 2.4 upgrade post install task question

gilesc
Level 1

Ok, I have just moved companies and found an ISE 2.2 installation that is, to be honest, a mess - someone migrated from ACS to ISE without cleaning up the data first, so the rules are “interesting”, as is the fact that we don’t have the encryption key for the backups (which I will be fixing).

 

Now I want to take it to 2.4 before I do much tidying, and reading the upgrade notes all seems well until I get to the post-upgrade tasks.

 

The last task says to restore the M&T database from the backup?

 

Now here is the problem: I have two nodes which are running all the roles - admin, logging and PSN. The servers will take around 10 hours to upgrade (they are both physical SNS servers), and I then reckon on a couple of hours to run most of the post-upgrade tasks (AD, certs, system checks etc.) and apply the service pack. If I then need to restore the M&T data, will that affect the boxes processing RADIUS requests? I have about a 14-hour window, but if the restore knocks out the authentication services that could be an issue.

 

So the question is: do I have to reload the logging data, or could I dump the logging database and start with a new one? And do you need to do this before patching the application?

 

At this point I wish they had bought a few VM licenses so I could have done a more staged upgrade, but I can’t change that yet.

 

Thanks

 

Giles


3 Replies

Damien Miller
VIP Alumni
(Accepted Solution)

Personally, I would do what you are asking: purge the operational database and don't restore an operational backup. The upgrade will go much quicker. Doing so means no historical RADIUS/TACACS logs, but by default these only go back 30 days anyway. There will be no impact on anything but logs.

My suggestion would be to do the upgrade manually with the upgrade bundle from the CLI. It allows you to pause, confirm everything is working on the new 2.4 node, then continue with the second.

Using the existing nodes, you would follow this (a rough CLI sketch follows the list):
1. Run the URT and confirm the upgrade will succeed; open a TAC case and work through it if not.
2. Take a backup.
3. Purge the operational data.
4. Deregister the secondary admin node, run the upgrade bundle from the CLI, patch, and confirm everything is working as it should.
5. Run the upgrade bundle on the remaining 2.2 node, patch, and register it to the previously upgraded node.
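
As a minimal sketch of what steps 3 and 4 look like from the admin CLI - the repository name "upgrade-repo" and the bundle/patch file names below are placeholders, and the exact menu option for the purge can vary between releases, so check them against the 2.4 upgrade guide for your build:

! step 3 - purge the M&T (operational) data; pick the "Purge M&T Operational Data" option from the menu
application configure ise

! step 4 - stage and run the upgrade bundle from a pre-configured repository
application upgrade prepare ise-upgradebundle-2.x-to-2.4.x.SPA.x86_64.tar.gz upgrade-repo
application upgrade proceed

! once the node is back up on 2.4, apply the patch and sanity-check the services
patch install ise-patchbundle-2.4.x.SPA.x86_64.tar.gz upgrade-repo
show application status ise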

Even without extra VM licenses you can still stand up additional nodes; there is no enforcement of these yet with the current ISE releases (2.4). You could follow a process like this, skipping the purging steps, by using a couple of new 2.4 VMs (a restore sketch follows the list).
1. Run the URT and confirm the upgrade will succeed; open a TAC case and work through it if not.
2. Deploy the 2.4 OVA and have it staged at the setup script state.
3. Deregister the secondary node.
4. Run the setup script on the new 2.4 node, reusing the old IPs and hostnames. Restore the production backup and patch.
5. Turn off the other 2.2 node, run the setup script on the second staged 2.4 node, reusing the old IPs and hostnames, patch, and register.
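
Again just a rough sketch, the restore in step 4 from the CLI of the freshly set up node would look something like this. The backup file name, repository name and encryption key are placeholders, and the key has to be the one the backup was taken with (another reason to sort out that missing encryption key first):

! restore the configuration backup taken from the 2.2 deployment
restore config-backup-filename.tar.gpg repository upgrade-repo encryption-key plain MyBackupKey

! confirm services come back up cleanly before patching and registering the second node
show application status ise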

I prefer using new VMs for the admin nodes when I can. It allows for an easier rollback if something goes very wrong, since you are not upgrading them but rather using new nodes.

Of course Damien’s advice is sound and you can’t go wrong following it. Another “clean” upgrade option would be to (a CLI sketch of the backup steps follows the list):

a) backup all certificates
b) backup configuration database
c) backup M&T
d) dump M&T data
e) run URT
f) if all goes well, promote secondary admin node to primary (this will cause service restarts)
g) remove old primary from deployment
h) clean install 2.4 on old primary
i) patch old primary (with new 2.4 install)
j) restore backup onto upgraded node
k) import backed up certificates
l) test/validate new PAN
m) perform steps h and i on old secondary
n) join old secondary to new deployment
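
A minimal sketch of steps b) and c) from the CLI - the repository name and encryption key below are placeholders, and the certificate export in step a) is normally done from the admin GUI, so it is not shown:

! b) configuration database backup
backup ConfigBefore24 repository upgrade-repo ise-config encryption-key plain MyBackupKey

! c) M&T (operational) backup
backup OpsBefore24 repository upgrade-repo ise-operational encryption-key plain MyBackupKey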

This is an old but proven upgrade method. If you need to back out, you would reinstall 2.2 on your original PAN and rejoin it to the deployment (before step m).

George

gilesc
Level 1

Ok, thanks to both of you for the suggestions; as you say, purging the M&T data seems to make the most sense.

 

However, going back to the release notes, they state:

 

Restore MnT Backup

With the operational data backup of MnT data that you created before update, restore the backup.

Failing to perform backup and restore could trigger a High RADIUS Disk Usage Alarm. The maximum size for an MnT database file does not get updated correctly while upgrading, and the database file fills up.

 

So even if I purge the M&T logs prior to upgrading, do I still have to do anything about this? Run the M&T log purge again after bringing the server up, maybe?

 

Don't remember this being an issue when I previously went from 2.2 to 2.3.... You would have thought someone would re-write the script to upgrade without this by now.