01-26-2019 02:24 PM
Ok, I have just moved companies and found an ISE 2.2 installation that is, to be honest, a mess. Someone migrated from ACS to ISE without cleaning up the data first, so the rules are "interesting", as is the fact that we don't have the encryption key for the backups, which I will be fixing.
Now I want to take it to 2.4 before I do much tidying, and reading the upgrade notes all seems well until I get to the post-upgrade tasks.
The last task says to restore the M&T database from the backup.
Now here is the problem: I have two nodes which are running all the roles (admin, monitoring and PSN). The servers will take around 10 hours to upgrade (they are both physical SNS servers), and I then reckon on a couple of hours to run most of the post-upgrade tasks (AD, certs, system checks etc.) and apply the service pack. If I then need to restore the M&T data, will that affect the boxes processing RADIUS requests? I have about a 14-hour window, and if the restore knocks out the authentication services that could be an issue.
So the question is: do I have to reload the logging data, or could I dump the logging database and start with a new one? And do you need to do this before patching the application?
At this point I wish they had bought a few VM licenses so I could do a more staged upgrade, but I can't change that yet.
Thanks
Giles
01-26-2019 03:09 PM - edited 01-26-2019 11:59 PM
Personally, I would do what you are asking: purge the operational database and do not restore an operational backup. The upgrade will go much quicker. Doing so means no historical RADIUS/TACACS logs, but by default these only go back 30 days anyway. Nothing but the logs will be affected by this.
Doing the upgrade manually with the upgrade bundle from the CLI would be my suggestion. It allows you to pause, confirm everything is working on the new 2.4 node, and then continue with the second.
Using the existing nodes, you would follow this:
1. Run the URT and confirm the upgrade will succeed; open a TAC case and work through it if not.
2. Take a backup.
3. Purge the operational data.
4. Deregister the secondary admin node, run the upgrade bundle from the CLI, patch, and confirm everything is working as it should.
5. Run the upgrade bundle on the remaining 2.2 node, patch, and register it to the previously upgraded node.
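The steps above can be sketched as an ISE appliance CLI session. This is only a sketch: the repository name, backup name, encryption key, and bundle filenames are placeholders, and the exact menu wording inside `application configure ise` varies by version.

```
! Step 2: take a configuration backup (repository "FTP-Repo" and the key are placeholders)
backup Pre24Backup repository FTP-Repo ise-config encryption-key plain MyKey123

! Step 3: purge the operational (M&T) data from the maintenance menu
application configure ise
!   ...select the "Purge M&T Operational Data" option when prompted

! Steps 4/5: upgrade from the CLI on each node in turn (bundle name is illustrative)
application upgrade prepare ise-upgradebundle-2.4.x.x.SPA.x86_64.tar.gz FTP-Repo
application upgrade proceed

! Apply the patch once the node is back up (patch filename is illustrative)
patch install ise-patchbundle-2.4.x.x.SPA.x86_64.tar.gz FTP-Repo
```

Running the bundle from the CLI like this is what makes the pause between nodes possible: nothing touches the second node until you explicitly run the prepare/proceed pair on it.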
Even without extra VM licenses you can still stand up additional nodes; there is no enforcement of these yet in current ISE releases (2.4). You could follow a process like this, skipping the purging steps, using a couple of new 2.4 VMs:
1. Run the URT and confirm the upgrade will succeed; open a TAC case and work through it if not.
2. Deploy the 2.4 OVA and stage it at the setup-script state.
3. Deregister the secondary node.
4. Run the setup script on the new 2.4 node, reusing the old IPs and hostnames. Restore the production backup and patch.
5. Turn off the other 2.2 node, run the setup script on the second staged 2.4 node (reusing the old IPs and hostnames), patch, and register.
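For the new-VM path, the restore in step 4 would look roughly like the following from the new node's CLI. Again a sketch: the backup filename, repository name, and encryption key are all placeholders, and the backup file must have been taken on the old 2.2 deployment with the matching key.

```
! Step 4: restore the configuration backup taken from the 2.2 deployment
! (filename, repository, and key are placeholders)
restore Pre24Backup-CFG.tar.gpg repository FTP-Repo encryption-key plain MyKey123

! Then apply the patch (filename is illustrative)
patch install ise-patchbundle-2.4.x.x.SPA.x86_64.tar.gz FTP-Repo
```

This is also why the missing backup encryption key from the original post matters: without it, the restore on the new node cannot decrypt the backup at all.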
I prefer using new VMs for the admin nodes when I can. It allows for an easier rollback if something goes very wrong, since you are not upgrading them but rather standing up new nodes.
01-27-2019 03:46 AM
Ok, thanks to both of you for the suggestions; as you say, purging the M&T data seems to make the most sense.
However, going back to the release notes, it states:
Restore MnT Backup
With the operational data backup of MnT data that you created before update, restore the backup.
Failing to perform backup and restore could trigger a High RADIUS Disk Usage Alarm. The maximum size for an MnT database file does not get updated correctly while upgrading, and the database file fills up.
So even if I purge the M&T logs prior to upgrading, do I still have to do anything about this? Run the M&T log purge again after bringing the server up, maybe?
I don't remember this being an issue when I previously went from 2.2 to 2.3... You would have thought someone would have rewritten the upgrade script to avoid this by now.