11-15-2023 11:14 AM - edited 11-15-2023 11:25 AM
I have been put in charge of our ISE deployment, which consists of 2 PANs, 2 MNTs, and 4 PSNs split across two datacenters. The deployment is on version 3.1 patch 3. All VMs have 96 GB of memory and 24 CPUs; the PANs and MNTs have 600 GB disks and the PSNs have 300 GB. The deployment is used for 802.1X authentication on WiFi at a university, with about 30,000 clients at daily peak.
I was hoping to install 3.1 patch 8 next week for the security updates, but it's still not out, so I'm planning on upgrading to 3.2 instead, which I understand will also require patching after the upgrade. I plan to use the GUI for both upgrading and patching.
I've been reading up on the patching/upgrade process and have a few questions.
11-15-2023 02:36 PM
The upgrade should take roughly 4-5 hours each, depending on resources and reboot time (the seminar below also covers the timings).
Check this guide:
https://www.cisco.com/c/en/us/td/docs/security/ise/3-2/upgrade_guide/HTML/b_upgrade_prepare_3_2.html
And here is a good seminar on the split upgrade:
https://www.youtube.com/watch?v=iU4Zfqhqwdo
11-15-2023 06:14 PM
If you run the URT on the Standby PAN, it will estimate how long the upgrade takes. Run it to save yourself some trouble BEFORE the planned upgrade.
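For reference, the URT ships as a bundle that you install from the CLI of that node against a repository you've already configured - roughly like this (the exact bundle filename depends on the 3.2 build you download, and "MyRepo" is just a placeholder repository name):

ise-pan-msb-0/admin# application install ise-urtbundle-3.2.0.542.SPA.x86_64.tar.gz MyRepo

The readiness checks run as part of that install and report the per-node time estimates at the end.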
Since you're in the fortunate position that all nodes are VMs, why not consider deploying all-new ISE 3.2 OVAs using the ZTP process? You can deploy the OVA to build the 8 new VMs in advance (a lot of that work does not require a maintenance window).
For the ISE config and patch process, I only use ZTP these days, and it saves me years of my life. Check out Charlie Moreton's brilliant ZTP guide here. The hardest part of ZTP is setting it all up the first time around - but once that's done, you will reap the rewards.
Once all the new VMs are deployed, you leave them powered off and enable them for ZTP - I personally like adding the custom attributes to the VMware VM, using the encoded data from the ZTP ini file. All that prep is done during normal hours.
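As a rough sketch of that prep step (the filename below is made up, and the exact ini keys and the name of the VM attribute/parameter to populate are the ones documented in the ZTP guide), you base64-encode the per-node ZTP answer file on any Linux host and paste the resulting string into the powered-off VM's settings:

base64 -w 0 ise-psn-new-1-ztp.conf > ise-psn-new-1-ztp.b64

When the VM is powered on for the first time, ISE picks up that data and runs the initial setup unattended.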
You can start the "maintenance window" at this point - but since all nodes are redundant, I typically don't; it's up to each customer. My approach is to get the new ISE deployment running on the Standby PAN - the rest of the old deployment is not affected, so you're still very safe.
It might sound dramatic and, to be honest, it's a lot of steps, but even Cisco recommends doing a config restore instead of an inline upgrade. With SNS appliances it's not so easy, so an inline upgrade makes a lot more sense there.
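In case it helps to picture it, the config-restore flow boils down to taking a configuration backup of the old deployment and restoring it onto the freshly built 3.2 primary PAN. It can all be done from the GUI, but the rough CLI equivalent looks like this (backup name, repository, and encryption key are placeholders):

backup Pre32Backup repository MyRepo ise-config encryption-key plain MyKey123
restore <backup-file-created-above>.tar.gpg repository MyRepo encryption-key plain MyKey123

The first command runs on the old primary PAN, the second on the new 3.2 primary PAN once it is up.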
I believe that as of ISE 3.2 patch 3 the split upgrade has become really reliable, so I might rethink my method - but in the past I was bitten by inline upgrades, and once Cisco also recommended the config restore method, I never did an inline upgrade again.
11-17-2023 10:44 AM - edited 11-17-2023 11:08 AM
Thank you both for the information. I've been studying various documentation, but the more info the better.
Arne - That is a very interesting solution. I don't have time to get all that done before our maintenance window, but will keep it in mind in the future. I guess we'll see how quickly and smoothly the inline upgrade goes.
Thank you for the pointer on the URT. It identified an issue that TAC corrected (CSCwe81125), so we should be all set. After purging all data from the MnT logs, the URT provided the following timing estimates.
Estimated time for each node (in mins):
ise-pan-msb-0(SECONDARY PAP):68
ise-pan-hbl-0(PRIMARY PAP):69
ise-mnt-msb-0(MNT):84
ise-mnt-hbl-0(MNT):84
Each PSN(5 if in parallel):69
In total, it seems like around 6 hours to complete (68 + 69 + 84 + 84 + 69 comes to roughly 374 minutes, with the PSNs done as one parallel batch). Do these estimates tend to be accurate? We have a very robust VM infrastructure and network with no WAN links involved, so latency shouldn't be a factor.
What does "5 if in parallel" on the last line mean?
11-19-2023 12:18 PM
Multiple PSNs can be upgraded in parallel, as long as the NADs always have at least one PSN to talk to while it happens. Most commonly, when there are a number of PSNs behind a load balancer, you can take a batch out of commission and upgrade them simultaneously.
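If you do a batch from the CLI, it is roughly the following on each PSN in the batch, while the remaining PSNs keep servicing RADIUS (the bundle filename and repository name are placeholders):

application upgrade prepare <ise-3.2-upgrade-bundle>.SPA.x86_64.tar.gz MyRepo
application upgrade proceed

The GUI upgrade does the same thing per node; the point is simply to sequence the batches so authentication never loses its last PSN.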
11-17-2023 01:05 PM - edited 11-17-2023 01:08 PM
Another question: we are using VMware hypervisors. The virtual hardware is currently at version 9, which the ISE 3.1 installation guide says is acceptable on ESXi 6.5. On ESXi 6.7 or 7, ISE 3.1 requires hardware version 14, and ISE 3.2 requires version 14 or higher.
We currently run ESXi 8, so I'm wondering whether we should upgrade the VM hardware version, and if so, to which version, and whether to do it before, during, or after the upgrade/patching. I lean toward during, since ESXi can apply a VM compatibility upgrade on the next guest OS reboot, so the VM version would be upgraded while ISE reboots during the upgrade anyway.
Our VMware expert says staying with version 9 is fine in the sense that it is compatible with ESXi 8; I just wasn't sure whether it would be a problem for ISE.
11-19-2023 12:22 PM
The VMware hardware version makes no difference to ISE: if you look at the VMware charts of what the newer version numbers offer, it's essentially just support for more RAM, more CPUs, USB 3, and so on. But even version 8 already presents a "hardware layer" to the OS that is more than sufficient. After I deployed the ISE 3.2 OVAs I still upgraded the VM hardware version to the latest that the ESXi host could support - for no reason other than that it bothered me.
11-27-2023 10:21 AM - edited 11-27-2023 10:27 AM
Thank you for the additional information, Arne. I did the PSNs in two pairs. The 8540 WLCs are set to use one PSN from one pair as primary and one from the other pair as secondary, and I use the built-in load balancing in IOS-XE on the 9800 WLCs to distribute authentications across all 4 PSNs, so there was very little if any interruption. I left the VM version alone.
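For anyone curious, the 9800 side is just the standard IOS-XE RADIUS server-group load balancing - something along these lines (the group and server names here are made up for illustration, and each "server name" refers to a radius server defined elsewhere in the config):

aaa group server radius ISE-PSNS
 server name ISE-PSN-1
 server name ISE-PSN-2
 server name ISE-PSN-3
 server name ISE-PSN-4
 load-balance method least-outstanding

With all four PSNs in the group, taking one pair out for the upgrade simply shifts the authentications to the remaining pair.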
I completed the upgrade and hit a couple of obstacles. First, I could not initiate the upgrade from the GUI because of an error about the PAN not being able to communicate with the other nodes to initialize the upgrade (I forget the exact message), even though the deployment showed everything in sync. TAC was unbothered by this and said to do the upgrade from the CLI instead, but then I had an issue reading and copying files from the SFTP repository (even though backups were successfully being written to it). We had to delete, re-add, and regenerate host keys on each node (maybe the host key was the original problem). I was nearly 4 hours late starting, so I did the secondary PAN, one MNT, and all the PSNs overnight, and the primary PAN and remaining MNT the next day. The total time was around 7 hours (each node took slightly longer than predicted, and I wasn't watching constantly, so I didn't immediately start the next node as each one finished).
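For reference, the host-key housekeeping on each node is done with the ISE CLI crypto commands - roughly the following, with the SFTP server name as a placeholder (how much regenerating is needed depends on how the repository authentication is set up):

crypto host_key delete host sftp-repo.example.edu
crypto host_key add host sftp-repo.example.edu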
The patching after the fact went much faster, worked via GUI, and only took 2.5 hours. The nodes were done automatically one by one.
Hopefully, the "new" upgrade process (starting with 3.2 P3) that includes simultaneous patching will be smoother, but I won't plan on it.