
ISE Upgrade using Backup & Restore method - Maintenance Plan

aarav
Level 1

Hi,

 

Could someone share a maintenance window plan for an ISE upgrade of a 4-node cluster?

 

All nodes need to be re-imaged and moved to the new 2.7 deployment:

 

2x active PSN

1x primary PAN & secondary MnT

1x secondary PAN & primary MnT

 

I'm planning to re-image using remote CIMC. Is this method good for re-imaging, or is onsite USB quicker?

 

Also, could you please share your maintenance window schedule from a 4-node upgrade scenario? I was thinking over a weekend, starting on Friday and finishing by Sunday, for all four nodes to be re-imaged, joined to the new deployment, and tested at each phase.

 

Thanks in advance...

1 Accepted Solution


Damien Miller
VIP Alumni

A USB stick plugged into the SNS appliance will be quicker than imaging it via vKVM mounting. Mounting the ISO via the vKVM/CIMC is certainly possible, but you are constrained by network bandwidth; it may be feasible in a tight window if you have a local jumpbox.
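To make the bandwidth constraint concrete, here is a rough back-of-envelope sketch. The ISO size and link speeds below are illustrative assumptions, not measured values; real vKVM virtual-media installs add protocol overhead on top of raw transfer time.

```python
def transfer_hours(size_gb: float, mbps: float) -> float:
    """Hours to move size_gb gigabytes at mbps megabits per second."""
    return (size_gb * 8 * 1000) / mbps / 3600

iso_gb = 10  # assumed rough size of an ISE installation ISO

# Remote vKVM over a WAN with ~10 Mbps effective throughput (assumption)
print(f"Remote vKVM: {transfer_hours(iso_gb, 10):.1f} h")   # ~2.2 h
# Local jumpbox on a 1 Gbps LAN (assumption)
print(f"Local jumpbox: {transfer_hours(iso_gb, 1000):.2f} h")
```

A local USB stick sidesteps the network entirely, which is why it usually wins for re-imaging.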

 

You don't indicate the version you're upgrading from, so this could change a bit. You also need SNS 35x5 or SNS 36x5 appliances in order to upgrade to 2.7. Before going down this path, is performing an inline upgrade out of the question?

Assuming the backup/restore method is used and imaging is done from USB:
1. Install 2.7 on the secondary PAN node, run setup - est. 2 hrs
2. Restore backup to the secondary PAN node (it becomes primary until the end of the process), join AD - est. 1.5 hrs
3. Install 2.7 on PSN 1, run setup, join to deployment, join AD - est. 2 hrs, plus time for testing
4. Install 2.7 on PSN 2, run setup, join to deployment, join AD - est. 2 hrs
5. Install 2.7 on the old primary PAN, run setup, join to deployment, join AD - est. 2 hrs
6. Install the current patch - est. 30 min per node
7. Flip primary/secondary PAN roles if desired - est. 20 min
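Summing the estimates above (a quick sanity check, treating the per-node patch time as 4 x 30 min and ignoring testing time) gives roughly a full working day:

```python
# Step estimates from the plan above, in hours
steps = {
    "Image + setup secondary PAN": 2.0,
    "Restore backup, join AD": 1.5,
    "PSN 1 image/setup/join": 2.0,
    "PSN 2 image/setup/join": 2.0,
    "Old primary PAN image/setup/join": 2.0,
    "Patch all 4 nodes (0.5 hr each)": 4 * 0.5,
    "Flip PAN roles": 20 / 60,
}
total = sum(steps.values())
print(f"Total: {total:.1f} hours")  # Total: 11.8 hours
```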

I would target to have this done in a single longer day. 
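For reference, the config backup and restore in steps 1-2 map onto ISE CLI commands along these lines. The repository name, backup name, and encryption key below are placeholders, and the restore filename is whatever ISE generated when the backup ran:

```
! On the old 2.2 primary PAN: take a configuration backup to a repository
backup Pre27Backup repository FTP-Repo ise-config encryption-key plain <key>

! On the freshly imaged 2.7 node: restore that backup
! (use the .tar.gpg filename ISE generated during the backup)
restore Pre27Backup-CFG10-<timestamp>.tar.gpg repository FTP-Repo encryption-key plain <key>
```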


3 Replies


Hi Damien,

Thanks for your valuable comments.

We are going to upgrade from 2.2.

Is the inline upgrade a safe method, given that we are jumping three long-term releases?

 

Also, at the halfway point, both the new and old deployments will each have 1x PAN/MnT and 1x PSN, and the NAD configuration decides the load balancing.

Will this affect the end-user devices and NADs talking to two different deployments?

 

Also, with pxGrid services for Stealthwatch and FTD, does anything need to be done before or after the upgrade?

 

Thanks

I've had only minimal issues leveraging the inline upgrade from the CLI. I pre-stage the upgrade file on the local disk of each node, then run them in the same order provided above. The issue I have run into with inline upgrades was failing the pre-upgrade disk space check on 200 GB VM nodes. We found that once a 200 GB PSN had less than 40% disk space left, it would fail this check. We were able to work with TAC to clean up some log files and allow the upgrade to proceed. This shouldn't be an issue on appliances, since none of them come with disks that small.
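As a sketch, the inline CLI upgrade flow looks roughly like this on each node; the repository name is a placeholder, and the bundle filename is whichever upgrade bundle you have staged. Given the disk space issue above, it is worth checking free space before starting:

```
! Check free disk space before starting
show disks

! Stage the bundle and run the readiness checks
application upgrade prepare ise-upgradebundle-<version>.x86_64.tar.gz Upgrade-Repo

! Run the actual upgrade once prepare succeeds
application upgrade proceed ise-upgradebundle-<version>.x86_64.tar.gz Upgrade-Repo
```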

You will have a split-brain deployment during the upgrade of the PSNs. The NADs will attempt to authenticate to the PSNs they are configured to use, and fail over to the alternate. You may see some odd behavior, though. For example, if a PSN is up but not joined to AD, it may return an access-reject to clients. So running through the AD join again, if the node requires it, is important to prevent impact.

For the pxGrid integration, as long as the consumers are configured for two pxGrid nodes, there should be minimal issue. While both PSNs are up there will be a split-brain scenario: both pxGrid nodes will consider themselves active in v1 and respond to client connections. You will have information disparity where endpoint Y authenticates to the 2.2 deployment while endpoint X authenticates to the 2.7 deployment.