How to migrate ISE to new servers

tachyon05
Level 1

We have an SNS-3415-K9 at each branch office and a pair of SNS-3495-K9 at our headquarters. The branch office servers run the policy service persona only, while the 2 HQ servers back up each other and run the administration and monitoring personas in addition to policy service. We are running the highest software version the existing hardware can support, 2.3.0.298 patch 7. We have placed an order for a number of SNS-3615-K9 appliances to replace the existing hardware. Each remote office will still get only 1 new server to replace the old one, but HQ will get 4 new servers to replace the current 2. How would you migrate the cluster from the old servers to the new?

17 Replies

marce1000
VIP

 

 - Check this thread for advice and hints:

              https://community.cisco.com/t5/network-access-control/migrate-to-new-ise-server/td-p/3703083

 M.




hslai
Cisco Employee

Adding to marce1000's...

As SNS 36x5 appliances do not support ISE 2.3, and ISE 2.3 reached end of support on 2020-June-17, please review the upgrade journey of the target ISE release at ISE Install and Upgrade Guides.

tachyon05
Level 1

Thanks for your replies. Since I have new servers to replace every old server, is there a way to stand up the new servers as a brand-new ISE cluster and cut over from the old to the new cluster in phases? For example, can I point the access switches on one floor/building at a time to the new cluster across multiple maintenance windows, or is it recommended to cut over the entire enterprise (by reconfiguring all access switches everywhere to point to the new cluster) inside 1 big maintenance window? Will end users' devices be able to connect to the network as usual, or is user intervention needed (perhaps to accept/trust a new certificate)?
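For context, "pointing the access switches at the new cluster" would just mean updating each switch's RADIUS server definitions. A minimal IOS sketch of what I have in mind; the server names, addresses, and key below are examples only:

    ! define the replacement PSN and swap it into the AAA server group
    radius server ISE-NEW-BRANCH-01
     address ipv4 10.1.1.21 auth-port 1812 acct-port 1813
     key ExampleSharedSecret
    exit
    aaa group server radius ISE-SERVERS
     server name ISE-NEW-BRANCH-01
     no server name ISE-OLD-BRANCH-01
    exit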

 

I have 2 nodes at HQ
ISE-HQ-1  (primary admin / secondary monitoring / PSN)

ISE-HQ-2  (secondary admin / primary monitoring / PSN)

 

I also have branch offices with just 1 PSN

ISE-branch-01 (PSN)

ISE-branch-02 (PSN)

ISE-branch-xx (PSN)

Hi @tachyon05 

 

1. SNS-36XX appliances MUST be running ISE 2.6+.
2. Test the new SNS-36XX appliances and ISE version on a lab switch first.
3. I suggest at least two maintenance windows:
a. the first covers a percentage of your Enterprise (5%, 10%, or other)
b. the second, third, etc. cover the rest of your Enterprise.


Hope this helps.

Thanks. That is what I'd like to do; however, I am trying to determine what happens to users' devices as they physically move between the old and new ISE clusters. For example, a user goes from a floor/building on the old ISE to a floor/building on the new ISE, and vice versa.

Build out your new ISE deployment using all the new appliances. It should be configured using a backup of the existing deployment's configuration, so all policies, NAD definitions, etc. will match. Use the 90-day evaluation licensing to start.
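For reference, a rough sketch of the config restore from the ADE-OS CLI on the new Primary PAN; the repository name, URL, credentials, backup file name, and encryption key below are all placeholders:

    ! trust the SFTP host key, then define the repository holding the backup
    crypto host_key add host 10.0.0.50
    configure terminal
     repository MIGRATION-REPO
      url sftp://10.0.0.50/ise-backups
      user backupadmin password plain ExamplePass1
      exit
     exit
    ! restore the configuration backup taken from the old deployment
    restore CFG-Backup-Old-Deployment.tar.gpg repository MIGRATION-REPO encryption-key plain ExampleKey1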

Put the new servers all in place; the only difference is that they will temporarily have new addresses. Then decommission each old server, starting with the remote PSNs. As you do, change the address of the corresponding new PSN to match. End users and network access devices should not see any change in behavior, and users should not have to do anything differently for a basic 802.1X deployment.
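The re-IP itself is only a few lines of ADE-OS CLI on the replacement node once the old one is shut down (addresses below are examples; ISE restarts its services after the change, so allow time for that):

    configure terminal
     interface GigabitEthernet 0
      ! reuse the decommissioned PSN's address; services restart automatically
      ip address 10.1.1.20 255.255.255.0
      exit
     exit
    ! wait for services to come back up, then verify
    show application status ise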

If you are currently using device registration or guest services, there would be some considerations to account for that depend on the particulars of your deployment.

Your server certificates are also another consideration. As long as you are using CA-issued certificates and allowing the clients to trust them automatically (and haven't, for example, specified particular servers in the 802.1X supplicant configuration) it should be OK. Some more complex issues may arise depending on how you have issued the node certificates (wildcard or other approaches).

Once you've decommissioned the old servers, migrate that deployment's licenses to the new one.

Hi Marvin, old post but useful as I'm facing this scenario now. Can you clarify some items for me please? You mention building all 5 new servers for the deployment (2 x PAN appliances and 3 x PSN VMs), restoring each node from backup with a temporary IP, then decommissioning the current node and changing the IP on the new node. My questions:

1 - Does decommission = shut down the current node and then re-IP the new node to the current one's IP? I take it rollback would just be powering the shut-down node back up?

2 - Viewing this method as similar to recovering from a hardware failure, would the restore-from-backup operation do everything required (minus the IP change) so the node does not need to be registered again with the PAN/AD?

3 - If the PSNs are done one at a time, and then the PANs one at a time too, I believe always having one active PAN holding all the config means it should all just work - or am I thinking too simply?

Thanks in advance for your response!

@n.elms yes/correct for all three questions.

Star man!  Thanks for the quick response.  Much appreciated @Marvin Rhoads!

Hi Marvin, the ISE PAN is the only 'proper' backup created in the deployment, as it holds all the config for the deployment; for the PSNs we have the ADE-OS CLI config backed up.

So am I right in thinking that a fresh ISE install with a 90-day eval license, followed by restoring the ADE-OS CLI config and installing the certs from the node being replaced via the web UI, will be enough for this node to come back into the deployment as the old node it is replacing (which will be shut down)?

As in, does the PAN poll/connect to the node and apply the config/policy, since the PAN drives everything?
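For reference, the ADE-OS CLI config we keep for each PSN is just the node's running config captured from the CLI and saved off-box, e.g.:

    ! covers hostname, interfaces, DNS, NTP, repositories, etc.
    show running-config
    ! note the exact version/patch so the replacement VM can be built to match
    show version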

 

Thanks!

 

@n.elms yes, the Primary Policy Administration Node (PPAN, or PAN as it's commonly called) "drives" everything. When you restore a backup and certificates from the old deployment onto it, it will have everything needed to assume the role of PPAN in a new deployment. To migrate any old nodes to that deployment, you would need to deregister them from the old deployment and then join them to the new one (or just build net-new VMs).

Thanks Marvin. So at present our plan is to do the PSNs first, which is what my query relates to, as there is no PSN backup. I have installed a new VM at the same version level as the current hardware to be replaced. So now I need to down the hardware and get the VM in as its replacement on the same IP/name - what is the best path for this? As mentioned before, it's replicating a hardware-failure scenario and recovering to a VM.

 

 

hslai
Cisco Employee

@n.elms Since ISE 3.2 is our current recommended release, please use ISE 3.2 Patch 3 if at all possible.
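If you end up applying the patch from the CLI, it is a single command once the patch bundle is in a repository (the file and repository names here are placeholders):

    ! installs the patch and restarts ISE services on this node
    patch install ise-patchbundle-3.2-Patch3.tar.gz MIGRATION-REPO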

For ISE policies and elements, a secondary ISE node gets a copy of them from the primary ISE after it is registered to the ISE deployment, restarted, and synchronized, so there is no PSN backup as such. However, for PSNs, please do the following (a short CLI sketch follows the list):

  • Take a note of the services enabled on each PSN.
  • Take a note of the profiling probes enabled on each PSN.
  • Take a note of the node group membership of each PSN.
  • Export the server certificates with the corresponding private keys of each secondary ISE node.
  • Engage the AD team to join the new ISE nodes to Active Directory if needed.
  • Take a note of any other configurations specific to each ISE node, such as IP addresses, DNS, NTP, and static IP host entries.
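A sketch of re-entering the node-specific CLI items from that list on a freshly installed node; every value below is an example:

    configure terminal
     hostname ise-branch-01
     ip name-server 10.0.0.10
     ip default-gateway 10.1.1.1
     ntp server 10.0.0.20
     ! static IP host entry (e.g., for a portal FQDN)
     ip host 10.0.0.60 sponsor.example.com
     exit

Note that hostname and name-server changes cause the ISE application to restart.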

Thanks @hslai, the migration of the install is to facilitate a migration to 3.2 patch 3. 

For the PSN migration, Marvin mentioned we can shut down and bring back a PSN quite simply (like in a failure scenario) - how would you tackle replacing a PSN so we don't have to reconfigure all the switches/wireless, etc.? My thought was to treat it as a failure - so what's the best way to recover a PSN?

The latest our hardware supports is 3.1, so we will migrate the PSNs from SNS-3515 appliances to VMs, and then the whole deployment will be upgraded to 3.1 patch 7. The PPAN and SPAN will be migrated from SNS-3595 to SNS-3755, followed by the final upgrade to 3.2 patch 3.
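For what it's worth, each CLI upgrade hop (to 3.1 and then to 3.2 patch 3) is a two-step operation per node once the upgrade bundle is in a repository; the bundle and repository names below are placeholders, and the usual order is secondary PAN first, then MnT/PSNs, primary PAN last:

    ! stage the upgrade bundle, then run the upgrade on this node
    application upgrade prepare ise-upgradebundle-3.1.x.tar.gz MIGRATION-REPO
    application upgrade proceed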