stuart.pannell
Level 1

Well, this weekend was eventful: my first migration upgrade using PCD (Cisco Prime Collaboration Deployment).

I used PCD version 12.6, installed on the Wednesday ready for the migration. When I first went to use the CLI (a ping to make sure I could resolve and reach the existing CUCM), I was forced to change the password for the admin account, which I did as prompted.

I then went on to discover my existing cluster. This appeared to be a simple process at first: point PCD at the IP of the Publisher, add the administrator username and password and bang! Nicely populated fields with hostname and version... Oh wait, the discovery has failed!

I ran a show version active on the CUCM: no ciscocm.ucmap_platformconfig.cop installed. After a quick Google, it has been suggested in the past to install it manually, so I tried this three times, rebooting the servers each time. All three times it said the install was successful, yet show version active showed no sign of the COP file. I tried the PCD cluster discovery again and it failed; this time I checked the logs and could see "bad username or password". I checked the SFTP user details and they were set correctly, or so I thought. It turns out the CLI admin password change had not changed the adminsftp user password. When the discovery was executed, it used the adminsftp user to install the COP file, but that password was still the original one set for the admin user during the PCD install. I ran a few commands on the PCD CLI to remove password history checking and expiry, which allowed me to change the admin password back to what was used during install and therefore bring the admin and adminsftp users back in sync.
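For anyone hitting the same thing, this is roughly what the check and the password re-sync looked like. The exact set password options differ between versions, so treat this as a sketch rather than gospel:

show version active                       (on the CUCM: look for ciscocm.ucmap_platformconfig.cop in the installed options)

set password history number 0             (PCD CLI: relax history checking so the old password is accepted; assumed syntax)
set password expiry maximum-age disable   (PCD CLI: remove password expiry; syntax may differ by version)
set password user admin                   (PCD CLI: set admin back to the install-time password, re-syncing it with adminsftp)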

I then ran the PCD cluster discovery again and, to my delight, it worked: the 8.6 cluster was fully discovered. This was down to the adminsftp user now having the correct password and installing the ciscocm.ucmap_platformconfig.cop file.

I then added my new ESXi environment; for this we used the root username and password, no problems.

The next step was to configure the migration cluster. First, deploy the CUCM OVAs in the ESXi environment, but do not install any software on them. Next, SFTP the bootable CUCM ISO file to PCD and place it in the fresh_install directory. All good so far. Finally, create the migration task in PCD: use the wizard to select the source cluster (8.6), the destination cluster (12.0) and the bootable ISO file. We changed the hostnames and IP addresses during creation, simple. You can either schedule the task to start, start it automatically after the wizard completes, or leave it and start it manually when ready (this is what I did, as I wanted to check backups and stop EM prior to the upgrade).
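Uploading the ISO is just a standard SFTP session to the PCD server as the adminsftp user, along these lines (the ISO filename below is only a placeholder):

sftp adminsftp@<pcd-ip>
sftp> cd fresh_install
sftp> put UCSInstall_UCOS_12.0.x.iso      (use your actual bootable ISO filename)
sftp> bye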

All DNS records for the new cluster had been added and tested OK.

Now the big migration/upgrade day: check backups, OK; turn off Extension Mobility, OK; start the PCD job, OK.
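If you want to confirm the last DRS backup from the CLI before kicking things off, something like this should do it (output and exact options vary a little between versions):

utils disaster_recovery history backup    (recent backups and whether they succeeded)
utils disaster_recovery status backup     (progress of a backup that is currently running)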

Step 1 completed: export of the current configuration from the 8.6 cluster and shutdown of the physical Publisher server.

Step 2 starts, and the logs show it can take up to 2 hours or more; this stage is the installation of the 12.0 Publisher server. After 3 long hours I checked the server console in the ESXi environment (how I wish I had been watching it): the install had halted with a DNS error saying that the hostname did not match the IP address. Rather than scrolling through the dump file looking for the issue, I had a gut feeling that the install had either the new hostname with the old IP or vice versa. So I cancelled the job, deleted and re-created the OVAs, deleted and re-created the destination cluster, and then re-created the migration job, this time without re-IP'ing or changing the hostnames. I started the migration task and sat back watching the install on the console. All was going well until the DNS check of the Publisher install: although my customer had had the DNS in place for some time, it turned out he hadn't added the original IPs and host details into the reverse lookup zone.
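The lesson: verify both directions for every node before you start. From any workstation (the hostname and IP below are placeholders):

nslookup cucm-pub.example.local           (forward lookup: hostname -> IP)
nslookup 10.10.10.10                      (reverse lookup: IP -> hostname; this was the missing piece)

From the CUCM CLI, utils network host does a similar lookup.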

So I deleted the OVAs again, deleted the destination cluster, deleted the migration job and re-created the whole lot for a third time. At the same time my customer added the original CallManager details into the reverse DNS lookup zone.

I then kicked off the migration task and watched it complete the whole migration:

Step 1: collect configuration data from the source 8.6 cluster

Step 2: collect UFF (user-facing features) data from the source cluster

Step 3: shut down the 8.6 Publisher and build the new virtual 12.0 Publisher

Step 4: shut down the 8.6 Subscriber and build the new virtual 12.0 Subscriber

Step 5: restore the UFF data to the new cluster

Happy days, all complete. Now for the testing.

Everything worked with the exception of EM. This is where I started to panic. Oh wait, there are extra details in the URL for cross-cluster EM (EMCC); I removed these and could then browse to the EM login/logout service on the phone. Tried to log in: failed, error 208. Tried to log out of another phone: failed, error 208. **bleep** it, what was wrong?
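For anyone comparing, the phone service URL differs between standard EM and EMCC by the extra EMCC parameter; the hostname below is a placeholder:

http://cucm-pub:8080/emapp/EMAppServlet?device=#DEVICENAME#               (standard EM)
http://cucm-pub:8080/emapp/EMAppServlet?device=#DEVICENAME#&EMCC=#EMCC#   (cross-cluster EM)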

Well, after a few minutes of Googling I found a post on this community saying that it was probably certificates. I checked the certs and there they were, all expired and still the original 8.6 certs. So I regenerated the CallManager, ipsec, Tomcat and TVS certs on the PUB and SUB, exported the certs and installed them as trust certs on their respective opposites. Rebooted the cluster and bingo! All working brilliantly.
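The regeneration can be done from OS Administration or the CLI; from the CLI it looks roughly like this on each node (unit names are case-sensitive and can vary slightly by version):

show cert own tomcat                      (check the validity dates first)
set cert regen tomcat
set cert regen ipsec
set cert regen CallManager
set cert regen TVS
utils system restart                      (reboot the node once the trust certs have been exchanged)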

Now on day-one support for 240 users, and no issues reported.

 

Happy Days!
