06-27-2016 11:15 PM - edited 03-19-2019 11:18 AM
Hi Folks, I'm trying to upgrade a cluster from 8.6 to v11 but failing - the process fails at the post-install phase with no indication of why. I have loaded all of the required COP files, rebooted the node, and tried to change the VM settings in VMware - RAM and disk space only.
However, I cannot select the second disk in the VM to change the size to 110 GB - it is greyed out. I presume the COP file -
ciscocm.vmware-disk-size-reallocation-1.9.cop.sgn - would allow me to do that, and I can see that it is installed by issuing the command show version active on the 8.6 server.
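For reference, this is roughly what I checked (the version string below is illustrative, not my exact output):

admin:show version active
Active Master Version: 8.6.2.xxxxx-x
Active Version Installed Software Options:
ciscocm.vmware-disk-size-reallocation-1.9.cop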
So, question - any ideas on what I've missed, given that this option isn't available?
Do I need to install the COP files on ALL servers before migrating the individual nodes? I was going to upgrade them one by one to the inactive partition but NOT switch versions yet - see the sketch below.
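In other words, on each node I was planning something like this from the CLI (the prompt wording is paraphrased from memory, and the SFTP details would be our own):

admin:utils system upgrade initiate
(point it at the upgrade ISO, and answer no when asked whether to automatically switch to the new version after the upgrade completes)

Then, once every node is done, run utils system switch-version on each node in the usual order.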
Looking at the release notes again, I notice that they ask to change the subnet mask to 255.255.255.0 - won't this cause confusion, and is it absolutely necessary?
Would doing it with Cisco Prime Collaboration Deployment be much easier?
Thanks.
06-28-2016 12:51 AM
Hi Paul,
You may try reinstalling the disk size reallocation COP file. The following short video provides the exact steps for upgrading CUCM and IM&P from version 8.6 to 11.x; you may check this:
https://supportforums.cisco.com/video/12724161/how-upgrade-cucmcups-86-cucmimp-110
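If it helps, COP files install through the same mechanism as an upgrade, so a rough sketch of the reinstall from the CLI would be (the server and directory below are made-up placeholders):

admin:utils system upgrade initiate
Source: Remote Filesystem via SFTP
Directory: /upgrades
Server: 10.10.10.10
...then select ciscocm.vmware-disk-size-reallocation-1.9.cop.sgn from the list of files it finds.

The same can be done from Cisco Unified OS Administration > Software Upgrades > Install/Upgrade.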
HTH
Manish
06-28-2016 12:57 AM
Hi Manish, yes, I will have to do that. Jamie is my saviour when it comes to explaining things in a nice, straightforward way.
06-28-2016 08:29 AM
First time I've been called saviour LOL
You should be able to change the HDD as I show in the video. As for PCD, for an upgrade it would not really make things easier - it's going to be 90% the same. The only difference is that PCD will load the ISO to the VM and kick-start the upgrade process.
If you're doing a migration, yes, it would help, but if you're going to use the same hostname/IP, you'd need to stage a lab environment in which to do that, which also takes time and effort.
06-28-2016 08:54 AM
Hi Jamie, I am using the same IP addresses and hostnames as the v8.6 servers. This cluster is already on UCS servers. I'm a bit confused as to why I need to do this in a lab. Can't I just upgrade to the inactive partition, or would it be best to define new blank VMs and migrate to them?
Regards.
06-28-2016 08:59 AM
I was referring to using PCD for a migration, not for a regular in-place upgrade.
06-30-2016 04:53 PM
Hi Jamie, I managed to solve the hard disk change error - the customer had left snapshots on the VMware side :( . The upgrade still fails, but one question: the guide says to change the servers to a 24-bit mask - is this really necessary?
Thanks
Paul
07-01-2016 01:45 PM
Explain to the customer that snapshots are not supported on CUCM; they could have avoided a headache by not taking them.
I believe the subnet mask note is related to this bug:
https://bst.cloudapps.cisco.com/bugsearch/bug/CSCub72346
https://bst.cloudapps.cisco.com/bugsearch/bug/CSCtt17619
The root problem is that you *could* have configured the mask as 255.255.255.000, and that was a valid entry in RHEL 4, but in RHEL 5+ it's no longer a valid option, and it will cause trouble. I don't think it's necessarily telling you that you need to change the mask to a /24 - I use a /25 in my lab and have never had any problem - but I cannot input 000 in the last octet.
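If you want to check what a node currently has, and fix it if needed, something along these lines should work from the CLI (the addresses are placeholders, and be aware that changing the IP settings will restart networking on the node, so do it in a maintenance window):

admin:show network eth0
admin:set network ip eth0 10.10.10.10 255.255.255.0 10.10.10.1

The point is simply that the mask must be entered as 255.255.255.0, not 255.255.255.000.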
07-04-2016 01:38 AM
Thanks for that Jaime. Will review and hopefully upgrade this week.
Paul