CUCM 11.5.1 upgrade to 14.1SU3 preUpgrade Failed check 1.6

Question to the community out there: has anyone done this or come across this before?

Running the preUpgrade COP file on my CUCM publisher, which is still on 11.5, I get the following failure.

Question: it seems these checks only apply if you go to version 15, so an upgrade to 14SU3 should still succeed? The HDD is already 110 GB.

Best practice would obviously be to follow the suggestions below, but just to get the system back on a supported version, would a 14SU3 upgrade work for now, with a rebuild planned later to accommodate a future version 15 upgrade?

1.6 FAIL OS and Infrastructure Checks

FAIL: This server is using an older filesystem type (ext3).
To resolve the installation or upgrade failure, either
- backup-reinstall-restore your source release in a new virtual machine
with 110GB virtual hard disk, then direct upgrade to 15.
- Use a direct migration option to get to 15 (Refer the Upgrade and
Migration Guide for Cisco Unified Communications Manager and the IM and
Presence Service on Cisco.com).

FAIL: Source release only has swap space of 2 GB, which is not enough for
destination release 15.
To resolve the installation or upgrade failure, either
- backup-reinstall-restore your source release in a new virtual machine
with 110GB virtual hard disk, then direct upgrade to 15.
Note:Growing to 110GB in existing virtual machine will result in upgrade
failure to 15, use new virtual machine with 110GB.
- Use a direct migration option to get to 15 (Refer the Upgrade and
Migration Guide for Cisco Unified Communications Manager and the IM and
Presence Service on Cisco.com).

 

Best Regards

10 Replies

You can safely ignore these warnings; as you say, they are for an upgrade to v15.

Maybe I'll run an older pre-upgrade COP file today and see what I get.

Best Regards

Same result on the pre-upgrade 0038 file as on the pre-upgrade 0043 file: all version 15 warnings.

Best Regards

I have a customer I need to upgrade from 11.5(1)SU11 to v15, and there is no refresh/L2 upgrade path to get there, so the Data Migration or Data Export/Import method is the only supported one. I was like you: I thought that as long as the disk is 110 GB it should be good. But more and more I am seeing that the Data Export is really the surefire way to upgrade and to make sure everything is built/rebuilt correctly. The pre-check is still important to make sure everything else is ready for the Data Export.

You'd need to go all the way back to a version of the pre-check COP file that predates any v15 checks if you don't want to see warnings related to this. That said, as I wrote, you can safely disregard the warnings since you're upgrading to v14.

However, at some point you'll need to rebuild your system to match the requirements of v15 so that you can upgrade to it. We are currently in that process, although we're still on v14. Our plan is to create new VMs based on the v15 OVA, change the guest OS on each created VM to CentOS 7 (64-bit), install the version of CM 14 we are currently running, and as a last step do a DRS restore of that one node, then repeat the process for all the other nodes. We know there are other options at hand, but they are not something we have used before, so we are not comfortable with them.
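
If it helps, the DRS restore step can also be driven from the CLI of the freshly installed node instead of the DRS web page. A rough sketch, with every value a placeholder and the syntax paraphrased from memory, so verify it against the Command Line Interface Reference for your release:

utils disaster_recovery restore network <node-to-restore> <backup-tar-id> <path-on-server> <sftp-server> <sftp-user>
utils disaster_recovery status restore

The first command starts the restore of a single node from the SFTP backup device; the second can be repeated to poll progress until the restore completes.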



Just an FYI, I did a CUCM 11.5SU7 --> 14.0SU3 upgrade successfully in my lab even though I got the above fail message from the preUpgrade COP file. Then I went on and upgraded the same CUCM to the latest CM15 SU successfully last night, no funnies, no errors; I only had to change the RAM from 8192 MB to 10 GB and the VM guest OS from CentOS 7 (64-bit) to Linux Other 4.x (64-bit). The upgrade took a while, but the switch version took about 5 to 8 minutes, granted the DB is basically empty with only a couple of phones registered to it.
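
For anyone who would rather script those VM changes than click through vSphere, a rough sketch using govc from the govmomi project (the VM name CUCM-PUB and the guest ID string are placeholders; double-check the guest OS identifier your vSphere version exposes):

# power off before changing virtual hardware
govc vm.power -off CUCM-PUB
# set 10 GB of RAM and the Linux Other 4.x (64-bit) guest OS type
govc vm.change -vm CUCM-PUB -m 10240 -g other4xLinux64Guest
govc vm.power -on CUCM-PUB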

Best Regards

Michael.Heimann

Hi, I tried to do the same (upgrade 14SU3 to 15SU1) and the upgrade failed for me; the upgrade log points at ext3 and swap.
Since a backup and restore of the whole cluster is quite time-consuming, I went another route: I converted ext3 to ext4 and increased the swap by 2031 MB with an Ubuntu desktop live CD.
To increase swap: resize the common partition (which sits after the swap partition) with GParted so that 2031 MB become free, then grow the swap partition by that amount.
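
Before resizing anything, it's worth confirming the layout from the live CD. A quick sketch, assuming the CUCM disk shows up as /dev/sda (adjust to whatever your live CD actually sees):

sudo parted /dev/sda unit MiB print   # partition table with sizes, including swap
sudo lsblk -f                         # filesystem type (ext3 vs ext4) per partition

Running the same parted command again after the GParted resize confirms the swap partition picked up the extra space.
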
To convert ext3 to ext4, simply start a terminal and run

sudo tune2fs -O extents,uninit_bg,dir_index /dev/sdXY 

where sdXY is sda6 if that's the partition you want to convert. Remember to also change the /etc/fstab entry for that partition from ext3 to ext4.
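
One follow-up step worth adding here (standard for any ext3-to-ext4 conversion, not specific to this thread): after enabling those features, tune2fs asks you to check the filesystem before it will mount cleanly, so run e2fsck on the same partition:

sudo e2fsck -fD /dev/sdXY   # -f forces the check, -D rebuilds the directory indexes
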
I needed to convert both the common partition and the inactive partition to ext4, since the upgrade verifies common and then reformats and mounts the inactive partition as ext4. If either step doesn't work, the upgrade stops.
If you don't know whether /partB or / is your current active partition, you can just leave it as ext3, and the upgrade log will explicitly say which partition couldn't be mounted. Also: if your switch version fails, it'll probably boot into a state where it says a partition couldn't be mounted; here too, you need to convert that partition from ext3 to ext4.

All in all, much faster than a reinstall. I would recommend taking a snapshot beforehand, though.


 

All of this would render your system ineligible for TAC support, so I would not recommend this approach. Yes, a rebuild and restore is time-consuming, but in all honesty it doesn't take more than around 3, maybe 4 hours per cluster node. We just completed this for two 5-node clusters, and by doing it node by node sequentially, spread out over different days, we finished within a week and a half.



The procedure is not TAC supported as it's not documented in any Cisco guide, sure. Making an upgrade ISO bootable isn't either.
But I do not see why it should render the system ineligible for TAC support. If done correctly, it leaves the system in the exact same state as a rebuild (if that was done properly, too).
I would go further and say your average TAC engineer is not even trained to recognize that something like this was done. Also, TAC looks at the problem at hand, not at whether the system was touched in any way not officially documented.

Technically this is a shortcut that saves at least tens of hours while getting to the same result as a rebuild. Admittedly it's one that needs some Linux skills (GParted, fstab and mount should be no strangers), but if you have the skills, why not save the customer money?

And a rebuild is also something not everybody is happy doing, for various reasons.

I just wanted to share a way that saved me loads of time and that I would have been happy to find myself.

I do wonder, though, why Cisco doesn't just do that as part of their upgrade scripts. I think it's sadly part of the "spend the least effort on on-prem" mindset that I've been noticing lately.

That might be, and you're welcome to your opinion on this; it is after all a free world we live in. But I wanted others reading your post to understand that what you suggest does put the system in a non-supportable state if it is ever discovered by a Cisco representative. True, the majority of TAC engineers might not notice this, but that doesn't make it any more supportable; it's just not detected at that specific point in time, which doesn't mean it won't be detected at a later stage by someone else. At the end of the day, the suggestion made is not supported by Cisco, and I'd advise anyone against using it.


