Thanks for the encouragement. Your feedback definitely makes a difference in understanding the quality of the current material & how we can improve further.
I have one quick question: if I change the IP address and hostname during the process, how do you do the bulk certificate process? It would be great if you had a video for that.
That was really informative. Now I do have a question: the customer is currently running version 8.6 on a UCS platform with the 8.6 OVF. How can we bring this entire cluster to version 10.5 on a 10.x OVF?
Here we don't have enough space to deploy the 10.x OVF in parallel and use PCD to follow the process explained.
Just a clarification, considering the above scenario:
1. Upgrade the existing 8.6 version to 10.5 on the same OVF, on all Publishers and Subscribers. Once the upgrade is done, take a complete DB & license backup.
2. Make sure all services run on the Publishers, then delete all Subscribers.
3. Install the new 10.x OVF assigned to a new virtual switch, connect this virtual switch to a standalone network, and install all required Publishers (CUCM, CUXN, UCCX).
4. Once this is done, shut down the existing 10.5 Publishers running on the 8.6 OVF.
5. Put the new OVF with 10.5 installed back on the production virtual switch, and check all services and phone registrations.
6. Once confirmed, upload all the required 10.x OVFs, install the Subscribers, and restore from backup.
Will this approach create issues with the phones' CTL files?
In the scenario explained above there is no change in the network, TFTP servers, or hostnames.
I would also like your input on whether there is a way we can keep the same 8.6 OVF and just increase the VM specifications to match the 10.5 requirements.
As per the design, there is no way to adjust the 8.6 OVF template to match the requirements of the 10.x OVF template.
The only problem I suspect in the manual procedure you mentioned above is that during the installation of the fresh CUCM 10.x connected to the new virtual switch, it will try to reach the default gateway, DNS, NTP, etc., and they must be reachable for the installation to complete.
One thing to note: it is not a mandatory requirement to have PCD running on the same physical server as the existing cluster. All it needs is network connectivity.
Here the situation is that I have 3 UCS C210 servers: one UCS with 3 VMs and the other two with 4 VMs each. We have a CUCM Publisher & 4 Subscribers, CUXN with HA, UCCX with HA, and IM&P with HA.
All are running version 9.1.2 on the 8.6 OVF. The requirement is to upgrade all applications to 10.5 with the 10.x OVF. How can we achieve this on production using PCD?
We don't have enough vCPUs and memory to deploy the 10.x OVF in parallel; the existing VMs have to be deleted.
I have a concern about the upgrade of the UC applications from version 8.6 to 10.5 on the same UCS platform.
Customer's current scenario:
3 UCS C210 servers, with the 8.6 OVF deployed and the UC applications running version 8.6. Free cores available: 2 cores on one server and 1 core on each of the other two.
We need to deploy the 10.5 OVF and migrate all UC applications to 10.5 without any issues with the ITL files on the phones, since the customer has 2000 phones, 25 of which are in different international locations and connect over the Internet using phone VPN.
Do you have any test case for the same scenario explained above? If so, please do share.
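A quick way to sanity-check whether a parallel deployment is even possible is to total the free cores against what the new 10.x VMs would reserve. A minimal sketch of that arithmetic, where the per-VM vCPU figures are illustrative assumptions (check the actual 10.x OVA specs), not values from this thread:

```python
# Rough capacity check: can the free cores host the new 10.x VMs in parallel?
# Free cores are taken from the scenario above; vCPU reservations are assumed.

free_cores = {"ucs1": 2, "ucs2": 1, "ucs3": 1}

# Hypothetical vCPU reservations for a parallel 10.x build-out of the cluster.
required_vcpus = {
    "cucm_pub": 2,
    "cucm_sub1": 2, "cucm_sub2": 2, "cucm_sub3": 2, "cucm_sub4": 2,
    "cuxn_pub": 2, "cuxn_sub": 2,
    "uccx_pub": 2, "uccx_sub": 2,
    "imp_pub": 2, "imp_sub": 2,
}

total_free = sum(free_cores.values())
total_needed = sum(required_vcpus.values())

print(f"free: {total_free} cores, needed: {total_needed} vCPUs")
if total_needed > total_free:
    print("parallel deployment will not fit; existing VMs must be removed first")
```

Even with generous assumptions, 4 free cores cannot carry an 11-VM parallel build, which is why the delete-and-reinstall sequence comes up at all.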
Even if you do the migration with an IP address change, the private & public keys will be copied to the new server (CUCM 10.x), so you shouldn't run into any certificate issues and the phones will register seamlessly.
I have already tested this in my lab, but it would be good if you could test exactly what you are planning to do in a lab before performing the action on a production device.
Hope that is helpful.
Thanks & Regards
Raees Shaikh
P.S.: Please rate the post if you find it helpful.
In that case I would suggest you perform the upgrade in a maintenance window. As described before, once the server is successfully upgraded to CUCM 10.5, you need to take a backup, create a VM using the CUCM 10.5 OVA, and perform a restore. You need to perform this procedure starting with the Publisher.
While you perform the upgrade/install, the server will try to contact the default gateway, DNS, etc., so it is good to have them connected to the actual network.
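Before kicking off an install on an isolated virtual switch, it's worth confirming that the services the installer will contact are actually reachable from that network. A minimal sketch (the addresses in the usage comment are placeholders, not real hosts):

```python
# Pre-install reachability checks: DNS resolution and TCP connectivity.
import socket


def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def dns_resolves(name: str) -> bool:
    """Return True if the configured resolver can resolve the given name."""
    try:
        socket.gethostbyname(name)
        return True
    except OSError:
        return False


# Usage (placeholder addresses):
#   dns_resolves("cucm-pub.example.com")   # DNS server answering?
#   tcp_reachable("10.0.0.1", 22)          # default gateway path usable?
```

Note that NTP runs over UDP, so a TCP probe won't verify it; this only covers the checks that are easy to do with the standard library.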
Regarding certificates, the private & public keys will be copied over, so you will not run into certificate issues. However, before proceeding with any of the above, I would highly recommend taking a backup of the existing CUCM 8.6 cluster.
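Since the phones trust the server through the certificate hashes in their ITL/CTL files, one simple post-restore sanity check is to confirm that the server certificate fingerprint is unchanged. A minimal sketch, assuming you can reach the node's HTTPS port from your workstation (the hostname and port in the usage comment are placeholders):

```python
# Compare a server's TLS certificate fingerprint before and after migration.
import hashlib
import socket
import ssl


def cert_sha1(host: str, port: int = 8443, timeout: float = 5.0) -> str:
    """Fetch the server's TLS certificate in DER form and return its SHA-1 fingerprint."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only want the raw cert, not chain validation
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha1(der).hexdigest()


def fingerprints_match(fp_before: str, fp_after: str) -> bool:
    """Compare two hex fingerprints case-insensitively."""
    return fp_before.lower() == fp_after.lower()


# Usage (placeholder hostname):
#   before = cert_sha1("cucm-pub.example.com")  # record before migration
#   after = cert_sha1("cucm-pub.example.com")   # check after restore
#   if not fingerprints_match(before, after):
#       print("certificate changed; expect ITL trust issues on the phones")
```

If the keys were copied over as described, the two fingerprints should match and the phones' existing trust list remains valid.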
That was a good video. I wonder if it's possible to watch it in better quality; there is no option to bump it up to 720p or so, as you get on YouTube.
I am wondering if you can advise on my questions below.
We have the following applications running on MCS servers at the moment:
1. CUCM 8.5.1 Pub and Sub
2. CUC and UCCX, both on 8.5.1.
We have ordered two BE6K servers to migrate the above applications to release 10. I am planning to use PCD for this and have a few questions:
1. We are not running CUPS and have no plans to run IM&P. Does that mean I don't need to create an IM&P OVA for this migration?
2. PCD doesn't support CUC and UCCX migration at the moment. What is the best approach to migrate them across to the BE6Ks with minimum downtime?
3. For licensing, can I install PLM 10 on the BE6K in advance of the migration? Having other VMs running on the BE6K shouldn't affect the migration, should it?
4. We do have VMware Foundation licensing ordered with the BE6K servers, so I hope PCD works fine there?
5. I can see that PCD 10.5 is available. Can I use it to migrate from 8.5.1 to 10.5 directly? The release notes say the following:
For a migration from Release 10.0(1) to Release 10.5(1) to be successful, you must upgrade all source cluster Cisco Unified Communications Manager nodes to 10.0(1)SU1, at a minimum, before using Cisco Prime Collaboration Release 10.5(1) to perform a migration.
Does that mean that PCD 10.5 is only for migrations from 10.0 to 10.5?
6. I believe PCD doesn't use DRS at any point, so 8.5.1 to 10 shouldn't be affected by CSCtn50405?