03-23-2018 07:10 AM - edited 03-17-2019 12:28 PM
Hello,
I followed the linked guide to rebuild my CUCM publisher (pub-cucm). These were my steps (a rough command sketch follows them):
-----
step 1. utils dbreplication stop >> to stop DB sync (run on the subs)
step 2. Reinstall the pub node and configure the CUCM server list.
step 3. Restart the pub node.
-----
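For reference, a rough sketch of the CLI commands behind these steps (command names are from the standard CUCM admin CLI; please verify the exact syntax on your release):
=====
(on each sub node, stop automatic replication setup)
admin:utils dbreplication stop

(then verify the current replication state from the same node)
admin:utils dbreplication runtimestate

(on the rebuilt pub node, after configuring the server list, restart it)
admin:utils system restart
=====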
When I restarted the new pub node, I found that all the sub nodes still synced their DB data from the pub node, so all of the configuration was wiped... Orz...
I had already entered the dbreplication stop command on every sub node. Why didn't it work?
Did I do something wrong in my process?
This is the log from a sub node when stopping dbreplication:
===========
admin:utils dbreplication stop
********************************************************************************************
This command will delete the marker file(s) so that automatic replication setup is stopped
It will also stop any replication setup currently executing
********************************************************************************************
Deleted the marker file, auto replication setup is stopped
Service Manager is running
Commanded Out of Service
A Cisco DB Replicator[NOTRUNNING]
Service Manager is running
A Cisco DB Replicator[STARTED]
Completed replication process cleanup
Please run the command 'utils dbreplication runtimestate' and make sure all nodes are
RPC reachable before a replication reset is executed
admin:
admin:
admin:utils dbreplication runtimestate
DB and Replication Services: ALL RUNNING
Cluster Replication State: Only available on the PUB
DB Version: ccm8_6_2_23900_10
Number of replicated tables: 541
Cluster Detailed View from SUB (3 Servers):
PING REPLICATION REPL. DBver& REPL. REPLICATION SETUP
SERVER-NAME IP ADDRESS (msec) RPC? STATUS QUEUE TABLES LOOP? (RTMT)
----------- ------------ ------ ---- ----------- ----- ------- ----- -----------------
TVSC1CCM02 10.208.2.12 0.283 Yes Connected 0 match Yes (3)
TVSC1CCM01 10.208.2.11 Failed No Active-Dropped 415998 ? No (?)
TVSC1CCM03 10.208.2.13 0.039 Yes Connected 0 match Yes (3)
=========
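The log above already gives the key hint: all nodes must be RPC reachable before any replication reset. A minimal check sequence, assuming the standard CUCM CLI commands, run from a sub node once the rebuilt pub is back online:
=====
(confirm the sub still lists the rebuilt pub in its cluster view)
admin:show network cluster

(basic network/connectivity check toward the publisher)
admin:utils network connectivity

(the pub's row must show RPC = Yes, not Active-Dropped, before a reset)
admin:utils dbreplication runtimestate

(only then, from the publisher, rebuild replication)
admin:utils dbreplication reset all
=====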
Did I miss a step, or do something wrong?
Thanks~
03-27-2018 09:06 PM
Hello,
I did it successfully...
On a CUCM 8.6 lab:
1. Stop dbreplication on all sub nodes.
2. When the new pub node has all of the cluster servers added back, every sub node's phone information and configuration gets wiped, just like on the new pub node....
3. But don't worry about that; just run a backup and restore with DRS.
4. When the restore of the sub DB finishes successfully, the DB is OK again ~ it restores the data from the original sub node! (A sketch of the related CLI checks is below.)
So this process cannot be run in a production environment without an outage; downtime is about 30 minutes.
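For anyone repeating this in a lab, a minimal sketch of the CLI checks around the DRS step (the backup and restore themselves are normally driven from the Disaster Recovery System web page; command names assumed from the standard CUCM CLI):
=====
(watch the DRS job progress from the CLI)
admin:utils disaster_recovery status backup
admin:utils disaster_recovery status restore

(after the restore and a cluster restart, confirm replication from the pub)
admin:utils dbreplication runtimestate
=====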