I've been asked to check whether a CUCM cluster can be configured to handle database (db) replication robustly.
For example, suppose that two subscribers on remote sites were disconnected from the rest of the network. After a few weeks (or months), this connection is restored. I'd like for the db replication to be performed automatically and as fast as possible.
1) Can this be fine tuned?
2) Are the intervals and limitations of dbreplication documented anywhere?
3) Can a site which is down for several weeks or months automatically sync with the publisher and other subscribers?
I mean, theoretically there is no maximum period a subscriber can be isolated from the publisher; the isolated sub will keep working with its local DB for the duration that it's isolated. That said, CUCM clusters are designed to have continuous connectivity between all nodes, with very predictable round-trip times.
What is it you are trying to achieve?
If the dbreplication queue fills, the replication agreement will be dropped, and a reset will be required to recover replication. This is not configurable and, as Dennis indicated, the product is designed/expected to have continuous connectivity amongst all cluster nodes. If a site may plausibly become isolated on a regular or prolonged basis, it should have a dedicated local CUCM cluster.
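For reference, a typical recovery sequence after a prolonged outage might look like the following, run from the publisher's admin CLI. This is a sketch, not a definitive procedure; verify the exact commands and their behavior against the documentation for your CUCM version before running them in production.

```
! Check current replication state on all nodes
! (look for Replication Setup state 2 = good on each node)
utils dbreplication runtimestate

! If the agreement was dropped while the site was isolated,
! reset replication cluster-wide from the publisher
utils dbreplication reset all

! After the reset completes, re-check status; a full reset
! can take a long time on large clusters
utils dbreplication status
```

A reset rebuilds the replication agreements from scratch, which is why it is slow; this is the manual step you'd be forced into if the queue overflowed during the outage.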
I'm interested in providing a unified database via a publisher + subscribers deployment rather than a publisher + several CMEs deployment. The topology is a standard hub and spoke one where the publisher is at the hub.
The thing is that the spokes may be disconnected for long stretches of time, and when they regain connectivity I'd like them to reestablish replication with the other nodes. Configuring a CME for each spoke, with dial-peers towards the publisher and other spokes, is a lot more administrative overhead and could likely involve more human error.
If there is no practical method of doing this with a CUCM cluster, I'm wondering what would be the suggested architecture to ease administrative burden:
1) CMEs at the spokes, or
2) Independent publishers at each spoke, with ILS + GDPR to publish directory numbers
CUCM clusters are definitely not intended to function the way you wish they did. It's difficult to make significant design recommendations based on the limited information in the forums; however, my inclination would be toward local CUCM instances at each site instead of CME. This would give you a common platform to provision, monitor, and maintain, with consistent feature functionality and a single administrative skill set. CUCM has always struggled to scale down to small sites because of the x86 server requirements, though you didn't say how small these sites are. A specs-based deployment model may relieve some of that cost, though.
As for ILS/GDPR, again, that feature assumes reliable site connectivity to replicate. You will want to test its tolerance to connectivity loss; note that the PSTN fallback method relies on AAR, which itself requires stable WAN connectivity and an active denial by CAC to trigger a reroute. That won't happen if the site is isolated/offline. You will likely need to rely on a +E.164 globalized numbering plan with classic Route Patterns and Route Lists that provide local PSTN egress, perhaps via an LRG (Local Route Group), if the inter-cluster trunk is down.