04-04-2024 09:04 AM
I have a question regarding a Cisco DNAC HA setup with 3 nodes in a cluster.
Do the 3 nodes share compute, RAM, and storage capacity? Or is only one node working at a time?
Thank you in advance
04-04-2024 09:36 AM
Catalyst Center runs Kubernetes under the covers. Kubernetes schedules and distributes the various workloads/services across the nodes, which lets you utilize the compute capacity of two full nodes. So all nodes are working at the same time; they're just not running the exact same services at all times.
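If it helps to picture that distribution, here's a toy Python sketch (the service names are made up, and real Kubernetes scheduling weighs resource requests, affinity rules, and much more):

```python
from itertools import cycle

# Toy illustration: spread services across cluster nodes round-robin.
# Service names are hypothetical; real scheduling is far more involved.
nodes = ["node1", "node2", "node3"]
services = ["ui", "api", "postgres", "mongodb", "telemetry", "assurance"]

placement = {svc: node for svc, node in zip(services, cycle(nodes))}
for svc, node in placement.items():
    print(f"{svc} -> {node}")

# Every node ends up running services, but no two nodes run the exact
# same set -- which is the point above.
```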
04-04-2024 10:32 AM
Thank you for your reply.
So is it that the number of devices supported by DNAC is multiplied x2 when we have a 3-node HA setup?
I thought that only one DNAC was working at a time and the information was replicated; when the primary DNAC goes down, the secondary takes over.
04-04-2024 11:16 AM
I believe the answer is yes in terms of actual capacity, but the data sheet only explicitly says so for the XL node size. You should plan according to what is listed in the data sheet for your cluster to be supported.
04-08-2024 10:03 AM
No, the scale limit is not increased from a single-node cluster to a three-node cluster. The scale limit remains the same unless you are using three DN2-APL-HW-XL appliances.
As mentioned, all three nodes are operating at the same time. The difference is that HA allows us to distribute the workload of the services among the three nodes and replicate the information for stateful services/databases, such as postgres, across the cluster.

Each stateful service has a primary instance running on a single node and secondary/standby instances running on the other nodes. But the primary instances of the different stateful services may not all reside on the same node. For instance, postgres-0 may be the primary and postgres-1/2 the secondary instances. Postgres-0 may be on node 1, meaning node 1 is running the primary instance. Then we have MongoDB, which also has three instances. Mongodb-1 may be the primary and mongodb-0/2 the secondaries. Mongodb-1 may reside on node 2, making node 2 the primary instance holder for that service.
I state all of this to show an example of how the nodes work together to keep the cluster functioning. In the event that one node fails, the stateless services should redistribute to another working node, while the stateful services go into a PENDING or NODE_LOST state. Then, if the node that went down held a primary instance, one of the secondary instances should be promoted to primary and take over write privileges.
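To picture that promotion step, here's a purely illustrative Python sketch using the example instances above (this is not Catalyst Center code; the real failover is handled by Kubernetes and the services themselves):

```python
# Toy model of stateful-service failover in a 3-node cluster.
# Instance/node layout matches the postgres/mongodb example above.
SERVICES = {
    # instance -> (node, role)
    "postgres-0": ("node1", "primary"),
    "postgres-1": ("node2", "secondary"),
    "postgres-2": ("node3", "secondary"),
    "mongodb-0":  ("node1", "secondary"),
    "mongodb-1":  ("node2", "primary"),
    "mongodb-2":  ("node3", "secondary"),
}

def fail_node(services, failed_node):
    """Mark instances on the failed node NODE_LOST; if a primary was
    lost, promote one of that service's secondaries to primary."""
    lost_primaries = set()
    for name, (node, role) in services.items():
        if node == failed_node:
            services[name] = (node, "NODE_LOST")
            if role == "primary":
                lost_primaries.add(name.rsplit("-", 1)[0])
    for name, (node, role) in services.items():
        base = name.rsplit("-", 1)[0]
        if base in lost_primaries and role == "secondary":
            services[name] = (node, "primary")  # takes over writes
            lost_primaries.discard(base)
    return services

# Node 2 fails: mongodb-1 (the primary) is lost, so one of the mongodb
# secondaries is promoted; postgres-0 on node 1 is unaffected.
for name, state in fail_node(dict(SERVICES), "node2").items():
    print(name, state)
```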
04-08-2024 11:13 AM - edited 04-08-2024 11:19 AM
Nice summary @maflesch!
What makes the XL appliance clusters different from others in terms of scaling?