
Cisco DNAC 3 HA setup

icarimo
Level 1

I have a question regarding a Cisco DNAC HA setup with 3 nodes in a cluster.

Do the 3 nodes share compute, RAM, and storage capacity? Or is only one node working at a time?

Thank you in advance

 

5 Replies

Torbjørn
Spotlight

Catalyst Center runs Kubernetes under the covers. It schedules/distributes the various workloads/services across the nodes, which lets you utilize the compute capacity of two full nodes. So all nodes are working at the same time; they're just not all running the exact same services at all times.
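As an illustration of "all nodes working, but not running the same services," here is a toy round-robin placement model. The node and service names are made up for the example; Catalyst Center's actual scheduler is internal to the product and far more sophisticated:

```python
# Toy model: spread service instances round-robin across a 3-node cluster.
# Node/service names are illustrative only, not Catalyst Center internals.
from collections import defaultdict
from itertools import cycle

nodes = ["node-1", "node-2", "node-3"]
services = ["postgres", "mongodb", "rabbitmq", "nfs", "glusterfs", "identity"]

placement = defaultdict(list)
for service, node in zip(services, cycle(nodes)):
    placement[node].append(service)

# Every node ends up carrying part of the load, but no two nodes
# run an identical set of services.
for node in nodes:
    print(node, placement[node])
```

The point of the sketch is simply that capacity is aggregated across the cluster rather than sitting idle on "standby" nodes.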

Happy to help! Please mark as helpful/solution if applicable.
Get in touch: https://torbjorn.dev

Thank you for your reply.

So is the number of devices supported by DNAC multiplied by 2 when we have a 3-node HA setup?

I thought that only one DNAC was working at a time and the information was replicated, so that when the primary DNAC goes down the secondary takes over.


I believe the answer is yes in terms of actual capacity, but the data sheet only explicitly says so for the XL node size. You should plan according to what is listed in the data sheet for your cluster to be supported.

https://www.cisco.com/c/en/us/products/collateral/cloud-systems-management/dna-center/nb-06-dna-center-data-sheet-cte-en.html#Appliance scale

Happy to help! Please mark as helpful/solution if applicable.
Get in touch: https://torbjorn.dev

No, the scale limit is not increased from a single-node cluster to a three-node cluster. The scale limit remains the same unless you are using three DN2-APL-HW-XL appliances. 

As mentioned, all three nodes are operating at the same time. The difference HA makes is that it lets us distribute the workload of the services among the three nodes and replicate the data of stateful services/databases, such as postgres, across the cluster.

Each stateful service has a primary instance running on one node and secondary/standby instances running on the other nodes, but the primary instances of the different stateful services may not all reside on the same node. For instance, postgres-0 may be the primary, with postgres-1/2 as the secondary instances; if postgres-0 is on node 1, then node 1 runs the postgres primary. Then we have mongodb, which also has three instances. mongodb-1 may be the primary, with mongodb-0/2 as the secondaries; if mongodb-1 resides on node 2, then node 2 holds the mongodb primary.

I state all of this to show how the nodes work together to keep the cluster functioning. In the event that one node fails, the stateless services should redistribute to another working node, while the stateful service instances on the failed node go into a PENDING or NODE_LOST state. Then, if the node that went down held a primary instance, one of the secondary instances should be promoted to primary and take over write privileges.
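The promotion behaviour described above can be sketched as a toy model. The instance and node names are illustrative only; this is not the product's orchestration code, just a way to visualize "primary lost on a failed node → surviving secondary promoted":

```python
# Toy failover model: each stateful service has one primary instance and
# two secondaries spread across the nodes. When the node holding a primary
# fails, a surviving secondary of that service is promoted.
# All names below are illustrative, not Catalyst Center internals.

# instance name -> (node it runs on, role)
cluster = {
    "postgres-0": ("node-1", "primary"),
    "postgres-1": ("node-2", "secondary"),
    "postgres-2": ("node-3", "secondary"),
    "mongodb-0":  ("node-1", "secondary"),
    "mongodb-1":  ("node-2", "primary"),
    "mongodb-2":  ("node-3", "secondary"),
}

def fail_node(cluster, dead_node):
    """Drop instances on the failed node; promote a secondary for any
    service whose primary lived on that node."""
    survivors = {i: v for i, v in cluster.items() if v[0] != dead_node}
    for inst, (node, role) in cluster.items():
        if node == dead_node and role == "primary":
            service = inst.rsplit("-", 1)[0]
            # promote the first surviving secondary of the same service
            for cand, (cnode, crole) in sorted(survivors.items()):
                if cand.startswith(service + "-") and crole == "secondary":
                    survivors[cand] = (cnode, "primary")
                    break
    return survivors

after = fail_node(cluster, "node-1")
# postgres lost its primary (postgres-0 on node-1), so postgres-1 is
# promoted; mongodb's primary (mongodb-1 on node-2) is unaffected.
for inst, (node, role) in sorted(after.items()):
    print(f"{inst} on {node}: {role}")
```

Note that in this sketch the mongodb secondary on node-1 is simply lost with no promotion needed, which matches the thread's point: a node failure only triggers promotion for services whose *primary* lived on the failed node.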

Nice summary @maflesch!

What makes the XL appliance clusters different from others in terms of scaling?

Happy to help! Please mark as helpful/solution if applicable.
Get in touch: https://torbjorn.dev