
How to view ALL of Maglev Configuration without DNAC Shutdown

adamruncie
Level 1

Hi all,

I am working on scoping a DNA Center migration project for a customer.  Unfortunately, the customer's previous network engineer did not document their maglev configuration very well, so I need to log in to the DNA Center appliance's maglev shell to view the configuration myself.  I know that I can use the command "sudo maglev-config update" to essentially re-launch the maglev configuration wizard and view how each interface is configured.  HOWEVER, before it gets to the remaining configuration (most notably, the screen where you enter the virtual IP address each interface is supposed to use for the cluster), I get this screen:

[Screenshot: Wizard Controller Shutdown Screen.jpg]

If I am understanding this correctly, I would have to take the DNAC offline just to SEE what the maglev configuration is?  I don't intend to make ANY changes at all to this configuration on the existing production appliance; I just need to get in and see/take screenshots of how it is configured so I can duplicate the new install properly.  Is there a command to do this?  Unfortunately, the customer is a hospital, and internal policies dictate that any reboot/restart/takedown of any network monitoring equipment has to go through a whole CAB review and a maintenance window.  So... you can imagine I'm trying to find a way not to have to submit a bunch of paperwork and get up in the middle of the night just to view how something is already configured.

Thanks in advance for the information!


4 Replies

Tomas de Leon
Cisco Employee

Another way to gather the networking information is to use the APIs with a REST client such as Postman. Perform the API request against a single node in the cluster, and it will return the networking information for all of the nodes.

GET
{{PROTOCOL}}://{{CATC.IP.ADDRESS}}/api/system/v1/maglev/nodes/config
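
If you would rather do this from a terminal than Postman, here is a minimal curl sketch of the same call. The token endpoint shown (/dna/system/api/v1/auth/token) is the standard Catalyst Center authentication API; admin:password, <CATC.IP.ADDRESS>, and the use of jq for parsing are placeholders/assumptions to adapt to your environment:

# Step 1: request an auth token (placeholder credentials; -k skips certificate validation)
$ TOKEN=$(curl -sk -u admin:password -X POST "https://<CATC.IP.ADDRESS>/dna/system/api/v1/auth/token" | jq -r '.Token')
# Step 2: query the maglev nodes config API with the token
$ curl -sk -H "X-Auth-Token: $TOKEN" "https://<CATC.IP.ADDRESS>/api/system/v1/maglev/nodes/config" | jq '.'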

 

Hey @Tomas de Leon - I guess I spoke a bit too soon.  I spent so much time yesterday looking at the command line that my brain wasn't working correctly.  The API you gave does indeed show the interface configuration, but not the VIP configuration.  Here's an export from my lab.  I'm using both the Enterprise and Cluster interfaces, each with a .249 host address.  The VIP for each, though, is a .248 address, which is not shown anywhere in that API output.

{
    "response": {
        "nodes": [
            {
                "name": "10.0.0.249",
                "id": "29df14ab-c1d9-4ddc-89e7-d9b1b39e922e",
                "network": [
                    {
                        "slave": [
                            "ens160"
                        ],
                        "lacp_supported": true,
                        "lacp_mode": false,
                        "intra_cluster_link": true,
                        "interface": "enterprise",
                        "inet6": {
                            "netmask": "",
                            "host_ip": ""
                        },
                        "inet": {
                            "routes": [],
                            "netmask": "255.255.255.0",
                            "dns_servers": [
                                "10.0.0.253",
                                "8.8.8.8"
                            ],
                            "gateway": "10.0.0.1",
                            "host_ip": "10.0.0.249"
                        }
                    },
                    {
                        "slave": [
                            "ens192"
                        ],
                        "lacp_supported": true,
                        "lacp_mode": false,
                        "intra_cluster_link": false,
                        "interface": "cluster",
                        "inet6": {
                            "netmask": "",
                            "host_ip": ""
                        },
                        "inet": {
                            "routes": [
                                {
                                    "netmask": "255.0.0.0",
                                    "gateway": "192.168.199.1",
                                    "network": "10.0.0.0"
                                }
                            ],
                            "netmask": "255.255.255.0",
                            "dns_servers": [
                                "192.168.199.253"
                            ],
                            "gateway": "",
                            "host_ip": "192.168.199.249"
                        }
                    }
                ],
                "ntp": {
                    "keys": [],
                    "server_parms": [
                        {
                            "key_id": null
                        },
                        {
                            "key_id": null
                        }
                    ],
                    "auth": false,
                    "servers": [
                        "10.0.0.253",
                        "192.168.199.253"
                    ]
                },
                "platform": {
                    "product": "DN-SW-APL",
                    "provider": "VMware, Inc.",
                    "vendor": "Cisco Systems Inc",
                    "serial": "VMware-56 4d bd e0 f5 a8 b6 03-f2 87 99 c7 b2 74 ca 13",
                    "uuid": "E0BD4D56-A8F5-03B6-F287-99C7B274CA13"
                },
                "proxy": {
                    "https_proxy": "",
                    "http_proxy": "",
                    "no_proxy": [
                        "localhost",
                        "127.0.0.1"
                    ],
                    "https_proxy_username": "",
                    "https_proxy_password": ""
                }
            }
        ]
    },
    "version": "1.5.1"
}
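
Just to illustrate what is (and is not) in that response: a quick jq filter over the saved output lists each interface role with its host IP, and you can see there is no VIP field anywhere (nodes_config.json is just a hypothetical filename for the saved response):

$ jq -r '.response.nodes[].network[] | "\(.interface): \(.inet.host_ip)"' nodes_config.json
enterprise: 10.0.0.249
cluster: 192.168.199.249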

 

From the CLI,

Note: due to the restricted shell, commands are limited without root access.
You can use the following from the CLI:
$ ip a | egrep "management"
$ ip a | egrep "internet"
$ ip a | egrep "cluster"
$ ip a | egrep "enterprise"

These will show the IP address and the VIP configured on each interface (if up and running).  Note: capture this on all nodes if it is a 3-node cluster.
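
If it helps, the four greps can be rolled into one pass per node; this is just a sketch, assuming the restricted shell permits a for loop and that the interface labels match the role names above:

$ for role in management internet cluster enterprise; do ip a | egrep "$role"; done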

 

That got it!  Thanks!