
DNAC Fresh install question

Freemen
Level 1

Hi All,

I am going to set up a single DNAC unit and configure it as the cluster master. I only have one 10G SFP, so I will use only the enterprise port.

Can I run the DNAC initial setup with the cluster link not connected?

 

 

Accepted Solution

I redid everything with the 1.2.10 ISO and it works as expected; I have no time to test again on 1.3.0.3.

 

Thanks for your help


7 Replies

jalejand
Cisco Employee

Hi Freemen

 

You can configure a fresh DNAC with only the enterprise port connected; just make sure your DNS and NTP servers are reachable via the enterprise port, and do not forget to configure a virtual IP for that port.

As for the cluster link, you can enable and configure it afterwards; there is no need to have it connected.
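If it helps, here is a rough pre-install sanity check you could run from a machine on the same network the enterprise port will use. This is only a Python sketch; the server addresses are placeholders for your own DNS and NTP servers, and the DNS test only probes TCP port 53 (which most DNS servers also listen on), so treat a failure there as a hint rather than proof.

import socket

DNS_SERVER = "192.0.2.53"   # placeholder: your DNS server
NTP_SERVER = "192.0.2.123"  # placeholder: your NTP server

def dns_reachable(server, timeout=3):
    """Basic probe: most DNS servers also accept TCP connections on port 53."""
    try:
        with socket.create_connection((server, 53), timeout=timeout):
            return True
    except OSError:
        return False

def ntp_reachable(server, timeout=3):
    """Send a minimal NTPv3 client request (mode 3) and wait for any reply."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            sock.recvfrom(512)
            return True
    except OSError:
        return False

print("DNS reachable:", dns_reachable(DNS_SERVER))
print("NTP reachable:", ntp_reachable(NTP_SERVER))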

I saw the messages below in /var/log/syslog, where 192.168.150.2 is the cluster port IP, so I am wondering if that is the issue.

Can you share your experience on how many hours it takes to bring up a fresh DNAC install?

 

Oct 4 08:19:11 maglev-master-192-168-150-2 kubelet[21058]: E1004 08:19:11.206775 21058 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Oct 4 08:19:12 maglev-master-192-168-150-2 start_web_install_service[2562]: | INFO | Service maglev-install-ui - 412b271ffadb maglev-dockerhub.cisco.com/tesseract-docker/maglev-install-ui:1.2.0.1008 "/usr/local/lib/node…" 3 hours ago Up 3 hours maglev-install-ui
Oct 4 08:19:12 maglev-master-192-168-150-2 start_web_install_service[2562]: | INFO | Service maglev-web-install - 07335bf9096f maglev-dockerhub.cisco.com/tesseract-docker/maglev-web-install:1.2.0.1008 "/bin/sh -c '. /opt/…" 3 hours ago Up 3 hours maglev-web-install
Oct 4 08:19:12 maglev-master-192-168-150-2 kubelet[21058]: I1004 08:19:12.444589 21058 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Oct 4 08:19:12 maglev-master-192-168-150-2 start_web_install_service[2562]: | INFO | Web install agent pid - 3205
Oct 4 08:19:12 maglev-master-192-168-150-2 dockerd[4682]: time="2019-10-04T08:19:12Z" level=info msg="shim reaped" id=c52a4dabe3be14703c03e1b26cdaa582a9af529d16c9473087e955789d4f2a98
Oct 4 08:19:12 maglev-master-192-168-150-2 dockerd[4682]: time="2019-10-04T08:19:12.935563839Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Oct 4 08:19:13 maglev-master-192-168-150-2 kubelet[21058]: I1004 08:19:13.025960 21058 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-192.168.150.2_kube-system(c0c5c1a7b606058c4186b688989aea65)"
Oct 4 08:19:13 maglev-master-192-168-150-2 dockerd[4682]: time="2019-10-04T08:19:13Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/43cc36c96f4d0f9cd0848467aba752810a76bcf6f0efa865bfd1b1ac9540ec85/shim.sock" debug=false pid=26049
Oct 4 08:19:13 maglev-master-192-168-150-2 kubelet[21058]: I1004 08:19:13.354146 21058 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Oct 4 08:19:14 maglev-master-192-168-150-2 kubelet[21058]: I1004 08:19:14.372465 21058 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Oct 4 08:19:15 maglev-master-192-168-150-2 kubelet[21058]: I1004 08:19:15.378599 21058 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Oct 4 08:19:16 maglev-master-192-168-150-2 kubelet[21058]: E1004 08:19:16.153401 21058 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "192.168.150.2" not found
Oct 4 08:19:16 maglev-master-192-168-150-2 kubelet[21058]: W1004 08:19:16.207942 21058 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Oct 4 08:19:16 maglev-master-192-168-150-2 kubelet[21058]: E1004 08:19:16.208134 21058 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Oct 4 08:19:16 maglev-master-192-168-150-2 kubelet[21058]: E1004 08:19:16.578619 21058 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.150.2:9443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Oct 4 08:19:16 maglev-master-192-168-150-2 kubelet[21058]: E1004 08:19:16.580714 21058 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.150.2:9443/api/v1/nodes?fieldSelector=metadata.name%3D192.168.150.2&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Oct 4 08:19:16 maglev-master-192-168-150-2 kubelet[21058]: E1004 08:19:16.581891 21058 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.150.2:9443/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.150.2&limit=500&resourceVersion=0: net/http: TLS handshake timeout
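To gauge how often each of these repeats, a quick Python sketch like the one below can tally the signatures. The syslog path and the patterns are just assumptions taken from the excerpt above; adjust them to what you actually see.

import re
from collections import Counter

# Error signatures taken from the syslog excerpt above.
PATTERNS = {
    "cni config uninitialized": re.compile(r"cni config uninitialized"),
    "node not found": re.compile(r'node "[\d.]+" not found'),
    "TLS handshake timeout": re.compile(r"TLS handshake timeout"),
}

counts = Counter()
with open("/var/log/syslog", errors="replace") as syslog:
    for line in syslog:
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                counts[name] += 1

for name, count in counts.most_common():
    print(f"{count:6d}  {name}")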

A fresh install takes about 3 hours, more or less, from the moment it boots after exiting the wizard.

But you are in fact seeing errors there.
Could you try re-installing the DNAC without the cluster IP, only marking the interface as cluster with an X?

But if I do that, next time when I have 3 nodes I will need to reimage again, right? Because the documentation says the cluster IP can't be changed.

That is correct. I'll do some internal research on your logs.
At this point, is it stuck at these errors? What is the current console output?

Also, which DNAC version are you booting? (A fresh install out of the box, or did you get an ISO file?)

Yes, the errors are flooding out in /var/log/syslog.

I am using the 1.3.0.3 ISO, provided by Cisco TAC after I logged a case; I verified the MD5 and used balenaEtcher to create the bootable USB as per the documentation.
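For reference, the MD5 check can be scripted. This is only a minimal Python sketch; the filename and expected checksum are placeholders for the values TAC provides with the ISO.

import hashlib

ISO_PATH = "dnac-1.3.0.3.iso"            # placeholder filename
EXPECTED_MD5 = "<md5 provided by TAC>"   # placeholder checksum

md5 = hashlib.md5()
with open(ISO_PATH, "rb") as iso:
    for chunk in iter(lambda: iso.read(1 << 20), b""):  # read in 1 MiB chunks
        md5.update(chunk)

digest = md5.hexdigest()
print("MD5 OK" if digest == EXPECTED_MD5 else f"MD5 mismatch: {digest}")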

 
