
Pondering Automation: Less is More Part 2


Howdy out there in automation land!!!! Again... two in one day... wow :) So onwards we press. If you have not read Part 1, please go back and do that, as this part might not make sense without it. In this part of the Less is More series we are going to install CloudCenter Suite (CCS) and Action Orchestrator (AO) on the four-node Kubernetes cluster we set up last time. I think we'll use this movie poster... because after you do this, you are truly going to be "The One" :)

 

220px-TheOnefilm.jpg

 

 

At a high level we are going to follow the install guide found here. Remember: single commands are in BOLD and longer commands are blocked.

Without further delay... let's get onto it!

 

Prerequisites

  1. First we need Kubernetes version 1.11.3 or later... no worries! In Part 1 we installed 1.15 or later.
  2. We need to install Cert Manager v0.5.2 via kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.5/contrib/manifests/cert-manager/with-rbac.yaml
  3. We must define a default storage class. You are welcome to use VMware or OpenStack or whatever cloud as a storage class, but in this case I suggest using local disk, or what we call the "Host-Path Provisioner". You can find out about all storage classes here. We will assume we have a 2nd drive mounted at /mnt/disk2 on each node. To use the host-path provisioner and really follow this end to end, do these sub-tasks; if not, skip to step 4.
    1. If you run the command kubectl get sc you will probably see nothing! But we need storage for the CCS databases.
    2. Use the attached yaml (or see below) and run kubectl create -f host-path-provisioner.yaml. If you look at the pod spec, you will see volume mounts for /mnt/disk2; change that to whatever your volume mount point is before you apply the yaml.
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: hostpath-provisioner
        namespace: kube-system
      ---
      
      kind: ClusterRole
      apiVersion: rbac.authorization.k8s.io/v1beta1
      metadata:
        name: hostpath-provisioner
        namespace: kube-system
      rules:
        - apiGroups: [""]
          resources: ["persistentvolumes"]
          verbs: ["get", "list", "watch", "create", "delete"]
        - apiGroups: [""]
          resources: ["persistentvolumeclaims"]
          verbs: ["get", "list", "watch", "update"]
        - apiGroups: ["storage.k8s.io"]
          resources: ["storageclasses"]
          verbs: ["get", "list", "watch"]
        - apiGroups: [""]
          resources: ["events"]
          verbs: ["list", "watch", "create", "update", "patch"]
      ---
      
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1beta1
      metadata:
        name: hostpath-provisioner
        namespace: kube-system
      subjects:
        - kind: ServiceAccount
          name: hostpath-provisioner
          namespace: kube-system
      roleRef:
        kind: ClusterRole
        name: hostpath-provisioner
        apiGroup: rbac.authorization.k8s.io
      ---
      
      apiVersion: rbac.authorization.k8s.io/v1beta1
      kind: Role
      metadata:
        name: hostpath-provisioner
        namespace: kube-system
      rules:
        - apiGroups: [""]
          resources: ["secrets"]
          verbs: ["create", "get", "delete"]
      ---
      
      apiVersion: rbac.authorization.k8s.io/v1beta1
      kind: RoleBinding
      metadata:
        name: hostpath-provisioner
        namespace: kube-system
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: hostpath-provisioner
      subjects:
      - kind: ServiceAccount
        name: hostpath-provisioner
      ---
      
      # -- Create a pod in the kube-system namespace to run the host path provisioner
      # -- replace "/mnt/disk2" with whatever your host path is
      apiVersion: v1
      kind: Pod
      metadata:
        namespace: kube-system
        name: hostpath-provisioner
      spec:
        serviceAccountName: hostpath-provisioner
        containers:
          - name: hostpath-provisioner
            image: mazdermind/hostpath-provisioner:latest
            imagePullPolicy: "IfNotPresent"
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
              - name: PV_DIR
                value: /mnt/disk2
      
            volumeMounts:
              - name: pv-volume
                mountPath: /mnt/disk2
        volumes:
          - name: pv-volume
            hostPath:
              path: /mnt/disk2
      ---
      
      # -- Create the standard storage class for running on-node hostpath storage
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        namespace: kube-system
        name: manual
        annotations:
          storageclass.beta.kubernetes.io/is-default-class: "true"
          storageclass.kubernetes.io/is-default-class: "true"
        labels:
          kubernetes.io/cluster-service: "true"
          addonmanager.kubernetes.io/mode: EnsureExists
      provisioner: hostpath
      
    3. There is a Windows version as well; it is included in the zip below.
    4. You can now run kubectl get sc again and you should see the provisioner set up. You can also see a pod running for it.
  4. We now need to create the priority classes for CCS, we do this via
    cat <<EOF | kubectl apply -f -
    apiVersion: scheduling.k8s.io/v1beta1
    kind: PriorityClass
    metadata:
      name: suite-high
    value: 1000000
    globalDefault: false
    description: "High priority"
    ---
    apiVersion: scheduling.k8s.io/v1beta1
    kind: PriorityClass
    metadata:
      name: suite-medium
    value: 10000
    globalDefault: false
    description: "Medium priority"
    ---
    apiVersion: scheduling.k8s.io/v1beta1
    kind: PriorityClass
    metadata:
      name: suite-low
    value: 100
    globalDefault: false
    description: "Low priority"
    EOF
  5. You can then run kubectl get pc to check them.
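If you would rather keep step 4's priority classes in a file you can review and re-apply later, the same three objects can be generated from a small table. This is just a sketch; the filename suite-priorities.yaml is arbitrary:

```shell
# Generate the three suite PriorityClass objects from a compact name:value:label table
# and save them to a file for a later `kubectl apply -f suite-priorities.yaml`.
for entry in suite-high:1000000:High suite-medium:10000:Medium suite-low:100:Low; do
  name=${entry%%:*}; rest=${entry#*:}
  value=${rest%%:*}; label=${rest#*:}
  cat <<EOF
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: $name
value: $value
globalDefault: false
description: "$label priority"
---
EOF
done > suite-priorities.yaml

# sanity check: should print 3, one per PriorityClass document
grep -c '^kind: PriorityClass' suite-priorities.yaml
```

This keeps the manifest under version control instead of living only in your shell history, which is handy if you rebuild the cluster later.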

 

Install CloudCenter Suite and Action Orchestrator

  1. You will now need to load the installer VM, which can be downloaded from Cisco.com. You will probably use either the OVA or the QCOW2 image, so download what you need.
  2. Load the installer VM into vCenter or whatever hypervisor you want. If you don't have a hypervisor you can use a "local" one like VirtualBox or VMware Fusion. I used vCenter; the only requirement is that the installer can reach the IP address of your master.
  3. Once your installer VM launches, it should grab an IP address; then you can go to https://<IP-ADDRESS> to load the installer web UI
  4. Select GET STARTED under the Existing Kubernetes Cluster header
    ccs-installer.JPG
  5. Slide the Load Balancer as a Service option to NO
  6. Go back to your master VM and go to your Kube home directory via cd $HOME/.kube
  7. Print out your admin configuration via cat config and then copy and paste that into a text file and save as adminConfig.json
  8. Go back to the installer web UI and click Select File and then select your json file from step 7. Then click INSTALL
  9. Most likely the system will install to the "default" namespace. If it does not, replace "default" in the commands below with the proper namespace
  10. If you are using the host-path provisioner, you might run into an issue where a pod and its database storage are not on the same host. If you do, follow these sub-steps to "fix" it. I ran into it on one install and not on another, so hopefully you do not, but if you do...
    1. Check whether postgres is running via kubectl -n default get pods -o wide | grep "postgres"
    2. If postgres is running, skip the rest and go back to the main steps; if it has errors, continue. Please note: if you have other pods failing because a pod and its storage are not on the same node, you can use this fix for any deployment or stateful set.
    3. To fix the postgres issue, we must determine the nodes involved. The above command shows which node postgres is trying to run on; now we need to find where the storage is.
    4. Run the command kubectl get pv | grep "postgres" to find the PV ID.
    5. Then describe the PV via kubectl describe pv <PV ID>. This will show you the assigned node in the annotations section.
      k8s-host-path-storage.JPG
    6. You can see in the above that it is assigned to "ao-so-rtp-09", so we need to force the pod onto the same node. To do this, run kubectl -n default edit statefulset common-framework-suite-postgresql. Look for the priorityClassName line and insert a line above it like nodeName: <node>
    7. Then save the stateful set; it should bounce the pod and start running.
    8. If you run into this, make sure you watch the video as I detail what I did, etc.
  11. Once the bottom bar of the installer completes, it will prompt you to go to your registration URL and will send you to https://<master-ip>:<node port of ingress>
  12. Register your root tenant admin and log in for the first time! Congrats :)
  13. To then install AO, go to the Dashboard once you log in, click Install next to AO, and pick the version to install. Once it completes you can go into AO.
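For step 10.6, instead of hand-editing the stateful set, the same nodeName pin can be applied as a merge patch. This is only a sketch of an alternative approach: the node name ao-so-rtp-09 is the one from my lab above, so substitute whatever node kubectl describe pv reported for yours:

```shell
# Pin the postgres stateful set to the node that holds its host-path volume.
# NODE is the node shown by `kubectl describe pv <PV ID>` (lab value shown here).
NODE="ao-so-rtp-09"

# Build a merge patch that sets spec.template.spec.nodeName on the stateful set
PATCH="{\"spec\":{\"template\":{\"spec\":{\"nodeName\":\"$NODE\"}}}}"
echo "$PATCH"

# Apply it (requires cluster access, so it is commented out here):
# kubectl -n default patch statefulset common-framework-suite-postgresql --type merge -p "$PATCH"
```

Patching the pod template bounces the pod the same way saving the edited stateful set does, but it is scriptable if you hit this on more than one install.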

 

Wow!!! Now you have your own "non-production"/smaller cluster to work with AO on CCS directly!!! If you run into any issues, please feel free to post questions/comments/etc. in the comments section of the blog. I highly suggest using the VODs as a guide and working alongside them... now... ONTO THE VIDEO!!!

 

Play recording (38 min 54 sec)

Recording password: HppRgUP2

 

 

 

Standard End-O-Blog Disclaimer:

 

Thanks as always to all my wonderful readers and those who continue to stick with and use CPO, and now those learning AO! I am always looking for good questions, scenarios, stories, etc... if you have a question, please ask; if you want to see more, please ask... if you have topic ideas that you want me to blog on, please ask! I am happy to cater to the readers and make this the best blog you will find :)

 

AUTOMATION BLOG DISCLAIMER: As always, this is a blog and my (Shaun Roberts) thoughts on CPO, AO, CCS, orchestration, and automation, my thoughts on best practices, and my experiences with the products and customers. The above views are in no way representative of Cisco or any of its partners, etc. None of these views are officially supported and this is not a place to find standard product support. If you need standard product support, please go through the current call-in numbers on Cisco.com or email tac@cisco.com

 

Thanks and Happy Automating!!!

 

--Shaun Roberts

shaurobe@cisco.com
