12-13-2017 01:06 PM - edited 03-01-2019 06:47 AM
Task Name |
Description |
Prerequisites |
Category | Workflow
Components | vSphere 6.5
User Inputs |
Instructions for Regular Workflow Use:
This is an example of a Kubernetes install / deployment with UCSD.
The end user catalog offering:
The service offering/workflow questions:
In this case I want one master and five minions.
Workflow execution:
Deployment happens in parallel (minions):
This spawned one global workflow SR, one master SR, and five minion SRs.
The resulting e-mails:
Workflow completion:
Kubernetes is up:
Workflow Picture:
The Guide used for this example can be found here:
https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/
Here are the conversion steps taken to get this running in UCSD:
The host preparation is handled at VM deployment time as a post provisioning workflow
Workflow: CreateKubernetesCluster calls the Master and Minion workflows via postprovisioning
The Master creation workflow:
Prepare the host:
Create /etc/yum.repos.d/virt7-docker-common-release.repo on all hosts (centos-master and each centos-minion-n)
with the following content:
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
Execute
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
UCSD Translation
echo "[virt7-docker-common-release]" > /etc/yum.repos.d/virt7-docker-common-release.repo
echo "name=virt7-docker-common-release" >> /etc/yum.repos.d/virt7-docker-common-release.repo
echo "baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/" >> /etc/yum.repos.d/virt7-docker-common-release.repo
echo "gpgcheck=0" >> /etc/yum.repos.d/virt7-docker-common-release.repo
nohup yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel > /tmp/install.log &
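The echo chain above can also be collapsed into a single heredoc, which avoids the risk of a missed `>>`. A minimal sketch; it writes to /tmp here purely for illustration so it can run without root, while on the hosts the target is /etc/yum.repos.d/virt7-docker-common-release.repo:

```shell
# Same repo definition as above, written in one heredoc.
# The /tmp path is for illustration only; use /etc/yum.repos.d/ on the host.
REPO_FILE=/tmp/virt7-docker-common-release.repo
cat > "$REPO_FILE" <<'EOF'
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF
```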
Step 2 creates a file on UCSD with host file information for later processing:
Minion post provisioning workflow is the same as master:
The next step is to distribute the /etc/hosts file to all servers (master and minions) from UCSD (which accumulated the hosts file during all the server builds):
nohup curl --append --insecure --user ${TemplateUser}:${TemplatePassword} -T /tmp/hosts sftp://${custom_ReadFromCSVFileRandom_4118.out1}/etc/hosts &
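For more than a handful of nodes, the same push can be generated in a loop. A hedged sketch, not the shipped workflow: the HOSTS list stands in for the names UCSD accumulated during the builds, and TemplateUser/TemplatePassword are assumed workflow inputs.

```shell
# Generate one sftp push command per node into a helper script.
# HOSTS, TemplateUser and TemplatePassword are placeholder inputs.
HOSTS="centos-master centos-minion-1 centos-minion-2"
: > /tmp/push_hosts.sh
for H in $HOSTS; do
  printf 'nohup curl --append --insecure --user "$TemplateUser:$TemplatePassword" -T /tmp/hosts "sftp://%s/etc/hosts" &\n' "$H" >> /tmp/push_hosts.sh
done
# sh /tmp/push_hosts.sh   # run once every node is reachable
```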
Next:
Edit /etc/kubernetes/config which will be the same on all hosts:
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://centos-master:8080"
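No UCSD translation is shown in the post for this file, but following the same echo pattern as the other steps it could look like the sketch below. It writes to /tmp here so it runs without root; on the hosts the target is /etc/kubernetes/config:

```shell
# Sketch of the /etc/kubernetes/config edit in the post's echo style.
CFG=/tmp/kubernetes-config   # on the host: /etc/kubernetes/config
echo 'KUBE_LOGTOSTDERR="--logtostderr=true"' > "$CFG"
echo 'KUBE_LOG_LEVEL="--v=0"' >> "$CFG"
echo 'KUBE_ALLOW_PRIV="--allow-privileged=false"' >> "$CFG"
echo 'KUBE_MASTER="--master=http://centos-master:8080"' >> "$CFG"
```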
Next:
setenforce 0
systemctl disable iptables-services firewalld
systemctl stop iptables-services firewalld
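Note that setenforce 0 only lasts until reboot. If SELinux should stay permissive across reboots, /etc/selinux/config needs a matching edit as well; the sed below demonstrates that change on a throwaway copy in /tmp (on the host the file is /etc/selinux/config):

```shell
# Demonstrate the persistent SELinux change on a copy of the file.
DEMO=/tmp/selinux-config.demo
echo 'SELINUX=enforcing' > "$DEMO"
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$DEMO"
```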
Next steps only happen on the master:
Edit /etc/etcd/etcd.conf
# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
Edit /etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
echo KUBE_API_ADDRESS=\"--address=0.0.0.0\" > /etc/kubernetes/apiserver
echo KUBE_API_PORT=\"--port=8080\" >> /etc/kubernetes/apiserver
echo KUBELET_PORT=\"--kubelet-port=10250\" >> /etc/kubernetes/apiserver
echo KUBE_ETCD_SERVERS=\"--etcd-servers=http://${custom_ReadFromCSVFileRandom_4118.out2}:2379\" >> /etc/kubernetes/apiserver
echo KUBE_SERVICE_ADDRESSES=\"--service-cluster-ip-range=10.254.0.0/16\" >> /etc/kubernetes/apiserver
echo KUBE_API_ARGS=\"\" >> /etc/kubernetes/apiserver
Start etcd and configure it to hold the network overlay configuration on the master. Warning: this network must be unused in your network infrastructure! 172.30.0.0/16 is free in our network.
systemctl start etcd
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
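The JSON passed to etcdctl mk is easy to get wrong with all the escaped quotes, so it can help to sanity-check it before writing the key. A small sketch; json.tool ships with Python, though on CentOS 7 the interpreter may be `python` rather than `python3`:

```shell
# Validate the overlay network config before storing it in etcd.
NET_CFG='{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'
echo "$NET_CFG" | python3 -m json.tool > /dev/null && echo "network config JSON OK"
# then: etcdctl mk /kube-centos/network/config "$NET_CFG"
```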
Configure flannel to overlay Docker network in /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
echo FLANNEL_ETCD_ENDPOINTS=\"http://${custom_ReadFromCSVFileRandom_4118.out2}:2379\" > /etc/sysconfig/flanneld
echo FLANNEL_ETCD_PREFIX=\"/kube-centos/network\" >> /etc/sysconfig/flanneld
Start the appropriate services on master
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
Configure the Kubernetes services on the nodes.
Edit /etc/kubernetes/kubelet
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=centos-minion-n"
# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://centos-master:8080"
# Add your own!
KUBELET_ARGS=""
Configure flannel to overlay Docker network in /etc/sysconfig/flanneld
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://centos-master:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
echo KUBELET_ADDRESS=\"--address=0.0.0.0\" > /etc/kubernetes/kubelet
echo KUBELET_PORT=\"--port=10250\" >> /etc/kubernetes/kubelet
echo KUBELET_HOSTNAME=\"--hostname-override=${custom_ReadFromCSVFileRandom_4118.out2}\" >> /etc/kubernetes/kubelet
echo KUBELET_API_SERVER=\"--api-servers=http://`grep master /etc/hosts | awk '{ print $2 }'`:8080\" >> /etc/kubernetes/kubelet
echo KUBELET_ARGS=\"\" >> /etc/kubernetes/kubelet
echo FLANNEL_ETCD_ENDPOINTS=\"http://`grep master /etc/hosts | awk '{ print $2 }'`:2379\" > /etc/sysconfig/flanneld
echo FLANNEL_ETCD_PREFIX=\"/kube-centos/network\" >> /etc/sysconfig/flanneld
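The grep master /etc/hosts | awk '{ print $2 }' trick used above pulls the master's hostname out of the distributed hosts file. Demonstrated here against a throwaway copy in /tmp (addresses and names are placeholders):

```shell
# Build a sample hosts file and extract the master's name from it,
# exactly as the kubelet/flanneld echoes do with the real /etc/hosts.
cat > /tmp/hosts.demo <<'EOF'
10.0.0.10 centos-master
10.0.0.11 centos-minion-1
10.0.0.12 centos-minion-2
EOF
MASTER=$(grep master /tmp/hosts.demo | awk '{ print $2 }')
echo "$MASTER"    # centos-master
```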
Start the appropriate services on node (centos-minion-n)
for SERVICES in kube-proxy kubelet flanneld docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
Configure kubectl
kubectl config set-cluster default-cluster --server=http://centos-master:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
kubectl config set-cluster default-cluster --server=http://`grep master /etc/hosts | awk '{ print $2 }'`:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
A workflow run with one master and two minions, and the resulting e-mails.
Logging onto the master and checking its nodes.
The workflow is attached. Enjoy!
Thank you ogelbric!
No problem. Hope you got it working in your environment.