As some of you may know, we released two Cisco Validated Designs for Docker Datacenter early in the spring. There are links to them here. And while Docker Datacenter certainly has its appeal, our own inclination was that we should focus our attention on Kubernetes. From November 2016 through February 2017, a few of my colleagues and I ran about six Kubernetes classes around the US (East Coast, West Coast, etc.), focused on running Kubernetes on top of OpenStack. However, it became clear in those classes that the majority of our attendees who wanted to run Kubernetes in their own data center didn't want to run it on OpenStack or VMware. They wanted to run it on bare metal. And since it was a Cisco event, they were looking to run it on UCS.
Advantages of Kubernetes on Bare Metal
Running Kubernetes in production in the cloud for the Cisco Pipeline application over the last six months has stood up wonderfully. That solution runs on Metacloud and has been rock solid. But as I thought about running it in a public cloud like most people do, several advantages of bare metal came to mind:
Cost - The Kubernetes cluster is always on. Even if it is lightly utilized, it is still on and you pay for that. I don't see people adding and removing nodes during non-peak periods, though this is certainly something that could be automated. Running in-house has a much larger capital cost up front but saves money in the long run - especially as the system grows.
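As a rough illustration of that cost trade-off, here is a back-of-the-envelope break-even calculation. All of the figures are hypothetical placeholders, not real cloud or UCS pricing:

```python
# Back-of-the-envelope break-even: an always-on cloud cluster vs. buying hardware.
# Every number below is a hypothetical placeholder, not actual AWS/GKE or UCS pricing.

CLOUD_COST_PER_NODE_MONTH = 150.0      # hypothetical on-demand instance cost
BARE_METAL_COST_PER_NODE = 5000.0      # hypothetical up-front server cost
BARE_METAL_OPEX_PER_NODE_MONTH = 30.0  # hypothetical power/cooling/rack cost
NODES = 10

def months_to_break_even():
    """Months after which the bare-metal cluster becomes cheaper overall."""
    capital = BARE_METAL_COST_PER_NODE * NODES
    monthly_savings = (CLOUD_COST_PER_NODE_MONTH
                       - BARE_METAL_OPEX_PER_NODE_MONTH) * NODES
    return capital / monthly_savings

if __name__ == "__main__":
    print(f"Break-even after ~{months_to_break_even():.1f} months")
```

With these made-up numbers the hardware pays for itself in roughly three and a half years, and the bigger the cluster the more the always-on cloud bill compounds.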
Performance - You can specify all of your hardware needs and architect everything yourself: powerful servers and a network designed for optimal performance.
Control - You decide how the system will work. Want to run it on a bunch of Raspberry Pi nodes? You can do that. Want your data stored locally? You got it. It's all yours.
Disadvantages of Kubernetes on Bare Metal
While the performance and control gains are certainly a given, the cost savings are not. It may actually cost you more to run Kubernetes on-prem. Why? Man hours. There is a great amount of uncertainty when running Kubernetes on-prem. Most of the guides out there are geared toward AWS or Google Cloud, so it's often difficult to know a good architecture for running Kubernetes on bare metal. But that is not all. Consider some other factors:
Load Balancers - AWS and GKE come with built-in load balancers. An ELB makes it easy to run three head nodes and avoid a single point of failure.
Storage - Storage volumes are also a given in the cloud, but something that needs to be figured out on-prem. Will you use NFS? Ceph?
Networking - Kubernetes doesn't come with a network overlay out of the box, but one is required to run it. There are many options to choose from, and making sure your choice works in your private environment is another thing that needs to be figured out.
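To make the storage and load-balancer gaps concrete, here is a sketch of the kind of thing you end up writing yourself on bare metal: a PersistentVolume backed by NFS, and a NodePort Service standing in for the cloud LoadBalancer. The server address, export path, names, and ports are all placeholders:

```yaml
# Hypothetical NFS-backed PersistentVolume -- on AWS or GKE a dynamic
# provisioner would create the volume for you.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10        # placeholder NFS server address
    path: /exports/k8s       # placeholder export path
---
# Hypothetical NodePort Service -- the bare-metal stand-in for
# type: LoadBalancer, since there is no ELB to hand out an address.
apiVersion: v1
kind: Service
metadata:
  name: web-example
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
```

With NodePort you still have to put something (DNS round-robin, a hardware load balancer, or keepalived) in front of the node IPs yourself - exactly the kind of decision the cloud normally makes for you.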
Note, however, that there are products you can buy that solve these problems. Red Hat has OpenShift, which is based on Kubernetes. There are also Rancher and Tectonic.
And while these products are great, what we really wanted was to stay as close to vanilla upstream Kubernetes as possible and make a simple way to run it on UCS.
What we've come up with now is a set of semi-automated steps to bring up your Kubernetes cluster on UCS. We call it KUBaM! We've tested it with rack mounts and blades. Essentially there are four parts:
1. Prepare the Build Environment
2. Deploy Service Profiles and Operating Systems automagically
3. Run Kubernetes deploy scripts
4. Configure Day 2 operations: Remote access, Monitoring, etc.
While we are still early in the process and will look to combine steps (like 2 & 3), we are very optimistic about the modules we are using to solve the problem and give UCS users a way to make the most of their infrastructure. You'll also notice there are things we can do with this automated process on UCS that we can't do on other types of servers.
Rather than write a lengthy CVD, we have decided to write this as an iterative live document on GitHub, complete with code. To make it possible we use the Python UCSM SDK as well as Ansible. Our approach creates a stack that can be torn down and built back up from scratch in a matter of minutes (minutes because, yes, you still have to wait for the blades or servers to reboot after installation). We base the first two steps on my previous article on PXEless automated installs. After that, we've created Ansible playbooks that bring the whole thing up using kubeadm and Contiv.
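For a flavor of what the kubeadm-based bring-up in step 3 does, the command sequence looks roughly like the following. This is a sketch, not the actual KUBaM playbooks; the pod CIDR, Contiv manifest name, token, and master address are all placeholders:

```shell
# On the master node: initialize the control plane with kubeadm.
# The --pod-network-cidr must match what the network plugin expects.
kubeadm init --pod-network-cidr=10.244.0.0/16

# Make kubectl usable for the admin user.
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config

# Install the Contiv network plugin (placeholder manifest file).
kubectl apply -f contiv-install.yaml

# On each worker node: join the cluster using the token that
# `kubeadm init` printed (token and master IP are placeholders).
kubeadm join --token <token> <master-ip>:6443
```

The Ansible playbooks essentially wrap these steps so the whole sequence runs unattended across the freshly installed nodes.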