As you saw in the Part 1 post, Cisco UCS can bring a lot of value to a container cluster environment. It is remarkable to acknowledge, but landing the operating system on bare metal servers with the required customization has proven non-trivial for many years now!
So now that you have the operating system installed in a consistent fashion, how do you layer on the associated packages, configuration files and the clustering technology itself?
This is where the UCS integrations prove to be of great value and it's these we will dive into now!
If we look at the whole flow including the creation of the web server in Part 1 of this blog, it looks something like the following:
Installing Kubernetes on UCS programmatically
Setting up the web server (this is a manual operation today but we are in the process of automating this part)
The UCS Python SDK being used to set up UCS in a PXE-less way
Ansible used to customize the CentOS operating system and deploy Kubernetes (which in turn uses Docker) and the container networking solution, Contiv.
So let's take a closer look at some of these components and how they made the whole process easy.
Cisco UCS Python SDK
The Cisco UCS Python SDK was used extensively in our project as it is fully featured, well documented and supported via a Slack community.
It provides Python language bindings to interface with the Cisco UCSM API for CRUD operations. As every UCS managed object type is abstracted as a Python class, it offers a better code writing/reading experience, letting you specify the objects directly when building out the Kubernetes environment.
One valuable feature to point out is the “convert_to_ucs_python” tool, which helps with code development by capturing operations performed in the UI and converting them to Python code. Using this tool, relevant code can be generated without being an expert in the UCS API model.
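As a rough sketch of the connect → add managed object → commit pattern the SDK uses (the address, credentials and service profile name below are placeholders, not from our project):

```python
def service_profile_dn(org_dn: str, name: str) -> str:
    """Build the distinguished name (DN) UCSM gives a service profile."""
    return f"{org_dn}/ls-{name}"

def create_service_profile(handle, name: str, org_dn: str = "org-root"):
    """Create a service profile through the UCS Python SDK.

    `handle` is a logged-in ucsmsdk UcsHandle; the import is kept local so
    the DN helper above can be used without the SDK installed.
    """
    from ucsmsdk.mometa.ls.LsServer import LsServer  # pip install ucsmsdk
    mo = LsServer(parent_mo_or_dn=org_dn, name=name)
    handle.add_mo(mo)
    handle.commit()  # nothing reaches UCSM until commit()
    return handle.query_dn(service_profile_dn(org_dn, name))

def main():
    # Placeholder address/credentials -- run only against a live UCSM.
    from ucsmsdk.ucshandle import UcsHandle
    handle = UcsHandle("192.0.2.10", "admin", "password")
    handle.login()
    try:
        create_service_profile(handle, "k8s-node-01")
    finally:
        handle.logout()
```

Code generated by convert_to_ucs_python follows this same shape, with the managed object classes and attribute values filled in from the UI actions it captured.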
The SDK ships with sample code and auto-generated API reference documentation, which, along with the Slack support, help flatten the learning curve.
Much of the Ansible playbooks and Python scripts were hand crafted for our project, as the team had the skills to do this and it was a straightforward process thanks to the UCSM API model, but it was clear that Cisco-provided Ansible modules would be appreciated to further simplify the process.
The good news is that these modules are now available and will be used in future iterations of deploying container tech on UCS: you can now use Ansible for configuration management, deployment, and orchestration of Cisco UCS servers, storage, fabric, hyperconverged infrastructure and converged infrastructure, as well as Nexus switches.
The new integration of Cisco UCS Manager and Ansible by Red Hat provides a software-defined approach to the management of the entire hardware and software stack. As a result, you can achieve faster build times, because entire application stacks can be provisioned automatically. You can also automate UCS Manager policy, resource pool, and resource profile configuration and ongoing management, including the ability to detect and remediate unintended changes.
Both the Ansible and Python work was made possible by the highly appreciated UCS Manager API, which has been around since the beginning of the UCS journey.
UCSM was designed and implemented to model all of the UCS hardware aspects in software and make this fully programmable via the mature and well documented XML-based API.
Thanks to this layer of abstraction made available for consumption via the API, we were able to effectively treat the hardware platform as Bare Metal as a Service: we could manipulate and consume it programmatically, thus automating most tasks.
The use of a consistent object model and unified API resulted in simplicity when writing scripts, and should help even further when developing tools in a CI/CD process. Because the object model is consistent throughout any UCS deployment, you can learn the API once and apply it as the scope of what you are managing changes.
For example, it can be programmed as low as the drive component or as high as the chassis level, and you can apply the investment you have made in learning the API as you move to other areas of infrastructure management. You simply add new objects at each level of infrastructure and scale—without the need to change the data model or software architecture.
It's worth noting that the Python SDK takes advantage of this API, but most (if not all) of these actions could be achieved by communicating directly with the API if so desired. With all of the UCSM functionality available via the API (over 2,500 functions), whichever route is taken, there is enormous scope to apply DevOps tools against UCS.
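To illustrate the direct route, here is a minimal stdlib-only sketch of talking to the XML API, which UCSM exposes at the /nuova endpoint: aaaLogin returns a session cookie, and configResolveDn fetches one managed object by its distinguished name. The address, credentials and DN are placeholders.

```python
import urllib.request
from xml.etree import ElementTree as ET

UCSM_URL = "https://192.0.2.10/nuova"  # the UCSM XML API endpoint

def xml_request(method: str, **attrs) -> bytes:
    """Serialize a single XML API method call, e.g. <aaaLogin ... />."""
    return ET.tostring(ET.Element(method, {k: str(v) for k, v in attrs.items()}))

def login_body(username: str, password: str) -> bytes:
    # aaaLogin returns an outCookie used to authenticate subsequent calls
    return xml_request("aaaLogin", inName=username, inPassword=password)

def resolve_dn_body(cookie: str, dn: str) -> bytes:
    # configResolveDn retrieves one managed object by distinguished name
    return xml_request("configResolveDn", cookie=cookie, dn=dn,
                       inHierarchical="false")

def post(body: bytes) -> ET.Element:
    """POST a request body to UCSM and parse the XML response."""
    req = urllib.request.Request(UCSM_URL, data=body,
                                 headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())

def main():
    # Run only against a live UCSM.
    cookie = post(login_body("admin", "password")).get("outCookie")
    blade = post(resolve_dn_body(cookie, "sys/chassis-1/blade-1"))
```

The same pattern extends to the config methods (such as configConfMo) that create and modify objects, which is exactly what the Python SDK wraps for you.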
Contiv
Cisco recently announced general availability of an open source project called Contiv, and we used it within our project as it offered us a solid, flexible networking and policy engine for Kubernetes.
It has actually been around for some time now, but we recently hardened it and added enterprise features such as LDAP integration and support for multiple container cluster managers such as Kubernetes, Docker Swarm and Mesos.
Contiv provides both network connectivity and policy creation and enforcement.
The connectivity options are impressive, offering native L2/VLAN, VXLAN, L3 BGP and Cisco ACI topology support, making it an enterprise option able to accommodate most requirements.
The policy aspect of Contiv delivers on container-oriented use cases such as east-west connectivity and protocol security, plus policy to control resource usage such as bandwidth.
The policy model is based on Group Based Policy, which is being rapidly adopted in the industry. It empowers both network admins and developers: policy can be defined centrally for consumption by others, or the end user can determine the security posture of their container-based application upon instantiation.
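In Contiv itself, groups and policies are created via its netctl CLI or REST API; the following is a purely illustrative toy sketch (not Contiv's API) of the group-based-policy idea: a consuming endpoint group denies traffic by default, and an allow rule names the producing group, protocol and port.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Rule:
    """One allow rule: traffic from a named endpoint group, by protocol/port."""
    from_group: str
    protocol: str
    port: int

@dataclass
class GroupPolicy:
    """Default-deny policy attached to a consuming endpoint group."""
    group: str
    allows: List[Rule] = field(default_factory=list)

def permitted(policy: GroupPolicy, src_group: str, protocol: str, port: int) -> bool:
    """Traffic passes only if some allow rule matches (default deny)."""
    return any(r.from_group == src_group and r.protocol == protocol and r.port == port
               for r in policy.allows)

# Example intent: only the "web" group may reach the "db" group on tcp/5432.
db_policy = GroupPolicy("db", [Rule("web", "tcp", 5432)])
```

The value of expressing intent this way is that it names groups rather than IP addresses, so the policy survives containers being rescheduled across the cluster.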