So why have containers become a hot topic, something that has many people turning their heads and taking another look? The answer is quite simple: a huge amount of interest in Docker.
Let us start with a brief history of who and what Docker is (some of this information is readily available on their website or elsewhere).
Docker provides a high-level API on top of Linux Containers (LXC), providing a lightweight virtualization mechanism that runs processes in isolation. A Docker container utilizes the underlying cgroups and LXC mechanisms on the host operating system, which provide resource isolation (CPU, memory, block I/O, network, etc.) and separate namespaces that completely isolate the application's view of the operating environment.
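That isolation is visible directly from the command line. The sketch below assumes a Linux host with the Docker daemon running; the `ubuntu` image name is just an example:

```shell
# Start a container with explicit resource limits; Docker translates
# these flags into cgroup settings on the host.
docker run -it --memory=256m --cpu-shares=512 ubuntu /bin/bash

# Inside the container, the process sits in its own namespaces:
ps aux      # shows only the container's own processes, with its own PID 1
hostname    # shows the container's ID, not the host's name
```

The application inside the container simply sees a small, private Linux environment; it has no idea the host is running anything else.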
Developers dig it!
Containers are completely portable – that means they will run exactly the same on a developer laptop – be it OS X, Windows or Linux – or on server hardware in your datacenter – no matter what flavor of Linux the metal is running – or in the cloud – any cloud provider will do.
Developers are able to build any app, in any language, using any toolchain. They can get going in a jiffy by starting from any one of the ~13,000 images available on Docker Hub. All of the changes and dependencies are tracked and managed by Docker, making it easier for sysadmins to understand how the apps that developers build actually work. Developers are comfortable with the concept of private or public repositories – they can automate their development and build workflow, incorporating Docker without having to change the way they work.
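As a concrete illustration of that dependency tracking, a minimal Dockerfile (the application file and base image below are hypothetical examples) records every build step explicitly, so anyone can read exactly how the app's environment is constructed:

```dockerfile
# Base image pulled from Docker Hub
FROM python:2.7

# Every dependency is declared here, not installed by hand on a server
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt

# The application itself
COPY app.py /app/
WORKDIR /app
CMD ["python", "app.py"]
```

Running `docker build` against this file produces an image that carries the app and all its dependencies as a single, versioned artifact.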
The quote below from the Docker web site is completely true.
“Docker helps developers build and ship higher-quality applications, faster.”
Sysadmins think it’s groovy!!
Sysadmins (those guys who make everything just work – but no-one really knows how) use Docker to provide standardized environments for all the different stages of the product lifecycle. Development, QA, staging and production teams all have the same platform – and this reduces the “works on my machine” blame game.
This is achieved by “containerizing” the application platform and all its dependencies, which allows sysadmins to provide an abstraction layer that makes differences in operating systems and the underlying infrastructure irrelevant.
An additional benefit is that sysadmins can decide that the container becomes the unit of deployment. This now gives them huge flexibility in choosing where workloads will run: on-premises bare metal, datacenter VMs or public clouds – they all use the same unit, the container.
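The build-once, run-anywhere workflow behind this can be sketched as a few CLI steps. This is illustrative only, and requires a Docker daemon; the registry and image names are hypothetical:

```shell
# Build the image once, on any machine
docker build -t myregistry.example.com/myapp:1.0 .

# Push it to a private or public registry
docker push myregistry.example.com/myapp:1.0

# The very same artifact is then pulled and run, unchanged, on a
# bare-metal host, a datacenter VM, or a cloud instance:
docker pull myregistry.example.com/myapp:1.0
docker run -d myregistry.example.com/myapp:1.0
```

The image tag (`1.0` here) identifies one immutable artifact, so what was tested in QA is byte-for-byte what runs in production.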
Last but not least – the Docker Engine’s lightweight runtime allows for insanely rapid scale-up and scale-down in response to changes in demand on the application.
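Because no guest OS has to boot, adding capacity is a matter of seconds. A minimal sketch, assuming a Docker daemon and a hypothetical `mywebapp` image that listens on port 80:

```shell
# Spin up five identical instances of the app, each mapped to its own
# host port; starting each one takes seconds, not minutes.
for i in 1 2 3 4 5; do
  docker run -d --name web$i -p 808$i:80 mywebapp
done

# Scaling back down is just as fast:
docker stop web4 web5
```

A VM-based equivalent of this loop would mean booting five operating systems.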
The quote below from the Docker web site summarizes it well.
“Docker helps sysadmins deploy and run any app on any infrastructure, quickly and reliably.”
Hey – you just described virtual machines – didn’t you?
There is a basic and fundamental difference between VMs and containers. A VM is usually a full OS – which requires a decent amount of resources. A container’s sole purpose is to run an application on top of an OS; it does not need to run a full OS, only the application. That makes the container smaller – considerably smaller – and it also makes containers extremely portable. This can increase consolidation density dramatically.
What does this mean for your datacenter?
Containers will affect the way you provide services in your datacenter – to what extent is not yet clear.
It should provoke your thinking and you should ask yourselves the following questions – in preparation for the future.
· Will containers replace VMs or instances?
o I think it is too early to know, but there is a very valid use case for using containers.
· Are they suitable for every single use case?
o Definitely not! But there are some use cases where they fit perfectly!
· Are they 100% production ready?
o Currently I would say they are not. But then again – the question you should be asking yourself – is production the correct use case?
· Will this change the datacenter of the future?
o Yes – but exactly how is something I think will still evolve over the next couple of years.
· Should you start looking at containers today?
o A definite yes! I think you would prefer to be the company that acts proactively – ahead of the curve – and not the one that has to rush to implement something because customers are already demanding it and you are late to the game.