Docker & Containers
12 June 2019
You’ve probably heard of Docker, and it may have caught your attention because of its rising popularity. If you have, then you’ve probably heard about containers too (and no, Docker is not a container!). Docker is a company that provides a platform, also named “Docker”, that allows you to build, run and manage containers. That’s right: Docker is not the same thing as a container. So what is a container exactly? A container is an isolated environment that runs an application on a server: simply put, “a process”. Alright, it’s much more than just a process, but I’ll go into more detail on the “container” part later on. First, let’s go back and talk about Docker!
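To make the “a container is just a process” idea concrete, here is a minimal sketch. It assumes Docker is installed and running (the script skips the demo gracefully otherwise), and uses the small `alpine` image purely as an example:

```shell
#!/bin/sh
# Sketch: from the host's point of view, a container is a normal process.
# Assumes Docker is available; skips the demo otherwise.
if command -v docker >/dev/null 2>&1; then
  # Start a throwaway container that simply sleeps for a while.
  docker run -d --rm --name demo alpine sleep 60 || true
  # The container shows up as an ordinary process on the host:
  ps aux | grep '[s]leep 60' || true
  docker stop demo || true
  status="ran"
else
  status="skipped"
fi
echo "$status"
```

If Docker is present, the `grep` line shows the container’s `sleep` command in the host’s process list: isolated, yes, but still just a process.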
Let me start with a bit of history, because before Docker there was LXC, released in 2008. LXC provides OS-level virtualization, allowing multiple containers to run on a single server while sharing the same kernel. LXC does run containers, but it never became very popular because of several limitations. Docker was released as an open source project in 2013 and initially built on top of LXC. So early Docker was essentially LXC, but with several extra features.
Docker provides more capabilities than LXC such as:
Versioning: the ability to roll back to a previous version of an image
Public registry: tons of useful container images already created for you
Portable deployments: run your container on any Docker-enabled server
One of the main reasons Docker is so popular is that it focuses on the needs of developers and sysadmins to separate application dependencies from infrastructure. Let me give you a real-life example of why this is revolutionary.
You, as a developer, are tasked with developing an application. When you’ve finished, you build a container image and push it to a cloud registry called Docker Hub. The image contains all the dependencies and binaries the application needs to run. I, as a sysadmin, can then easily pull that image from Docker Hub and deploy it directly to a test server. Yes: the application you developed will run exactly the same on the test server as it does on your laptop.
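That hand-off could look like the sketch below. The image name `myuser/myapp` is made up for illustration, and the script assumes Docker is installed and that you are logged in to Docker Hub (each step is guarded so the sketch degrades gracefully):

```shell
#!/bin/sh
# Hypothetical developer-to-sysadmin hand-off via Docker Hub.
# "myuser/myapp" is an invented image name for illustration only.
IMAGE="myuser/myapp:1.0"

if command -v docker >/dev/null 2>&1; then
  # Developer: build the image from a Dockerfile and push it to Docker Hub.
  docker build -t "$IMAGE" . || true
  docker push "$IMAGE" || true
  # Sysadmin: pull the exact same image on the test server and run it.
  docker pull "$IMAGE" || true
  docker run -d --rm "$IMAGE" || true
fi
echo "$IMAGE"
```

The key point is that both sides reference the same tagged image, so “works on my laptop” and “works on the test server” are the same artifact.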
Now that you know what Docker is, let me talk more about containers! Why use containers? Well, the shift from monolithic applications to microservices is accelerating, because it brings several advantages. Code becomes easier to understand, develop and maintain. Developers gain freedom, because each service is independently deployable and updatable. Moving to another application architecture also means making some other choices, for instance: which execution environment are you going to use? Do you want to run your microservices on bare metal, in virtual machines or in containers?
Now, let’s look at those options in detail. First of all, I don’t want to be a pain in the ass, but bare metal is not an obvious fit for microservices: running multiple microservices on one physical machine is simply not best practice, period. You can always divide your physical servers into virtual machines instead, but that also brings several disadvantages for microservices, and it costs a lot! That leaves us with the last option: containers. Containers have a lot of benefits when it comes to microservices. Leaving the exact details out: you get far more isolated applications on a single server, and your application can be improved and deployed much faster!
Examples of why to use containers:
- Containers share the host OS kernel, which makes them efficient in using system resources
- Containers lend themselves to CI/CD, because images live in a shared registry and can be built and shipped automatically
- Containers give you instant application portability
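To make the isolation point from the list above concrete, here is a sketch that starts two containers from the same image. Each gets its own hostname, filesystem and process space, yet both share the host kernel (assumes Docker is installed; `alpine` is just a small example image):

```shell
#!/bin/sh
# Two containers from one image: one shared kernel, two isolated environments.
# Assumes Docker is available; every step is guarded so the sketch degrades
# gracefully when it is not.
started="web1 web2"
if command -v docker >/dev/null 2>&1; then
  docker run -d --rm --name web1 alpine sleep 30 || true
  docker run -d --rm --name web2 alpine sleep 30 || true
  # Each container reports its own hostname, proving they are isolated:
  docker exec web1 hostname || true
  docker exec web2 hostname || true
  docker stop web1 web2 || true
fi
echo "$started"
```

Spinning up a second container took one command and a fraction of a second; a second virtual machine would have taken minutes and gigabytes.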
Talking about multiple microservices means there will be a lot of containers to manage, right? Well, manually managing a lot of containers is a pain. Therefore, you need a solution that manages these containers automatically. So, brace yourself: the solution is called a “container orchestrator”. Yes, container orchestrators are your friends when it comes to managing containers. They automate all that work for you! You’ve probably heard of a technology called Kubernetes. Kubernetes is a container orchestrator, and it is one of the main solutions for managing a bunch of containers automatically.
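As a small taste of what an orchestrator does, the sketch below asks Kubernetes to keep three replicas of a container running; Kubernetes then restarts or reschedules them automatically if they die. The deployment name and image (`myapp`, `myuser/myapp:1.0`) are hypothetical, and the script assumes `kubectl` is configured against a cluster:

```shell
#!/bin/sh
# Hypothetical Kubernetes sketch: declare the desired state, and let the
# orchestrator keep it true. Assumes kubectl points at a working cluster;
# each step is guarded so the sketch degrades gracefully otherwise.
DEPLOY="myapp"
if command -v kubectl >/dev/null 2>&1; then
  # Create a deployment from the (made-up) myuser/myapp image.
  kubectl create deployment "$DEPLOY" --image=myuser/myapp:1.0 || true
  # Ask for three replicas; Kubernetes keeps this count even if pods crash.
  kubectl scale deployment "$DEPLOY" --replicas=3 || true
  # Inspect the pods the orchestrator is managing for us.
  kubectl get pods -l app="$DEPLOY" || true
fi
echo "$DEPLOY"
```

Notice that you never start the containers yourself: you state how many you want, and the orchestrator does the managing.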
Kubernetes has a lot of features, but if you find them insufficient, you can also install other software that integrates easily with Kubernetes to please your container needs. Extra features you might want include monitoring dashboards, a user-friendly UI, CI/CD and so on. Luckily, there are platforms that ship with some of these extras built in on top of Kubernetes, such as OpenShift or ICP. I’d love to keep talking about container orchestrators, but that’s for another time!
So, there you go! Hopefully you’ve learned more about Docker and containers. And if you’d like to know more, I recommend just trying out Docker yourself! It’s very easy and fun! If you want to get started but need some help, check out the following small tutorial:
Getting started with Docker (CentOS 7)

# Install and start Docker
yum install docker -y
systemctl start docker
# List the available Docker commands
docker help
# Pull the CentOS image and start an interactive container in it
docker pull centos
docker run -it --rm centos /bin/bash
Convinced of the benefits of containers but you don’t have the technical expertise in-house? FlowFactor is here to help!