If you plan on going into cloud computing, heck, even any IT role for that matter, you just have to know what a Docker container is and what it does.
What is a Docker container? Well, before we can define what a Docker container actually is, we first have to understand what it is replacing.
Docker containers are essentially replacing virtual machines. Usually, a single physical machine runs one base operating system (Windows, macOS, Linux, etc.).
What if you wanted to use another OS? Before virtual machines, if you wanted to run two different operating systems, you needed two physical machines
to run them. That's where virtualization comes in.
Virtual machines are run on something called a hypervisor (VMware ESXi, for example). What hypervisors allow us to do is take a portion of our server's resources and dedicate it
to a virtual machine, or guest operating system.
So what virtual machines do is essentially virtualize the hardware. What Docker does is virtualize at the operating system level. Instead of using a hypervisor,
you have a single OS on the machine and then Docker installed on top of that. Here, Docker plays much the same role as the hypervisor, except
it carves the OS into isolated containers rather than carving up the hardware. Like virtual machines, these containers can run their own distribution's user space
(Ubuntu, Debian, etc.), but unlike virtual machines they all share the host's kernel. That isolation and security allow you to run many containers simultaneously on a given host.
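To make that difference concrete, here is a minimal sketch using the Docker SDK for Python (the `docker` package, assuming a local Docker daemon that can pull images): two containers built from different distributions each get their own user space, yet both report the same kernel release, because that kernel belongs to the host.

```python
import docker

client = docker.from_env()

# Run throwaway containers built from two different distributions. Each gets
# its own user space (its own libraries, package manager, filesystem)...
ubuntu_kernel = client.containers.run("ubuntu:22.04", "uname -r", remove=True)
debian_kernel = client.containers.run("debian:12", "uname -r", remove=True)

# ...but both report the same kernel release, because containers share the
# host's kernel instead of booting their own the way a virtual machine does.
print(ubuntu_kernel.decode().strip())
print(debian_kernel.decode().strip())
```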
Docker containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host.
Containers often deliver both an application and its configuration, meaning that a sysadmin doesn't have to spend as much time getting a containerized application
to run as they would with an application installed from a traditional source. Docker Hub and Quay.io are registries offering images for use by container engines.
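As a rough sketch of what that looks like in practice, here is how you might pull the official nginx image from Docker Hub and run it with the Docker SDK for Python; the host never needs nginx installed, only Docker. The `demo-nginx` name and the 8080 port are arbitrary choices for this example.

```python
import docker

client = docker.from_env()

# Pull the official nginx image from Docker Hub; the image already bundles
# the application binary, its libraries, and a default configuration.
client.images.pull("nginx", tag="alpine")

# Start it in the background and map container port 80 to host port 8080.
web = client.containers.run(
    "nginx:alpine",
    detach=True,
    name="demo-nginx",
    ports={"80/tcp": 8080},
)

web.reload()
print(web.status)   # e.g. "running"

# Tear it down once we're done with it.
web.stop()
web.remove()
```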
Containers are desired because of their ability to be killed gracefully and respawned when load balancing demands it. They're designed to seamlessly appear and
disappear, whether the container's demise is caused by a crash or because it is simply no longer needed. Because containers are meant to be short-lived and to spawn new
instances often, it is expected that monitoring and managing them is not done by a person in real time, but is instead automated by something like Kubernetes (K8s).
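Here is a deliberately tiny, toy version of that idea, again with the Docker SDK for Python: watch a container and, when it dies, start a replacement. Real orchestrators like Kubernetes do this (and far more) automatically; this sketch only exists to show the respawn pattern.

```python
import docker

client = docker.from_env()

def run_worker():
    # A short-lived container standing in for one instance of an application.
    return client.containers.run("alpine:3.19", ["sleep", "5"], detach=True)

worker = run_worker()
for _ in range(3):
    result = worker.wait()      # blocks until the container exits
    worker.remove()             # clean up the dead instance
    print("worker exited with status", result["StatusCode"], "- respawning")
    worker = run_worker()       # spawn a fresh instance in its place
```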
One of the great things about open source is that you have a choice in what technology you use to accomplish a task. The Docker Engine can be useful for lone developers
who need a lightweight, clean environment for testing, without a need for complex orchestration.
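As one hedged example of that workflow, here is how a developer might run a test suite inside a fresh, disposable container with the Docker SDK for Python. The `/path/to/app` mount and the unittest command are placeholders for your own project.

```python
import docker

client = docker.from_env()

# Run the test suite in a fresh container; nothing is installed on the host
# and nothing lingers afterwards. /path/to/app is a placeholder for your
# project directory. A non-zero exit (failing tests) raises ContainerError.
output = client.containers.run(
    "python:3.12-slim",                         # clean, known-good base image
    ["python", "-m", "unittest", "discover"],
    volumes={"/path/to/app": {"bind": "/app", "mode": "ro"}},
    working_dir="/app",
    remove=True,
)
print(output.decode())
```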
The effort to ensure open standards prevail is ongoing, so the important long-term strategy for your container solution should be to stick with projects that respect
and foster open source and open standards. Proprietary extras may seem appealing at first, but as is usually the case, you lose the flexibility of choice once you commit
your tools to a product that fails to allow for migration.