Why you should use Docker and containers

A book published in 1981, called Nailing Jelly to a Tree, describes software as “nebulous and difficult to get a firm grip on.” That was true in 1981, and it is no less true nearly four decades later. Software, whether it is an application you bought or one that you built yourself, remains hard to deploy, hard to manage, and hard to run.

Docker containers provide a way to get a grip on software. You can use Docker to wrap up an application in such a way that its deployment and runtime issues—how to expose it on a network, how to manage its use of storage and memory and I/O, how to control access permissions—are handled outside of the application itself, and in a way that is consistent across all “containerized” apps. You can run your Docker container on any OS-compatible host (Linux or Windows) that has the Docker runtime installed.
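To make that concrete, here is a minimal sketch of what such encapsulation looks like. It assumes a hypothetical Node.js web app with an entry point of server.js that listens on port 3000; the image name my-app is likewise invented for the example.

    # Dockerfile: bundle the app, its dependencies, and its runtime into one image
    # Start from a small base image that already contains the Node.js runtime
    FROM node:alpine
    WORKDIR /app
    # Copy the application code in and install its dependencies inside the image
    COPY . .
    RUN npm install
    # Document the port the app listens on, and name the process the container runs
    EXPOSE 3000
    CMD ["node", "server.js"]

Building and running the image shows how the runtime concerns live outside the application itself:

    docker build -t my-app .
    docker run -d -p 8080:3000 --memory 256m --read-only my-app

Here -p 8080:3000 exposes the app on the host’s port 8080, --memory caps its RAM, and --read-only mounts the container’s filesystem read-only. All of these are standard docker run flags; none of them require changing a line of application code, and the same flags work the same way for any containerized app.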

Docker offers many other benefits besides this handy encapsulation, isolation, portability, and control. Docker containers are small (megabytes, versus the gigabytes of a typical virtual machine image). They start almost instantly. They have their own built-in mechanisms for versioning and component reuse. And they can be easily shared via the public Docker Hub or a private registry.
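For instance, versioning and sharing are part of the ordinary Docker workflow. Reusing the hypothetical image from above and a made-up Docker Hub account name, myuser:

    docker tag my-app myuser/my-app:1.0    # stamp the image with a version tag
    docker push myuser/my-app:1.0          # publish it to Docker Hub (or a private registry)
    docker pull myuser/my-app:1.0          # any Docker host can fetch that exact version
    docker history myuser/my-app:1.0       # list the image's layers

Because images are built from cached, shared layers, a host that already holds the base image downloads only the few megabytes unique to the app.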

In this article I’ll explore how Docker containers make it easier to both build and deploy software—the issues containers address, how they address them, when they are the right answer to the problem, and when they aren’t.

Before Docker containers

For many years now, enterprise software has typically been deployed either on “bare metal” (i.e. installed on an operating system that has complete control over the underlying hardware) or in a virtual machine (i.e. installed on an operating system that shares the underlying hardware with other “guest” operating systems). Naturally, installing on bare metal made the software painfully difficult to move around and difficult to update—two constraints that made it hard for IT to respond nimbly to changes in business needs.

Then virtualization came along. Virtualization platforms (also known as “hypervisors”) allowed multiple virtual machines to share a single physical system, each virtual machine emulating the behavior of an entire system, complete with its own operating system, storage, and I/O, in an isolated fashion. IT could now respond more effectively to changes in business requirements, because VMs could be cloned, copied, migrated, and spun up or down to meet demand or conserve resources.