Containers have been one of the key drivers behind the current DevOps revolution. They offer a lightweight, portable and cost-effective alternative to virtual machines (VMs). And they provide a simpler and more convenient way to package and deploy modern applications.
The first steps in the evolution of containers date back as far as 1979—with the development of Unix V7, which introduced a way of isolating processes using the chroot (change root) command.
Adoption was initially slow. But all that changed with the launch of Docker in 2013. The platform made containerized applications far easier to deploy and maintain. And, as a result, the use of containers rocketed.
But what exactly are containers? And what advantages do they offer over traditional methods of hosting your applications?
This introduction explains the key concepts and benefits of the technology. It also provides additional resources where you can learn about containers in more detail.
What Are Containers?
Containers are an application deployment technology that performs a similar role to virtual machines (VMs). Just like traditional virtualization, containers provide isolated environments for your applications. However, they use a different method of partitioning infrastructure resources.
Whereas VMs use a hypervisor to emulate fully fledged guest operating systems, containers share the kernel of the host operating system with other containers.
In addition, they offer a much more streamlined environment for your workloads, as they do away with a full-blown operating system and instead provide only the resources your application actually needs, such as binaries, libraries, dependencies and code.
As a result, containers can significantly lower the infrastructure footprint of your applications. They also offer better performance than VMs. And they’re able to stop and start more quickly, making them more responsive to fluctuating scaling requirements.
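As a rough illustration of how little a container needs to carry, here is a sketch of an image definition for a small Python web service. The service and file names are hypothetical; only a slim language runtime, the declared dependencies and the application code end up in the image, with no full guest operating system.

```dockerfile
# Hypothetical minimal image for a small Python web service.
# Start from a slim runtime rather than a full OS image.
FROM python:3.12-slim

WORKDIR /app

# Install only the dependencies the application actually needs.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself.
COPY app.py .

CMD ["python", "app.py"]
```

Built with a command such as `docker build -t mysvc .`, the resulting image contains just the interpreter, the listed packages and the code, which is what keeps its footprint and start-up time so small.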
Containers decouple the application from the underlying infrastructure. This makes life easier for developers, as they can focus their efforts on writing code rather than the environment in which it will be hosted.
For example, they can replicate containers on different servers with different configurations, provided each server operating system uses the same Linux kernel (or one that’s compatible with the container environment). This allows a team of coders to work collaboratively on a project regardless of the host environment each of them is using.
And, because of their compact design, containers are also easy to incorporate into Continuous Integration (CI) and Continuous Delivery (CD) workflows.
Containers are best suited to modern cloud applications, which are based on a distributed architecture of loosely coupled microservices.
This approach offers a number of advantages over one large monolithic program.
For example, you can replicate microservices across a cluster of VMs to improve fault tolerance. That way, if a VM fails, the application will continue to function, falling back on the replicated microservices running on the other VMs in the cluster.
Similarly, containers can help eliminate maintenance downtime, as you can patch or update the code and operating environment of one container without affecting the others in your cluster.
Interested in Learning More about Microservices Architecture?
Want to know more about microservices and how to migrate your applications to containerized infrastructure?
Our guide to the Role of Containers in a Microservice Architecture will show you how.
Key Container Concepts
The following concepts are the most important to understanding how a container platform like Docker works in practice.
Container Engine
The application you install on your host machine to build, run and manage your containers. It is the core of your installation and brings all other components of your container system together.
Container Image
A read-only template for creating actual running containers. It bundles together all the essentials required to configure a fully operational container environment.
An image is made up of a series of layers, each capturing a set of files and configuration changes. These layers form a series of intermediate images, built one on top of the other in stages.
Because container images are read only, they offer all the reliability and consistency of modern immutable infrastructure, avoiding issues such as configuration drift that are common in the traditional server model. If you need to make changes to your container environments, you simply swap your previous image for an updated one and launch your replacement containers from it.
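The layered, immutable model can be seen in an ordinary image build. In the sketch below (the site content and tags are hypothetical), each instruction produces its own read-only layer; to change the deployment you rebuild and retag the image rather than modifying running containers.

```dockerfile
# Layer 1: the parent image.
FROM ubuntu:24.04

# Layer 2: installed packages.
RUN apt-get update && apt-get install -y --no-install-recommends nginx

# Layer 3: your own content.
COPY site/ /var/www/html/

CMD ["nginx", "-g", "daemon off;"]
```

To roll out an update, you might build this as, say, `mysite:1.1`, then launch replacement containers from the new image, leaving the old image untouched as an instant rollback target.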
Parent Image
The first layer and starting point of your container image. It provides the basic building blocks for your container environments and the foundations upon which all other layers are built.
Parent images are very often a stripped-down Linux distribution. But they can also be an application framework or even a full application stack, such as a ready-to-use content management system (CMS).
You can import preconfigured parent images from a container registry service, such as Docker Hub or Google Container Registry. Alternatively, you can use one of your own existing images as your parent image.
Base Image
Essentially an empty first layer, which allows you to build your container images from scratch. Base images are generally intended for more advanced users, who want complete control over every part of their image.
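In Docker, building from an empty base image is expressed with the reserved name scratch. The sketch below assumes a statically linked binary (the name hello is hypothetical), since an empty image provides no system libraries for the program to link against.

```dockerfile
# Start from an empty base image: the filesystem contains
# nothing except what you copy in yourself.
FROM scratch

# A statically linked binary is the entire image.
COPY hello /hello

ENTRYPOINT ["/hello"]
```

Images built this way can weigh in at only a few megabytes, which is why the approach is popular for compiled languages such as Go and Rust.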
Container
A live, running instance of a container image. A running container basically consists of:
- The image from which it was launched.
- A top writable layer, known as the container layer, which is used to store any changes made to the container throughout its runtime.
This layered structure is key to container efficiency, as it allows like-for-like containers to share access to the same underlying image while each maintains its own individual state.
Container Image Hygiene
With the underlying image making up so much of the anatomy of a live container, image hygiene is absolutely essential to the security and performance of your containerized deployments.
Start by keeping your images lean: the larger the image, the larger the attack surface and the more bloat there is to slow down performance.
So, when you pull your parent images from a container registry, avoid automatically reaching for the "latest" tag of the image you're looking for.
Instead, look for suitable alternatives with a lower image size. You can then add any additional packages and dependencies to your own image builds as you need them.
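For instance, rather than pulling a default tag, you might pin a small, explicitly versioned variant and layer on only what you need. The choice of Alpine below is just one example of a lower-footprint parent image.

```dockerfile
# Avoid the implicit, ever-changing default:
#   FROM ubuntu:latest
# Prefer a small, explicitly versioned variant instead:
FROM alpine:3.20

# Add only the packages your application actually requires.
RUN apk add --no-cache python3
```

Pinning the tag also makes builds reproducible: everyone on the team builds from the same parent image, not whatever "latest" happens to point at today.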
You should be able to find the compressed size of each image listed on your chosen container registry, such as Docker Hub.
However, your image optimization shouldn’t stop there, as you can streamline images even further by weeding out unnecessary artifacts during your own build process.
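One common way to weed out build-time artifacts is a multi-stage build, where the final image keeps only the finished output. The sketch below assumes a Go application; the paths and module layout are hypothetical.

```dockerfile
# Stage 1: a build environment with the full toolchain.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Stage 2: the final image keeps only the compiled binary;
# the compiler, sources and intermediate artifacts are left behind.
FROM scratch
COPY --from=build /bin/app /app
ENTRYPOINT ["/app"]
```

The toolchain stage can be hundreds of megabytes, but none of it ships: only what you explicitly COPY into the final stage ends up in the deployed image.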
Learn How to Create Your Own Docker Images
In our Beginner’s Guide to Understanding and Building Docker Images we take you step by step through the process of creating a Docker image.
And, finally, as part of your defense against container security exploits, you should make use of image vulnerability scanning tools. These include built-in offerings provided by container registry services and also third-party scanners, such as Anchore and Clair.
Ideally, these scans should be fully automated and integrated into your CI and CD workflows, as this will help avoid manual oversights and speed up development.