3 Essential Steps to Securing Your Docker Container Deployments

Docker containers provide a more secure environment for your workloads than traditional server and virtual machine (VM) models. They offer a way to break up your applications into much smaller, loosely coupled components, each isolated from one another and with a significantly reduced attack surface.

This can restrict the number of opportunities for hackers to exploit your computer systems and make it more difficult for a breach to spread in the event of an attack.

But, regardless of Docker’s enhanced level of protection, you still need to understand the security pitfalls of the technology and maintain best practices to safeguard your containerized systems.

Much of this will be similar to what you already do for VM-based workloads—such as monitoring container activity, limiting resource consumption of each container environment, maintaining good application design practice, patching vulnerabilities and making sure credentials don’t make their way into your Docker images.
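For instance, resource limits can be applied directly at container launch with standard docker run flags; the image name here is purely illustrative:

$ docker run --memory=256m --cpus=0.5 myapp:latest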

But you’ll also need to take security measures that are very specific to Docker deployments. So the following is a list of three essential steps to securing applications hosted on the container platform.


Let’s start by looking at the first and most important consideration your IT team needs to know right from the outset.

1. Run Containers as a Non-Root User

By default, Docker gives root permission to the processes within your containers, which means they have full administrative access to your container and host environments.

Yet, just as you wouldn’t run your processes as root on a standard Linux server, you shouldn’t run them as root in your containers.

Without due care and attention, developers can easily overlook this default behavior and create insecure images that grant root access by mistake.

This can be a gift to hackers, who could exploit this vulnerability to steal API keys, tokens, passwords and other confidential data or interfere with the underlying host of your container deployments and cause malicious damage to your server system.

Moreover, your DevOps teams could also fall foul of unrestricted access permissions with unintended consequences for your Docker environments. For example, they could inadvertently create images, built from Dockerfile commands with administrative access, that erase data or alter host system settings when they launch a container.

How to Prevent Containers from Running as Root

If you’re unsure what privileges your parent images use, you should force your containers to use a custom user or group identifier with reduced permissions. That way, your container processes will only have access to the resources they need to perform their intended function.

You can do this by either:

➜ Setting up a non-root user in your Dockerfile

First set up a dedicated user or group identifier with only the access permissions your application needs.

Then add the USER Dockerfile directive to specify this user or group for running commands in the image build and container runtime processes.

The following is a very basic Dockerfile example. However, you can repeat the USER statement as many times as necessary, as you may sometimes need to run different processes that require different permission levels.

FROM centos:7
RUN groupadd -g 1000 basicuser && \
    useradd -r -u 1000 -g basicuser basicuser
USER basicuser
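
To confirm the directive takes effect, you could build and run the image; the tag name here is purely illustrative, and whoami should report basicuser rather than root.

$ docker build -t nonroot-demo .
$ docker run --rm nonroot-demo whoami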

➜ Including the --user option in your docker run command.

The --user option in the docker run command overrides any user specified in your Dockerfile. Therefore, in the following example, your container will always run with the least privilege, provided user identifier 1009 also has the lowest permission level.

$ docker run --user 1009 centos:7
However, this method doesn’t address the underlying security flaw of the image itself. Therefore it’s better to specify a non-root user in your Dockerfile so your containers always run securely.

Beware: The Linux kernel doesn’t recognize usernames, so you must specify a numeric user or group identifier instead.
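
For example, you can pass a numeric user and group pair directly, even if no matching account exists inside the image; running id this way should report uid=1009 and gid=1009 rather than root:

$ docker run --rm --user 1009:1009 centos:7 id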

2. Use Your Own Private Registry

A private registry is a fully independent catalog of container images set up by the organization that uses it. You can host it on your own on-premises infrastructure or on a third-party registry service such as Amazon ECR, Azure Container Registry, Google Container Registry, Red Hat Quay and JFrog’s own container registry service.
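
As a brief sketch, publishing to a private registry typically means tagging the image with the registry’s hostname, authenticating and pushing; the hostname and repository path below are purely illustrative:

$ docker tag myapp:1.0 registry.example.com/team/myapp:1.0
$ docker login registry.example.com
$ docker push registry.example.com/team/myapp:1.0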

Private registries give you complete control over how you manage your images and generally offer more advanced features, which can help keep your inventory secure.

They typically include functionality such as:

  • Sophisticated image scanning tools for identifying compromises and unpatched vulnerabilities.
  • Strict governance, such as role-based access control (RBAC) and compliance monitoring.
  • Digital signing, image authentication and other tamper-protection capabilities.
  • Segregated registries for use in development, test and production.

By contrast, public registries, such as Docker Hub, by and large provide only a basic service, where you have to put your trust in an image publisher, who may not adhere to the same high standards of security.

As a result, you could end up with images that contain malicious or outdated code and ultimately live container environments that are wide open to a data breach.


3. Keep Your Images Lean and Clean

The larger the image, the larger the attack surface of your Docker containers.

In the case of a fully fledged VM, you have no choice but to use an entire operating system. But with Docker workloads, your containers only have to provide the resources your application needs.

Choose Minimized Parent Images

First, you should be aware that some images on Docker Hub are more streamlined than others. For example, in the ubuntu repository, some images are more than twice the size of others.

Therefore you shouldn’t just automatically pull the latest image. Ideally, you should look for one that has the lowest footprint and then add any packages and dependencies required to support your application.

Docker Hub shows the compressed size of each image in a repository, as in its listing for the Minimal Ubuntu images.
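For instance, to pull a specific lightweight tag rather than defaulting to the latest image:

$ docker pull ubuntu:18.04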

Once you’ve pulled the image, you can check its actual size using the docker images command.

$ docker images

Then look for the entry for the image you’ve just downloaded, as follows.

REPOSITORY     TAG                 IMAGE ID             CREATED          SIZE
ubuntu        18.04               ccc6e87d482b        4 days ago        64.2MB
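
You can also see how each layer contributes to that total with the docker history command:

$ docker history ubuntu:18.04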

Optimize Images in Dockerfile and .dockerignore

Next, you’ll need to create a Dockerfile to build your own streamlined image for your containers. This will consist of your parent image plus the layers you add on top of it to produce the final build.

As you add these layers, you’ll create artifacts that won’t be a necessary part of your runtime environments. To exclude these, you should set up a .dockerignore file in the root directory from which you intend to build your image.
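
A minimal .dockerignore might look like the following; the entries are illustrative, and yours should reflect whatever build artifacts, local configuration and secrets live in your own project:

.git
*.log
.env
tmp/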

Use Multi-Stage Builds

Finally, another way to keep image sizes down is to use the Docker multi-stage build feature, which is supported by versions 17.05 and higher.

This allows you to use more than one FROM directive in your Dockerfile.

With each new FROM statement, you can use a different parent image that represents a new stage of the build. You can then selectively copy only the artifacts you want from one stage to the next, leaving behind the excess as you build up the layers of your image.

The following Dockerfile is a real-life example of a multi-stage build in practice.


FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
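
You build a multi-stage image exactly as you would a single-stage one, and only the final stage ends up in the resulting image; the tag below is illustrative:

$ docker build -t href-counter:latest .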

Verify Image Integrity

Another way to improve your container security posture is to verify images before pulling them from Docker Hub.

The Docker daemon defaults to pulling Docker images without checking their integrity. However, with the release of Docker Engine 1.8, the platform introduced a new feature, Docker Content Trust, which supports digital signing and authentication of images.

This service allows you to add a cryptographic signature to the images you publish to a remote registry. At the same time, whenever you attempt to pull an image, it automatically verifies the digital signature. That way, you can be sure the owner of the image is who they claim to be.

To activate Docker Content Trust, you’ll need to set the following variable with the Linux export command.

$ export DOCKER_CONTENT_TRUST=1
This will only set the feature in your current shell. If you want to enable Docker Content Trust persistently across the board then you’ll need to set it up in a default environment variable shared by all users.
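
One common approach, assuming a typical Linux setup, is to add the export to a system-wide profile script; the exact path and mechanism vary by distribution:

$ echo 'export DOCKER_CONTENT_TRUST=1' | sudo tee /etc/profile.d/docker-content-trust.sh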

Although Docker Content Trust cannot verify the quality of images, it can help keep your images clean by preventing compromises while in transit or through unauthorized access to the repositories where they’re stored.
