How to Build Docker Images


Definition

A Docker image is a read-only template containing instructions to create a container, packaging an application and its preconfigured environment. In this article, you will discover the essentials of understanding and building Docker images and gain step-by-step guidance and best practices for efficient, secure container builds.

Summary

  • A Docker image is a read-only template containing instructions to create a container, packaging an application and its preconfigured environment. It’s the essential starting point for using Docker.
  • Images are built from layers, where each layer is an intermediate image, and changes to a lower layer require rebuilding that layer and all layers above it. To optimize builds, layers that change often, like application code, should be placed as high up the stack as possible.
  • The two primary methods for creating a Docker image are the Interactive Method (quickest for testing but difficult for lifecycle management) and the Dockerfile Method (systematic, flexible, and the choice for enterprise-grade, production deployments).
  • For best practices, utilize multi-stage builds to separate build-time dependencies from the final lightweight runtime image, which reduces the final image size and minimizes security risks.
  • Container registries (like Docker Hub or JFrog Container Registry) are catalogs of storage locations (repositories) where images are stored and shared, with each repository holding related images referenced by different tags to represent versions.

Overview

In this introduction to understanding and building Docker images, we’ll not only take you through the basics of Docker images, but also show you where to find ready-made, off-the-shelf images that will give you a head start in building your own containerized applications, tools, and services.

As a new Docker user, you’ll also need to understand how to build your own custom images. So, we’ll briefly cover how to create Docker images for deploying your code and assembling container-based services. But first, let’s cover the basics and look at the composition of a Docker image in detail.

What is a Docker Image?

A Docker image is a read-only template containing a set of instructions for creating a container that can run on the Docker platform. It provides a convenient way to package up applications and preconfigured server environments, which you can keep for private use or share publicly with other Docker users. Docker images are also the starting point for anyone using Docker for the first time.

Anatomy of a Docker Image

A Docker image is made up of a collection of files that bundle together all the essentials – such as installations, application code, and dependencies – required to configure a fully operational container environment. You can create a Docker image by using one of two methods:

  • Interactive: By running a container from an existing Docker image, manually changing that container environment through a series of live steps, and saving the resulting state as a new image.
  • Dockerfile: By constructing a plain-text file, known as a Dockerfile, which provides the specifications for creating a Docker image.

We’ll cover these two methods in more detail later in this guide. For now, though, let’s focus on the most important Docker image concepts.

Docker Layers

Each of the files that make up a Docker image is known as a layer. These layers form a series of intermediate images, built one on top of the other in stages, where each layer depends on the layer immediately below it. When you make changes to a layer, Docker rebuilds not only that layer, but all layers built on top of it. The hierarchy of your layers is therefore key to efficient lifecycle management of your Docker images: organize the layers that change most often, such as application code, as high up the stack as possible, so that a change involves the least amount of computational work to rebuild the image.
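
As a minimal sketch of this principle (the application paths below are hypothetical), the following Dockerfile installs stable OS packages first and copies frequently changing application code last, so that code edits invalidate only the topmost layers:

FROM ubuntu:22.04
# Stable, rarely changing layer: OS packages
RUN apt-get update && apt-get install -y python3
# Frequently changing layer: application code goes last
COPY ./app /app
CMD ["python3", "/app/main.py"]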

Container Layer

Each time Docker launches a container from an image, it adds a thin writable layer, known as the container layer, which stores all changes to the container throughout its runtime. As this layer is the only difference between a live operational container and the source Docker image itself, any number of like-for-like containers can potentially share access to the same underlying image while maintaining their own individual state.
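
To see how much data a container has written to its writable layer, you can pass the -s (size) flag to docker ps; the SIZE column reports the writable container layer, alongside a virtual size that includes the underlying read-only image:

$ docker ps -s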

Parent Image

In most cases, the first layer of a Docker image is known as the “parent image”. It’s the foundation upon which all other layers are built and provides the basic building blocks for your container environments. You can find a wide variety of ready-made images for use as your parent image on the public container registry, Docker Hub, as well as on a number of third-party services, such as the Google Container Registry. Alternatively, you can use one of your own existing images as the basis for creating new ones. A typical parent image may be a stripped-down Linux distribution or come with a preinstalled service, such as a database management system (DBMS) or a content management system (CMS).

Base Image

In simple terms, a base image is an empty first layer, which allows you to build your Docker images from scratch. Base images give you full control over the contents of images, but are generally intended for more advanced Docker users.
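
For illustration, a minimal Dockerfile built from the empty base image might look like the following sketch, which assumes a statically linked binary named hello exists in your build context:

FROM scratch
COPY hello /
ENTRYPOINT ["/hello"]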

Docker Manifest

Together with a set of individual layer files, a Docker image also includes an additional file known as a manifest. This is essentially a description of the image in JSON format and comprises information such as image tags, a digital signature, and details on how to configure the container for different types of host platforms.
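
In recent Docker versions, you can view an image’s manifest without pulling it by running the docker manifest inspect command:

$ docker manifest inspect ubuntu:latest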

What is a Container Registry vs. Container Repository?

While the terms “registry” and “repository” are often used interchangeably, they represent two distinct, hierarchical concepts in the Docker image storage and sharing ecosystem: the overall service and the specific location for image versions.

Container Registries

Container registries are catalogs of storage locations, known as repositories, where you can push and pull container images. The three main registry types are:

  1. Docker Hub: Docker’s own, official image resource where you can access more than 100,000 container images shared by software vendors, open-source projects, and Docker’s community of users. You can also use the service to host and manage your own private images.
  2. Third-party registry services: Fully managed offerings that serve as a central point of access to your own container images, providing a way to store, manage, and secure them without the operational headache of running your own on-premises registry. Examples of third-party registry offerings that support Docker images include Red Hat Quay, Amazon ECR, Azure Container Registry, Google Container Registry, and the JFrog Container Registry.
  3. Self-hosted registries: A registry model favored by organizations that prefer to host container images on their own on-premises infrastructure – typically due to security or compliance concerns, or lower latency requirements. To run your own self-hosted registry, you’ll need to deploy a registry server. Alternatively, you can set up your own private, remote, and virtual Docker registry.

Container Repositories

Container repositories are the specific physical locations where your Docker images are stored; each repository comprises a collection of related images with the same name. Each of the images within a repository is referenced individually by a different tag and represents a different version of fundamentally the same container deployment. For example, on Docker Hub, mysql is the name of the repository that contains different versions of the Docker image for the popular, open-source DBMS, MySQL.
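
For example, to pull a specific version from that repository, you reference the repository name together with a tag (the 8.0 tag below is illustrative):

$ docker pull mysql:8.0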

How to Create a Docker Image

In this final section, we’ll cover the two different methods of creating Docker images in a little more detail, so you can start putting your knowledge into practice.

Interactive Method

Advantages: Quickest and simplest way to create Docker images. Ideal for testing, troubleshooting, determining dependencies, and validating processes.

Disadvantages: Difficult lifecycle management, requiring error-prone manual reconfiguration of live interactive processes. Easier to create unoptimized images with unnecessary layers.

The following is a simplified set of steps for creating an image interactively:

  • Install Docker and launch the Docker engine
  • Open a terminal session
  • Use the following Docker run command to start an interactive shell session with a container launched from the image specified by image_name:tag_name:

$ docker run -it image_name:tag_name bash

If you omit the tag name, Docker defaults to the latest tag, which by convention identifies the most recent version of the image. If Docker cannot find the image locally, it will pull what it needs to build the container from the appropriate repository on Docker Hub.

In our example, we’ll launch a container environment based on the latest version of Ubuntu:

$ docker run -it ubuntu bash

  • Now configure your container environment by, for example, installing all the frameworks, dependencies, libraries, updates, and application code you need. The following simple example adds an NGINX server:

# apt-get update && apt-get install -y nginx

Next, you’ll need to know the name or ID of your running container instance.

  • Open another Bash shell and type the following docker ps command to list active container processes:

$ docker ps

The sample output below shows our running container with the ID e61e8081866d and the name keen_gauss:

CONTAINER ID   IMAGE    COMMAND   CREATED         STATUS         PORTS   NAMES
e61e8081866d   ubuntu   "bash"    2 minutes ago   Up 2 minutes           keen_gauss

This name is randomly generated by the Docker daemon, but you can identify your container with something more meaningful by assigning your own name using the --name flag in the docker run command.
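
For example, the following command launches a container with the hypothetical name my_ubuntu:

$ docker run -it --name my_ubuntu ubuntu bash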

  • Save your image using the docker commit command, specifying either the ID or the name of the container from which you want to create it:

$ docker commit keen_gauss ubuntu_testbed

In the example above, we supplied the name of our container and called the resulting image ubuntu_testbed.

  • Now, use the Docker images command to see the image you’ve just created:

$ docker images

You should see your new image listed in the results.

REPOSITORY       TAG      IMAGE ID       CREATED         SIZE
ubuntu_testbed   latest   775349758637   5 minutes ago   64.2MB

  • Finally, return to your interactive container shell and type exit to shut it down.

# exit
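
As a quick sanity check, you can launch a new container from your committed image and confirm that the NGINX installation from earlier is present:

$ docker run -it ubuntu_testbed bash
# nginx -v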

 

Dockerfile Method

Advantages: Clean, compact and repeatable recipe-based images. Easier lifecycle management and easier integration into continuous integration (CI) and continuous delivery (CD) processes. Clear self-documented record of steps taken to assemble the image.

Disadvantages: More difficult for beginners and more time-consuming to create from scratch.

The Dockerfile approach is the method of choice for real-world, enterprise-grade container deployments. It’s a more systematic, flexible, and efficient way to build Docker images and the key to compact, reliable, and secure container environments.

In short, the Dockerfile method is a three-step process: create the Dockerfile, add the commands you need to assemble the image, and run the docker build command to build it.

The following table shows you those Dockerfile statements you’re most likely to use:

Command      Purpose
FROM         To specify the parent image.
WORKDIR      To set the working directory for any commands that follow in the Dockerfile.
RUN          To install any applications and packages required for your container.
COPY         To copy over files or directories from a specific location.
ADD          As COPY, but also able to handle remote URLs and unpack compressed files.
ENTRYPOINT   The command that will always be executed when the container starts. If not specified, the default is /bin/sh -c.
CMD          Arguments passed to the ENTRYPOINT. If ENTRYPOINT is not set, the CMD will be the command the container executes.
EXPOSE       To define the port through which to access your container application.
LABEL        To add metadata to the image.

 

Example Dockerfile

# Use the official Ubuntu 18.04 image as the base
FROM ubuntu:18.04
# Install NGINX and curl, then remove the apt cache to keep the layer small
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y nginx curl && \
    rm -rf /var/lib/apt/lists/*

An example of a Dockerfile that builds an image from the official Ubuntu 18.04 parent image and installs NGINX.

Next, we’ll set up a .dockerignore file to list the files and directories in your build context that you want to exclude from the final build.

.dockerignore files play an important role in creating more compact, faster-running containers – by providing a way to prevent sensitive or unnecessary files and directories from making their way into your image builds. Your .dockerignore file should be located in the root directory, known as the build context, from which you intend to build your image. This will be either your current working directory or the path you specify in the Docker build command that we’ll discuss below.
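
As an illustrative sketch, a .dockerignore for a hypothetical project might exclude version control data, local dependencies, logs, and secrets (the entries below are assumptions, not requirements):

.git
node_modules
*.log
.env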

The Docker Build Context

Now use the Docker build command to create your Docker image. Use the -t flag to set an image name and tag:

$ docker build -t my-nginx:0.1 .

In the example above, we ran the command from the same directory that contains the Dockerfile, and the . argument tells the Docker daemon to use the files and folders in the current working directory as the build context.
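
If your Dockerfile lives elsewhere, you can name it explicitly with the -f flag and pass a different context path (the paths below are hypothetical):

$ docker build -t my-nginx:0.1 -f docker/Dockerfile /path/to/context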

Finally, as we saw in the interactive method, you can use the Docker images command to see the image you’ve just created.

$ docker images

REPOSITORY   TAG     IMAGE ID       CREATED          SIZE
my-nginx     0.1     f95ae2e1344b   10 seconds ago   138MB
ubuntu       18.04   ccc6e87d482b   12 days ago      64.2MB

Again, you should see your new image listed in the results.

Exploring Docker Image Best Practices

Building Docker images is not just about getting an image to run — it’s about creating efficient, secure, and maintainable images that work well in production. A few best practices can make a big difference:

  • Optimize image layering
    Place frequently changing components, such as application code, as high in the layer stack as possible. This reduces rebuild time when only small changes are made. Keep lower layers reserved for stable dependencies like OS packages or frameworks.
  • Leverage build cache for faster builds
    Docker caches layers that haven’t changed between builds. By ordering your Dockerfile instructions carefully (installing dependencies before copying app code, for instance), you maximize cache reuse and minimize unnecessary rebuilds.
  • Implement multi-stage builds for efficient images
    Multi-stage builds allow you to separate build dependencies from runtime dependencies. For example, you can use one stage to compile an application and another lightweight stage to run it. This reduces the final image size and eliminates sensitive or unnecessary files from production containers, as shown in the sketch after this list.
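
As an illustrative sketch of the multi-stage pattern (the Go application and file names are assumptions, not part of this article’s example), the following Dockerfile compiles a binary in one stage and copies only that binary into a slim runtime stage:

# Stage 1: build environment with the compiler toolchain
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
# Build a statically linked binary from a hypothetical main package
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: minimal runtime image containing only the compiled binary
FROM alpine:3.19
COPY --from=builder /app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]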

Following these best practices ensures faster builds, smaller images, and fewer security risks — all while improving developer productivity.

Next Steps and Further Learning

Once you’re comfortable building Docker images, you can start exploring advanced techniques and integrations:

  • Deepen your Docker image knowledge
    The Docker documentation covers advanced Dockerfile syntax, automated image scanning, and reproducible build pipelines that go beyond the basics.
  • Explore advanced image building techniques
    Consider using content trust for signed images, reproducible builds using digests, and automated workflows with CI/CD tools to strengthen your container supply chain.
  • Integrate with JFrog for seamless management
    JFrog Artifactory and the JFrog Container Registry provide enterprise-grade capabilities for storing, versioning, and promoting Docker images. With integrated vulnerability scanning, build metadata, and CI/CD pipeline connections, JFrog products help teams manage container images securely at scale.

By applying best practices and extending your toolkit, you’ll be able to manage Docker images not just as code artifacts but as part of a secure, automated, and enterprise-ready development workflow.

Managing Docker Images with JFrog

Mastering the fundamentals of Docker images lays the foundation for building efficient, secure, and scalable containerized applications. Whether you’re experimenting with interactive builds or leveraging Dockerfiles for production-ready workflows, following best practices ensures your images are optimized for performance and security.

To take your container strategy further, the JFrog Software Supply Chain Platform provides enterprise-grade solutions for storing, managing, and securing Docker images at scale—integrated with your CI/CD pipelines and enhanced with vulnerability scanning and metadata insights. With JFrog, your container images aren’t just artifacts; they’re part of a trusted, automated software supply chain.

For more information, please visit our website, take a virtual tour, or set up a one-on-one demo at your convenience.
