1. Introduction
2. Basic Structure of a Dockerfile
3. Practical: Creating an Ubuntu-Based Dockerfile
4. Building and Verifying Docker Images
5. Advanced: Building a Python Environment
6. Common Issues and Troubleshooting
7. Summary
8. FAQ (Frequently Asked Questions)
1. Introduction
What Are Docker and Dockerfiles?
In recent years, Docker has rapidly gained popularity as an efficient way to streamline development environments and application deployment. Docker packages applications and their dependencies into a single unit called a “container,” allowing them to run consistently across different environments.
To build these Docker containers, a blueprint called a Dockerfile is required. A Dockerfile is a text file that defines the base operating system image, installed software, environment variables, and other configuration details. Developers can use it to automatically build customized environments.
Why Use Ubuntu as the Base Image?
When creating a Dockerfile, the first step is selecting a base operating system image. Among the many options available, Ubuntu is one of the most popular. Ubuntu is a Debian-based Linux distribution known for its ease of use and flexible environment setup supported by a vast package ecosystem.
Ubuntu-based Dockerfiles offer several advantages:
- Extensive official and community documentation, resulting in a low learning curve
- Easy installation of packages and tools using APT
- Officially provided lightweight and minimal images (such as ubuntu:20.04 and ubuntu:24.04)
Purpose of This Article and Target Audience
This article focuses on the keyword “Dockerfile Ubuntu” and explains how to create Ubuntu-based Dockerfiles in a way that is easy for beginners to understand.
It covers everything from the basic structure of a Dockerfile to step-by-step instructions for building an Ubuntu environment, examples of setting up application environments such as Python, and common errors with their solutions.
This article is recommended for:
- Those who want to build environments using Dockerfiles for the first time
- Developers who want to create reproducible development environments on Ubuntu
- Anyone who wants to deepen their understanding, including troubleshooting techniques
2. Basic Structure of a Dockerfile
What Is a Dockerfile and What Is Its Role?
A Dockerfile is like a recipe for creating Docker images. It defines which base operating system to use, what software to install, and how to configure the environment.
By running the docker build command based on this file, you can easily create highly reproducible development and runtime environments.
Benefits of using Dockerfiles:
- Automated environment setup (no need for manual repetition)
- Eliminates environment inconsistencies in team development
- Easy integration into CI/CD pipelines
Commonly Used Dockerfile Instructions
A Dockerfile consists of multiple instructions (directives). The following are some of the most commonly used ones. By combining them appropriately, you can build an Ubuntu-based Dockerfile.
| Instruction | Description |
|---|---|
| FROM | Specifies the base Docker image (e.g., FROM ubuntu:24.04) |
| RUN | Executes shell commands, typically for installing packages |
| COPY | Copies local files into the image |
| ADD | Similar to COPY, but also supports URLs and archive extraction |
| WORKDIR | Sets the working directory |
| ENV | Defines environment variables |
| CMD | Defines the default command executed at container startup (can be overridden) |
| ENTRYPOINT | Defines a command that is always executed at container startup |
Minimal Ubuntu-Based Dockerfile Example
The following is a very basic example of a Dockerfile using Ubuntu as the base image.
```dockerfile
FROM ubuntu:24.04

RUN apt-get update && apt-get install -y \
    curl \
    vim

CMD ["/bin/bash"]
```

This Dockerfile uses Ubuntu 24.04 as the base image, installs the curl and vim utilities, and launches a Bash shell when the container starts.
Selecting the Appropriate Ubuntu Tag
Ubuntu Docker images are published in the official Docker Hub repository. While specifying ubuntu:latest will use the most recent version, explicitly pinning a version is recommended.
For example:
- ubuntu:22.04 (LTS: Long-Term Support, focused on stability)
- ubuntu:24.04 (latest LTS, focused on newer features)
Choose the version based on whether stability or new features are your priority.
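If you want to confirm exactly which Ubuntu release a given tag resolves to, one quick check (a small sketch, with the tag name as the only assumption) is to run a throwaway container and read its release file:

```bash
# Print the release information of the image behind a given tag;
# --rm removes the temporary container as soon as the command finishes.
docker run --rm ubuntu:24.04 cat /etc/os-release
```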
3. Practical: Creating an Ubuntu-Based Dockerfile
Installing Required Packages in an Ubuntu Environment
When building an Ubuntu environment using a Dockerfile, it is often necessary to install additional packages. For example, the following utilities are commonly used when setting up a development environment:
- curl: For downloading files and testing APIs
- vim: A lightweight text editor
- git: Version control system
- build-essential: Essential tools for building C/C++ programs
To install these packages in a Dockerfile, use the RUN instruction.
```dockerfile
FROM ubuntu:24.04

RUN apt-get update && apt-get install -y \
    curl \
    vim \
    git \
    build-essential
```

By running apt-get update first, you ensure that the latest package lists are retrieved before installation.
Configuring Non-Interactive Installation
On Ubuntu, apt-get install may sometimes require user input. However, interactive operations are not possible during Docker builds. To avoid this, it is recommended to set an environment variable and enable non-interactive mode.
```dockerfile
ENV DEBIAN_FRONTEND=noninteractive
```

This suppresses prompts such as locale or timezone selection and allows installations to proceed smoothly.
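One caveat: a variable set with ENV stays defined in the finished image and in every container started from it. If you only need the setting while the image is being built, a commonly used alternative is to declare it as a build argument instead; a minimal sketch (tzdata is used here only as an example of a package known for interactive prompts):

```dockerfile
# ARG values are visible only during the build, so the noninteractive
# setting does not persist into running containers.
ARG DEBIAN_FRONTEND=noninteractive

# tzdata normally asks timezone questions during installation;
# with the argument above it installs silently.
RUN apt-get update && apt-get install -y tzdata \
    && rm -rf /var/lib/apt/lists/*
```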
Reducing Image Size by Removing Unnecessary Cache
When using APT, downloaded temporary files (cache) may remain in the image, increasing its final size. You can reduce the image size by removing the cache as shown below:
```dockerfile
RUN apt-get update && apt-get install -y \
    curl \
    vim \
    && rm -rf /var/lib/apt/lists/*
```

Combining multiple commands into a single RUN instruction also helps prevent unnecessary increases in image layers.
Best Practices for Writing Dockerfiles
In real-world development environments, the following Dockerfile best practices are widely recommended:
- Combine RUN instructions whenever possible to reduce the number of layers
- Explicitly define versions and settings using ENV
- Use comments to clearly describe the purpose of each step
- Avoid leaving unnecessary files by using rm and --no-install-recommends
Example:
```dockerfile
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*
```

This approach results in a lighter and more maintainable Dockerfile.
4. Building and Verifying Docker Images
Building a Docker Image from a Dockerfile
Once your Dockerfile is ready, the next step is to build a Docker image. This is done using the docker build command. Run the following command in the directory containing your Dockerfile:
```bash
docker build -t my-ubuntu-image .
```

- The -t option assigns a name (tag) to the image. In this example, the image is named my-ubuntu-image.
- The dot (.) refers to the current directory containing the Dockerfile.
Docker will read the instructions in the Dockerfile sequentially and build the image accordingly.
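The command also accepts a few options worth knowing; the image name, version tag, and file name below are examples only:

```bash
# Tag the image with an explicit version instead of the default "latest",
# and point to a specific Dockerfile with -f (useful when a project
# contains more than one).
docker build -t my-ubuntu-image:1.0 -f Dockerfile .
```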
Checking the Built Docker Image
After the image has been built successfully, you can verify it using the following command:
```bash
docker images
```

This displays a list of Docker images stored locally, including the following information:
- REPOSITORY (image name)
- TAG
- IMAGE ID (unique identifier)
- CREATED (creation date)
- SIZE
Example:
```
REPOSITORY          TAG       IMAGE ID       CREATED         SIZE
my-ubuntu-image     latest    abcd1234abcd   5 minutes ago   189MB
```

This confirms that the image has been registered correctly.
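If you want to see how each Dockerfile instruction contributed to that size, docker history lists the layers of an image (the image name follows the example above):

```bash
# Show each layer of the image along with the instruction that created it
# and the space it occupies.
docker history my-ubuntu-image
```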
Running a Docker Container for Verification
To verify that the created image works as expected, start a Docker container using the following command:
```bash
docker run -it my-ubuntu-image
```

- The -it option launches an interactive terminal session.
- If successful, a Bash prompt will appear, indicating that you are inside the Ubuntu container.
Inside the container, you can verify installed tools with commands such as:
```bash
curl --version
vim --version
```

If these commands work correctly, your Dockerfile is properly configured.
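You can also run the same checks without opening an interactive shell. The sketch below runs a single command in a disposable container built from the image above:

```bash
# --rm removes the container as soon as the command finishes,
# so nothing is left behind after the check.
docker run --rm my-ubuntu-image curl --version
```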
Cleaning Up Unused Images and Containers
Repeated builds and experiments may leave unused Docker images and containers on your system. It is recommended to clean them up periodically using the following commands:
- Remove stopped containers:

```bash
docker container prune
```

- Remove unused images:

```bash
docker image prune
```

- Remove all unused data (use with caution):

```bash
docker system prune
```

These operations help save disk space and prevent potential issues.
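Before pruning, it can be helpful to see how much space Docker is actually using:

```bash
# Summarize disk usage by images, containers, local volumes, and build cache.
docker system df
```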
5. Advanced: Building a Python Environment
Enabling Python in an Ubuntu-Based Dockerfile
When building an Ubuntu environment using a Dockerfile, adding a Python runtime environment enables a wide range of use cases, including development, testing, and machine learning. Although Python is preinstalled on many Ubuntu systems, the official Docker images are kept minimal and usually do not include it, so it is common practice to install and configure it explicitly for better version and package management.
Installing Python Using APT
The simplest approach is to install Python using APT packages. Below is an example:
```dockerfile
FROM ubuntu:24.04

RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*
```

This method provides a stable system Python version (such as Python 3.10 or 3.12, depending on the Ubuntu release). You can also install additional Python packages using the pip command.
Managing Python Versions with pyenv
If you need a specific Python version or want to switch between multiple versions, using pyenv is highly recommended.
The following example shows how to install Python 3.11.6 using pyenv in a Dockerfile:
```dockerfile
FROM ubuntu:24.04

ENV DEBIAN_FRONTEND=noninteractive

# Build dependencies required to compile Python from source via pyenv
RUN apt-get update && apt-get install -y \
    git \
    curl \
    make \
    build-essential \
    libssl-dev \
    zlib1g-dev \
    libbz2-dev \
    libreadline-dev \
    libsqlite3-dev \
    wget \
    llvm \
    libncurses5-dev \
    libncursesw5-dev \
    xz-utils \
    tk-dev \
    libffi-dev \
    liblzma-dev \
    && rm -rf /var/lib/apt/lists/*

# Install pyenv (note: $HOME is not expanded in ENV instructions,
# so the root user's home directory is written out explicitly)
ENV PYENV_ROOT=/root/.pyenv
ENV PATH="$PYENV_ROOT/shims:$PYENV_ROOT/bin:$PATH"
RUN git clone https://github.com/pyenv/pyenv.git "$PYENV_ROOT"
RUN echo 'eval "$(pyenv init --path)"' >> /root/.bashrc

# Install a specific Python version and make it the default
RUN pyenv install 3.11.6 && pyenv global 3.11.6
```

This setup provides a flexible and well-controlled Python environment.
Managing Packages with requirements.txt
Most real-world projects require multiple Python libraries. These dependencies are commonly managed using a requirements.txt file.
First, create a requirements.txt file in your project root:
```text
flask==2.3.2
requests>=2.25.1
pandas
```

Then reference it in your Dockerfile as follows:
```dockerfile
COPY requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install --no-cache-dir -r requirements.txt
```

This allows all required libraries to be installed at once and significantly improves environment reproducibility.
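To complete the picture, the same Dockerfile could go on to copy the application itself and define a start command. A minimal sketch, assuming the project contains an app.py entry point:

```dockerfile
# Copy the rest of the project into the working directory set above.
COPY . /app

# Start the (hypothetical) application when the container runs.
CMD ["python3", "app.py"]
```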
Best Practices
- When using Python, creating a virtual environment with virtualenv or venv helps prevent dependency conflicts (see the sketch after this list)
- Using cache suppression options such as --no-cache-dir reduces Docker image size
- Running pip install --upgrade pip before installing packages can help avoid installation errors
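For the first point, a virtual environment can be created directly inside the image. A minimal sketch, assuming the requirements.txt layout from the previous section:

```dockerfile
FROM ubuntu:24.04

# python3-venv provides the standard-library venv module on Ubuntu.
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3 \
    python3-venv \
    && rm -rf /var/lib/apt/lists/*

# Create the virtual environment and put its bin directory first on PATH,
# so "python" and "pip" in later instructions resolve to it.
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
```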
6. Common Issues and Troubleshooting
Permission Errors
Example:
```
Permission denied
```

This error occurs when copied files lack execution permissions or when the file's owner does not match the user executing it.
Solution:
- Make the file executable:

```dockerfile
RUN chmod +x script.sh
```

- Change file ownership if necessary:

```dockerfile
RUN chown root:root /path/to/file
```

Package Not Found or Installation Failure
Example:
```
E: Unable to locate package xxx
```

This error typically occurs when apt-get update has not been executed or when the package name is incorrect.
Solution:
- Always run apt-get update before installing packages:

```dockerfile
RUN apt-get update && apt-get install -y curl
```

- Verify package names and check for typos
Network-Related Errors
Example:
```
Temporary failure resolving 'archive.ubuntu.com'
```

This error indicates a DNS resolution issue during the build process.
Solution:
- Restarting the Docker daemon may resolve the issue:
```bash
sudo systemctl restart docker
```

- Review Docker's DNS settings by adding DNS servers in /etc/docker/daemon.json:

```json
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
```

Build Using Outdated Cache
Docker uses layer-based caching to speed up builds. As a result, changes to a Dockerfile may not always be reflected immediately.
Solution:
- Rebuild without cache:
```bash
docker build --no-cache -t my-image .
```

Container Exits Immediately or Startup Command Does Not Run
Causes:
- The command specified in CMD or ENTRYPOINT contains an error
- Using CMD ["/bin/bash"] without interactive mode causes the container to exit immediately
Solution:
- Start the container in debug mode:
```bash
docker run -it my-image /bin/bash
```

- Understand the differences between CMD and ENTRYPOINT and use them appropriately
By encountering and resolving these issues, your Dockerfile design skills will steadily improve. When errors occur, carefully read the error messages and identify which instruction and layer caused the problem.
7. Summary
Key Takeaways for Creating Ubuntu-Based Dockerfiles
This article provided a step-by-step explanation of how to build Ubuntu environments using Dockerfiles, covering both fundamental and advanced topics. Let’s review the key points:
- Understanding Dockerfile fundamentals is the first step: instructions such as FROM, RUN, CMD, and ENV enable automated environment creation.
- Ubuntu is a stable and flexible base image: its extensive package ecosystem, large user base, and LTS releases make it ideal for development environments.
- Practical package management lets you install the tools and libraries you need: proper use of apt-get, cache cleanup, and non-interactive installation is essential.
- Building practical environments such as Python is fully supported by Dockerfiles: tools like pyenv, pip, and requirements.txt ensure reproducible setups.
- Troubleshooting skills directly impact stable operations: understanding permissions, networking, and build cache behavior significantly improves productivity.
Next Steps in Dockerfile Learning
Once you are comfortable using Dockerfiles, you can extend your skills beyond development into testing and production deployments. Consider exploring the following topics:
- Managing multi-container setups with Docker Compose
- Integrating with CI/CD tools such as GitHub Actions and GitLab CI
- Working with container orchestration platforms like Kubernetes
8. FAQ (Frequently Asked Questions)
Q1. Which Ubuntu version should I choose in a Dockerfile?
A1. In most cases, choosing an LTS (Long Term Support) release is recommended for stability and long-term maintenance. Versions such as ubuntu:22.04 and ubuntu:20.04 are widely used and supported for five years.
If you need the latest packages or language versions, you may choose a newer release such as ubuntu:24.04, but thorough testing is recommended.
Q2. Why does apt-get install report “package not found”?
A2. The most common reason is failing to run apt-get update beforehand. Without updating the package list, APT cannot locate the requested packages.
Correct example:
```dockerfile
RUN apt-get update && apt-get install -y curl
```

Also ensure that package names are correct and not deprecated (for example, use python3 instead of python).
Q3. How do I set environment variables in a Dockerfile?
A3. Use the ENV instruction to define environment variables that are available during both build time and container runtime.
Example:
```dockerfile
ENV DEBIAN_FRONTEND=noninteractive
```

This is commonly used to suppress interactive prompts during APT installations. Environment variables are also useful for application configuration and API keys.
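Beyond build-time flags, the same instruction can hold application settings. The variable names below are purely illustrative:

```dockerfile
# Hypothetical application configuration; values set with ENV are stored
# in the image metadata and are visible with "docker inspect".
ENV APP_ENV=production
ENV LOG_LEVEL=info
```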
Q4. What is the difference between CMD and ENTRYPOINT?
A4. Both specify commands executed when a container starts, but their behavior differs.
| Item | CMD | ENTRYPOINT |
|---|---|---|
| Overridable | Overridden by arguments passed after the image name in docker run | Not overridden by default (replacing it requires the --entrypoint flag) |
| Use Case | Define a default command | Define a command that must always run |
Example:
CMD ["python3", "app.py"]
# vs
ENTRYPOINT ["python3"]
CMD ["app.py"]In the latter case, you can pass arguments using docker run my-image another_script.py.
Q5. Why are my Dockerfile changes not reflected?
A5. Docker uses build cache, which may cause unchanged layers to be reused even after editing the Dockerfile.
Solution:
```bash
docker build --no-cache -t my-image .
```

This forces a full rebuild and ensures all changes are applied.


