- 1. Introduction
- 2. Prerequisites and Preparation
- 3. Installing Docker
- 4. User Permissions and Running Docker Without sudo
- 5. Startup and Operation Verification
- 6. Installing and Using Docker Compose
- 7. Security and Operational Considerations
  - 7-1. The docker Group Has “Effectively Root” Privileges
  - 7-2. Considering Rootless Docker (Advanced Option)
  - 7-3. Cleaning Up Unused Images and Containers
  - 7-4. Avoid Using the “latest” Tag in Production
  - 7-5. Use Official Base Images in Dockerfiles
  - 7-6. Be Careful with Network and Port Exposure
  - 7-7. Log Management During Failures
- 8. Common Issues and Troubleshooting
1. Introduction
When setting up a development environment on Ubuntu, situations where you think “let’s just install Docker for now” have become increasingly common. Web applications, batch processing, test databases, test middleware… If you install these manually every time, a huge amount of time and effort is consumed just for environment setup.
This is where the container virtualization technology Docker becomes extremely useful. With Docker, you can manage not only the application itself but also all required libraries and configurations together as an “image.” Once Docker is installed on Ubuntu, you can easily:
- Launch a new development environment in minutes
- Ensure all team members can reproduce behavior in the same environment
- Recreate a production-like setup locally with ease
These benefits can be enjoyed with minimal effort.
On the other hand, for those using Docker for the first time, there are many common stumbling points:
- Not knowing which installation procedure is actually correct
- Not understanding the difference between Ubuntu’s default repository and Docker’s official repository
- Running into permission errors due to confusion about when to use `sudo`
When you search for “ubuntu install docker,” you will find many articles listing long command sequences, but they often fail to explain why those steps are necessary or what marks a complete installation.
1-1. Goal of This Article
This article is intended for readers who want to install Docker on Ubuntu, and it covers the following key points:
- The currently common procedure for installing Docker on Ubuntu
- A more manageable installation method using the official repository
- How to run the `docker` command without `sudo`
- Post-installation verification and essential basic commands
- An introduction to commonly used tools such as Docker Compose
Rather than simply listing commands, this guide explains why each step is necessary, helping you maintain your environment more easily in the future.
1-2. Target Audience and Prerequisites
This article is intended for readers who:
- Understand basic Ubuntu operations (opening a terminal, using the `apt` command, etc.)
- Are developers or aspiring engineers trying Docker for the first time
- Are considering migrating existing test environments to containers
Advanced Linux administration knowledge is not required. As long as you are comfortable typing commands in a terminal, this guide should be sufficient.
1-3. Article Structure and How to Read It
This article proceeds in the following order:
- Checking prerequisites
- Installation methods (official repository / script-based)
- Permission settings and verification
- Installing Docker Compose
- Troubleshooting and next steps
You may read the article from start to finish, or if Docker is already installed, you can focus only on the “Permissions” or “Compose” sections.
2. Prerequisites and Preparation
Installing Docker itself is relatively simple, but depending on your Ubuntu version or existing environment, there are several points worth checking beforehand. This section summarizes the prerequisites and preparations needed for a smooth installation.
2-1. Supported Ubuntu Versions
Docker works on many Ubuntu versions, but the following LTS releases are most commonly used:
- Ubuntu 22.04 LTS (Recommended)
- Ubuntu 20.04 LTS
- Ubuntu 24.04 LTS (Latest)
LTS (Long Term Support) releases provide long-term stability, making them ideal for maintaining Docker-based development environments.
Non-LTS releases (such as 23.10) can also be used, but LTS versions are generally preferred in professional environments.
2-2. Preinstalled Docker Packages
Ubuntu’s default repository includes a package called docker.io. However, this is not the official Docker package provided by Docker Inc., and updates tend to lag behind. Therefore, installing Docker from the official repository is strongly recommended.
First, check and remove any existing Docker-related packages if necessary:
```shell
sudo apt remove docker docker.io containerd runc
```
If the message indicates that none of these packages are installed, no action is required.
2-3. Updating APT and Installing Required Packages
Before adding Docker’s official repository, update APT and install the required tools:
```shell
sudo apt update
sudo apt install -y ca-certificates curl gnupg
```
These tools are required to add Docker’s GPG key and repository securely.
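As a quick sanity check before moving on, you can confirm the tools are actually on the PATH. A minimal sketch; `have` is a throwaway helper defined here, not a standard command:

```shell
# Minimal sketch: check that the prerequisite tools are on PATH.
# "have" is a throwaway helper defined here, not a standard command.
have() { command -v "$1" >/dev/null 2>&1 && echo "$1: found" || echo "$1: MISSING"; }
have curl   # installed by the apt line above
have gpg    # provided by the gnupg package
```

Note that `ca-certificates` ships certificate files rather than a binary, so it cannot be checked this way.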
2-4. Verifying Administrator Privileges (sudo)
Docker installation requires sudo privileges. If your account does not have sudo access, switch to an administrator account or request permission.
You can verify sudo access with the following command:
```shell
sudo -v
```
If you are prompted for a password and the command succeeds, you are ready to proceed.
2-5. Checking Network Connectivity
Installing Docker requires access to external repositories, so an active internet connection is mandatory. In corporate or proxy environments, GPG key retrieval may fail due to access restrictions.
In such cases, consult your network administrator regarding proxy settings or allowlist configurations.
2-6. Choosing the Installation Method
There are three main ways to install Docker:
- Install via the official Docker repository (Recommended)
- Use the `get.docker.com` installation script (quick and easy)
- Manually download and install Docker `.deb` packages (special cases)
This article focuses primarily on the official repository method, which is the most common and easiest to maintain.
3. Installing Docker
Now let’s install Docker on Ubuntu. Although multiple installation methods exist, this guide focuses on the official Docker repository method, which is the most reliable and widely used in production environments.
This method allows stable upgrades via apt upgrade, making it ideal for long-term use.
3-1. Adding the Official Docker Repository (Recommended)
First, register the official GPG key provided by Docker and add Docker’s repository to APT.
Once this is configured correctly, you can avoid accidentally installing the outdated docker.io package from Ubuntu’s default repository.
3-1-1. Registering the GPG Key
```shell
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
```
- `/etc/apt/keyrings/` is the recommended key storage location on Ubuntu 22.04 and later
- `--dearmor` converts the ASCII-armored key into binary format
This step allows APT to trust the official Docker repository.
3-1-2. Adding the Repository
Next, add Docker’s repository to APT’s source list.
```shell
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo $VERSION_CODENAME) stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
Key points:
- `$(. /etc/os-release && echo $VERSION_CODENAME)` automatically inserts the correct Ubuntu codename (such as `jammy` or `focal`)
- Only the stable repository is added
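To see what the codename substitution resolves to, the same extraction can be sketched against a sample `os-release` snippet (the real command sources `/etc/os-release` directly; `sample` here is made-up data for illustration):

```shell
# Sketch: extract VERSION_CODENAME the way the repository line does,
# but from a sample snippet instead of the real /etc/os-release.
sample='NAME="Ubuntu"
VERSION_CODENAME=jammy'
codename=$(printf '%s\n' "$sample" | sed -n 's/^VERSION_CODENAME=//p')
echo "repository line will use codename: $codename"
```

On a real Ubuntu 22.04 machine this resolves to `jammy`, so the generated entry ends with `... https://download.docker.com/linux/ubuntu jammy stable`.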
3-1-3. Updating Repository Information
After adding the repository, update the APT index.
```shell
sudo apt update
```
At this point, `docker-ce` (Docker Engine) should appear as an installable package.
3-2. Installing Docker Engine
Now install the main Docker packages.
```shell
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
Package roles:
- `docker-ce`: Docker Engine core
- `docker-ce-cli`: Docker command-line interface
- `containerd.io`: core container runtime used by Docker
- `docker-buildx-plugin`: advanced build features such as multi-platform builds
- `docker-compose-plugin`: Docker Compose V2 (the `docker compose` command)
After installation, the Docker daemon starts automatically.
3-3. Verifying the Installation
Check Docker’s runtime status with the following command:
```shell
sudo systemctl status docker
```
If you see `active (running)`, Docker is operating correctly.
Press q to exit the status view.
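For scripts, a non-interactive check is often more convenient than paging through `systemctl status`. A hedged sketch: `is-active` prints a single word and sets the exit code accordingly, and the guard keeps it harmless on machines without systemd (some containers or WSL setups).

```shell
# Sketch: non-interactive daemon check. Guarded so it degrades gracefully
# on machines without systemd.
if command -v systemctl >/dev/null 2>&1; then
  state=$(systemctl is-active docker 2>/dev/null || true)
fi
state=${state:-unavailable}
echo "docker service state: $state"
```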
3-4. Optional: Script-Based Installation for Convenience
Docker also provides an all-in-one installation script.
```shell
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
Advantages:
- Fewer commands and quick setup
- Ideal for simple or disposable environments
Disadvantages:
- Difficult version control
- Often discouraged for production or long-term use
While convenient for beginners, this article prioritizes maintainability and therefore focuses on the official repository method.
3-5. Notes for the Latest Ubuntu Releases (e.g., 24.04)
Immediately after a major Ubuntu release, Docker’s official repository may not yet fully support the new version.
In such cases, verify the following:
- That the GPG key location and format match current specifications
- That `VERSION_CODENAME` is officially supported
- That no signature errors occur during `apt update`
If support is delayed, temporarily using the get.docker.com script may be a practical workaround.
4. User Permissions and Running Docker Without sudo
After installing Docker, you may want to start using the docker command immediately. However, by default, you must prefix every command with sudo.
This behavior is intentional for security reasons, but it is inconvenient for daily development or learning. To resolve this, you can add your user to the docker group, allowing Docker commands to be executed without sudo.
4-1. Why Running Without sudo Matters
On Ubuntu, the Docker daemon (dockerd) runs with root privileges.
Therefore, creating or removing containers via the docker command normally requires root access.
The docker group exists to handle this requirement.
- Users in this group can directly access the Docker daemon
- This enables commands like `docker run` without `sudo`
- This setup is almost essential for development use
Note that the docker group effectively grants privileges close to root, so caution is required in shared environments.
(For personal desktops or WSL2, this is generally not an issue.)
4-2. Adding Your User to the docker Group
Add the currently logged-in user to the docker group:
```shell
sudo usermod -aG docker $USER
```
This appends the user to the group: `-a` means append and `-G` specifies the supplementary group list, so existing group memberships are preserved.
4-3. Applying the Changes
Group membership changes take effect after logging out and logging back in.
To apply the change immediately, you can also run:
```shell
newgrp docker
```
This starts a new shell session with the docker group permissions applied.
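To confirm the change actually took effect, you can inspect the groups of the current shell. A minimal sketch; `id -nG` lists the groups of this process, which is exactly what the Docker socket's permission check sees:

```shell
# Sketch: check whether the docker group is active in the *current* shell.
if id -nG | tr ' ' '\n' | grep -qx docker; then
  in_group=yes
else
  in_group=no
fi
echo "docker group active in this shell: $in_group"
```

If this prints `no` even after running `usermod`, the session still has the old group list; log out and back in, or use `newgrp docker`.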
4-4. Verifying sudo-Free Execution
Now test Docker without sudo:
```shell
docker run hello-world
```
Successful output:
- Hello from Docker!
- The image is downloaded and the message is displayed
If an error occurs, check the following:
- Did you log out and back in after modifying group membership?
- Is `/usr/bin/docker` the binary being used?
- Is the Docker daemon running (`systemctl status docker`)?
4-5. Security Considerations (Important)
The docker group provides powerful privileges that are effectively equivalent to root access.
- Reading arbitrary files
- Mounting host directories into containers
- Performing network operations
- System-level control via the Docker socket
This is acceptable for personal systems, but user management is critical on shared servers.
In such cases, you may consider rootless Docker, which is discussed in later sections.
5. Startup and Operation Verification
Once Docker installation and permission configuration are complete, the next step is to verify that Docker operates correctly.
This section explains how to check the Docker service status and actually run containers.
5-1. Checking the Docker Daemon Status
First, verify that Docker is running correctly in the background.
```shell
sudo systemctl status docker
```
Key status indicators:
- `active (running)` → operating normally
- `inactive` → not running (must be started manually)
- `failed` → configuration or dependency error
If the status is inactive or failed, start Docker with the following command:
```shell
sudo systemctl start docker
```
To ensure Docker starts automatically when the OS boots:
```shell
sudo systemctl enable docker
```
5-2. Verifying Operation with the hello-world Container
The most common way to verify Docker installation is by running the official hello-world image.
```shell
docker run hello-world
```
This command performs the following actions:
- Downloads the image from Docker Hub if it is not present locally
- Starts a container from the image
- Displays a test message and exits
If successful, you will see output similar to the following:
```
Hello from Docker!
This message shows that your installation appears to be working correctly.
```
If this message appears, Docker is installed and functioning correctly.
5-3. Trying Basic Docker Commands
Once basic operation is confirmed, try some commonly used Docker commands.
5-3-1. Listing Docker Images
```shell
docker images
```
This displays a list of images downloaded locally. If `hello-world` appears, everything is working as expected.
5-3-2. Checking Running Containers
```shell
docker ps
```
This command lists currently running containers.
(The hello-world container exits immediately and will not usually appear.)
To display stopped containers as well:
```shell
docker ps -a
```
5-3-3. Running an Official Image Example
To try a simple Nginx web server:
```shell
docker run -d -p 8080:80 nginx
```
- `-d` → run in the background
- `-p` → map host port 8080 to container port 80
Open http://localhost:8080 in your browser to see the default Nginx page.
5-4. Stopping and Removing Containers
You can stop a running container using the following command:
```shell
docker stop <container-id>
```
To remove a container:
```shell
docker rm <container-id>
```
To remove unused images:
```shell
docker rmi <image-id>
```
Remember that dependencies follow the order container → image → volume, so remove them in that order.
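The stop-then-remove cycle can be scripted for all containers created from one image. A hedged sketch: `hello-world` is just an example target, and the `command -v` guard makes the loop a no-op on a machine where Docker is not installed or the daemon is down.

```shell
# Sketch: stop and remove every container created from one image.
image="hello-world"   # example target, change to the image you want to clean up
removed=0
if command -v docker >/dev/null 2>&1; then
  for id in $(docker ps -aq --filter "ancestor=$image" 2>/dev/null); do
    docker stop "$id" >/dev/null 2>&1
    docker rm "$id" >/dev/null 2>&1 && removed=$((removed + 1))
  done
fi
echo "containers removed: $removed"
```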
5-5. Common Causes of Errors
● Permission Errors
Got permission denied while trying to connect to the Docker daemon socket
→ The user is not added to the docker group
● Docker Daemon Not Running
Cannot connect to the Docker daemon at unix:///var/run/docker.sock
→ Start Docker with systemctl start docker
● Network Issues Preventing Image Downloads
→ Check proxy settings, DNS configuration, or network restrictions
● Legacy docker.io Package Still Installed
→ Uninstall it completely and reinstall Docker from the official repository
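The causes above can be checked in one pass before digging deeper. A read-only diagnostic sketch that changes nothing on the system:

```shell
# Sketch: one read-only pass over the most common failure points.
cli=$(command -v docker >/dev/null 2>&1 && echo found || echo missing)
sock=$([ -S /var/run/docker.sock ] && echo present || echo absent)
grp=$(id -nG | tr ' ' '\n' | grep -qx docker && echo active || echo "not active")
printf 'docker CLI: %s\ndaemon socket: %s\ndocker group: %s\n' "$cli" "$sock" "$grp"
```

A missing socket points at the daemon not running; an inactive group points at the `usermod`/re-login step.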
6. Installing and Using Docker Compose
One essential tool for working with Docker at scale is Docker Compose.
Modern web applications often consist of multiple components such as databases, caches, workers, and web servers. Managing these individually with docker run commands quickly becomes impractical.
Docker Compose allows you to define multiple container configurations in a single file and manage them together, making it one of the most commonly used tools in real-world development.
6-1. Verifying Docker Compose V2 Installation
When installing Docker from the official repository, Docker Compose is automatically installed as a plugin.
Verify the installation with the following command:
```shell
docker compose version
```
If installed correctly, you should see output similar to:
```
Docker Compose version v2.x.x
```
If you see an error such as `docker: 'compose' is not a docker command`, install the plugin manually:
```shell
sudo apt install docker-compose-plugin
```
6-2. Benefits of Docker Compose
Key advantages of Docker Compose include:
- Unified management of multiple containers (start, stop, restart)
- Configuration as code, ensuring reproducible environments
- Easy sharing of application, API, and database setups
- Launching development environments with a single `docker compose up`
This makes Docker Compose nearly indispensable for application development.
6-3. Basic Structure of a Compose Configuration File
Docker Compose uses a file named docker-compose.yml (or compose.yaml) to define services.
As a minimal example, create a simple configuration that launches Nginx.
```yaml
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
```
Run the following command in the directory containing the file:
```shell
docker compose up -d
```
Nginx will start in the background. Access http://localhost:8080 in your browser to confirm.
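Before starting anything, Compose can also just parse and validate the file with `docker compose config`, which needs the CLI plugin but not a running daemon. A sketch under those assumptions; the file is written to a temporary path (`/tmp/compose-sketch.yaml`, a made-up name) so it does not clobber a real project file:

```shell
# Sketch: validate a compose file without starting any containers.
cat > /tmp/compose-sketch.yaml <<'EOF'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
EOF
if command -v docker >/dev/null 2>&1 && docker compose version >/dev/null 2>&1; then
  docker compose -f /tmp/compose-sketch.yaml config >/dev/null 2>&1 \
    && result="valid" || result="invalid"
else
  result="skipped (compose plugin not available)"
fi
echo "compose file check: $result"
```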
6-4. Example: Multi-Container Setup (Web + Database)
The real power of Compose becomes apparent when managing multiple containers simultaneously.
For example, running a web application together with MySQL can be configured as follows:
```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:
```
Explanation:
- `depends_on` ensures the database container starts before the application (note: this controls start order only, not readiness)
- `volumes` persists database data
- Multiple services are managed within a single YAML file
This is a highly practical pattern for development environments.
6-5. Commonly Used Docker Compose Commands
These commands are useful to memorize:
| Command | Description |
|---|---|
| `docker compose up -d` | Start containers in the background |
| `docker compose down` | Stop and remove containers and networks |
| `docker compose build` | Build images using the Dockerfile |
| `docker compose ps` | List containers managed by Compose |
| `docker compose logs -f` | View logs (optionally per service) |
Using Compose allows you to recreate identical environments repeatedly, making it ideal for team development.
6-6. Recommended Use Cases for Compose in Development
- One-command local environment setup
- Testing in environments close to production
- Launching combined services such as databases, caches, and message queues
- Persisting data with volumes
- Managing configuration with `.env` environment variables
- Supporting complex microservice architectures
Once you are comfortable with Docker and Compose, environment setup time is drastically reduced,
significantly improving development efficiency.
7. Security and Operational Considerations
Docker is an extremely powerful and convenient tool, but “being containerized” does not automatically mean “secure.”
When using Docker regularly on Ubuntu, there are several important security and operational points you should understand.
This section organizes essential knowledge for operating Docker safely and stably, in a way that is easy to understand even for beginners.
7-1. The docker Group Has “Effectively Root” Privileges
The docker group configured earlier actually grants very strong privileges.
Users belonging to the docker group can operate the host OS via the Docker socket, which is effectively equivalent to root-level access.
● Key Points to Be Aware Of
- Do not add arbitrary users to the docker group on shared servers
- Understand the implications, not just the convenience of “no sudo required”
- In organizations with strict security policies, administrator approval may be required
This is rarely an issue on personal Ubuntu machines or development PCs, but careful judgment is required on production servers.
7-2. Considering Rootless Docker (Advanced Option)
Docker provides a feature called rootless mode,
which allows the Docker daemon to run under a regular user account instead of root.
Advantages:
- Significantly reduces host OS privilege risk
- Allows safer Docker usage in environments with strict security requirements
Disadvantages:
- Some networking features are restricted
- Configuration is more complex for beginners
- Behavior may differ from standard Docker
Rootless mode is not necessary for most development use cases,
but it can be a viable option in enterprise or compliance-focused environments.
7-3. Cleaning Up Unused Images and Containers
Over time, Docker can consume a large amount of disk space without you noticing.
Unused containers, images, and volumes accumulate, and on a busy development machine they can easily grow to tens or even hundreds of gigabytes.
● Commands for removing unused resources
Removing unused (dangling) images:
```shell
docker image prune
```
Removing stopped containers and unused networks together:
```shell
docker system prune
```
Aggressive cleanup (use with caution):
```shell
docker system prune -a
```
The `-a` option removes all unused images, not just dangling ones, so use it carefully.
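Before pruning, it is worth measuring what Docker is actually using. `docker system df` summarizes images, containers, and volumes; this sketch guards the call so it is a no-op on machines without a reachable daemon (the `/tmp/docker-df.txt` path is an arbitrary scratch file):

```shell
# Sketch: measure Docker's disk usage before pruning anything.
if command -v docker >/dev/null 2>&1 && docker system df >/tmp/docker-df.txt 2>/dev/null; then
  cat /tmp/docker-df.txt
  note="usage report printed"
else
  note="docker unavailable - nothing to measure"
fi
echo "$note"
```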
7-4. Avoid Using the “latest” Tag in Production
While tags like nginx:latest are convenient during development, they are not recommended for production use.
Reasons:
- The exact version behind `latest` is not guaranteed
- Unexpected updates can cause runtime failures
- Loss of reproducibility leads to unstable deployments
Recommended approach: Pin versions explicitly
Example:
```yaml
image: nginx:1.25
```
Explicit versioning is a fundamental rule for production environments.
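A simple pre-deploy check can catch unpinned tags before they reach production. A hypothetical sketch: it writes a sample compose file to a temporary path (`/tmp/pin-check.yaml`, a made-up name) and greps it; point the grep at your real compose file instead.

```shell
# Sketch: flag ':latest' image tags in a compose file before deploying.
cat > /tmp/pin-check.yaml <<'EOF'
services:
  web:
    image: nginx:latest
  db:
    image: mysql:8.0
EOF
hits=$(grep -c 'image:.*:latest' /tmp/pin-check.yaml)
echo "unpinned images found: $hits"
```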
7-5. Use Official Base Images in Dockerfiles
When creating Dockerfiles, follow these guidelines:
- Prefer official images (library images)
- Avoid images maintained by unknown or untrusted authors
- When using lightweight OS images such as Alpine, check vulnerability support status
Untrusted images may contain malware.
Even in development environments, avoid them whenever possible.
7-6. Be Careful with Network and Port Exposure
When containers expose ports on the host OS,
they may become accessible from outside the system.
Precautions:
- Avoid unnecessary `-p 80:80` mappings
- For local use, bind to localhost only, e.g. `-p 127.0.0.1:8080:80`
- Combine with firewall settings such as UFW
- For production, use a reverse proxy (such as Nginx) for better security
Port management is especially critical on VPS or cloud-based Ubuntu servers.
7-7. Log Management During Failures
Docker logs can be viewed with the following command:
```shell
docker logs <container-name>
```
Large volumes of logs can consume disk space quickly, so consider configuring log drivers and log rotation.
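For the default `json-file` log driver, rotation is configured with the `max-size` and `max-file` options in the daemon configuration. A sketch of such a policy: in real use this content goes in `/etc/docker/daemon.json` followed by a daemon restart, but here it is written to `/tmp` so the sketch changes nothing.

```shell
# Sketch: a log rotation policy for the default json-file driver.
# Real deployments put this in /etc/docker/daemon.json and restart Docker.
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
cat /tmp/daemon.json
```

With this policy, each container keeps at most three 10 MB log files instead of growing without bound.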
8. Common Issues and Troubleshooting
Although Docker is a powerful tool, unexpected errors can occur on Ubuntu due to environment differences or configuration mistakes.
This section summarizes common issues and their solutions, from beginner to intermediate level.
8-1. Cannot Connect to the Docker Daemon
● Error message
```
Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
```
● Causes and solutions
- Docker daemon is not running: `sudo systemctl start docker`
- docker group changes not applied: log out and log back in, or run `newgrp docker`
- Permission issue with `/var/run/docker.sock`: ensure the user belongs to the docker group
8-2. Permission Denied Errors
● Typical error
```
Got permission denied while trying to connect to the Docker daemon socket
```
● Solution
The cause is almost always missing docker group configuration.
```shell
sudo usermod -aG docker $USER
```
Then log out and log back in.
8-3. GPG Errors When Adding the APT Repository
● Error examples
```
NO_PUBKEY XXXXXXXX
```
or
```
The following signatures couldn't be verified
```
● Causes and fixes
- GPG key was not registered correctly
- curl failed due to network restrictions
Re-register the key with:
```shell
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
```
Then run `sudo apt update` again.
8-4. Port Binding Conflicts
● Error example
```
Bind for 0.0.0.0:80 failed: port is already allocated.
```
● Cause
- The port is already used by another process on the host
- Another Docker container is using the same port
● Solutions
Check which process is using the port:
```shell
sudo lsof -i -P -n | grep :80
```
Check running containers:
```shell
docker ps
```
Change the port mapping:
```shell
-p 8080:80
```
8-5. Image Download Failures
● Common causes
- Network restrictions (corporate environments)
- DNS configuration issues
- Blocked access to Docker Hub
● Solutions
- Change DNS servers (e.g. 1.1.1.1 or 8.8.8.8)
- Verify proxy configuration
- Use a VPN if required by the environment
8-6. Disk Space Exhaustion Errors
● Typical message
```
no space left on device
```
● Resolution
Remove unused resources:
```shell
docker system prune -a
```
Review images, containers, and volumes:
```shell
docker images
docker ps -a
docker volume ls
```
Disk space exhaustion is one of the most common Docker operational issues.
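A lightweight way to catch this early is to watch the filesystem holding Docker's data directory (default `/var/lib/docker`). A sketch under that assumption; the 90% threshold is an arbitrary example, and the check falls back to `/` when the Docker path does not exist:

```shell
# Sketch: warn when the filesystem holding Docker's data is nearly full.
dir=/var/lib/docker
[ -d "$dir" ] || dir=/          # fall back to root if Docker isn't installed
pct=$(df -P "$dir" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$pct" -ge 90 ]; then
  echo "warning: $dir filesystem at ${pct}% - prune images or move data-root"
else
  echo "$dir filesystem at ${pct}% - ok"
fi
```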



