Docker CLI Guide & Reference
Master the tools to build, ship, and run any application, anywhere. The definitive reference for Docker CLI commands with detailed explanations.
Container Commands
| Command | Action | Details |
|---|---|---|
| docker ps | List running containers | Shows container ID, image, command, created time, status, ports, and names. |
| docker ps -a | List all containers (incl. stopped) | Useful for finding containers that exited immediately due to errors. |
| docker run -d -p 80:80 nginx | Run container in background | Pulls the nginx image if missing, starts it detached (-d) so the terminal is not locked, and maps host port 80 to container port 80. |
| docker stop \<id\> | Stop a container | Sends SIGTERM; if the container doesn't stop within 10s, sends SIGKILL. |
| docker rm \<id\> | Remove a container | Deletes the container. Add -f to force-remove a running one. |
| docker logs -f \<id\> | View container logs (follow) | Essential for debugging why a container crashed. |
| docker exec -it \<id\> sh | Shell into container | Opens an interactive terminal inside the container. Use /bin/bash for a full shell. |
| docker cp src \<id\>:dest | Copy files to container | Copies files from host to container, or vice versa. |
The Complete Docker CLI Reference
Docker has revolutionized the way software is developed, deployed, and operated. By packaging applications and their dependencies into standardized units called containers, Docker ensures that software runs identically regardless of where it is deployed—whether on a developer's laptop, a testing server, or a production cluster of thousands of machines. This guide provides a comprehensive understanding of the Docker command-line interface, the primary tool for interacting with the Docker engine.
While graphical tools exist for Docker, the CLI remains the most powerful and flexible way to work with containers. Understanding these commands deeply will enable you to debug issues faster, automate workflows more effectively, and truly leverage the power of containerization in your development and deployment pipelines.
Understanding Docker Architecture
Before diving into commands, it is essential to understand Docker's architecture. The Docker CLI is a client that communicates with the Docker daemon (dockerd) via a REST API. When you run a command like docker run, the CLI sends a request to the daemon, which does the actual work of pulling images, creating containers, and managing networks and storage.
This client-server architecture enables powerful scenarios. The Docker CLI can connect to a daemon running on a remote machine, allowing you to manage containers on a server from your local workstation. It also means the daemon runs with elevated privileges (as root on Linux), enabling it to configure networking and mount filesystems, while the CLI can run as a regular user.
Docker uses a layered filesystem where each instruction in a Dockerfile creates a new layer. These layers are cached and shared between images, making builds faster and images smaller. Understanding layers is key to writing efficient Dockerfiles and troubleshooting image size issues.
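Instruction ordering is what makes layer caching pay off: put the instructions that change rarely first, and the ones that change often last. A minimal sketch for a hypothetical Node.js service (the file names and app are illustrative, not from this guide):

```dockerfile
# Each instruction below produces one cached layer.
FROM node:18.12.1-alpine

WORKDIR /app

# Copy dependency manifests first so this layer stays cached
# until package.json or package-lock.json actually changes.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Application source changes often, so it comes last.
COPY . .

CMD ["node", "server.js"]
```

With this ordering, editing application source invalidates only the final COPY layer; the expensive dependency-install layer is reused from cache.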
Containers: The Core Concept
A container is a running instance of a Docker image. While an image is a read-only template containing your application and its dependencies, a container adds a writable layer on top where runtime data is stored. You can have multiple containers running from the same image, each with its own state.
The docker run command is the most complex and important Docker command. It combines several operations: pulling the image if it does not exist locally, creating a container from the image, starting the container, and optionally attaching to its output. The many flags control aspects like port mapping, volume mounting, environment variables, resource limits, and networking.
Understanding container lifecycles is crucial. A container can be created (docker create), started (docker start), stopped (docker stop), and removed (docker rm). The docker run command combines create and start. Stopping a container does not delete it—the container's writable layer persists, and it can be started again later.
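The lifecycle above can be walked through step by step (the container name is arbitrary, and these commands assume a running Docker daemon):

```shell
# Create a container without starting it.
docker create --name web -p 8080:80 nginx:1.25

docker start web   # start the stopped container
docker stop web    # SIGTERM, then SIGKILL after the grace period
docker start web   # the writable layer survived; state is intact
docker rm -f web   # remove it (-f stops it first if still running)
```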
Working with Images
Docker images are the blueprints for containers. They are built from Dockerfiles, which are text files containing a series of instructions. The docker build command reads the Dockerfile and executes each instruction, creating a layer for each. The final image is the stack of all these layers.
Images are identified by repository name and tag, like nginx:1.21 or python:3.9-slim. The special tag latest is used when no tag is specified, but this is considered bad practice in production because it is unpredictable—it always points to whatever the maintainer considers "latest" at any given time.
Docker Hub is the default registry for images, but you can use private registries or other public registries like Amazon ECR, Google Container Registry, or GitHub Container Registry. The docker login command authenticates with registries, enabling you to push and pull private images.
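A typical push workflow looks like the following sketch. The registry host and image names are examples only; substitute your registry's actual hostname and namespace:

```shell
# Authenticate (with no host argument, docker login targets Docker Hub).
docker login registry.example.com

# Retag a local image under the registry's namespace, then push it.
docker tag myapp:1.0 registry.example.com/team/myapp:1.0
docker push registry.example.com/team/myapp:1.0

# Pull it back on another machine.
docker pull registry.example.com/team/myapp:1.0
```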
Data Persistence with Volumes
Containers are ephemeral by design—when you remove a container, all data written to its writable layer is lost. For applications that need to persist data (like databases), Docker provides volumes. A volume is a directory stored outside the container's filesystem, managed by Docker.
There are three ways to persist data in Docker. Named volumes are the recommended approach—Docker manages their location and lifecycle, and they can be shared between containers. Bind mounts map a host directory directly into the container, giving you direct access to the files but coupling the container to the host's filesystem layout. tmpfs mounts store data in memory only, useful for sensitive data that should not persist to disk.
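The three approaches look like this on the command line (container names and paths are illustrative; a Docker daemon is assumed):

```shell
# Named volume: Docker manages where "pgdata" lives on disk.
docker volume create pgdata
docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:15

# Bind mount: map a host directory straight into the container
# (read-only here, via the :ro suffix).
docker run -d -v "$(pwd)/site:/usr/share/nginx/html:ro" nginx

# tmpfs mount: in-memory only, gone when the container stops.
docker run -d --tmpfs /run/secrets:rw,size=16m nginx
```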
When using volumes with databases, it is critical to understand that you should never attempt to share a database's data directory between multiple running database containers. Each database instance expects exclusive access to its data files.
Networking in Docker
Docker creates isolated networks that containers can connect to. The default bridge network allows containers to communicate using IP addresses. However, user-defined bridge networks are more powerful because they enable DNS resolution—containers can reach each other by name rather than IP address.
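Name-based resolution on a user-defined bridge can be demonstrated as follows (myapp:1.0 is a hypothetical application image; these commands assume a running daemon):

```shell
# Create a user-defined bridge network.
docker network create appnet

# Containers attached to it resolve each other by name.
docker run -d --name db --network appnet postgres:15
docker run -d --name api --network appnet myapp:1.0

# From inside "api", the hostname "db" resolves to the db container.
docker exec api ping -c 1 db
```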
Port mapping (-p flag) is how you expose container services to the host. The syntax -p 8080:80 means "forward traffic from host port 8080 to container port 80." You can also bind to specific interfaces, like -p 127.0.0.1:8080:80 to only accept connections from localhost.
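A few common -p variants, as a sketch:

```shell
docker run -d --name w1 -p 8080:80 nginx            # host 8080 -> container 80, all interfaces
docker run -d --name w2 -p 127.0.0.1:8443:443 nginx # accept connections from localhost only
docker run -d --name w3 -p 80 nginx                 # random ephemeral host port for container port 80

docker port w3   # show which host port was actually assigned
```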
For multi-container applications, Docker Compose simplifies networking significantly. It automatically creates a network for your stack and configures DNS so services can communicate by their service name defined in the compose file.
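A minimal compose file for a hypothetical three-service stack might look like this (image and variable names are illustrative):

```yaml
# compose.yaml
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"
  api:
    image: myapp:1.0   # hypothetical application image
    environment:
      - DB_HOST=db     # "db" resolves via Compose's built-in DNS
  db:
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

Running docker compose up -d brings up all three services on a shared network where each is reachable by its service name.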
Debugging Containers
When containers misbehave, the Docker CLI provides several debugging tools. docker logs shows the standard output and error streams of the container—this is often the first place to look when something goes wrong. The -f flag follows the log output in real-time.
docker exec allows you to run commands inside a running container. The most common use is docker exec -it container bash, which opens an interactive shell inside the container. From there, you can examine files, test network connectivity with tools like curl or ping, and investigate the runtime environment.
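Beyond an interactive shell, docker exec is handy for one-off checks. A sketch, assuming a container named web:

```shell
# Interactive shell (sh for minimal images, bash where available).
docker exec -it web sh

# One-off commands without an interactive session:
docker exec web cat /etc/nginx/nginx.conf
docker exec web ping -c 1 db

# Run as a different user or with extra environment variables:
docker exec -u root -e DEBUG=1 web env
```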
docker inspect returns detailed JSON metadata about containers, images, volumes, or networks. This includes network configuration, mount points, environment variables, and more. Use it to verify that a container was created with the expected configuration.
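The --format flag accepts Go templates to pull out specific fields instead of the full JSON document. For a container named web:

```shell
# Full JSON metadata.
docker inspect web

# Extract individual fields with Go templates:
docker inspect --format '{{.State.Status}}' web
docker inspect --format '{{.NetworkSettings.IPAddress}}' web
docker inspect --format '{{json .Mounts}}' web
```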
Resource Management
By default, containers have unlimited access to host resources. In production environments, you should limit memory and CPU to prevent a single container from starving others. The --memory flag sets a hard memory limit, and --cpus limits CPU usage.
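Applying both limits looks like this (myapp:1.0 is a hypothetical image):

```shell
# Hard cap of 512 MiB of RAM and at most 1.5 CPU cores.
docker run -d --name api --memory 512m --cpus 1.5 myapp:1.0

# Setting --memory-swap equal to --memory disables swap beyond the limit.
docker run -d --memory 512m --memory-swap 512m nginx
```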
docker stats shows real-time resource usage for running containers, similar to the Unix top command. This is valuable for understanding your containers' resource footprint and right-sizing the limits.
Best Practices
Always use specific image tags in production, not latest. When an image updates, containers using latest might suddenly behave differently after a new pull. Pin to specific versions like node:18.12.1-alpine for reproducible builds.
Keep images small: use minimal base images like Alpine Linux, use multi-stage builds to exclude build tools from production images, and order Dockerfile instructions carefully to maximize layer caching.
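A multi-stage build keeps the toolchain out of the final image. A sketch for a hypothetical Go service:

```dockerfile
# Stage 1: build with the full toolchain.
FROM golang:1.21-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /out/server .

# Stage 2: ship only the compiled binary on a minimal base.
FROM alpine:3.19
COPY --from=build /out/server /usr/local/bin/server
CMD ["server"]
```

Only the final stage becomes the image; the Go compiler and source tree from the build stage are discarded.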
Never store secrets in Docker images. Environment variables, Docker secrets, or external secret management tools should be used to provide sensitive configuration at runtime. Secrets baked into images can be extracted by anyone with access to the image.
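Supplying secrets at runtime instead of build time can look like this (myapp:1.0 and the variable names are illustrative):

```shell
# Pass a secret through from the host environment at run time.
docker run -d -e DB_PASSWORD="$DB_PASSWORD" myapp:1.0

# Or load several variables from a file kept out of version control:
docker run -d --env-file ./secrets.env myapp:1.0
```

Note that plain environment variables are still visible via docker inspect; for stronger guarantees, use Docker secrets (in Swarm mode) or an external secret manager.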
Use Docker Compose for development environments with multiple services. It simplifies the orchestration of containers with complex configurations, making it easy for new team members to spin up the complete development stack with a single command.