Introduction
Modern applications rarely run as a single process. A typical web application involves a web server, a database, a caching layer, a background worker, and possibly a reverse proxy—all running as separate services that communicate over a network. Managing these components individually with raw docker run commands quickly becomes unmanageable. Configuration flags multiply, startup order matters, and reproducing the exact same environment across machines turns into a manual, error-prone task.
Docker Compose solves this problem by letting you define your entire application stack in a single declarative YAML file. Every service, network, volume, and configuration detail lives in one place. A single command—docker compose up—brings the entire stack to life. Another command—docker compose down—tears it all down cleanly. This declarative approach gives you version-controlled infrastructure, reproducible environments, and a workflow that scales from local development to single-server production deployments.
This guide covers the Docker Compose specification in depth. You will learn how to define services, configure networking and persistent storage, manage secrets, implement health checks, structure multi-environment configurations, and deploy real-world application stacks.
Prerequisites
Before proceeding, ensure the following are installed and available on your system:
- Docker Engine 24.0 or later — Compose V2 is included with modern Docker installations.
- Docker Compose V2 — Verify with docker compose version. The docker-compose (hyphenated) binary is the legacy V1 tool and is no longer maintained.
- A terminal with access to run docker commands (add your user to the docker group or use sudo).
- A text editor for writing YAML and Dockerfiles.
- Basic Docker knowledge — You should understand images, containers, ports, and volumes at a fundamental level.
Verify your installation:
docker --version
# Docker version 27.5.1, build 9f9e405
docker compose version
# Docker Compose version v2.33.1
Compose File Structure
A Compose file is a YAML document that defines how your multi-container application runs. The canonical filename is compose.yaml, though docker-compose.yml and docker-compose.yaml are still recognized for backward compatibility.
The file has five top-level keys:
services: # Container definitions (required)
networks: # Custom networks
volumes: # Named volumes for persistent data
configs: # Configuration files (Swarm mode / Compose 2.23+)
secrets: # Sensitive data
Here is a minimal Compose file that runs a single Nginx container:
services:
web:
image: nginx:alpine
ports:
- "8080:80"
Run it with:
docker compose up -d
The -d flag starts containers in detached mode (background). Compose creates a default network, pulls the image if needed, and starts the container. The project name defaults to the directory name.
Naming Conventions
Compose names containers using the pattern <project>-<service>-<index>. You can override the project name with:
docker compose -p myproject up -d
Or set the COMPOSE_PROJECT_NAME environment variable, or add a name: key at the top level of your Compose file:
name: myproject
services:
web:
image: nginx:alpine
Service Configuration
Each entry under services: defines one containerized component of your application. Services support dozens of configuration options. Here are the most important ones.
Image vs Build
You can either pull a pre-built image or build one from a Dockerfile:
services:
# Using a pre-built image
redis:
image: redis:7-alpine
# Building from a Dockerfile
api:
build:
context: ./api
dockerfile: Dockerfile
image: myapp/api:latest
When both build and image are specified, Compose builds the image and tags it with the name from image.
Ports
Map host ports to container ports:
services:
web:
image: nginx:alpine
ports:
- "8080:80" # host:container
- "443:443" # HTTPS
- "127.0.0.1:9090:80" # bind to localhost only
Use expose to make ports accessible only to other services on the same network, without publishing them to the host:
services:
db:
image: postgres:16
expose:
- "5432"
Restart Policies
Control what happens when a container stops:
services:
api:
image: myapp/api:latest
restart: unless-stopped
Available policies:
- "no" — Never restart (the default). Quote it in YAML, where a bare no parses as a boolean.
- always — Always restart, including after a daemon restart.
- on-failure — Restart only on non-zero exit codes.
- unless-stopped — Like always, but does not restart if the container was explicitly stopped.
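The on-failure policy additionally accepts an optional retry limit; a minimal sketch (the service and image names are illustrative):

```yaml
services:
  batch-job:
    image: myapp/worker:latest   # illustrative image name
    # Retry up to 3 times on a non-zero exit code, then give up
    restart: on-failure:3
```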
Resource Limits
Constrain CPU and memory usage to prevent a single service from consuming all host resources:
services:
worker:
image: myapp/worker:latest
deploy:
resources:
limits:
cpus: "1.0"
memory: 512M
reservations:
cpus: "0.25"
memory: 128M
Command and Entrypoint Overrides
Override the default command or entrypoint defined in the image:
services:
api:
image: node:20-alpine
working_dir: /app
command: ["node", "server.js"]
debug:
image: myapp/api:latest
entrypoint: ["sh", "-c"]
command: ["echo 'Container started' && tail -f /dev/null"]
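Because both are YAML lists, Compose concatenates entrypoint and command into a single argv, so the debug service effectively runs sh -c "echo 'Container started' && tail -f /dev/null". The sketch below reproduces that combination locally, replacing the blocking tail with a second echo so it exits:

```shell
# entrypoint ["sh", "-c"] + command ["echo ... && ..."] concatenate into one argv:
#   sh -c "echo 'Container started' && tail -f /dev/null"
# tail -f is swapped for an echo here so the sketch terminates.
sh -c "echo 'Container started' && echo 'would now block with tail -f /dev/null'"
```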
Networking
Docker Compose creates a default bridge network for each project. All services in a Compose file can reach each other by service name on this network. Understanding networking options gives you fine-grained control over service communication and isolation.
Default Network Behavior
When you run docker compose up, Compose creates a network named <project>_default. Every service joins this network automatically. Services resolve each other by their service name as the hostname:
services:
api:
image: myapp/api:latest
# Can reach the database at hostname "db" on port 5432
db:
image: postgres:16
# Can reach the API at hostname "api"
Inside the api container, the connection string postgresql://user:pass@db:5432/mydb works because db resolves to the database container’s IP on the shared network.
Custom Networks
Define custom networks to segment traffic and control which services can communicate:
services:
frontend:
image: nginx:alpine
networks:
- frontend-net
api:
image: myapp/api:latest
networks:
- frontend-net
- backend-net
db:
image: postgres:16
networks:
- backend-net
networks:
frontend-net:
driver: bridge
backend-net:
driver: bridge
internal: true # No external internet access
In this configuration:
- frontend can reach api but cannot reach db.
- api can reach both frontend and db because it is on both networks.
- db is on an internal network with no outbound internet access, reducing the attack surface.
Network Aliases
Assign additional hostnames to a service within a specific network:
services:
db-primary:
image: postgres:16
networks:
backend:
aliases:
- database
- postgres
networks:
backend:
driver: bridge
Other services on the backend network can reach this container using db-primary, database, or postgres as the hostname.
Static IP Addresses
For situations that require fixed IP addresses (legacy applications, specific firewall rules):
services:
api:
image: myapp/api:latest
networks:
app-net:
ipv4_address: 172.28.0.10
networks:
app-net:
driver: bridge
ipam:
config:
- subnet: 172.28.0.0/16
Note: Static IPs are rarely needed. Prefer DNS-based service discovery in most cases.
Volumes and Data Persistence
Containers are ephemeral—when a container is removed, its filesystem is lost. Volumes solve this by persisting data outside the container lifecycle.
Named Volumes
Named volumes are managed by Docker and stored in /var/lib/docker/volumes/ by default:
services:
db:
image: postgres:16
volumes:
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:
driver: local
Named volumes survive container removal and can be shared between services. Inspect them with:
docker volume ls
docker volume inspect myproject_pgdata
Bind Mounts
Bind mounts map a host directory directly into the container. They are ideal for development when you want live code reloading:
services:
api:
image: node:20-alpine
volumes:
- ./src:/app/src # Host path : Container path
- ./package.json:/app/package.json:ro # Read-only
The :ro suffix makes the mount read-only inside the container, preventing accidental writes.
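One development gotcha with bind mounts: mounting the whole project directory hides dependencies installed in the image (for Node.js, node_modules). A common workaround, sketched here with illustrative paths, is to layer an anonymous volume over that subdirectory:

```yaml
services:
  api:
    image: node:20-alpine
    volumes:
      - ./api:/app            # bind mount: live-edit source from the host
      - /app/node_modules     # anonymous volume: keeps the image's installed deps
```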
tmpfs Mounts
For temporary data that should not persist and should not be written to the container’s writable layer:
services:
api:
image: myapp/api:latest
tmpfs:
- /tmp
- /run:size=64M
Volume Drivers and Options
Use volume drivers for network-attached storage or specific filesystem options:
volumes:
nfs-data:
driver: local
driver_opts:
type: nfs
o: addr=192.168.1.100,rw
device: ":/exports/data"
Backup and Restore
Back up a named volume by mounting it into a temporary container:
# Backup
docker run --rm \
-v myproject_pgdata:/data \
-v $(pwd)/backups:/backup \
alpine tar czf /backup/pgdata-$(date +%Y%m%d).tar.gz -C /data .
# Restore
docker run --rm \
-v myproject_pgdata:/data \
-v $(pwd)/backups:/backup \
alpine sh -c "cd /data && tar xzf /backup/pgdata-20260215.tar.gz"
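The -C /data . pattern is the part that trips people up: it archives the directory's contents rather than the directory itself. Here is a Docker-free sketch of the same round trip (the /tmp paths are scratch locations, not real volume data):

```shell
# Docker-free sketch of the backup/restore round trip
mkdir -p /tmp/voldemo/data /tmp/voldemo/backup /tmp/voldemo/restore
echo "hello" > /tmp/voldemo/data/file.txt

# Backup: archive the directory's contents, mirroring "tar czf ... -C /data ." above
tar czf /tmp/voldemo/backup/data.tar.gz -C /tmp/voldemo/data .

# Restore: unpack into an empty directory, mirroring the restore container
tar xzf /tmp/voldemo/backup/data.tar.gz -C /tmp/voldemo/restore
cat /tmp/voldemo/restore/file.txt   # prints hello
```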
Environment Variables and Secrets
Inline Environment Variables
Define environment variables directly in the Compose file:
services:
api:
image: myapp/api:latest
environment:
NODE_ENV: production
LOG_LEVEL: info
DATABASE_URL: postgresql://user:pass@db:5432/mydb
Environment Files
Keep variables in a separate .env file to avoid hardcoding values:
services:
api:
image: myapp/api:latest
env_file:
- .env
- .env.local
The .env file at the project root is loaded automatically for variable substitution in the Compose file itself. Additional env_file entries are loaded into the container’s environment.
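Recent Compose releases (roughly v2.24 and later) also accept a long form for env_file entries that tolerates missing files, which is handy for optional local overrides; a sketch:

```yaml
services:
  api:
    image: myapp/api:latest
    env_file:
      - path: .env
        required: true     # fail fast if the base file is missing
      - path: .env.local
        required: false    # silently skipped when absent
```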
Example .env file:
POSTGRES_USER=appuser
POSTGRES_PASSWORD=s3cur3_passw0rd
POSTGRES_DB=appdb
APP_PORT=3000
Reference these in your Compose file with ${VARIABLE} syntax:
services:
db:
image: postgres:16
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
api:
image: myapp/api:latest
ports:
- "${APP_PORT:-3000}:3000" # Default to 3000 if unset
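Compose's substitution follows POSIX shell parameter expansion, so the defaulting behavior can be sanity-checked in any shell:

```shell
# ${VAR:-default} substitutes the default when the variable is unset or empty
unset APP_PORT
echo "port=${APP_PORT:-3000}"   # prints port=3000

APP_PORT=8080
echo "port=${APP_PORT:-3000}"   # prints port=8080
```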
Docker Secrets
For sensitive data, use secrets instead of environment variables. Secrets are mounted as files inside the container:
services:
db:
image: postgres:16
environment:
POSTGRES_PASSWORD_FILE: /run/secrets/db_password
secrets:
- db_password
secrets:
db_password:
file: ./secrets/db_password.txt
Inside the container, the secret is available at /run/secrets/db_password. Many official images support the _FILE suffix convention to read credentials from a file instead of an environment variable.
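Many official images implement this convention in their entrypoint scripts; for images that do not, you can emulate it yourself. The helper below is a hypothetical sketch (the function name and the /tmp path are ours; in a container the path would be the mounted secret, e.g. /run/secrets/db_password):

```shell
# Prefer a secret file when it exists, otherwise fall back to a plain value
read_secret() {
  if [ -f "$1" ]; then
    cat "$1"
  else
    printf '%s' "$2"
  fi
}

# Stand-in for /run/secrets/db_password so the sketch runs anywhere
printf 's3cr3t' > /tmp/db_password
DB_PASSWORD="$(read_secret /tmp/db_password fallback)"
echo "$DB_PASSWORD"   # prints s3cr3t
```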
Health Checks and Dependencies
Health Checks
Health checks let Docker determine whether a service is actually ready to accept requests, not just running:
services:
db:
image: postgres:16
environment:
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
redis:
image: redis:7-alpine
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
Parameters explained:
- test — The command to run; exit code 0 means healthy.
- interval — Time between checks.
- timeout — Maximum time for a single check to complete.
- retries — Number of consecutive failures before the container is marked unhealthy.
- start_period — Grace period for container initialization; failed checks during this window do not count toward retries.
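For HTTP services, the same pattern probes an endpoint instead of a CLI. This sketch assumes the image ships curl and serves a /healthz route (both assumptions, not Compose requirements):

```yaml
services:
  web:
    image: myapp/web:latest    # assumed to include curl
    healthcheck:
      # -f makes curl exit non-zero on HTTP error statuses, failing the check
      test: ["CMD-SHELL", "curl -fsS http://localhost:8080/healthz || exit 1"]
      interval: 15s
      timeout: 5s
      retries: 3
      start_period: 20s
```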
Service Dependencies
Use depends_on with conditions to control startup order based on health status:
services:
api:
image: myapp/api:latest
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
migrations:
condition: service_completed_successfully
migrations:
image: myapp/migrations:latest
depends_on:
db:
condition: service_healthy
db:
image: postgres:16
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 3s
retries: 5
start_period: 20s
redis:
image: redis:7-alpine
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 3s
retries: 3
Available conditions:
- service_started — Default; waits only for the container to start.
- service_healthy — Waits until the health check passes.
- service_completed_successfully — Waits until the container exits with code 0 (useful for init containers and migrations).
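Compose v2.17 and later also accept a restart flag alongside the condition, which restarts the dependent whenever its dependency is restarted; a sketch:

```yaml
services:
  api:
    image: myapp/api:latest
    depends_on:
      db:
        condition: service_healthy
        restart: true    # also restart api when db restarts (Compose v2.17+)
  db:
    image: postgres:16
    # healthcheck omitted for brevity; service_healthy requires one
```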
Build Configuration
Basic Build
Point to a directory containing a Dockerfile:
services:
api:
build: ./api
Advanced Build Options
services:
api:
build:
context: ./api
dockerfile: Dockerfile.prod
args:
NODE_VERSION: "20"
BUILD_DATE: "${BUILD_DATE}"
target: production
cache_from:
- myapp/api:cache
labels:
com.example.version: "2.1.0"
image: myapp/api:2.1.0
- context — The build context directory sent to the Docker daemon.
- dockerfile — Path to the Dockerfile, relative to the context.
- args — Build-time arguments passed as ARG values in the Dockerfile.
- target — Build a specific stage in a multi-stage Dockerfile.
- cache_from — Images to use as cache sources to speed up builds.
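On the Dockerfile side, each entry under args must be declared with ARG before it can be referenced; a minimal sketch matching the arguments above (the label key is illustrative):

```dockerfile
# Declared before FROM so it can parameterize the base image
ARG NODE_VERSION=20
FROM node:${NODE_VERSION}-alpine

# Args used after FROM must be (re)declared inside the stage
ARG BUILD_DATE
LABEL org.opencontainers.image.created=${BUILD_DATE}
```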
Multi-Stage Dockerfile Example
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && cp -R node_modules /prod_modules
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /prod_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
Reference the production stage in your Compose file with target: production.
Development vs Production Configurations
Docker Compose supports override files that let you maintain a base configuration and layer environment-specific changes on top.
Base Configuration — compose.yaml
services:
api:
build: ./api
environment:
NODE_ENV: production
restart: unless-stopped
depends_on:
db:
condition: service_healthy
db:
image: postgres:16-alpine
volumes:
- pgdata:/var/lib/postgresql/data
environment:
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
POSTGRES_DB: ${POSTGRES_DB}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
interval: 10s
timeout: 5s
retries: 5
volumes:
pgdata:
Development Override — compose.override.yaml
This file is loaded automatically when you run docker compose up:
services:
api:
build:
context: ./api
target: development
environment:
NODE_ENV: development
DEBUG: "app:*"
volumes:
- ./api/src:/app/src
ports:
- "3000:3000"
- "9229:9229" # Node.js debugger
command: ["npm", "run", "dev"]
db:
ports:
- "5432:5432" # Expose DB port for local tools
Production Override — compose.prod.yaml
Explicitly specify this file for production deployments:
services:
api:
build:
context: ./api
target: production
ports:
- "3000:3000"
deploy:
resources:
limits:
cpus: "2.0"
memory: 1G
replicas: 2
logging:
driver: json-file
options:
max-size: "10m"
max-file: "5"
db:
deploy:
resources:
limits:
cpus: "1.0"
memory: 2G
Deploy with the production override:
docker compose -f compose.yaml -f compose.prod.yaml up -d
Compose merges the files in order, with later files overriding or extending earlier ones.
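As a sketch of those merge rules (values here are illustrative): scalars in later files replace earlier ones, mappings merge key by key, and sequence entries such as ports are generally appended (some, like volumes, are merged by their target path):

```yaml
# Base file:
services:
  api:
    environment:
      NODE_ENV: development
      DEBUG: "app:*"

# Override file applied second:
#   services:
#     api:
#       environment:
#         NODE_ENV: production

# Merged result: NODE_ENV is replaced, DEBUG survives
#   environment:
#     NODE_ENV: production
#     DEBUG: "app:*"
```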
Profiles
Use profiles to conditionally include services. Services without a profile are always started. Services with a profile only start when that profile is activated:
services:
api:
image: myapp/api:latest
db:
image: postgres:16
adminer:
image: adminer
ports:
- "8080:8080"
profiles:
- debug
mailhog:
image: mailhog/mailhog
ports:
- "1025:1025"
- "8025:8025"
profiles:
- debug
# Start only api and db
docker compose up -d
# Start everything including debug tools
docker compose --profile debug up -d
Real-World Example: WordPress + MySQL + Redis
A complete WordPress stack with MySQL for the database and Redis for object caching:
name: wordpress-stack
services:
wordpress:
image: wordpress:6.7-php8.3-apache
ports:
- "8080:80"
environment:
WORDPRESS_DB_HOST: mysql
WORDPRESS_DB_USER: ${WP_DB_USER:-wordpress}
WORDPRESS_DB_PASSWORD: ${WP_DB_PASSWORD}
WORDPRESS_DB_NAME: ${WP_DB_NAME:-wordpress}
WORDPRESS_CONFIG_EXTRA: |
define('WP_REDIS_HOST', 'redis');
define('WP_REDIS_PORT', 6379);
define('WP_CACHE', true);
volumes:
- wp-content:/var/www/html/wp-content
- ./wp-config-extra.php:/var/www/html/wp-config-extra.php:ro
depends_on:
mysql:
condition: service_healthy
redis:
condition: service_healthy
restart: unless-stopped
networks:
- frontend
- backend
mysql:
image: mysql:8.4
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${WP_DB_NAME:-wordpress}
MYSQL_USER: ${WP_DB_USER:-wordpress}
MYSQL_PASSWORD: ${WP_DB_PASSWORD}
volumes:
- mysql-data:/var/lib/mysql
- ./mysql/custom.cnf:/etc/mysql/conf.d/custom.cnf:ro
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-u", "root", "-p${MYSQL_ROOT_PASSWORD}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
restart: unless-stopped
networks:
- backend
redis:
image: redis:7-alpine
command: ["redis-server", "--maxmemory", "128mb", "--maxmemory-policy", "allkeys-lru"]
volumes:
- redis-data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
restart: unless-stopped
networks:
- backend
volumes:
wp-content:
mysql-data:
redis-data:
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true
Create the .env file:
WP_DB_USER=wordpress
WP_DB_PASSWORD=wp_s3cur3_p4ss
WP_DB_NAME=wordpress
MYSQL_ROOT_PASSWORD=r00t_s3cur3_p4ss
Launch the stack:
docker compose up -d
docker compose ps
docker compose logs -f wordpress
Real-World Example: Node.js API + PostgreSQL + Redis
A production-ready API stack with a Node.js application, PostgreSQL database, and Redis for caching and session storage:
name: node-api-stack
services:
api:
build:
context: ./api
dockerfile: Dockerfile
target: production
image: myapp/api:latest
ports:
- "${API_PORT:-3000}:3000"
environment:
NODE_ENV: production
DATABASE_URL: postgresql://${PG_USER}:${PG_PASSWORD}@postgres:5432/${PG_DB}
REDIS_URL: redis://redis:6379
JWT_SECRET_FILE: /run/secrets/jwt_secret
LOG_LEVEL: ${LOG_LEVEL:-info}
secrets:
- jwt_secret
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
migrations:
condition: service_completed_successfully
restart: unless-stopped
deploy:
resources:
limits:
cpus: "1.0"
memory: 512M
healthcheck:
test: ["CMD-SHELL", "wget -qO- http://localhost:3000/health || exit 1"]
interval: 15s
timeout: 5s
retries: 3
start_period: 10s
networks:
- frontend
- backend
logging:
driver: json-file
options:
max-size: "10m"
max-file: "3"
migrations:
build:
context: ./api
target: production
command: ["npm", "run", "migrate"]
environment:
DATABASE_URL: postgresql://${PG_USER}:${PG_PASSWORD}@postgres:5432/${PG_DB}
depends_on:
postgres:
condition: service_healthy
networks:
- backend
postgres:
image: postgres:16-alpine
environment:
POSTGRES_USER: ${PG_USER}
POSTGRES_PASSWORD: ${PG_PASSWORD}
POSTGRES_DB: ${PG_DB}
volumes:
- pgdata:/var/lib/postgresql/data
- ./db/init:/docker-entrypoint-initdb.d:ro
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${PG_USER} -d ${PG_DB}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 20s
restart: unless-stopped
deploy:
resources:
limits:
cpus: "1.0"
memory: 1G
networks:
- backend
redis:
image: redis:7-alpine
    command:
      - redis-server
      - --appendonly
      - "yes"
      - --maxmemory
      - 256mb
      - --maxmemory-policy
      - allkeys-lru
volumes:
- redis-data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
restart: unless-stopped
networks:
- backend
volumes:
pgdata:
redis-data:
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true
secrets:
jwt_secret:
file: ./secrets/jwt_secret.txt
The corresponding .env file:
PG_USER=apiuser
PG_PASSWORD=pg_s3cur3_p4ss
PG_DB=apidb
API_PORT=3000
LOG_LEVEL=info
Directory structure for this project:
project/
├── compose.yaml
├── compose.override.yaml # Dev overrides (auto-loaded)
├── compose.prod.yaml # Production overrides
├── .env # Default environment variables
├── .env.production # Production variables
├── secrets/
│ └── jwt_secret.txt
├── api/
│ ├── Dockerfile
│ ├── package.json
│ └── src/
│ └── server.js
└── db/
└── init/
└── 01-extensions.sql
Useful Commands Reference
Lifecycle Commands
# Start all services (detached)
docker compose up -d
# Start with build
docker compose up -d --build
# Stop all services (containers remain)
docker compose stop
# Stop and remove containers, networks
docker compose down
# Stop, remove containers, networks, AND volumes (destroys data)
docker compose down -v
# Restart all services
docker compose restart
# Restart a specific service
docker compose restart api
Build Commands
# Build all services
docker compose build
# Build without cache
docker compose build --no-cache
# Build a specific service
docker compose build api
# Build with build arguments
docker compose build --build-arg NODE_VERSION=20 api
Service Management
# List running containers
docker compose ps
# List all containers including stopped
docker compose ps -a
# Scale a service (stateless services only)
docker compose up -d --scale worker=4
# Update a single service without touching others
docker compose up -d --no-deps --build api
# Execute a command in a running container
docker compose exec api sh
# Run a one-off command in a new container
docker compose run --rm api npm test
# View resource usage
docker compose top
Inspection Commands
# Validate and view the resolved Compose file
docker compose config
# View environment variables for a service
docker compose exec api env
# View the full merged configuration
docker compose -f compose.yaml -f compose.prod.yaml config
# List images used by services
docker compose images
Monitoring and Logs
Viewing Logs
# Follow logs from all services
docker compose logs -f
# Follow logs from specific services
docker compose logs -f api db
# Show last 100 lines
docker compose logs --tail=100 api
# Show logs with timestamps
docker compose logs -f -t api
# Show logs since a specific time
docker compose logs --since="2026-02-15T10:00:00" api
Log Drivers
Configure log drivers per service to integrate with centralized logging systems:
services:
api:
image: myapp/api:latest
logging:
driver: json-file
options:
max-size: "10m" # Maximum log file size
max-file: "5" # Number of rotated files to keep
compress: "true" # Compress rotated files
For production, consider sending logs to a logging aggregator:
services:
api:
image: myapp/api:latest
logging:
driver: syslog
options:
syslog-address: "tcp://logserver:514"
tag: "api-service"
Resource Monitoring
# Real-time resource usage statistics
docker compose stats
# Inspect container details (Compose has no inspect subcommand; use docker inspect)
docker inspect $(docker compose ps -q api)
# View running processes inside containers
docker compose top
Adding a Monitoring Stack
Extend your Compose file with Prometheus and Grafana for metrics:
services:
prometheus:
image: prom/prometheus:latest
volumes:
- ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml:ro
- prometheus-data:/prometheus
ports:
- "9090:9090"
profiles:
- monitoring
grafana:
image: grafana/grafana:latest
volumes:
- grafana-data:/var/lib/grafana
ports:
- "3001:3000"
environment:
GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD:-admin}
profiles:
- monitoring
volumes:
prometheus-data:
grafana-data:
Start the monitoring stack alongside your application:
docker compose --profile monitoring up -d
Troubleshooting
Container Fails to Start
Check the logs first:
docker compose logs api
If the container exits immediately, run it interactively to debug:
docker compose run --rm api sh
Inspect the container for configuration issues:
docker compose ps -a
docker inspect $(docker compose ps -q api)
Port Conflicts
If you see bind: address already in use, another process is using the port:
# Find what is using the port
sudo lsof -i :3000
# or
sudo ss -tlnp | grep 3000
Change the host port in your Compose file or stop the conflicting process.
Network Connectivity Issues
Verify that services are on the same network:
docker compose exec api ping db
docker network ls
docker network inspect myproject_backend
Check DNS resolution inside a container:
docker compose exec api nslookup db
docker compose exec api getent hosts db
Volume Permission Errors
When a container runs as a non-root user, it might not have permission to write to mounted volumes. Solutions:
services:
api:
image: myapp/api:latest
user: "1000:1000"
volumes:
- app-data:/app/data
Or fix ownership in the Dockerfile:
FROM node:20-alpine
RUN mkdir -p /app/data && chown -R node:node /app/data
USER node
WORKDIR /app
Dependency Startup Failures
If a service starts before its dependency is ready, add proper health checks:
# Check health status
docker compose ps
docker inspect --format='{{json .State.Health}}' myproject-db-1
Ensure your depends_on uses condition: service_healthy rather than condition: service_started.
Orphan Containers
When you rename or remove a service from your Compose file, old containers may linger:
# Remove orphan containers
docker compose up -d --remove-orphans
# Or during teardown
docker compose down --remove-orphans
Compose File Validation
Always validate your Compose file before deploying:
docker compose config --quiet
# No output = valid file
docker compose config
# Prints the fully resolved configuration
Slow Builds
Speed up builds with caching strategies:
services:
api:
build:
context: ./api
cache_from:
- myapp/api:latest
Use .dockerignore to exclude unnecessary files from the build context:
node_modules
.git
.env
*.md
dist
coverage
.vscode
Disk Space Issues
Docker can consume significant disk space over time:
# View disk usage
docker system df
# Remove unused containers, networks, images, and optionally volumes
docker system prune
# Include volumes (caution: destroys data)
docker system prune --volumes
# Remove only dangling images
docker image prune
Summary
Docker Compose transforms the complexity of multi-container applications into a single, declarative YAML file. By defining services, networks, volumes, and dependencies in one place, you get reproducible environments that work identically across development, staging, and production.
Key takeaways:
- Use compose.yaml as the canonical filename with the modern Compose specification.
- Define health checks on every stateful service and use depends_on with condition: service_healthy to ensure proper startup order.
- Separate configuration by environment using override files (compose.override.yaml for development, compose.prod.yaml for production).
- Use named volumes for persistent data and bind mounts for development code.
- Keep secrets out of environment variables — use Docker secrets or mounted files for credentials.
- Leverage custom networks to isolate services and reduce the attack surface.
- Set resource limits in production to prevent any single service from consuming all host resources.
- Configure log rotation to prevent logs from consuming all available disk space.
With a well-structured Compose file, standing up your entire application stack is a single docker compose up -d away.