Running multi-container applications in production requires more than a basic docker-compose.yml file. While Docker Compose is commonly associated with local development, it is a powerful tool for deploying production workloads to a single host (or a handful of independently managed hosts) when configured correctly. This guide walks you through hardening your Docker Compose setup for production with health checks, restart policies, resource limits, secrets management, centralized logging, and an Nginx reverse proxy with SSL termination.
Prerequisites
- Linux server with Docker Engine 24+ and Docker Compose v2 installed
- Domain name pointing to your server’s public IP address
- Basic familiarity with Docker concepts (images, containers, volumes, networks)
- SSL certificate or willingness to use Let’s Encrypt with Certbot
- SSH access to your production server
- At least 2 GB of RAM and 2 CPU cores available for your stack
Production Docker Compose Configuration
A production-ready docker-compose.yml differs significantly from a development configuration. You need explicit restart policies, health checks, resource constraints, and read-only filesystems where possible.
Restart Policies
Restart policies ensure your containers recover automatically from crashes, OOM kills, or host reboots:
```yaml
services:
  web:
    image: myapp:latest
    restart: unless-stopped
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
```
Use `unless-stopped` for most services: it restarts containers after crashes and host reboots but respects manual stops. For critical services that must always run, use `always`. Avoid `no` in production, since it leaves crashed containers down until someone intervenes manually. Note that the `deploy.restart_policy` block is primarily honored by Docker Swarm; with plain `docker compose`, the top-level `restart` key is the setting you can rely on.
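Once the stack is up, you can verify which policy a container actually received and how many times Docker has restarted it (the container name here is illustrative):

```shell
# Show the effective restart policy and the automatic restart count
docker inspect -f '{{.HostConfig.RestartPolicy.Name}} / restarts: {{.RestartCount}}' myapp-web-1
```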
Health Checks
Health checks let Docker monitor whether your application is actually functioning, not just whether the process is running:
```yaml
services:
  api:
    image: myapp-api:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
```
The start_period gives your application time to initialize before Docker starts counting failed checks. Set it higher than your application’s average startup time. Use specific health endpoints rather than checking if a port is open — a process can listen on a port while being in a broken state.
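Keep in mind that curl is not installed in many slim images. A variant of the same check using BusyBox wget, which ships in most Alpine-based images:

```yaml
services:
  api:
    image: myapp-api:latest
    healthcheck:
      # --spider requests the URL without downloading the body
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
```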
Resource Limits
Without resource limits, a single runaway container can consume all system memory and crash everything:
```yaml
services:
  api:
    image: myapp-api:latest
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 128M
```
Set limits to the maximum a service should ever use and reservations to guarantee minimum resources. Monitor your containers with docker stats for a week before setting final limits. Overly tight limits cause OOM kills and degraded performance.
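To gather that usage data, sample docker stats periodically (for example from cron) and review the peaks before committing to limits:

```shell
# One-shot snapshot of per-container CPU and memory usage
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```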
Read-Only Filesystems
Minimize the attack surface by running containers with read-only root filesystems:
```yaml
services:
  api:
    image: myapp-api:latest
    read_only: true
    tmpfs:
      - /tmp
      - /var/run
    volumes:
      - app-data:/app/data

volumes:
  app-data:
```
This prevents malicious processes from writing to the container filesystem. Use tmpfs for directories that need temporary write access and named volumes for persistent data.
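You can confirm the setup behaves as intended by attempting writes inside the running container (service name taken from the example above):

```shell
# Should fail with "Read-only file system"
docker compose exec api sh -c 'touch /etc/probe'
# Should succeed, because /tmp is a tmpfs mount
docker compose exec api sh -c 'touch /tmp/probe'
```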
Secrets and Environment Management
Hardcoded credentials in your compose file are a security risk. Docker Compose supports multiple approaches for secrets management.
Using .env Files
Create a .env file alongside your compose file:
```
# .env - NEVER commit this file
POSTGRES_PASSWORD=your_secure_password_here
API_SECRET_KEY=another_secure_value
REDIS_PASSWORD=redis_password_here
```
Reference these in your compose file:
```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
```
Add .env to your .gitignore immediately. For CI/CD pipelines, inject environment variables from your secret manager (GitHub Secrets, AWS Secrets Manager, or HashiCorp Vault).
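Values in .env should be generated, not invented. One way to create a strong password and lock the file down to the deploy user (key name from the example above):

```shell
# Create .env readable only by the current user, then append a random password
touch .env && chmod 600 .env
echo "POSTGRES_PASSWORD=$(openssl rand -base64 32)" >> .env
```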
Docker Secrets
For more secure handling, use Docker secrets which mount as files rather than environment variables:
```yaml
secrets:
  db_password:
    file: ./secrets/db_password.txt

services:
  db:
    image: postgres:16
    secrets:
      - db_password
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
```
Many official Docker images support the _FILE suffix convention, reading the secret from a mounted file instead of an environment variable. This keeps credentials out of docker inspect output and process environment listings.
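The convention is simple enough to emulate in your own entrypoint scripts. A sketch of the pattern (paths and values illustrative; in a real container the secret would be mounted under /run/secrets):

```shell
# Simulate a mounted secret file as it would appear under /run/secrets
echo "s3cret-value" > /tmp/db_password
POSTGRES_PASSWORD_FILE=/tmp/db_password

# If the _FILE variant is set, load the real value from the file
if [ -n "${POSTGRES_PASSWORD_FILE:-}" ]; then
  POSTGRES_PASSWORD="$(cat "$POSTGRES_PASSWORD_FILE")"
  export POSTGRES_PASSWORD
fi
echo "$POSTGRES_PASSWORD"   # prints s3cret-value
```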
Reverse Proxy and SSL with Nginx
Exposing application ports directly is insecure and inflexible. Use Nginx as a reverse proxy with SSL termination:
```yaml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - certbot-webroot:/var/www/certbot:ro
      - certbot-certs:/etc/letsencrypt:ro
    depends_on:
      api:
        condition: service_healthy
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      timeout: 5s
      retries: 3

  certbot:
    image: certbot/certbot
    volumes:
      - certbot-webroot:/var/www/certbot
      - certbot-certs:/etc/letsencrypt
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done'"

volumes:
  certbot-webroot:
  certbot-certs:
```
The Nginx configuration should proxy to your internal services:
```nginx
upstream api_backend {
    server api:8080;
}

server {
    listen 80;
    server_name example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://api_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Notice that only the Nginx service exposes ports to the host. All other services communicate through the Docker internal network, reducing the attack surface significantly.
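The HTTPS server block above can be hardened further. A sketch of commonly recommended additions (adjust protocol support to the clients you need to serve):

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Restrict to modern TLS versions and enable session reuse
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;

    # Instruct browsers to use HTTPS for this host for one year
    add_header Strict-Transport-Security "max-age=31536000" always;
}
```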
Logging and Monitoring
Production deployments need structured logging with rotation to prevent disk exhaustion.
JSON File Logging with Rotation
```yaml
services:
  api:
    image: myapp-api:latest
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"
        tag: "{{.Name}}"
```
Without max-size and max-file, Docker logs grow indefinitely and can fill your disk. The json-file driver is the default and works well for most setups. For multi-host environments, consider forwarding logs to a centralized system.
Forwarding to External Systems
For production observability, forward logs to an aggregation service:
```yaml
services:
  api:
    image: myapp-api:latest
    logging:
      driver: syslog
      options:
        syslog-address: "tcp://logserver:514"
        tag: "myapp-api"
```
Alternatives include the fluentd and gelf drivers for ELK or Graylog stacks. Whichever you choose, also configure log rotation at the Docker daemon level as a safety net in /etc/docker/daemon.json:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```
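Note that daemon.json changes take effect only after a daemon restart, and the new defaults apply only to containers created afterwards:

```shell
sudo systemctl restart docker
# Recreate containers so they pick up the new daemon-level log options
docker compose -f docker-compose.prod.yml up -d --force-recreate
```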
Basic Monitoring
Add container monitoring with a lightweight stack:
```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus
    deploy:
      resources:
        limits:
          memory: 256M

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    deploy:
      resources:
        limits:
          memory: 128M

volumes:
  prometheus-data:
```
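Prometheus still needs to be told where to scrape. cAdvisor exposes container metrics on port 8080 by default, so a minimal prometheus.yml could look like:

```yaml
scrape_configs:
  - job_name: cadvisor
    scrape_interval: 15s
    static_configs:
      - targets: ["cadvisor:8080"]
```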
Comparison: Docker Compose vs Kubernetes vs Docker Swarm
| Feature | Docker Compose | Kubernetes | Docker Swarm |
|---|---|---|---|
| Setup complexity | Low | High | Medium |
| Multi-host support | No (single host) | Yes (cluster) | Yes (cluster) |
| Auto-scaling | No | Yes (HPA) | Limited |
| Service discovery | DNS-based | DNS + Ingress | DNS-based |
| Secret management | File-based | Built-in (etcd) | Built-in |
| Rolling updates | Manual | Built-in | Built-in |
| Health checks | Yes | Yes (liveness/readiness) | Yes |
| Resource limits | Yes | Yes (requests/limits) | Yes |
| Learning curve | Gentle | Steep | Moderate |
| Best for | Single host, small teams | Large-scale, multi-team | Small clusters |
Docker Compose is the right choice when you deploy to a single server or a small number of servers, your team is small, and you do not need auto-scaling. Many successful SaaS products run entirely on Docker Compose in production.
Real-World Scenario
You are deploying a web application consisting of a Node.js API, a PostgreSQL database, a Redis cache, and an Nginx reverse proxy. The server has 4 GB of RAM and 2 CPU cores.
Here is the complete production compose file:
```yaml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - certbot-certs:/etc/letsencrypt:ro
    depends_on:
      api:
        condition: service_healthy
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: "0.25"
          memory: 64M
    logging:
      driver: json-file
      options:
        max-size: "5m"
        max-file: "3"

  api:
    image: myapp-api:1.2.3
    env_file: .env
    secrets:
      - db_password
      - api_key
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped
    read_only: true
    tmpfs:
      - /tmp
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 128M
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"

  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
    secrets:
      - db_password
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
      POSTGRES_DB: myapp
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 1G
        reservations:
          memory: 256M
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - redisdata:/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: "0.25"
          memory: 256M
    logging:
      driver: json-file
      options:
        max-size: "5m"
        max-file: "3"

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    file: ./secrets/api_key.txt

volumes:
  pgdata:
  redisdata:
  certbot-certs:
```
Deploy with:
```shell
docker compose -f docker-compose.prod.yml up -d
docker compose -f docker-compose.prod.yml ps
docker compose -f docker-compose.prod.yml logs --tail=50
```
This configuration allocates approximately 1.8 GB of the 4 GB available, leaving headroom for the operating system and spikes.
Gotchas and Edge Cases
Container startup order does not guarantee readiness. Using depends_on alone only waits for the container to start, not for the service inside to be ready. Always combine depends_on with condition: service_healthy and proper health checks.
Orphan containers accumulate. When you remove a service from your compose file, the old container keeps running. Always run docker compose up -d --remove-orphans to clean up stale containers.
Volume permissions cause silent failures. If your container runs as a non-root user, the mounted volume may not be writable. Pre-create volumes with the correct ownership or use an init container pattern.
Docker Compose rebuilds do not pull new images by default. Running docker compose up -d reuses cached images. Use docker compose pull && docker compose up -d to ensure you deploy the latest version.
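Combining the pull, the orphan cleanup, and image pruning into one small script keeps deploys consistent (script name hypothetical):

```shell
#!/bin/sh
# deploy.sh - pull fresh images, recreate changed services, clean up
set -e
docker compose -f docker-compose.prod.yml pull
docker compose -f docker-compose.prod.yml up -d --remove-orphans
docker image prune -f
```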
.env file variable expansion can conflict. Docker Compose interpolates ${VAR} in the compose file before passing it to the container. If your application config uses $ characters, escape them with $$ in the compose file.
Log rotation must be configured per-service and globally. Service-level logging options apply only to that service; daemon defaults in /etc/docker/daemon.json cover the rest, and they take effect only for containers created after the daemon restarts. Set both for complete coverage.
Bridge networks have DNS resolution issues with underscores. Service names with underscores may not resolve correctly in older Docker versions. Use hyphens in service names for maximum compatibility.
Summary
- Use `restart: unless-stopped` and health checks on every production service
- Set memory and CPU limits with `deploy.resources` to prevent runaway containers
- Manage secrets with Docker secrets or `.env` files; never hardcode credentials
- Place Nginx as a reverse proxy in front of application services with SSL termination
- Configure log rotation with `max-size` and `max-file` to prevent disk exhaustion
- Use `depends_on` with `condition: service_healthy` for reliable startup ordering
- Run `docker compose up -d --remove-orphans` to clean stale containers on every deploy
- Docker Compose is production-ready for single-host and small-team deployments