TL;DR — Quick Summary
A practical home lab and SME guide for migrating from a single-node Docker Compose setup to a multi-node, highly available Docker Swarm cluster.
Kubernetes is the undisputed titan of container orchestration. It runs the internet’s largest infrastructure. But what happens if you are a small team, an agency, or a homelabber who needs high availability without the crushing overhead of deploying and maintaining a Kubernetes control plane?
Enter Docker Swarm.
Docker Swarm is built directly into the standard Docker daemon. It allows you to link multiple Linux machines together into a single virtual Docker host. If a server physically catches fire, Swarm detects the lost node within seconds and reschedules your containers on the surviving servers. It is incredibly easy to maintain and understand.
In this tutorial, we will turn three separate Linux servers into a unified, highly-available Docker Swarm cluster.
Architecture Overview
A Docker Swarm consists of two types of nodes:
- Manager Nodes: These handle cluster management and scheduling, and maintain the Swarm state via the embedded Raft consensus algorithm. A Swarm needs at least one Manager; for fault tolerance, run an odd number (3 or 5) so the cluster keeps a Raft quorum when a manager fails.
- Worker Nodes: These strictly execute tasks (containers) assigned to them by the Managers.
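A node's role is not fixed at join time. Once the cluster is running, you can change roles from any manager with the real `docker node promote` / `docker node demote` commands; the node names below are the example hosts used in this tutorial.

```shell
# Promote workers to managers to reach a fault-tolerant quorum of 3
# (run these on an existing manager node).
docker node promote worker-1
docker node promote worker-2

# Demote a manager back to a worker if you want fewer managers:
docker node demote worker-2
```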
For this tutorial, assume we have three servers with Docker already installed:
- manager-1 (IP: 192.168.1.10)
- worker-1 (IP: 192.168.1.11)
- worker-2 (IP: 192.168.1.12)
Step 1: Open Firewall Ports
Swarm nodes must be able to communicate with each other over the network. If your cloud provider or local machines use a firewall (like ufw), you must open the following ports between the node IP addresses:
- TCP 2377: Cluster management communications
- TCP/UDP 7946: Communication among nodes
- UDP 4789: Overlay network traffic
On Ubuntu using UFW, you would run this on every node:
sudo ufw allow 2377/tcp
sudo ufw allow 7946
sudo ufw allow 4789/udp
Step 2: Initialize the Swarm
SSH into your primary server (manager-1). This machine will become the foundation of your cluster.
Run the initialization command, explicitly stating the IP address that other nodes should use to connect to it:
docker swarm init --advertise-addr 192.168.1.10
Docker will output something that looks like this:
Swarm initialized: current node (dxn1zl6l...) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-49nj1cmql0jkz8smp... 192.168.1.10:2377
Save that join command. You will need it in the next step.
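If you do lose the join command, you do not have to re-initialize anything: any manager can reprint the current token on demand.

```shell
# Print the worker join command again (run on a manager):
docker swarm join-token worker

# Print the manager join command, for adding additional managers:
docker swarm join-token manager
```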
Step 3: Join the Worker Nodes
Next, SSH into your second server (worker-1) and paste the exact command provided by the initialization step:
docker swarm join --token SWMTKN-1-49nj1cmql0jkz8smp... 192.168.1.10:2377
You should see:
This node joined a swarm as a worker.
Repeat this step precisely on your third server (worker-2).
Step 4: Verify the Swarm Status
Go back to your Manager node (manager-1). Run the following command to see the layout of your new cluster:
docker node ls
The output should show all three nodes, with manager-1 marked as the Leader:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
dxn1zl6lqv8a9s manager-1 Ready Active Leader
8pkhc2s1d6g9m worker-1 Ready Active
3fld1m9b8h1q5 worker-2 Ready Active
Congratulations! You now have a working Docker Swarm cluster.
Step 5: Deploying a Scaled Service
Instead of running individual containers using docker run, in Swarm mode, we deploy Services. A service describes the desired state of a container (e.g., “I want 3 replicas of Nginx running at all times”).
On the Manager node, create a simple Nginx web server service scaled to 3 replicas:
docker service create --name my-web --replicas 3 -p 8080:80 nginx:latest
You can watch Swarm distribute the containers across your nodes by running:
docker service ps my-web
You will typically see one Nginx container on each of manager-1, worker-1, and worker-2, though the exact placement is up to the scheduler.
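The replica count is not fixed at creation time. You can resize the service whenever you like, and Swarm reconciles the running containers to match the new desired state.

```shell
# Scale the service up; Swarm starts the extra replicas on
# whichever nodes have capacity.
docker service scale my-web=5

# Equivalent long form:
docker service update --replicas 5 my-web
```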
The Magic of the Routing Mesh
Here is where Swarm shines: You mapped port 8080 to port 80. Docker Swarm uses an ingress routing mesh.
If you open a browser and go to http://192.168.1.12:8080 (worker-2’s IP address), the routing mesh accepts the connection and instantly forwards it to the correct internal container, even if that specific container is currently running on manager-1. You can hit any node’s IP on port 8080, and the swarm will load balance the traffic to an available Nginx container automatically.
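You can verify the routing mesh from the command line as well as the browser. This loop hits the published port on every node IP from the tutorial's example setup; each request should succeed regardless of where the replicas actually run.

```shell
# Request the service through each node's IP; the mesh routes
# every connection to some healthy Nginx replica.
for ip in 192.168.1.10 192.168.1.11 192.168.1.12; do
  curl -s -o /dev/null -w "$ip -> HTTP %{http_code}\n" "http://$ip:8080"
done
```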
Step 6: Test High Availability (Simulate a Failure)
To prove how robust this is, let’s intentionally kill a node.
- SSH into worker-1 and completely turn off the Docker service: sudo systemctl stop docker
- Quickly go back to manager-1 and run docker service ps my-web.
Once worker-1 misses its heartbeats, Swarm will mark the node Down and show its container as "Failed". It will then automatically spin up a replacement container on one of the surviving nodes to maintain the 3 replicas you requested.
Your website keeps serving traffic from the surviving replicas the whole time, so visitors see little or no disruption.
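When you bring the failed node back, one caveat is worth knowing: Swarm does not move running containers back onto a recovered node on its own. A forced rolling update makes the scheduler re-place the replicas.

```shell
# Back on worker-1, bring Docker up again:
sudo systemctl start docker

# On the manager: Swarm will not rebalance automatically, so
# force a rolling redeploy to spread replicas across nodes again.
docker service update --force my-web
```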
Transitioning from Docker Compose
If you currently use docker-compose.yml files, you do not need to learn a new manifest format like Kubernetes YAML. Docker Swarm deploys Compose files (version 3 format) natively using Docker Stacks; just note that Compose-only options such as build are ignored by stack deployments, so images must be pre-built and pullable by every node.
To deploy an existing Compose file to the swarm, you simply run:
docker stack deploy -c docker-compose.yml my-stack
Swarm will read the standard Compose file and distribute the application safely across your cluster.
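For reference, here is a minimal Compose file sketch for the Nginx service from this tutorial; the service name "web" is illustrative. The deploy block is what docker stack deploy reads to set replica counts and restart behavior.

```yaml
# docker-compose.yml — minimal stack example (Compose v3 format)
version: "3.8"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"      # published via the ingress routing mesh
    deploy:
      replicas: 3      # desired state: 3 copies at all times
      restart_policy:
        condition: on-failure
```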
Conclusion
Docker Swarm remains an incredibly viable, lightweight alternative to Kubernetes. For teams that want self-healing, multi-node applications without dedicating a full-time DevSecOps position to managing K3s or EKS clusters, Docker Swarm provides exactly the features you need in an afternoon of setup.