TL;DR — Quick Summary
Docker multi-stage builds reduce image size and eliminate build tools from production, improving security for Node.js, Go, and .NET applications.
Production Docker images that contain compilers, test frameworks, and gigabytes of development dependencies are a liability. They slow down container startup, consume unnecessary storage in registries, and expose a wider attack surface to potential exploits. Docker multi-stage builds solve this by letting you compile and test inside a full-featured build environment and then copy only the finished artifacts into a lean, minimal runtime image — all from a single Dockerfile. This guide covers multi-stage builds for Node.js, Go, and .NET, including caching strategies, security hardening, and GitHub Actions integration.
Prerequisites
- Docker Engine 24+ installed (`docker --version`)
- Basic familiarity with Dockerfiles (`FROM`, `RUN`, `COPY`, `CMD`)
- A working application in Node.js, Go, or .NET (examples provided)
- Docker BuildKit enabled (default in Docker 23+; set `DOCKER_BUILDKIT=1` for older versions)
- Optional: GitHub Actions workflow for CI/CD integration
The Problem with Single-Stage Builds
A typical Node.js Dockerfile without multi-stage builds looks like this:
```dockerfile
FROM node:22
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
CMD ["node", "dist/server.js"]
```
This image contains the full Node.js development environment: npm, native compiler toolchains, the node_modules directory including devDependencies, source files, and test utilities. The final image can easily exceed 1.2 GB for a modest application. Every layer is pushed to your registry, pulled on every deployment, and scanned for CVEs in all those tools you never actually run in production.
The same problem affects compiled languages. A Go binary built inside a golang:1.22 image carries the entire Go toolchain — compilers, linkers, standard library sources — even though the final binary is entirely self-contained and needs none of it at runtime.
How Multi-Stage Builds Work
Multi-stage builds use multiple FROM statements in a single Dockerfile. Each FROM begins a new stage with a clean filesystem. You name stages with AS and reference them in subsequent COPY --from=<stage> instructions:
```dockerfile
# Stage 1: build environment
FROM node:22 AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Stage 2: production runtime
FROM node:22-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
```
Docker builds all stages sequentially but only keeps the final stage in the output image. Every tool, temporary file, and build artifact from earlier stages is discarded. The --from=builder flag in COPY reaches back into the named stage’s filesystem to extract specific paths.
You can have as many stages as you need. A common pattern is three stages: deps (install dependencies), builder (compile), and production (copy artifacts only).
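`COPY --from` can also reference a published image directly rather than a named stage, which is handy for pulling a single prebuilt binary into your build. A minimal sketch (the busybox image and target path are illustrative):

```dockerfile
FROM alpine:3.19
# Copy one static binary straight from a published image instead of a build stage
COPY --from=busybox:1.36 /bin/busybox /usr/local/bin/busybox
```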
Node.js Multi-Stage Build
Here is a production-ready multi-stage Dockerfile for a Node.js TypeScript application:
```dockerfile
# ── Stage 1: install all dependencies ──────────────────────────
FROM node:22-alpine AS deps
WORKDIR /app
COPY package*.json ./
# npm ci already installs exactly what package-lock.json specifies;
# --frozen-lockfile is a Yarn/pnpm flag, not an npm one
RUN --mount=type=cache,target=/root/.npm \
    npm ci

# ── Stage 2: build TypeScript ──────────────────────────────────
FROM node:22-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# ── Stage 3: production runtime ────────────────────────────────
FROM node:22-alpine AS production
ENV NODE_ENV=production
WORKDIR /app

# Create non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# npm prune needs a package.json next to node_modules
COPY package*.json ./
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist

# Remove devDependencies from node_modules
RUN npm prune --omit=dev

USER appuser
EXPOSE 8080
CMD ["node", "dist/server.js"]
```
The three-stage pattern separates concerns cleanly. The deps stage installs everything including devDependencies. The builder stage compiles TypeScript to JavaScript. The production stage starts fresh, copies only the compiled output and node_modules, prunes devDependencies, and runs as a non-root user. The resulting image is roughly 180 MB versus 1.2 GB for a single-stage build.
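To check the final size yourself, build only through the production stage and inspect the result. Illustrative commands, assuming the Dockerfile above sits in the current directory and the `myapp:prod` tag is your choice:

```shell
# Build only up to the production stage and report the image size
docker build --target production -t myapp:prod .
docker images myapp:prod --format "{{.Repository}}:{{.Tag}}  {{.Size}}"
```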
Go Multi-Stage Build
Go is an ideal candidate for multi-stage builds because the compiler produces a single statically linked binary that needs nothing else from the runtime image:
```dockerfile
# ── Stage 1: compile ──────────────────────────────────────────
FROM golang:1.22-alpine AS builder
WORKDIR /build
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
    go mod download
COPY . .
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
    go build -ldflags="-s -w" -o /app/server ./cmd/server

# ── Stage 2: minimal runtime ──────────────────────────────────
FROM gcr.io/distroless/static-debian12 AS production
COPY --from=builder /app/server /server
USER nonroot:nonroot
EXPOSE 8080
ENTRYPOINT ["/server"]
```
CGO_ENABLED=0 produces a fully static binary with no shared library dependencies. The -ldflags="-s -w" strips the symbol table and debug info, reducing binary size by 20-30%. The final image uses distroless/static-debian12, which contains only the bare minimum to run a static binary — no shell, no package manager, no /bin/sh. The result is a 10-20 MB image versus 600 MB for the build stage.
If you need a shell for debugging (not recommended in production), use alpine:3.19 as the final image, or the distroless :debug image variants, which add a busybox shell. Note that gcr.io/distroless/base-debian12 also ships without a shell — it only adds glibc and CA certificates for dynamically linked binaries.
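If you do opt for a debuggable runtime, swapping the final stage is a small change. A sketch assuming the stage names from the Go Dockerfile above:

```dockerfile
# Debug-friendly final stage: includes a shell, at the cost of a larger attack surface
FROM alpine:3.19 AS production
RUN adduser -S appuser
COPY --from=builder /app/server /server
USER appuser
EXPOSE 8080
ENTRYPOINT ["/server"]
```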
.NET Multi-Stage Build
.NET separates cleanly into an SDK (for building) and a runtime (for executing). The runtime image is roughly one-quarter the size of the SDK image:
```dockerfile
# ── Stage 1: restore NuGet packages ───────────────────────────
FROM mcr.microsoft.com/dotnet/sdk:9.0-alpine AS restore
WORKDIR /src
COPY *.sln .
# COPY globs flatten directory structure, so copy each project file
# to the path the solution file expects
COPY src/MyApp/MyApp.csproj src/MyApp/
RUN --mount=type=cache,target=/root/.nuget/packages \
    dotnet restore

# ── Stage 2: build and publish ────────────────────────────────
FROM restore AS builder
COPY . .
RUN --mount=type=cache,target=/root/.nuget/packages \
    dotnet publish src/MyApp/MyApp.csproj \
        -c Release \
        -o /app/publish \
        --no-restore

# ── Stage 3: production runtime ───────────────────────────────
FROM mcr.microsoft.com/dotnet/aspnet:9.0-alpine AS production
WORKDIR /app
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=builder /app/publish .
USER appuser
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyApp.dll"]
```
The restore stage downloads NuGet packages once and caches them. The builder stage compiles and publishes. The production stage uses aspnet:9.0-alpine, which includes only the ASP.NET Core runtime — no SDK, no compilers, no source code. A typical .NET 9 API shrinks from 1.1 GB (SDK image) to approximately 130 MB (Alpine runtime image).
Caching Strategies
BuildKit cache mounts (--mount=type=cache) are the most impactful optimization for rebuild speed. Unlike regular Docker layer caching, cache mounts persist across builds without being included in the image:
```dockerfile
# npm — cache the global module cache directory
RUN --mount=type=cache,target=/root/.npm \
    npm ci

# Yarn — cache the Yarn cache directory
RUN --mount=type=cache,target=/usr/local/share/.cache/yarn \
    yarn install --frozen-lockfile

# Go — cache module downloads and build cache separately
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go build ./...

# NuGet — cache the package store
RUN --mount=type=cache,target=/root/.nuget/packages \
    dotnet restore
```
Layer ordering matters equally. Copy only the dependency manifest files (package.json, go.mod, *.csproj) before copying source code. Docker invalidates the cache for all layers after the first changed layer, so if you copy source first, every dependency install reruns on every source change:
```dockerfile
# Correct: copy manifests first, source second
COPY package*.json ./
RUN npm ci
COPY src/ ./src/
RUN npm run build

# Wrong: copying everything invalidates the npm ci cache every time
COPY . .
RUN npm ci && npm run build
```
Image Size Comparison
| Application | Single-Stage Image | Multi-Stage Image | Reduction |
|---|---|---|---|
| Node.js TypeScript API | 1,240 MB | 185 MB | 85% |
| Go REST service | 612 MB | 12 MB | 98% |
| .NET 9 ASP.NET Core API | 1,080 MB | 130 MB | 88% |
| Python Flask app | 945 MB | 180 MB | 81% |
| Java Spring Boot (JRE) | 880 MB | 250 MB | 72% |
These numbers reflect real-world applications. Your results will vary based on the number of dependencies, but multi-stage builds consistently deliver 70-98% size reductions.
Real-World Scenario
You are building and deploying a Node.js REST API through GitHub Actions. Currently, every CI run takes 8 minutes to build a 1.2 GB image and push it to GitHub Container Registry. With multi-stage builds and BuildKit layer caching, you can reduce this to under 2 minutes on cached runs.
Here is the complete GitHub Actions workflow:
```yaml
name: Build and Deploy

on:
  push:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=sha-
            type=ref,event=branch
            type=raw,value=latest,enable={{is_default_branch}}

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          target: production
          cache-from: type=registry,ref=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:cache
          cache-to: type=registry,ref=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:cache,mode=max
```
The cache-from and cache-to directives store the full layer cache in the registry alongside your image. On the first run, every layer is built from scratch. On subsequent runs, Docker pulls cached layers and only rebuilds what changed. The target: production flag builds only up to the production stage, skipping any debug stages you may have defined.
Gotchas and Edge Cases
ARG values do not cross stage boundaries. If you define ARG VERSION=1.0 before the first FROM, it is available globally. If you define it inside a stage, it only exists in that stage. Re-declare ARG in each stage that needs it:
```dockerfile
ARG VERSION=1.0
FROM node:22-alpine AS builder
ARG VERSION
RUN echo "Building version $VERSION"
```
COPY --from uses the stage’s filesystem snapshot at the time the stage completed. If a later RUN command in the source stage deletes a file, that deletion is visible to COPY --from. Plan your stage outputs carefully.
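A minimal illustration of this snapshot behavior (hypothetical file paths):

```dockerfile
FROM alpine:3.19 AS stage1
RUN echo "data" > /artifact.txt
RUN rm /artifact.txt   # the deletion is part of stage1's final snapshot

FROM alpine:3.19
# This COPY would fail the build: /artifact.txt no longer exists in stage1
# COPY --from=stage1 /artifact.txt /artifact.txt
```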
Build context is shared across all stages. The entire build context (your working directory) is sent to the Docker daemon once. Use a .dockerignore file to exclude node_modules, .git, and test fixtures from the context to speed up context transfer and prevent sensitive files from appearing in intermediate layers.
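A starting-point .dockerignore for the Node.js example might look like this (entries are illustrative — adjust to your project layout):

```
# .dockerignore — keep the build context small and free of secrets
node_modules
dist
coverage
.git
.env*
*.md
Dockerfile
.dockerignore
```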
The --target flag stops the build at a named stage. Use this for testing individual stages:
```shell
# Build only the builder stage
docker build --target builder -t myapp:debug .

# Run interactively to debug
docker run --rm -it myapp:debug sh
```
Timestamps are not preserved by COPY --from. Files copied between stages get the current build timestamp. This can affect applications that check file modification times.
Troubleshooting
“Dockerfile parse error … unknown flag: mount” (or a similar failure) after adding RUN --mount — Ensure BuildKit is enabled. Set DOCKER_BUILDKIT=1 or upgrade to Docker 23+, where it is the default. The legacy builder supports basic multi-stage builds but not cache mounts and other BuildKit-only syntax.
Final image still contains source files — You forgot to use COPY --from=builder and are instead using a plain COPY . . in the production stage. Review every COPY in your production stage and ensure it references a specific path from a named stage.
npm prune fails in the production stage — npm needs a package.json next to node_modules to know which packages are dev-only. Copy package*.json into the production stage before running npm prune.
Go binary fails with “no such file or directory” in the runtime stage — the binary was dynamically linked (CGO_ENABLED=1 is the default when a C toolchain is available) but the runtime image lacks a matching libc; distroless/static ships none at all. Set CGO_ENABLED=0 explicitly to produce a fully static binary. A separate “exec format error” usually means GOOS/GOARCH does not match the runtime platform.
Cache mounts do not persist in CI — --mount=type=cache is a local BuildKit feature. In CI, use the registry cache strategy (cache-from/cache-to) instead. Cache mounts and registry caches serve different purposes and can be combined.
Image size did not decrease significantly — Check that your production stage does not inadvertently copy unnecessary directories. Run docker history myimage:latest to see which layers contribute the most size.
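docker history makes the per-layer cost visible. For example:

```shell
# Show each layer's size alongside the instruction that created it
docker history --format "{{.Size}}\t{{.CreatedBy}}" myimage:latest
```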
Summary
- Single-stage builds include compilers, dev tools, and source code in the final image — multi-stage builds eliminate all of that
- Use `COPY --from=<stage>` to copy only compiled artifacts into a minimal runtime base image
- Node.js apps shrink 85%, Go apps shrink 98%, and .NET apps shrink 88% with multi-stage builds
- Mount package manager caches with `--mount=type=cache` to accelerate incremental rebuilds without bloating the image
- Copy dependency manifests before source code to maximize Docker layer cache reuse
- Use `gcr.io/distroless/static-debian12` for Go binaries that need zero OS overhead
- Run containers as non-root users and use read-only filesystems where possible
- In GitHub Actions, use `cache-from`/`cache-to` with a registry cache to share BuildKit layers across runs
- Use `--target` to build and debug specific stages without running the full build