TL;DR — Quick Summary
Dagger lets you write CI/CD pipelines as code in Go, Python, or TypeScript that run identically on your laptop and on any CI platform without changes.
Dagger is a programmable CI/CD engine that runs your pipeline logic inside containers, using real programming languages instead of YAML. If you have ever spent hours debugging a GitHub Actions failure that never reproduced locally, or maintained five slightly different pipeline configurations across GitHub, GitLab, and Jenkins, Dagger addresses both problems at once: your entire pipeline is typed code that runs identically on your laptop and any CI platform. This guide covers the architecture, the SDK patterns, services, caching, secrets, and CI integration.
Prerequisites
- Docker Desktop or Docker Engine running locally
- Node.js 18+ (for TypeScript SDK), Go 1.21+, or Python 3.11+ depending on your chosen SDK
- Basic familiarity with containers and at least one of Go, Python, or TypeScript
- A project you want to build and test (a Node.js app is used in examples)
Why Dagger: The Vendor Lock-In Problem
Every major CI platform has its own pipeline DSL. GitHub Actions uses YAML with uses: steps, GitLab CI uses .gitlab-ci.yml stages, Jenkins uses a Groovy-based Declarative Pipeline DSL, and CircleCI uses yet another YAML schema. The result is that your pipeline knowledge and pipeline code are completely non-portable.
The secondary problem is the “works on my machine” gap. A CI system is a different environment from your development machine. Debugging a failing pipeline means pushing a commit, waiting for a runner, reading truncated logs, and repeating. You cannot run GitHub Actions locally in any meaningful way without heavy workarounds.
Dagger solves this with three design decisions:
- Pipelines are container-native. Every step runs in a container via BuildKit. Caching, isolation, and portability come from the container model, not from CI platform abstractions.
- Pipelines are real code. You write Go, Python, or TypeScript. You get types, tests, IDE support, and code reuse — things YAML cannot offer.
- CI integration is a thin wrapper. Your CI YAML becomes a single dagger call invocation. All logic stays in your code.
| Feature | Dagger | GitHub Actions | GitLab CI | Jenkins | Earthly |
|---|---|---|---|---|---|
| Runs locally without modification | Yes | No | No | Partial | Yes |
| Language for pipeline logic | Go/Python/TS | YAML | YAML | Groovy | Earthfile DSL |
| Vendor lock-in | None | High | High | Medium | Low |
| Container-native caching | BuildKit layers | Limited | Limited | None | BuildKit layers |
| Reusable modules ecosystem | Daggerverse | Actions Marketplace | None | Plugins | None |
| Type safety | Full | No | No | Partial | No |
| IDE support | Full | Extensions | Extensions | Extensions | None |
Architecture: How Dagger Works
The Dagger architecture has three layers.
Dagger Engine is a long-running daemon that wraps BuildKit. When you run dagger call, the CLI connects to the Engine, which executes your pipeline steps in isolated containers. The Engine exposes a GraphQL API through a local session, and all SDK clients communicate with it via this API.
SDK clients (Go, Python, TypeScript, Elixir) generate strongly-typed wrappers around the GraphQL API. When your TypeScript code calls dag.container().withExec(["npm", "test"]), it builds a GraphQL query that is sent to the Engine. The Engine evaluates it lazily — nothing executes until you call .stdout() or .sync() to retrieve a result, which allows Dagger to optimize the execution graph.
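As a rough sketch of that translation (field names follow the public API; the exact wire format is an internal detail), a chain like dag.container().from("node:20-alpine").withExec(["npm", "test"]).stdout() becomes a nested query:

```graphql
query {
  container {
    from(address: "node:20-alpine") {
      withExec(args: ["npm", "test"]) {
        stdout
      }
    }
  }
}
```

Each chained method adds one level of nesting, which is why nothing runs until a leaf field like stdout is requested.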
Dagger Modules are the packaging unit. A module is a directory containing a dagger.json manifest, your pipeline code, and any dependencies. Modules can be published to the Daggerverse (daggerverse.dev) and consumed by other modules with dagger install.
Installation
Install the dagger CLI using the official script:
curl -fsSL https://dl.dagger.io/dagger/install.sh | BIN_DIR=/usr/local/bin sh
dagger version
On macOS with Homebrew:
brew install dagger/tap/dagger
Docker must be running. Dagger uses Docker (or another compatible container runtime, such as Podman) to host the Engine. Verify connectivity:
docker info && dagger version
Writing Pipelines in TypeScript
Initialize a new Dagger module in your project:
dagger init --sdk=typescript --name=my-pipeline
This creates dagger.json and src/index.ts. A complete build-and-test pipeline for a Node.js app:
import { dag, Container, Directory, object, func } from "@dagger.io/dagger";
@object()
export class MyPipeline {
@func()
async build(source: Directory): Promise<Container> {
const nodeCache = dag.cacheVolume("node-modules");
return dag
.container()
.from("node:20-alpine")
.withMountedDirectory("/app", source)
.withMountedCache("/app/node_modules", nodeCache)
.withWorkdir("/app")
.withExec(["npm", "ci"])
.withExec(["npm", "run", "build"]);
}
@func()
async test(source: Directory): Promise<string> {
const built = await this.build(source);
return built
.withExec(["npm", "test", "--", "--forceExit"])
.stdout();
}
@func()
async lint(source: Directory): Promise<string> {
const nodeCache = dag.cacheVolume("node-modules");
return dag
.container()
.from("node:20-alpine")
.withMountedDirectory("/app", source)
.withMountedCache("/app/node_modules", nodeCache)
.withWorkdir("/app")
.withExec(["npm", "ci"])
.withExec(["npm", "run", "lint"])
.stdout();
}
}
Run it locally, passing your current directory as the source:
dagger call test --source=.
dagger call lint --source=.
The withMountedCache call creates a persistent cache volume for node_modules that survives between pipeline runs. The first run downloads dependencies; subsequent runs reuse the cached volume — the same effect as BuildKit layer caching, but for arbitrary directories.
Writing Pipelines in Go
The same pipeline in Go:
package main
import (
"context"
"dagger.io/dagger"
)
type MyPipeline struct{}
func (m *MyPipeline) Build(ctx context.Context, source *dagger.Directory) (*dagger.Container, error) {
nodeCache := dag.CacheVolume("node-modules")
return dag.Container().
From("node:20-alpine").
WithMountedDirectory("/app", source).
WithMountedCache("/app/node_modules", nodeCache).
WithWorkdir("/app").
WithExec([]string{"npm", "ci"}).
WithExec([]string{"npm", "run", "build"}), nil
}
func (m *MyPipeline) Test(ctx context.Context, source *dagger.Directory) (string, error) {
built, err := m.Build(ctx, source)
if err != nil {
return "", err
}
return built.
WithExec([]string{"npm", "test", "--", "--forceExit"}).
Stdout(ctx)
}
The API is identical across SDKs. A team that writes Go pipelines and a team that writes TypeScript pipelines can share and consume each other’s Dagger Modules via the Daggerverse without needing to read each other’s code.
Writing Pipelines in Python
import dagger
from dagger import dag, function, object_type
@object_type
class MyPipeline:
@function
async def build(self, source: dagger.Directory) -> dagger.Container:
node_cache = dag.cache_volume("node-modules")
return (
dag.container()
.from_("node:20-alpine")
.with_mounted_directory("/app", source)
.with_mounted_cache("/app/node_modules", node_cache)
.with_workdir("/app")
.with_exec(["npm", "ci"])
.with_exec(["npm", "run", "build"])
)
@function
async def test(self, source: dagger.Directory) -> str:
built = await self.build(source)
return await built.with_exec(["npm", "test", "--", "--forceExit"]).stdout()
Services: Databases and Redis in Tests
Dagger’s Service abstraction lets you attach ephemeral containers as network-accessible services during a pipeline run. This eliminates the need for mocked databases in integration tests.
@func()
async integrationTest(source: Directory): Promise<string> {
const postgres = dag
.container()
.from("postgres:16-alpine")
.withEnvVariable("POSTGRES_PASSWORD", "test")
.withEnvVariable("POSTGRES_DB", "testdb")
.withExposedPort(5432)
.asService();
const redis = dag
.container()
.from("redis:7-alpine")
.withExposedPort(6379)
.asService();
return dag
.container()
.from("node:20-alpine")
.withMountedDirectory("/app", source)
.withWorkdir("/app")
.withServiceBinding("db", postgres)
.withServiceBinding("cache", redis)
.withEnvVariable("DATABASE_URL", "postgresql://postgres:test@db:5432/testdb")
.withEnvVariable("REDIS_URL", "redis://cache:6379")
.withExec(["npm", "ci"])
.withExec(["npm", "run", "test:integration"])
.stdout();
}
The services are started before the test container, live for the duration of the pipeline step, and are stopped automatically. They are reachable by hostname inside the pipeline network (db and cache in this example).
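A function that returns a Service can also be run on its own for local debugging. Assuming your module exposes a postgres function returning the service defined above (a hypothetical function name; the up subcommand and --ports flag shape may vary by Dagger version):

```shell
# Hypothetical: expose the module's postgres service on localhost:5432
dagger call postgres up --ports=5432:5432
```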
Secrets Management
Dagger has a first-class Secret type that ensures sensitive values are never written to logs, cached, or exposed in traces.
Pass a secret from the CLI:
dagger call publish --source=. --registry-token=env:REGISTRY_TOKEN
Use it in your TypeScript pipeline:
@func()
async publish(source: Directory, registryToken: Secret): Promise<string> {
return dag
.container()
.from("node:20-alpine")
.withMountedDirectory("/app", source)
.withWorkdir("/app")
.withExec(["npm", "ci"])
.withExec(["npm", "run", "build"])
.withSecretVariable("NPM_TOKEN", registryToken)
.withExec(["npm", "publish"])
.stdout();
}
The withSecretVariable and withMountedSecret (for file-based secrets) methods pass the value into the container without exposing it in the Dagger trace, Dagger Cloud UI, or terminal output.
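Beyond env:, the CLI accepts other secret sources; file: and cmd: are the common ones (the file path and command below are placeholders):

```shell
# From a file on the host (the file's contents become the secret value)
dagger call publish --source=. --registry-token=file:./npm-token.txt

# From the output of a command
dagger call publish --source=. --registry-token=cmd:"gh auth token"
```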
Multi-Platform Builds
Build for multiple CPU architectures in parallel using Dagger’s platform support:
@func()
async buildMultiPlatform(source: Directory): Promise<Container[]> {
// Platform is imported from "@dagger.io/dagger"
const platforms: Platform[] = [
"linux/amd64" as Platform,
"linux/arm64" as Platform,
];
return Promise.all(
platforms.map((platform) =>
dag
.container({ platform })
.from("node:20-alpine")
.withMountedDirectory("/app", source)
.withWorkdir("/app")
.withExec(["npm", "ci"])
.withExec(["npm", "run", "build"])
)
);
}
BuildKit handles the QEMU emulation for cross-platform builds transparently.
Dagger Modules and the Daggerverse
A module is the reusable packaging unit in the Dagger ecosystem. Your dagger.json looks like:
{
"name": "my-pipeline",
"sdk": "typescript",
"dependencies": [
{
"name": "golang",
"source": "github.com/purpleclay/daggerverse/golang@v0.1.0"
}
]
}
Install a third-party module:
dagger install github.com/shykes/daggerverse/wolfi@v0.1.2
Once installed, the module’s functions are callable from your own module’s code as a typed dependency. The Daggerverse at daggerverse.dev is the public registry for community-published modules — you can find pre-built modules for Go testing, Docker publishing, GitHub releases, Helm chart deployments, and more.
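You can also inspect and call a remote module straight from the CLI without installing it. Using the wolfi module address from above (the available function names depend on the module):

```shell
# List the functions the module exposes
dagger functions -m github.com/shykes/daggerverse/wolfi@v0.1.2

# Call one of them directly
dagger call -m github.com/shykes/daggerverse/wolfi@v0.1.2 container
```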
CI Integration
Dagger’s CI integration strategy is always the same: keep CI YAML as thin as possible. All logic stays in the module.
GitHub Actions:
name: CI
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dagger/dagger-for-github@v6
with:
verb: call
args: test --source=.
cloud-token: ${{ secrets.DAGGER_CLOUD_TOKEN }}
GitLab CI:
test:
image: docker:24
services:
- docker:24-dind
before_script:
- curl -fsSL https://dl.dagger.io/dagger/install.sh | BIN_DIR=/usr/local/bin sh
script:
- dagger call test --source=.
CircleCI:
jobs:
test:
machine:
image: ubuntu-2204:current
steps:
- checkout
- run: curl -fsSL https://dl.dagger.io/dagger/install.sh | BIN_DIR=/usr/local/bin sh
- run: dagger call test --source=.
Jenkins:
pipeline {
agent { docker { image 'docker:24-dind' } }
stages {
stage('Test') {
steps {
sh 'curl -fsSL https://dl.dagger.io/dagger/install.sh | BIN_DIR=/usr/local/bin sh'
sh 'dagger call test --source=.'
}
}
}
}
In every case, the dagger call test --source=. command is identical. Switching CI providers requires only changing the thin wrapper — not rewriting pipeline logic.
Dagger Cloud
Dagger Cloud is the optional SaaS observability layer. Key features:
- Trace UI — a visual timeline of every container step, with logs, durations, and cache hit/miss status
- Persistent cache sharing — your team shares a single distributed cache, so a cache warmed by one developer’s run benefits CI and vice versa
- TUI — the terminal UI renders a live pipeline tree during execution, showing which steps are running, which are waiting, and which have cache hits
Connect to Dagger Cloud:
dagger login
dagger call test --source=.
Every run automatically uploads a trace to app.dagger.io, accessible by your whole team.
Real-World Scenario: Full Node.js Pipeline
You have a Node.js API that needs linting, unit tests with a PostgreSQL database, a Docker image build, and a push to GitHub Container Registry on every merge to main.
@func()
async ci(
source: Directory,
registryToken: Secret,
branch: string
): Promise<string> {
// Run lint and tests in parallel
const [lintResult, testResult] = await Promise.all([
this.lint(source),
this.integrationTest(source),
]);
if (branch !== "main") {
return `lint: ok\ntests: ok\nskipped publish (branch: ${branch})`;
}
// Build and push Docker image only on main
const ref = await dag
.container()
.from("node:20-alpine")
.withMountedDirectory("/app", source)
.withWorkdir("/app")
.withExec(["npm", "ci"])
.withExec(["npm", "run", "build"])
.withEntrypoint(["node", "dist/server.js"])
.withRegistryAuth("ghcr.io", "github-actions", registryToken)
.publish("ghcr.io/my-org/my-api:latest");
return `published: ${ref}`;
}
Running locally: dagger call ci --source=. --registry-token=env:GHCR_TOKEN --branch=main
Running in GitHub Actions: the exact same command in a dagger-for-github step with the token injected from ${{ secrets.GHCR_TOKEN }}.
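Spelled out, a minimal publish job for that flow might look like this (the job name, secret name, and branch filter are illustrative):

```yaml
publish:
  runs-on: ubuntu-latest
  if: github.ref == 'refs/heads/main'
  steps:
    - uses: actions/checkout@v4
    - uses: dagger/dagger-for-github@v6
      with:
        verb: call
        args: ci --source=. --registry-token=env:GHCR_TOKEN --branch=main
      env:
        GHCR_TOKEN: ${{ secrets.GHCR_TOKEN }}
```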
Gotchas and Edge Cases
Docker-in-Docker on CI — Dagger requires access to a Docker daemon. On GitHub Actions with ubuntu-latest runners, Docker is pre-installed. On GitLab CI, use the docker:dind service and set DOCKER_HOST=tcp://docker:2376.
Module SDK version pinning — The dagger.json pins the SDK version. If you upgrade the dagger CLI but not the module SDK, you may get version mismatch errors. Run dagger develop to upgrade the module SDK in sync with your CLI version.
Large monorepos — Passing an entire monorepo directory as a Directory input copies all files into the Dagger filesystem. Use withDirectory with a filter or pass only the subdirectory your pipeline needs to keep transfers fast.
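Concretely, that often just means pointing --source at the subdirectory rather than the repo root (the path here is illustrative):

```shell
# Upload only this service's files, not the whole monorepo
dagger call test --source=./services/api
```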
Cache invalidation — withMountedCache caches are keyed by the volume name. If you want separate caches per branch, use a dynamic volume name like node-modules-${branch}.
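One way to build such a name is a small helper inside your module; cacheVolumeName below is a hypothetical plain-TypeScript function, not part of the Dagger SDK:

```typescript
// Hypothetical helper (not part of the Dagger SDK): normalize a git
// branch name into a stable, collision-safe cache volume key.
function cacheVolumeName(base: string, branch: string): string {
  const safe = branch
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // slashes, dots, spaces -> dashes
    .replace(/^-+|-+$/g, "");    // trim stray leading/trailing dashes
  return `${base}-${safe}`;
}

console.log(cacheVolumeName("node-modules", "feature/New-UI"));
// → node-modules-feature-new-ui
```

Inside a function you would then mount dag.cacheVolume(cacheVolumeName("node-modules", branch)) instead of a fixed name.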
Windows support — Dagger requires a Linux container backend. On Windows, Docker Desktop with WSL2 is required. The dagger CLI itself runs natively on Windows, but pipeline execution always happens in Linux containers.
Summary
- Dagger solves CI vendor lock-in by moving pipeline logic into container-native code (Go, Python, TypeScript)
- The Dagger Engine wraps BuildKit and exposes a GraphQL API; SDK clients provide typed wrappers in your language of choice
- withMountedCache gives you persistent, cross-run caching for dependencies without custom CI caching plugins
- Services (asService + withServiceBinding) provide real databases and Redis for integration tests without mocks
- Secrets use a typed Secret parameter that is never logged or cached — pass values from the CLI with env: or file: sources
- CI integration is always a thin wrapper around dagger call — the identical command runs on GitHub Actions, GitLab CI, CircleCI, and Jenkins
- Dagger Modules are publishable to the Daggerverse and composable — reuse community-built pipeline components
- Dagger Cloud adds trace visualization, persistent distributed cache sharing, and live TUI monitoring