TL;DR — Quick Summary

Pulumi lets you define cloud infrastructure using TypeScript, Python, Go, C#, or Java. This guide covers core concepts, stacks, state backends, testing, and CI/CD integration.

Pulumi is an infrastructure as code platform that lets you define, deploy, and manage cloud resources using real programming languages — TypeScript, Python, Go, C#, Java — instead of domain-specific languages like HCL or YAML. If you have ever hit a wall with Terraform loops, wrestled with CloudFormation’s JSON verbosity, or wished you could test your infrastructure with the same testing framework you use for application code, Pulumi solves exactly those problems. This guide covers everything from installation and core concepts through testing, state management, CI/CD integration, and a practical multi-cloud example.

Prerequisites

  • Familiarity with at least one of: TypeScript/JavaScript, Python, Go, C#, or Java
  • An account with AWS, Azure, or GCP (free tier is sufficient for examples)
  • Node.js 18+ installed (for TypeScript/JavaScript examples)
  • Basic understanding of cloud concepts (VPCs, storage buckets, VMs)
  • Pulumi Cloud account (free tier) or an alternative state backend

Why Pulumi: Real Languages vs. HCL and YAML

The fundamental difference between Pulumi and tools like Terraform or CloudFormation is that Pulumi programs are written in general-purpose programming languages. This unlocks capabilities that are awkward or impossible in DSL-based tools.

Full IDE support — Your editor provides autocomplete for every resource property, inline documentation, and type errors before you ever run a deployment. Mistyping an AWS S3 bucket property name is a compile-time error, not a runtime failure after a 30-second API call.

Standard control flow — Loops, conditionals, functions, and classes all work as expected. Creating 10 subnets is a for loop, not Terraform’s count meta-argument or for_each gymnastics.
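For example, carving one subnet CIDR per availability zone is ordinary iteration over the standard-library ipaddress module — a plain-Python sketch, independent of any Pulumi SDK:

```python
import ipaddress

# One /24 per availability zone, carved from the VPC's /16 range
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
vpc = ipaddress.ip_network("10.0.0.0/16")
cidrs = [str(net) for net, _ in zip(vpc.subnets(new_prefix=24), azs)]
print(cidrs)  # ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24']
```

Each CIDR would then feed directly into a subnet resource constructor inside the same loop, as the Python example later in this guide shows.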

Reusable components as packages — You can publish infrastructure components to npm, PyPI, or Maven. A team that builds a compliant VPC module publishes it as a package; other teams add it as a dependency and import it like any library.

Testing with standard frameworks — Unit tests use Jest, pytest, or Go’s testing package. Property tests use Policy as Code. Integration tests spin up real infrastructure and validate it end-to-end.

Same language for app and infra — A TypeScript team can keep infrastructure code in the same repository, with the same linting rules, the same CI pipeline, and shared utility functions.

| Feature | Pulumi | Terraform/OpenTofu | AWS CDK | Crossplane | CloudFormation |
|---|---|---|---|---|---|
| Language | TS/Python/Go/C#/Java | HCL | TS/Python/Java/Go | YAML/CRDs | JSON/YAML |
| IDE support | Full (types + autocomplete) | Partial (HCL plugin) | Full | Minimal | Minimal |
| Testing | Standard frameworks | Terratest (Go) | Standard frameworks | None built-in | None built-in |
| State management | Cloud/S3/GCS/Azure/local | Cloud/S3/GCS/Azure/local | CloudFormation stacks | Kubernetes etcd | CloudFormation |
| Multi-cloud | Yes (single program) | Yes (multiple providers) | AWS-focused | Yes | AWS only |
| Reusable packages | npm/PyPI/Maven/NuGet | Terraform Registry | npm/PyPI/Maven/NuGet | Helm/OCI | Nested stacks |
| Learning curve | Low (uses existing language) | Medium (learn HCL) | Low (uses existing language) | High (Kubernetes CRDs) | High (verbose JSON/YAML) |

Architecture: How Pulumi Works

Understanding Pulumi’s architecture helps you reason about what happens when you run pulumi up.

Language host — A process running your chosen language runtime (Node.js, Python, JVM, etc.) that executes your program. It calls the Pulumi SDK to register resources.

Pulumi engine — The core orchestration process that receives resource registration requests from the language host, computes the dependency graph, compares desired state against the last-known state, and generates a plan.

Resource providers — Plugins that translate Pulumi resource declarations into API calls (AWS, Azure, GCP, Kubernetes, etc.). Each provider is a separate binary that the engine downloads and runs.

State backend — Stores the last-known state of your stack (which resources exist, their properties, their IDs). The engine diffs new program output against this state to determine what to create, update, or delete.

Deployment engine — Executes the plan respecting the dependency graph, running independent resource operations in parallel for speed.
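The plan step can be sketched as a toy diff of desired vs. last-known state, ordered by a topological sort of the dependency graph. This is illustrative only — the real engine also handles replacements, unknown outputs, and parallel execution:

```python
from graphlib import TopologicalSorter

def plan(desired: dict, last_known: dict, deps: dict) -> list:
    """Compute (op, name) steps. desired/last_known map resource name
    to properties; deps maps each name to the names it depends on."""
    steps = []
    # Dependencies first, so a subnet is created after its VPC
    for name in TopologicalSorter(deps).static_order():
        if name not in last_known:
            steps.append(("create", name))
        elif desired[name] != last_known[name]:
            steps.append(("update", name))
    # Resources no longer declared in the program are deleted
    for name in last_known:
        if name not in desired:
            steps.append(("delete", name))
    return steps

desired = {"vpc": {"cidr": "10.0.0.0/16"}, "subnet": {"cidr": "10.0.1.0/24"}}
last = {"vpc": {"cidr": "10.0.0.0/16"}, "old-sg": {"port": 22}}
deps = {"vpc": set(), "subnet": {"vpc"}}
print(plan(desired, last, deps))
# → [('create', 'subnet'), ('delete', 'old-sg')]
```

The unchanged VPC produces no step at all, which is exactly why a no-op `pulumi up` on an unchanged program reports zero changes.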

Installation

Install the Pulumi CLI:

# Linux/macOS via install script
curl -fsSL https://get.pulumi.com | sh

# macOS via Homebrew
brew install pulumi

# Windows via Chocolatey
choco install pulumi

# Verify
pulumi version

Install language SDKs (you likely already have these):

# TypeScript/JavaScript
npm install -g typescript

# Python (pip is standard)
python3 --version

# Go
go version

# .NET
dotnet --version

Log in to your state backend:

# Pulumi Cloud (default, free tier available)
pulumi login

# S3 backend
pulumi login s3://your-state-bucket/pulumi

# Local filesystem (not for teams)
pulumi login --local

Project Structure

Create a new project from a template:

pulumi new aws-typescript        # TypeScript + AWS
pulumi new aws-python            # Python + AWS
pulumi new azure-go              # Go + Azure
pulumi new gcp-csharp            # C# + GCP
pulumi new kubernetes-typescript # TypeScript + Kubernetes

A TypeScript project has this structure:

my-infra/
  Pulumi.yaml          # Project metadata (name, runtime, description)
  Pulumi.dev.yaml      # Stack configuration for the "dev" stack
  package.json         # npm dependencies
  tsconfig.json        # TypeScript config
  index.ts             # Entry point — your infrastructure code

Pulumi.yaml is minimal:

name: my-infra
runtime: nodejs
description: AWS infrastructure for my application

Pulumi.dev.yaml holds stack-specific configuration:

config:
  aws:region: us-east-1
  my-infra:environment: dev
  my-infra:instanceCount: "2"

Core Concepts

Resources

A resource is any cloud object Pulumi manages. You declare it by constructing an SDK object:

import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("my-bucket", {
    acl: "private",
    tags: { Environment: "production" },
    versioning: { enabled: true },
});

// Export the bucket name for use by other stacks or CI systems
export const bucketName = bucket.bucket;
export const bucketArn = bucket.arn;

Inputs and Outputs

Resource properties are Output<T> — asynchronous values that resolve after a resource is created. You cannot use an Output directly as a plain string; you must use pulumi.interpolate or Output.apply:

import * as pulumi from "@pulumi/pulumi";

// pulumi.interpolate — for string templates
const bucketUrl = pulumi.interpolate`https://${bucket.bucket}.s3.amazonaws.com`;

// Output.apply — for transformations
const upperBucketName = bucket.bucket.apply(name => name.toUpperCase());

// Output.all — when you need multiple outputs together
const combined = pulumi.all([bucket.bucket, bucket.arn]).apply(([name, arn]) => ({
    name,
    arn,
    url: `https://${name}.s3.amazonaws.com`,
}));
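Conceptually, an Output behaves like a promise that also tracks resource dependencies. A stripped-down Python analogue shows the apply/all mechanics — values here are resolved eagerly, whereas real Outputs resolve only during deployment and additionally carry secretness and dependency metadata:

```python
class Output:
    """Toy stand-in for Pulumi's Output[T]: a value plus apply/all chaining."""
    def __init__(self, value):
        self._value = value  # real Outputs resolve asynchronously

    def apply(self, fn):
        # Transform the eventual value, producing a new Output
        return Output(fn(self._value))

    @staticmethod
    def all(*outputs):
        # Combine several Outputs into one Output of a list
        return Output([o._value for o in outputs])

name = Output("my-bucket")
url = name.apply(lambda n: f"https://{n}.s3.amazonaws.com")
combined = Output.all(name, Output("arn:aws:s3:::my-bucket")).apply(
    lambda pair: {"name": pair[0], "arn": pair[1]}
)
print(url._value)  # https://my-bucket.s3.amazonaws.com
```

This is why you chain with apply instead of reading the value directly: at program-construction time the value may not exist yet.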

Stack References

Stack references let you consume outputs from one stack in another stack — the standard pattern for separating network, data, and application infrastructure:

// In the application stack, consume outputs from the network stack
const networkStack = new pulumi.StackReference("org/network/prod");
const vpcId = networkStack.getOutput("vpcId");
const privateSubnetIds = networkStack.getOutput("privateSubnetIds");

const cluster = new aws.ecs.Cluster("app-cluster", {});
const service = new aws.ecs.Service("app-service", {
    cluster: cluster.arn,
    networkConfiguration: {
        subnets: privateSubnetIds,
    },
});

ComponentResource

ComponentResource is the abstraction mechanism for building reusable infrastructure modules:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

class SecureS3Bucket extends pulumi.ComponentResource {
    public readonly bucket: aws.s3.Bucket;
    public readonly bucketName: pulumi.Output<string>;

    constructor(name: string, opts?: pulumi.ComponentResourceOptions) {
        super("myorg:storage:SecureS3Bucket", name, {}, opts);

        this.bucket = new aws.s3.Bucket(`${name}-bucket`, {
            acl: "private",
            serverSideEncryptionConfiguration: {
                rule: {
                    applyServerSideEncryptionByDefault: {
                        sseAlgorithm: "AES256",
                    },
                },
            },
            versioning: { enabled: true },
        }, { parent: this });

        new aws.s3.BucketPublicAccessBlock(`${name}-block`, {
            bucket: this.bucket.id,
            blockPublicAcls: true,
            blockPublicPolicy: true,
            ignorePublicAcls: true,
            restrictPublicBuckets: true,
        }, { parent: this });

        this.bucketName = this.bucket.bucket;
        this.registerOutputs({ bucketName: this.bucketName });
    }
}

// Use it anywhere
const appStorage = new SecureS3Bucket("app-storage");
export const storageName = appStorage.bucketName;

Writing Infrastructure in Python

Python is equally first-class. The same concepts apply with Python syntax:

import pulumi
import pulumi_aws as aws

# Create a VPC
vpc = aws.ec2.Vpc("main-vpc",
    cidr_block="10.0.0.0/16",
    enable_dns_hostnames=True,
    tags={"Environment": pulumi.get_stack()},
)

# Create subnets using a list comprehension — real Python
availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
subnets = [
    aws.ec2.Subnet(f"subnet-{i}",
        vpc_id=vpc.id,
        cidr_block=f"10.0.{i}.0/24",
        availability_zone=az,
        map_public_ip_on_launch=False,
    )
    for i, az in enumerate(availability_zones)
]

# Output.all combines the subnet ID outputs into a single Output of a list
all_subnet_ids = pulumi.Output.all(*[s.id for s in subnets])

pulumi.export("vpc_id", vpc.id)
pulumi.export("subnet_ids", all_subnet_ids)

State Backends

The state backend stores your stack’s last-known resource state. Choose carefully — migrating state between backends requires manual steps.

| Backend | Command | Best for |
|---|---|---|
| Pulumi Cloud | pulumi login | Teams; built-in secrets encryption and audit log |
| AWS S3 | pulumi login s3://bucket/path | AWS-native teams, existing S3 bucket |
| Azure Blob | pulumi login azblob://container/path | Azure-native teams |
| Google Cloud Storage | pulumi login gs://bucket/path | GCP-native teams |
| Local file | pulumi login --local | Solo testing only |
| Self-hosted Pulumi | pulumi login https://pulumi.example.com | Air-gapped or compliance requirements |

For the S3 backend, enable versioning on the bucket so you can roll back to earlier state snapshots. Pulumi's self-managed backends handle locking themselves by writing lock files alongside the state, which prevents concurrent pulumi up runs from corrupting it — unlike Terraform, no DynamoDB lock table is involved.
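Whatever the backend, the essence of a state lock is an atomic create-if-absent check. A toy local sketch (paths and naming here are illustrative, not Pulumi's actual layout):

```python
import os
import tempfile

def acquire_lock(lock_path: str) -> bool:
    """Atomically create the lock file; fail if another run already holds it."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # another deployment is in progress
    os.write(fd, str(os.getpid()).encode())  # record the holder for debugging
    os.close(fd)
    return True

def release_lock(lock_path: str) -> None:
    os.remove(lock_path)

lock = os.path.join(tempfile.mkdtemp(), "mystack.lock")
print(acquire_lock(lock))  # True: first run takes the lock
print(acquire_lock(lock))  # False: a concurrent run is rejected
release_lock(lock)
```

Object stores offer equivalent atomic primitives (conditional writes), which is what makes lock files workable without a separate lock service.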

Stacks and Environments

A stack is an isolated deployment of the same program. Typical setup:

# Create stacks for each environment
pulumi stack init dev
pulumi stack init staging
pulumi stack init prod

# Switch between stacks
pulumi stack select prod

# Set configuration per stack
pulumi config set aws:region eu-west-1
pulumi config set instanceType t3.medium

# Set a secret (encrypted in state)
pulumi config set --secret databasePassword "super-secret-value"

Read the configuration in code:

const config = new pulumi.Config();
const instanceType = config.require("instanceType");
const dbPassword = config.requireSecret("databasePassword");

Stack configuration values live in Pulumi.<stackname>.yaml and are committed to Git (secrets are encrypted). This gives you environment-specific configuration without separate variable files or environment-specific code branches.
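The lookup rules follow the file's key namespacing: bare config keys are prefixed with the project name, require raises when a key is missing, and get returns nothing. A toy model of that resolution — not the real pulumi.Config implementation — assuming values were already loaded from Pulumi.dev.yaml:

```python
class Config:
    """Toy model of pulumi.Config key resolution over loaded stack values."""
    def __init__(self, values, project, namespace=None):
        self._values = values
        # A bare Config() namespaces keys by the project name
        self._ns = namespace or project

    def get(self, key):
        return self._values.get(f"{self._ns}:{key}")

    def require(self, key):
        value = self.get(key)
        if value is None:
            raise KeyError(f"missing required configuration '{self._ns}:{key}'")
        return value

# Values as they appear in Pulumi.dev.yaml
values = {
    "aws:region": "us-east-1",
    "my-infra:environment": "dev",
    "my-infra:instanceCount": "2",
}
cfg = Config(values, project="my-infra")
print(cfg.require("environment"))                         # dev
print(Config(values, "my-infra", namespace="aws").get("region"))  # us-east-1
```

This is why pulumi config set aws:region writes a provider-namespaced key while pulumi config set instanceType writes a project-namespaced one.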

Testing

Unit Tests with Mocks

Pulumi’s mock system intercepts resource registration so you can test infrastructure logic without deploying anything:

import * as pulumi from "@pulumi/pulumi";

pulumi.runtime.setMocks({
    newResource: (args: pulumi.runtime.MockResourceArgs) => {
        return { id: `${args.name}-id`, state: args.inputs };
    },
    call: (args: pulumi.runtime.MockCallArgs) => args.inputs,
});

import { SecureS3Bucket } from "../components/secure-bucket";

describe("SecureS3Bucket", () => {
    it("creates a private bucket", async () => {
        const storage = new SecureS3Bucket("test");
        // With mocks, inputs echo back as state, so we can assert on them
        const acl = await new Promise(resolve =>
            storage.bucket.acl.apply(a => resolve(a)));
        expect(acl).toBe("private");
    });
});

Integration Tests with the Automation API

Integration tests deploy real infrastructure, run assertions against it, and tear it down. The Automation API (@pulumi/pulumi/automation) drives the full stack lifecycle from test code:

import { LocalWorkspace } from "@pulumi/pulumi/automation";

const stack = await LocalWorkspace.createOrSelectStack({
    stackName: "integration-test",
    workDir: ".",
});

try {
    await stack.up({ onOutput: console.log });
    const outputs = await stack.outputs();
    // Assert on real deployed resources
    expect(outputs.bucketName.value).toMatch(/^my-bucket-/);
} finally {
    // Tear down even when an assertion fails
    await stack.destroy();
}

Policy as Code (CrossGuard)

Pulumi CrossGuard enforces compliance rules across all stacks:

import { PolicyPack, validateResourceOfType } from "@pulumi/policy";
import * as aws from "@pulumi/aws";

new PolicyPack("aws-compliance", {
    policies: [
        {
            name: "s3-no-public-read",
            description: "S3 buckets must not allow public read access",
            enforcementLevel: "mandatory",
            validateResource: validateResourceOfType(aws.s3.Bucket, (bucket, args, reportViolation) => {
                if (bucket.acl === "public-read" || bucket.acl === "public-read-write") {
                    reportViolation("S3 bucket must not be publicly readable.");
                }
            }),
        },
    ],
});

Run policy checks: pulumi preview --policy-pack ../aws-compliance
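Underneath, the validateResource pattern amounts to running each policy against each resource's input properties and collecting violations. A plain-Python sketch of that loop (the type token follows Pulumi's aws:s3/bucket:Bucket form; the loop itself is illustrative, not CrossGuard's implementation):

```python
def s3_no_public_read(resource_type, props, report):
    """Policy: flag S3 buckets with a publicly readable ACL."""
    if resource_type == "aws:s3/bucket:Bucket" and props.get("acl") in (
        "public-read", "public-read-write"
    ):
        report("S3 bucket must not be publicly readable.")

def run_policies(resources, policies):
    """Apply every policy to every resource; return collected violations."""
    violations = []
    for rtype, props in resources:
        for policy in policies:
            policy(rtype, props, violations.append)
    return violations

resources = [
    ("aws:s3/bucket:Bucket", {"acl": "public-read"}),
    ("aws:s3/bucket:Bucket", {"acl": "private"}),
]
print(run_policies(resources, [s3_no_public_read]))
# → ['S3 bucket must not be publicly readable.']
```

Because policies see resource inputs before deployment, a mandatory violation fails the preview and nothing is created.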

CI/CD Integration

GitHub Actions

name: Pulumi Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
      - run: npm ci
      - uses: pulumi/actions@v5
        with:
          command: up
          stack-name: prod
          work-dir: ./infra
        env:
          PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

For preview on pull requests, use command: preview and add comment-on-pr: true to post the diff as a PR comment.

Automation API

The Automation API embeds the Pulumi engine as a library, enabling self-service infrastructure portals and dynamic environment creation:

import { LocalWorkspace } from "@pulumi/pulumi/automation";
import * as aws from "@pulumi/aws";

async function provisionEnvironment(tenantId: string) {
    const stackName = `tenant-${tenantId}`;
    const stack = await LocalWorkspace.createOrSelectStack({
        stackName,
        projectName: "saas-infra",
        program: async () => {
            const bucket = new aws.s3.Bucket(`${tenantId}-data`);
            return { bucketName: bucket.bucket };
        },
    });

    await stack.setConfig("aws:region", { value: "us-east-1" });
    const result = await stack.up({ onOutput: console.log });
    return result.outputs;
}

This pattern powers ephemeral per-PR environments, tenant provisioning in SaaS platforms, and chaos engineering tools.

Practical Multi-Cloud Example

Deploying a web application across AWS (compute) and Cloudflare (DNS/CDN) in a single Pulumi program:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as cloudflare from "@pulumi/cloudflare";

const config = new pulumi.Config();
const domainName = config.require("domainName");
const cfZoneId = config.requireSecret("cloudflareZoneId");

// Public subnet IDs come from the network stack (see Stack References)
const networkStack = new pulumi.StackReference("org/network/prod");
const publicSubnetIds = networkStack.getOutput("publicSubnetIds");

// AWS: ECS Fargate service
const cluster = new aws.ecs.Cluster("web-cluster");

const lb = new aws.lb.LoadBalancer("web-lb", {
    internal: false,
    loadBalancerType: "application",
    subnets: publicSubnetIds,
});

// Cloudflare: DNS record pointing to ALB
const dnsRecord = new cloudflare.Record("web-dns", {
    zoneId: cfZoneId,
    name: domainName,
    type: "CNAME",
    value: lb.dnsName,
    proxied: true,  // Enable Cloudflare proxy/CDN
});

export const appUrl = pulumi.interpolate`https://${domainName}`;
export const albDns = lb.dnsName;

A single pulumi up creates all resources across both clouds, respecting the dependency between the ALB (must exist first) and the DNS record.

Gotchas and Edge Cases

Outputs in string context — Never write const url = "https://" + bucket.bucket. String concatenation coerces the Output object to "[object Object]" instead of its resolved value; use pulumi.interpolate (or apply) instead.

Deleting stack outputs — Removing an export does not delete the resource; it only removes the output value. To delete the resource, remove it from your program code.

pulumi refresh vs pulumi up — Use pulumi refresh to sync state with actual cloud state (after manual changes). Use pulumi up to apply program changes. Running up without refresh after manual changes may fail or conflict.

Resource naming — Pulumi appends a random suffix to physical resource names by default (e.g., my-bucket-a3f8c2b) so a replacement can be created before the old resource is deleted. If you need a fixed name, set the resource's name property explicitly, and consider deleteBeforeReplace: true in the resource options, since a fixed name cannot exist on both the old and new resource during a replacement.
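Auto-naming can be pictured as appending a short random hex suffix to the logical name — the exact scheme is an implementation detail; this sketch only shows the shape:

```python
import secrets

def physical_name(logical_name, suffix_len=7):
    """Derive a unique physical name by appending a random hex suffix."""
    suffix = secrets.token_hex((suffix_len + 1) // 2)[:suffix_len]
    return f"{logical_name}-{suffix}"

name = physical_name("my-bucket")
print(name)  # e.g. my-bucket-a3f8c2b
```

The randomness is what lets two stacks (or a resource and its in-flight replacement) coexist without name collisions.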

State file corruption — Never manually edit state files. If state is corrupted, use pulumi state delete <urn> or pulumi import to repair individual resources.

Provider version pinning — Pin provider versions in package.json or requirements.txt. Unpinned providers can change behavior on pulumi up after an automatic update.

Summary

  • Pulumi uses real programming languages (TypeScript, Python, Go, C#, Java) instead of HCL or YAML, giving full IDE support and standard testing
  • The engine computes a dependency graph and applies changes in parallel, comparing against stored state in Pulumi Cloud, S3, Azure Blob, or GCS
  • Stacks isolate dev/staging/prod environments; each stack has its own configuration and state
  • ComponentResource enables reusable infrastructure abstractions publishable as npm or PyPI packages
  • Unit tests use mocks; integration tests use the Automation API; compliance uses CrossGuard policies
  • The Automation API embeds Pulumi in application code for self-service portals and ephemeral environments
  • CI/CD integration is a single pulumi/actions step in GitHub Actions or equivalent in GitLab CI