TL;DR — Quick Summary

Cloudflare Workers run JavaScript at the edge on V8 isolates. Deploy your first Worker with Wrangler, set up KV storage, secrets, and GitHub Actions CI/CD.

Cloudflare Workers bring serverless computing to the edge — your code runs in data centers across 300+ cities worldwide, mere milliseconds from your users, without any server provisioning or scaling concerns. Built on V8 isolates (the same engine that powers Chrome and Node.js), Workers achieve cold start times under 1 millisecond, making them dramatically faster than traditional containerized serverless functions. In this guide you will go from zero to a fully deployed, production-grade Worker with KV storage, encrypted secrets, custom domains, and an automated GitHub Actions deployment pipeline.

Prerequisites

  • Node.js 18+ installed locally
  • A Cloudflare account (free tier works for everything in this guide)
  • Basic familiarity with JavaScript or TypeScript
  • A registered domain added to Cloudflare (required only for the custom domain section)
  • npm or pnpm package manager

Creating Your First Worker

Install Wrangler, Cloudflare’s official CLI, globally:

npm install -g wrangler
wrangler login

The wrangler login command opens a browser window and asks you to authorize Wrangler to access your Cloudflare account. After authorization, an OAuth token is stored in a local Wrangler configuration file in your home directory.

Scaffold a new project:

wrangler init my-api-worker
cd my-api-worker

Wrangler presents a few prompts — choose TypeScript and the “Hello World” Worker template. The resulting directory structure is:

my-api-worker/
  src/
    index.ts          ← your Worker code
  wrangler.toml       ← project configuration
  package.json
  tsconfig.json

Understanding wrangler.toml

The wrangler.toml file is the project manifest. A minimal configuration looks like this:

name = "my-api-worker"
main = "src/index.ts"
compatibility_date = "2024-09-23"

[[routes]]
pattern = "api.example.com/*"
zone_name = "example.com"

Key fields:

| Field | Purpose |
| --- | --- |
| name | Worker name shown in the dashboard |
| main | Entry point file resolved by Wrangler |
| compatibility_date | Locks runtime API behavior to a specific date |
| routes | Maps URL patterns to this Worker |
| [[kv_namespaces]] | Binds KV namespaces as env variables |
| [vars] | Plain-text environment variables |

The compatibility_date is important — Cloudflare occasionally ships breaking changes to Worker APIs, and this date pins which set of APIs your Worker sees. Always set it to the date you create the project and update it explicitly after reviewing the changelog.
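Beyond the date, specific behaviors can be opted into with the compatibility_flags key. A minimal sketch (compatibility_flags is a real wrangler.toml key; the nodejs_compat flag shown here is optional and only needed if your code relies on Node.js APIs):

```toml
compatibility_date = "2024-09-23"
# Opt into specific behaviors beyond the date, e.g. Node.js API shims.
compatibility_flags = ["nodejs_compat"]
```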

Writing the Worker

The Worker entry point exports a default object with a fetch handler. Every incoming HTTP request calls this handler:

export interface Env {
  MY_KV: KVNamespace;
  API_SECRET: string;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);

    // Route: return JSON
    if (url.pathname === '/api/status') {
      return Response.json({ status: 'ok', region: request.cf?.colo });
    }

    // Route: return HTML
    if (url.pathname === '/') {
      return new Response(
        `<h1>Hello from the edge!</h1>`,
        { headers: { 'Content-Type': 'text/html;charset=UTF-8' } }
      );
    }

    return new Response('Not Found', { status: 404 });
  }
};

Request and Response Patterns

Workers use the standard Fetch API — Request, Response, and Headers are identical to browser APIs. This means code you write for Workers is largely portable.

Returning JSON is idiomatic with Response.json():

return Response.json({ items: ['a', 'b', 'c'] }, {
  headers: { 'Cache-Control': 'public, max-age=60' }
});

Redirecting a request:

return Response.redirect('https://example.com/new-path', 301);

Modifying a proxied response (the “transform” pattern):

const upstream = await fetch(request);
const body = await upstream.text();
// Passing the upstream Response as the init argument copies its
// status and headers onto the new Response
return new Response(body.replace('old text', 'new text'), upstream);

Local Development

Run the local dev server:

wrangler dev

Wrangler starts a local server at http://localhost:8787 that emulates the Cloudflare runtime including KV, Durable Objects, R2, and the request.cf metadata object. Hot reload triggers automatically on file changes.

To test against real Cloudflare infrastructure (useful for Workers that call other Cloudflare services):

wrangler dev --remote

Inspect live traffic with the built-in tail log in a second terminal:

wrangler tail

wrangler tail streams real-time logs from the deployed Worker, showing request/response details, console output, and exceptions. It attaches only to deployed Workers; during local development, wrangler dev prints console output directly in its own terminal.

Deploying to Production

Deploy with a single command:

wrangler deploy

Wrangler compiles your TypeScript, bundles dependencies, and uploads the Worker to Cloudflare’s network. Deployment propagates globally in seconds. The output includes the workers.dev URL:

Published my-api-worker (2.45 sec)
  https://my-api-worker.your-subdomain.workers.dev

Connecting a Custom Domain

Add a routes block in wrangler.toml to route requests from your Cloudflare-proxied domain to the Worker:

[[routes]]
pattern = "api.example.com/*"
zone_name = "example.com"

Alternatively, use the Custom Domains feature (no route pattern needed):

[[routes]]
pattern = "api.example.com"
custom_domain = true

Custom Domains automatically provision a TLS certificate and create the required DNS record, so the Worker serves the hostname directly without conflicting with other DNS records.

Environment Variables and Secrets

Plain Variables

Non-sensitive configuration lives in wrangler.toml:

[vars]
ENVIRONMENT = "production"
MAX_RETRIES = "3"

Access them in the Worker as env.ENVIRONMENT and env.MAX_RETRIES.
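Every [vars] value arrives as a string, so numeric settings like MAX_RETRIES need parsing. A small sketch (the Env shape mirrors the variables above; the fallback default of 3 is an assumption, not wrangler behavior):

```typescript
// Typed access to [vars] values. All bindings in [vars] are strings,
// even ones that look numeric in wrangler.toml.
interface Env {
  ENVIRONMENT: string;
  MAX_RETRIES: string;
}

function maxRetries(env: Env): number {
  const n = Number.parseInt(env.MAX_RETRIES, 10);
  // Fall back to a sane default if the variable is missing or malformed
  return Number.isNaN(n) ? 3 : n;
}
```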

Encrypted Secrets

Secrets are encrypted at rest and never visible after upload. Add them via CLI:

wrangler secret put API_KEY

Wrangler prompts you to enter the value interactively (it will not appear in shell history). List existing secrets:

wrangler secret list

Delete a secret:

wrangler secret delete API_KEY

In the Worker, secrets surface as env.API_KEY — indistinguishable from plain vars at runtime but stored encrypted in Cloudflare’s vault.
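As an illustration, a secret like API_KEY can back a simple bearer-token check. This is a hedged sketch, not an official pattern; the Env shape and route guard are assumptions:

```typescript
// Guard a route with the API_KEY secret set via `wrangler secret put API_KEY`.
interface Env {
  API_KEY: string;
}

function isAuthorized(request: Request, env: Env): boolean {
  const header = request.headers.get('Authorization') ?? '';
  // Compare the presented bearer token against the stored secret
  return header === `Bearer ${env.API_KEY}`;
}

// Inside the fetch handler:
// if (!isAuthorized(request, env)) return new Response('Unauthorized', { status: 401 });
```

For anything security-sensitive, prefer a constant-time comparison over plain string equality to avoid timing side channels.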

KV Storage

KV (Workers KV) is a globally distributed key-value store with eventual consistency. It excels at storing configuration, user sessions, cached API responses, and feature flags.

Create a KV namespace:

wrangler kv namespace create CACHE

Wrangler outputs the namespace ID. Add the binding to wrangler.toml:

[[kv_namespaces]]
binding = "CACHE"
id = "abc123def456..."

Write and read from KV in the Worker:

// Write (with optional TTL in seconds)
await env.CACHE.put('user:123', JSON.stringify(userData), { expirationTtl: 3600 });

// Read
const raw = await env.CACHE.get('user:123');
const user = raw ? JSON.parse(raw) : null;

// Delete
await env.CACHE.delete('user:123');
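The raw get/put calls above can be wrapped in a small typed helper. A sketch, where the KVNamespace interface is a minimal stand-in for the real binding type so the code runs outside the Workers runtime:

```typescript
// Minimal stand-in for the Workers KVNamespace binding type.
interface KVNamespace {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

// Read a JSON value, returning null when the key is absent.
async function getJSON<T>(kv: KVNamespace, key: string): Promise<T | null> {
  const raw = await kv.get(key);
  return raw === null ? null : (JSON.parse(raw) as T);
}

// Write a JSON value with an optional TTL in seconds.
async function putJSON(
  kv: KVNamespace,
  key: string,
  value: unknown,
  ttlSeconds?: number
): Promise<void> {
  await kv.put(key, JSON.stringify(value), ttlSeconds ? { expirationTtl: ttlSeconds } : undefined);
}
```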

Operate on KV from the CLI for debugging:

wrangler kv key put --binding=CACHE "test-key" "test-value"
wrangler kv key get --binding=CACHE "test-key"
wrangler kv key list --binding=CACHE

Comparison

| Feature | Cloudflare Workers | AWS Lambda@Edge | Deno Deploy |
| --- | --- | --- | --- |
| Runtime | V8 isolates | Node.js container | V8 isolates |
| Cold start | < 1 ms | 100–500 ms | ~50 ms |
| Global PoPs | 300+ | ~4 CloudFront regions | 35 regions |
| Free tier | 100k req/day | Pay per request | 100k req/day |
| Max CPU time | 10 ms (free) / 30 s (paid) | 30 s | 50 ms |
| Storage | KV, R2, D1, Durable Objects | DynamoDB (separate) | Deno KV |
| TypeScript | Native (built-in) | Via build step | Native |
| Runtime APIs | Fetch, Streams, Web Crypto | Node.js subset | Deno std + Fetch |
| Pricing (beyond free) | $0.50/million requests | ~$0.60/million + Lambda cost | $0.50/million requests |

Workers win on cold start and global distribution. Lambda@Edge is the right choice if you are already deeply invested in the AWS ecosystem and need access to Node.js-specific packages. Deno Deploy is a good alternative if you want Deno’s permission model or its standard library.

Real-World Scenario: API Proxy Worker

You have a third-party API that does not support CORS, requires an API key you cannot expose to the browser, and responds with data you want to cache and transform. A Cloudflare Worker is the perfect solution.

export interface Env {
  UPSTREAM_API_KEY: string;
  CACHE: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    const upstreamUrl = `https://api.thirdparty.com${url.pathname}${url.search}`;
    const cacheKey = upstreamUrl;

    // Check KV cache first
    const cached = await env.CACHE.get(cacheKey);
    if (cached) {
      return Response.json(JSON.parse(cached), {
        headers: {
          'X-Cache': 'HIT',
          'Access-Control-Allow-Origin': '*'
        }
      });
    }

    // Fetch from upstream with secret key
    const upstream = await fetch(upstreamUrl, {
      headers: {
        'Authorization': `Bearer ${env.UPSTREAM_API_KEY}`,
        'Accept': 'application/json'
      }
    });

    if (!upstream.ok) {
      return new Response('Upstream error', { status: upstream.status });
    }

    const data = await upstream.json();

    // Cache the response for 5 minutes
    ctx.waitUntil(env.CACHE.put(cacheKey, JSON.stringify(data), { expirationTtl: 300 }));

    return Response.json(data, {
      headers: {
        'X-Cache': 'MISS',
        'Access-Control-Allow-Origin': '*'
      }
    });
  }
};

This Worker keeps the API key server-side, adds CORS headers the original API lacks, and serves cached responses at edge locations closest to each user — reducing latency and upstream API costs simultaneously.
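One gap in the proxy above: browsers send an OPTIONS preflight before cross-origin requests that use custom headers, and it must be answered separately. A sketch of a preflight handler (the header values are illustrative, not prescriptive):

```typescript
// Answer the CORS preflight so the browser proceeds with the real request.
function handleOptions(request: Request): Response {
  return new Response(null, {
    status: 204, // no body needed for a preflight response
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
      // Echo back whatever headers the browser asked permission for
      'Access-Control-Allow-Headers':
        request.headers.get('Access-Control-Request-Headers') ?? 'Content-Type',
      'Access-Control-Max-Age': '86400'
    }
  });
}

// In the fetch handler, before any other routing:
// if (request.method === 'OPTIONS') return handleOptions(request);
```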

CI/CD with GitHub Actions

Automate deployments on every push to main:

# .github/workflows/deploy.yml
name: Deploy Worker

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    name: Deploy
    steps:
      - uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '22'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Deploy to Cloudflare Workers
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CF_API_TOKEN }}
          accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}

Add CF_API_TOKEN and CLOUDFLARE_ACCOUNT_ID as repository secrets in GitHub. Create the API token in the Cloudflare dashboard under Profile → API Tokens → Create Token using the “Edit Cloudflare Workers” template — this scopes the token to Workers-only permissions.

For secrets that need to be set in the Worker environment during CI, wrangler-action accepts a secrets input listing secret names whose values are read from the job environment:

- name: Set Worker secrets
  uses: cloudflare/wrangler-action@v3
  with:
    apiToken: ${{ secrets.CF_API_TOKEN }}
    secrets: |
      API_KEY
  env:
    API_KEY: ${{ secrets.MY_API_KEY }}

Gotchas and Edge Cases

CPU time limit — The free plan allows 10 ms of CPU time per request (time spent awaiting I/O does not count against it). The Workers Paid plan raises this to 30 seconds. CPU-intensive work like image processing or cryptography can hit this limit; offload heavy computation to a queue or use Durable Objects.

Subrequest limits — Each Worker invocation can make up to 50 outbound fetch() calls on the free plan (1000 on paid). Design your fanout patterns accordingly.

Compatibility date and breaking changes — When you update compatibility_date, review the compatibility flags changelog. Some flags change how Request.clone(), streams, or error handling work.

Size limits — Worker scripts are limited to 1 MB after compression (10 MB on paid). Large npm dependencies can push you over this limit; use tree shaking and avoid bundling server-side-only packages.

No filesystem access — Workers have no disk I/O. All persistence must go through KV, R2, D1, or Durable Objects. The fs module is unavailable.

waitUntil for background work — Use ctx.waitUntil(promise) for work that should complete after the response is sent (like cache writes). Without it, the Worker runtime may terminate before the promise resolves.
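A minimal sketch of the pattern, with ExecutionContext reduced to the one method used here and logSink standing in for any hypothetical async side effect (an analytics fetch, a cache write):

```typescript
// Minimal stand-in for the Workers ExecutionContext type.
interface ExecutionContext {
  waitUntil(promise: Promise<unknown>): void;
}

function respond(ctx: ExecutionContext, logSink: (line: string) => Promise<void>): Response {
  const response = new Response('ok');
  // Schedule the log write to outlive the response. Without waitUntil,
  // the runtime may cancel this promise once the handler returns.
  ctx.waitUntil(logSink('request served'));
  return response;
}
```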

Troubleshooting

Error: Script startup exceeded CPU time limit — Your Worker is doing expensive work during module initialization (at the top level, outside the handler). Move expensive operations inside the handler or use lazy initialization.
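The lazy-initialization fix can look like this sketch, where buildIndex is a hypothetical stand-in for whatever expensive setup was previously at module top level:

```typescript
// Cache for the expensive result; starts empty so module load is cheap.
let index: Map<string, number> | null = null;

function buildIndex(): Map<string, number> {
  // Stand-in for expensive work (parsing a large dataset, crypto setup, ...)
  return new Map([['a', 1]]);
}

function getIndex(): Map<string, number> {
  if (index === null) {
    index = buildIndex(); // runs once, on the first request that needs it
  }
  return index;
}
```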

TypeError: Cannot read properties of undefined (reading 'get') — A KV or other binding is missing from wrangler.toml, or you are accessing env.MY_BINDING before the binding is configured. Double-check the binding name matches exactly (case-sensitive).

wrangler deploy fails with “Authentication error” — Your Wrangler session has expired. Run wrangler login again or set the CLOUDFLARE_API_TOKEN environment variable for CI environments.

CORS errors in the browser — The Worker response is missing Access-Control-Allow-Origin. Add the header in your Response constructor or use a helper. Also handle the OPTIONS preflight request separately.

Custom domain not routing to the Worker — Ensure the domain is proxied through Cloudflare (orange cloud in DNS settings), the route pattern in wrangler.toml uses /* wildcard if needed, and you have re-deployed after changing wrangler.toml.

Summary

  • Cloudflare Workers are V8-isolate-based serverless functions that run at the edge with sub-millisecond cold starts and global distribution
  • Install Wrangler with npm install -g wrangler, scaffold with wrangler init, and test locally with wrangler dev
  • The wrangler.toml file controls the Worker name, entry point, compatibility date, routes, and all resource bindings
  • Use wrangler secret put for sensitive values and [vars] in wrangler.toml for non-sensitive configuration
  • KV namespaces provide globally distributed key-value storage, accessible via env.BINDING.get/put/delete
  • Workers outperform Lambda@Edge on cold start and global reach; Lambda@Edge is preferred for deep AWS ecosystem integration
  • The wrangler-action GitHub Action provides turnkey CI/CD; scope your API token to Workers-only permissions
  • Watch for CPU time limits, subrequest caps, and the 1 MB script size ceiling on the free plan