TL;DR — Quick Summary

SQLite is increasingly viable as a production database for web applications. Learn when to use SQLite in production, how to tune it for performance, and how to handle backups.

SQLite has evolved from a lightweight embedded database to a legitimate production database for web applications. Companies like Expensify process billions of transactions with SQLite, Cloudflare built D1 on top of it, and the Rails community has championed it for deployment simplicity. This guide covers when SQLite is the right choice for production, how to configure it for optimal performance, and how to handle backups and monitoring.

Prerequisites

  • Basic SQL knowledge
  • Linux server (Ubuntu/Debian recommended)
  • SQLite 3.35+ (for RETURNING clauses and other modern features)
  • A web application framework (examples use Node.js, but concepts apply to any language)

When to Use SQLite in Production

SQLite is an excellent choice when:

  • Your application runs on a single server (or single primary with read replicas)
  • Your workload is read-heavy (90%+ reads)
  • Your data fits comfortably in 1 TB or less
  • You want zero operational overhead (no database server to manage)
  • You value deployment simplicity over horizontal scaling

SQLite is NOT the right choice when:

  • You need multiple write servers (horizontal write scaling)
  • Your application requires more than 10,000 writes per second sustained
  • You need distributed transactions across multiple databases
  • Your team expects a PostgreSQL/MySQL-compatible query interface

Configuring SQLite for Production

Essential PRAGMAs

-- Enable WAL mode (dramatically improves concurrent access)
PRAGMA journal_mode = WAL;

-- Set busy timeout to 5 seconds (retry instead of failing immediately)
PRAGMA busy_timeout = 5000;

-- Use memory-mapped I/O for reads (faster for large databases)
PRAGMA mmap_size = 268435456;  -- 256MB

-- Set cache size (negative = KB, positive = pages)
PRAGMA cache_size = -64000;  -- 64MB cache

-- Set synchronous to NORMAL (safe with WAL, much faster than FULL)
PRAGMA synchronous = NORMAL;

-- Enable foreign keys (off by default in SQLite)
PRAGMA foreign_keys = ON;

-- Store temp tables in memory
PRAGMA temp_store = MEMORY;
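These PRAGMAs can be verified at runtime rather than assumed. As an illustration outside the article's Node.js examples, here is a minimal sketch using Python's standard sqlite3 module (the database path is a placeholder); it applies the key settings and reads them back:

```python
import os
import sqlite3
import tempfile

# Placeholder path; substitute your real database file.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(db_path)

# Setting journal_mode returns the mode actually in effect.
mode = conn.execute("PRAGMA journal_mode = WAL").fetchone()[0]
conn.execute("PRAGMA busy_timeout = 5000")
conn.execute("PRAGMA cache_size = -64000")
conn.execute("PRAGMA synchronous = NORMAL")
conn.execute("PRAGMA foreign_keys = ON")

# Read settings back to confirm they took effect.
busy = conn.execute("PRAGMA busy_timeout").fetchone()[0]
fks = conn.execute("PRAGMA foreign_keys").fetchone()[0]
print(mode, busy, fks)  # wal 5000 1
```

Reading a PRAGMA back is cheap insurance: some settings (like journal_mode on a read-only connection) can silently fail to change.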

Connection Management

import Database from 'better-sqlite3';

// CRITICAL: Use separate connections for reads and writes
const writeDb = new Database('app.db');
const readDb = new Database('app.db', { readonly: true });

// Apply PRAGMAs on connection open
for (const db of [writeDb, readDb]) {
  db.pragma('journal_mode = WAL');
  db.pragma('busy_timeout = 5000');
  db.pragma('cache_size = -64000');
  db.pragma('synchronous = NORMAL');
  db.pragma('foreign_keys = ON');
  db.pragma('temp_store = MEMORY');
}

// Use transactions for multiple writes (often ~100x faster than
// committing each statement individually)
const insertMany = writeDb.transaction((items) => {
  const stmt = writeDb.prepare('INSERT INTO items (name, value) VALUES (?, ?)');
  for (const item of items) {
    stmt.run(item.name, item.value);
  }
});
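The speedup from batching is easy to measure. Here is a small, illustrative benchmark using Python's stdlib sqlite3 (row counts and file names are arbitrary); the same pattern applies to the Node.js code above:

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
conn.execute("PRAGMA journal_mode = WAL")
conn.execute("CREATE TABLE items (name TEXT, value INTEGER)")

rows = [(f"item-{i}", i) for i in range(500)]

# Autocommit: every INSERT is its own transaction (one commit each).
t0 = time.perf_counter()
for name, value in rows:
    conn.execute("INSERT INTO items VALUES (?, ?)", (name, value))
per_stmt = time.perf_counter() - t0

# Batched: all INSERTs share one transaction and one commit.
t0 = time.perf_counter()
conn.execute("BEGIN")
for name, value in rows:
    conn.execute("INSERT INTO items VALUES (?, ?)", (name, value))
conn.execute("COMMIT")
batched = time.perf_counter() - t0

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(f"autocommit: {per_stmt:.4f}s, one transaction: {batched:.4f}s")
```

The exact ratio depends on the filesystem and synchronous setting, but the batched version is consistently faster because it pays the commit overhead once instead of per row.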

Backup Strategies

Litestream continuously streams SQLite WAL changes to cloud storage, providing point-in-time recovery.

# Install Litestream
wget https://github.com/benbjohnson/litestream/releases/latest/download/litestream-linux-amd64.deb
sudo dpkg -i litestream-linux-amd64.deb

# Configure /etc/litestream.yml
cat > /etc/litestream.yml << 'EOF'
dbs:
  - path: /var/lib/myapp/app.db
    replicas:
      - type: s3
        bucket: my-backups
        path: myapp/
        region: us-east-1
        retention: 72h
        sync-interval: 1s
EOF

# Run as systemd service
sudo systemctl enable litestream
sudo systemctl start litestream

SQLite Backup API

// Safe online backup (can run while the database is in use).
// better-sqlite3's backup() is asynchronous and returns a promise
// that resolves once every page has been copied.
await writeDb.backup('backup.db');
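The same online-backup mechanism is exposed in Python's standard sqlite3 module as Connection.backup; a minimal sketch with placeholder paths:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src = sqlite3.connect(os.path.join(tmp, "app.db"))
src.execute("CREATE TABLE items (name TEXT)")
src.execute("INSERT INTO items VALUES ('hello')")
src.commit()

# Online backup: copies pages while the source stays fully usable.
dst = sqlite3.connect(os.path.join(tmp, "backup.db"))
src.backup(dst)

row = dst.execute("SELECT name FROM items").fetchone()[0]
print(row)  # hello
```

Unlike copying the .db file directly, the backup API coordinates with SQLite's locking, so the copy is always a consistent snapshot.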

Comparison

Feature            | SQLite         | PostgreSQL | MySQL      | Cloudflare D1
-------------------|----------------|------------|------------|--------------
Setup complexity   | None (file)    | Medium     | Medium     | Low (managed)
Write concurrency  | 1 writer (WAL) | Unlimited  | Unlimited  | 1 writer
Read performance   | Excellent      | Very good  | Very good  | Good
Horizontal scaling | No             | Yes        | Yes        | Automatic
Backup complexity  | Simple         | Medium     | Medium     | Automatic
Operational cost   | $0             | $50-500/mo | $50-500/mo | Pay-per-query
Max practical size | ~1 TB          | Unlimited  | Unlimited  | 10 GB

Real-World Scenario

You run a content management system serving 50,000 daily visitors with 500 content updates per day. PostgreSQL would require a managed database at $50-200/month plus connection pooling configuration. With SQLite, you deploy a single binary that includes the database, configure WAL mode and Litestream backups, and handle the same traffic for $0 in database costs. Your deployment is a single scp command instead of coordinating application and database deployments.

Gotchas and Edge Cases

  • WAL file growth: The WAL file can grow large during heavy write periods. Set PRAGMA wal_autocheckpoint = 1000 (default) and monitor WAL file size
  • File locking on NFS: Never put a SQLite database on a network file system (NFS, SMB, CIFS). SQLite’s locking requires a local filesystem
  • SQLITE_BUSY errors: Even with busy_timeout, extremely high write concurrency can cause SQLITE_BUSY. Implement retry logic in your application
  • Vacuum considerations: Unlike PostgreSQL’s autovacuum, SQLite doesn’t reclaim space automatically. Run PRAGMA auto_vacuum = INCREMENTAL or schedule periodic VACUUM during low-traffic periods
  • Concurrent schema migrations: Only one connection can modify the schema at a time. Use a migration lock or deploy during maintenance windows
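The SQLITE_BUSY retry advice above amounts to a small exponential-backoff wrapper. A hedged sketch in Python (the helper name and parameters are illustrative; the same pattern translates directly to Node.js):

```python
import random
import sqlite3
import time

def run_with_retry(fn, attempts=5, base_delay=0.05):
    """Retry fn() when SQLite reports the database is busy or locked."""
    for attempt in range(attempts):
        try:
            return fn()
        except sqlite3.OperationalError as exc:
            msg = str(exc)
            if "locked" not in msg and "busy" not in msg:
                raise  # not a contention error; don't mask it
            if attempt == attempts - 1:
                raise  # out of retries
            # Exponential backoff with jitter to avoid retry stampedes.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.01))
```

Usage is one line, e.g. `run_with_retry(lambda: conn.execute("UPDATE ..."))`. Keep the wrapped work small: retrying a long transaction wholesale just prolongs the contention.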

Troubleshooting

  • Database is locked error: Ensure WAL mode is enabled and busy_timeout is set. Check for long-running transactions holding the write lock
  • Slow writes: Wrap multiple writes in a transaction. Individual INSERT statements without a transaction trigger a separate fsync for each one
  • Database corruption after crash: With WAL, PRAGMA synchronous = NORMAL protects the file from power-loss corruption, though the most recent commits may be lost; use synchronous = FULL if every commit must survive. Consider a UPS for the server
  • Backup file is inconsistent: Never copy the .db file directly while the application is running. Use Litestream or the SQLite Backup API
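When corruption is suspected, or before promoting a restored backup, SQLite's built-in integrity check is the first diagnostic. A small Python sketch (the helper name is illustrative):

```python
import sqlite3

def is_healthy(db_path):
    """Return True if PRAGMA integrity_check reports 'ok'."""
    conn = sqlite3.connect(db_path)
    try:
        # Returns a single 'ok' row on success, or a list of problems.
        return conn.execute("PRAGMA integrity_check").fetchone()[0] == "ok"
    finally:
        conn.close()
```

A full integrity_check reads every page, so on large databases schedule it off-peak or run the cheaper PRAGMA quick_check for routine monitoring.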

Summary

  • SQLite is a legitimate production database for read-heavy, single-server applications under 1 TB
  • Enable WAL mode, set busy_timeout, configure cache_size, and use synchronous = NORMAL for optimal performance
  • Use separate read and write database connections for better concurrency
  • Litestream provides continuous, point-in-time backup streaming to cloud storage ($0 operational overhead)
  • Wrap multiple writes in a single transaction; batching commits is dramatically faster (often ~100x) than autocommit inserts
  • Don’t use SQLite on network filesystems, for high-write workloads, or when you need horizontal write scaling
  • Monitor WAL file size, SQLITE_BUSY frequency, and database file growth