Cloudflare D1 — Serverless SQL Database on the Edge

Posted on: 4/26/2026 2:14:44 PM

When your web application needs a database but you don't want to manage servers, worry about scaling, or pay when there's no traffic — Cloudflare D1 is the answer. It's a serverless SQL database built on SQLite, running directly on Cloudflare's global edge network. No provisioning, no connection pooling, no significant cold starts — just write SQL and deploy. This article provides a deep analysis of D1's internal architecture, Workers integration, global read replication strategy, and when you should (or shouldn't) choose D1 for your project.

  • 10 GB: maximum size per database
  • 50,000: databases per account (paid plan)
  • $0: egress fees (zero bandwidth charges)
  • ~0 ms: cold start via Workers binding

1. Why Cloudflare D1?

Before D1, if you built applications on Cloudflare Workers (serverless edge functions), you had two options for structured data storage:

| Solution | Pros | Cons |
| --- | --- | --- |
| Workers KV | Fast key-value, eventually consistent, edge-native | No SQL, no complex queries, no JOINs |
| External DB (PlanetScale, Supabase, Neon...) | Full SQL, mature ecosystem | High latency (round-trip from edge to DB region), needs connection pooling, egress fees |
| Durable Objects | Strong consistency, transactional, edge-native | Each object is a single point, no querying across objects |

D1 fills the gap between KV (too simple) and external databases (too far away). It brings full SQL to the edge with low latency and near-zero cost when idle.

graph LR
    subgraph "Before D1"
        W1["Workers"] -->|"High latency"| EDB["External DB<br/>us-east-1"]
        W1 -->|"Key-value only"| KV["Workers KV"]
    end
    subgraph "With D1"
        W2["Workers"] -->|"~0ms binding"| D1["D1 Database<br/>SQLite on Edge"]
        D1 -->|"Auto replicate"| R1["Read Replica<br/>Asia"]
        D1 -->|"Auto replicate"| R2["Read Replica<br/>Europe"]
    end
    style D1 fill:#e94560,stroke:#fff,color:#fff
    style W2 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style EDB fill:#f8f9fa,stroke:#e0e0e0,color:#888
    style R1 fill:#4CAF50,stroke:#fff,color:#fff
    style R2 fill:#4CAF50,stroke:#fff,color:#fff

Figure 1: D1 brings SQL database directly to the edge — eliminating round-trips to distant DB regions

2. D1 Internal Architecture

D1 is built on two core Cloudflare primitives:

2.1. SQLite as the SQL Engine

Instead of building a database engine from scratch, Cloudflare chose SQLite — the most widely deployed database in the world with over 1 trillion active databases. SQLite provides:

  • Full SQL: JOIN, subqueries, window functions, CTEs, JSON functions, full-text search (FTS5)
  • Zero configuration: No buffer pool tuning, shared memory, or connection limit management
  • Single-file storage: The entire database is one file — easy to backup, replicate, and migrate
  • Battle-tested: Over 20 years of development, used in every smartphone, browser, and operating system

2.2. Durable Objects as the Coordination Layer

Each D1 database is hosted by a Durable Object — ensuring that at any point in time, there is exactly one instance of the database running globally. This solves the consistency problem without requiring complex distributed consensus protocols:

graph TB
    subgraph "Global Cloudflare Network"
        subgraph "Primary Region"
            DO["Durable Object<br/>(Single Writer)"]
            DB["SQLite File<br/>(WAL Mode)"]
            DO --> DB
        end
        subgraph "Edge PoPs"
            E1["PoP Singapore"]
            E2["PoP Frankfurt"]
            E3["PoP São Paulo"]
        end
    end
    E1 -->|"Read/Write requests"| DO
    E2 -->|"Read/Write requests"| DO
    E3 -->|"Read/Write requests"| DO
    style DO fill:#e94560,stroke:#fff,color:#fff
    style DB fill:#2c3e50,stroke:#fff,color:#fff
    style E1 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style E2 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style E3 fill:#f8f9fa,stroke:#e94560,color:#2c3e50

Figure 2: D1 uses Durable Objects as single-writer primary — guaranteeing snapshot isolation

Snapshot Isolation — same level as SQLite WAL mode

D1 provides snapshot isolation — the same consistency level as SQLite running in WAL (Write-Ahead Logging) mode:

  • Readers don't block writers: multiple read queries can execute concurrently with writes. Each reader sees a consistent snapshot from when the query started.
  • Writers are serialized: only one write transaction runs at a time (single-writer model), so write conflicts cannot occur.

This is an intentional trade-off: D1 chooses simplicity and correctness over complex multi-writer architectures.
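A toy model makes the WAL-mode semantics concrete: the single writer applies changes in order, while each reader works against an immutable snapshot taken when its query begins. This is an illustrative sketch, not D1's implementation:

```typescript
// Toy model of WAL-style snapshot isolation: each reader gets an immutable
// snapshot frozen when its "query" starts; the single writer mutates freely.
class SnapshotStore {
  private data = new Map<string, number>();

  // Writer path: there is only one writer, so writes apply directly, in order.
  write(key: string, value: number): void {
    this.data.set(key, value);
  }

  // Reader path: a snapshot taken when the read "transaction" begins.
  beginRead(): ReadonlyMap<string, number> {
    return new Map(this.data);
  }
}

const store = new SnapshotStore();
store.write("price", 100);

const snapshot = store.beginRead(); // a reader starts here
store.write("price", 120);          // the writer commits while the read is open

console.log(snapshot.get("price"));          // 100 — the reader's consistent view
console.log(store.beginRead().get("price")); // 120 — new reads see the commit
```

The write did not block the open read, and the open read did not see the concurrent write: exactly the "readers don't block writers" property described above.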

3. Workers Integration

D1 is designed for zero-config integration with Workers through bindings — no connection strings, no drivers, no connection pooling needed:

# wrangler.toml — declare D1 binding
name = "my-app"
main = "src/index.ts"
compatibility_date = "2026-04-01"

[[d1_databases]]
binding = "DB"
database_name = "my-production-db"
database_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

// src/index.ts — Worker using D1
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    if (url.pathname === "/api/products" && request.method === "GET") {
      // Prepared statement — auto-parameterized, prevents SQL injection
      const { results } = await env.DB.prepare(
        "SELECT id, name, price FROM products WHERE category = ? ORDER BY created_at DESC LIMIT ?"
      )
        .bind("electronics", 20)
        .all();

      return Response.json(results);
    }

    if (url.pathname === "/api/products" && request.method === "POST") {
      const body = await request.json();

      // Batch operations — multiple statements in 1 round-trip
      const statements = [
        env.DB.prepare(
          "INSERT INTO products (name, price, category) VALUES (?, ?, ?)"
        ).bind(body.name, body.price, body.category),
        env.DB.prepare(
          "INSERT INTO audit_log (action, entity, timestamp) VALUES (?, ?, ?)"
        ).bind("CREATE", "product", new Date().toISOString()),
      ];

      const results = await env.DB.batch(statements);
      return Response.json({ success: true, results });
    }

    return new Response("Not found", { status: 404 });
  },
};

interface Env {
  DB: D1Database;
}

Batch API — atomic multi-statement

env.DB.batch() sends multiple SQL statements in a single round-trip to the database, all executing within the same implicit transaction. If any statement fails, the entire batch rolls back. This is the most efficient way to perform complex write operations on D1 without an explicit transaction API.
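The all-or-nothing semantics can be sketched with a toy in-memory store: run every statement against a copy and commit the copy only if none of them fail. Names here are illustrative, not D1 internals:

```typescript
type Row = Record<string, unknown>;
type Statement = (tables: Map<string, Row[]>) => void; // throws on failure

// Apply all statements to a copy of the tables; commit only if none throw.
function runBatch(tables: Map<string, Row[]>, statements: Statement[]): boolean {
  const copy = new Map(
    [...tables].map(([name, rows]) => [name, rows.map(r => ({ ...r }))] as [string, Row[]])
  );
  try {
    for (const stmt of statements) stmt(copy);
  } catch {
    return false; // any failure rolls the whole batch back
  }
  for (const [name, rows] of copy) tables.set(name, rows); // commit everything
  return true;
}

const db = new Map<string, Row[]>([["products", []], ["audit_log", []]]);

// The second statement fails, so the first insert must not survive.
const ok = runBatch(db, [
  t => { t.get("products")!.push({ name: "Widget", price: 9.99 }); },
  t => { throw new Error("constraint violation"); },
]);

console.log(ok, db.get("products")!.length); // false 0
```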

3.1. Client API Reference

| Method | Description | Use case |
| --- | --- | --- |
| .prepare(sql) | Creates a prepared statement with a parameterized query | All queries; always use it to prevent SQL injection |
| .bind(...params) | Binds values to ? placeholders | Safely passing dynamic values |
| .all() | Returns all rows plus metadata (success, meta.changes, meta.duration) | SELECT queries returning multiple rows |
| .first(column?) | Returns the first row, or a single column value | SELECT ... LIMIT 1, COUNT(*), etc. |
| .run() | Executes a statement without returning rows | Write operations: INSERT, UPDATE, DELETE |
| .raw() | Returns rows as arrays instead of objects | Maximum performance; skips column-mapping overhead |
| .batch(stmts[]) | Executes multiple statements in a single transaction | Atomic multi-write operations |
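The difference between `.all()` and `.raw()` is purely the row shape: objects keyed by column name versus positional arrays that skip per-row key mapping. A toy illustration of the two shapes (not D1's code):

```typescript
const columns = ["id", "name", "price"] as const;

// Rows as .all() would return them: one object per row, keyed by column name.
const objectRows = [
  { id: 1, name: "Widget", price: 9.99 },
  { id: 2, name: "Gadget", price: 19.99 },
];

// Rows as .raw() would return them: positional arrays in column order,
// with no per-row property allocation.
const rawRows = objectRows.map(row => columns.map(c => row[c]));

console.log(rawRows[0]); // [1, "Widget", 9.99]
```

For large result sets the array form avoids building an object per row, which is why `.raw()` is the fastest way to pull many rows out of D1.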

4. Global Read Replication — Sessions API

The biggest limitation of the single-writer architecture: if the primary database is in the US, users in Vietnam face ~200ms latency for every query. The Sessions API solves this by automatically creating read replicas close to users:

sequenceDiagram
    participant User as User (Vietnam)
    participant Edge as Edge PoP (Singapore)
    participant Replica as Read Replica (Singapore)
    participant Primary as Primary DB (US)

    User->>Edge: SELECT query
    Edge->>Replica: Read from local replica
    Replica-->>Edge: Result (< 5ms)
    Edge-->>User: Fast response

    User->>Edge: INSERT/UPDATE query
    Edge->>Primary: Route to primary
    Primary-->>Edge: Write confirmed + commit token
    Edge-->>User: Response + new commit token

    Note over Edge,Replica: Replica receives async replication
    Note over User,Edge: Next read uses commit token<br/>to ensure sequential consistency

Figure 3: Sessions API — reads from nearest replica, writes route to primary

4.1. Commit Tokens — Ensuring Sequential Consistency

The classic problem with async replication: read-after-write inconsistency. A user inserts a row, but the next read goes to a replica that hasn't synced yet — the newly written data is invisible. D1 solves this with commit tokens (Lamport timestamps):

// Using Sessions API with commit tokens
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Get commit token from cookie or header
    const token = request.headers.get("x-d1-token") || "";

    // Create a session from the token; D1 ensures reads see at least this
    // point. Without a token, "first-unconstrained" lets the first query
    // run against any replica.
    const session = env.DB.withSession(token || "first-unconstrained");

    if (request.method === "POST") {
      // Writes always go to primary
      const result = await session
        .prepare("INSERT INTO posts (title, body) VALUES (?, ?)")
        .bind("Hello", "World")
        .run();

      // Return the session's latest bookmark as the commit token;
      // the client sends it back with its next request
      return Response.json(
        { success: true },
        { headers: { "x-d1-token": session.getBookmark() ?? "" } }
      );
    }

    // Reads can go to nearest replica (if synced to token)
    const { results } = await session
      .prepare("SELECT * FROM posts ORDER BY id DESC LIMIT 10")
      .all();

    return Response.json(results);
  },
};

Sequential Consistency — not Strong Consistency

The Sessions API guarantees sequential consistency — all operations within a session see results in the order they were performed. But across different sessions, different states may be visible (stale reads are acceptable).

In practice: User A just posted a comment — User A sees the comment immediately (thanks to the commit token). User B in Europe might see the comment after a few hundred milliseconds when the replica finishes syncing. For most web applications, this is a perfectly acceptable trade-off.
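Under the hood, the guarantee reduces to a monotonic comparison: every commit on the primary advances a token, each replica tracks the highest token it has applied, and a session's read can be served by a replica only if that replica has caught up to the session's token. An illustrative sketch (the names are hypothetical, not D1 internals):

```typescript
interface Replica {
  region: string;
  appliedToken: number; // highest commit token this replica has applied
}

// Serve the read from a replica that has caught up to the session's token;
// otherwise fall back to the primary to preserve read-after-write.
function routeRead(sessionToken: number, replicas: Replica[]): string {
  const upToDate = replicas.find(r => r.appliedToken >= sessionToken);
  return upToDate ? upToDate.region : "primary";
}

const replicas: Replica[] = [
  { region: "singapore", appliedToken: 41 },
  { region: "frankfurt", appliedToken: 43 },
];

console.log(routeRead(40, replicas)); // singapore — already past commit 40
console.log(routeRead(44, replicas)); // primary — no replica has commit 44 yet
```

This is why User A (carrying a fresh token) never sees a stale replica, while User B (with an older token or none) can happily read from whichever replica is nearest.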

5. Time Travel — Point-in-Time Recovery

D1 automatically tracks every change (no configuration needed) and can restore the database to any minute within the last 30 days:

# Get the current bookmark (an opaque restore-point ID) for a database
npx wrangler d1 time-travel info my-production-db

# Restore to a specific point in time
npx wrangler d1 time-travel restore my-production-db \
  --timestamp "2026-04-25T10:30:00Z"

# Or restore to a bookmark: capture one with `info` before running a
# risky migration, then roll back to it if something goes wrong
npx wrangler d1 time-travel restore my-production-db \
  --bookmark <bookmark-id>

Use Time Travel to debug production

A powerful use case: download a production snapshot to debug locally. Instead of reproducing bugs on staging (which often has different data), you can pull the exact database state at the time the bug occurred and query it directly on your local machine with any SQLite client.

6. Schema Migration and Computed Columns

6.1. Migrations with Wrangler

# Create migration file
npx wrangler d1 migrations create my-db add-user-profiles

# File created at migrations/0001_add-user-profiles.sql

-- migrations/0001_add-user-profiles.sql
CREATE TABLE IF NOT EXISTS user_profiles (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id TEXT NOT NULL UNIQUE,
    display_name TEXT NOT NULL,
    bio TEXT,
    avatar_url TEXT,
    metadata JSON,
    created_at TEXT DEFAULT (datetime('now')),
    updated_at TEXT DEFAULT (datetime('now'))
);

CREATE INDEX idx_user_profiles_user_id ON user_profiles(user_id);

-- Computed column — auto-extracted from JSON
ALTER TABLE user_profiles
ADD COLUMN location TEXT
GENERATED ALWAYS AS (json_extract(metadata, '$.location')) STORED;

# Apply migration to production (or preview first)
npx wrangler d1 migrations apply my-db --remote

# Check migration status
npx wrangler d1 migrations list my-db --remote

6.2. Computed Columns — database-level calculations

Computed columns allow D1 to automatically calculate values from other columns or JSON data. Particularly useful when storing flexible JSON but still needing fast indexed queries:

-- Store settings as JSON, but extract frequently queried fields
CREATE TABLE products (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    data JSON NOT NULL,
    -- Computed columns — auto-extracted from JSON
    price REAL GENERATED ALWAYS AS (json_extract(data, '$.price')) STORED,
    category TEXT GENERATED ALWAYS AS (json_extract(data, '$.category')) STORED,
    in_stock INTEGER GENERATED ALWAYS AS (
        CASE WHEN json_extract(data, '$.quantity') > 0 THEN 1 ELSE 0 END
    ) STORED
);

-- Index on computed columns — queries as fast as regular columns
CREATE INDEX idx_products_category ON products(category);
CREATE INDEX idx_products_price ON products(price);

-- Query normally — no json_extract needed in WHERE
SELECT name, price FROM products
WHERE category = 'electronics' AND in_stock = 1
ORDER BY price ASC;

STORED vs VIRTUAL computed columns

SQLite supports two types of computed columns:

  • STORED: values are computed and written to disk on INSERT/UPDATE. Uses more storage, but SELECTs are faster. D1 only supports STORED, as it's better suited for read-heavy edge workloads.
  • VIRTUAL: values are recomputed on every SELECT. Saves storage but costs CPU on each query.

7. ORM and Type Safety — Drizzle, Prisma

Writing raw SQL works fine for small projects, but as schemas grow complex, you need an ORM. D1 has first-class support for the two most popular ORMs:

7.1. Drizzle ORM — lightweight, type-safe

// schema.ts — define schema with Drizzle
import { sqliteTable, text, integer, real } from "drizzle-orm/sqlite-core";
import { sql } from "drizzle-orm";

export const products = sqliteTable("products", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  name: text("name").notNull(),
  price: real("price").notNull(),
  category: text("category").notNull(),
  createdAt: text("created_at").default(sql`(datetime('now'))`),
});

export const orders = sqliteTable("orders", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  productId: integer("product_id").references(() => products.id),
  quantity: integer("quantity").notNull(),
  total: real("total").notNull(),
});

// worker.ts — using Drizzle with D1
import { drizzle } from "drizzle-orm/d1";
import { eq, desc, and, gte } from "drizzle-orm";
import * as schema from "./schema";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const db = drizzle(env.DB, { schema });

    // Type-safe query — IDE autocomplete + compile-time checking
    const expensiveProducts = await db
      .select({
        name: schema.products.name,
        price: schema.products.price,
      })
      .from(schema.products)
      .where(
        and(
          eq(schema.products.category, "electronics"),
          gte(schema.products.price, 100)
        )
      )
      .orderBy(desc(schema.products.price))
      .limit(10);

    // JOIN query — fully type-safe
    const ordersWithProducts = await db
      .select({
        orderId: schema.orders.id,
        productName: schema.products.name,
        quantity: schema.orders.quantity,
        total: schema.orders.total,
      })
      .from(schema.orders)
      .innerJoin(
        schema.products,
        eq(schema.orders.productId, schema.products.id)
      );

    return Response.json({ expensiveProducts, ordersWithProducts });
  },
};

| Criteria | Drizzle ORM | Prisma |
| --- | --- | --- |
| Bundle size | ~45 KB (lightweight) | ~2 MB (Prisma Client + engine) |
| D1 support | Native adapter, zero overhead | Via a driver adapter or Prisma Accelerate |
| Type safety | SQL-first: the schema is TypeScript code | Schema-first: generates a client from a .prisma file |
| Migrations | drizzle-kit push (auto schema diff) | prisma migrate (migration files) |
| Raw SQL | Easy to mix in raw SQL when needed | $queryRaw, less type-safe |
| Edge runtime | Runs well on Workers | Needs a driver adapter or Accelerate (extra hop) |

Recommendation for D1

Drizzle ORM is the better choice for D1 because of its small bundle size (important on Workers' 10 MB limit), native D1 adapter without proxying, and an API close to raw SQL — making query optimization straightforward. Prisma is better if your team is already invested in the Prisma ecosystem and accepts the overhead.

8. Multi-tenant Architecture — Database per Tenant

An architecture where D1 excels: database-per-tenant. With a limit of 50,000 databases per account and no charges for idle databases, you can create a separate database for each user or organization:

graph TB
    subgraph "Traditional Multi-tenant"
        APP1["App Server"] --> SHARED["Shared Database<br/>tenant_id column everywhere<br/>Complex RLS policies"]
    end
    subgraph "D1 Multi-tenant"
        APP2["Worker"] --> ROUTER["Tenant Router"]
        ROUTER --> DB1["D1: tenant-acme<br/>10 MB"]
        ROUTER --> DB2["D1: tenant-globex<br/>50 MB"]
        ROUTER --> DB3["D1: tenant-initech<br/>5 MB"]
        ROUTER --> DB4["D1: tenant-...<br/>50,000 DBs max"]
    end
    style SHARED fill:#ff9800,stroke:#fff,color:#fff
    style ROUTER fill:#e94560,stroke:#fff,color:#fff
    style DB1 fill:#4CAF50,stroke:#fff,color:#fff
    style DB2 fill:#4CAF50,stroke:#fff,color:#fff
    style DB3 fill:#4CAF50,stroke:#fff,color:#fff
    style DB4 fill:#4CAF50,stroke:#fff,color:#fff

Figure 4: D1 per-tenant — each tenant gets its own database with complete isolation

Benefits of per-tenant databases:

  • Complete isolation: No need for WHERE tenant_id = ? on every query. A bug in one tenant doesn't affect others.
  • Easier compliance: Data residency (GDPR) — place EU tenant databases in EU regions. Deleting a tenant = deleting a database, no need for DELETE FROM ... WHERE tenant_id = ? across 50 tables.
  • Predictable performance: Large tenants don't slow down small ones. Each database has its own resources.
  • Schema flexibility: You can migrate schemas gradually (blue-green migration per tenant).
// Tenant routing — look up the database binding dynamically
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Resolve the tenant from the request, e.g. subdomain or auth claim
    const tenantId = getTenantFromRequest(request);

    // D1 binding names must be known at deploy time, but you can use
    // the Cloudflare API to manage dynamic bindings
    const dbName = `TENANT_DB_${tenantId}`;
    const db = (env as Record<string, unknown>)[dbName] as D1Database | undefined;

    if (!db) {
      return new Response("Tenant not found", { status: 404 });
    }

    const { results } = await db
      .prepare("SELECT * FROM settings")
      .all();

    return Response.json(results);
  },
};
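The `getTenantFromRequest` helper above is left undefined; one common scheme derives the tenant from the request's subdomain. A minimal sketch under that assumption (the hostname layout and `example.com` base domain are illustrative):

```typescript
// Hypothetical helper: "acme.example.com" -> "acme".
// Returns null for the apex domain or unexpected hostnames.
function getTenantFromHostname(hostname: string, baseDomain = "example.com"): string | null {
  if (!hostname.endsWith("." + baseDomain)) return null;
  const sub = hostname.slice(0, -(baseDomain.length + 1));
  // Reject nested subdomains and empty labels.
  return sub && !sub.includes(".") ? sub : null;
}

console.log(getTenantFromHostname("acme.example.com")); // "acme"
console.log(getTenantFromHostname("example.com"));      // null
console.log(getTenantFromHostname("a.b.example.com"));  // null
```

In a Worker you would call it as `getTenantFromHostname(new URL(request.url).hostname)`; JWT claims or an API-key lookup are equally valid tenant sources.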

9. Pricing — True Scale to Zero

| Tier | Free (Workers Free) | Paid ($5/mo Workers Paid) |
| --- | --- | --- |
| Reads | 5 million rows/day | 25 billion rows/month included, then $0.001/million rows |
| Writes | 100,000 rows/day | 50 million rows/month included, then $1.00/million rows |
| Storage | 5 GB (total across all DBs) | 5 GB included, then $0.75/GB-month |
| Database limit | 500 databases | 50,000 databases |
| Max DB size | 10 GB | 10 GB |
| Egress fees | $0 | $0 |

Cost comparison with alternatives

Example: a SaaS app with 10 million reads + 500K writes/month, 2 GB storage:

  • Cloudflare D1: $5/month (just the Workers Paid plan; reads, writes, and storage all fit in the included allowances)
  • PlanetScale (Scaler Pro): ~$39/month
  • Neon (Scale): ~$69/month
  • Supabase (Pro): ~$25/month

D1 is significantly cheaper, but remember the trade-offs: 10 GB limit, SQLite SQL only, and must run on Workers.
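The arithmetic behind that comparison can be made explicit with the rates from the pricing table (standard usage-minus-included overage math; the numbers come from the table, the function itself is illustrative):

```typescript
// D1 paid-plan pricing, per the table above.
const BASE_FEE = 5; // $/month for the Workers Paid plan
const INCLUDED = { reads: 25e9, writes: 50e6, storageGB: 5 };
const RATE = { readsPerMillion: 0.001, writesPerMillion: 1.0, storagePerGB: 0.75 };

// Monthly cost: base fee plus overage beyond the included allowances.
function monthlyCost(reads: number, writes: number, storageGB: number): number {
  const over = (used: number, included: number) => Math.max(0, used - included);
  return (
    BASE_FEE +
    (over(reads, INCLUDED.reads) / 1e6) * RATE.readsPerMillion +
    (over(writes, INCLUDED.writes) / 1e6) * RATE.writesPerMillion +
    over(storageGB, INCLUDED.storageGB) * RATE.storagePerGB
  );
}

// The SaaS example: 10M reads, 500K writes, 2 GB storage -> just the base fee.
console.log(monthlyCost(10e6, 500_000, 2)); // 5
```

Even at 30 billion reads/month (5 billion over the allowance), the overage adds only $5, which is why read-heavy workloads stay cheap on D1.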

10. Limitations and When NOT to Use D1

| Limitation | Details | Impact |
| --- | --- | --- |
| Max DB size | 10 GB per database | Not suitable for data warehouses or apps with lots of binary data |
| Max value size | ~2 MB per TEXT/BLOB value (D1 limit) | Don't store large blobs directly; use R2 for files |
| Write latency | Depends on distance to the primary | Users far from the primary region see higher write latency |
| No stored procedures | SQLite has no traditional stored procedures | Complex logic must live in Worker code |
| No real-time subscriptions | No built-in change streams or webhooks | Need Durable Objects or polling for real-time features |
| Single writer | All writes go through one primary | Write throughput has a ceiling; writes cannot scale horizontally |

D1 is NOT suitable for:

1. Large datasets: Analytics, logs, time-series data exceeding 10 GB — use ClickHouse, BigQuery, or Cloudflare Analytics Engine instead.

2. Write-heavy workloads: Real-time chat systems, IoT sensors writing thousands of events per second — the single-writer model becomes a bottleneck.

3. Complex queries requiring PostgreSQL features: If you need extensions (PostGIS, pg_trgm), advanced indexes (GIN, GiST), or materialized views — PostgreSQL remains king.

4. Applications not running on Cloudflare: D1 bindings only work from Workers/Pages. If your backend is .NET, Go, or Python on its own server — you'll need the HTTP API (slower than bindings).

5. Cross-database ACID transactions: Each D1 database is independent — there are no distributed transactions between two D1 databases.

11. D1 in the Cloudflare Ecosystem

graph TB
    subgraph "Application Layer"
        W["Workers / Pages Functions"]
    end
    subgraph "Storage Layer"
        D1DB["D1<br/>Relational SQL"]
        KV["Workers KV<br/>Key-Value, edge-cached"]
        R2["R2<br/>Object Storage (files, images)"]
        DO["Durable Objects<br/>Stateful coordination"]
        VE["Vectorize<br/>Vector embeddings"]
        AE["Analytics Engine<br/>Time-series, event logs"]
        HY["Hyperdrive<br/>Connection pool to external DBs"]
    end
    W --> D1DB
    W --> KV
    W --> R2
    W --> DO
    W --> VE
    W --> AE
    W --> HY
    style D1DB fill:#e94560,stroke:#fff,color:#fff
    style W fill:#2c3e50,stroke:#fff,color:#fff
    style KV fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style R2 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style DO fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style VE fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style AE fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style HY fill:#f8f9fa,stroke:#e94560,color:#2c3e50

Figure 5: D1 is part of Cloudflare's storage ecosystem — each service serves a different use case

| What do you need? | What to use |
| --- | --- |
| Relational data, SQL queries, ACID transactions | D1 |
| Simple key-value lookups, configuration, session data | Workers KV |
| Files, images, videos, large binary objects | R2 |
| Real-time coordination, WebSocket state, rate limiting | Durable Objects |
| Semantic search, AI embeddings, similarity queries | Vectorize |
| Event logs, analytics, time-series (write-heavy) | Analytics Engine |
| Connecting Workers to external PostgreSQL/MySQL | Hyperdrive |

12. Conclusion

Cloudflare D1 represents a new direction in databases: bringing SQL databases to the edge, zero-config, scale-to-zero. Built on battle-tested SQLite, with Durable Objects ensuring consistency, Sessions API for global read replication, and pricing that's nearly free for small-to-medium workloads — D1 is the ideal choice for edge-first applications on Cloudflare.

However, D1 is not a universal database. The 10 GB limit, single-writer model, and vendor lock-in with Cloudflare Workers are important trade-offs to consider. For systems requiring high write throughput, large datasets, or PostgreSQL-level features — continue using managed PostgreSQL (Neon, Supabase) or Hyperdrive to connect from Workers to external databases. D1 shines in its niche: edge-native, read-heavy applications that need SQL and want maximum simplicity.
