Cloudflare Workers — Building Free Full-Stack Serverless Apps on the Edge

Posted on: 4/22/2026 10:13:57 AM

When it comes to serverless, most developers think of AWS Lambda or Azure Functions. But one platform is quietly becoming the top choice for edge-first applications: Cloudflare Workers. With a complete ecosystem — Workers (compute), D1 (SQL database), R2 (object storage), KV (key-value), Durable Objects (stateful coordination), and Queues (message queue) — you can build a fully functional full-stack application without any traditional servers, and run most of it for free. This article dives into the architecture and how the services connect, then walks through building production-ready apps on Cloudflare.

1. Cloudflare Workers Ecosystem Overview

Cloudflare Workers run on V8 isolates — the same JavaScript engine Chrome uses — rather than on containers or VMs. Each request is handled in a separate isolate with cold start times under 1ms, deployed across more than 330 data centers worldwide. This is the fundamental difference from Lambda, which is container-based and has cold starts of 100ms to several seconds.

  • 330+ global data centers
  • <1ms average cold start
  • 100K free requests/day
  • $0 egress bandwidth (R2)
graph TB
    USER["👤 User Request"] --> EDGE["Cloudflare Edge<br/>(nearest PoP)"]
    EDGE --> WORKER["⚡ Worker<br/>(V8 Isolate)"]
    WORKER --> D1["🗄️ D1<br/>SQL Database"]
    WORKER --> R2["📦 R2<br/>Object Storage"]
    WORKER --> KV["🔑 KV<br/>Key-Value Store"]
    WORKER --> DO["🔒 Durable Objects<br/>Stateful Coordination"]
    WORKER --> QUEUE["📨 Queues<br/>Async Processing"]
    QUEUE --> CONSUMER["⚡ Consumer Worker"]
    style WORKER fill:#e94560,stroke:#fff,color:#fff
    style EDGE fill:#2c3e50,stroke:#e94560,color:#fff
    style D1 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style R2 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style KV fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style DO fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style QUEUE fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style USER fill:#2c3e50,stroke:#e94560,color:#fff
    style CONSUMER fill:#e94560,stroke:#fff,color:#fff

Figure 1: Cloudflare Workers ecosystem architecture — all services connected via bindings

1.1. Free Tier — Detailed Limits

Cloudflare's biggest strength is its generous free tier, sufficient to run production workloads for small to medium applications:

| Service | Free Tier | Paid ($5/month) | AWS Comparison |
|---|---|---|---|
| Workers | 100K requests/day, 10ms CPU/request | 10M requests/month, 30s CPU | Lambda: 1M requests/month free |
| D1 | 5M rows read/day, 100K rows write/day, 5GB storage | 25B reads/month, 50M writes/month | Aurora Serverless: no free tier |
| R2 | 10GB storage, 1M Class A ops, 10M Class B ops/month | $0.015/GB, $0 egress | S3: 5GB free, $0.09/GB egress |
| KV | 100K reads/day, 1K writes/day, 1GB storage | 10M reads/month, 1M writes/month | DynamoDB: 25 RCU/WCU free |
| Durable Objects | 100K requests/day, 5GB SQLite storage | 1M requests/month | No direct equivalent |
| Queues | 10K operations/day | 1M operations/month | SQS: 1M requests/month free |

R2 — Zero egress fees

R2 is an object storage service with zero egress charges. With AWS S3, you pay roughly $0.09 per GB of data transferred out to the internet; with R2, that bandwidth is completely free. For applications serving many static files (images, videos, documents), R2 can save hundreds to thousands of dollars per month compared to S3.
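The savings are easy to put in numbers. A back-of-envelope sketch using the list prices cited above (S3 internet egress ≈ $0.09/GB, R2 egress = $0 — actual S3 pricing is tiered and varies by region):

```typescript
// Rough monthly egress cost comparison in USD.
// Prices are the approximate list rates mentioned in the text.
function s3EgressCost(gbPerMonth: number, pricePerGb = 0.09): number {
  return gbPerMonth * pricePerGb;
}

function r2EgressCost(_gbPerMonth: number): number {
  return 0; // R2 charges nothing for egress bandwidth
}

// Example: serving 5 TB (≈5000 GB) of images per month
const gb = 5000;
console.log(
  `S3: $${s3EgressCost(gb).toFixed(2)} | R2: $${r2EgressCost(gb).toFixed(2)}`
);
```

At that volume the egress line item alone is several hundred dollars on S3 and zero on R2 — storage costs are comparable between the two.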

2. Bindings — The Core Connection Mechanism

In the Cloudflare ecosystem, a binding is how a Worker accesses other resources. Instead of connecting via endpoints/URLs as with the AWS SDK, bindings are injected directly into the Worker's runtime through the env parameter. This brings two benefits: minimal latency between Worker and storage (no public-internet hop), and no credential management.

# wrangler.toml — declaring bindings
name = "my-app"
main = "src/index.ts"
compatibility_date = "2026-04-01"

[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "xxxx-xxxx-xxxx"

[[r2_buckets]]
binding = "STORAGE"
bucket_name = "my-app-files"

[[kv_namespaces]]
binding = "CACHE"
id = "xxxx"

[[queues.producers]]
binding = "TASK_QUEUE"
queue = "background-tasks"

[[queues.consumers]]
queue = "background-tasks"
max_batch_size = 10
max_batch_timeout = 30
// src/index.ts — using bindings via env
interface Env {
  DB: D1Database;
  STORAGE: R2Bucket;
  CACHE: KVNamespace;
  TASK_QUEUE: Queue;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // D1: direct SQL query
    const posts = await env.DB
      .prepare("SELECT id, title, created_at FROM posts ORDER BY created_at DESC LIMIT 10")
      .all();

    // KV: read cache
    const cached = await env.CACHE.get("homepage:data", "json");

    // R2: upload file
    if (request.method === "PUT") {
      await env.STORAGE.put("uploads/avatar.png", request.body);
    }

    // Queue: send async task
    await env.TASK_QUEUE.send({
      type: "process-image",
      key: "uploads/avatar.png"
    });

    return Response.json({ posts: posts.results });
  }
} satisfies ExportedHandler<Env>;

Binding vs SDK — architectural difference

With AWS, when Lambda calls S3, the request traverses the network (even within the same region there's ~1-5ms of latency). With Cloudflare, a binding is an in-process reference rather than a call to a public endpoint: no DNS lookup, no TLS handshake, no credential exchange. The result: D1 and KV reads served from a nearby replica typically complete in low single-digit milliseconds, and R2 reads in <5ms.
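A side benefit of bindings being plain properties on env: Workers are straightforward to unit-test by injecting in-memory fakes. A minimal sketch — `FakeKV`, `KVLike`, and `handle` are hypothetical names for illustration, not part of the Workers API:

```typescript
// A tiny subset of the KVNamespace surface, enough for this handler
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// In-memory stand-in for a real KV binding
class FakeKV implements KVLike {
  private store = new Map<string, string>();
  async get(key: string) { return this.store.get(key) ?? null; }
  async put(key: string, value: string) { this.store.set(key, value); }
}

// A minimal handler that reads a greeting from its KV binding
async function handle(env: { CACHE: KVLike }): Promise<string> {
  return (await env.CACHE.get("greeting")) ?? "hello (default)";
}

const env = { CACHE: new FakeKV() };
console.log(await handle(env));          // falls back to the default
await env.CACHE.put("greeting", "hi!");
console.log(await handle(env));          // reads the stored value
```

In real projects, Cloudflare's own test tooling (e.g. running Workers locally via wrangler) covers integration testing; the fake-binding approach above is for fast, pure unit tests.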

3. D1 — Serverless SQL Database on the Edge

D1 is a serverless SQL database built on SQLite, running on Cloudflare's edge network. Its standout feature is Global Read Replication: data is automatically replicated to data centers close to users, cutting read latency to a few milliseconds in most regions.

# Initialize D1 database
npx wrangler d1 create my-app-db

# Create schema
npx wrangler d1 execute my-app-db --command "CREATE TABLE posts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  title TEXT NOT NULL,
  body TEXT NOT NULL,
  slug TEXT UNIQUE NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  view_count INTEGER DEFAULT 0
)"

# Seed data
npx wrangler d1 execute my-app-db --command "INSERT INTO posts (title, body, slug) VALUES ('Hello World', 'First post content', 'hello-world')"

3.1. D1 — Practical Patterns

// Parameterized queries (SQL injection prevention)
async function getPostBySlug(env: Env, slug: string) {
  const result = await env.DB
    .prepare("SELECT * FROM posts WHERE slug = ?")
    .bind(slug)
    .first();
  return result;
}

// Batch operations — multiple queries in 1 round-trip
async function getDashboardData(env: Env) {
  const results = await env.DB.batch([
    env.DB.prepare("SELECT COUNT(*) as total FROM posts"),
    env.DB.prepare("SELECT * FROM posts ORDER BY created_at DESC LIMIT 5"),
    env.DB.prepare("SELECT slug, view_count FROM posts ORDER BY view_count DESC LIMIT 5"),
  ]);

  return {
    totalPosts: results[0].results[0].total,
    recentPosts: results[1].results,
    popularPosts: results[2].results,
  };
}

// Time Travel — restore data to any point within 30 days
// npx wrangler d1 time-travel my-app-db --before="2026-04-20T10:00:00Z"

D1 limitations to keep in mind

D1 is built on SQLite, so it doesn't support concurrent writes — each database has a single primary that serializes them. That's perfectly fine for read-heavy workloads like blogs, landing pages, and dashboards. But if your application is write-heavy (real-time chat, gaming leaderboards), consider Durable Objects, or use Queues to serialize writes.
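The "serialize writes" idea can be sketched in plain TypeScript: funnel every write through a single async chain so no two writes ever overlap, which is the same shape as routing D1 writes through a Queue consumer or a Durable Object. `WriteSerializer` is an illustrative name, not a Cloudflare API:

```typescript
// Serializes async writes: each one starts only after the previous
// one has settled, regardless of how concurrently they were submitted.
class WriteSerializer {
  private tail: Promise<unknown> = Promise.resolve();

  enqueue<T>(write: () => Promise<T>): Promise<T> {
    const next = this.tail.then(write, write);
    this.tail = next.catch(() => undefined); // keep the chain alive on errors
    return next;
  }
}

const serializer = new WriteSerializer();
const log: number[] = [];

// Fired concurrently, but executed strictly in submission order
await Promise.all(
  [1, 2, 3].map(n => serializer.enqueue(async () => { log.push(n); }))
);
console.log(log); // order preserved: 1, 2, 3
```

In a real deployment the chain would live in a Queue consumer or a Durable Object rather than in-process, since Worker instances themselves are ephemeral and replicated.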

4. R2 — Object Storage with Zero Egress Fees

R2 is S3 API-compatible, meaning you can use any S3 SDK to interact with it. The headline difference: $0 egress. Cloudflare charges nothing for bandwidth to the internet, whether you serve 1TB or 100TB per month.

// Upload file with metadata
async function uploadFile(env: Env, request: Request) {
  const formData = await request.formData();
  const file = formData.get("file") as File;

  const key = `uploads/${Date.now()}-${file.name}`;

  await env.STORAGE.put(key, file.stream(), {
    httpMetadata: {
      contentType: file.type,
      cacheControl: "public, max-age=31536000",
    },
    customMetadata: {
      uploadedBy: "user-123",
      originalName: file.name,
    },
  });

  return new Response(JSON.stringify({ key, url: `/files/${key}` }), {
    headers: { "Content-Type": "application/json" },
  });
}

// Serve file with cache headers
async function serveFile(env: Env, key: string) {
  const object = await env.STORAGE.get(key);
  if (!object) return new Response("Not Found", { status: 404 });

  const headers = new Headers();
  object.writeHttpMetadata(headers);
  headers.set("etag", object.httpEtag);
  headers.set("cache-control", "public, max-age=31536000, immutable");

  return new Response(object.body, { headers });
}
graph LR
    CLIENT["Client"] -->|"Upload"| WORKER["Worker"]
    WORKER -->|"PUT object"| R2["R2 Storage"]
    WORKER -->|"Save metadata"| D1["D1 Database"]
    CLIENT -->|"Download"| CDN["Cloudflare CDN"]
    CDN -->|"Cache MISS"| R2
    R2 -->|"$0 egress"| CDN
    CDN -->|"Cache HIT"| CLIENT
    style WORKER fill:#e94560,stroke:#fff,color:#fff
    style R2 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style D1 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style CDN fill:#2c3e50,stroke:#e94560,color:#fff
    style CLIENT fill:#2c3e50,stroke:#e94560,color:#fff

Figure 2: R2 combined with CDN — files cached at edge, egress completely free

5. KV — Global Edge Cache with Eventual Consistency

Workers KV is a key-value store replicated across the entire edge network. Data written at one location propagates to all 330+ PoPs within approximately 60 seconds. KV is ideal for read-heavy, write-light data — configuration, feature flags, cached API responses.

// KV with TTL and metadata
async function cacheApiResponse(env: Env, key: string, data: unknown) {
  await env.CACHE.put(key, JSON.stringify(data), {
    expirationTtl: 3600, // 1 hour
    metadata: {
      cachedAt: new Date().toISOString(),
      version: "v2",
    },
  });
}

// Read with metadata to check freshness
async function getCachedData(env: Env, key: string) {
  const { value, metadata } = await env.CACHE.getWithMetadata(key, "json");

  if (!value) return null;

  const cachedAt = new Date(metadata?.cachedAt as string);
  const age = Date.now() - cachedAt.getTime();

  return { data: value, ageMs: age, version: metadata?.version };
}

// Stale-while-revalidate pattern — waitUntil lives on the
// ExecutionContext, so the handler passes it in alongside env
async function getWithSWR(
  env: Env,
  ctx: ExecutionContext,
  key: string,
  fetcher: () => Promise<unknown>
) {
  const cached = await env.CACHE.get(key, "json");
  if (cached) {
    // Return cached immediately, refresh in the background
    ctx.waitUntil(
      fetcher().then(fresh =>
        env.CACHE.put(key, JSON.stringify(fresh), { expirationTtl: 3600 })
      )
    );
    return cached;
  }
  const fresh = await fetcher();
  await env.CACHE.put(key, JSON.stringify(fresh), { expirationTtl: 3600 });
  return fresh;
}

| Feature | KV | D1 | Durable Objects |
|---|---|---|---|
| Data model | Key-Value | Relational (SQL) | Key-Value or SQLite |
| Consistency | Eventual (~60s) | Strong (single writer) | Strong (single instance) |
| Read latency | <1ms (edge cache) | <1ms (read replica) | ~1-5ms (single location) |
| Write latency | ~60s propagation | <5ms (primary) | <1ms (co-located) |
| Use case | Config, cache, feature flags | CRUD, queries, analytics | Real-time state, WebSocket, coordination |

6. Durable Objects — Stateful Computing on the Edge

Durable Objects (DO) solve a problem traditional serverless cannot: stateful coordination. Each DO instance is globally unique, has persistent storage, and guarantees single-threaded execution — no two requests are ever processed simultaneously on the same object.

// Durable Object: Rate Limiter with sliding window
export class RateLimiter implements DurableObject {
  private state: DurableObjectState;

  constructor(state: DurableObjectState) {
    this.state = state;
  }

  async fetch(request: Request): Promise<Response> {
    const ip = new URL(request.url).searchParams.get("ip")!;
    const now = Date.now();
    const windowMs = 60_000; // 1 minute
    const maxRequests = 100;

    // Get timestamp list from SQLite storage
    const timestamps: number[] =
      (await this.state.storage.get(`rate:${ip}`)) ?? [];

    // Remove entries outside window
    const valid = timestamps.filter(t => now - t < windowMs);

    if (valid.length >= maxRequests) {
      return new Response("Rate limit exceeded", {
        status: 429,
        headers: {
          "Retry-After": String(Math.ceil((valid[0] + windowMs - now) / 1000)),
          "X-RateLimit-Remaining": "0",
        },
      });
    }

    valid.push(now);
    await this.state.storage.put(`rate:${ip}`, valid);

    return new Response("OK", {
      headers: {
        "X-RateLimit-Remaining": String(maxRequests - valid.length),
        "X-RateLimit-Reset": String(Math.ceil((valid[0] + windowMs) / 1000)),
      },
    });
  }
}

Durable Objects vs Redis

If you're familiar with Redis, think of each Durable Object as a tiny single-tenant Redis with strong consistency. Redis allows multiple clients to write simultaneously (race conditions are possible without WATCH/transactions); a DO guarantees serial execution — no locks, no transactions needed. The trade-off: DOs lack Redis's richer data structures and features (sorted sets, pub/sub).
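The race that serial execution rules out is the classic lost update. A minimal demonstration in plain TypeScript — `racyIncrement` and `tick` are illustrative names, and the `await tick()` stands in for any point where another request can interleave:

```typescript
let counter = 0;
const tick = () => new Promise<void>(r => setTimeout(r, 0));

// Naive read-modify-write: reads, yields, then writes back a stale value
async function racyIncrement() {
  const read = counter;  // read
  await tick();          // another request interleaves here
  counter = read + 1;    // write back
}

// Two concurrent increments both read 0 and both write 1
await Promise.all([racyIncrement(), racyIncrement()]);
console.log(counter); // 1 — one update was lost

// Serial execution (the Durable Object guarantee) keeps both
counter = 0;
await racyIncrement();
await racyIncrement();
console.log(counter); // 2
```

Inside a Durable Object, the rate-limiter code above is safe precisely because requests to the same object never interleave like the first case.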

7. Queues — Asynchronous Processing

Cloudflare Queues is a serverless message queue, integrated directly with Workers. A Producer Worker sends messages, a Consumer Worker receives and processes them in batches. Queues guarantee at-least-once delivery with automatic retries.

// Producer: send task to queue
async function handleUpload(env: Env, request: Request) {
  const formData = await request.formData();
  const file = formData.get("image") as File;

  // Upload original file to R2
  const key = `originals/${crypto.randomUUID()}.${file.name.split(".").pop()}`;
  await env.STORAGE.put(key, file.stream());

  // Send image processing task to queue
  await env.TASK_QUEUE.send({
    type: "resize-image",
    sourceKey: key,
    sizes: [
      { width: 320, suffix: "sm" },
      { width: 768, suffix: "md" },
      { width: 1200, suffix: "lg" },
    ],
  });

  // Save metadata to D1
  await env.DB.prepare(
    "INSERT INTO images (key, original_name, status) VALUES (?, ?, 'processing')"
  ).bind(key, file.name).run();

  return Response.json({ key, status: "processing" });
}

// Consumer: process batch messages
export default {
  async queue(batch: MessageBatch, env: Env) {
    for (const msg of batch.messages) {
      try {
        const task = msg.body as { type: string; sourceKey: string; sizes: Array<{width: number; suffix: string}> };

        if (task.type === "resize-image") {
          // Read original image from R2
          const original = await env.STORAGE.get(task.sourceKey);
          if (!original) { msg.ack(); continue; }

          // Read the original once so it can be written at each size
          const data = await original.arrayBuffer();

          // Resize and upload each size
          for (const size of task.sizes) {
            const resizedKey = task.sourceKey.replace("originals/", `resized/${size.suffix}/`);
            // Placeholder: real resizing would go through Cloudflare
            // Image Resizing or an image library; the original bytes
            // are written here as a stand-in.
            await env.STORAGE.put(resizedKey, data);
          }

          // Update status
          await env.DB.prepare(
            "UPDATE images SET status = 'ready' WHERE key = ?"
          ).bind(task.sourceKey).run();
        }

        msg.ack();
      } catch (e) {
        msg.retry(); // Automatic retry with exponential backoff
      }
    }
  },
} satisfies ExportedHandler<Env>;
sequenceDiagram
    participant C as Client
    participant W as Worker (Producer)
    participant R2 as R2 Storage
    participant Q as Queue
    participant CW as Consumer Worker
    participant D1 as D1 Database

    C->>W: POST /upload (image)
    W->>R2: PUT original image
    W->>Q: send({type: "resize", ...})
    W->>D1: INSERT status='processing'
    W-->>C: 202 Accepted
    Note over Q,CW: Async processing
    Q->>CW: batch messages
    CW->>R2: GET original
    CW->>R2: PUT resized (sm, md, lg)
    CW->>D1: UPDATE status='ready'
    CW-->>Q: ack()

Figure 3: Image processing pipeline — synchronous upload, asynchronous resize via Queues
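One consequence of at-least-once delivery: the same message can reach the consumer more than once, so side effects should be idempotent. A common pattern is to dedupe on a message id before doing any work. A minimal sketch — `TaskMessage`, `processedIds`, and `handleMessage` are illustrative names, and the in-memory Set stands in for a durable store like KV or D1:

```typescript
interface TaskMessage { id: string; type: string }

// Stand-in for a durable dedupe store (KV/D1 in a real Worker)
const processedIds = new Set<string>();
let sideEffects = 0;

// Returns true if the message did work, false if it was a duplicate
async function handleMessage(msg: TaskMessage): Promise<boolean> {
  if (processedIds.has(msg.id)) return false; // duplicate — skip
  processedIds.add(msg.id);
  sideEffects++; // the real work (resize, insert, email…) goes here
  return true;
}

// Redelivery of the same message becomes a no-op
await handleMessage({ id: "msg-1", type: "resize-image" });
await handleMessage({ id: "msg-1", type: "resize-image" });
console.log(sideEffects); // 1
```

In the image pipeline above, writing resized objects to deterministic R2 keys and using `INSERT OR REPLACE`-style D1 statements achieves the same effect: reprocessing a redelivered message overwrites rather than duplicates.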

8. Dynamic Workers — The Future of Stateful Serverless (2026)

At Developer Week 2026, Cloudflare announced Dynamic Workers (open beta) — a major leap combining serverless with stateful computing. Dynamic Workers can:

  • Auto-scale horizontally from 0 to thousands of instances based on traffic
  • Maintain persistent state across invocations without external storage
  • Support long-running tasks up to 30 minutes per invocation
  • Keep cold starts under a millisecond thanks to V8 isolates

Dynamic Workers vs Durable Objects

If you're using Durable Objects solely for state persistence (sessions, counters, cache), Dynamic Workers offer a simpler alternative — no need to design around the actor model. Durable Objects remain the right choice when you need a single-point-of-execution guarantee — for example, distributed locks, leader election, or coordination between multiple clients on the same resource.

9. Real-World Full-Stack Architecture: Blog Platform

To illustrate how services work together, here's the architecture for a fully-featured blog platform running entirely on Cloudflare:

graph TB
    subgraph "Frontend (Pages)"
        FE["Vue.js / React SPA"]
    end
    subgraph "API Layer (Workers)"
        API["API Worker"]
        AUTH["Auth Middleware"]
    end
    subgraph "Data Layer"
        D1DB["D1: posts, users, comments"]
        R2S["R2: images, attachments"]
        KVC["KV: page cache, config"]
    end
    subgraph "Background"
        QUE["Queue: email, resize, sitemap"]
        CRON["Cron Trigger: daily stats"]
    end
    FE --> AUTH
    AUTH --> API
    API --> D1DB
    API --> R2S
    API --> KVC
    API --> QUE
    CRON --> API
    style FE fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style API fill:#e94560,stroke:#fff,color:#fff
    style AUTH fill:#2c3e50,stroke:#e94560,color:#fff
    style D1DB fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style R2S fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style KVC fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style QUE fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style CRON fill:#2c3e50,stroke:#e94560,color:#fff

Figure 4: Full-stack blog platform on Cloudflare — $0/month cost for low traffic

// Complete project structure
// wrangler.toml
// ├── src/
// │   ├── index.ts          # Main router
// │   ├── middleware/auth.ts # JWT verification
// │   ├── routes/
// │   │   ├── posts.ts      # CRUD posts (D1)
// │   │   ├── upload.ts     # File upload (R2)
// │   │   └── feed.ts       # RSS/sitemap (KV cache)
// │   └── workers/
// │       └── consumer.ts   # Queue consumer

// Main router with Hono framework
import { Hono } from "hono";
import { jwt } from "hono/jwt";

const app = new Hono<{ Bindings: Env }>();

// Public routes
app.get("/api/posts", async (c) => {
  // Check KV cache first
  const cached = await c.env.CACHE.get("posts:list", "json");
  if (cached) return c.json(cached);

  const posts = await c.env.DB
    .prepare(`
      SELECT id, title, slug, excerpt, created_at, view_count
      FROM posts WHERE published = 1
      ORDER BY created_at DESC LIMIT 20
    `)
    .all();

  // Cache for 5 minutes
  await c.env.CACHE.put("posts:list", JSON.stringify(posts.results), {
    expirationTtl: 300,
  });

  return c.json(posts.results);
});

app.get("/api/posts/:slug", async (c) => {
  const slug = c.req.param("slug");
  const post = await c.env.DB
    .prepare("SELECT * FROM posts WHERE slug = ? AND published = 1")
    .bind(slug)
    .first();

  if (!post) return c.json({ error: "Not found" }, 404);

  // Increment view count asynchronously
  c.executionCtx.waitUntil(
    c.env.DB.prepare("UPDATE posts SET view_count = view_count + 1 WHERE slug = ?")
      .bind(slug).run()
  );

  return c.json(post);
});

// Protected routes — in production, load the secret from an env var /
// wrangler secret instead of hardcoding it
app.use("/api/admin/*", jwt({ secret: "your-secret" }));

app.post("/api/admin/posts", async (c) => {
  const body = await c.req.json();
  const result = await c.env.DB
    .prepare(`
      INSERT INTO posts (title, slug, body, excerpt, published)
      VALUES (?, ?, ?, ?, ?)
    `)
    .bind(body.title, body.slug, body.body, body.excerpt, body.published ? 1 : 0)
    .run();

  // Invalidate cache
  await c.env.CACHE.delete("posts:list");

  // Send notification via queue
  await c.env.TASK_QUEUE.send({
    type: "new-post",
    postId: result.meta.last_row_id,
    title: body.title,
  });

  return c.json({ id: result.meta.last_row_id }, 201);
});

export default app;

10. Deploy to Production — Zero to Live in 5 Minutes

# 1. Initialize project
npm create cloudflare@latest my-blog -- --template hono
cd my-blog

# 2. Create D1 database
npx wrangler d1 create blog-db
# → Add database_id to wrangler.toml

# 3. Run migration
npx wrangler d1 execute blog-db --file=./schema.sql

# 4. Create R2 bucket
npx wrangler r2 bucket create blog-files

# 5. Create KV namespace
npx wrangler kv namespace create CACHE

# 6. Deploy
npx wrangler deploy

# ✅ Live at: https://my-blog.your-subdomain.workers.dev
# → Attach custom domain in Cloudflare Dashboard

Free custom domain

If your domain is already managed on Cloudflare (free plan), you can attach a custom domain to Workers at no extra cost. Workers automatically receive an SSL certificate via Cloudflare Universal SSL. Combined with Pages for the frontend, you have a complete system with custom domain, SSL, CDN — all free.

11. Production Best Practices

11.1. Caching Strategy

// Multi-layer caching: Cache API → KV → D1
async function getPost(env: Env, slug: string, request: Request) {
  // Layer 1: Cloudflare Cache API (edge, ~0ms)
  const cacheKey = new Request(`https://cache/${slug}`, request);
  const cache = caches.default;
  let response = await cache.match(cacheKey);
  if (response) return response;

  // Layer 2: KV (global, <1ms)
  const kvData = await env.CACHE.get(`post:${slug}`, "json");
  if (kvData) {
    response = Response.json(kvData);
    response.headers.set("Cache-Control", "public, max-age=60");
    await cache.put(cacheKey, response.clone());
    return response;
  }

  // Layer 3: D1 (primary, <5ms)
  const post = await env.DB.prepare("SELECT * FROM posts WHERE slug = ?")
    .bind(slug).first();

  if (!post) return new Response("Not found", { status: 404 });

  // Populate both layers
  await env.CACHE.put(`post:${slug}`, JSON.stringify(post), { expirationTtl: 300 });
  response = Response.json(post);
  response.headers.set("Cache-Control", "public, max-age=60");
  await cache.put(cacheKey, response.clone());

  return response;
}

11.2. Error Handling & Monitoring

// Structured logging with wrangler tail
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const startTime = Date.now();
    const requestId = crypto.randomUUID();

    try {
      const response = await handleRequest(request, env);

      // Log each request
      console.log(JSON.stringify({
        requestId,
        method: request.method,
        url: request.url,
        status: response.status,
        durationMs: Date.now() - startTime,
      }));

      return response;
    } catch (error) {
      console.error(JSON.stringify({
        requestId,
        error: error instanceof Error ? error.message : "Unknown error",
        stack: error instanceof Error ? error.stack : undefined,
      }));

      return new Response("Internal Server Error", { status: 500 });
    }
  },
} satisfies ExportedHandler<Env>;

// Real-time monitoring: npx wrangler tail --format json

12. Overall Comparison with AWS / Azure / GCP

| Criteria | Cloudflare | AWS | Azure |
|---|---|---|---|
| Cold start | <1ms (V8 isolate) | 100ms-5s (container) | 200ms-10s (container) |
| Global distribution | Default (330+ PoPs) | Requires region selection + Lambda@Edge | Requires region selection |
| Egress cost | $0 | $0.09/GB | $0.087/GB |
| SQL database | D1 (SQLite, edge) | Aurora Serverless (MySQL/PG) | SQL Database serverless |
| Object storage | R2 ($0 egress) | S3 ($0.023/GB + egress) | Blob ($0.018/GB + egress) |
| Free tier | Very generous | 12-month limited | 12-month limited |
| Vendor lock-in | Medium (D1=SQLite, R2=S3 API) | High (proprietary APIs) | High (proprietary APIs) |
| Runtime | JavaScript/TypeScript, Python, Rust/WASM | Multi-language | Multi-language |

When NOT to choose Cloudflare Workers?

Workers run on V8 isolates so they don't support binary dependencies (native modules). If your application needs FFmpeg, ImageMagick, server-side Puppeteer, or large ML models (except Workers AI), you'll still need Lambda/Cloud Run. D1 also isn't suitable for write-heavy workloads or complex ACID transactions — in those cases, consider PlanetScale, Neon, or Supabase.

Conclusion

Cloudflare Workers isn't just a serverless platform — it's a complete edge-first ecosystem. With D1 for SQL, R2 for file storage, KV for caching, Durable Objects for stateful logic, and Queues for async processing, you have everything needed to build production applications without thinking about servers, regions, or scaling. The generous free tier covers most personal projects and early-stage startups. And with Dynamic Workers approaching GA, the line between serverless and stateful computing is fading — the future of edge computing is already here.
