Cloudflare Developer Platform — Build a Full-Stack App for Free on the Edge
Posted on: 4/20/2026 12:09:41 PM
Table of contents
- 1. Overview of the Cloudflare Developer Platform
- 2. V8 Isolates — Why Are Workers So Fast?
- 3. D1 — SQLite Database on the Edge
- 4. R2 — Object Storage Without Egress Fees
- 5. KV — Key-Value Store for Read-Heavy Workloads
- 6. Hono — A Framework Optimized for Workers
- 7. Full-Stack Architecture on Cloudflare
- 8. Free Tier Breakdown — How Much Do You Get?
- 9. Deploy in Practice: From Zero to Production
- 10. Best Practices and Common Pitfalls
- 11. Comparing Competitors: Vercel, Netlify, AWS
- 12. Drizzle ORM — Type-safe Queries for D1
- 13. Monitoring and Observability
- 14. When Should You Leave Cloudflare?
- Conclusion
1. Overview of the Cloudflare Developer Platform
In the cloud world, the big three — AWS, Azure, GCP — usually come up first. But if you're an indie developer, a small startup, or a team that wants to ship fast without worrying about upfront cost, there's a surprising option: Cloudflare Developer Platform. Not just a CDN or DNS provider, Cloudflare has been quietly assembling a complete compute + storage + database ecosystem — free at a level that's sufficient for small production workloads.
The core difference: instead of running code in a central data center (us-east-1, westeurope...), Cloudflare's entire stack runs across 300+ edge locations — your code literally executes just a few milliseconds from the user. This isn't static CDN caching; it's full compute at the edge.
2. V8 Isolates — Why Are Workers So Fast?
2.1. Containers vs V8 Isolates
AWS Lambda and Azure Functions run code inside containers: each request needs its own container (or warm container), each container consumes tens of MB of RAM and takes 100–1000ms to cold start. Cloudflare Workers takes a completely different path: V8 Isolates — the same V8 engine Chrome/Node.js uses, but instead of spawning a whole process, Workers create a lightweight isolate inside a single process.
An isolate has about 5MB of overhead compared to 50–300MB for a container. Cold starts are under 1ms vs 100–1000ms. This lets Cloudflare run thousands of Workers on a single machine — which makes deploying to 300+ locations economically sustainable.
```mermaid
graph LR
    subgraph Traditional["Lambda / Azure Functions"]
        C1["Container 1<br/>~100MB RAM<br/>Cold: 100-1000ms"]
        C2["Container 2<br/>~100MB RAM"]
        C3["Container 3<br/>~100MB RAM"]
    end
    subgraph Edge["Cloudflare Workers"]
        P[Shared V8 Process]
        I1["Isolate 1<br/>~5MB"]
        I2["Isolate 2<br/>~5MB"]
        I3["Isolate 3<br/>~5MB"]
        I4["Isolate N<br/>~5MB"]
        P --> I1
        P --> I2
        P --> I3
        P --> I4
    end
    style C1 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style C2 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style C3 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style P fill:#e94560,stroke:#fff,color:#fff
    style I1 fill:#f8f9fa,stroke:#4CAF50,color:#2c3e50
    style I2 fill:#f8f9fa,stroke:#4CAF50,color:#2c3e50
    style I3 fill:#f8f9fa,stroke:#4CAF50,color:#2c3e50
    style I4 fill:#f8f9fa,stroke:#4CAF50,color:#2c3e50
```
Container model (Lambda) vs V8 Isolates (Workers)
2.2. Execution Model
Workers uses an event-driven model built on the Web-standard Fetch API. Each request is a FetchEvent, and the handler returns a Response. No Express, no complex middleware chains — just pure Web Standards:
```js
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    if (url.pathname === '/api/users') {
      const users = await env.DB.prepare(
        'SELECT * FROM users LIMIT 10'
      ).all();
      return Response.json(users.results);
    }
    return new Response('Not Found', { status: 404 });
  }
};
```
env contains all bindings — connections to D1, R2, KV, Durable Objects. No connection strings, no separate SDKs. Cloudflare injects them via the runtime.
Tip: Workers also supports Node.js APIs
Since 2024, Workers has supported the `nodejs_compat` compatibility flag, which enables many Node.js built-in modules such as `crypto`, `buffer`, `stream`, and `util`. Most popular npm packages are now compatible.
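A minimal sketch of what that unlocks (the helper name is mine, not from any API): computing a content-based ETag with the Node built-in `node:crypto` module, which works inside a Worker once `nodejs_compat` is enabled.

```typescript
// Assumes compatibility_flags = ["nodejs_compat"] in wrangler.toml;
// node:crypto is a Node.js built-in, not a Workers-specific API.
import { createHash } from "node:crypto";

export function contentETag(body: string): string {
  // Hex SHA-256 of the response body, usable as a strong ETag value
  return createHash("sha256").update(body).digest("hex");
}
```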
3. D1 — SQLite Database on the Edge
3.1. SQLite, but distributed
D1 is Cloudflare's serverless SQL database built on SQLite. It sounds simple, but D1 solves SQLite's biggest limitation: single-writer, single-machine.
D1 has a primary instance (accepts writes) and automatically creates read replicas at the edge locations nearest to users. For read-only queries, requests are routed to the nearest replica — read latency can be under 1ms. Writes still go through the primary, but with the Session API you can guarantee read-after-write consistency when needed.
| Criterion | D1 (Cloudflare) | PostgreSQL (Neon/Supabase) | PlanetScale (MySQL) |
|---|---|---|---|
| Engine | SQLite | PostgreSQL | Vitess (MySQL) |
| Read latency (p50) | ~0.5ms | 1–3ms (same region) | 3–10ms |
| Write throughput | 500–2K ops/s | 10K–50K ops/s | 10K+ ops/s |
| Free tier storage | 5 GB | 512 MB (Neon) | 5 GB |
| Free reads/month | 5 billion row reads | 3 GB transfer | 1 billion row reads |
| Global replicas | Automatic (edge) | Manual (pay extra) | Yes (paid) |
| Backup / Recovery | Time Travel 30 days | Point-in-time 7 days | Branching |
| ORM support | Drizzle, Prisma | Every ORM | Prisma, Drizzle |
3.2. When to USE and When NOT to USE D1
D1 is a good fit for:
Blogs, landing pages, small-to-medium SaaS, read-heavy API backends, side projects, MVPs. Any app whose total data stays under 10 GB and writes stay below 2K ops/s.
D1 is NOT a good fit for:
Write-heavy applications (large e-commerce, real-time chat), data warehouses, or systems that need immediate strong global consistency. For those, consider PostgreSQL (Neon) or CockroachDB.
3.3. A Real Example: CRUD with D1
```toml
# wrangler.toml
[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "xxxx-xxxx-xxxx"
```

Apply the schema as a migration with `wrangler d1 execute my-app-db --file=./schema.sql`:

```sql
-- schema.sql
CREATE TABLE IF NOT EXISTS posts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  title TEXT NOT NULL,
  content TEXT NOT NULL,
  slug TEXT UNIQUE NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_posts_slug ON posts(slug);
CREATE INDEX idx_posts_created ON posts(created_at DESC);
```

```js
// Worker handlers
async function handleGetPost(slug, env) {
  const post = await env.DB.prepare(
    'SELECT * FROM posts WHERE slug = ?'
  ).bind(slug).first();
  if (!post) {
    return new Response('Not found', { status: 404 });
  }
  return Response.json(post, {
    headers: { 'Cache-Control': 'public, max-age=3600' }
  });
}

async function handleCreatePost(request, env) {
  const { title, content, slug } = await request.json();
  const result = await env.DB.prepare(
    'INSERT INTO posts (title, content, slug) VALUES (?, ?, ?)'
  ).bind(title, content, slug).run();
  return Response.json(
    { id: result.meta.last_row_id },
    { status: 201 }
  );
}
```
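One detail the create handler glosses over is where the slug comes from. A hypothetical helper (the name and rules are illustrative, not part of the handlers above) that derives a URL-safe slug from a title:

```typescript
// Illustrative slug derivation for the posts.slug column:
// lowercase, strip punctuation, turn whitespace into single hyphens.
export function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, "") // drop punctuation
    .replace(/\s+/g, "-")         // whitespace → hyphen
    .replace(/-+/g, "-");         // collapse hyphen runs
}
// slugify("Hello, World!") → "hello-world"
```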
4. R2 — Object Storage Without Egress Fees
4.1. The Core Value: Zero egress
If you've used AWS S3, you know the pain: egress fees. Uploads are free, but every GB users download costs $0.09. A website serving 1 TB of bandwidth per month? That's $90 just for egress.
R2 solves this decisively: zero egress, forever. You only pay for storage ($0.015/GB/month) and operations (Class A: $4.50/million, Class B: $0.36/million). The free tier includes 10 GB of storage, 1 million Class A operations, and 10 million Class B operations per month.
4.2. S3-Compatible API
R2 is fully compatible with the S3 API. Every tool, SDK, and library you use with S3 works with R2 — just change the endpoint. AWS SDK, boto3, rclone, Cyberduck... all compatible.
```js
// Upload a file from a Worker
async function handleUpload(request, env) {
  const formData = await request.formData();
  const file = formData.get('file');
  const key = `uploads/${Date.now()}-${file.name}`;
  await env.R2_BUCKET.put(key, file.stream(), {
    httpMetadata: {
      contentType: file.type,
    },
    customMetadata: {
      uploadedBy: 'api',
    },
  });
  return Response.json({ key, url: `/files/${key}` });
}

// Serve a file with caching
async function handleDownload(key, env) {
  const object = await env.R2_BUCKET.get(key);
  if (!object) {
    return new Response('Not found', { status: 404 });
  }
  const headers = new Headers();
  object.writeHttpMetadata(headers);
  headers.set('etag', object.httpEtag);
  headers.set('Cache-Control', 'public, max-age=86400');
  return new Response(object.body, { headers });
}
```
4.3. Infrequent Access class (2026)
In 2026, R2 added an Infrequent Access (IA) storage class: $0.01/GB/month (33% cheaper than Standard). A good fit for backups, archives, and log storage. However, IA has higher retrieval fees — only use it for rarely accessed data.
5. KV — Key-Value Store for Read-Heavy Workloads
Workers KV is an eventually consistent global key-value store. Data is replicated to every edge location, enabling extremely fast reads (<10ms globally). Trade-off: writes take up to 60 seconds to propagate globally.
| Criterion | KV | D1 | Durable Objects |
|---|---|---|---|
| Model | Key-Value | Relational (SQL) | Actor model |
| Consistency | Eventual (~60s) | Session-based | Strong |
| Read latency | <10ms global | ~0.5ms (replica) | ~20ms |
| Use case | Config, cache, feature flags | CRUD, queries | Collaboration, counters |
| Free tier | 100K reads/day | 5 billion reads/month | No free tier |
| Max value size | 25 MB | N/A (row-based) | 128 KB/key |
```js
// Config store pattern with KV
async function getFeatureFlags(env) {
  let flags = await env.KV.get('feature-flags', { type: 'json' });
  if (!flags) {
    // Fallback: read from D1
    const result = await env.DB.prepare(
      'SELECT key, value FROM feature_flags WHERE active = 1'
    ).all();
    flags = Object.fromEntries(
      result.results.map(r => [r.key, JSON.parse(r.value)])
    );
    // Cache into KV (TTL 5 minutes)
    await env.KV.put('feature-flags', JSON.stringify(flags), {
      expirationTtl: 300
    });
  }
  return flags;
}
```
6. Hono — A Framework Optimized for Workers
6.1. Why not just use Express?
Express was designed for Node.js: it depends on `http.createServer`, its own req/res objects, and Node.js streams. Workers doesn't run Node.js; it runs on V8 Isolates with Web Standards (the Fetch API). Hono is a framework designed from the ground up for edge runtimes.
6.2. Full-stack API with Hono + D1 + R2
```ts
import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { cache } from 'hono/cache';
import { jwt } from 'hono/jwt';

type Bindings = {
  DB: D1Database;
  R2_BUCKET: R2Bucket;
  KV: KVNamespace;
  JWT_SECRET: string;
};

const app = new Hono<{ Bindings: Bindings }>();

// Middleware
app.use('/api/*', cors());
// Read the secret from the binding at request time — not a hard-coded string
app.use('/api/admin/*', (c, next) => jwt({ secret: c.env.JWT_SECRET })(c, next));

// Cache static assets
app.get('/files/*',
  cache({ cacheName: 'assets', cacheControl: 'max-age=86400' })
);

// Posts CRUD
app.get('/api/posts', async (c) => {
  const page = Number(c.req.query('page') || 1);
  const limit = 20;
  const offset = (page - 1) * limit;
  const { results, meta } = await c.env.DB.prepare(`
    SELECT id, title, slug, created_at
    FROM posts
    ORDER BY created_at DESC
    LIMIT ? OFFSET ?
  `).bind(limit, offset).all();
  return c.json({ posts: results, meta });
});

app.get('/api/posts/:slug', async (c) => {
  const slug = c.req.param('slug');
  const post = await c.env.DB.prepare(
    'SELECT * FROM posts WHERE slug = ?'
  ).bind(slug).first();
  if (!post) return c.notFound();
  return c.json(post);
});

app.post('/api/posts', async (c) => {
  const { title, content, slug } = await c.req.json();
  const result = await c.env.DB.prepare(`
    INSERT INTO posts (title, content, slug) VALUES (?, ?, ?)
  `).bind(title, content, slug).run();
  return c.json({ id: result.meta.last_row_id }, 201);
});

// File upload
app.post('/api/upload', async (c) => {
  const body = await c.req.parseBody();
  const file = body['file'] as File;
  const key = `uploads/${crypto.randomUUID()}-${file.name}`;
  await c.env.R2_BUCKET.put(key, file.stream(), {
    httpMetadata: { contentType: file.type },
  });
  return c.json({ key });
});

export default app;
```
Why does Hono beat Express on the edge?
Hono uses the native Fetch API (Request/Response) instead of the Node.js http module. Bundle size is just 14KB vs Express's 200KB+. The router uses a Trie-based algorithm — benchmarks show routing is 4× faster than Express. Most importantly: Hono runs on every runtime — write once, deploy everywhere (Workers, Deno, Bun, Lambda).
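To make the trie idea concrete, here is a toy router sketch (illustrative only — this is not Hono's actual implementation): static path segments become map entries, `:param` segments become a wildcard child, so lookup cost scales with path depth rather than route count.

```typescript
type Handler = string; // stand-in for a real request handler

class RouteNode {
  children = new Map<string, RouteNode>();
  paramChild?: { name: string; node: RouteNode };
  handler?: Handler;
}

class TrieRouter {
  private root = new RouteNode();

  add(path: string, handler: Handler): void {
    let node = this.root;
    for (const seg of path.split("/").filter(Boolean)) {
      if (seg.startsWith(":")) {
        node.paramChild ??= { name: seg.slice(1), node: new RouteNode() };
        node = node.paramChild.node;
      } else {
        if (!node.children.has(seg)) node.children.set(seg, new RouteNode());
        node = node.children.get(seg)!;
      }
    }
    node.handler = handler;
  }

  // O(number of path segments), no matter how many routes are registered
  match(path: string): { handler: Handler; params: Record<string, string> } | null {
    let node = this.root;
    const params: Record<string, string> = {};
    for (const seg of path.split("/").filter(Boolean)) {
      const staticChild = node.children.get(seg);
      if (staticChild) { node = staticChild; continue; }
      if (node.paramChild) {
        params[node.paramChild.name] = seg;
        node = node.paramChild.node;
        continue;
      }
      return null;
    }
    return node.handler ? { handler: node.handler, params } : null;
  }
}
```

With routes `/api/posts` and `/api/posts/:slug` registered, matching `/api/posts/edge-computing` walks three nodes and returns the `:slug` capture; a linear router would re-test every registered pattern instead.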
7. Full-Stack Architecture on Cloudflare
```mermaid
graph TB
    User((User)) -->|HTTPS| CF["Cloudflare Edge<br/>300+ PoP"]
    CF -->|Static files| Pages["Pages<br/>Vue/React SPA"]
    CF -->|/api/*| Worker["Worker<br/>Hono API"]
    Worker -->|SQL queries| D1[("D1 Database<br/>SQLite")]
    Worker -->|File storage| R2[("R2 Bucket<br/>S3-compatible")]
    Worker -->|Cache/Config| KV[("KV Store<br/>Global cache")]
    Worker -->|Real-time| DO["Durable Objects<br/>WebSocket"]
    D1 -.->|Read replicas| Edge2[Edge Replica]
    KV -.->|Replicated| Edge3["All 300+ PoP"]
    style User fill:#e94560,stroke:#fff,color:#fff
    style CF fill:#2c3e50,stroke:#fff,color:#fff
    style Pages fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style Worker fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style D1 fill:#16213e,stroke:#fff,color:#fff
    style R2 fill:#16213e,stroke:#fff,color:#fff
    style KV fill:#16213e,stroke:#fff,color:#fff
    style DO fill:#16213e,stroke:#fff,color:#fff
    style Edge2 fill:#4CAF50,stroke:#fff,color:#fff
    style Edge3 fill:#4CAF50,stroke:#fff,color:#fff
```
A full-stack application architecture on Cloudflare Developer Platform
7.1. Recommended Project Structure
```
my-app/
├── src/
│   ├── index.ts          # Hono app entry
│   ├── routes/
│   │   ├── posts.ts      # /api/posts
│   │   ├── auth.ts       # /api/auth
│   │   └── upload.ts     # /api/upload
│   ├── middleware/
│   │   ├── auth.ts       # JWT verification
│   │   └── rateLimit.ts  # Rate limiting via KV
│   └── lib/
│       ├── db.ts         # D1 query helpers
│       └── storage.ts    # R2 operations
├── frontend/             # Vue/React SPA
│   ├── src/
│   └── dist/             # Build output → Pages
├── migrations/
│   ├── 0001_init.sql
│   └── 0002_add_tags.sql
├── wrangler.toml         # Bindings config
└── package.json
```
7.2. Wrangler config — Connecting Everything
```toml
# wrangler.toml
name = "my-app"
main = "src/index.ts"
compatibility_date = "2026-04-01"
compatibility_flags = ["nodejs_compat"]

# Static assets (replaces Pages)
[assets]
directory = "./frontend/dist"

# D1 Database
[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "your-d1-id"

# R2 Bucket
[[r2_buckets]]
binding = "R2_BUCKET"
bucket_name = "my-app-files"

# KV Namespace
[[kv_namespaces]]
binding = "KV"
id = "your-kv-id"

# Environment variables
[vars]
ENVIRONMENT = "production"

# Secrets (set via `wrangler secret put`)
# JWT_SECRET, API_KEY, etc.
```
8. Free Tier Breakdown — How Much Do You Get?
| Service | Free Tier | Paid (Workers Paid $5/month) |
|---|---|---|
| Workers | 100K requests/day, 10ms CPU/request | 10M requests/month, 30s CPU/request |
| D1 | 5 GB storage, 5B reads, 100K writes/day | 5 GB free + $0.75/GB, 25B reads, 50M writes |
| R2 | 10 GB storage, 1M Class A, 10M Class B | $0.015/GB/month, zero egress |
| KV | 100K reads/day, 1K writes/day | 10M reads, 1M writes/month |
| Pages | 500 builds/month, unlimited bandwidth | 5000 builds/month |
| Durable Objects | None | $0.15/million requests |
Real numbers: How far does the free tier go?
100K requests/day = ~3 million/month. An average personal blog sees 10K–50K pageviews/month. A small SaaS with 500 DAU and 20 API calls per user per day = 10K requests/day. The free tier easily covers the 0→1 phase and comfortably supports a few thousand DAUs.
9. Deploy in Practice: From Zero to Production
9.1. Initial Setup
```bash
# Install Wrangler CLI
npm install -g wrangler

# Login
wrangler login

# Create a new project with the Hono template
npm create hono@latest my-app
# Choose: cloudflare-workers
cd my-app

# Create D1 database
wrangler d1 create my-app-db
# → Copy database_id into wrangler.toml

# Create R2 bucket
wrangler r2 bucket create my-app-files

# Create KV namespace
wrangler kv namespace create KV
# → Copy id into wrangler.toml
```
9.2. Migrations and Seed Data
```bash
# Run migrations
wrangler d1 execute my-app-db --file=./migrations/0001_init.sql

# Seed data (local)
wrangler d1 execute my-app-db --local --file=./seed.sql

# Dev server (local)
wrangler dev

# Deploy production
wrangler deploy
```
9.3. Custom Domain
If the domain is already on Cloudflare DNS, add it via the dashboard (Workers → Settings → Domains → Add custom domain) or in wrangler.toml:

```toml
routes = [
  { pattern = "api.myapp.com/*", zone_name = "myapp.com" }
]
```
10. Best Practices and Common Pitfalls
10.1. Performance
- Batch D1 queries: D1 supports `db.batch()` — combine multiple queries into a single round-trip instead of running them sequentially. With 5 queries, batching is 3–5× faster.
- Cache with KV or the Cache API: Don't query D1 on every repeated request. Use `caches.default` or KV to cache common responses.
- R2 + Cache: Set `Cache-Control` headers when serving R2 objects — the Cloudflare CDN caches at the edge, no need to implement caching yourself.
- Bundle size: Workers caps at 1MB (compressed). Avoid importing large libraries, use tree-shaking, and pick lightweight alternatives (date-fns over moment, Hono over Express).
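A sketch of why batching helps. `FakeD1` below is an in-memory stand-in for the real `env.DB` binding (mirroring the `prepare`/`bind`/`batch` shape, but counting round-trips so the difference is visible without a Cloudflare runtime):

```typescript
class FakeStmt {
  params: unknown[] = [];
  constructor(private db: FakeD1, public sql: string) {}
  bind(...p: unknown[]): this { this.params = p; return this; }
  run(): void { this.db.roundTrips++; } // each sequential run = one round-trip
}

class FakeD1 {
  roundTrips = 0;
  prepare(sql: string): FakeStmt { return new FakeStmt(this, sql); }
  batch(stmts: FakeStmt[]): { success: boolean }[] {
    this.roundTrips++; // all statements share one round-trip
    return stmts.map(() => ({ success: true }));
  }
}

// Sequential inserts: 3 round-trips
const seqDb = new FakeD1();
for (const name of ["a", "b", "c"]) {
  seqDb.prepare("INSERT INTO tags (name) VALUES (?)").bind(name).run();
}

// Batched inserts: 1 round-trip for the same 3 statements
const batchDb = new FakeD1();
batchDb.batch(["a", "b", "c"].map(
  (name) => batchDb.prepare("INSERT INTO tags (name) VALUES (?)").bind(name)
));
```

With each round-trip costing network latency to the primary, collapsing N statements into one is where the 3–5× figure comes from.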
10.2. Security
- Secrets: Use `wrangler secret put` for API keys and JWT secrets — DO NOT put them in wrangler.toml or code.
- CORS: Hono has a built-in `cors()` middleware, but whitelist specific domains instead of `origin: '*'` in production.
- Rate limiting: Cloudflare has free Rate Limiting rules (10K requests/month on the free plan), or you can implement it yourself via KV/Durable Objects.
- D1 parameterized queries: Always use `.bind()` — the D1 API doesn't allow direct string interpolation, so SQL injection is largely eliminated by design.
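A sketch of the KV-based rate limiter idea. `MemKV` is a synchronous in-memory stand-in for the KV binding (the real binding is async and eventually consistent, so counts are approximate across locations — for exact limits, use Durable Objects):

```typescript
// In-memory stand-in mirroring KV's get / put-with-TTL shape
class MemKV {
  private store = new Map<string, { value: string; expiresAt: number }>();
  constructor(private now: () => number = Date.now) {}
  get(key: string): string | null {
    const e = this.store.get(key);
    return e && e.expiresAt > this.now() ? e.value : null;
  }
  put(key: string, value: string, opts: { expirationTtl: number }): void {
    this.store.set(key, { value, expiresAt: this.now() + opts.expirationTtl * 1000 });
  }
}

// Fixed-window limiter: one counter key per client per time window;
// the TTL lets expired windows clean themselves up.
function allowRequest(kv: MemKV, clientId: string, limit = 100, windowSec = 60): boolean {
  const window = Math.floor(Date.now() / (windowSec * 1000));
  const key = `rl:${clientId}:${window}`;
  const count = Number(kv.get(key) ?? "0");
  if (count >= limit) return false;
  kv.put(key, String(count + 1), { expirationTtl: windowSec });
  return true;
}
```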
10.3. Important Limits
Key limits
- CPU time: the free tier allows just 10ms of CPU per request (not wall-clock time). Heavy tasks (image processing, PDF generation) will exceed it.
- D1 size: max 10 GB/database — enough for most small-to-medium apps, but beyond that you'll need sharding or Postgres.
- Workers memory: 128 MB/isolate — don't load large datasets into memory.
- KV consistency: writes take up to 60s to propagate — don't use KV for data that needs real-time sync.
11. Comparing Competitors: Vercel, Netlify, AWS
| Criterion | Cloudflare | Vercel | Netlify | AWS (Lambda+S3+RDS) |
|---|---|---|---|---|
| Edge compute | 300+ PoP, V8 Isolates | Edge Functions (limited) | Edge Functions | Lambda@Edge (CloudFront) |
| Database | D1 (SQLite), built-in | None (use Neon/Supabase) | None | RDS/Aurora (not edge) |
| Object storage | R2 (zero egress) | Blob Storage ($0.03/GB) | Blobs (limited) | S3 ($0.09/GB egress) |
| Cold start | <1ms | ~50–200ms | ~100ms | 100–1000ms |
| Free tier value | Very generous | Good (hobby plan) | Good | 12-month free trial |
| Vendor lock-in | Medium (Hono portable) | High (Next.js coupling) | Low | High |
12. Drizzle ORM — Type-safe Queries for D1
Writing raw SQL for D1 works fine, but for larger apps, Drizzle ORM is the best choice for Workers:
```ts
// schema.ts
import { sqliteTable, text, integer } from 'drizzle-orm/sqlite-core';
import { sql } from 'drizzle-orm';

export const posts = sqliteTable('posts', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  title: text('title').notNull(),
  content: text('content').notNull(),
  slug: text('slug').unique().notNull(),
  // sql`` is required here — a plain string would store the literal
  // text 'CURRENT_TIMESTAMP' instead of the SQL expression
  createdAt: text('created_at').default(sql`CURRENT_TIMESTAMP`),
});

export const tags = sqliteTable('tags', {
  id: integer('id').primaryKey({ autoIncrement: true }),
  name: text('name').unique().notNull(),
});
```

```ts
// Type-safe query
import { drizzle } from 'drizzle-orm/d1';
import { eq, desc } from 'drizzle-orm';

const db = drizzle(env.DB, { schema: { posts, tags } });

const recentPosts = await db
  .select()
  .from(posts)
  .orderBy(desc(posts.createdAt))
  .limit(10);
// Full type inference — IDE autocomplete for every column
```
Why does Drizzle beat Prisma on Workers?
The Prisma Client requires a binary engine (~10MB) — too large for the Workers 1MB limit. Drizzle is pure TypeScript, zero runtime dependencies, ~50KB bundle. Drizzle Kit also generates D1-compatible migration SQL immediately via drizzle-kit generate.
13. Monitoring and Observability
Cloudflare provides basic observability for free through the Dashboard: request count, error rate, CPU time, bandwidth. For more advanced needs:
- Workers Logpush: Streams logs in real time to an R2 bucket or an external service (Datadog, Grafana Cloud). Free on the Workers Paid plan.
- Tail Workers: A special Worker that receives events from other Workers — useful for building custom logging pipelines without affecting main Worker latency.
- Workers Analytics Engine: A write-optimized analytics database at the edge. Free tier: 100K events/day.
```js
// Tail Worker pattern — async logging
export default {
  async tail(events) {
    for (const event of events) {
      if (event.outcome === 'exception') {
        // Log errors to R2 or an external service
        await logToR2(event);
      }
    }
  }
};
```
14. When Should You Leave Cloudflare?
Cloudflare Developer Platform is powerful for a specific segment. But it's not a silver bullet:
- Write-heavy workloads: D1 caps at 2K writes/s. If your app needs 10K+ writes/s (real-time bidding, IoT telemetry), look at PostgreSQL or ClickHouse.
- Long-running tasks: Workers are capped at 30s of CPU time (paid) or 10ms of CPU time (free). Heavy background jobs need a queue + consumer pattern, or a traditional server.
- Large binary processing: Edge image/video processing is capped at 128MB of memory. Use Workers AI for inference, or offload to a dedicated service.
- Complex relational queries: D1 is SQLite — no stored procedures, complex triggers, or materialized views. PostgreSQL remains superior for enterprise-grade SQL.
A realistic roadmap
Start with the Cloudflare free tier for an MVP → validate product-market fit → when you need to scale writes or run complex queries, split the database to Neon/Supabase (PostgreSQL) while keeping Workers + R2 for compute + storage. A Hono app is portable — swapping runtimes just means changing the adapter, not rewriting code.
Conclusion
Cloudflare Developer Platform is one of the best-kept secrets in web development. With a generous free tier, an edge-first architecture for ultra-low latency, and a tightly integrated ecosystem (Workers + D1 + R2 + KV), it's an ideal platform for indie developers, startups, and side projects that want to ship fast without worrying about cost.
Combined with the Hono framework and Drizzle ORM, you get a full-stack type-safe, portable, high-performance setup — write once, deploy to 300+ edge locations, serve users worldwide with latency under 50ms. And all of it starts at $0.
References:
Cloudflare Workers Documentation ·
Cloudflare D1 Documentation ·
Cloudflare R2 Documentation ·
Hono.js — Getting Started with Cloudflare Workers ·
Cloudflare Blog — Full-Stack Development on Workers ·
Cloudflare Blog — D1: We Turned It Up to 11 ·
Drizzle ORM — Cloudflare D1 Guide
Disclaimer: The opinions expressed in this blog are solely my own and do not reflect the views or opinions of my employer or any affiliated organizations. The content provided is for informational and educational purposes only and should not be taken as professional advice. While I strive to provide accurate and up-to-date information, I make no warranties or guarantees about the completeness, reliability, or accuracy of the content. Readers are encouraged to verify the information and seek independent advice as needed. I disclaim any liability for decisions or actions taken based on the content of this blog.