Cloudflare R2 - Zero Egress Object Storage for Developers
Posted on: 4/26/2026 4:14:32 AM
Table of contents
- 1. What is Cloudflare R2 and Why Should Developers Care?
- 2. R2 Technical Architecture
- 3. Cost Comparison: R2 vs S3 vs Azure Blob vs GCS
- 4. R2 + Workers: The Power of Edge Computing
- 5. Event Notifications — Building Automated Data Pipelines
- 6. Migrating from S3 to R2
- 7. Real-World Use Cases for R2
- 8. Public Buckets and Custom Domains
- 9. CORS, Multipart Upload and Advanced Features
- 10. Best Practices When Using R2
- 11. When Should You Choose R2?
- 12. Conclusion
1. What is Cloudflare R2 and Why Should Developers Care?
Cloudflare R2 is an object storage service built on a simple but powerful philosophy: zero egress fees — you pay nothing when downloading data. This might sound minor, but it is the single biggest differentiator compared to AWS S3, Azure Blob Storage, or Google Cloud Storage, where egress fees can consume 50-80% of your total storage bill.
Imagine running a CDN serving 10TB of data per month. With AWS S3, egress alone runs roughly $920/month (about 10,240 GB at $0.09/GB). With R2? $0. That is not a small saving — it fundamentally changes how you design system architecture.
Why the Name R2?
Cloudflare named it R2 as wordplay — "R2" comes before "S3" (R before S, 2 before 3). It is a clear statement: R2 is designed to replace S3 for workloads where egress fees are the biggest pain point.
2. R2 Technical Architecture
R2 is not simply a "cheaper S3 clone." Its architecture leverages Cloudflare's massive edge network — over 330 data centers worldwide — to deliver a fundamentally different experience.
graph TD
A["Client Request"] --> B["Cloudflare Edge Network<br/>330+ PoPs"]
B --> C["R2 Storage Layer"]
B --> D["Workers Runtime"]
D -->|"Binding"| C
C --> E["Standard Storage<br/>$0.015/GB"]
C --> F["Infrequent Access<br/>$0.01/GB"]
D --> G["Event Notifications"]
G --> H["Queues Consumer"]
style A fill:#e94560,stroke:#fff,color:#fff
style B fill:#0f3460,stroke:#fff,color:#fff
style C fill:#4CAF50,stroke:#fff,color:#fff
style D fill:#ff9800,stroke:#fff,color:#fff
Figure 1: Cloudflare R2 Architecture — deep integration with Edge Network and Workers Runtime
2.1. S3 API Compatibility
R2 is S3 API compatible, meaning most of your existing tools and SDKs work immediately without code changes. AWS CLI, Boto3, MinIO Client, rclone — all work with R2 by simply changing the endpoint.
# Point the AWS CLI at R2 with --endpoint-url (credentials come from an R2 API token)
# Upload file — identical S3 syntax
aws s3 cp ./backup.tar.gz s3://my-bucket/backups/ --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com
# List objects
aws s3 ls s3://my-bucket/ --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com
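The endpoint is the only thing that changes for SDKs, too. A tiny helper makes that concrete (the SDK wiring in the comment is a sketch for the AWS SDK for JavaScript v3, not verified here):

```typescript
// Build the S3-compatible endpoint for an R2 account.
function r2Endpoint(accountId: string): string {
  return `https://${accountId}.r2.cloudflarestorage.com`;
}

// Sketch of how it would plug into the AWS SDK for JavaScript v3:
//   const s3 = new S3Client({
//     region: "auto",                     // R2 accepts "auto" as the region
//     endpoint: r2Endpoint(accountId),
//     credentials: { accessKeyId, secretAccessKey }, // from an R2 API token
//   });
```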
2.2. Two Storage Tiers
R2 offers two storage classes suited for different access patterns:
| Feature | Standard | Infrequent Access |
|---|---|---|
| Storage cost | $0.015/GB/month | $0.01/GB/month |
| Class A (write) | $4.50/million requests | $9.00/million requests |
| Class B (read) | $0.36/million requests | $0.90/million requests |
| Data retrieval | Free | $0.01/GB |
| Egress | Free | Free |
| Min storage duration | None | 30 days |
| Use case | Hot data, CDN origin | Logs, backups, archives |
Automatic Lifecycle Policies
You can configure lifecycle rules to automatically transition objects from Standard to Infrequent Access after a specified period. Example: logs older than 30 days auto-transition to IA, saving 33% on storage costs.
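A quick sanity check of that 33% figure, using the per-GB rates from the table above (request and retrieval costs ignored):

```typescript
// Rates from the pricing table above ($/GB/month).
const STANDARD_RATE = 0.015;
const IA_RATE = 0.01;

// Percentage saved on storage by moving a given volume from Standard to IA.
function monthlySavingsPct(gb: number): number {
  const standard = gb * STANDARD_RATE;
  const ia = gb * IA_RATE;
  return ((standard - ia) / standard) * 100;
}
```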
3. Cost Comparison: R2 vs S3 vs Azure Blob vs GCS
This is the part that surprises most teams. Here are estimated monthly costs for a common workload: 1TB storage + 5TB egress/month.
| Provider | Storage | Egress (5TB) | Operations (est.) | Total/month |
|---|---|---|---|---|
| Cloudflare R2 | $15 | $0 | ~$5 | ~$20 |
| AWS S3 | $23 | $460 | ~$5 | ~$488 |
| Azure Blob | $18 | $435 | ~$5 | ~$458 |
| Google Cloud Storage | $20 | $600 | ~$5 | ~$625 |
graph LR
subgraph "Monthly Cost - 1TB + 5TB egress"
A["R2: ~$20"] --- B["S3: ~$488"]
B --- C["Azure: ~$458"]
C --- D["GCS: ~$625"]
end
style A fill:#4CAF50,stroke:#fff,color:#fff
style B fill:#ff9800,stroke:#fff,color:#fff
style C fill:#ff9800,stroke:#fff,color:#fff
style D fill:#e94560,stroke:#fff,color:#fff
Figure 2: Cost comparison — R2 is roughly 20-30x cheaper than the Big 3 thanks to zero egress
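The table reduces to a simple linear model. The sketch below reproduces the R2 and S3 rows (rates taken from the table; real bills differ slightly because egress pricing is tiered and because 1TB may be counted as 1,000 or 1,024 GB):

```typescript
interface Rates {
  storagePerGB: number; // $/GB/month
  egressPerGB: number;  // $/GB
  opsFlat: number;      // flat estimate for request costs, as in the table
}

function monthlyTotal(storageGB: number, egressGB: number, r: Rates): number {
  return storageGB * r.storagePerGB + egressGB * r.egressPerGB + r.opsFlat;
}

const RATES: Record<string, Rates> = {
  r2: { storagePerGB: 0.015, egressPerGB: 0,    opsFlat: 5 },
  s3: { storagePerGB: 0.023, egressPerGB: 0.09, opsFlat: 5 },
};

// monthlyTotal(1000, 5000, RATES.r2) ≈ $20
// monthlyTotal(1000, 5000, RATES.s3) ≈ $478 (the table's ~$488 counts 1TB as 1,024 GB)
```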
When R2 is NOT the Best Choice
R2 currently lacks object versioning, object lock, and WORM compliance. If you need immutable backups for regulatory compliance (HIPAA, SEC 17a-4), S3 with Object Lock remains the safer choice. R2 also has no equivalent of S3 Glacier for cold archival at $0.004/GB or below.
4. R2 + Workers: The Power of Edge Computing
R2's biggest differentiator compared to other object storage services: native integration with Cloudflare Workers. You can process, transform, and serve objects directly at the edge — no separate server, no Lambda@Edge required.
4.1. Wrangler Configuration
# wrangler.toml
name = "r2-image-service"
main = "src/index.ts"
compatibility_date = "2026-04-01"
[[r2_buckets]]
binding = "MEDIA_BUCKET"
bucket_name = "media-production"
preview_bucket_name = "media-preview"
4.2. Worker for Upload and Serve
interface Env {
  MEDIA_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);
    switch (request.method) {
      case "PUT": {
        const object = await env.MEDIA_BUCKET.put(key, request.body, {
          httpMetadata: {
            contentType: request.headers.get("content-type") || "application/octet-stream",
          },
          customMetadata: {
            uploadedBy: request.headers.get("x-user-id") || "anonymous",
            uploadedAt: new Date().toISOString(),
          },
        });
        return new Response(JSON.stringify({
          key: object.key,
          size: object.size,
          etag: object.etag,
        }), { status: 201 });
      }
      case "GET": {
        const object = await env.MEDIA_BUCKET.get(key);
        if (!object) return new Response("Not Found", { status: 404 });
        const headers = new Headers();
        object.writeHttpMetadata(headers);
        headers.set("etag", object.httpEtag);
        headers.set("cache-control", "public, max-age=31536000, immutable");
        return new Response(object.body, { headers });
      }
      case "DELETE": {
        await env.MEDIA_BUCKET.delete(key);
        return new Response(null, { status: 204 });
      }
      default:
        return new Response("Method Not Allowed", { status: 405 });
    }
  },
};
Cache Everything at the Edge
When serving objects via Workers, you can leverage Cloudflare's Cache API to cache objects at 330+ edge locations. Combined with R2's zero egress, this is one of the lowest-cost CDN setups available today.
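A couple of pure helpers sketch the caching decisions such a Worker would make (the fingerprint regex and TTLs below are illustrative choices, not Cloudflare defaults):

```typescript
// Normalize the cache key so query strings don't fragment the edge cache.
function cacheKeyUrl(rawUrl: string): string {
  const u = new URL(rawUrl);
  u.search = "";
  u.hash = "";
  return u.toString();
}

// Fingerprinted assets (e.g. app.3f9d2c1a.js) can be cached "forever";
// everything else gets a shorter TTL.
function cacheControlFor(key: string): string {
  const fingerprinted = /\.[0-9a-f]{8,}\./.test(key);
  return fingerprinted
    ? "public, max-age=31536000, immutable"
    : "public, max-age=3600";
}
```

In a Worker, `cacheKeyUrl(request.url)` would feed `caches.default.match()` before falling back to the R2 binding.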
5. Event Notifications — Building Automated Data Pipelines
R2 Event Notifications allow you to automatically trigger Workers when bucket data changes — similar to S3 Event Notifications but natively integrated with Cloudflare Queues.
sequenceDiagram
participant U as User
participant W as Upload Worker
participant R2 as R2 Bucket
participant Q as Cloudflare Queue
participant P as Processing Worker
U->>W: Upload image
W->>R2: PUT object
R2->>Q: Event notification
Q->>P: Consume message
P->>R2: GET original
Note over P: Resize, optimize, generate thumbnails
P->>R2: PUT processed versions
Figure 3: Event-driven pipeline — automatic image processing on R2 upload
5.1. Event Notifications Configuration
# wrangler.toml
[[r2_buckets]]
binding = "MEDIA_BUCKET"
bucket_name = "media-production"
[[queues.consumers]]
queue = "r2-events"
max_batch_size = 10
max_batch_timeout = 30
The consumer Worker then handles batches of events from the queue:
interface Env {
  MEDIA_BUCKET: R2Bucket;
}

export default {
  async queue(batch: MessageBatch<R2EventNotification>, env: Env) {
    for (const message of batch.messages) {
      const event = message.body;
      if (event.action === "PutObject" && event.object.key.startsWith("uploads/")) {
        const original = await env.MEDIA_BUCKET.get(event.object.key);
        if (!original) continue;
        const thumbnailKey = event.object.key.replace("uploads/", "thumbnails/");
        console.log(`Processed: ${event.object.key} -> ${thumbnailKey}`);
      }
      message.ack();
    }
  },
};
6. Migrating from S3 to R2
Cloudflare provides two free migration tools:
6.1. Super Slurper — Bulk Migration
Super Slurper copies all objects from S3 (or GCS) to R2. Suitable for one-time migrations or scheduled syncs.
graph LR
A["AWS S3 Bucket"] -->|"Super Slurper"| B["Cloudflare R2"]
C["Google Cloud Storage"] -->|"Super Slurper"| B
D["S3-Compatible"] -->|"Super Slurper"| B
style B fill:#4CAF50,stroke:#fff,color:#fff
style A fill:#ff9800,stroke:#fff,color:#fff
Figure 4: Super Slurper supports migration from multiple sources
6.2. Sippy — Incremental Migration
Sippy is the smarter solution: it acts as a proxy layer. When a client requests an object:
- R2 checks if the object exists in the bucket
- If yes — serve from R2
- If no — fetch from S3 source, store in R2, then serve
Result: you migrate gradually with zero downtime; objects are copied on demand as real traffic touches them. After a while, most hot data already lives in R2.
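Conceptually, Sippy's read path is a read-through cache. Here is a toy model of the behavior described above (my sketch, not Sippy's actual implementation; the fetcher/writer signatures are invented for the example):

```typescript
type Fetcher = (key: string) => Promise<string | null>;
type Writer = (key: string, value: string) => Promise<void>;

// Serve from R2 if present; otherwise pull from the S3 source,
// persist the object to R2, then serve it.
async function readThrough(
  key: string,
  r2Get: Fetcher,
  s3Get: Fetcher,
  r2Put: Writer,
): Promise<string | null> {
  const hit = await r2Get(key);
  if (hit !== null) return hit;     // already migrated: no S3 egress
  const origin = await s3Get(key);
  if (origin === null) return null; // missing at the source too
  await r2Put(key, origin);         // copy on first access
  return origin;
}
```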
Migration Cost = $0
Both Super Slurper and Sippy are free — you only pay for operations (PUT) when objects are written to R2. There is no separate transfer fee for migration tools.
7. Real-World Use Cases for R2
7.1. CDN Origin for Static Assets
Use R2 as origin server for website assets (images, CSS, JS). Combined with Cloudflare CDN cache, you get a complete zero-cost egress solution.
7.2. Backup and Log Storage
Database backups, application logs, audit trails — use the Infrequent Access tier at just $0.01/GB, about 56% cheaper than S3 Standard ($0.023/GB).
7.3. AI/ML Dataset Storage
Training datasets are often very large and need to be downloaded multiple times. Zero egress makes R2 the ideal choice for AI teams with limited budgets.
7.4. Multi-region Content Delivery
R2 + Workers enables serving content with custom logic at the edge: A/B testing, personalization, geo-routing — all without traditional origin servers.
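As a toy example of geo-routing, a Worker can pick an object variant from the country code Cloudflare attaches to each request (`request.cf?.country`); the EU list and key layout below are invented for illustration:

```typescript
// Invented example: country codes that should receive the EU variant.
const EU = new Set(["DE", "FR", "NL", "PL", "ES", "IT"]);

// Map a base object key to a region-specific key.
function variantKeyFor(baseKey: string, country?: string): string {
  return country && EU.has(country) ? `eu/${baseKey}` : `global/${baseKey}`;
}
```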
7.5. Presigned URLs for Secure Uploads
// Requires @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

interface Env {
  ACCOUNT_ID: string;
  R2_ACCESS_KEY: string;
  R2_SECRET_KEY: string;
  BUCKET_NAME: string;
}

async function generatePresignedUrl(env: Env, key: string): Promise<string> {
  const s3Client = new S3Client({
    region: "auto", // R2 accepts "auto" as the region
    endpoint: `https://${env.ACCOUNT_ID}.r2.cloudflarestorage.com`,
    credentials: {
      accessKeyId: env.R2_ACCESS_KEY,
      secretAccessKey: env.R2_SECRET_KEY,
    },
  });
  const command = new PutObjectCommand({
    Bucket: env.BUCKET_NAME,
    Key: key,
    ContentType: "image/jpeg",
  });
  // The URL is valid for one hour; the client can PUT directly to R2 with it
  return await getSignedUrl(s3Client, command, { expiresIn: 3600 });
}
8. Public Buckets and Custom Domains
R2 supports public bucket access — turning your bucket into a static file server accessible directly via URL. Combined with a custom domain, you have a complete CDN origin.
# Enable the managed r2.dev public URL via Wrangler
npx wrangler r2 bucket dev-url enable my-bucket
# Access objects via public URL
# https://pub-{hash}.r2.dev/{object-key}
# Or map a custom domain via Cloudflare Dashboard
# https://assets.yourdomain.com/{object-key}
Public Bucket Security
A public bucket means anyone can read all objects. Only use it for truly public static assets. For sensitive data, use presigned URLs or Workers authentication to control access.
9. CORS, Multipart Upload and Advanced Features
9.1. CORS Configuration
[
  {
    "AllowedOrigins": ["https://yourdomain.com"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3600
  }
]
9.2. Multipart Upload for Large Files
R2 supports multipart upload for large files — splitting files into multiple parts and uploading in parallel. Compatible with S3 multipart API, supporting files up to 5TB.
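Sizing the parts is the main decision. Under the usual S3-compatible limits (at most 10,000 parts; every part except the last at least 5 MiB), a helper can pick the smallest part size that still fits — the exact R2 per-part ceiling is not checked here:

```typescript
const MIN_PART = 5 * 1024 * 1024; // 5 MiB minimum for all but the last part
const MAX_PARTS = 10_000;         // S3-compatible part-count limit

// Smallest part size (bytes) that keeps the upload within MAX_PARTS.
function partSizeFor(fileSize: number): number {
  return Math.max(MIN_PART, Math.ceil(fileSize / MAX_PARTS));
}

function partCountFor(fileSize: number): number {
  return Math.max(1, Math.ceil(fileSize / partSizeFor(fileSize)));
}
```

For a 100 MiB file this yields 20 parts of 5 MiB each, which can be uploaded in parallel.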
9.3. R2 Data Catalog (Beta)
A managed Apache Iceberg catalog built into R2 buckets, letting query engines such as Spark, Trino, or PyIceberg run analytics directly against data stored in R2.
10. Best Practices When Using R2
10 Golden Rules
1. Use Workers bindings instead of the S3 API when possible — lower latency, and no credential management or request signing.
2. Cache at the edge — combine Cache API to reduce R2 reads.
3. Lifecycle rules — auto-transition old data to Infrequent Access.
4. Event Notifications — event-driven instead of polling.
5. Presigned URLs — upload directly from client, reducing server load.
6. Multipart upload — for files > 100MB.
7. Bucket-scoped tokens — least privilege for each service.
8. Location hints — specify the region closest to your users when creating buckets.
9. Sippy migration — migrate from S3 with zero downtime.
10. Monitor with R2 analytics — track usage patterns.
11. When Should You Choose R2?
| Scenario | Recommendation | Reason |
|---|---|---|
| CDN origin, static assets | R2 | Zero egress = biggest savings |
| AI/ML datasets | R2 | Large datasets, frequent downloads |
| Log/backup storage | R2 (IA tier) | $0.01/GB, cheapest on the market |
| Compliance/immutable backups | S3 | R2 lacks Object Lock/WORM |
| Deep archive (<$0.004/GB) | S3 Glacier | R2 has no equivalent tier |
| Tight AWS ecosystem | S3 | Lambda, Athena, Redshift integration |
| Edge processing + storage | R2 + Workers | Native integration, no Lambda@Edge needed |
| Startup/side project | R2 | Free tier 10GB + zero egress |
12. Conclusion
Cloudflare R2 is not an "S3 killer" in every scenario — but it is the best choice for the majority of common workloads. Zero egress does not just save money; it frees you from "vendor lock-in via billing" — the phenomenon where egress costs are so high that migrating data out of a cloud becomes financially unfeasible.
Combined with Workers, Event Notifications, and Queues, R2 becomes a complete object storage + edge compute platform. With a 10GB free tier and zero egress on top, it is hard to beat for startups, side projects, and anyone looking to optimize cloud costs.
Key takeaways:
- Zero egress — 20-30x cheaper than Big 3 for read-heavy workloads
- S3 API compatible — easy migration, no SDK/tool changes needed
- Workers + R2 — native edge compute + storage, no extra services
- Event Notifications — fully serverless event-driven pipelines
- Sippy migration — move from S3 with zero downtime, zero cost
References:
- Cloudflare R2 Documentation
- R2 Pricing
- R2 Event Notifications & Infrequent Access Announcement
- R2 Event Notifications Docs
- Workers R2 API Reference
Disclaimer: The opinions expressed in this blog are solely my own and do not reflect the views or opinions of my employer or any affiliated organizations. The content provided is for informational and educational purposes only and should not be taken as professional advice. While I strive to provide accurate and up-to-date information, I make no warranties or guarantees about the completeness, reliability, or accuracy of the content. Readers are encouraged to verify the information and seek independent advice as needed. I disclaim any liability for decisions or actions taken based on the content of this blog.