Cloudflare R2 — Zero Egress Object Storage for Developers

Posted on: 4/27/2026 12:15:37 PM

Each month, how much of your AWS S3 bill goes to egress fees — the cost of users downloading files from your bucket? For many projects, this accounts for 50-80% of total storage costs. Cloudflare R2 was built to solve exactly this problem: S3-compatible object storage with zero egress fees — you pay nothing for outbound bandwidth. This article provides a deep analysis of R2 architecture, Workers edge integration, production-ready upload/download patterns, and a detailed cost comparison with AWS S3.

1. The Problem — Cloud Storage "Egress Tax"

AWS S3 charges $0.09/GB for data transfer out to the Internet. That sounds small, but the fee scales with every byte your users download, so as your application grows it quickly comes to dominate the bill:

At a glance:

- $0 — egress on Cloudflare R2
- $0.015/GB/month — R2 Standard storage
- 10 GB — free tier storage per month
- Petabytes — already migrated from S3 to R2
graph LR
    subgraph "AWS S3 — Hidden Costs"
        S3["S3 Bucket
$0.023/GB storage"] --> EG["Egress
$0.09/GB"]
        EG --> USER["End User"]
        S3 --> CF["via CloudFront
$0.085/GB"]
        CF --> USER
    end
    subgraph "Cloudflare R2 — Transparent"
        R2["R2 Bucket
$0.015/GB storage"] --> CDN["Cloudflare CDN
$0 egress"]
        CDN --> USER2["End User"]
    end
    style S3 fill:#ff9800,stroke:#fff,color:#fff
    style EG fill:#e94560,stroke:#fff,color:#fff
    style R2 fill:#4CAF50,stroke:#fff,color:#fff
    style CDN fill:#4CAF50,stroke:#fff,color:#fff

Figure 1: Cost flow comparison between AWS S3 and Cloudflare R2

| Scenario | AWS S3 + CloudFront | Cloudflare R2 | Savings |
|---|---|---|---|
| Small blog — 50 GB storage, 500 GB egress/month | $1.15 + $42.50 = $43.65 | $0.75 + $0 = $0.75 | 98% |
| Mid-size SaaS — 500 GB storage, 5 TB egress/month | $11.50 + $425 = $436.50 | $7.50 + $0 = $7.50 | 98% |
| Video platform — 5 TB storage, 50 TB egress/month | $115 + $4,250 = $4,365 | $75 + $0 = $75 | 98% |
| Enterprise — 50 TB storage, 200 TB egress/month | $1,150 + $17,000 = $18,150 | $750 + $0 = $750 | 96% |
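
To make the table's arithmetic explicit, here is the same calculation as a tiny script (rates as listed above; the helper name is ours):

// Monthly cost = storage (GB x rate) + egress (GB x rate), rates from the table above
function monthlyCost(
  storageGB: number,
  egressGB: number,
  storageRate: number,
  egressRate: number
): number {
  return storageGB * storageRate + egressGB * egressRate;
}

// Mid-size SaaS: 500 GB stored, 5 TB (5,000 GB) egress per month
console.log(monthlyCost(500, 5000, 0.023, 0.085)); // S3 + CloudFront: 436.5
console.log(monthlyCost(500, 5000, 0.015, 0));     // R2: 7.5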

Why is egress so expensive?

Cloud providers charge for egress because bandwidth has real costs (transit, peering, infrastructure). But Cloudflare has a unique advantage: it operates one of the largest CDN networks in the world (330+ PoPs across 120+ countries) with peering agreements with most major ISPs, so its marginal bandwidth cost is near zero. Its business model is built on security and performance services, not storage egress — R2 simply doesn't pass bandwidth costs on to customers.

2. Cloudflare R2 Architecture

R2 is not a simple S3 clone. It was designed from the ground up to deeply integrate with the Cloudflare ecosystem:

graph TB
    CLIENT["Client
(Browser / Mobile / Server)"]
    subgraph "Cloudflare Edge — 330+ PoPs"
        WORKER["Cloudflare Worker
Auth, Transform, Route"]
        CACHE["Edge Cache
Auto-cache objects"]
    end
    subgraph "Cloudflare R2 Storage"
        R2["R2 Bucket
S3-compatible API"]
        IA["Infrequent Access
$0.01/GB/month"]
        LIFECYCLE["Lifecycle Rules
Auto-transition"]
    end
    subgraph "Event System"
        NOTIFY["Event Notifications"]
        QUEUE["Cloudflare Queue"]
        CONSUMER["Consumer Worker
Process events"]
    end
    CLIENT --> WORKER
    CLIENT -->|"S3 API / Presigned URL"| R2
    WORKER -->|"Bindings API"| R2
    WORKER --> CACHE
    CACHE --> R2
    R2 --> LIFECYCLE
    LIFECYCLE --> IA
    R2 --> NOTIFY
    NOTIFY --> QUEUE
    QUEUE --> CONSUMER
    style WORKER fill:#e94560,stroke:#fff,color:#fff
    style R2 fill:#f76c02,stroke:#fff,color:#fff
    style CACHE fill:#4CAF50,stroke:#fff,color:#fff
    style QUEUE fill:#2196F3,stroke:#fff,color:#fff

Figure 2: Cloudflare R2 overall architecture with Workers and Event Notifications

| Component | Description | Benefit |
|---|---|---|
| R2 Bucket | Distributed object storage with an S3-compatible API. Supports objects up to 5 TB. | Migrate from S3 without changing code |
| Workers Binding | Access R2 directly from Workers via an in-process binding — no HTTP overhead. | Ultra-low latency (~1-5 ms), no request signing or round-trip |
| Edge Cache | Popular objects automatically cached at 330+ PoPs near users. | Reduced latency for read-heavy workloads |
| Infrequent Access | Cheaper storage class ($0.01/GB vs $0.015/GB) for rarely accessed data. | 33% savings on cold data |
| Event Notifications | Send events to a Cloudflare Queue when objects are created/deleted/modified. | Trigger async processing (thumbnails, transcoding, indexing) |
| Lifecycle Rules | Automatically transition objects to IA or delete them after N days. | Automated cost management |

3. Two Ways to Access R2

R2 provides two distinct APIs, each suited to different use cases:

3.1. S3-compatible API — For server-to-server

Use any S3 SDK (AWS SDK, boto3, @aws-sdk/client-s3) with the R2 endpoint:

// Node.js — Upload to R2 using AWS SDK v3
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const r2Client = new S3Client({
  region: "auto",
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

await r2Client.send(new PutObjectCommand({
  Bucket: "my-bucket",
  Key: "uploads/avatar-123.webp",
  Body: fileBuffer,
  ContentType: "image/webp",
}));

// .NET — Upload to R2 using AWSSDK.S3
using Amazon.S3;
using Amazon.S3.Model;

var config = new AmazonS3Config
{
    ServiceURL = $"https://{accountId}.r2.cloudflarestorage.com",
    ForcePathStyle = true // Required for R2
};

var s3Client = new AmazonS3Client(
    accessKeyId, secretAccessKey, config);

await s3Client.PutObjectAsync(new PutObjectRequest
{
    BucketName = "my-bucket",
    Key = "uploads/avatar-123.webp",
    InputStream = fileStream,
    ContentType = "image/webp"
});
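
Downloads can be presigned the same way. A minimal sketch reusing r2Client from the Node.js example above, with @aws-sdk/s3-request-presigner (bucket and key are illustrative):

// Node.js — Presigned GET URL, valid for 15 minutes
import { GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const downloadUrl = await getSignedUrl(
  r2Client,
  new GetObjectCommand({ Bucket: "my-bucket", Key: "uploads/avatar-123.webp" }),
  { expiresIn: 900 } // seconds
);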

3.2. Workers API — For edge processing

When processing at the edge, the Workers binding is faster than the S3 API because it bypasses HTTP entirely:

// wrangler.toml
// [[r2_buckets]]
// binding = "MY_BUCKET"
// bucket_name = "my-bucket"

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);

    switch (request.method) {
      case "GET": {
        const object = await env.MY_BUCKET.get(key);
        if (!object) return new Response("Not Found", { status: 404 });

        const headers = new Headers();
        object.writeHttpMetadata(headers);
        headers.set("etag", object.httpEtag);
        headers.set("cache-control", "public, max-age=86400");

        return new Response(object.body, { headers });
      }

      case "PUT": {
        const contentType = request.headers.get("content-type") ?? "";
        await env.MY_BUCKET.put(key, request.body, {
          httpMetadata: { contentType },
          customMetadata: { uploadedBy: "worker" },
        });
        return new Response(JSON.stringify({ key }), { status: 201 });
      }

      case "DELETE": {
        await env.MY_BUCKET.delete(key);
        return new Response(null, { status: 204 });
      }

      default:
        return new Response("Method Not Allowed", { status: 405 });
    }
  },
};
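
The Env type above is not magic — it simply declares the binding from wrangler.toml. A minimal declaration matching the config comments (the R2Bucket type comes from @cloudflare/workers-types):

interface Env {
  // Name matches `binding = "MY_BUCKET"` in wrangler.toml
  MY_BUCKET: R2Bucket;
}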

Workers Binding vs S3 API — When to use which?

Workers Binding: When you need edge logic (auth, image resizing, data transforms) before or after reading/writing R2. Latency is ~1-5 ms because the call never leaves the Workers runtime — no HTTP round-trip or request signing. Note that binding calls are still billed as R2 Class A/B operations, like any other access method.

S3 API: When your backend server (Node.js, .NET, Python) needs direct R2 access, or when using tools with built-in S3 support (Terraform, rclone, Cyberduck). Compatible with the entire S3 ecosystem.

4. Presigned URLs — Direct Browser Uploads

The most common file upload pattern: client gets a presigned URL from the server, then uploads directly to R2 without proxying through the backend.

sequenceDiagram
    participant B as Browser
    participant W as Worker / API Server
    participant R2 as Cloudflare R2

    B->>W: POST /api/upload/presign {filename, contentType}
    W->>W: Validate user, generate presigned URL
    W-->>B: {uploadUrl, key}
    B->>R2: PUT uploadUrl (file binary)
    R2-->>B: 200 OK
    B->>W: POST /api/upload/confirm {key}
    W->>R2: HEAD key (verify exists)
    R2-->>W: 200 + metadata
    W-->>B: {url: "https://cdn.example.com/key"}

Figure 3: Presigned URL flow — Browser uploads directly, bypassing backend

// Worker — Generate presigned URL for upload
import { AwsClient } from "aws4fetch";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("Method Not Allowed", { status: 405 });
    }

    // Build the signing client inside the handler — the credentials are
    // Worker secrets, available on `env` rather than as globals
    const r2 = new AwsClient({
      accessKeyId: env.R2_ACCESS_KEY_ID,
      secretAccessKey: env.R2_SECRET_ACCESS_KEY,
    });

    const { filename, contentType } = await request.json<{
      filename: string;
      contentType: string;
    }>();
    const key = `uploads/${crypto.randomUUID()}/${filename}`;

    // Create presigned PUT URL — expires after 1 hour
    const url = new URL(
      `https://${env.ACCOUNT_ID}.r2.cloudflarestorage.com/${env.BUCKET_NAME}/${key}`
    );
    url.searchParams.set("X-Amz-Expires", "3600");

    const signed = await r2.sign(
      new Request(url, {
        method: "PUT",
        headers: { "Content-Type": contentType },
      }),
      { aws: { signQuery: true } }
    );

    return Response.json({
      uploadUrl: signed.url,
      key,
    });
  },
};

// Frontend — Upload file using presigned URL
async function uploadFile(file: File) {
  // Step 1: Get presigned URL
  const { uploadUrl, key } = await fetch("/api/upload/presign", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      filename: file.name,
      contentType: file.type,
    }),
  }).then(r => r.json());

  // Step 2: Upload directly to R2 — fail loudly instead of ignoring errors
  const uploadRes = await fetch(uploadUrl, {
    method: "PUT",
    headers: { "Content-Type": file.type },
    body: file,
  });
  if (!uploadRes.ok) throw new Error(`Upload failed: ${uploadRes.status}`);

  // Step 3: Confirm with backend
  const result = await fetch("/api/upload/confirm", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ key }),
  }).then(r => r.json());

  return result.url;
}
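
Wired to a file input, usage looks like this (the element id is hypothetical):

// Trigger the upload from an <input type="file"> change event
const input = document.querySelector<HTMLInputElement>("#file-input")!;
input.addEventListener("change", async () => {
  const file = input.files?.[0];
  if (file) console.log("File available at:", await uploadFile(file));
});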

5. Multipart Upload — Large Files (GB+)

For files over 100 MB, multipart upload splits the file into smaller parts (5-100 MB each), uploads them in parallel, and R2 reassembles them. If one part fails, only that part needs to be retried.

// Worker — Multipart upload via Workers API
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = "videos/large-file.mp4";

    // Step 1: Initialize multipart upload
    const mpu = await env.MY_BUCKET.createMultipartUpload(key, {
      httpMetadata: { contentType: "video/mp4" },
    });

    // Step 2: Upload each part (5 MB minimum except last part)
    const partSize = 10 * 1024 * 1024; // 10 MB
    const body = await request.arrayBuffer();
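    // NOTE: request.arrayBuffer() above buffers the entire body in memory —
    // fine for a demo, but Workers have a ~128 MB memory limit, so true GB+
    // uploads should stream parts from the client instead. Parts are also
    // uploaded sequentially below for clarity; since each part carries an
    // explicit part number, independent parts could be sent in parallel.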
    const uploadedParts: R2UploadedPart[] = [];

    for (let i = 0; i * partSize < body.byteLength; i++) {
      const start = i * partSize;
      const end = Math.min(start + partSize, body.byteLength);
      const chunk = body.slice(start, end);

      const part = await mpu.uploadPart(i + 1, chunk);
      uploadedParts.push(part);
    }

    // Step 3: Complete — R2 assembles all parts
    const object = await mpu.complete(uploadedParts);

    return Response.json({
      key: object.key,
      size: object.size,
      etag: object.httpEtag,
    });
  },
};

Important multipart upload considerations

1. Minimum part size: Each part (except the last) must be at least 5 MB. Violating this causes an error.

2. Maximum parts: Up to 10,000 parts per upload.

3. Auto-abort: Incomplete multipart uploads are automatically aborted after 7 days. Uploaded parts consume storage and incur charges until aborted.

4. Resume: Use resumeMultipartUpload(key, uploadId) to continue interrupted uploads — no need to start over.
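
A minimal resume sketch — uploadId, missingChunk, and previouslyUploadedParts would come from your own bookkeeping (e.g. a D1 row); the names are illustrative:

// Rebuild the upload handle — synchronous, no network call
const mpu = env.MY_BUCKET.resumeMultipartUpload(key, uploadId);

// Re-upload only the part that failed, then complete with ALL parts,
// including those that succeeded earlier
const part5 = await mpu.uploadPart(5, missingChunk);
const object = await mpu.complete([...previouslyUploadedParts, part5]);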

6. Event Notifications — Async Processing

R2 Event Notifications trigger Workers when objects change — ideal for image processing, video transcoding, and search indexing:

graph LR
    UPLOAD["Upload image
avatar.jpg"] --> R2["R2 Bucket"]
    R2 -->|"Event: object-create"| QUEUE["Cloudflare Queue"]
    QUEUE --> WORKER["Consumer Worker"]
    WORKER -->|"Resize 3 sizes"| R2_THUMB["R2 Bucket
/thumbs/"]
    WORKER -->|"Analyze content"| AI["Workers AI
Image Classification"]
    WORKER -->|"Update metadata"| DB["D1 Database"]
    style R2 fill:#f76c02,stroke:#fff,color:#fff
    style QUEUE fill:#2196F3,stroke:#fff,color:#fff
    style WORKER fill:#e94560,stroke:#fff,color:#fff

Figure 4: Event-driven pipeline — Upload image, resize + AI classify + update DB

// wrangler.toml — Event Notifications configuration
// [[r2_buckets]]
// binding = "MY_BUCKET"
// bucket_name = "my-bucket"
//
// [[queues.consumers]]
// queue = "r2-events"
// max_batch_size = 10
// max_batch_timeout = 5

// Consumer Worker — Process R2 events
export default {
  async queue(
    batch: MessageBatch<R2EventNotification>,
    env: Env
  ): Promise<void> {
    for (const message of batch.messages) {
      const event = message.body;

      if (event.action === "PutObject") {
        const key = event.object.key;

        // Only process images
        if (key.match(/\.(jpg|jpeg|png|webp)$/i)) {
          const original = await env.MY_BUCKET.get(key);
          if (!original) continue;

          const imageData = await original.arrayBuffer();

          // Create a 200x200 thumbnail — resizeImage is a placeholder for
          // your codec of choice (e.g. Cloudflare Images or a WASM resizer)
          const thumb = await resizeImage(imageData, 200, 200);
          await env.MY_BUCKET.put(
            `thumbs/${key}`,
            thumb,
            { httpMetadata: { contentType: "image/webp" } }
          );

          // Update database
          await env.DB.prepare(
            "UPDATE files SET thumbnail_key = ? WHERE key = ?"
          ).bind(`thumbs/${key}`, key).run();
        }
      }

      message.ack();
    }
  },
};
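
One production detail the loop above glosses over: if processing throws, the message should be retried, not lost. A minimal sketch of the pattern (processEvent stands in for the thumbnail/DB logic above, factored out):

try {
  await processEvent(event, env); // the resize + DB update from above
  message.ack();                  // success — remove from the queue
} catch (err) {
  message.retry();                // redeliver later instead of dropping the event
}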

7. Migrating from S3 to R2

Cloudflare provides two official migration tools, suited to different scenarios:

| Tool | How it works | Best for |
|---|---|---|
| Super Slurper | Copies an entire bucket from S3/GCS to R2. Runs in the background and handles petabytes. | One-time migration when you need all data copied before switching. |
| Sippy | Incremental migration — when a client requests an object not yet in R2, Sippy automatically fetches it from S3, stores it in R2, and serves it. The next request is served from R2. | Zero-downtime migration without an upfront full copy. |

sequenceDiagram
    participant C as Client
    participant R2 as Cloudflare R2
    participant S3 as AWS S3 (source)

    Note over R2: Sippy enabled

    C->>R2: GET /images/photo.jpg
    R2->>R2: Check local storage
    alt Object exists in R2
        R2-->>C: 200 OK (from R2)
    else Object not found
        R2->>S3: Fetch /images/photo.jpg
        S3-->>R2: Object data
        R2->>R2: Store in R2 storage
        R2-->>C: 200 OK (now cached)
    end

    Note over C,S3: Next request serves from R2, no S3 call needed

Figure 5: Sippy migration — Lazy copy from S3, zero downtime, no upfront S3 egress

Practical migration strategy

Step 1: Enable Sippy on R2 bucket, pointing to the source S3 bucket.

Step 2: Update DNS/CDN to point to R2. Client requests are served by Sippy — missing objects are automatically fetched from S3.

Step 3: In parallel, run Super Slurper to copy remaining data (objects not yet requested).

Step 4: Once Super Slurper completes, disable Sippy. R2 now has all data.

This approach spreads the S3 egress cost instead of front-loading it: each object leaves S3 exactly once — either when Sippy lazily fetches it on first request or when Super Slurper copies it — rather than forcing a full-bucket transfer before you can switch over.

8. Lifecycle Rules and Infrequent Access

R2 keeps storage classes simple — just two tiers, avoiding the confusion of S3's half-dozen-plus classes:

| Criteria | R2 Standard | R2 Infrequent Access | S3 Standard | S3 Glacier |
|---|---|---|---|---|
| Storage | $0.015/GB | $0.01/GB | $0.023/GB | $0.004/GB |
| Egress | $0 | $0 | $0.09/GB | $0.09/GB + retrieval |
| Retrieval fee | None | $0.01/GB | None | $0.03-0.05/GB |
| Min storage duration | None | 30 days | None | 90-180 days |
| Availability | Immediate | Immediate | Immediate | Minutes to hours |

# Wrangler CLI — Manage lifecycle rules
# Add rule: transition to IA after 90 days, delete after 365 days
npx wrangler r2 bucket lifecycle add my-bucket \
  --prefix "logs/" \
  --transition-to-ia-after 90 \
  --expire-after 365

# List current rules
npx wrangler r2 bucket lifecycle list my-bucket

# Remove rule
npx wrangler r2 bucket lifecycle remove my-bucket --id rule-id-123
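
R2 also exposes lifecycle configuration through its S3-compatible API, so the same rule can be managed from code. A hedged sketch with the AWS SDK, reusing r2Client from section 3.1 (the storage-class string R2 expects for IA transitions is an assumption — verify against R2's S3 API docs):

// Same rule via the S3 API — PutBucketLifecycleConfiguration
import { PutBucketLifecycleConfigurationCommand } from "@aws-sdk/client-s3";

await r2Client.send(new PutBucketLifecycleConfigurationCommand({
  Bucket: "my-bucket",
  LifecycleConfiguration: {
    Rules: [{
      ID: "logs-tiering",
      Filter: { Prefix: "logs/" },
      Status: "Enabled",
      Transitions: [{ Days: 90, StorageClass: "STANDARD_IA" }], // class name: assumption
      Expiration: { Days: 365 },
    }],
  },
}));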

9. Production Patterns

9.1. CDN Cache + R2 — Optimizing read-heavy workloads

// Worker — Serve R2 objects with CDN caching
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);

    // Check Cache API first
    const cache = caches.default;
    let response = await cache.match(request);
    if (response) return response;

    // No cache — fetch from R2
    const object = await env.MY_BUCKET.get(key);
    if (!object) {
      return new Response("Not Found", { status: 404 });
    }

    const headers = new Headers();
    object.writeHttpMetadata(headers);
    headers.set("etag", object.httpEtag);

    // Cache static assets 1 year, dynamic content 1 hour
    const isStatic = key.match(/\.(js|css|woff2|webp|avif|svg)$/);
    headers.set(
      "cache-control",
      isStatic
        ? "public, max-age=31536000, immutable"
        : "public, max-age=3600, s-maxage=86400"
    );

    response = new Response(object.body, { headers });

    // Store in the edge cache without blocking the response
    if (request.method === "GET") {
      ctx.waitUntil(cache.put(request, response.clone()));
    }

    return response;
  },
};

9.2. Access Control — Signed URLs + Auth

// Worker — Auth check before allowing download
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Verify JWT token
    const token = request.headers.get("Authorization")?.replace("Bearer ", "");
    if (!token) return new Response("Unauthorized", { status: 401 });

    // verifyJWT and checkPermission are app-specific helpers (placeholders here)
    const payload = await verifyJWT(token, env.JWT_SECRET);
    if (!payload) return new Response("Forbidden", { status: 403 });

    const key = new URL(request.url).pathname.slice(1);

    // Check file access permission
    const allowed = await checkPermission(env.DB, payload.userId, key);
    if (!allowed) return new Response("Forbidden", { status: 403 });

    const object = await env.MY_BUCKET.get(key);
    if (!object) return new Response("Not Found", { status: 404 });

    const headers = new Headers();
    object.writeHttpMetadata(headers);
    headers.set("cache-control", "private, no-store");

    return new Response(object.body, { headers });
  },
};

10. Comprehensive Comparison with Other Object Storage

| Criteria | Cloudflare R2 | AWS S3 | GCS | Azure Blob | Backblaze B2 |
|---|---|---|---|---|---|
| Egress | $0 | $0.09/GB | $0.12/GB | $0.087/GB | $0.01/GB |
| Storage | $0.015/GB | $0.023/GB | $0.020/GB | $0.018/GB | $0.006/GB |
| Free tier | 10 GB + 10M ops | 5 GB (12 months) | 5 GB | 5 GB (12 months) | 10 GB |
| S3 compatible | Yes | Native | Yes (XML API) | No | Yes |
| Edge integration | Workers binding | Lambda@Edge | Cloud Functions | Azure Functions | None |
| Built-in CDN | Cloudflare CDN | CloudFront | Cloud CDN | Azure CDN | Cloudflare (partner) |
| Storage classes | 2 (Standard + IA) | 6+ | 4 | 4 | 1 |
| Event system | Queues | SNS/SQS/Lambda | Pub/Sub | Event Grid | Webhooks |

When NOT to use R2

1. Multi-region replication needed: R2 stores data in a single region (automatically chosen closest). S3 has Cross-Region Replication for compliance/DR.

2. Deep AWS ecosystem: If your project heavily uses AWS services (Lambda, SQS, DynamoDB), S3 integrates more naturally — R2 requires an additional connection layer.

3. Ultra-cheap archival storage: S3 Glacier Deep Archive at $0.00099/GB is far cheaper than R2 IA for long-term storage with rare access.

4. Specific compliance: S3 has more certifications (FedRAMP High, HIPAA, PCI DSS Level 1). R2 is gradually adding more.

5. Storage analytics: S3 Select and Athena can query directly on S3. R2 has no equivalent feature.

11. Conclusion

Cloudflare R2 is not just "cheaper S3" — it represents a fundamentally different business model for object storage, completely eliminating the "egress tax" that traditional cloud providers have charged for over a decade. With an S3-compatible API, deep Workers edge integration, event notifications, and zero-downtime Sippy migration — R2 is production-ready at any scale.

If your application serves many files to end users (images, videos, documents, static assets), R2 can reduce storage costs by 90-98% compared to S3. Start with the free tier (10 GB storage, 10 million operations/month), migrate gradually with Sippy, and only pay when you truly scale.
