Redis Streams — Lightweight Event Streaming Alternative to Kafka for Microservices

Posted on: 4/26/2026 2:25:01 AM

Your team already uses Redis for caching — so why add Kafka just to move a few thousand events per second between microservices? Redis Streams is the answer: an event streaming system built right into Redis, with consumer groups, at-least-once delivery, and sub-millisecond latency. This article deep-dives into Redis Streams architecture, compares it with Kafka, and walks through an end-to-end implementation on .NET 10.

  • 0.8 ms P99 latency — 15x faster than Kafka
  • Zero extra infra — Redis is already in place, ready to go
  • Consumer groups — scale-out parallel processing
  • At-least-once delivery — guaranteed with ACK

What Are Redis Streams?

Redis Streams is an append-only log data structure — similar to a Kafka topic but living inside Redis. Each entry has a unique ID (timestamp-based), contains key-value pairs, and is durably persisted like any other Redis data type.

graph LR
    P1[Producer 1<br/>Order Service] -->|XADD| S[Redis Stream<br/>orders-stream]
    P2[Producer 2<br/>Payment Service] -->|XADD| S
    S -->|XREADGROUP| CG1[Consumer Group: processors]
    S -->|XREADGROUP| CG2[Consumer Group: analytics]
    CG1 --> C1[Consumer A<br/>Process Order]
    CG1 --> C2[Consumer B<br/>Process Order]
    CG2 --> C3[Consumer C<br/>Update Dashboard]
    C1 -->|XACK| S
    C2 -->|XACK| S
    C3 -->|XACK| S
    style S fill:#e94560,stroke:#fff,color:#fff
    style CG1 fill:#2c3e50,stroke:#fff,color:#fff
    style CG2 fill:#16213e,stroke:#fff,color:#fff

Figure 1: Redis Streams — Producers write to the stream, Consumer Groups read in parallel with ACK

Why Not Redis Pub/Sub?

Redis Pub/Sub is fire-and-forget: if a consumer is offline when a message is published, that message is gone forever. Redis Streams addresses these limitations:

| Feature | Redis Pub/Sub | Redis Streams |
|---|---|---|
| Message persistence | ❌ Lost when no subscriber | ✅ Stored in log, replayable |
| Consumer Groups | ❌ None | ✅ Yes — parallel work distribution |
| Acknowledgement | ❌ No delivery confirmation | ✅ XACK — marks as processed |
| Replay | ❌ Cannot re-read | ✅ Read from any ID |
| Backpressure | ❌ Slow consumer = lost messages | ✅ Messages wait in stream |
| Dead letter | ❌ None | ✅ Pending Entry List (PEL) + XCLAIM |

Redis Streams Architecture — Core Components

Stream Entry

Each stream entry has the format:

// Entry ID: <timestamp>-<sequence>
// Example: 1714123456789-0
{
  "orderId": "ORD-2026-0042",
  "customerId": "C-1234",
  "amount": "2500000",
  "status": "pending"
}
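Because the ID's first component is a Unix-millisecond timestamp, an entry's creation time can be recovered from the ID alone — useful when debugging lag or auditing streams. A minimal sketch (`ParseEntryId` is a hypothetical helper, not part of any Redis client):

```csharp
using System;
using System.Globalization;

// Hypothetical helper: split a stream entry ID "<unix-ms>-<sequence>"
// into its timestamp and sequence components.
static (DateTimeOffset Timestamp, long Sequence) ParseEntryId(string id)
{
    var dash = id.LastIndexOf('-');
    var ms   = long.Parse(id[..dash], CultureInfo.InvariantCulture);
    var seq  = long.Parse(id[(dash + 1)..], CultureInfo.InvariantCulture);
    return (DateTimeOffset.FromUnixTimeMilliseconds(ms), seq);
}

var (ts, seq) = ParseEntryId("1714123456789-0");
Console.WriteLine($"created at {ts:O}, sequence {seq}");
```

The sequence number disambiguates multiple entries added within the same millisecond, so IDs are strictly increasing.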

Consumer Groups — Parallel Processing

sequenceDiagram
    participant P as Producer
    participant S as Redis Stream
    participant CG as Consumer Group
    participant C1 as Consumer A
    participant C2 as Consumer B
    participant PEL as Pending Entry List

    P->>S: XADD orders * orderId ORD-42
    P->>S: XADD orders * orderId ORD-43
    P->>S: XADD orders * orderId ORD-44

    CG->>S: XREADGROUP GROUP processors consumer-a
    S-->>C1: Entry ORD-42 (assigned to A)
    S-->>PEL: Track: ORD-42 → consumer-a

    CG->>S: XREADGROUP GROUP processors consumer-b
    S-->>C2: Entry ORD-43 (assigned to B)
    S-->>PEL: Track: ORD-43 → consumer-b

    C1->>S: XACK orders processors 1714...-0
    Note over PEL: ORD-42 removed from PEL

    Note over C2: Consumer B crashes!
    Note over PEL: ORD-43 stuck in PEL

    C1->>S: XCLAIM orders processors consumer-a 60000 1714...-1
    Note over C1: Consumer A claims ORD-43 after 60s timeout

Figure 2: Consumer Group workflow — message distribution, ACK, and recovery when a consumer crashes

💡 Consumer Group = Kafka Consumer Group

Each message in a stream is delivered to exactly one consumer within a group (like Kafka partition assignment). But different consumer groups can read the same stream independently — like multiple Kafka consumer groups reading the same topic.

Redis Streams vs Kafka — When to Choose Which?

| Criteria | Redis Streams | Apache Kafka |
|---|---|---|
| Latency (P99) | ~0.8 ms | ~12.5 ms |
| Max throughput | Hundreds of thousands msg/s | Millions msg/s |
| Storage | In-memory (with persistence) | Disk-based (infinite retention) |
| Retention | Limited by RAM (MAXLEN/MINID) | Unlimited (disk-based) |
| Infra complexity | Low — Redis already available | High — ZooKeeper/KRaft, brokers, topics |
| Ordering | Global (single stream) | Per-partition |
| Consumer Groups | ✅ Yes | ✅ Yes (more powerful) |
| Exactly-once | ❌ At-least-once only | ✅ Yes (idempotent producer) |
| Schema Registry | ❌ None | ✅ Confluent Schema Registry |
| Multi-datacenter | ⚠️ Limited | ✅ MirrorMaker / Cluster Linking |

graph TB
    A[Choose Event Streaming] --> B{Throughput > 500K msg/s?}
    B -->|Yes| K[Apache Kafka]
    B -->|No| C{Need retention > few days?}
    C -->|Yes| K
    C -->|No| D{Already using Redis?}
    D -->|Yes| R[✅ Redis Streams]
    D -->|No| E{Need exactly-once?}
    E -->|Yes| K
    E -->|No| R

    style R fill:#e94560,stroke:#fff,color:#fff
    style K fill:#2c3e50,stroke:#fff,color:#fff
    style A fill:#f8f9fa,stroke:#e94560,color:#2c3e50

Figure 3: Decision tree — Redis Streams for small-to-mid systems already running Redis, Kafka for massive scale

Implementation on .NET 10 With StackExchange.Redis

Producer — Writing Events to a Stream

using StackExchange.Redis;

var redis = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var db = redis.GetDatabase();

// XADD — append entry to stream
var entryId = await db.StreamAddAsync("orders-stream",
    new NameValueEntry[]
    {
        new("orderId", "ORD-2026-0042"),
        new("customerId", "C-1234"),
        new("amount", "2500000"),
        new("status", "pending"),
        new("timestamp", DateTimeOffset.UtcNow.ToUnixTimeMilliseconds().ToString())
    });

Console.WriteLine($"Added entry: {entryId}");
// Output: Added entry: 1714123456789-0

// Limit stream size to prevent RAM overflow
await db.StreamAddAsync("orders-stream",
    new NameValueEntry[] { new("orderId", "ORD-2026-0043"), new("amount", "1800000") },
    maxLength: 100_000,    // Keep max 100K entries
    useApproximateMaxLength: true  // ~ operator, faster than exact trim
);

Consumer Group — Parallel Reading With ACK

var streamName = "orders-stream";
var groupName = "order-processors";
var consumerName = Environment.MachineName;

// Create consumer group (run once)
try
{
    await db.StreamCreateConsumerGroupAsync(
        streamName, groupName, StreamPosition.NewMessages);
}
catch (RedisServerException ex) when (ex.Message.Contains("BUSYGROUP"))
{
    // Group already exists — OK
}

// Read and process messages
while (true)
{
    var entries = await db.StreamReadGroupAsync(
        streamName, groupName, consumerName,
        position: StreamPosition.NewMessages,
        count: 10);

    if (entries.Length == 0)
    {
        await Task.Delay(100); // Polling interval
        continue;
    }

    foreach (var entry in entries)
    {
        try
        {
            var orderId = entry["orderId"].ToString();
            var amount = decimal.Parse(entry["amount"].ToString());

            // Process order...
            await ProcessOrderAsync(orderId, amount);

            // ACK — mark as successfully processed
            await db.StreamAcknowledgeAsync(streamName, groupName, entry.Id);
        }
        catch (Exception ex)
        {
            // No ACK → message stays in Pending Entry List
            // Will be XCLAIMed by another consumer after timeout
            Console.WriteLine($"Failed to process {entry.Id}: {ex.Message}");
        }
    }
}

Dead Letter Queue — Handling Failed Messages

// Claim messages stuck in PEL for too long
var pendingMessages = await db.StreamPendingMessagesAsync(
    streamName, groupName,
    count: 20,
    consumerName: RedisValue.Null,  // All consumers
    minId: "-",
    maxId: "+");

foreach (var pending in pendingMessages)
{
    // If pending > 5 minutes and delivered > 3 times → dead letter
    if (pending.IdleTimeInMilliseconds > 300_000 && pending.DeliveryCount > 3)
    {
        // Move to dead letter stream
        var entry = await db.StreamRangeAsync(streamName, pending.MessageId, pending.MessageId);
        if (entry.Length > 0)
        {
            await db.StreamAddAsync("orders-dead-letter", entry[0].Values);
            await db.StreamAcknowledgeAsync(streamName, groupName, pending.MessageId);
            Console.WriteLine($"Moved {pending.MessageId} to dead letter after {pending.DeliveryCount} attempts");
        }
    }
    else if (pending.IdleTimeInMilliseconds > 60_000)
    {
        // Reclaim for current consumer
        await db.StreamClaimAsync(streamName, groupName, consumerName,
            minIdleTimeInMs: 60_000,
            messageIds: new[] { pending.MessageId });
    }
}
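On Redis 6.2+ the XPENDING + XCLAIM pair above can be collapsed into a single XAUTOCLAIM round trip, which scans the PEL and claims idle entries in one command. A sketch, assuming a recent StackExchange.Redis that exposes the `StreamAutoClaimAsync` wrapper and reusing the `db`, `streamName`, `groupName`, and `consumerName` variables from the snippets above:

```csharp
// Claim up to 20 entries that have been idle for more than 60s.
// "0-0" starts the PEL scan from the beginning; the result carries a
// cursor (NextStartId) to continue scanning on the next iteration.
var claimed = await db.StreamAutoClaimAsync(
    streamName, groupName, consumerName,
    60_000,   // min idle time in ms
    "0-0",    // scan cursor
    count: 20);

foreach (var entry in claimed.ClaimedEntries)
{
    // Re-process the entry, then XACK as usual
}
```

The explicit XPENDING inspection is still needed for the dead-letter decision, since it exposes `DeliveryCount`.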

Pattern: Event-Driven Microservices With Redis Streams

graph TB
    subgraph Order Service
        OS[API Controller]
        OP[Order Producer]
    end

    subgraph Redis
        RS1[orders-stream]
        RS2[payments-stream]
        RS3[notifications-stream]
        DLQ[dead-letter-stream]
    end

    subgraph Payment Service
        PC[Payment Consumer<br/>Group: payment-processors]
        PP[Payment Producer]
    end

    subgraph Notification Service
        NC1[Email Consumer<br/>Group: notifiers]
        NC2[SMS Consumer<br/>Group: notifiers]
    end

    subgraph Analytics Service
        AC[Analytics Consumer<br/>Group: analytics]
    end

    OS --> OP -->|XADD| RS1
    RS1 -->|XREADGROUP| PC
    PC --> PP -->|XADD| RS2
    RS2 -->|XREADGROUP| NC1
    RS2 -->|XREADGROUP| NC2
    RS1 -->|XREADGROUP| AC
    PC -.->|Failed 3x| DLQ

    style RS1 fill:#e94560,stroke:#fff,color:#fff
    style RS2 fill:#e94560,stroke:#fff,color:#fff
    style RS3 fill:#e94560,stroke:#fff,color:#fff
    style DLQ fill:#ff9800,stroke:#fff,color:#fff

Figure 4: Event-driven microservices — each service produces/consumes via Redis Streams

Background Worker With .NET 10 Hosted Service

public class OrderStreamConsumer : BackgroundService
{
    private readonly IConnectionMultiplexer _redis;
    private readonly IServiceScopeFactory _scopeFactory;
    private const string StreamName = "orders-stream";
    private const string GroupName = "order-processors";

    public OrderStreamConsumer(
        IConnectionMultiplexer redis,
        IServiceScopeFactory scopeFactory)
    {
        _redis = redis;
        _scopeFactory = scopeFactory;
    }

    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        var db = _redis.GetDatabase();
        var consumerName = $"worker-{Environment.MachineName}-{Environment.ProcessId}";

        await EnsureConsumerGroupAsync(db);

        while (!ct.IsCancellationRequested)
        {
            var entries = await db.StreamReadGroupAsync(
                StreamName, GroupName, consumerName,
                StreamPosition.NewMessages, count: 20);

            foreach (var entry in entries)
            {
                using var scope = _scopeFactory.CreateScope();
                var handler = scope.ServiceProvider
                    .GetRequiredService<IOrderEventHandler>();

                try
                {
                    await handler.HandleAsync(entry, ct);
                    await db.StreamAcknowledgeAsync(StreamName, GroupName, entry.Id);
                }
                catch (Exception)
                {
                    // PEL retains message, XCLAIM recovery handles it
                }
            }

            if (entries.Length == 0)
                await Task.Delay(50, ct);
        }
    }

    private async Task EnsureConsumerGroupAsync(IDatabase db)
    {
        try
        {
            await db.StreamCreateConsumerGroupAsync(
                StreamName, GroupName, StreamPosition.NewMessages);
        }
        catch (RedisServerException ex) when (ex.Message.Contains("BUSYGROUP"))
        {
            // Group already exists — OK
        }
    }
}

Production Checklist

📋 Production Deployment Checklist for Redis Streams

  • MAXLEN/MINID — Always set stream size limits to prevent RAM overflow. Use MAXLEN ~ 100000 (approximate, faster than exact)
  • Persistence — Enable AOF (appendonly yes) or RDB snapshots so data survives Redis restarts
  • Consumer naming — Use format worker-{hostname}-{pid} for easier debugging
  • XCLAIM timeout — Set 60-300s depending on average processing time. Too short → duplicate processing, too long → delayed recovery
  • Dead letter queue — After N retries (typically 3-5), move messages to a separate DLQ stream
  • Monitoring — Track: XLEN (stream size), XPENDING (pending count), consumer lag, memory usage
  • Idempotent consumers — At-least-once delivery means duplicates are possible. Consumers must be idempotent (check orderId before processing)
  • Redis Cluster — Streams work in Cluster mode, but each stream lives on one shard (use hash tags for co-location if needed)
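The idempotency item above is the one most often skipped. The core of the pattern is a first-seen check keyed on a business ID — a minimal sketch where an in-memory `HashSet` stands in for what would be a Redis `SET <key> NX EX <ttl>` in production (e.g. `StringSetAsync` with `When.NotExists` in StackExchange.Redis), so the sketch stays self-contained:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for a Redis-backed first-seen check: Add returns true only
// the first time a given orderId is recorded.
var processedIds = new HashSet<string>();

bool TryBegin(string orderId) => processedIds.Add(orderId);

Console.WriteLine(TryBegin("ORD-2026-0042")); // True  — first delivery: process it
Console.WriteLine(TryBegin("ORD-2026-0042")); // False — redelivery: skip, already done
```

With a shared store like Redis, this check holds across worker restarts and across consumers, which is exactly when at-least-once redelivery bites.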

Monitoring With Redis CLI

# View stream info
XINFO STREAM orders-stream

# View consumer groups
XINFO GROUPS orders-stream

# View consumers in a group
XINFO CONSUMERS orders-stream order-processors

# View pending messages (not yet ACKed)
XPENDING orders-stream order-processors - + 10

# Count entries in stream
XLEN orders-stream

# Trim stream (keep latest 50K entries)
XTRIM orders-stream MAXLEN ~ 50000

When NOT to Use Redis Streams

| Scenario | Redis Streams suitable? | Alternative |
|---|---|---|
| Microservice events < 100K msg/s | ✅ Perfect | |
| Order processing, task queue | ✅ Excellent | |
| Real-time notifications | ✅ Sub-ms latency | |
| AI/LLM token streaming | ✅ Great fit | |
| Log aggregation (TB/day) | ❌ RAM overflow | Kafka + ClickHouse |
| Event sourcing (infinite retention) | ❌ RAM-bound | Kafka / EventStore |
| Cross-datacenter replication | ⚠️ Limited | Kafka MirrorMaker |
| Exactly-once processing | ❌ At-least-once only | Kafka + Transactions |

Conclusion

Redis Streams is the pragmatic choice for event streaming at small-to-mid scale: sub-millisecond latency, powerful consumer groups, zero infrastructure overhead (you already run Redis). Don't deploy Kafka just because "everyone uses it" — if your throughput is under a few hundred thousand msg/s and you don't need infinite retention, Redis Streams handles 90% of use cases with 10% of the complexity. Escalate to Kafka when your system truly needs to scale horizontally to millions of messages per second.
