Valkey vs Redis 2026 — The Fork That Reshaped the In-Memory Database Landscape

Posted on: 4/25/2026 8:13:39 PM

  • 1.19M RPS on a single Valkey 8.0 node
  • 230% throughput improvement over 7.2
  • 20% cost savings on AWS ElastiCache
  • 40+ organizations contributing to Valkey

In March 2024, Redis Ltd changed its license from BSD to dual RSAL/SSPL — a decision that shook the entire in-memory database ecosystem. Just weeks later, Valkey was born under the Linux Foundation, backed by Amazon, Google, Oracle, Alibaba, Ericsson, and dozens of other organizations. By mid-2026, the split has gone far beyond a licensing story — the two products are diverging in architecture, features, and strategy.

This article provides a deep-dive comparison of Valkey 8.x and Redis 8.0, from I/O threading architecture to module ecosystems, helping you make the right decision for your production systems.

1. Context: Why the Split Happened

  • March 2024: Redis Ltd switched from BSD to a dual RSAL + SSPL license. Cloud providers could no longer offer Redis-as-a-Service without a commercial agreement.
  • March 2024: The Linux Foundation announced Valkey — forked from Redis OSS 7.2.4, maintaining the BSD 3-Clause license. Amazon, Google, Oracle, and Ericsson joined from day one.
  • September 2024: Valkey 8.0 GA — new I/O threading, a 230% throughput increase, reaching 1.19M RPS on a single node.
  • March 2025: Redis switched to AGPL — acknowledging SSPL was too restrictive, but too much of the ecosystem had already moved to Valkey to turn back.
  • April 2026: Valkey 8.1 adds native JSON, Bloom Filter, and Vector Search. Redis 8.0 fully integrates Redis Stack (JSON, Search, TimeSeries, Bloom).

The Key Takeaway

Redis's switch to AGPL was less a return to open source for its own sake than a reaction to SSPL driving many enterprises to abandon Redis entirely for Valkey. AGPL allows self-hosting, but anyone offering the software as a network service must publish the source of their modified version — still a major barrier for cloud providers.

2. Architecture: Same Root, Different Directions

graph TD
    subgraph Redis_8["Redis 8.0"]
        R1["Main Thread<br/>Command Processing"]
        R2["I/O Threads<br/>Read/Write Network"]
        R3["Background Threads<br/>AOF, RDB, Lazy Free"]
        R4["Redis Stack Modules<br/>JSON · Search · TimeSeries · Bloom"]
        R1 --> R2
        R1 --> R3
        R1 --> R4
    end
    subgraph Valkey_8["Valkey 8.x"]
        V1["Main Thread<br/>Command Processing"]
        V2["Async I/O Threads<br/>Intelligent Distribution"]
        V3["Background Threads<br/>AOF, RDB, Lazy Free"]
        V4["Native Extensions<br/>JSON · Bloom · Vector Search"]
        V1 --> V2
        V1 --> V3
        V1 --> V4
    end
    style Redis_8 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style Valkey_8 fill:#f8f9fa,stroke:#4CAF50,color:#2c3e50
    style R1 fill:#e94560,stroke:#fff,color:#fff
    style V1 fill:#4CAF50,stroke:#fff,color:#fff
    style R4 fill:#fff3e0,stroke:#ff9800,color:#2c3e50
    style V4 fill:#e8f5e9,stroke:#4CAF50,color:#2c3e50

High-level architecture comparison: Redis 8.0 vs Valkey 8.x

2.1. I/O Threading — Valkey's Quantum Leap

Both Redis and Valkey maintain the single-threaded command processing model for consistency guarantees. However, their approaches to network I/O differ significantly:

| Criteria | Redis 8.0 | Valkey 8.x |
|---|---|---|
| I/O Model | I/O threads read/write synchronously in batches | Async I/O threading — intelligent distribution based on actual load |
| Throughput (single node) | ~1M RPS (c7g.4xlarge) | 1.19M RPS (c7g.4xlarge) — ~20% higher |
| Tail Latency (P99.9) | Stable at low-medium load | More consistent thanks to automatic I/O distribution |
| CPU Utilization | I/O threads idle at low load | Scales up/down based on real-time metrics |

Valkey's secret lies in AWS's contribution of the Async I/O Threading implementation. Originally, the code from ElastiCache was 15,000 lines — the Valkey community refactored it down to 1,500 lines of C while preserving performance. The result: 230% throughput increase over Valkey 7.2 with zero application code changes.

# Enable I/O threading on Valkey
# valkey.conf
io-threads 4
io-threads-do-reads yes

# Valkey 8.x automatically distributes I/O based on load
# No manual tuning required, unlike Redis

2.2. Memory Efficiency

Valkey 8.1 (via ElastiCache) introduces a new hash table that improves memory efficiency by 20%. This is particularly impactful for workloads with millions of small keys — where data structure overhead constitutes a large proportion of actual data.

Production Reality

A workload with 100 million string keys (averaging 200 bytes/key) on Valkey 8.1 consumes approximately 16% less RAM compared to the same workload on Redis 7.2. This translates to downsizing 1-2 instance tiers — significant monthly cost savings.
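The arithmetic behind that claim can be sketched in a few lines. The per-key overhead figures below are illustrative assumptions chosen to reproduce the ~16% delta from the scenario above; they are not measured constants for either engine:

```python
# Back-of-envelope RAM estimate for 100M string keys averaging 200 bytes/key.
# Per-key overhead values are assumptions for illustration only.

KEYS = 100_000_000
AVG_VALUE_BYTES = 200

def dataset_ram_gb(per_key_overhead_bytes: int) -> float:
    """Total RAM in GiB: payload plus per-key metadata overhead."""
    total_bytes = KEYS * (AVG_VALUE_BYTES + per_key_overhead_bytes)
    return total_bytes / 2**30

redis_gb = dataset_ram_gb(per_key_overhead_bytes=90)   # assumed legacy dict overhead
valkey_gb = dataset_ram_gb(per_key_overhead_bytes=44)  # assumed leaner hash table

savings = 1 - valkey_gb / redis_gb
print(f"Legacy layout:     {redis_gb:.1f} GiB")
print(f"New hash table:    {valkey_gb:.1f} GiB")
print(f"Relative savings:  {savings:.0%}")  # ~16% under these assumptions
```

The point of the exercise: when values are small, per-key overhead dominates, so a leaner hash table translates almost directly into an instance-tier downgrade.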

3. Features: Head-to-Head Comparison

3.1. Modules vs Native Extensions

graph LR
    subgraph Redis_Approach["Redis 8.0 — Unified Stack"]
        RS["Redis Server"]
        RJ["RedisJSON"]
        RSearch["RediSearch"]
        RT["RedisTimeSeries"]
        RB["RedisBloom"]
        RS --> RJ
        RS --> RSearch
        RS --> RT
        RS --> RB
    end
    subgraph Valkey_Approach["Valkey 8.1 — Native + Community"]
        VS["Valkey Server"]
        VJ["valkey-json"]
        VB["valkey-bloom"]
        VSearch["valkey-search<br/>(vector + full-text)"]
        VRDMA["RDMA Support<br/>(experimental)"]
        VS --> VJ
        VS --> VB
        VS --> VSearch
        VS --> VRDMA
    end
    style Redis_Approach fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style Valkey_Approach fill:#f8f9fa,stroke:#4CAF50,color:#2c3e50
    style RS fill:#e94560,stroke:#fff,color:#fff
    style VS fill:#4CAF50,stroke:#fff,color:#fff

Module strategy: Redis integrates Stack vs Valkey develops native extensions

| Feature | Redis 8.0 | Valkey 8.1 |
|---|---|---|
| JSON | RedisJSON (built-in) | valkey-json (native extension) |
| Full-text Search | RediSearch (powerful, mature) | valkey-search (in development) |
| Vector Search | RediSearch + HNSW/FLAT | valkey-search (HNSW) |
| Bloom Filter | RedisBloom (integrated) | valkey-bloom (native) |
| Time Series | RedisTimeSeries (mature) | Not available — Sorted Set workaround |
| Graph | RedisGraph (deprecated) | Not supported |
| RDMA | Not supported | Experimental — reduces inter-node latency |
| Probabilistic Structures | Bloom, Cuckoo, CMS, TopK, T-Digest | Bloom (expanding) |

Migration Warning

If you're using RediSearch or RedisTimeSeries in production, carefully evaluate before switching to Valkey. Valkey's equivalent modules haven't reached feature parity yet. Valkey excels at core caching/data structures — where it outperforms on raw performance.

3.2. JSON Operations — Practical Comparison

# Redis 8.0 — JSON path operations (RedisJSON)
127.0.0.1:6379> JSON.SET user:1001 $ '{"name":"John","role":"developer","skills":["dotnet","vue","redis"]}'
OK
127.0.0.1:6379> JSON.GET user:1001 $.skills[0]
"[\"dotnet\"]"
127.0.0.1:6379> JSON.ARRAPPEND user:1001 $.skills '"kafka"'
[4]

# Valkey 8.1 — JSON operations (valkey-json, API compatible)
127.0.0.1:6379> JSON.SET user:1001 $ '{"name":"John","role":"developer","skills":["dotnet","vue","valkey"]}'
OK
127.0.0.1:6379> JSON.GET user:1001 $.skills[0]
"[\"dotnet\"]"
127.0.0.1:6379> JSON.ARRAPPEND user:1001 $.skills '"flink"'
[4]

4. Performance: Real-World Benchmarks

graph TD
    subgraph Benchmark["Benchmark on AWS c7g.4xlarge (8 vCPU, 16GB RAM)"]
        B1["Workload: 50% GET + 50% SET<br/>Key size: 64 bytes, Value: 256 bytes"]
        B2["Redis 8.0<br/>~1,000,000 RPS<br/>P99: 0.4ms"]
        B3["Valkey 8.1<br/>~1,190,000 RPS<br/>P99: 0.35ms"]
        B4["Valkey 7.2<br/>~360,000 RPS<br/>P99: 0.8ms"]
        B1 --> B2
        B1 --> B3
        B1 --> B4
    end
    style Benchmark fill:#f8f9fa,stroke:#2c3e50,color:#2c3e50
    style B2 fill:#fff3e0,stroke:#ff9800,color:#2c3e50
    style B3 fill:#e8f5e9,stroke:#4CAF50,color:#2c3e50
    style B4 fill:#f8f9fa,stroke:#e0e0e0,color:#888

Throughput benchmark on identical hardware — Valkey 8.1 leads thanks to Async I/O Threading

Key benchmark findings:

  • Throughput: Valkey 8.1 is approximately 19% higher than Redis 8.0 on identical hardware, thanks to a more efficient I/O threading implementation.
  • Latency: Valkey's P99 is more consistent — fewer spikes, less jitter. Critical for SLA-sensitive systems.
  • Cluster mode: Both scale linearly when adding shards, but Valkey's per-slot metrics make debugging bottlenecks easier.
  • Long-running workloads: Redis shows better stability for sustained workloads running >24h with large backlogs.
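The headline percentages can be re-derived directly from the RPS figures in the chart above:

```python
# Re-derive the headline deltas from the benchmark RPS numbers.
valkey_81 = 1_190_000  # Valkey 8.1 on c7g.4xlarge
redis_80 = 1_000_000   # Redis 8.0, same hardware
valkey_72 = 360_000    # Valkey 7.2 baseline

vs_redis = (valkey_81 - redis_80) / redis_80    # Valkey 8.1 vs Redis 8.0
vs_72 = (valkey_81 - valkey_72) / valkey_72     # Valkey 8.1 vs Valkey 7.2

print(f"Valkey 8.1 vs Redis 8.0:  +{vs_redis:.0%}")   # ~19%
print(f"Valkey 8.1 vs Valkey 7.2: +{vs_72:.0%}")      # ~230%
```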

5. Cloud Provider Ecosystem

| Cloud Provider | Redis Managed | Valkey Managed |
|---|---|---|
| AWS | ElastiCache for Redis (requires commercial agreement) | ElastiCache for Valkey (default, 20% cheaper) |
| Google Cloud | Memorystore for Redis | Memorystore for Valkey |
| Azure | Azure Cache for Redis (still primary option) | Azure Cache for Valkey (preview) |
| Oracle Cloud | OCI Cache with Redis | OCI Cache with Valkey |
| Heroku | Key-Value Store (Redis) | Key-Value Store (Valkey 8.1) |

AWS ElastiCache — Valkey by Default

Since late 2024, AWS has made Valkey the default for all new ElastiCache and MemoryDB instances. At 20% lower pricing combined with higher throughput — AWS is clearly betting on Valkey. If you're creating a new cluster on AWS, Valkey is the no-brainer choice.

6. Integration with .NET and Practical Use Cases

Both Redis and Valkey are protocol-compatible (RESP), so popular client libraries work with both without any code changes:

// StackExchange.Redis — works with both Redis and Valkey
using StackExchange.Redis;

// Connect to Valkey cluster on AWS ElastiCache
var config = new ConfigurationOptions
{
    EndPoints = { "valkey-cluster.abc123.cache.amazonaws.com:6379" },
    Ssl = true,
    AbortOnConnectFail = false,
    ConnectRetry = 3,
    SyncTimeout = 5000
};

var connection = ConnectionMultiplexer.Connect(config);
var db = connection.GetDatabase();

// JSON operations (requires valkey-json / RedisJSON module)
await db.ExecuteAsync("JSON.SET", "session:user1001", "$",
    "{\"userId\":1001,\"role\":\"admin\",\"lastActive\":\"2026-04-25T10:00:00Z\"}");

// Bloom filter check (requires valkey-bloom / RedisBloom module)
await db.ExecuteAsync("BF.ADD", "email:seen", "john@example.com");
bool exists = (int)await db.ExecuteAsync("BF.EXISTS", "email:seen",
    "john@example.com") == 1;

// Core operations — identical across both
await db.StringSetAsync("cache:product:42", productJson, TimeSpan.FromMinutes(30));
string cached = await db.StringGetAsync("cache:product:42");

// Distributed caching in ASP.NET Core / .NET 10
// Program.cs
builder.Services.AddStackExchangeRedisCache(options =>
{
    // Just change the connection string — no code changes needed
    options.Configuration = "valkey-cluster.abc123.cache.amazonaws.com:6379,ssl=true";
    options.InstanceName = "MyApp:";
});

// Use IDistributedCache as usual
public class ProductService(IDistributedCache cache, IProductRepository repository)
{
    public async Task<Product?> GetProductAsync(int id)
    {
        var key = $"product:{id}";
        var cached = await cache.GetStringAsync(key);
        if (cached is not null)
            return JsonSerializer.Deserialize<Product>(cached);

        var product = await repository.FindAsync(id);
        if (product is not null)
        {
            await cache.SetStringAsync(key, JsonSerializer.Serialize(product),
                new DistributedCacheEntryOptions
                {
                    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(15),
                    SlidingExpiration = TimeSpan.FromMinutes(5)
                });
        }
        return product;
    }
}

7. Migration Guide: From Redis to Valkey

graph LR
    A["Assess<br/>Compatibility"] --> B["Test on<br/>Staging"]
    B --> C["Migrate<br/>Data"]
    C --> D["Cutover<br/>DNS/Config"]
    D --> E["Monitor<br/>& Validate"]
    style A fill:#e94560,stroke:#fff,color:#fff
    style B fill:#ff9800,stroke:#fff,color:#fff
    style C fill:#2196F3,stroke:#fff,color:#fff
    style D fill:#4CAF50,stroke:#fff,color:#fff
    style E fill:#9C27B0,stroke:#fff,color:#fff

5-step migration from Redis to Valkey

Step 1: Assess Compatibility

Redis and Valkey are approximately 90% compatible at the command level. Check the following:

  • Core commands (GET, SET, HGET, LPUSH, ZADD...): 100% compatible.
  • Modules: If using RediSearch, RedisTimeSeries — evaluate replacements.
  • ACL: Syntax is identical, but Valkey adds some new permissions.
  • Cluster API: Compatible, Valkey adds per-slot metrics.
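One quick way to act on that checklist is to scan your codebase (or a MONITOR capture) for module-prefixed commands with weak Valkey coverage. A minimal sketch — the status strings simply restate the feature table above and should be treated as a point-in-time assumption, not an authoritative matrix:

```python
import re

# Module command prefixes mapped to their Valkey status (per the feature
# table in section 3.1; verify against current docs before relying on it).
MODULE_STATUS = {
    "FT.": "valkey-search still in development",
    "TS.": "no Valkey equivalent — Sorted Set workaround needed",
    "GRAPH.": "not supported on Valkey (deprecated on Redis too)",
    "JSON.": "covered by valkey-json",
    "BF.": "covered by valkey-bloom",
}

def flag_migration_risks(source: str) -> dict[str, str]:
    """Return module prefixes found in `source`, mapped to their Valkey status."""
    found = {}
    for prefix, status in MODULE_STATUS.items():
        # Match e.g. "FT.SEARCH", "TS.ADD" — prefix followed by a command word.
        if re.search(re.escape(prefix) + r"[A-Z]+", source):
            found[prefix] = status
    return found

capture = 'db.ExecuteAsync("FT.SEARCH", ...); db.ExecuteAsync("JSON.GET", ...)'
for prefix, status in flag_migration_risks(capture).items():
    print(f"{prefix:<7} -> {status}")
```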

Step 2: Test on Staging

# Use valkey-benchmark for stress testing
valkey-benchmark -h staging-valkey.internal -p 6379 \
  -t set,get,lpush,lpop,zadd,zrangebyscore \
  -n 1000000 -c 50 -P 16 --threads 4

# Compare with redis-benchmark on identical hardware
redis-benchmark -h staging-redis.internal -p 6379 \
  -t set,get,lpush,lpop,zadd,zrangebyscore \
  -n 1000000 -c 50 -P 16 --threads 4

Step 3: Migrate Data

Three common approaches:

| Method | Downtime | Complexity | Best When |
|---|---|---|---|
| RDB Snapshot | Yes (a few minutes) | Low | Small dataset (<10GB), downtime acceptable |
| Replication | Near-zero | Medium | Large dataset, Valkey supports replicating from Redis |
| Dual Write | Zero | High | Mission-critical, need fast rollback |

# Migration via replication (zero-downtime preferred)
# On Valkey node, configure replication from Redis master
valkey-cli -h valkey-new.internal
127.0.0.1:6379> REPLICAOF redis-old.internal 6379

# Monitor sync progress
127.0.0.1:6379> INFO replication
# Wait for master_sync_in_progress:0 and master_link_status:up

# When sync completes, promote Valkey to master
127.0.0.1:6379> REPLICAOF NO ONE

# Update connection string in your application
# Switch DNS/config from redis-old.internal to valkey-new.internal
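
For the dual-write option from the table, the pattern is: write to both backends, keep reading from the old one, then flip reads once the new cluster is trusted. A stub-backed sketch of the shape of that logic — `DictBackend` is a stand-in for real Redis/Valkey client connections, not an actual client:

```python
class DictBackend:
    """Stand-in for a Redis/Valkey client; swap in real connections."""
    def __init__(self):
        self.data = {}
    def set(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data.get(key)

class DualWriteCache:
    """Write to both backends; read from the old one until cutover."""
    def __init__(self, old, new):
        self.old, self.new = old, new
        self.read_from_new = False  # flip this at cutover

    def set(self, key, value):
        self.old.set(key, value)
        try:
            self.new.set(key, value)  # best-effort: don't fail the request
        except Exception:
            pass  # in real code: log and reconcile later

    def get(self, key):
        backend = self.new if self.read_from_new else self.old
        return backend.get(key)

cache = DualWriteCache(DictBackend(), DictBackend())
cache.set("cache:product:42", "{...}")
assert cache.get("cache:product:42") == "{...}"  # served by old backend
cache.read_from_new = True                       # cutover: reads hit Valkey
assert cache.get("cache:product:42") == "{...}"  # same data, new backend
```

The rollback story is what makes this worth the complexity: flipping `read_from_new` back is instant, with no data migration to undo.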

8. When to Choose Redis? When to Choose Valkey?

graph TD
    Q1{"Need<br/>RediSearch/TimeSeries?"}
    Q2{"Running on<br/>AWS/GCP?"}
    Q3{"Open license<br/>important?"}
    Q4{"Need Vector Search<br/>+ JSON + Bloom?"}
    R1["Redis 8.0<br/>Stronger ecosystem"]
    R2["Valkey 8.1<br/>Performance + Savings"]
    R3["Redis 8.0 (AGPL)<br/>Self-host possible"]
    R4["Both work<br/>Valkey if perf priority"]
    Q1 -->|Yes| R1
    Q1 -->|No| Q2
    Q2 -->|Yes| R2
    Q2 -->|No| Q3
    Q3 -->|Yes| R2
    Q3 -->|No| Q4
    Q4 -->|Yes| R4
    Q4 -->|No| R2
    style Q1 fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style Q2 fill:#f8f9fa,stroke:#2196F3,color:#2c3e50
    style Q3 fill:#f8f9fa,stroke:#ff9800,color:#2c3e50
    style Q4 fill:#f8f9fa,stroke:#9C27B0,color:#2c3e50
    style R1 fill:#e94560,stroke:#fff,color:#fff
    style R2 fill:#4CAF50,stroke:#fff,color:#fff
    style R3 fill:#ff9800,stroke:#fff,color:#fff
    style R4 fill:#2196F3,stroke:#fff,color:#fff

Decision tree: Redis vs Valkey for new projects

| Scenario | Recommendation | Reason |
|---|---|---|
| New caching layer on AWS/GCP | Valkey | Default on ElastiCache, 20% cheaper, higher throughput |
| Complex full-text + vector search | Redis | RediSearch is more mature than valkey-search |
| Self-hosted, budget-constrained | Valkey | BSD license, no restrictions, strong community |
| Time series + analytics | Redis | RedisTimeSeries has no Valkey equivalent yet |
| Microservices caching + pub/sub | Valkey | Stronger core caching, identical pub/sub, cost savings |
| Running on Azure | Redis (currently) | Azure Cache for Valkey still in preview, Redis more stable |
| AI/ML pipeline with vector similarity | Both | Both support HNSW vector search, Redis offers more index types |
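
The decision tree above can be written down as a small function. The branch order and answers mirror the diagram exactly; adjust the questions to your own priorities:

```python
def recommend(needs_search_or_timeseries: bool,
              on_aws_or_gcp: bool,
              open_license_matters: bool,
              needs_vector_json_bloom: bool) -> str:
    """Direct transcription of the section 8 decision tree."""
    if needs_search_or_timeseries:
        return "Redis 8.0 — stronger module ecosystem"
    if on_aws_or_gcp:
        return "Valkey 8.1 — performance + savings"
    if open_license_matters:
        return "Valkey 8.1 — performance + savings"
    if needs_vector_json_bloom:
        return "Both work — Valkey if performance is the priority"
    return "Valkey 8.1 — performance + savings"

print(recommend(False, True, False, False))   # new caching layer on AWS
print(recommend(True, False, False, False))   # RediSearch/TimeSeries workload
```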

9. The Future: Two Diverging Paths

By mid-2026, Redis and Valkey have diverged enough that they're no longer "same product, different license":

  • Redis is becoming a real-time data platform — integrating search, time series, graph, and vector into a single engine. The "one tool for all real-time workloads" strategy.
  • Valkey focuses on core in-memory performance — I/O threading, memory efficiency, RDMA. With backing from AWS, Google, and the Linux Foundation, Valkey aims to become the "open standard" for in-memory data stores.

2027 Prediction

Command-level compatibility will gradually decrease from 90% to around 80% as both add proprietary features. Client libraries like StackExchange.Redis may need backend-detection flags. Plan your migration early if you're still on Redis 6.x (EOL since January 2026) — the longer you wait, the higher the cost.
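
Basic backend detection of the kind that prediction envisions is already feasible today by parsing `INFO server`: Valkey exposes a `valkey_version` field while keeping `redis_version` for client compatibility. A sketch — the sample INFO payloads below are abbreviated and illustrative:

```python
def detect_backend(info_server: str) -> str:
    """Classify a server from the text of its `INFO server` reply.
    Valkey reports `valkey_version` (and keeps `redis_version` for
    compatibility); plain Redis reports only `redis_version`."""
    fields = dict(
        line.split(":", 1)
        for line in info_server.splitlines()
        if ":" in line and not line.startswith("#")
    )
    if "valkey_version" in fields:
        return f"valkey {fields['valkey_version']}"
    if "redis_version" in fields:
        return f"redis {fields['redis_version']}"
    return "unknown"

# Abbreviated INFO snippets for illustration; real replies have many more fields.
valkey_info = "# Server\nredis_version:7.2.4\nvalkey_version:8.1.0\n"
redis_info = "# Server\nredis_version:8.0.0\n"
print(detect_backend(valkey_info))  # valkey 8.1.0
print(detect_backend(redis_info))   # redis 8.0.0
```

In a real client you would run `INFO server` once at connect time and cache the result, then gate any backend-specific commands on it.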

Conclusion

The Redis/Valkey split isn't a war — it's specialization. Redis is heading toward a platform approach (many features, one engine), while Valkey pursues core excellence (fewer features but each one as fast as possible). For the majority of caching and pub/sub use cases in 2026, Valkey is the sensible default — thanks to its open-source license, superior performance, and strong support from major cloud providers. Choose Redis only when you truly need the rich module ecosystem that Valkey hasn't replaced yet.
