CQRS and Event Sourcing — When CRUD Is No Longer Enough

Posted on April 21, 2026, 9:12 PM

Most applications start with the CRUD model — create, read, update, delete on a single data table. But when systems scale to millions of requests per second, when the business requires auditing every change, and when you need to know why data changed rather than just its current state, CRUD begins to show serious limitations. That's when CQRS and Event Sourcing become an architectural solution you can't ignore.

The Problem with Traditional CRUD

In the CRUD model, every time you UPDATE a record, the old state is permanently overwritten. You lose the entire change history. Consider an e-commerce system:

graph LR
    A["Place Order"] --> B["orders table<br/>status = pending"]
    B --> C["Payment"]
    C --> D["orders table<br/>status = paid"]
    D --> E["Ship"]
    E --> F["orders table<br/>status = shipped"]

CRUD: each UPDATE overwrites the previous state — no record of when payment happened, who approved, or what the old price was

With CRUD, you only know the order is in shipped status. But you can't tell: when was the order placed? Which payment gateway was used? How many times was the shipping address changed? Who approved the VIP order? These questions are critical in practice — and CRUD can't answer them.

What is CQRS?

Command Query Responsibility Segregation (CQRS) is a pattern that separates the write flow (Commands) from the read flow (Queries) into distinct models, potentially backed by different databases and scaled independently.

graph TB
    subgraph "Command Side (Write)"
        CMD["Command Handler"] --> AGG["Aggregate / Domain Model"]
        AGG --> WDB[("Write DB<br/>Optimized for consistency")]
    end
    subgraph "Query Side (Read)"
        QH["Query Handler"] --> RM["Read Model / Projection"]
        RM --> RDB[("Read DB<br/>Optimized for queries")]
    end
    WDB -->|"Sync / Async"| RDB
    CLIENT["Client"] -->|"Command"| CMD
    CLIENT -->|"Query"| QH

CQRS: Command side and Query side are fully separated, scaling independently

Why separate Read/Write?

In most systems, the read-to-write ratio is typically 10:1 to 100:1. When using a single model, you're forced to compromise: either the model is complex to serve fast queries (but writes become slow), or the model is simple for fast writes (but reads require joining many tables). CQRS lets you optimize each side for its specific needs.

Command Side — Ensuring Correctness

The command side receives state-changing instructions: PlaceOrder, ApprovePayment, ShipOrder. Each command goes through validation, business rules, then writes to the database. The model here is designed to ensure consistency and invariants — no concern for display or reporting.

// Command
public record PlaceOrder(Guid OrderId, Guid CustomerId, List<OrderItem> Items);

// Command Handler
public class PlaceOrderHandler
{
    private readonly IDocumentSession _session;

    public PlaceOrderHandler(IDocumentSession session) => _session = session;

    public async Task Handle(PlaceOrder cmd)
    {
        // Business rules validation
        if (!cmd.Items.Any())
            throw new InvalidOperationException("Order must have at least one item");

        var @event = new OrderPlaced(
            cmd.OrderId, cmd.CustomerId, cmd.Items, DateTimeOffset.UtcNow);

        _session.Events.StartStream<Order>(cmd.OrderId, @event);
        await _session.SaveChangesAsync();
    }
}

Query Side — Optimized for Read Speed

The query side serves display requests: order lists, revenue reports, real-time dashboards. Read models are denormalized — flat, join-free structures that could live as documents in MongoDB, rows in a denormalized SQL table, or even entries in an in-memory cache.

// Read Model — flat, denormalized, UI-ready
public class OrderSummaryView
{
    public Guid OrderId { get; set; }
    public Guid CustomerId { get; set; }
    public decimal TotalAmount { get; set; }
    public string Status { get; set; } = "Placed";
    public int ItemCount { get; set; }
    public string? PaymentMethod { get; set; }
    public string? TrackingNumber { get; set; }
    public DateTimeOffset PlacedAt { get; set; }
    public DateTimeOffset? ShippedAt { get; set; }
}

// Query Handler — read-only, no business logic
public class GetOrderSummaryHandler
{
    private readonly IQuerySession _query;

    public async Task<OrderSummaryView?> Handle(Guid orderId)
    {
        return await _query.LoadAsync<OrderSummaryView>(orderId);
    }
}

What is Event Sourcing?

Event Sourcing inverts the way data is stored: instead of saving the current state, you save the sequence of events that occurred. The current state is computed by replaying all events from the beginning.

graph LR
    E1["OrderPlaced<br/>10:00 AM"] --> E2["PaymentReceived<br/>10:05 AM"]
    E2 --> E3["AddressChanged<br/>10:12 AM"]
    E3 --> E4["OrderApproved<br/>10:15 AM"]
    E4 --> E5["OrderShipped<br/>2:30 PM"]
    E5 --> STATE["Current State:<br/>Shipped ✓ Paid ✓ New Address ✓"]

Event Sourcing: state = f(events) — every change is permanently stored

- 100% audit trail — every change has evidence
- Time travel — replay to any point in time
- Zero data loss — no information is destroyed by UPDATEs
- Unlimited read models — create any number of projections from the same events
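The state = f(events) idea can be sketched as a simple fold over the event stream. The event and state types below are simplified stand-ins for illustration, not the article's full Order model:

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in events and state for the replay sketch
public record Placed(decimal Total);
public record Paid();
public record Shipped();

public class OrderState
{
    public string Status { get; private set; } = "None";
    public decimal Total { get; private set; }

    // Mutate state from one event — no validation, no exceptions
    public void Apply(object e)
    {
        switch (e)
        {
            case Placed p:  Status = "Placed"; Total = p.Total; break;
            case Paid _:    Status = "Paid"; break;
            case Shipped _: Status = "Shipped"; break;
        }
    }
}

public static class Replay
{
    // Current state is computed by replaying every stored event in order
    public static OrderState Rehydrate(IEnumerable<object> events)
    {
        var state = new OrderState();
        foreach (var e in events) state.Apply(e);
        return state;
    }
}
```

Rehydrate is essentially what an event store client does when loading an aggregate: read the stream, apply each event in sequence, return the resulting state.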

Defining Domain Events

Events must express domain intent, not technical operations. OrderShipped is better than OrderStatusUpdated. PriceAdjustedForLoyaltyDiscount is better than PriceChanged.

// Domain Events — express clear business intent
public record OrderPlaced(
    Guid OrderId, Guid CustomerId,
    List<OrderItem> Items, DateTimeOffset OccurredAt);

public record PaymentReceived(
    Guid OrderId, decimal Amount,
    string PaymentMethod, string TransactionId, DateTimeOffset OccurredAt);

public record OrderShipped(
    Guid OrderId, string TrackingNumber,
    string Carrier, DateTimeOffset ShippedAt);

public record OrderCancelled(
    Guid OrderId, string Reason,
    Guid CancelledBy, DateTimeOffset OccurredAt);

Anti-pattern: Overly Generic Events

Avoid creating events like OrderUpdated(Dictionary<string, object> changes) — this is "CRUD in disguise." Events must carry specific business meaning. If you can't name the event in domain expert language, that's a sign the design needs work.

Aggregate — The Consistency Boundary

An Aggregate is the transactional unit in Event Sourcing. Each aggregate manages a stream of events and enforces business invariants:

public class Order
{
    public Guid Id { get; private set; }
    public OrderStatus Status { get; private set; }
    public List<OrderItem> Items { get; private set; } = new();
    public decimal TotalAmount { get; private set; }

    // Command method — validate then emit event
    public OrderShipped Ship(string trackingNumber, string carrier)
    {
        if (Status != OrderStatus.Approved)
            throw new InvalidOperationException(
                $"Cannot ship order in '{Status}' status");

        if (string.IsNullOrEmpty(trackingNumber))
            throw new ArgumentException("Tracking number is required");

        return new OrderShipped(Id, trackingNumber, carrier, DateTimeOffset.UtcNow);
    }

    // Apply method — update state from event (no exceptions)
    public void Apply(OrderPlaced e)
    {
        Id = e.OrderId;
        Status = OrderStatus.Placed;
        Items = e.Items;
        TotalAmount = e.Items.Sum(i => i.Price * i.Quantity);
    }

    public void Apply(PaymentReceived e) => Status = OrderStatus.Paid;
    public void Apply(OrderShipped e) => Status = OrderStatus.Shipped;
    public void Apply(OrderCancelled e) => Status = OrderStatus.Cancelled;
}

CQRS + Event Sourcing: The Power of Combination

When combining CQRS with Event Sourcing, the architecture becomes extremely powerful: the Command side writes events, and the Query side builds Projections (read models) from those events.

graph TB
    subgraph "Command Side"
        C["Command"] --> CH["Command Handler"]
        CH --> A["Aggregate"]
        A --> ES[("Event Store<br/>(append-only)")]
    end
    subgraph "Projection Engine"
        ES --> PE["Event Processor"]
        PE --> P1["Projection 1<br/>Order Summary"]
        PE --> P2["Projection 2<br/>Revenue Report"]
        PE --> P3["Projection 3<br/>Customer Dashboard"]
    end
    subgraph "Query Side"
        Q["Query"] --> QH["Query Handler"]
        QH --> P1
        QH --> P2
        QH --> P3
    end

CQRS + Event Sourcing: events are the single source of truth, projections serve reads

Projections — Building Read Models from Events

A Projection is the process of transforming a stream of events into a read model optimized for queries. With Marten on .NET, you can define projections concisely:

// Single-stream projection: 1 aggregate → 1 read model document
public class OrderSummaryProjection : SingleStreamProjection<OrderSummaryView>
{
    public void Apply(OrderPlaced e, OrderSummaryView view)
    {
        view.OrderId = e.OrderId;
        view.CustomerId = e.CustomerId;
        view.ItemCount = e.Items.Count;
        view.TotalAmount = e.Items.Sum(i => i.Price * i.Quantity);
        view.Status = "Placed";
        view.PlacedAt = e.OccurredAt;
    }

    public void Apply(PaymentReceived e, OrderSummaryView view)
    {
        view.Status = "Paid";
        view.PaymentMethod = e.PaymentMethod;
    }

    public void Apply(OrderShipped e, OrderSummaryView view)
    {
        view.Status = "Shipped";
        view.TrackingNumber = e.TrackingNumber;
        view.ShippedAt = e.ShippedAt;
    }
}

// Register projection in Marten
services.AddMarten(opts =>
{
    opts.Projections.Add<OrderSummaryProjection>(ProjectionLifecycle.Inline);
});

Inline vs Async Projection

Inline: projection runs in the same transaction as the event — ensures the read model is always consistent, but writes are slower. Use for strong consistency needs.

Async: projection runs in the background — writes are faster, but the read model may lag a few seconds (eventual consistency). Use for dashboards, reports, and analytics.

Cross-stream Projection — Complex Aggregation

Many reports need data aggregated from multiple streams. For example, a Revenue Dashboard aggregates from all orders:

// Read model keyed by day ("yyyy-MM-dd")
public class DailyRevenue
{
    public string Id { get; set; } = default!;   // the day key
    public decimal TotalRevenue { get; set; }
    public int OrderCount { get; set; }
    public int CancelledCount { get; set; }
}

// Multi-stream projection: aggregate events from multiple aggregates
public class DailyRevenueProjection : MultiStreamProjection<DailyRevenue, string>
{
    public DailyRevenueProjection()
    {
        Identity<PaymentReceived>(e => e.OccurredAt.ToString("yyyy-MM-dd"));
        Identity<OrderCancelled>(e => e.OccurredAt.ToString("yyyy-MM-dd"));
    }

    public void Apply(PaymentReceived e, DailyRevenue view)
    {
        view.TotalRevenue += e.Amount;
        view.OrderCount++;
    }

    public void Apply(OrderCancelled e, DailyRevenue view)
    {
        view.CancelledCount++;
    }
}

Event Store: The Heart of the System

An Event Store is a specialized database for Event Sourcing. It differs from regular databases in that it only appends — no updates, no deletes.

| Criteria | EventStoreDB | Marten (PostgreSQL) | SQL Server + Custom |
|---|---|---|---|
| Type | Purpose-built event store | Document DB + Event Store on PostgreSQL | Self-built on SQL Server |
| Stream subscription | Built-in (catch-up, persistent) | Built-in (async daemon) | Self-implement (polling/CDC) |
| Projection | Built-in JavaScript projections | Built-in .NET projections | Self-implement |
| Concurrency | Optimistic (stream version) | Optimistic (stream version) | Self-implement |
| Best for | Large-scale, event-native systems | .NET ecosystem, existing PostgreSQL | When SQL Server is mandatory |
| Learning curve | Medium | Low (if familiar with .NET) | High (must build everything yourself) |

Event Store Schema on SQL Server

If your team must use SQL Server, here's a minimal schema:

CREATE TABLE EventStore (
    SequenceNumber  BIGINT IDENTITY(1,1) PRIMARY KEY,
    StreamId        NVARCHAR(200)   NOT NULL,
    StreamVersion   INT             NOT NULL,
    EventType       NVARCHAR(500)   NOT NULL,
    Payload         NVARCHAR(MAX)   NOT NULL,  -- JSON serialized event
    Metadata        NVARCHAR(MAX)   NULL,      -- correlation, causation, user info
    CreatedAt       DATETIMEOFFSET  NOT NULL DEFAULT SYSDATETIMEOFFSET(),

    CONSTRAINT UQ_Stream_Version UNIQUE (StreamId, StreamVersion)
);

CREATE INDEX IX_EventStore_StreamId ON EventStore(StreamId, StreamVersion);
CREATE INDEX IX_EventStore_EventType ON EventStore(EventType);
CREATE INDEX IX_EventStore_CreatedAt ON EventStore(CreatedAt);

The UQ_Stream_Version constraint is the key to optimistic concurrency: if two commands try to write version 5 for the same stream, only one succeeds — the other receives a conflict error.
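To see why the unique constraint is enough, here is a minimal in-memory sketch of the same rule — hypothetical types for illustration, not a real event store client. An append that reuses an existing (stream, version) pair is rejected, exactly as the unique key would reject the second INSERT:

```csharp
using System;
using System.Collections.Generic;

public class ConcurrencyException : Exception { }

// In-memory model of the UQ_Stream_Version rule: one event per (stream, version)
public class InMemoryEventStore
{
    private readonly Dictionary<(string StreamId, int Version), object> _events = new();

    public void Append(string streamId, int expectedVersion, object @event)
    {
        var key = (streamId, expectedVersion);
        if (_events.ContainsKey(key))
            throw new ConcurrencyException();  // a real store surfaces the unique-key violation
        _events[key] = @event;
    }
}
```

In production the command handler catches this conflict, reloads the aggregate at its new version, and retries or reports the failure to the caller.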

Snapshots — Optimization for Long Streams

When an aggregate has thousands of events, replaying from the beginning becomes slow. Snapshots save the state at a specific point in time, and subsequent loads only replay events AFTER the snapshot.

graph LR
    E1["Event 1"] --> E2["Event 2"]
    E2 --> E3["..."]
    E3 --> E500["Event 500"]
    E500 --> S["📸 Snapshot<br/>at version 500"]
    S --> E501["Event 501"]
    E501 --> E502["Event 502"]
    E502 --> E503["Event 503"]
    E503 --> STATE["Current State<br/>(only replay 3 events)"]

Snapshot: instead of replaying 503 events, just load snapshot + replay 3 remaining events

// Configure snapshots in Marten
services.AddMarten(opts =>
{
    opts.Events.AddEventType<OrderPlaced>();
    opts.Events.AddEventType<PaymentReceived>();
    opts.Events.AddEventType<OrderShipped>();

    // Create snapshot every 100 events
    opts.Projections.Snapshot<Order>(SnapshotLifecycle.Inline, 100);
});

Event Versioning — Handling Schema Evolution

Events are immutable — once stored, they can't be modified. But business changes, and you need to add fields or restructure. Event versioning solves this:

Strategy 1: Upcasting

Convert old events to the new format when reading, without modifying the stored data:

// Version 1: originally only had Amount
public record PaymentReceived_V1(Guid OrderId, decimal Amount, DateTimeOffset OccurredAt);

// Version 2: added Currency
public record PaymentReceived(
    Guid OrderId, decimal Amount, string Currency, DateTimeOffset OccurredAt);

// Upcaster: convert V1 → V2 when reading
public class PaymentReceivedUpcaster : EventUpcaster<PaymentReceived_V1, PaymentReceived>
{
    protected override PaymentReceived Upcast(PaymentReceived_V1 old)
    {
        return new PaymentReceived(old.OrderId, old.Amount, "USD", old.OccurredAt);
    }
}

Strategy 2: Weak Schema

Use optional fields and default values to avoid breaking changes:

public record OrderPlaced(
    Guid OrderId,
    Guid CustomerId,
    List<OrderItem> Items,
    DateTimeOffset OccurredAt,
    string? PromotionCode = null,     // added later — old events don't have this field
    string? Channel = "web"           // added later — default = "web"
);

When NOT to Use Event Sourcing

Event Sourcing is not a silver bullet

This is a complex pattern that requires the team to deeply understand the domain and have experience. Using it in the wrong context creates unnecessary complexity.

| Good Fit | Poor Fit |
|---|---|
| Financial systems, banking — need 100% audit trail | Simple CRUD — blogs, CMS, landing pages |
| Collaborative systems (collaborative editing) | Low-traffic apps with rare state changes |
| Need replay / time-travel (debugging, compliance) | Team unfamiliar with DDD and domain modeling |
| Write-heavy systems needing write scalability | Prototypes, early-stage MVPs |
| Microservices needing event-driven integration | Data requiring permanent deletion (GDPR right-to-erasure) |

Practical Implementation: Marten on .NET

Marten is one of the most mature .NET libraries for Event Sourcing. It runs on PostgreSQL, so no specialized database is needed:

// Program.cs — Setup Marten
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMarten(opts =>
{
    opts.Connection(builder.Configuration.GetConnectionString("Postgres")!);

    // Event types
    opts.Events.AddEventType<OrderPlaced>();
    opts.Events.AddEventType<PaymentReceived>();
    opts.Events.AddEventType<OrderShipped>();
    opts.Events.AddEventType<OrderCancelled>();

    // Inline projection — consistent read model
    opts.Projections.Add<OrderSummaryProjection>(ProjectionLifecycle.Inline);

    // Async projection — eventual consistency, runs in background
    opts.Projections.Add<DailyRevenueProjection>(ProjectionLifecycle.Async);

    // Snapshot every 200 events
    opts.Projections.Snapshot<Order>(SnapshotLifecycle.Inline, 200);
})
.AddAsyncDaemon(DaemonMode.HotCold);  // run async projections

Full Workflow: Place Order → Payment → Ship

// API Endpoint — Place Order
app.MapPost("/orders", async (PlaceOrder cmd, IDocumentSession session) =>
{
    var order = new Order();
    // Place validates and emits OrderPlaced, following the same
    // validate-then-emit pattern as Ship shown earlier
    var @event = order.Place(cmd.CustomerId, cmd.Items);

    session.Events.StartStream<Order>(cmd.OrderId, @event);
    await session.SaveChangesAsync();

    return Results.Created($"/orders/{cmd.OrderId}", new { cmd.OrderId });
});

// API Endpoint — Ship Order
app.MapPost("/orders/{id}/ship", async (Guid id, ShipRequest req, IDocumentSession session) =>
{
    var order = await session.Events.AggregateStreamAsync<Order>(id)
        ?? throw new NotFoundException($"Order {id} not found");  // NotFoundException: app-defined, mapped to 404

    var @event = order.Ship(req.TrackingNumber, req.Carrier);

    session.Events.Append(id, @event);
    await session.SaveChangesAsync();

    return Results.Ok();
});

// API Endpoint — Query order summary
app.MapGet("/orders/{id}/summary", async (Guid id, IQuerySession query) =>
{
    var summary = await query.LoadAsync<OrderSummaryView>(id);
    return summary is null ? Results.NotFound() : Results.Ok(summary);
});

Handling Eventual Consistency on the Frontend

When using async projections, the read model may lag a few seconds behind the write. There are three common approaches:

1. Optimistic UI — the frontend updates the UI immediately after sending the command, without waiting for the read model
2. Polling — the frontend polls the read model until its version reaches the expected version
3. WebSocket — the server pushes a notification when the projection has finished updating
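Approach 2 can be sketched as a small generic helper. The `load` delegate and its version-returning shape are assumptions for illustration, not a specific client API:

```csharp
using System;
using System.Threading.Tasks;

public static class ReadModelPolling
{
    // Polls `load` until the returned version reaches `expectedVersion`,
    // or gives up after maxAttempts (an eventual-consistency guard)
    public static async Task<T?> WaitForVersion<T>(
        Func<Task<(T? View, long Version)>> load,
        long expectedVersion,
        int maxAttempts = 20,
        int delayMs = 100)
    {
        for (var i = 0; i < maxAttempts; i++)
        {
            var (view, version) = await load();
            if (view is not null && version >= expectedVersion) return view;
            await Task.Delay(delayMs);
        }
        return default;  // caller decides: show stale data, retry, or error
    }
}
```

The expected version typically comes back from the command endpoint (the stream version after the append), so the client knows exactly when the projection has caught up.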

Comparison: CRUD vs CQRS vs CQRS + Event Sourcing

| Criteria | CRUD | CQRS | CQRS + Event Sourcing |
|---|---|---|---|
| Storage | Current state | Current state (2 DBs) | Immutable event sequence |
| Audit trail | No (unless adding audit table) | Not automatic | Built-in — every change is an event |
| Read/Write scaling | Tightly coupled | Independent scaling | Independent scaling + append-only writes |
| Complexity | Low | Medium | High |
| Time travel | No | No | Yes — replay events to any point in time |
| Best for | CRUD apps, MVPs | Read-heavy systems | Financial, collaborative, event-driven systems |

CQRS + Event Sourcing Implementation Checklist

Step 1: Domain Modeling
Identify Aggregate Boundaries, define Domain Events using business language (Ubiquitous Language). Events must express intent, not CRUD operations.
Step 2: Choose an Event Store
Marten (PostgreSQL) for the .NET ecosystem, EventStoreDB for large-scale event-native systems, or custom on SQL Server if mandatory. Don't build an event store from scratch unless necessary.
Step 3: Command Handlers
Implement command validation, load aggregate from events, execute business logic, append new events. Ensure optimistic concurrency via stream version.
Step 4: Projections
Design read models for each specific UI use case. Inline projection for consistency, async projection for reports/dashboards. A single event can feed multiple projections.
Step 5: Event Versioning Strategy
Decide on upcasting vs weak schema. Establish clear conventions for the team when adding new fields to events. Never modify stored events.
Step 6: Testing & Monitoring
Test aggregate behavior using Given-When-Then: given (prior events), when (command), then (new events). Monitor projection lag to detect eventual consistency issues.
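The Given-When-Then style from Step 6 needs no framework. The types below are compact stand-ins for the article's Order aggregate, trimmed to what the test requires:

```csharp
using System;

public enum OrderStatus { Placed, Approved, Shipped }
public record OrderApproved(Guid OrderId);
public record OrderShipped(Guid OrderId, string TrackingNumber, string Carrier, DateTimeOffset ShippedAt);

public class Order
{
    public Guid Id { get; private set; }
    public OrderStatus Status { get; private set; } = OrderStatus.Placed;

    // Command method — validate, then emit the event
    public OrderShipped Ship(string trackingNumber, string carrier)
    {
        if (Status != OrderStatus.Approved)
            throw new InvalidOperationException($"Cannot ship order in '{Status}' status");
        return new OrderShipped(Id, trackingNumber, carrier, DateTimeOffset.UtcNow);
    }

    public void Apply(OrderApproved e) { Id = e.OrderId; Status = OrderStatus.Approved; }
}

public static class AggregateTest
{
    // Given prior events, When the command runs, Then return the emitted event
    public static OrderShipped GivenApprovedWhenShipped(Guid id)
    {
        var order = new Order();
        order.Apply(new OrderApproved(id));       // Given: order was approved
        return order.Ship("TRACK-123", "DHL");    // When: ship command
    }
}
```

The same shape works with any test runner: arrange by applying events, act by calling the command method, assert on the returned event (or the exception).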

Conclusion

CQRS and Event Sourcing aren't patterns for every project, but when business demands a complete audit trail, the ability to scale reads and writes independently, or event-driven integration between services — they're an architecture worth investing in. Within the .NET ecosystem, Marten + PostgreSQL is the most practical choice to get started without complex infrastructure. Start with a small bounded context, prove the value, then expand gradually.
