Amazon Aurora DSQL — Distributed Serverless SQL for Multi-Region Architecture

Posted on: 4/26/2026 4:14:44 PM

Why Distributed SQL?

When your system outgrows a single data center — whether for global user latency, data residency compliance, or business-critical 99.999% uptime — the traditional primary-replica database model starts showing its limits. Failover takes tens of seconds, cross-region reads are stale, and writes only go through a single region.

Distributed SQL solves this: a database cluster spread across multiple Availability Zones (AZs) or Regions, still guaranteeing ACID transactions with strong consistency, while scaling out horizontally as load increases.

  • 99.999% multi-region SLA
  • Zero infrastructure to manage
  • $8 per 1 million DPUs
  • PostgreSQL wire-protocol compatible

What is Aurora DSQL?

Amazon Aurora DSQL is a distributed, serverless, PostgreSQL-compatible SQL database service, generally available since May 2025. It's the first product in the Aurora family designed from the ground up for active-active multi-region architecture.

Key Distinction

Aurora DSQL is not Aurora Serverless v2 with multi-region bolted on. It's an entirely new engine built on a disaggregated architecture — each component (compute, storage, transaction log, conflict resolution) is an independent service that scales separately.

  • Dec 2024: Aurora DSQL preview announced at AWS re:Invent
  • May 2025: General availability, single-region and multi-region
  • Mar 2026: DSQL Playground, new driver connectors, expanded region coverage
  • Apr 2026: Aurora Serverless 30% scaling improvement; DSQL continues region expansion

The 5-Layer Disaggregated Architecture

The core differentiator of Aurora DSQL is its disaggregated architecture — fully decoupling compute, transaction validation, journaling, routing, and storage into 5 independent layers. Each layer scales independently, deploys independently, and has its own security boundary.

graph TD
    Client["🖥️ Client App"] -->|"PostgreSQL wire protocol"| QP["Query Processor (QP)"]
    subgraph DSQL["Aurora DSQL Engine"]
        QP -->|"Buffered writes at commit time"| ADJ["Adjudicator (OCC Validator)"]
        ADJ -->|"Commit approved"| JRN["Journal (Durable Log)"]
        JRN -->|"Ordered stream"| XB["Crossbar (Stream Merger)"]
        XB -->|"Chronological writes"| STG["Storage Nodes (Range-partitioned)"]
        QP -->|"Direct reads"| STG
    end
    style Client fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style QP fill:#e94560,stroke:#fff,color:#fff
    style ADJ fill:#2c3e50,stroke:#fff,color:#fff
    style JRN fill:#2c3e50,stroke:#fff,color:#fff
    style XB fill:#16213e,stroke:#fff,color:#fff
    style STG fill:#e94560,stroke:#fff,color:#fff
    style DSQL fill:#f8f9fa,stroke:#e0e0e0,color:#2c3e50
Aurora DSQL's 5-layer disaggregated architecture

1. Query Processor (QP)

Each client connection gets its own dedicated PostgreSQL engine. The QP handles all SQL parsing, planning, and execution locally. The key insight: the QP buffers all write operations until transaction commit rather than shipping each write downstream as it executes. The QP also maintains a shard map to locate data on storage nodes and pushes predicates down to the storage layer, so filtering and aggregation happen at the source.

Smart Design

QPs never communicate directly with each other. Each QP requests individual rows from storage (not disk pages), completely eliminating cache coherence problems between compute nodes.
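
To see this buffering from the client side, here's a minimal timing sketch with Npgsql. The connectionString and the events table are assumptions, not part of the original example; the point is that each INSERT returns quickly because the QP only buffers it, while OCC validation and Journal replication are paid at COMMIT:

// A hedged sketch; connectionString and the events table are illustrative.
using System.Diagnostics;
using Npgsql;

await using var conn = new NpgsqlConnection(connectionString);
await conn.OpenAsync();
await using var tx = await conn.BeginTransactionAsync();

var sw = Stopwatch.StartNew();
for (var i = 0; i < 100; i++)
{
    await using var cmd = new NpgsqlCommand(
        "INSERT INTO events (id, payload) VALUES ($1, $2)", conn, tx);
    cmd.Parameters.AddWithValue(Guid.CreateVersion7());
    cmd.Parameters.AddWithValue($"event-{i}");
    await cmd.ExecuteNonQueryAsync(); // buffered at the QP; no commit work yet
}
Console.WriteLine($"100 buffered inserts: {sw.ElapsedMilliseconds} ms");

sw.Restart();
await tx.CommitAsync(); // Adjudicator validation + Journal replication happen here
Console.WriteLine($"Commit: {sw.ElapsedMilliseconds} ms");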

2. Adjudicator

The "referee" that decides whether a transaction commits or aborts. The Adjudicator enforces Optimistic Concurrency Control (OCC): when the QP submits a commit request, the Adjudicator checks whether any other transaction has modified the same key range. Each Adjudicator is responsible for a specific set of key ranges. In multi-region setups, Adjudicators across different regions communicate to detect cross-region conflicts.
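
You can observe OCC validation directly with two concurrent connections. In this hedged sketch (connectionString and the counters table are illustrative), neither UPDATE blocks, because there are no locks; the second transaction to commit fails validation instead:

using Npgsql;

await using var conn1 = new NpgsqlConnection(connectionString);
await using var conn2 = new NpgsqlConnection(connectionString);
await conn1.OpenAsync();
await conn2.OpenAsync();

await using var tx1 = await conn1.BeginTransactionAsync();
await using var tx2 = await conn2.BeginTransactionAsync();

// Both transactions modify the same key; writes are buffered, so neither blocks.
await new NpgsqlCommand("UPDATE counters SET n = n + 1 WHERE id = 1", conn1, tx1)
    .ExecuteNonQueryAsync();
await new NpgsqlCommand("UPDATE counters SET n = n + 1 WHERE id = 1", conn2, tx2)
    .ExecuteNonQueryAsync();

await tx1.CommitAsync(); // first committer wins
try
{
    await tx2.CommitAsync(); // Adjudicator detects the overlapping write set
}
catch (PostgresException ex) when (ex.SqlState == "40001")
{
    Console.WriteLine("Serialization failure; retry the transaction");
}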

3. Journal

Each Journal is bound to exactly one Adjudicator. When a transaction is approved, the Journal writes a durable log record and replicates it across AZs. This is where durability is guaranteed — the client receives acknowledgment as soon as the Journal write completes.

4. Crossbar

The Crossbar receives data streams from multiple Journals, merges them in chronological order (not transaction order), then routes to storage. This design ensures storage always receives writes in correct chronological order, simplifying snapshot isolation logic.

5. Storage Nodes

Data is distributed via range-partitioning based on primary key. Each storage node maintains multiple replicas across AZs. The QP reads directly from storage nodes (bypassing Adjudicator/Journal), so read latency is very low.

Transaction Lifecycle

sequenceDiagram
    participant App as Application
    participant QP as Query Processor
    participant ADJ as Adjudicator
    participant JRN as Journal
    participant XB as Crossbar
    participant STG as Storage

    App->>QP: BEGIN + SQL statements
    QP->>STG: Read rows (direct)
    STG-->>QP: Row data
    Note over QP: Buffer all writes locally
    App->>QP: COMMIT
    QP->>ADJ: Submit write set + read set
    ADJ->>ADJ: OCC validation (check conflicts)
    alt No conflict
        ADJ->>JRN: Persist commit record
        JRN->>JRN: Replicate across AZs
        JRN-->>App: COMMIT OK
        JRN->>XB: Stream committed data
        XB->>STG: Apply writes (chronological)
    else Conflict detected
        ADJ-->>App: ABORT (serialization failure)
        Note over App: Retry transaction
    end
  
Sequence diagram: lifecycle of a read-write transaction in Aurora DSQL

Key observations:

  • Read operations: go directly from QP to Storage, bypassing the Adjudicator → extremely low read latency with no cross-AZ overhead
  • Write operations: buffered at QP, sent only once at commit time → fewer network round-trips
  • Commit latency: depends only on Adjudicator validation + Journal replication, not on the number of write statements in the transaction
  • Transaction timeout: maximum 300 seconds (5 minutes)
  • Transaction size: limited to 10 MiB and 3,000 row modifications (see the batching sketch below)
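
The row-modification cap means bulk loads have to be chunked. A hedged sketch, assuming an open NpgsqlConnection conn and an IEnumerable<(Guid Id, decimal Total)> rows (both illustrative), with a batch size chosen for safety margin:

using System.Linq;
using Npgsql;

const int BatchSize = 2500; // stays safely below the 3,000-row cap
foreach (var chunk in rows.Chunk(BatchSize))
{
    await using var tx = await conn.BeginTransactionAsync();
    foreach (var row in chunk)
    {
        await using var cmd = new NpgsqlCommand(
            "INSERT INTO orders (id, total) VALUES ($1, $2)", conn, tx);
        cmd.Parameters.AddWithValue(row.Id);
        cmd.Parameters.AddWithValue(row.Total);
        await cmd.ExecuteNonQueryAsync();
    }
    await tx.CommitAsync(); // each chunk commits as its own transaction
}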

Multi-Region Active-Active

This is Aurora DSQL's killer feature. Unlike Aurora Global Database (1 primary region + read-only secondaries), DSQL allows both regions to read and write simultaneously.

graph LR
    subgraph US["US East (Primary Write)"]
        QP1["Query Processor"] --> ADJ1["Adjudicator"]
        ADJ1 --> JRN1["Journal"]
        STG1["Storage"]
    end

    subgraph EU["EU West (Primary Write)"]
        QP2["Query Processor"] --> ADJ2["Adjudicator"]
        ADJ2 --> JRN2["Journal"]
        STG2["Storage"]
    end

    subgraph W["Witness Region"]
        WIT["Witness Node"]
    end

    ADJ1 <-->|"Cross-region conflict check"| ADJ2
    JRN1 <-->|"Sync replication"| JRN2
    ADJ1 <-->|"Quorum"| WIT
    ADJ2 <-->|"Quorum"| WIT

    style US fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style EU fill:#f8f9fa,stroke:#e94560,color:#2c3e50
    style W fill:#f8f9fa,stroke:#e0e0e0,color:#2c3e50
    style QP1 fill:#e94560,stroke:#fff,color:#fff
    style QP2 fill:#e94560,stroke:#fff,color:#fff
    style ADJ1 fill:#2c3e50,stroke:#fff,color:#fff
    style ADJ2 fill:#2c3e50,stroke:#fff,color:#fff
    style JRN1 fill:#16213e,stroke:#fff,color:#fff
    style JRN2 fill:#16213e,stroke:#fff,color:#fff
    style STG1 fill:#e94560,stroke:#fff,color:#fff
    style STG2 fill:#e94560,stroke:#fff,color:#fff
    style WIT fill:#ff9800,stroke:#fff,color:#fff
Multi-Region Active-Active architecture with Witness Region

How it works:

  • Read operations: served locally in each region, no cross-region latency → user experience equivalent to a local database
  • Write operations: at commit time, incur 2 cross-region round-trips (Adjudicator check + Journal sync). Latency depends on the distance between regions: with a US East ↔ EU West RTT of roughly 70-80 ms, commit latency lands around 140-160 ms
  • Witness Region: doesn't serve traffic, only participates in quorum and acts as a tiebreaker during network partitions

Write Latency Caveat

Cross-region write latency is per-transaction, not per-statement. Even if a transaction has 100 INSERTs, commit latency is still just 2 RTTs. However, if your application needs write latency <10ms, multi-region DSQL isn't the right choice — use single-region instead.

Optimistic Concurrency Control

Aurora DSQL uses OCC instead of traditional pessimistic locking. This is a critical architectural decision:

| Aspect | Pessimistic Locking (traditional) | OCC (Aurora DSQL) |
|---|---|---|
| Mechanism | Lock row/page before modifying | No locks; validate at commit time |
| Deadlock | Can occur | Never deadlocks |
| Read performance | Can be blocked by write locks | Reads are never blocked |
| Write conflict | Wait for lock release | Abort + retry at application level |
| Best for | High contention (many writes to same row) | Low-to-medium contention (most workloads) |
| Isolation level | Configurable (RC, RR, Serializable) | Fixed: Repeatable Read |

Practical implications for developers:

  • Applications must handle retry logic when receiving serialization failure (SQL state 40001)
  • Avoid hot key writes (multiple transactions updating the same row) — use UUIDs (e.g., time-ordered UUID v7) as primary keys instead of auto-increment sequences, which funnel every insert through a single hot counter
  • No SELECT ... FOR UPDATE — redesign flows that rely on pessimistic locking
// OCC retry pattern in .NET: retry on serialization failure (SQL state 40001)
// with jittered, growing backoff. Requires the Npgsql package.
using Npgsql;

async Task ExecuteWithRetry(Func<Task> action, int maxRetries = 5)
{
    for (int i = 0; i < maxRetries; i++)
    {
        try
        {
            await action();
            return; // committed without conflict
        }
        catch (PostgresException ex) when (ex.SqlState == "40001")
        {
            if (i == maxRetries - 1) throw; // out of attempts, surface the failure
            // Random jitter that grows per attempt, spreading out competing writers
            await Task.Delay(Random.Shared.Next(10, 50 * (i + 1)));
        }
    }
}
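
For the last point in the list above, one hedged way to redesign a SELECT ... FOR UPDATE flow is a conditional update (compare-and-set) wrapped in the retry helper. Table and column names, conn, and orderId are illustrative assumptions:

await ExecuteWithRetry(async () =>
{
    await using var tx = await conn.BeginTransactionAsync();

    // Read the current state (no lock is taken).
    var statusCmd = new NpgsqlCommand(
        "SELECT status FROM orders WHERE id = $1", conn, tx);
    statusCmd.Parameters.AddWithValue(orderId);
    var status = (string?)await statusCmd.ExecuteScalarAsync();

    if (status == "pending")
    {
        // Make the write conditional on the state we observed.
        var updateCmd = new NpgsqlCommand(
            "UPDATE orders SET status = 'shipped' WHERE id = $1 AND status = 'pending'",
            conn, tx);
        updateCmd.Parameters.AddWithValue(orderId);
        await updateCmd.ExecuteNonQueryAsync();
    }
    await tx.CommitAsync(); // OCC validation catches concurrent modifications
});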

Comparison: Aurora DSQL vs Spanner vs CockroachDB

| Criteria | Aurora DSQL | Google Spanner | CockroachDB |
|---|---|---|---|
| Vendor | AWS | Google Cloud | Cockroach Labs |
| SQL compatibility | PostgreSQL (subset) | GoogleSQL / PostgreSQL interface | PostgreSQL (broader) |
| Serverless | Yes, scale-to-zero | Yes (Spanner Editions) | Yes (Serverless tier) |
| Multi-region write | 2 regions + witness | Unlimited regions | Unlimited regions |
| Concurrency control | OCC (optimistic) | Pessimistic (2PL + wound-wait) | Serializable (pessimistic) |
| Clock sync | Amazon Time Sync (GPS/atomic) | TrueTime (GPS + atomic) | Hybrid Logical Clock |
| Foreign keys | No | Yes | Yes |
| Triggers / PL/pgSQL | No | No | Yes (subset) |
| Data placement | Automatic | Configurable | Declarative (zone configs) |
| Starting price | $8/million DPUs | ~$0.90/node-hour | Free tier available |
| SLA | 99.99% (single) / 99.999% (multi) | 99.999% (multi-region) | 99.999% (Enterprise) |
| Vendor lock-in | AWS | GCP | Multi-cloud / self-host |

When to choose which?

  • Aurora DSQL: your team is already on the AWS ecosystem, workload is primarily read-heavy with moderate writes, you want true serverless (scale-to-zero), greenfield project
  • Google Spanner: need >2 write regions, already on GCP, high-contention workload requiring pessimistic locking
  • CockroachDB: need multi-cloud portability, broad PostgreSQL compatibility (foreign keys, triggers), granular data placement for compliance

Pricing & Free Tier

Aurora DSQL uses DPU (Distributed Processing Unit) — consolidating compute and I/O into a single metric:

  • $8 per 1 million DPUs
  • $0.33 per GB of storage per month
  • 100K DPUs free each month
  • 1 GB of storage free each month

Pricing highlights:

  • Scale-to-zero: when there's no traffic, compute cost = $0 (only pay for storage)
  • Multi-region writes: DPUs for writes are charged double (replication to the other region), but there's no separate data transfer fee
  • Storage replication: 3-AZ replication within a region is included in the $0.33/GB price
  • Free tier: 100K DPUs + 1GB storage/month — sufficient for dev environments or small side projects
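
To make the pricing concrete with illustrative numbers: a single-region workload consuming 5 million DPUs and storing 50 GB in a month would cost (5,000,000 − 100,000) ÷ 1,000,000 × $8 for compute plus (50 − 1) × $0.33 for storage, roughly $39.20 + $16.17 ≈ $55.37 after the free tier.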

Key Limitations

Aurora DSQL is not a drop-in replacement for PostgreSQL

Despite wire protocol compatibility, many important PostgreSQL features are missing. Evaluate carefully before migrating.

Unsupported PostgreSQL features:

  • Foreign Keys — must be enforced at the application layer
  • Triggers — use application events or Change Data Capture instead
  • Views (including materialized views)
  • Sequences — use UUID v7 (time-ordered) instead of auto-increment
  • PL/pgSQL — only pure SQL user-defined functions are supported
  • JSON/JSONB data types — must use TEXT + serialize at the application level (see the sketch after this list)
  • Extensions (PostGIS, pg_trgm, etc.)
  • Temporary tables
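
The JSONB workaround looks like this hedged sketch with System.Text.Json: an attributes TEXT column, conn, and orderId are assumptions for illustration, and there are no server-side JSON operators to lean on:

using System.Text.Json;
using Npgsql;

var attributes = new Dictionary<string, string> { ["color"] = "red", ["size"] = "L" };

// Write: serialize in the application, store as plain TEXT.
var cmd = new NpgsqlCommand("UPDATE orders SET attributes = $1 WHERE id = $2", conn);
cmd.Parameters.AddWithValue(JsonSerializer.Serialize(attributes));
cmd.Parameters.AddWithValue(orderId);
await cmd.ExecuteNonQueryAsync();

// Read: fetch the TEXT and deserialize in the application.
var readCmd = new NpgsqlCommand("SELECT attributes FROM orders WHERE id = $1", conn);
readCmd.Parameters.AddWithValue(orderId);
var json = (string?)await readCmd.ExecuteScalarAsync();
var roundTripped = json is null
    ? null
    : JsonSerializer.Deserialize<Dictionary<string, string>>(json);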

Architectural constraints:

  • Maximum 2 write regions (+ 1 witness) — Spanner and CockroachDB have no limit
  • Isolation level fixed at Repeatable Read — cannot switch to Read Committed or Serializable
  • Transaction maximum: 300 seconds, 10 MiB, 3,000 row modifications
  • Cannot mix DDL and DML in the same transaction (see the migration sketch after this list)
  • Region coverage is still limited (expanding gradually)
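
The DDL/DML restriction mostly bites in migration scripts. A hedged sketch, reusing an open Npgsql connection conn from the earlier examples (statements are illustrative):

// DDL runs as its own auto-commit statement, outside any explicit transaction.
await new NpgsqlCommand("ALTER TABLE orders ADD COLUMN notes TEXT", conn)
    .ExecuteNonQueryAsync();

// The data backfill then runs separately, as its own transaction.
await new NpgsqlCommand("UPDATE orders SET notes = '' WHERE notes IS NULL", conn)
    .ExecuteNonQueryAsync();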

.NET 10 Integration

Aurora DSQL is PostgreSQL wire protocol compatible, so you use Npgsql (the PostgreSQL driver for .NET) as usual. The main difference: authentication uses AWS IAM tokens instead of static passwords.

using Amazon.DSQL;
using Amazon.DSQL.Model;
using Npgsql;

// Generate auth token
var client = new AmazonDSQLClient();
var token = await client.GenerateDbConnectAuthTokenAsync(
    new GenerateDbConnectAuthTokenRequest
    {
        Hostname = "your-cluster.dsql.us-east-1.on.aws",
        Region = "us-east-1"
    });

// Connect with Npgsql like standard PostgreSQL
var connStr = new NpgsqlConnectionStringBuilder
{
    Host = "your-cluster.dsql.us-east-1.on.aws",
    Port = 5432,
    Database = "postgres",
    Username = "admin",
    Password = token,
    SslMode = SslMode.Require
}.ConnectionString;

await using var conn = new NpgsqlConnection(connStr);
await conn.OpenAsync();

// CRUD operations — standard PostgreSQL syntax with positional ($1..$n) parameters
var customerId = Guid.CreateVersion7(); // sample values for illustration
var total = 199.99m;

await using var cmd = new NpgsqlCommand(
    "INSERT INTO orders (id, customer_id, total, status) VALUES ($1, $2, $3, $4)",
    conn);
cmd.Parameters.AddWithValue(Guid.CreateVersion7()); // UUID v7 keeps keys time-ordered
cmd.Parameters.AddWithValue(customerId);
cmd.Parameters.AddWithValue(total);
cmd.Parameters.AddWithValue("pending");
await cmd.ExecuteNonQueryAsync();

EF Core Support

The Npgsql EF Core provider (Npgsql.EntityFrameworkCore.PostgreSQL) works with Aurora DSQL for basic CRUD operations. However, Migrations will need thorough testing since DSQL doesn't support some DDL features. We recommend using Npgsql directly or Dapper for heavy production workloads.

// Schema design optimized for Aurora DSQL
// Use UUID v7 instead of auto-increment (avoid hot keys)
public class Order
{
    public Guid Id { get; set; } = Guid.CreateVersion7();
    public Guid CustomerId { get; set; }
    public decimal Total { get; set; }
    public string Status { get; set; } = "pending";
    public DateTime CreatedAt { get; set; } = DateTime.UtcNow;
}

// SQL: no SERIAL, no FK constraints
// CREATE TABLE orders (
//     id UUID PRIMARY KEY, -- UUID v7 supplied by the application (gen_random_uuid() yields v4)
//     customer_id UUID NOT NULL,
//     total NUMERIC(18,2) NOT NULL,
//     status TEXT NOT NULL DEFAULT 'pending',
//     created_at TIMESTAMPTZ NOT NULL DEFAULT now()
// );
// CREATE INDEX idx_orders_customer ON orders (customer_id);
// CREATE INDEX idx_orders_status ON orders (status, created_at);
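
Since Dapper is the recommendation above for heavier workloads, here's a minimal hedged sketch reusing the Order class and the open conn from the earlier examples (the Dapper package is assumed; column aliases handle snake_case-to-PascalCase mapping):

using Dapper;

// Insert: named parameters bind to the Order properties.
var order = new Order { CustomerId = customerId, Total = 42.00m };
await conn.ExecuteAsync(
    "INSERT INTO orders (id, customer_id, total, status, created_at) " +
    "VALUES (@Id, @CustomerId, @Total, @Status, @CreatedAt)",
    order);

// Query: alias snake_case columns so Dapper maps them onto the POCO.
var recent = await conn.QueryAsync<Order>(
    "SELECT id, customer_id AS CustomerId, total, status, created_at AS CreatedAt " +
    "FROM orders WHERE customer_id = @CustomerId ORDER BY created_at DESC LIMIT 10",
    new { order.CustomerId });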

When Should You Use Aurora DSQL?

Good fit:

  • Applications requiring active-active multi-region with strong consistency (global e-commerce, fintech, inventory management)
  • Read-heavy, write-moderate workloads with low contention
  • Teams wanting zero infrastructure management — no provisioning, no patching, no capacity planning
  • Greenfield projects that can design schemas compatible with DSQL's constraints
  • Development/staging environments that need scale-to-zero to save costs

Not a good fit:

  • Legacy applications heavily reliant on foreign keys, triggers, PL/pgSQL
  • Workloads with high-contention writes to the same row (hot key pattern)
  • Need for >2 write regions
  • Need for JSONB, PostGIS, full-text search with tsvector
  • Applications requiring multi-cloud portability (choose CockroachDB)

Conclusion

Amazon Aurora DSQL represents a significant leap in the distributed database space: for the first time, AWS offers a true distributed SQL database with active-active multi-region, serverless scale-to-zero, and PostgreSQL compatibility — all in a managed service.

The 5-layer disaggregated architecture enables independent scaling of each component. OCC eliminates deadlocks but requires application-level retry handling. Multi-region adds 2 RTTs for writes but reads remain local.

That said, DSQL is still young: limited to 2 write regions, missing many familiar PostgreSQL features, and region coverage is still expanding. It's an excellent choice for greenfield projects on AWS that need global availability, but it's not yet a "PostgreSQL killer" for every workload.

Given its rapid development pace — new Playground in Q1/2026, expanding driver connectors, new regions rolling out — Aurora DSQL is well worth watching over the next 12-18 months.
