
CAP, PACELC, and Consistency Models for Engineers

What CAP really says, why PACELC is the version you actually use, and how consistency models map to .NET databases. Vocabulary for every storage choice.

Table of contents
  1. What does CAP actually say - and why is it constantly misquoted?
  2. What is PACELC and why is it the better framing?
  3. What are the consistency models in practice - and which do I need?
  4. How do I implement strong consistency on a .NET stack?
  5. How do I implement eventual consistency safely?
  6. What failure modes do consistency choices introduce?
  7. When is the consistency conversation a distraction?
  8. Where should you go from here?

Every storage choice in the rest of this series gets argued in consistency vocabulary: "we can use Cassandra here because the counter is eventually consistent" or "this needs strong consistency, so it stays in Postgres". This chapter defines those words once. The short version: CAP is a three-way constraint with one bad name, and PACELC is the version you should actually quote.

What does CAP actually say - and why is it constantly misquoted?

CAP, stated by Eric Brewer in 2000, says a distributed system can provide at most two of:

  Consistency (C) - every read sees the most recent write, or an error.
  Availability (A) - every request gets a non-error response, with no guarantee it reflects the latest write.
  Partition tolerance (P) - the system keeps operating when the network drops or delays messages between nodes.

The misquote is "pick two". The real claim is narrower: during a network partition, you must choose C or A. You cannot give both, because the two halves of a partitioned system cannot agree on what the latest write is without communicating.

When there is no partition - the common case - you have all three. That is why "Postgres is CA" sounds wrong but works in practice: Postgres prefers C during partitions, and partitions are rare. The shorthand is misleading because it implies a permanent choice when the choice only kicks in during a network event.

What is PACELC and why is it the better framing?

PACELC, proposed by Daniel Abadi in 2010, fixes CAP by adding the common case:

  If there is a Partition (P), trade Availability (A) against Consistency (C);
  Else (E), even with a healthy network, trade Latency (L) against Consistency (C).

The "else" half is the daily trade-off. Even with all nodes healthy, strong consistency requires waiting for a quorum of replicas to acknowledge a write - which adds latency. Eventual consistency returns immediately and lets the cluster catch up in the background.

Database labels under PACELC:

  Cassandra, DynamoDB, Riak - PA/EL: stay available during a partition, favor low latency the rest of the time.
  HBase, BigTable - PC/EC: refuse requests rather than serve stale data, and pay latency for consistency every day.
  Yahoo PNUTS - PC/EL: consistent under partition, but trades consistency for latency in the common case.

The label is a configuration, not a brand. The same database can be PC/EC or PA/EL depending on how you set replication and read concern.
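As a concrete example of "label as configuration", here is a sketch using the MongoDB .NET driver; the connection string, database, and `Order` type are placeholders. The same cluster leans PC/EC or PA/EL depending on the write and read concerns you request:

```csharp
using MongoDB.Driver;

public sealed record Order; // placeholder document type

public static class ConcernDemo
{
    public static void Configure()
    {
        var client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
        var db = client.GetDatabase("shop");

        // PC/EC-leaning: wait for a majority of replicas to acknowledge each
        // write, and only read majority-committed data. Higher latency,
        // but reads cannot observe writes that later roll back.
        var strict = db.GetCollection<Order>("orders")
            .WithWriteConcern(WriteConcern.WMajority)
            .WithReadConcern(ReadConcern.Majority);

        // PA/EL-leaning: acknowledge on one node and read whatever the local
        // node has. Fast, but reads may be stale after a failover.
        var relaxed = db.GetCollection<Order>("orders")
            .WithWriteConcern(WriteConcern.W1)
            .WithReadConcern(ReadConcern.Local);
    }
}
```

The point is that `strict` and `relaxed` talk to the same cluster; the PACELC label describes the handles, not the box.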

What are the consistency models in practice - and which do I need?

Five practical models, weakest to strongest:

  1. Eventual - a read may return stale data.
  2. Read-your-writes - your own writes are always visible to you.
  3. Monotonic reads - successive reads never go backwards in time.
  4. Causal - an effect is never visible before its cause.
  5. Strong / linearizable - every operation fits one global order.

The mistake is picking one model for the whole system. A real .NET app mixes them: payments are strong, feeds are eventual, chat is causal. The architecture follows from that mix.

How do I implement strong consistency on a .NET stack?

For most .NET apps, "strong" means a single primary Postgres + EF Core serializable transactions:

// Strong consistency for an inventory decrement.
// Serializable isolation ensures no two requests can both pass
// the "stock > 0" check and decrement past zero.
await using var tx = await db.Database.BeginTransactionAsync(IsolationLevel.Serializable);

var item = await db.Items
    .Where(i => i.Id == itemId)
    .FirstAsync();

if (item.Stock <= 0)
{
    await tx.RollbackAsync();
    return Result.OutOfStock();
}

item.Stock -= 1;
await db.SaveChangesAsync();
await tx.CommitAsync();

Two things to know. First, serializable transactions in Postgres can fail with a serialization conflict (SQLState 40001) - you must retry. Second, the cost shows up under contention; if 1000 concurrent requests all try to decrement the same row, only one succeeds at a time. That is the correctness guarantee you are buying.
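The retry is mechanical enough to centralize. A minimal wrapper, assuming Npgsql as the EF Core provider; the helper name, attempt count, and backoff numbers are illustrative:

```csharp
using System;
using System.Threading.Tasks;
using Npgsql;

public static class SerializableRetry
{
    // Runs a transactional action, retrying when Postgres aborts it with
    // a serialization conflict (SQLState 40001).
    public static async Task<T> RunAsync<T>(Func<Task<T>> action, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await action();
            }
            catch (Exception ex) when (IsSerializationFailure(ex) && attempt < maxAttempts)
            {
                // Linear backoff; add jitter under heavy contention.
                await Task.Delay(TimeSpan.FromMilliseconds(50 * attempt));
            }
        }
    }

    // EF Core may wrap the provider error, so check one level of InnerException.
    private static bool IsSerializationFailure(Exception ex) =>
        (ex as PostgresException ?? ex.InnerException as PostgresException)?.SqlState == "40001";
}
```

The action passed in must open its own transaction each time, so that a retry replays the whole read-check-write sequence, not just the final write.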

When you need strong consistency across services, you graduate to the outbox pattern and eventually sagas.

How do I implement eventual consistency safely?

Three rules.

Rule 1: name a convergence window. "Eventual" without a number is useless. Say "convergent within 5 seconds, p99 30 seconds" so the team knows what to expect. The window comes from your replication setup.
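One way to put a number on the window is to measure it; a sketch assuming Npgsql and a Postgres streaming-replication standby, run against the replica (the connection string is a placeholder):

```csharp
using System;
using Npgsql;

// pg_last_xact_replay_timestamp() is a stock Postgres function on standbys:
// the commit time of the last transaction the replica has replayed.
// Caveats: it returns NULL on a primary, and the lag reads high when the
// primary is idle (no transactions to replay).
var replicaConnectionString = "Host=replica;Database=app"; // placeholder
await using var conn = new NpgsqlConnection(replicaConnectionString);
await conn.OpenAsync();

await using var cmd = new NpgsqlCommand(
    "SELECT EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp())", conn);

var lagSeconds = Convert.ToDouble(await cmd.ExecuteScalarAsync());
// Export lagSeconds as a metric and alert when p99 exceeds your stated window.
```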

Rule 2: design idempotent operations. Eventually-consistent systems often retry on the application side. If "increment counter" is not idempotent, retries inflate the count. Chapter 10 covers idempotency keys.
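A minimal in-memory sketch of the idea; the type is illustrative, and in production the set of seen operation ids lives in durable storage alongside the counter:

```csharp
using System.Collections.Generic;

public sealed class IdempotentCounter
{
    // Operation ids we have already applied. In-memory here; a database
    // table with a unique constraint in real life.
    private readonly HashSet<string> _applied = new();

    public long Value { get; private set; }

    // Safe to call any number of times per operation id: duplicate
    // deliveries and client retries change the count exactly once.
    public void Increment(string operationId)
    {
        if (!_applied.Add(operationId)) return; // already applied: no-op
        Value += 1;
    }
}
```

A retried `Increment("op-1")` is a no-op the second time, so at-least-once delivery no longer inflates the count.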

Rule 3: read from a primary for read-your-writes. Even in an eventually-consistent setup, you can route a single user's reads to the primary for a few seconds after their write. In .NET this is ASP.NET Core session affinity plus a short-lived "stickiness" cookie that the gateway honors:

// After a write, set a cookie that the gateway uses to route
// the next reads from this user to the primary database.
Response.Cookies.Append("db-stickiness", "primary",
    new CookieOptions { MaxAge = TimeSpan.FromSeconds(5) });

The cookie is read by the gateway / reverse proxy and steers the read load. Most users never notice eventual consistency because they are stuck to the primary while their writes are propagating.
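If there is no proxy hop to do the steering, the same check can live in ASP.NET Core middleware; a sketch where the connection-string names are placeholders:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Route this request's reads to the primary while the stickiness cookie
// set after a write is still alive; otherwise a replica is fine.
app.Use(async (context, next) =>
{
    var usePrimary = context.Request.Cookies.ContainsKey("db-stickiness");
    context.Items["ReadConnectionString"] = usePrimary
        ? builder.Configuration.GetConnectionString("Primary")
        : builder.Configuration.GetConnectionString("Replica");
    await next();
});

app.Run();
```

Downstream, the repository layer picks its connection string out of `HttpContext.Items` instead of hard-coding one.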

What failure modes do consistency choices introduce?

Each model has a characteristic failure:

  Eventual - stale reads and lost updates while replicas lag; the symptom is a user who "fixed it and it changed back".
  Strong - serialization conflicts and deadlocks under contention, plus refused writes when a quorum is unreachable.
  Causal - vector-clock skew and dependency-metadata growth as the cluster tracks what happened before what.

The observability chapter (13) tracks the failure modes: replication lag in seconds for eventual systems, transaction deadlock count for strong systems, vector-clock skew alerts for causal systems.

When is the consistency conversation a distraction?

When traffic is low. Below ~100 RPS, a single Postgres primary with a synchronous standby is strong, fast, and simple - none of the trade-offs apply. The whole CAP/PACELC discussion exists because at scale you cannot keep that simplicity.

The hardest design mistake is reaching for Cassandra or DynamoDB on a service that does 50 RPS. The eventual-consistency tax (replication lag, idempotency keys, stale reads) costs more in code than the non-existent throughput problem. Stay in Postgres until you have the QPS estimate from chapter 2 that says otherwise.

Where should you go from here?

You now own the foundations vocabulary - throughput/latency/QPS, back-of-envelope, CAP/PACELC. Next chapter: Redis caching in .NET, the first building block that sits between strong storage and fast reads. After that the building-block chapters compose freely on top of these foundations.

Frequently asked questions

Why does CAP keep being misquoted?
Because the original 2000 statement is short and provocative - 'pick two of CAP'. Real systems don't choose; partitions are rare, so most of the time you have all three. The honest formulation is PACELC: when partitioned, choose A or C; else, choose latency or consistency. That captures the daily trade-off, not just the partition edge case.
Is Postgres CP or AP?
Single-node Postgres is CA in the loose sense - it does not partition because it is one machine. With a sync standby it is CP - the primary refuses writes if the standby is unreachable. With async replication it is closer to AP for reads but writes still require the primary. The label depends on how you configure replication, not on the brand.
When is eventual consistency the right answer?
When the user does not notice the lag. Counters, follower counts, view counts, and search indexes can lag a few seconds without complaint. Anything tied to money, security, or sequential causality (a user replying to a post they just made) needs strong consistency for that operation - even if other operations on the same data can be eventual.
How do I implement strong consistency in EF Core?
Use a single primary database, a serializable transaction, and pessimistic or optimistic concurrency. EF Core's [ConcurrencyCheck] and RowVersion give you optimistic locking that detects lost updates. For multi-row invariants, wrap in IsolationLevel.Serializable. The performance cost is real - measure first - but the correctness guarantee is exactly what 'strong' means.
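A sketch of the optimistic route; the entity and conflict handling are illustrative, and note that with the Npgsql provider the idiomatic token is Postgres's hidden xmin column (via its UseXminAsConcurrencyToken mapping) rather than a SQL Server-style rowversion:

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;

public class Account
{
    public int Id { get; set; }
    public decimal Balance { get; set; }

    // Concurrency token: EF Core adds this column to the UPDATE's WHERE
    // clause, so an update based on a stale read matches zero rows.
    [ConcurrencyCheck]
    public Guid Version { get; set; }
}

public static class Withdrawal
{
    public static async Task ApplyAsync(DbContext db, Account account, decimal amount)
    {
        account.Balance -= amount;
        account.Version = Guid.NewGuid(); // bump the token on every write
        try
        {
            await db.SaveChangesAsync();
        }
        catch (DbUpdateConcurrencyException)
        {
            // Someone else changed the row since we read it: reload the
            // entity, re-check the invariant, then retry or report a conflict.
        }
    }
}
```

This detects the lost update instead of preventing it up front, which is why it stays cheap until a conflict actually happens.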