CAP, PACELC, and Consistency Models for Engineers
What CAP really says, why PACELC is the version you actually use, and how consistency models map to .NET databases. Vocabulary for every storage choice.
Table of contents
- What does CAP actually say - and why is it constantly misquoted?
- What is PACELC and why is it the better framing?
- What are the consistency models in practice - and which do I need?
- How do I implement strong consistency on a .NET stack?
- How do I implement eventual consistency safely?
- What failure modes do consistency choices introduce?
- When is the consistency conversation a distraction?
- Where should you go from here?
Every storage choice in the rest of this series gets argued in consistency vocabulary: "we can use Cassandra here because the counter is eventually consistent" or "this needs strong consistency, so it stays in Postgres". This chapter defines those words once. The short version: CAP is a three-way constraint with one bad name, and PACELC is the version you should actually quote.
What does CAP actually say - and why is it constantly misquoted?
CAP, conjectured by Eric Brewer in 2000 and proven by Gilbert and Lynch in 2002, relates three properties a distributed system wants:
- Consistency - every read sees the most recent write.
- Availability - every request to a non-failing node gets a non-error response.
- Partition tolerance - the system keeps working when network links between nodes drop.
The misquote is "pick two". The real claim is narrower: during a network partition, you must choose C or A. You cannot give both, because the two halves of a partitioned system cannot agree on what the latest write is without communicating.
When there is no partition - the common case - you can have both consistency and availability. That is why "Postgres is CA" sounds wrong but works in practice: Postgres prefers C during partitions, and partitions are rare. The shorthand is misleading because it implies a permanent choice when the choice only kicks in during a network event.
What is PACELC and why is it the better framing?
PACELC, proposed by Daniel Abadi in 2010, fixes CAP by adding the common case:
- Partition: choose A or C.
- Else: choose Latency or Consistency.
The "else" half is the daily trade-off. Even with all nodes healthy, strong consistency requires waiting for a quorum of replicas to acknowledge a write - which adds latency. Eventual consistency returns immediately and lets the cluster catch up in the background.
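The latency side of that trade-off is easy to see with numbers. A small sketch - the ack times are invented for illustration:

```csharp
// Quorum size for replication factor N is N/2 + 1 (integer division).
// A quorum write returns after the quorum-th fastest ack; waiting for
// every replica costs you the slowest one.
int replicationFactor = 3;
int quorum = replicationFactor / 2 + 1;                 // 2 of 3 must acknowledge
int[] ackMillis = { 1, 2, 40 };                         // per-replica ack times, sorted ascending
int quorumLatency = ackMillis[quorum - 1];              // 2 ms: the quorum write returns here
int allAckLatency = ackMillis[replicationFactor - 1];   // 40 ms: the price of waiting for everyone
```

One slow replica dominates the "wait for all" case, which is exactly why strongly-consistent writes pay a latency tax even when every node is healthy.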
Database labels under PACELC:
- PostgreSQL with sync standby: PC/EC - consistent always, slower writes.
- PostgreSQL with async standby: PA/EL - available during failover, may serve stale reads.
- Cassandra (default consistency ONE): PA/EL - available, low latency, eventually consistent.
- MongoDB (replica set, w=majority): PC/EC by default.
- DynamoDB: PA/EL by default; PC/EC with strongly-consistent reads.
The label is a configuration, not a brand. The same database can be PC/EC or PA/EL depending on how you set replication and read concern.
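To make "configuration, not a brand" concrete: with the DataStax C# driver, Cassandra's consistency level is set per statement. A sketch - the contact point, keyspace, and table are placeholders:

```csharp
// Same cluster, two different consistency choices for the same query.
var cluster = Cluster.Builder().AddContactPoint("127.0.0.1").Build();
var session = cluster.Connect("shop");

// Fast and eventually consistent: one replica's answer is enough.
var fast = new SimpleStatement("SELECT stock FROM items WHERE id = ?", itemId)
    .SetConsistencyLevel(ConsistencyLevel.One);

// Slower but, paired with QUORUM writes, gives read-after-write:
// a majority of replicas must answer.
var safer = new SimpleStatement("SELECT stock FROM items WHERE id = ?", itemId)
    .SetConsistencyLevel(ConsistencyLevel.Quorum);

var row = session.Execute(safer).FirstOrDefault();
```

The same statement moves the database between PA/EL and something closer to PC/EC, which is why the PACELC label belongs to your configuration, not the logo.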
What are the consistency models in practice - and which do I need?
Five practical models, weakest to strongest:
```mermaid
flowchart LR
    EV[Eventual<br/>read may be stale] --> RYW[Read-your-writes<br/>your own writes are visible]
    RYW --> MR[Monotonic Reads<br/>never go backwards]
    MR --> CC[Causal<br/>cause before effect]
    CC --> Strong[Strong / Linearizable<br/>single global order]
```
- Eventual - reads may be stale, but converge. Cheapest. Right for view counts, like counters, search indexes.
- Read-your-writes - your own writes are visible to your next read. Right for "post a comment, see your comment". Usually achieved with session affinity to a primary.
- Monotonic reads - successive reads never go backward in time. Right for activity feeds.
- Causal - if A caused B, every observer sees A before B. Right for chat, threaded comments.
- Strong (linearizable) - all observers see the same single order. Right for money, inventory, unique-username checks. Most expensive.
The mistake is picking one model for the whole system. A real .NET app mixes them: payments are strong, feeds are eventual, chat is causal. The architecture follows from that mix.
How do I implement strong consistency on a .NET stack?
For most .NET apps, "strong" means a single primary Postgres + EF Core serializable transactions:
```csharp
// Strong consistency for an inventory decrement.
// Serializable isolation ensures no two requests can both pass
// the "stock > 0" check and decrement past zero.
await using var tx = await db.Database.BeginTransactionAsync(IsolationLevel.Serializable);

var item = await db.Items
    .Where(i => i.Id == itemId)
    .FirstAsync();

if (item.Stock <= 0)
{
    await tx.RollbackAsync();
    return Result.OutOfStock();
}

item.Stock -= 1;
await db.SaveChangesAsync();
await tx.CommitAsync();
```
Two things to know. First, serializable transactions in Postgres can fail with a serialization conflict (SQLState 40001) - you must retry. Second, the cost shows up under contention: if 1000 concurrent requests all try to decrement the same row, only one succeeds at a time. That is the correctness guarantee you are buying.
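The retry has to be written deliberately. A sketch, assuming Npgsql: serialization conflicts surface as a PostgresException with SqlState "40001", sometimes wrapped in a DbUpdateException by EF Core. ShopDb and DecrementAsync (the serializable transaction above, factored into a method) are illustrative names:

```csharp
// Retry wrapper for Postgres serialization failures (SQLState 40001).
async Task<Result> DecrementWithRetryAsync(ShopDb db, int itemId)
{
    const int maxAttempts = 3;
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            return await DecrementAsync(db, itemId);
        }
        catch (Exception ex) when (
            (ex as PostgresException ?? ex.InnerException as PostgresException)?.SqlState == "40001"
            && attempt < maxAttempts)
        {
            db.ChangeTracker.Clear();                                   // drop stale tracked entities
            await Task.Delay(TimeSpan.FromMilliseconds(50 * attempt));  // simple linear backoff
        }
    }
}
```

The filter only swallows 40001; every other failure propagates. Clearing the change tracker before the retry matters, or the re-read inside the new transaction can return the stale entity EF already has in memory.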
When you need strong consistency across services, you graduate to the outbox pattern and eventually sagas.
How do I implement eventual consistency safely?
Three rules.
Rule 1: name a convergence window. "Eventual" without a number is useless. Say "convergent within 5 seconds, p99 30 seconds" so the team knows what to expect. The window comes from your replication setup.
Rule 2: design idempotent operations. Eventually-consistent systems often retry on the application side. If "increment counter" is not idempotent, retries inflate the count. Chapter 10 covers idempotency keys.
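The shape of an idempotent operation is worth seeing once. A minimal in-memory sketch - a real system would persist the seen set alongside the counter:

```csharp
using System;
using System.Collections.Generic;

// An idempotent counter: each increment carries a caller-supplied operation id,
// and replays of the same id (retries) are ignored, so at-least-once delivery
// cannot inflate the count.
class IdempotentCounter
{
    private readonly HashSet<string> _seen = new();
    public long Value { get; private set; }

    // Returns true only the first time a given operation id is applied.
    public bool Increment(string operationId)
    {
        if (!_seen.Add(operationId)) return false; // duplicate: already counted
        Value++;
        return true;
    }
}
```

A retried `Increment("op-1")` returns false and leaves the value at 1 - which is exactly the property a bare `counter++` lacks.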
Rule 3: read from a primary for read-your-writes. Even in an eventually-consistent setup, you can route a single user's reads to the primary for a few seconds after their write. In .NET, session affinity at the gateway plus a short-lived stickiness cookie is the implementation:
```csharp
// After a write, set a cookie that the gateway uses to route
// the next reads from this user to the primary database.
Response.Cookies.Append("db-stickiness", "primary",
    new CookieOptions { MaxAge = TimeSpan.FromSeconds(5) });
```
The cookie is read by the gateway / reverse proxy and steers the read load. Most users never notice eventual consistency because they are stuck to the primary while their writes are propagating.
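If the routing happens inside the app rather than at a proxy, a small piece of ASP.NET Core middleware can translate the cookie into a database choice. A sketch; the "Primary"/"Replica" names and however your DbContext factory consumes the item are assumptions:

```csharp
// Route this request's queries to the primary while the stickiness
// cookie is alive; otherwise a replica is fine.
app.Use(async (context, next) =>
{
    var usePrimary = context.Request.Cookies["db-stickiness"] == "primary";
    context.Items["db-target"] = usePrimary ? "Primary" : "Replica";
    await next();
});

// Downstream, the DbContext factory reads context.Items["db-target"]
// and picks the matching connection string.
```

The five-second cookie lifetime is your convergence window from Rule 1 made operational: stickiness should outlive replication lag, and nothing more.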
What failure modes do consistency choices introduce?
Each model has a characteristic failure:
- Strong - latency under contention; deadlocks; throughput ceiling on the primary.
- Eventual - stale reads; lost updates if you skip idempotency; user confusion ("I just posted, where did it go?").
- Causal - vector-clock complexity; hard to debug "out of order" reports.
The observability chapter (13) tracks the failure modes: replication lag in seconds for eventual systems, transaction deadlock count for strong systems, vector-clock skew alerts for causal systems.
When is the consistency conversation a distraction?
When traffic is low. Below ~100 RPS, a primary Postgres with a synchronous standby is strong, fast, and simple - none of the trade-offs apply. The whole CAP/PACELC discussion exists because at scale you cannot keep that simplicity.
The hardest design mistake is reaching for Cassandra or DynamoDB on a service that does 50 RPS. The eventual-consistency tax (replication lag, idempotency keys, stale reads) costs more in code than the non-existent throughput problem. Stay in Postgres until you have the QPS estimate from chapter 2 that says otherwise.
Where should you go from here?
You now own the foundations vocabulary - throughput/latency/QPS, back-of-envelope, CAP/PACELC. Next chapter: Redis caching in .NET, the first building block that sits between strong storage and fast reads. After that the building-block chapters compose freely on top of these foundations.
Frequently asked questions
Why does CAP keep being misquoted?
Because "pick two of three" implies a permanent choice. The real claim only applies during a partition: while one is in progress you must choose consistency or availability; the rest of the time you can have both.
Is Postgres CP or AP?
It depends on replication. With a synchronous standby it is PC/EC - consistent always, slower writes. With an asynchronous standby it is PA/EL - available during failover, but it may serve stale reads.
When is eventual consistency the right answer?
When staleness is harmless and you can name a convergence window: view counts, like counters, search indexes, activity feeds.
How do I implement strong consistency in EF Core?
[ConcurrencyCheck] and RowVersion give you optimistic locking that detects lost updates. For multi-row invariants, wrap in IsolationLevel.Serializable. The performance cost is real - measure first - but the correctness guarantee is exactly what 'strong' means.
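A sketch of that optimistic path - the entity and the recovery strategy are illustrative:

```csharp
public class Item
{
    public int Id { get; set; }
    public int Stock { get; set; }

    [Timestamp] // concurrency token: the provider checks it in the UPDATE's WHERE clause
    public byte[] RowVersion { get; set; } = Array.Empty<byte>();
}

// A concurrent change between your read and your save surfaces as an exception:
try
{
    item.Stock -= 1;
    await db.SaveChangesAsync();
}
catch (DbUpdateConcurrencyException)
{
    // Someone else updated the row since we read it: reload, re-check the
    // invariant, and retry - or surface a conflict to the caller.
}
```

Optimistic concurrency pays nothing when there is no contention, which is why it is the default answer before reaching for serializable transactions.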