Idempotency and the Transactional Outbox in .NET
How to make HTTP endpoints and queue consumers idempotent, and how the outbox pattern guarantees that a database write and a queue publish stay consistent.
Table of contents
- When does idempotency stop being optional?
- What does an idempotent HTTP endpoint look like in .NET?
- What does the outbox pattern look like end-to-end?
- What is the .NET 10 wiring for the outbox?
- What failure modes does this combination introduce?
- When should you skip these patterns?
- Where should you go from here?
The single most expensive bug in .NET services is the duplicate write that nobody caught: charging a card twice, sending two "order shipped" emails, decrementing inventory below zero. Every one of these is solved with the same two patterns - idempotency on the edge, the outbox in the middle - and this chapter wires both into ASP.NET Core in a way you can copy into production tomorrow.
When does idempotency stop being optional?
Three signals.
The operation has external side effects. Charging a card, sending email, calling a partner API. The user's retry on a 500 error may have already succeeded; do it twice and the side effect happens twice.
The operation runs behind a queue. Queues deliver at-least-once (chapter 6). Your consumer will see duplicates eventually - probably this week.
The operation has cross-service consistency. A successful database write that must be followed by an event publish - if the service crashes between, the system is inconsistent. The outbox pattern is the answer.
If none of these apply (a pure read endpoint, a single-DB write inside one transaction), you do not need this chapter.
What does an idempotent HTTP endpoint look like in .NET?
Three parts: the client sends a UUID per request, the server checks it before applying, and the check is done by a database insert with a unique constraint.
```csharp
public class IdempotencyRecord
{
    public Guid Key { get; set; }
    public int StatusCode { get; set; }
    public string? ResponseBody { get; set; }
    public DateTimeOffset CreatedAt { get; set; }
}

public class IdempotencyMiddleware(AppDbContext db) : IMiddleware
{
    public async Task InvokeAsync(HttpContext ctx, RequestDelegate next)
    {
        if (!ctx.Request.Headers.TryGetValue("Idempotency-Key", out var keyStr)
            || !Guid.TryParse(keyStr, out var key))
        {
            await next(ctx);
            return;
        }

        // Try to insert - the unique constraint is the gate.
        var entry = new IdempotencyRecord { Key = key, CreatedAt = DateTimeOffset.UtcNow };
        try
        {
            db.IdempotencyRecords.Add(entry);
            await db.SaveChangesAsync(ctx.RequestAborted);
        }
        // IsUniqueViolation inspects the provider-specific error,
        // e.g. PostgresException with SqlState 23505.
        catch (DbUpdateException ex) when (IsUniqueViolation(ex))
        {
            // Already processed - serve the stored response. Detach the failed
            // entry first so the query hits the database, not the change tracker.
            db.Entry(entry).State = EntityState.Detached;
            var existing = await db.IdempotencyRecords
                .AsNoTracking()
                .FirstAsync(r => r.Key == key, ctx.RequestAborted);
            if (existing.ResponseBody is not null)
            {
                ctx.Response.StatusCode = existing.StatusCode;
                await ctx.Response.WriteAsync(existing.ResponseBody, ctx.RequestAborted);
            }
            else
            {
                // The first request is still in flight - tell the client to retry.
                ctx.Response.StatusCode = StatusCodes.Status409Conflict;
            }
            return;
        }

        // First time - run the handler and capture the response.
        var memory = new MemoryStream();
        var original = ctx.Response.Body;
        ctx.Response.Body = memory;
        try
        {
            await next(ctx);
            entry.StatusCode = ctx.Response.StatusCode;
            entry.ResponseBody = Encoding.UTF8.GetString(memory.ToArray());
            await db.SaveChangesAsync(ctx.RequestAborted);
            memory.Position = 0;
            await memory.CopyToAsync(original);
        }
        finally { ctx.Response.Body = original; }
    }
}
```
The unique constraint on `IdempotencyRecord.Key` is the serialisation point. Two concurrent requests with the same key: exactly one wins the insert, the other gets a unique-violation and serves the stored response. This is the only shape of idempotent HTTP I trust.
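The constraint itself can be declared in the model configuration. A minimal sketch, assuming the `AppDbContext` and `IdempotencyRecord` shapes above:

```csharp
// Sketch: EF Core configuration for the idempotency table.
// Making Key the primary key gives you the unique constraint for free;
// the CreatedAt index supports the TTL cleanup job described later.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<IdempotencyRecord>(e =>
    {
        e.HasKey(r => r.Key);          // unique by definition - the gate
        e.HasIndex(r => r.CreatedAt);  // for "delete keys older than 30 days"
    });
}
```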
What does the outbox pattern look like end-to-end?
flowchart LR
API[ASP.NET Core] -->|tx: write row + outbox row| DB[(Postgres)]
DB --> Outbox[(outbox_messages)]
Worker[Outbox publisher] -->|poll| Outbox
Worker -->|publish| Queue[(RabbitMQ)]
Worker -->|mark sent| Outbox
Queue --> Downstream[Consumers]
Two writes in one transaction: the business row and an outbox row that says "publish this event". A worker polls the outbox table, publishes to the queue, marks the row as sent. If the worker crashes mid-publish, the next run sees the unsent rows and republishes - which is fine because the consumer is idempotent.
What is the .NET 10 wiring for the outbox?
```csharp
// Outbox entity - one row per event that must be published.
public class OutboxMessage
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string MessageType { get; set; } = "";
    public string Payload { get; set; } = "";
    public DateTimeOffset CreatedAt { get; set; } = DateTimeOffset.UtcNow;
    public DateTimeOffset? SentAt { get; set; }
}

// Producer side - business code adds an outbox row in the same Save.
public async Task PlaceOrderAsync(OrderDto dto, CancellationToken ct)
{
    await using var tx = await db.Database.BeginTransactionAsync(ct);
    var order = new Order(dto);
    db.Orders.Add(order);
    db.OutboxMessages.Add(new OutboxMessage
    {
        MessageType = nameof(OrderPlaced),
        Payload = JsonSerializer.Serialize(new OrderPlaced(order.Id, order.UserId)),
    });
    await db.SaveChangesAsync(ct);
    await tx.CommitAsync(ct);
}
```
```csharp
// Worker - separate BackgroundService that drains the outbox.
public class OutboxPublisher(IServiceProvider sp, ILogger<OutboxPublisher> log)
    : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stop)
    {
        while (!stop.IsCancellationRequested)
        {
            // IPublishEndpoint and the DbContext are scoped services, so
            // resolve both from a fresh scope on every iteration.
            using var scope = sp.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
            var bus = scope.ServiceProvider.GetRequiredService<IPublishEndpoint>();

            var batch = await db.OutboxMessages
                .Where(o => o.SentAt == null)
                .OrderBy(o => o.CreatedAt)
                .Take(100)
                .ToListAsync(stop);

            foreach (var msg in batch)
            {
                try
                {
                    await PublishAsync(bus, msg, stop);
                    msg.SentAt = DateTimeOffset.UtcNow;
                }
                catch (Exception ex)
                {
                    log.LogError(ex, "Outbox publish failed for {Id}", msg.Id);
                    break; // pause the batch on first failure - retry next loop
                }
            }

            await db.SaveChangesAsync(stop);
            await Task.Delay(TimeSpan.FromSeconds(1), stop);
        }
    }
}
```
Three details. The transaction `tx` covers both the business write and the outbox insert - they cannot drift. The worker is single-threaded over the outbox (`FOR UPDATE SKIP LOCKED` in SQL if you scale horizontally). The consumer downstream remains idempotent - that is the contract from chapter 6.
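If you do run multiple workers, the poll can claim rows through EF Core's raw-SQL escape hatch. A sketch, assuming snake_case column names in Postgres:

```csharp
// Sketch: claim a batch with FOR UPDATE SKIP LOCKED so two workers
// never pick up the same rows. This must run inside a transaction -
// the row locks are released at commit.
await using var tx = await db.Database.BeginTransactionAsync(stop);
var batch = await db.OutboxMessages
    .FromSqlRaw("""
        SELECT * FROM outbox_messages
        WHERE sent_at IS NULL
        ORDER BY created_at
        LIMIT 100
        FOR UPDATE SKIP LOCKED
        """)
    .ToListAsync(stop);
// ... publish, set SentAt, SaveChangesAsync, then commit.
await tx.CommitAsync(stop);
```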
What failure modes does this combination introduce?
- Idempotency table growth - the table fills up with old keys. Mitigation: a TTL on records (e.g. delete keys older than 30 days); most real services run this as a nightly job.
- Outbox lag - the worker falls behind, downstream services are late. Mitigation: alert on `outbox_pending_count` and `outbox_oldest_age`; scale workers with `SKIP LOCKED`.
- Lost response body - the idempotency middleware caches the response. If the response is huge (a file download), do not cache it - skip idempotency for that endpoint.
- Cross-instance race - two instances of the worker double-publish the same outbox row. Mitigation: `FOR UPDATE SKIP LOCKED` in Postgres or a single-leader election; MassTransit's outbox handles this for you.

Chapter 13 tracks all four metrics out of the box.
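The two outbox metrics are one query each. A sketch of how a scraper might compute them with EF Core (the metric names come from the list above; `ct` is the scraper's cancellation token):

```csharp
// Sketch: the two outbox health metrics.
var pendingCount = await db.OutboxMessages
    .CountAsync(o => o.SentAt == null, ct);           // outbox_pending_count

var oldestCreatedAt = await db.OutboxMessages
    .Where(o => o.SentAt == null)
    .MinAsync(o => (DateTimeOffset?)o.CreatedAt, ct); // null when the outbox is drained

var oldestAge = oldestCreatedAt is { } t
    ? DateTimeOffset.UtcNow - t
    : TimeSpan.Zero;                                  // outbox_oldest_age
```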
When should you skip these patterns?
When the operation has only one side effect, and that side effect is a database write inside a transaction. Then the database is already exactly-once - it commits or rolls back. Adding idempotency keys and outbox infrastructure to a service that only writes its own database is overhead with no win. The patterns earn their cost when there is a second side effect (queue, email, partner API) that must stay in sync with the first.
Where should you go from here?
Next chapter: circuit breakers with Polly - how to stop calls to a failing dependency before they take your service down. After that, sagas extend the outbox pattern to multi-step business workflows. Together the three chapters of the reliability group form the spine of any production .NET service.
Frequently asked questions
Why can't I just use the queue's exactly-once mode?
Because exactly-once delivery does not survive the edges of the broker: redeliveries after a consumer crash, rebalances, and retries all hand your handler duplicates (chapter 6 covers why queues are at-least-once in practice). The consumer has to deduplicate anyway, so build for it.
Where do I store the idempotency key?
In its own table: `idempotency_keys (key uuid primary key, response_body jsonb, created_at timestamptz)`. For consumer idempotency: a `processed_messages` table keyed by message ID. The transaction wraps both the business write and the idempotency record - that is the whole trick.
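On the consumer side the same insert-first trick applies. A sketch, assuming a MassTransit consumer, a hypothetical `ProcessedMessage` entity keyed by the broker's message ID, and a `Shipment` business entity:

```csharp
// Sketch: consumer-side dedup. The primary key on MessageId makes
// a duplicate delivery fail the insert, so the business write is
// rolled back with it and the redelivery becomes a no-op.
public async Task Consume(ConsumeContext<OrderPlaced> ctx)
{
    await using var tx = await db.Database.BeginTransactionAsync();
    db.ProcessedMessages.Add(new ProcessedMessage { MessageId = ctx.MessageId!.Value });
    db.Shipments.Add(new Shipment(ctx.Message.OrderId)); // the business write
    try
    {
        await db.SaveChangesAsync();
        await tx.CommitAsync();
    }
    catch (DbUpdateException ex) when (IsUniqueViolation(ex))
    {
        // Duplicate delivery - already processed, ack and move on.
    }
}
```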