Modular Monolith with .NET 10 — The Middle Path Between Monolith and Microservices with Vertical Slice, Wolverine, and Bounded Context
Posted on: 4/16/2026 10:09:05 PM
Table of contents
- 1. The Pendulum Is Swinging Back Toward Modular Monolith
- 2. The Trap of Premature Microservices
- 3. Timeline: From N-tier to Modular Monolith
- 4. What Exactly Is a Modular Monolith
- 5. Design Principles: Bounded Context and Public API
- 6. Vertical Slice Architecture Inside Each Module
- 7. Real-World .NET 10 Project Layout
- 8. Module-to-Module Communication with Wolverine and MediatR
- 9. Per-Module Persistence — One Database, Many Schemas
- 10. What .NET 10 Brings to Modular Monolith
- 11. Per-Module Testing Strategy
- 12. Observability and Deployment
- 13. The Two-Way Migration Path
- 14. Pitfalls and When NOT to Use It
- 15. A Small Case Study: A Timekeeping SaaS for SMEs
- 16. Conclusion: Pick the Architecture You Need, Not the Fashionable One
- References
1. The Pendulum Is Swinging Back Toward Modular Monolith
Since 2014, when Sam Newman published Building Microservices, the software industry has bet big on one idea: split systems into dozens of small, independent services to scale better, deploy faster, and develop in parallel. Ten years later, many engineers have woken up amid piles of Kubernetes YAML, distributed tracing, half-failed sagas, a single HTTP request turning into six network hops, and weekly meetings about consistency models. The question isn't "monolith or microservices" — it's "when do we actually need microservices?". And for most products with fewer than 30 engineers, the pragmatic answer is: not yet — start with a properly designed modular monolith.
Shopify, Stack Overflow, GitHub, and Basecamp are familiar proof that a well-built monolith can serve huge user bases. Going into 2026, the pendulum has clearly swung: Amazon Prime Video publicly moved its audio/video monitoring service from microservices back to a monolith and cut costs by 90%, DHH keeps championing the "majestic monolith", and the .NET community embraces official modular monolith templates from Microsoft. Modular Monolith isn't a compromise — it's a deliberate architectural choice that preserves operational simplicity while keeping domain boundaries intact.
This article dives deep into Modular Monolith through the lens of .NET 10: design principles, organising bounded contexts into modules, in-process communication with Wolverine and MediatR, schema-isolated persistence, testing strategy, and especially the two-way migration path — both from a legacy monolith into modular form and from modular up to microservices when that's actually required.
2. The Trap of Premature Microservices
Microservices solve three very specific problems: scaling independent bounded contexts with different load patterns, letting many large teams deploy in parallel without stepping on each other, and isolating failures within one service from the rest. When an organisation doesn't meet those conditions — small team, shared deploy pipeline, unstable domain — splitting early pays a heavy price in hidden costs a monolith simply doesn't incur.
The five hidden cost layers of early microservices
- Network reliability: every call can now fail, requiring retries, circuit breakers, and timeouts.
- Distributed transactions: sagas, outboxes, and eventual consistency instead of a simple BEGIN TRAN.
- Observability: distributed tracing and correlation ids across hops.
- Schema evolution: changing a model demands backwards-compatible API versioning.
- Operational overhead: Kubernetes, service mesh, per-service CI/CD, and secrets rotation multiplied by the number of services.
For a 10-person team building B2B SaaS, those five costs can devour 40% of engineering time without producing any business value. Modular monolith trades deployment independence for development simplicity: deploy a single artifact, debug in one process, run ACID transactions on one database, centralise logs, and pay zero latency between modules.
An important distinction: monolith does not mean spaghetti. A bad monolith is one where code from every domain is mixed into the same namespaces, with circular references, a giant shared DbContext, and changes in one place breaking three others. A good modular monolith applies the same Domain-Driven Design principles as microservices — only the boundaries are enforced by the compiler and by project conventions instead of by the network.
3. Timeline: From N-tier to Modular Monolith
4. What Exactly Is a Modular Monolith
A Modular Monolith is a system deployed as one single deployable unit (one process, one binary, one container) but internally composed of multiple logically independent modules, each matching a Bounded Context, with a clearly defined public API, and with no direct access to another module's internals. Three mandatory traits:
- High cohesion within a module: all code for a domain (entity, use case, persistence, validation, event) lives together, not scattered across technical layers.
- Loose coupling between modules: a module only knows another module's public API via its contracts (interfaces, messages), never references internal classes directly, and never joins SQL across another module's territory.
- Boundaries enforced by the compiler: split into multiple csproj projects, expose only the necessary public types, use InternalsVisibleTo selectively, and keep at most one "Host" project that references all modules.
The distinction from microservices: a modular monolith runs in a single process. A cross-module call is an in-process function call — no network, no JSON serialisation, no timeouts. The database can be a single SQL Server instance, but each module owns its own schema to enforce data boundaries.
graph TB
subgraph HOST["Host ASP.NET Core (.NET 10) — 1 process, 1 deploy"]
subgraph M1["Module Ordering"]
O1["Endpoints"]
O2["Commands/Queries"]
O3["Domain"]
O4["Persistence"]
end
subgraph M2["Module Billing"]
B1["Endpoints"]
B2["Commands/Queries"]
B3["Domain"]
B4["Persistence"]
end
subgraph M3["Module Catalog"]
C1["Endpoints"]
C2["Commands/Queries"]
C3["Domain"]
C4["Persistence"]
end
BUS[Wolverine Bus in-process]
O2 --> BUS
B2 --> BUS
C2 --> BUS
BUS --> B2
BUS --> O2
end
DB[(SQL Server — schemas: ordering, billing, catalog)]
O4 --> DB
B4 --> DB
C4 --> DB
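The "boundaries enforced by the compiler" trait boils down to project references. A minimal sketch of a Domain csproj following the layout described later in section 7 (paths and assembly names are illustrative):

```xml
<!-- App.Ordering.Domain.csproj — sketch; references only the BCL and Abstractions -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- the only allowed reference: shared building blocks; no EF, no ASP.NET -->
    <ProjectReference Include="..\..\BuildingBlocks\App.Abstractions.csproj" />
    <!-- open internals selectively, e.g. to the module's own test project -->
    <InternalsVisibleTo Include="Ordering.Tests" />
  </ItemGroup>
</Project>
```

The SDK turns each InternalsVisibleTo item into the corresponding assembly attribute at build time, so no AssemblyInfo.cs is needed.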
5. Design Principles: Bounded Context and Public API
The first and most important thing to determine for a module is its boundary. A common mistake is splitting by "entity" (User module, Product module) — that's splitting by data, not by business. The correct split is by bounded context: the organisational unit in which a concept carries a single meaning. A Customer in the Billing context is a legal entity with invoices; the same person in the Support context is a ticket reporter. Two concepts with the same name but different intent.
Once bounded contexts are identified, each module publishes a Public Contract — the set of types other modules are allowed to know about. Typically three kinds:
| Contract type | Purpose | Example |
|---|---|---|
| Commands | State-change request, synchronous, expects a response | PlaceOrder, IssueInvoice |
| Queries | Read-only data fetch, synchronous, no side effects | GetOrderById, GetCustomerBalance |
| Integration Events | Broadcast after business completion, asynchronous, no wait | OrderPlaced, InvoiceIssued |
These types live in a {Module}.Contracts project — public and very thin. Everything else (aggregates, value objects, repositories, handlers, DbContext) sits in {Module}.Domain, {Module}.Application, {Module}.Infrastructure and is marked internal. Hard rule: only the Host and the module itself may reference non-Contracts assemblies.
Why Contracts belongs in its own project
If Contracts sits inside Domain, then when Billing wants to send IssueInvoiceCommand to Ordering, it has to reference all of Ordering.Domain just to obtain a DTO. That drags every internal Ordering entity into Billing — the boundary collapses. A separate Contracts project is the thin "passport" other modules carry.
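As a concrete sketch (type names are illustrative), an Ordering.Contracts assembly carrying all three contract kinds stays this thin:

```csharp
// Ordering.Contracts — the only assembly other modules may reference.
// Plain records and ids; no behaviour, no Domain types.
namespace App.Ordering.Contracts;

public readonly record struct OrderId(Guid Value);

// Command — synchronous state change, expects a response
public sealed record PlaceOrder(Guid CustomerId, IReadOnlyList<OrderLine> Items);
public sealed record OrderLine(Guid ProductId, int Quantity, decimal UnitPrice);

// Query — read-only, no side effects
public sealed record GetOrderById(OrderId Id);

// Integration event — published after the fact, fire-and-forget
public sealed record OrderPlaced(OrderId Id, Guid CustomerId, decimal Total);
```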
6. Vertical Slice Architecture Inside Each Module
In the .NET world, Clean Architecture (Onion) was long seen as the standard: Domain in the middle, Application wrapping it, Infrastructure and Presentation on the outside, dependencies pointing inward. Theoretically nice, but three practical problems appear once a module has hundreds of use cases:
- Each feature touches files across four layers — poor navigation, long PR reviews.
- Many abstractions are unnecessary (IUserRepository has one implementation).
- The domain model becomes "anemic" because the real logic lives in services/handlers.
Vertical Slice Architecture (Jimmy Bogard) flips this: each use case is a vertical slice running from endpoint to database. One feature = one folder = a few files. Slices can share the Domain layer when needed, but you're not forced through a shared "Application Services" layer.
graph LR
subgraph CLEAN["Clean Architecture"]
C_P[Presentation]
C_A[Application Services]
C_D[Domain]
C_I[Infrastructure]
C_P --> C_A
C_A --> C_D
C_I --> C_D
end
subgraph VSA["Vertical Slice"]
V1["Slice: PlaceOrder
(endpoint + handler + validator + persistence)"]
V2["Slice: CancelOrder
(endpoint + handler + validator + persistence)"]
V3["Slice: GetOrders
(endpoint + handler + query)"]
VD[Shared Domain + Infrastructure primitives]
V1 -. shared .-> VD
V2 -. shared .-> VD
V3 -. shared .-> VD
end
Combining Modular Monolith with Vertical Slice gives a two-level structure: outer level is modules-by-bounded-context, inner level is slices-by-use-case. This scales evenly from 5 slices to 500 without breaking the architecture.
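In code, a slice can be a single file. A sketch of PlaceOrder as one vertical slice, assuming Minimal APIs, a MediatR-style ISender, and a Result type with IsSuccess/Value/Error (all names illustrative):

```csharp
// Modules/Ordering/Features/PlaceOrder.cs — endpoint, request, dispatch in one place
namespace App.Ordering.Features;

internal sealed record PlaceOrderRequest(Guid CustomerId, List<OrderLine> Items);

internal static class PlaceOrderEndpoint
{
    public static void Map(IEndpointRouteBuilder app) =>
        app.MapPost("/orders", async (
            PlaceOrderRequest req, ISender sender, CancellationToken ct) =>
        {
            // dispatch to the slice's own handler; no shared "application services" layer
            var result = await sender.Send(new PlaceOrder(req.CustomerId, req.Items), ct);
            return result.IsSuccess
                ? Results.Created($"/orders/{result.Value}", result.Value)
                : Results.Problem(result.Error);
        });
}
```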
7. Real-World .NET 10 Project Layout
A layout proven effective across many production projects. Assume a SaaS with three modules: Ordering, Billing, Catalog.
src/
├── Host/
│ └── App.Host.csproj (ASP.NET Core entry, references every module)
├── Modules/
│ ├── Ordering/
│ │ ├── App.Ordering.Contracts.csproj (public commands/queries/events)
│ │ ├── App.Ordering.Domain.csproj (internal, references nothing)
│ │ ├── App.Ordering.Application.csproj (internal, refs Domain + Contracts)
│ │ └── App.Ordering.Infrastructure.csproj (internal, refs Application + EF)
│ ├── Billing/
│ │ └── ... (same structure)
│ └── Catalog/
│ └── ... (same structure)
├── BuildingBlocks/
│ ├── App.Bus.csproj (Wolverine abstraction)
│ ├── App.Abstractions.csproj (Result, DomainEvent, IUnitOfWork)
│ └── App.Observability.csproj (OpenTelemetry helpers)
└── Tests/
├── Ordering.Tests/
├── Billing.Tests/
└── Architecture.Tests/ (NetArchTest — guard the boundaries)
A few hard rules applied in every csproj:
- {Module}.Domain does not reference anything except the .NET BCL and BuildingBlocks.Abstractions. No EF, no ASP.NET.
- {Module}.Contracts contains only records/interfaces and also does not reference Domain.
- The Host project is the only one that references every {Module}.Infrastructure, to register DI.
- Each module provides a ModuleExtensions.cs with AddOrderingModule(IServiceCollection).
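That per-module registration entry point might be sketched as follows (wiring simplified; validator registration assumes FluentValidation):

```csharp
// Modules/Ordering/ModuleExtensions.cs — the only startup surface the Host needs
public static class OrderingModuleExtensions
{
    public static IServiceCollection AddOrderingModule(
        this IServiceCollection services, IConfiguration config)
    {
        services.AddDbContext<OrderingDbContext>(o =>
            o.UseSqlServer(config.GetConnectionString("Default")));

        // register slice handlers/validators from this module's assembly only
        services.AddValidatorsFromAssembly(typeof(OrderingModuleExtensions).Assembly);
        return services;
    }
}
```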
Guard the boundary with ArchTest
Don't rely on reviews alone. Write a single CI test using NetArchTest: "Ordering.Domain must not reference Billing.*", "Ordering.Infrastructure must not reference Catalog.*", "Only Host may reference Infrastructure". One test protects the whole architecture.
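With NetArchTest.Rules, that CI guard is only a few lines (assembly and namespace names follow the layout above):

```csharp
// Architecture.Tests — fails CI when a module boundary is crossed
using System.Linq;
using NetArchTest.Rules;
using Xunit;

public class ModuleBoundaryTests
{
    [Fact]
    public void Ordering_must_not_depend_on_other_modules()
    {
        var result = Types.InAssembly(typeof(App.Ordering.Domain.Order).Assembly)
            .That().ResideInNamespace("App.Ordering")
            .ShouldNot().HaveDependencyOnAny("App.Billing", "App.Catalog")
            .GetResult();

        // FailingTypeNames pinpoints the offending types in the CI log
        Assert.True(result.IsSuccessful,
            string.Join(", ", result.FailingTypeNames ?? Enumerable.Empty<string>()));
    }
}
```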
8. Module-to-Module Communication with Wolverine and MediatR
In-process messaging is the spine of a modular monolith. Two main tools exist in the .NET ecosystem:
- MediatR: Jimmy Bogard's lightweight library, an in-process dispatcher only. Ideal for synchronous commands/queries.
- Wolverine: Jeremy D. Miller's full-fat messaging framework, supporting both in-process handlers and external transports (Rabbit, Kafka, Azure Service Bus), with a transactional outbox integrated with EF Core. Ideal for asynchronous integration events.
In production modular monoliths, the common formula is: MediatR inside a module (slice-to-slice), Wolverine between modules (integration events with an outbox). Example:
// Module Ordering — Application layer, after persisting to DB
public sealed class PlaceOrderHandler(
OrderingDbContext db,
IMessageBus bus) : IRequestHandler<PlaceOrder, Result<OrderId>>
{
public async Task<Result<OrderId>> Handle(PlaceOrder cmd, CancellationToken ct)
{
var order = Order.Create(cmd.CustomerId, cmd.Items);
db.Orders.Add(order);
await db.SaveChangesAsync(ct);
// Integration event — Wolverine writes it to the outbox in the same transaction
await bus.PublishAsync(new OrderPlaced(
order.Id, order.CustomerId, order.Total));
return order.Id;
}
}
// Module Billing — handler in a different project, connected via Wolverine
public sealed class OnOrderPlacedHandler
{
public async Task Handle(
OrderPlaced evt,
BillingDbContext db,
CancellationToken ct)
{
var invoice = Invoice.For(evt.OrderId, evt.CustomerId, evt.Total);
db.Invoices.Add(invoice);
await db.SaveChangesAsync(ct);
}
}
Key point: OrderPlaced lives in Ordering.Contracts — the only project Billing references. Billing knows nothing about the Order aggregate, the OrderingDbContext, or Ordering's internal validators. When Ordering changes its internals, Billing doesn't need to rebuild.
sequenceDiagram
autonumber
participant API as API Endpoint
participant OH as Ordering Handler
participant DB as SQL Server
participant OB as Wolverine Outbox
participant BH as Billing Handler
API->>OH: PlaceOrder command
OH->>DB: BEGIN TRAN
OH->>DB: Insert Order (schema ordering)
OH->>OB: Enqueue OrderPlaced
OH->>DB: COMMIT TRAN
Note over DB,OB: Order + outbox row are atomic
OB->>BH: Dispatch OrderPlaced (in-process)
BH->>DB: Insert Invoice (schema billing)
BH-->>OB: Ack
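For the atomic Order + outbox write shown in the sequence above, Wolverine's durable outbox has to share the module's EF Core transaction. A configuration sketch, assuming the WolverineFx.SqlServer and WolverineFx.EntityFrameworkCore packages:

```csharp
// Host/Program.cs — Wolverine durable outbox backed by the same SQL Server
builder.Host.UseWolverine(opts =>
{
    // durable message storage lives in its own schema of the shared database
    opts.PersistMessagesWithSqlServer(
        builder.Configuration.GetConnectionString("Default")!, "wolverine");

    // published messages join the active EF Core transaction (the outbox)
    opts.UseEntityFrameworkCoreTransactions();

    // in-process queues become durable and survive restarts
    opts.Policies.UseDurableLocalQueues();
});

// hook the module DbContext into Wolverine's outbox machinery
builder.Services.AddDbContextWithWolverineIntegration<OrderingDbContext>(o =>
    o.UseSqlServer(builder.Configuration.GetConnectionString("Default")));
```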
9. Per-Module Persistence — One Database, Many Schemas
A common question: should each module have its own database? The practical answer for a modular monolith is no, it doesn't need to. Share one SQL Server, PostgreSQL, or Azure SQL instance, but each module owns a dedicated schema:
- Schema ordering: Orders, OrderItems, OrderHistory.
- Schema billing: Invoices, Payments, Refunds.
- Schema catalog: Products, Categories.
Each module has its own DbContext with a schema-specific MigrationsHistoryTable and HasDefaultSchema:
public sealed class OrderingDbContext(DbContextOptions<OrderingDbContext> opt)
: DbContext(opt)
{
public DbSet<Order> Orders => Set<Order>();
protected override void OnModelCreating(ModelBuilder mb)
{
mb.HasDefaultSchema("ordering");
mb.ApplyConfigurationsFromAssembly(typeof(OrderingDbContext).Assembly);
}
}
// Host/Program.cs
builder.Services.AddDbContext<OrderingDbContext>(o =>
o.UseSqlServer(cs, sql => sql.MigrationsHistoryTable(
"__MigrationsHistory", "ordering")));
The golden rule: no cross-schema querying. Billing needs order information? It keeps its own projection (billing.OrderSnapshot) updated by the OrderPlaced event. There must be no JOIN ordering.Orders from the Billing DbContext. Violating this rule is the first step back into the mud — and it must be blocked by code review or a dedicated ArchTest (grep FROM ordering. in the Billing assembly fails the build).
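Because each module has its own DbContext and MigrationsHistoryTable, migrations also run per module. With the standard dotnet-ef CLI that looks roughly like this (paths follow the section 7 layout):

```shell
# one migration stream per module, selected via --context
dotnet ef migrations add InitialOrdering \
  --context OrderingDbContext \
  --project src/Modules/Ordering/App.Ordering.Infrastructure.csproj \
  --startup-project src/Host/App.Host.csproj

dotnet ef database update \
  --context OrderingDbContext \
  --startup-project src/Host/App.Host.csproj
```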
When data really is shared
Some tables like Tenant, Currency, Country are lookup-style shared data. Two options: (1) consolidate into a "Shared Reference Data" module with read-only contracts that other modules query; (2) duplicate the lookup into each schema and sync via events. Option (1) is simpler for a modular monolith. Option (2) prepares the ground for a future microservices extraction.
10. What .NET 10 Brings to Modular Monolith
.NET 10 didn't invent modular monolith, but its feature set makes the pattern much easier to implement than the previous two versions:
| .NET 10 feature | Application to Modular Monolith |
|---|---|
| Primary constructors on classes | Less boilerplate in handlers and services, focus on business logic |
| Keyed services DI (stable since .NET 8, production-ready in 10) | Each module gets its own DbContext, IConnectionFactory under the same interface but different keys |
| Minimal APIs + Endpoint groups | Each module maps a RouteGroupBuilder, with its own filters and OpenAPI tags |
| Improved Native AOT | Enables a modular monolith cold-start under 200ms, ideal for scale-to-zero |
| Typed OpenAPI document (replacing Swagger/Swashbuckle) | Auto-splits OpenAPI docs per module, precise frontend typing per module |
| .NET Aspire 10 orchestration | Local dev-loop runs host + SQL + Redis + Seq/Jaeger with a single aspire run |
| Official Result pattern (ProblemDetails + TypedResults) | Handlers return Result<T>, endpoints map to unified TypedResults.Ok/Problem |
A keyed services example isolating connections per module:
// Host/Program.cs
builder.Services.AddKeyedSingleton<SqlConnectionFactory>("ordering",
(sp, _) => new SqlConnectionFactory(cfg["ConnectionStrings:Ordering"]));
builder.Services.AddKeyedSingleton<SqlConnectionFactory>("billing",
(sp, _) => new SqlConnectionFactory(cfg["ConnectionStrings:Billing"]));
// Module Billing — receives its own instance, can't see Ordering's
public sealed class InvoiceRepository(
[FromKeyedServices("billing")] SqlConnectionFactory factory) { /*...*/ }
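The endpoint-group row in the table is just as compact: each module maps one RouteGroupBuilder in the Host (a sketch; OrderingEndpoints.Map and BillingEndpoints.Map are hypothetical per-module entry points):

```csharp
// Host/Program.cs — one route group per module
var ordering = app.MapGroup("/api/ordering")
    .WithTags("Ordering");              // groups the module in the OpenAPI doc

OrderingEndpoints.Map(ordering);        // the module maps its own slices

var billing = app.MapGroup("/api/billing")
    .WithTags("Billing")
    .RequireAuthorization();            // per-module policies attach to the group

BillingEndpoints.Map(billing);
```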
11. Per-Module Testing Strategy
Modular monoliths have a big testing advantage: no need to spin up many containers for cross-module end-to-end tests. A layered test structure:
- Unit tests: aggregates, value objects, domain services in the Domain layer. No DB, no DI container. Run in milliseconds.
- Module integration tests: use Testcontainers to spin a real SQL Server in Docker, testing one module from endpoint to DB. One test project per module.
- System tests: spin up the entire host via WebApplicationFactory, testing a few cross-module E2E scenarios (PlaceOrder → OrderPlaced → Invoice).
- Architecture tests: NetArchTest protects the referencing conventions. A single file runs in under 500ms.
Testcontainers with a SQL Server 2022 image typically cold-starts in a handful of seconds — fast enough to run integration tests in CI without caching DB state between cases.
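A module integration test with Testcontainers might be shaped like this (assuming the Testcontainers.MsSql package and xUnit; Order.Create follows the earlier handler sketch):

```csharp
// Ordering.Tests — one real SQL Server container per test class
public sealed class OrderingPersistenceTests : IAsyncLifetime
{
    private readonly MsSqlContainer _sql = new MsSqlBuilder().Build();

    public Task InitializeAsync() => _sql.StartAsync();
    public Task DisposeAsync() => _sql.DisposeAsync().AsTask();

    [Fact]
    public async Task Order_round_trips_through_the_ordering_schema()
    {
        var options = new DbContextOptionsBuilder<OrderingDbContext>()
            .UseSqlServer(_sql.GetConnectionString())
            .Options;

        await using var db = new OrderingDbContext(options);
        await db.Database.MigrateAsync();   // applies this module's migrations only

        db.Orders.Add(Order.Create(Guid.NewGuid(), []));
        await db.SaveChangesAsync();

        Assert.Equal(1, await db.Orders.CountAsync());
    }
}
```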
12. Observability and Deployment
Even running in a single process, a modular monolith should still carry per-module telemetry so you can watch their health separately. OpenTelemetry on .NET 10 supports a per-module ActivitySource:
// Each module gets its own ActivitySource
internal static class OrderingTelemetry
{
public const string SourceName = "App.Ordering";
public static readonly ActivitySource Source = new(SourceName);
}
// Host registers them all
builder.Services.AddOpenTelemetry()
.WithTracing(t => t
.AddSource("App.Ordering", "App.Billing", "App.Catalog")
.AddAspNetCoreInstrumentation()
.AddEntityFrameworkCoreInstrumentation()
.AddOtlpExporter());
On a dashboard, each span carries a service.module attribute for filtering. SLO alerts are also defined per module: "Ordering's p95 latency < 300ms", not aggregated for the whole host.
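Inside a handler, the module's source is then used like any ActivitySource. Note the null-conditionals: StartActivity returns null unless a listener (here, the OpenTelemetry SDK) is attached:

```csharp
// inside PlaceOrderHandler — one module-tagged span per use case
using var activity = OrderingTelemetry.Source.StartActivity("PlaceOrder");
activity?.SetTag("service.module", "ordering");
activity?.SetTag("order.customer_id", cmd.CustomerId);

// ...business logic; the span ends when 'activity' is disposed
```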
Deployment: one Dockerfile, one image, one container. That said, you can still scale with feature flags — if Catalog's load suddenly spikes, there's no need to extract a service yet; you can scale the host instances up and enable a read-only DB replica. Only when a module's load pattern is wildly different (say Catalog needs 20 instances while the rest need 2) should you actually consider splitting.
13. The Two-Way Migration Path
A Modular Monolith is a strategic middle ground: from here you can move up (extract microservices) or move in (promote legacy into modular).
13a. From a Legacy Monolith — Strangler Fig
Apply Martin Fowler's Strangler Fig pattern: instead of a rewrite, carve out one bounded context at a time from the legacy codebase, place it in a new module project alongside, and gradually reroute old endpoints to the new module. Practical steps:
- Identify a bounded context that's easy to carve out first (usually a peripheral domain — notifications, audit logs, invoice exports).
- Create {Module}.Contracts, {Module}.Domain, {Module}.Application, {Module}.Infrastructure projects next to the legacy code.
- Copy (don't cut) the models and logic into the new projects, adjusting them to the modular style.
- Create a dedicated schema for the module in the same DB. Use views or replication to sync data temporarily.
- Migrate one endpoint from the legacy controller to the module's new endpoint. Both routes coexist. Use feature flags to steer traffic.
- When stable, delete the legacy code.
Repeat for each bounded context. Within 3–6 months, the legacy shrinks, the new modules take over, and eventually the legacy disappears.
13b. From Modular Monolith to Microservices
If a module actually needs extraction (uneven load, dedicated team, separate compliance), a modular monolith has already done 80% of the work:
- Contracts already exist — just change the transport from in-process to HTTP/gRPC/message broker.
- DB schemas are already separated — just move to a dedicated instance and update connection strings.
- Integration events already run through an outbox — just switch the outbox from an in-process dispatcher to publishing to Rabbit/Kafka.
- Observability is already per-module — copy it to the new service and you're done.
Wolverine makes this especially smooth: the handler code stays identical, and only the transport configuration changes from the default in-process routing to an external broker. The business code doesn't change.
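Assuming the WolverineFx.RabbitMQ package, that flip is configuration only — the OrderPlaced contract and both handlers stay untouched:

```csharp
// Host/Program.cs — after extracting Billing, publish OrderPlaced over Rabbit
builder.Host.UseWolverine(opts =>
{
    opts.UseRabbitMq(rabbit =>
    {
        rabbit.HostName = builder.Configuration["Rabbit:Host"]!;
    }).AutoProvision();                 // declare exchanges/queues at startup

    // same contract type, new transport
    opts.PublishMessage<OrderPlaced>().ToRabbitExchange("ordering");
});
```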
graph LR
A[Legacy Monolith
spaghetti, one shared DB] -- Strangler Fig --> B[Modular Monolith
one process, N modules, N schemas]
B -- Extract when needed --> C[Hybrid
N-1 modules in monolith, 1 extracted service]
C -- If truly needed --> D[Microservices
many processes, many DBs]
B -. most systems stop here .-> B
14. Pitfalls and When NOT to Use It
Six common anti-patterns
- One giant DbContext for the whole system: kills the boundary on day one. One DbContext per module.
- Cross-module SQL joins: convenient today, disastrous three months later when migration tries to split them. Always go via a query contract.
- Calling into a module's internals through a service class instead of a command/query: erodes the boundary and silently rebuilds coupling.
- A "kitchen sink" Contracts project: dumping Entities into Contracts for convenience — instantly turns Contracts into Domain and the whole reference chain collapses.
- No ArchTest: human review forgets, the compiler doesn't catch it; only ArchTest protects the boundary after two years.
- Microservices-cosplay: adding a fake REST client between modules, JSON-serialising in-process just to "look microservice-like" — wasted CPU, solves nothing.
A modular monolith is not a good fit in three cases:
- Load across bounded contexts is very uneven — e.g. one module needs to auto-scale to 100 instances while others need only 2.
- Bounded contexts have different compliance requirements (e.g. a PCI-DSS module must run in an isolated subnet).
- A team of more than 50 engineers spread across time zones that needs independent deployment pipelines to avoid blocking each other.
Those three cases are rare at startups and SMB projects. Even at mid-sized companies, a well-designed modular monolith is more than capable of serving hundreds of thousands of users.
15. A Small Case Study: A Timekeeping SaaS for SMEs
A reference system (a composite of several real projects): a timekeeping + payroll SaaS for 50–500-employee businesses. The domain has four bounded contexts:
- Identity: signup, login, permissions, SSO via Microsoft Entra.
- Timekeeping: attendance, shifts, overtime, leave requests.
- Payroll: salary formulas, personal income tax, insurance, payslip export.
- Reporting: dashboards, Excel export, an accounting API.
Applying modular monolith:
- One Azure SQL instance, four schemas (identity, timekeeping, payroll, reporting).
- An ASP.NET Core .NET 10 host, Native AOT, a 120MB container image.
- Wolverine in-process for integration events (AttendanceApproved → Payroll, PayrollIssued → Reporting).
- The Reporting module has a read model updated by events and never queries another schema.
- A single CI/CD pipeline, deploy via Azure Container Apps, auto-scale 2–10 instances.
- Observability via OpenTelemetry exporting to Application Insights; a dedicated dashboard per module.
Real outcome after one year: a team of 8 engineers serving 300+ customers, API p95 < 250ms, infrastructure cost under $400/month, 3 deploys per day. When the biggest customer required Payroll to be isolated on compliance grounds, the extraction into a dedicated service took exactly one two-week sprint — entirely because the boundaries had been drawn cleanly from the start.
16. Conclusion: Pick the Architecture You Need, Not the Fashionable One
Modular Monolith isn't a new technology, nor "what comes after microservices". It's the return of a basic engineering idea: pick the architecture that's enough for the current phase while leaving room to change later. With .NET 10, the ecosystem has matured — Wolverine, MediatR, multi-schema EF Core, Aspire, Minimal API, Native AOT, keyed services, NetArchTest — enough for a small team to build a serious product without paying microservices' operational tax.
The practical rules can be distilled into four lines:
- Start as a monolith. Design modules by bounded context from day one.
- One schema, one contracts project, one DbContext, one ActivitySource per module.
- Communicate within a module via a synchronous mediator; between modules via integration events with an outbox.
- Only extract to microservices when the data says so — never because "it's cool".
When you do need microservices, a modular monolith has already paved the road. When you don't, it keeps you focused on the right thing: solving your customer's problem rather than fighting self-inflicted complexity.
References
- Microsoft Learn — Architecting Modern Web Applications with ASP.NET Core and Azure
- Kamil Grzybek — Modular Monolith with DDD (sample repo)
- Wolverine — In-process and out-of-process messaging for .NET
- Jimmy Bogard — Vertical Slice Architecture
- Prime Video Tech Blog — Scaling audio/video monitoring and reducing cost by 90%
- Martin Fowler — Strangler Fig Application
- NetArchTest — Fluent API for architecture tests
- .NET Aspire — Cloud-native orchestration
Disclaimer: The opinions expressed in this blog are solely my own and do not reflect the views or opinions of my employer or any affiliated organizations. The content provided is for informational and educational purposes only and should not be taken as professional advice. While I strive to provide accurate and up-to-date information, I make no warranties or guarantees about the completeness, reliability, or accuracy of the content. Readers are encouraged to verify the information and seek independent advice as needed. I disclaim any liability for decisions or actions taken based on the content of this blog.