
Greenfield MVP Launch: Zero to Production in 12 Weeks

How to ship a brand-new product from kickoff to launch in one quarter. The week-by-week plan, decisions to make early, and traps that delay greenfield projects.

Table of contents
  1. When does the MVP shape actually fit?
  2. What is the cost of treating MVP like a normal project?
  3. What does the 12-week MVP timeline look like?
  4. What does the kickoff doc look like for an MVP?
  5. What artifacts run during the build phase?
  6. How does an MVP scale (or not) to multi-team?
  7. What failure modes does the MVP plan introduce?
  8. When is MVP the wrong shape?
  9. Where should you go from here?

A greenfield MVP is the most fun project a team can do and the most likely to fail by overscoping. The constraint is exactly 12 weeks; the scope must be small enough to fit. This case study shows the week-by-week plan, the discipline that keeps the scope honest, and the artifacts that make MVPs ship.

When does the MVP shape actually fit?

Three signals.

You are validating a hypothesis. The MVP exists to learn whether real users want this. If the requirement is "ship a production system at scale", that's a different project shape.

You can defer "real" engineering. Multi-region, deep observability, comprehensive testing - these can wait until the MVP proves users care.

The sponsor is OK with a fast feedback loop. They will see working software in 12 weeks and decide whether to invest further. If they want a 6-month plan with 5 phases, MVP isn't the framing.

If you're building a critical service that must work right from day 1 (compliance, payments at scale), use the lifecycle chapters with a longer horizon, not MVP.

What is the cost of treating MVP like a normal project?

Three failure modes.

Scope creep into 6 months. Each "while we're at it..." adds weeks; by month 4, the MVP has become a v1 release. That is the worst of both worlds: too big for fast learning, too small for a real launch.

Over-engineered architecture. Microservices for 3 endpoints, Kafka for 100 events/day, multi-region for 50 users. Months spent on a platform the MVP doesn't need.

Decision paralysis. Treating every choice as load-bearing. The MVP's architecture is meant to be temporary; not every ADR deserves an hour.

What does the 12-week MVP timeline look like?

gantt
    title 12-Week MVP Plan
    dateFormat YYYY-MM-DD
    excludes weekends
    section Discovery
    Stakeholder interviews         :a1, 2026-06-22, 5d
    Kickoff doc + RACI             :a2, after a1, 3d
    section Build
    Sprint 1 (auth + skeleton)     :b1, 2026-07-06, 10d
    Sprint 2 (core feature 1)      :b2, after b1, 10d
    Sprint 3 (core feature 2)      :b3, after b2, 10d
    Sprint 4 (persist + integrate) :b4, after b3, 10d
    section Beta
    Internal beta (alpha users)    :c1, 2026-08-17, 5d
    Friendly customer beta         :c2, after c1, 5d
    section Launch
    Launch checklist + comms       :d1, 2026-08-31, 10d
    Public launch + monitor        :d2, 2026-09-14, 5d

The structure is brutal: 2 weeks discovery, 8 weeks build, 2 weeks launch. Anything that does not fit gets deferred to v2.
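The fixed-date discipline is easiest to see as a back-planning calculation: start from the immovable launch date and derive each phase's start by subtracting its length, so a slipped sprint forces a scope cut rather than a date move. A minimal sketch - the 2/8/2 phase split and the 2026-09-14 date come from this plan, everything else is illustrative:

```python
from datetime import date, timedelta

# Back-plan from an immovable launch date. Phase lengths follow the
# brutal 2/8/2 split; if a phase slips, scope shrinks, the date doesn't.
LAUNCH = date(2026, 9, 14)           # fixed - scope moves, the date does not

phases = [                           # (name, weeks), latest phase first
    ("Launch (beta, checklist, comms)", 2),
    ("Build (4 x 2-week sprints)", 8),
    ("Discovery (interviews, kickoff doc)", 2),
]

end = LAUNCH
for name, weeks in phases:
    start = end - timedelta(weeks=weeks)
    print(f"{start} -> {end}  {name}")
    end = start
# Working backwards, kickoff lands on 2026-06-22 - 12 weeks before launch.
```

Running it backwards like this is the point: the kickoff date is an output, not an input.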

What does the kickoff doc look like for an MVP?

# MVP Kickoff: {{ Product Name }}

## The hypothesis we're testing
{{ One sentence: users will pay/use/sign up for X because Y }}

## Success metric
{{ Specific, measurable: 100 sign-ups in 30 days post-launch }}

## In scope (Must list, all 5 items)
- M1: User can sign up with email
- M2: User can complete the core flow
- M3: We can charge them (if applicable)
- M4: We can see their behaviour (analytics)
- M5: They can contact us (support form)

## Out of scope (everything else)
- Multi-language
- Mobile app
- Advanced search
- Admin dashboard
- ...

## Architecture (intentionally simple)
- ASP.NET Core monolith
- PostgreSQL single instance
- Stripe for payments
- Mailgun for email
- Auth0 for auth
- No queues, no caches, no microservices

## Team
- TL/PM: {{ name }} (60% time)
- Engineers: 3 full-time
- Designer: half time
- Sponsor: VP Product

## Beta plan
- Week 9: 5 internal users
- Week 10: 20 friendly customers
- Week 11: 100 waitlist users (private launch)
- Week 12: public

## Date commitment
2026-09-14 public launch. If we cannot, we cut scope further -
date is fixed.

Two details matter. The hypothesis and success metric are explicit - this is what we're learning. The architecture section is short because the choices are deliberate defaults; only deviations from the default need rationale.
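The Must-list discipline is mechanical enough to check. A hypothetical sketch - the 5-7 band follows the MoSCoW guidance in the FAQ, and the function name and item strings are mine, not part of any real tooling:

```python
# Hypothetical guard for the kickoff doc's Must list: MoSCoW discipline
# keeps it at 5-7 items; everything else is explicitly out of scope.
MUST_MIN, MUST_MAX = 5, 7

def check_must_list(items: list[str]) -> list[str]:
    """Return a list of scope problems; empty means the list is honest."""
    problems = []
    if len(items) < MUST_MIN:
        problems.append("Must list too small - can launch still test the hypothesis?")
    if len(items) > MUST_MAX:
        problems.append("Must list too big - defer items to v2")
    return problems

must = [
    "M1: User can sign up with email",
    "M2: User can complete the core flow",
    "M3: We can charge them",
    "M4: We can see their behaviour",
    "M5: They can contact us",
]
assert check_must_list(must) == []               # five items: in the band
assert check_must_list(must + ["M6"] * 3) != []  # eight items: cut scope
```

A check this trivial still earns its keep in review: "while we're at it" additions have to argue with the band, not just with the reviewer.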

What artifacts run during the build phase?

flowchart LR
    Standup[Daily standup<br/>15 min] --> Sprint[Sprint planning<br/>2-week]
    Sprint --> Demo[End-of-sprint demo<br/>30 min sponsor + team]
    Demo --> Status[Weekly status<br/>1-page]
    Status --> Retro[End-of-sprint retro<br/>60 min]
    Retro --> Sprint

The MVP runs the lifecycle artifacts compressed: standup daily, 2-week sprints, and demo + status + retro at the end of each sprint. The demo is the secret weapon: the sponsor sees real progress every 2 weeks, and trust compounds.
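Written out as a sprint calendar, the cadence is: a standup on every working day, with the demo, status, and retro stacked on the sprint's final day. An illustrative sketch - the start date is sprint 1's from the plan above; the function and durations are assumptions:

```python
from datetime import date, timedelta

def sprint_cadence(start: date, working_days: int = 10):
    """Daily standup on weekdays; demo, status and retro on the final day."""
    events, day, seen = [], start, 0
    while seen < working_days:
        if day.weekday() < 5:                    # Monday-Friday only
            seen += 1
            events.append((day, "standup (15 min)"))
            if seen == working_days:             # sprint's last working day
                events.append((day, "demo (30 min, sponsor + team)"))
                events.append((day, "status (1-page)"))
                events.append((day, "retro (60 min)"))
        day += timedelta(days=1)
    return events

# Sprint 1 in the plan above starts Monday 2026-07-06.
for when, what in sprint_cadence(date(2026, 7, 6)):
    print(when.isoformat(), what)
```

Ten standups plus the three end-of-sprint rituals: thirteen fixed meetings per sprint, and nothing else on the calendar.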

How does an MVP scale (or not) to multi-team?

It doesn't. MVPs are single-team by design. If the work genuinely needs multiple teams, it's not an MVP - it's a program. The right shape then is for one team to build the MVP slice while other teams contribute as consulted dependencies.

If the MVP succeeds, then the program scales: more teams, Now/Next/Later roadmap, multiple status reports rolling up. But the MVP itself stays single-team.

What failure modes does the MVP plan introduce?

Two, both downstream of the fixed date.

Scope cut past the hypothesis. When the date bites, the reflex is to cut another Must item. Cut too far and the launch can no longer test the hypothesis - the 12 weeks teach nothing.

Temporary architecture that becomes permanent. The deliberately simple stack is right for the MVP, but if the product succeeds, the deferred engineering (observability, testing, scale) has to be scheduled deliberately, or it never happens.

When is MVP the wrong shape?

Two cases.

Already have product-market fit. If users are already using something, the next thing isn't an MVP - it's a feature addition with stakeholder expectations. Use the normal lifecycle chapters.

Compliance or safety-critical. Medical, financial, infrastructure software cannot ship an "MVP" with cut corners. Use the same kickoff/scope/launch artifacts but with a longer timeline and full quality bars.

Where should you go from here?

Next case study: legacy modernisation - the opposite shape, an old system being upgraded. After that, the vendor-managed project case study covers the case where most of the work happens outside your team.

Frequently asked questions

What goes into the MVP scope?
The smallest set of features that lets a real user accomplish a real outcome. Not 'minimum viable product' in the sense of 'all the features, but quick' - actually minimum. Use MoSCoW ruthlessly: 5-7 Must items, everything else deferred. Saying 'no' is the MVP's design discipline.
How small should the team be?
3-5 engineers, one PM (or tech lead playing PM), one designer at half time. Cross-functional inside one team. Larger teams slow MVPs because of coordination overhead. The PM/EM/TPM chapter covers role assumption when you can't have everyone.
Should the MVP architecture be production-ready?
Production-ready in correctness; not necessarily in scale. Default to PostgreSQL + ASP.NET Core monolith (System Design chapter 5). Skip microservices, skip Kafka, skip multi-region. The MVP must work for the first 1000 users; scale comes after product-market fit.
What's different about MVP launch vs normal launch?
Two things. The audience is smaller (beta users, friendlies) so launch tooling can be lighter. But the learning is more important - you need analytics, feedback channels, and a way to iterate fast. The launch chapter checklist still applies; add a 'how do we hear what users think' section.