Methodology Intermediate 5 min read

Kanban Flow: WIP Limits, Cycle Time, Continuous Delivery

How Kanban actually works for software teams: WIP limits, cycle time, throughput. The board template and the metrics that matter, without inventing sprints.

Table of contents
  1. When does Kanban genuinely fit a team?
  2. What is the cost of running Kanban without WIP limits?
  3. What does the minimal Kanban board look like?
  4. What are the concrete metrics and how do you collect them?
  5. How does Kanban scale to multi-team?
  6. What failure modes does Kanban introduce?
  7. When is Kanban the wrong choice?
  8. Where should you go from here?

The Kanban method has one rule that changes everything: limit work in progress. Everything else - the board, the metrics, the ceremonies you don't have - flows from that rule. This chapter shows the minimal Kanban setup that works for software teams, the WIP limits that catch overload before it shows up in burnout, and when to pick Kanban over Scrum.

When does Kanban genuinely fit a team?

Three signals.

Continuous-arrival work. Tickets, incidents, requests from other teams. The work shows up daily in unpredictable shape; you cannot batch it into two-week sprints without lying to yourself about what's coming.

Mixed work types. A team that does features + bug fixes + ops support has trouble fitting all three into Scrum's single-goal sprint. Kanban handles the mix natively.

The team is interrupt-driven. Platform, infra, and support teams get pulled into other teams' urgencies. Kanban acknowledges this; Scrum fights it.

If the work is feature-driven with predictable arrival rate and the team has a single product owner, Scrum often beats Kanban.

What is the cost of running Kanban without WIP limits?

Three failure modes.

Everything in progress, nothing done. Each engineer juggles 4-5 cards. Context switching kills throughput. The board looks busy; the released-software graph is flat.

Stale work in review. PRs sit in review for days because reviewers are pulled to start new work instead of finishing old work. Cycle time inflates.

Hidden bottleneck. One column (often QA or review) builds up silently. Nobody can see the queue is the problem because no WIP limit catches it. The team blames "things are slow" without a specific cause.
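A WIP limit catches that hidden queue mechanically; without one, you can still spot it by diffing daily column counts. A minimal sketch, with hypothetical snapshot data standing in for whatever your tracker's API returns:

```python
# Spot a silently growing column from daily card counts per column.
# The snapshots below are hypothetical; in practice they come from
# your tracker's API, one per day.
snapshots = [
    {"Doing": 3, "Review": 2, "QA": 1},
    {"Doing": 3, "Review": 2, "QA": 3},
    {"Doing": 2, "Review": 3, "QA": 5},
    {"Doing": 3, "Review": 2, "QA": 7},
]

def growing_columns(snapshots, min_growth=2):
    """Columns whose card count grew by at least min_growth over the window."""
    first, last = snapshots[0], snapshots[-1]
    return {col: last[col] - first[col]
            for col in first
            if last[col] - first[col] >= min_growth}

print(growing_columns(snapshots))  # {'QA': 6} - QA went from 1 to 7: the hidden queue
```

Doing and Review hold steady while QA climbs every day; that climb, not the busy board, is the signal.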

What does the minimal Kanban board look like?

For a 5-engineer team:

flowchart LR
    BL[Backlog<br/>no limit] --> Ready[Ready<br/>WIP 5]
    Ready --> Doing[Doing<br/>WIP 3]
    Doing --> Review[Review<br/>WIP 3]
    Review --> Done[Done<br/>this week]
    style Doing fill:#fef3c7,stroke:#d97706
    style Review fill:#fef3c7,stroke:#d97706

Five columns, three of them limited. Backlog is the prioritised list of what's coming; Ready holds items refined enough to pull, capped at 5 so refinement doesn't run ahead of delivery; Doing and Review are limited so work cannot pile up; Done is this week's output (cleared weekly to a "shipped" archive).

The WIP limits (5, 3, 3) are starting numbers. Adjust based on metrics - see the failure modes section.
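The pull rule is the whole enforcement mechanism: a card only moves right if the target column has room. A minimal sketch, with limits mirroring the board above and a hypothetical in-memory board:

```python
# Refuse a pull that would break a column's WIP limit.
# Limits mirror the board above; columns without an entry are unlimited.
WIP_LIMITS = {"Ready": 5, "Doing": 3, "Review": 3}

def can_pull(board, column):
    """True if the column has room under its WIP limit (no limit = always room)."""
    limit = WIP_LIMITS.get(column)
    return limit is None or len(board.get(column, [])) < limit

board = {"Doing": ["card-1", "card-2", "card-3"], "Review": ["card-4"]}
can_pull(board, "Doing")   # False: Doing is at its limit of 3
can_pull(board, "Review")  # True: room for two more
```

When `can_pull` says no, the correct move is to finish or unblock an existing card, not to start a new one.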

What are the concrete metrics and how do you collect them?

Four numbers, computed from each card's column-change timestamps:

team_kanban_dashboard:
  period: "Week of 2026-06-01"

  throughput:
    cards_completed_this_week: 9
    average_last_4_weeks: 8.2
    trend: stable

  cycle_time_days:
    p50: 2.5
    p95: 7.0
    target_p95: 7.0
    note: "p95 within target; one outlier was a vendor-blocked card"

  wip_history:
    monday: 3
    tuesday: 3
    wednesday: 4    # one card pulled while another in review
    thursday: 3
    friday: 3

  flow_efficiency:
    active_time_percent: 35
    waiting_time_percent: 65
    note: "65% waiting is typical; reduce by limiting WIP further"

Two details. Cycle time runs from the moment a card is pulled into Doing until it moves to Done; time spent waiting in Backlog doesn't count. Throughput is what the team actually delivers each week, and it is the number to defend in status reports.
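Computing the dashboard numbers from those timestamps is a few lines. A sketch with three hypothetical cards standing in for the tracker's history export (a nearest-rank percentile keeps the arithmetic transparent):

```python
# Compute the dashboard numbers from column-change timestamps.
# The three cards are hypothetical; real ones come from the tracker's history.
import math
from datetime import date
from statistics import median

cards = [
    {"doing": date(2026, 6, 1), "done": date(2026, 6, 3)},  # 2 days
    {"doing": date(2026, 6, 1), "done": date(2026, 6, 4)},  # 3 days
    {"doing": date(2026, 6, 2), "done": date(2026, 6, 9)},  # 7 days, the outlier
]

def percentile(values, pct):
    """Nearest-rank percentile: smallest value covering pct% of the data."""
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Cycle time = Doing -> Done; Backlog wait is deliberately excluded.
cycle_times = [(c["done"] - c["doing"]).days for c in cards]
throughput = len(cards)  # cards completed in the reporting week
p50, p95 = median(cycle_times), percentile(cycle_times, 95)
print(p50, p95, throughput)  # 3 7 3
```

The p95 deliberately includes the outlier; that is the number that tells you whether slow cards are a pattern or a one-off.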

How does Kanban scale to multi-team?

Two patterns.

Per-team Kanban with shared dashboard:

flowchart TB
    Team1[Team A board] --> Dash[Org dashboard:<br/>cycle time + throughput]
    Team2[Team B board] --> Dash
    Team3[Team C board] --> Dash
    Dash --> Review[Monthly flow review]

Each team runs its own board with its own WIP limits. A monthly flow review compares cycle times across teams to surface bottlenecks (Team C consistently slow? maybe they need help or the work shape is wrong).

Cross-team Kanban for a single program: when a feature flows through 3 teams (mobile, backend, ops), one shared board with columns per team visualises the handoffs. WIP limits per column prevent any team from being overloaded by upstream.
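The monthly flow review's comparison can be mechanised too. A sketch, with hypothetical team names and cycle times, flagging any team running well above the org-wide median:

```python
# Monthly flow review: flag teams whose median cycle time is well above
# the org-wide median. Team names and numbers are hypothetical.
from statistics import median

team_cycle_times_days = {
    "Team A": [2, 3, 2, 4, 3],
    "Team B": [3, 2, 3, 3, 4],
    "Team C": [6, 8, 5, 9, 7],  # consistently slow: worth a closer look
}

team_medians = {team: median(days) for team, days in team_cycle_times_days.items()}
org_median = median(team_medians.values())

# Flag any team running at 1.5x the org median or worse.
flagged = [t for t, m in team_medians.items() if m >= 1.5 * org_median]
print(flagged)  # ['Team C']
```

The flag is a conversation starter, not a verdict: Team C may need help, or their work shape may simply be different.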

What failure modes does Kanban introduce?

Three, each visible in the metrics.

WIP limits set too tight. Squeeze the Doing limit too far and throughput drops because engineers sit idle waiting for a slot. If throughput falls noticeably after lowering a limit, raise it slightly and re-measure.

Improvement stalls. Kanban's retros (kaizen events) are optional in practice, so a team can run the same board unchanged for a year. Put a recurring flow review on the calendar so the WIP limits and columns actually get questioned.

No natural cadence for stakeholders. Continuous delivery means there is no sprint review to rally around. Report throughput and cycle time on a fixed schedule even though the work itself flows continuously.

When is Kanban the wrong choice?

Three cases.

Stakeholder demands sprint-style commitment. Marketing wants "these 5 features ship in 2 weeks". Kanban cannot promise that; Scrum can. Choose Scrum or run a hybrid where Kanban handles ops and Scrum handles features.

Team needs forced retrospection. Some teams skip improvement entirely without Scrum's mandatory retro. Kanban does have its own retro (kaizen events) but it's optional in practice; if the team will not self-organise to do them, Scrum's discipline is useful.

Project with hard deadline. Kanban is great for steady-state delivery, weaker for "we must hit X by Y". When the deadline is tight, switch to a project-style plan (planning chapter) and use Kanban as the execution mechanism inside it.

Where should you go from here?

Next chapter: method selection - the decision tree for choosing Scrum, Kanban, hybrid, or waterfall based on the work in front of you. After that, the lifecycle chapters walk a project end-to-end.

Frequently asked questions

How is WIP limit different from a backlog priority?
Backlog priority orders what to do next; WIP limit caps how many things are in progress simultaneously. The trick is that low WIP forces fast finishing - if only 3 items can be in Doing at once, the team finishes them before pulling new ones. Most teams accidentally have unlimited WIP and wonder why nothing ever ships; setting a WIP limit fixes that overnight.
What's a good WIP limit?
Roughly the number of engineers minus 1 or 2, so people occasionally pair on hard items rather than each running their own card. A 5-engineer team typically caps Doing at 3-4. Watch the metrics for two weeks; if cycle time drops and throughput stays the same or rises, you found the right number. If throughput drops noticeably, raise it slightly.
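That rule of thumb is simple enough to write down. A sketch, with the function name and floor of 1 being my own framing:

```python
# Rule of thumb from above: starting Doing limit = engineers minus 1 or 2,
# so there is slack for pairing on hard items. Floor of 1 so tiny teams
# still get a usable limit.
def starting_wip_limit(engineers, pairing_slack=2):
    """Starting WIP limit for the Doing column."""
    return max(1, engineers - pairing_slack)

starting_wip_limit(5)  # 3, matching the board in this chapter
```

Treat the result as a starting point, then tune against two weeks of cycle-time and throughput data as described above.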
Can I use Kanban for feature delivery?
Yes, but you lose the cadence Scrum gives stakeholders. Kanban delivers continuously, which is great for ops/platform/support but harder to align with marketing launches. Many teams run Kanban internally but report progress on a 2-week cadence externally to give stakeholders a regular surface. The stakeholder chapter covers the framing.
What metrics replace sprint velocity?
Cycle time (median + p95) and throughput per week. Cycle time tells you how fast individual items move; throughput tells you the team's overall delivery rate. Plot a cumulative flow diagram in your tool of choice. Avoid story-point-based metrics in Kanban - the philosophy is item count over estimate accuracy.