DRAFT — This document is under development and not yet reviewed.

DVS Flow Metrics

This framework defines metrics for measuring and managing the flow of features through the DVS Feature Kanban system. It gives Development Value Streams visibility into how features move through the delivery system — where they flow freely, where they stall, and how predictably the DVS delivers against its commitments.


Measurement Scope

What We Measure

Two measurement domains:

Flow Metrics — How features move through the DVS: time per stage, WIP accumulation, throughput, and distribution across feature types.

Predictability Metrics — How reliably the DVS delivers what it commits to at PI Planning.

What We Do Not Measure

Business outcome metrics — whether the features delivered produced the expected customer and business outcomes. That requires a separate measurement capability, connected to product analytics and business KPIs.


What is a Feature?

A Feature is a service that fulfills a stakeholder need — sized to be deliverable by a single DVS within a Program Increment, with a benefit hypothesis and acceptance criteria.

Feature types:

| Type | Description | Typical distribution |
| --- | --- | --- |
| Business Feature | Delivers direct customer value | 70–80% |
| Enabler Feature | Supports future business functionality — architecture, infrastructure, compliance | 20–30% |

A healthy DVS maintains a conscious balance between Business and Enabler Features. A distribution heavily skewed toward Business Features may indicate accumulating technical debt. A distribution heavily skewed toward Enablers may indicate that customer value delivery has stalled.


Feature-Level Metrics

14 metrics in total across two categories.

Flow Time Metrics (7)

All flow time metrics use calendar days.

1. Flow Time — TTM (Time to Market): total elapsed time from entering Analyzing to Done. Formula: Date entered Done − Date entered Analyzing.

2. Flow Time — Analyzing: time spent in the analysis stage. Formula: Date exited Analyzing − Date entered Analyzing.

3. Flow Time — Backlog: wait time after approval before implementation begins. Backlog wait time is pure opportunity cost — value approved but not yet delivered. Formula: Date exited Backlog − Date entered Backlog.

4. Flow Time — Implementing: typically the longest stage. Features that take significantly longer than one PI to implement warrant investigation. Formula: Date exited Implementing − Date entered Implementing.

5. Flow Time — Validating on Staging: long validation times are often a signal of integration issues or quality problems introduced upstream. Formula: Date exited Validating − Date entered Validating.

6. Flow Time — Deploying to Production: should be short. Persistent delays here indicate that pipeline investment is needed. Formula: Date exited Deploying − Date entered Deploying.

7. Flow Time — Releasing: time from production deployment to customer availability. Formula: Date exited Releasing − Date entered Releasing.
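As a sketch of how these stage times can be derived from stage-entry timestamps, the snippet below computes each metric as the difference between consecutive entry dates. The dictionary layout and the dates are illustrative assumptions, not a schema the framework prescribes:

```python
from datetime import date

# Hypothetical stage-entry log for one feature: stage -> date the feature entered it.
# Stage names follow the Kanban stages described above.
stage_entered = {
    "Analyzing":    date(2024, 3, 1),
    "Backlog":      date(2024, 3, 9),
    "Implementing": date(2024, 3, 21),
    "Validating":   date(2024, 4, 25),
    "Deploying":    date(2024, 5, 1),
    "Releasing":    date(2024, 5, 4),
    "Done":         date(2024, 5, 7),
}

STAGES = ["Analyzing", "Backlog", "Implementing", "Validating",
          "Deploying", "Releasing", "Done"]

def flow_times(entered):
    """Calendar days per stage: date entered next stage - date entered this stage."""
    times = {}
    for current, nxt in zip(STAGES, STAGES[1:]):
        times[current] = (entered[nxt] - entered[current]).days
    # TTM spans the whole flow, from entering Analyzing to entering Done.
    times["TTM"] = (entered["Done"] - entered["Analyzing"]).days
    return times

print(flow_times(stage_entered))
# {'Analyzing': 8, 'Backlog': 12, 'Implementing': 35, 'Validating': 6,
#  'Deploying': 3, 'Releasing': 3, 'TTM': 67}
```

Because each stage exit is the next stage's entry, the per-stage times sum exactly to TTM.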


WIP Age Metrics (7)

WIP Age measures how long a feature has been in its current stage. Used to identify stalled work before it becomes a blocker.

8. WIP Age — TTM: total time the feature has been active. Formula: Current Date − Date entered Analyzing.

9. WIP Age — Analyzing: features stalling in Analyzing often indicate unclear scope, missing stakeholder input, or analysis capacity constraints.

10. WIP Age — Backlog: the most significant WIP age metric at DVS level. Long Backlog wait times mean approved, valuable work is sitting idle — a direct cost to the system.

11. WIP Age — Implementing: features significantly exceeding one PI in implementation are a system signal. Either the feature needs splitting, or there is a dependency, capacity, or complexity issue worth surfacing.

12. WIP Age — Validating on Staging: long staging validation times indicate integration or quality issues that should be traced back to their root cause in the flow.

13. WIP Age — Deploying to Production: should be very short. Persistent age here indicates a deployment process problem.

14. WIP Age — Releasing: extended release times may indicate release strategy issues, go-to-market dependencies, or risk aversion in the release process.
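A minimal sketch of the "identify stalled work early" check: given the date each in-flight feature entered its current stage, flag any feature whose WIP age exceeds a per-stage threshold. The feature records and threshold values are illustrative placeholders, not values the framework prescribes:

```python
from datetime import date

# Hypothetical in-flight features: (feature id, current stage, date entered that stage).
wip = [
    ("F-101", "Implementing", date(2024, 4, 2)),
    ("F-102", "Backlog",      date(2024, 2, 15)),
    ("F-103", "Validating",   date(2024, 5, 30)),
]

# Illustrative per-stage age thresholds in calendar days; tune to your PI cadence.
THRESHOLDS = {
    "Analyzing": 14, "Backlog": 21, "Implementing": 70,
    "Validating": 14, "Deploying": 3, "Releasing": 7,
}

def stalled(features, today):
    """Return (id, stage, age) for features older in their stage than the threshold."""
    flagged = []
    for fid, stage, entered in features:
        age = (today - entered).days
        if age > THRESHOLDS[stage]:
            flagged.append((fid, stage, age))
    return flagged

print(stalled(wip, date(2024, 6, 10)))
# [('F-102', 'Backlog', 116)]
```

Run per iteration, this surfaces the stalled Backlog item (metric 10 above) long before it blocks the PI.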


DVS-Level Metrics

5 metrics in total.

Flow Metrics (4)

1. Flow Velocity Features completed per PI. The primary measure of DVS throughput capacity. Velocity should be tracked as a trend over multiple PIs — a single PI’s velocity is not meaningful in isolation.

2. Flow Time Median feature flow time from Analyzing to Done, total and per stage.

The phase breakdown reveals where time is spent in the system. A healthy distribution typically shows Implementing as the largest component, with Validating, Deploying, and Releasing together accounting for a small fraction of total time — a signal that the technical delivery pipeline is functioning well.

Example (well-functioning DVS, 48 features in PI):

  • Median TTM: 67 days
  • Analyzing: 8 days
  • Backlog: 12 days
  • Implementing: 35 days
  • Validating: 6 days
  • Deploying: 3 days
  • Releasing: 3 days

3. Flow Load (WIP) Total features in active work across all stages, segmented by stage and type. This is a direct measure of system load.

High WIP relative to throughput causes longer flow times — a direct expression of Little’s Law. When the DVS has significantly more features in flight than it can complete in a PI, everything slows down. Reducing WIP is often the fastest way to reduce lead times.

Monitoring the distribution across stages reveals where work accumulates — which stages are acting as system constraints.
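The Little's Law relationship above can be made concrete with a back-of-the-envelope calculation. The WIP and throughput figures are assumed for illustration, and note that Little's Law yields an average flow time, whereas the Flow Time metric above reports a median:

```python
# Little's Law: average flow time ~= WIP / throughput.
# Assumed figures: 46 features in active work, 48 completed per 70-day PI.
wip_features = 46
throughput_per_day = 48 / 70   # features completed per calendar day

avg_flow_time_days = wip_features / throughput_per_day
print(round(avg_flow_time_days, 1))  # 67.1
```

Holding throughput fixed, halving WIP halves the expected flow time, which is why reducing WIP is often the fastest lever on lead times.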

4. Flow Distribution The balance between Business Features and Enabler Features in the active flow. An imbalance in either direction is a signal worth examining — not necessarily a problem, but always worth understanding.
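A flow distribution snapshot is a simple tally of active features by type. The counts below are hypothetical, chosen to land inside the typical 70–80% / 20–30% split described earlier:

```python
from collections import Counter

# Hypothetical active flow: 34 Business Features, 12 Enabler Features.
active = ["business"] * 34 + ["enabler"] * 12

def distribution(features):
    """Percentage share of each feature type in the active flow."""
    counts = Counter(features)
    total = len(features)
    return {ftype: round(100 * n / total, 1) for ftype, n in counts.items()}

print(distribution(active))  # {'business': 73.9, 'enabler': 26.1}
```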

Predictability Metric (1)

5. Flow Predictability: (Features committed at PI Planning that reached Done / Total features committed) × 100

Example: 42 of 50 committed features reached Done → Predictability = 84%
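The formula translates directly into code; a one-line sketch reproducing the worked example:

```python
def flow_predictability(committed, done):
    """(Committed features that reached Done / total committed) * 100."""
    return 100 * done / committed

print(flow_predictability(50, 42))  # 84.0
```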

Target range: 80–100%

| Range | Interpretation |
| --- | --- |
| 80–100% | Healthy — the DVS understands its capacity and commits reliably |
| 70–79% | Improvement needed — investigate overcommitment, decomposition, or dependency patterns |
| <70% | Systemic issues — overcommitment, poor feature decomposition, unplanned work, or dependency failures |

Low predictability is a system signal. Before treating it as a team performance issue, examine the system: are features well-decomposed? Are dependencies identified and managed? Is unplanned work disrupting committed flow?


Review Cadence

Per iteration — Quick health check on WIP Age and Flow Load. Identify stalled features early.

End of PI (Inspect and Adapt) — Complete review of all metrics. Use flow data to inform the improvement backlog.

PI Planning — Use historical velocity and flow time data to inform realistic capacity and commitment decisions.

Monthly with leadership — Report on trends and predictability. Connect DVS flow health to portfolio-level outcomes.


Framework Roots

  • SAFe — Program Kanban, PI Planning, and Flow Predictability as a PI-level metric
  • Lean / Little’s Law — WIP and flow time relationship; flow as the primary optimization target
  • Don Reinertsen — Flow economics and the cost of WIP in product development
  • DORA metrics — Deployment frequency and lead time as complementary technical flow indicators
  • Mik Kersten (Project to Product) — Flow metrics applied to software delivery at scale