DRAFT — This document is under development and not yet reviewed.

DVS Feature Kanban Flow

Updated 20 March 2026

A delivery system exists to move value from idea to outcome. But not all stages of that journey follow the same logic — and treating them as if they do is one of the most reliable ways to make a delivery system perform poorly.

The DVS Feature Kanban separates two fundamentally different types of work. Upstream, the system is building understanding and filtering options. The goal is not throughput — it is to ensure that the right ideas, defined well enough, are available when teams are ready to commit to them. At each upstream stage, a go/no-go decision can discard a weak candidate cheaply. This is the economic core of upstream kanban: the cost of rejecting an idea rises sharply the further it travels into the system. An idea rejected in the Funnel costs almost nothing. The same idea rejected mid-implementation has consumed weeks of team capacity and left partially done work behind.

Downstream, the logic changes completely. Once a feature is committed at PI Planning, the goal shifts to flow: moving work through development, integration, verification, and deployment with as little delay and variability as possible. Options thinking gives way to delivery discipline. WIP limits, explicit quality gates, and short feedback loops are the mechanisms that keep downstream flow healthy.

PI Planning is the structural boundary between the two. It is the point at which optionality is deliberately exchanged for commitment — where the highest-priority, well-understood features are sequenced into a PI plan by the teams who will build them. The cost of changing course rises significantly after this point, which is precisely why the upstream stages exist: to ensure that what crosses the PI Planning boundary is worth committing to.

WIP limits apply to the active stages in both sections. The appropriate limits depend on the DVS’s capacity and flow patterns and must be established empirically — there are no universal numbers. What is universal is the underlying principle: a system that allows unlimited work in progress will produce long wait times, frequent context switching, and low throughput. Little’s Law is unforgiving on this point.
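Little's Law can be made concrete in a few lines. The sketch below is purely illustrative: the WIP and throughput numbers are invented, not recommended limits. The point is the relationship itself: for a given throughput, average cycle time scales linearly with work in progress.

```python
# Illustrative numbers only (not recommended limits). Little's Law:
# average cycle time = average WIP / average throughput.

def avg_cycle_time(avg_wip: float, throughput_per_week: float) -> float:
    """Average time a feature spends in the system, in weeks."""
    return avg_wip / throughput_per_week

# A DVS that finishes 4 features per week:
print(avg_cycle_time(avg_wip=8, throughput_per_week=4))   # 2.0 weeks per feature
print(avg_cycle_time(avg_wip=24, throughput_per_week=4))  # 6.0 weeks: tripling WIP triples the wait
```

Allowing unlimited WIP does not make the system faster; it only stretches the time each item spends waiting.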


Four Continuous Phases

The ten stages of the DVS Feature Kanban are organized into four continuous phases. “Continuous” is deliberate — these phases run in parallel, not in sequence. While one feature is being explored upstream, others are being implemented downstream. The system is always in motion across all four phases simultaneously.

  • Continuous Exploration (Funnel → Exploring → Analyzing → PI Backlog): Build understanding, filter options, and prepare features that are worth committing to.
  • Continuous Dev & Integration (Committed → Implementing → SIT → Verification in Stage): Develop, integrate, and validate committed features until they are ready for production.
  • Continuous Deployment (Deploying to Production): Move validated features into the production environment, decoupled from the release decision.
  • Release on Demand (Releasing → Done): Make deployed features available to customers on a business-driven schedule.

PI Planning marks the boundary between Continuous Exploration and Continuous Dev & Integration. It is not a phase — it is the event that converts the best upstream candidates into downstream commitments.
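For readers who find a data structure clearer than prose, the phase-to-stage mapping can be sketched as a simple lookup. This is a hypothetical encoding for illustration only; the stage and phase names follow the document, and Done is included as the exit state of Release on Demand.

```python
from enum import Enum

class Phase(Enum):
    EXPLORATION = "Continuous Exploration"
    DEV_AND_INTEGRATION = "Continuous Dev & Integration"
    DEPLOYMENT = "Continuous Deployment"
    RELEASE_ON_DEMAND = "Release on Demand"

# Which continuous phase each kanban stage belongs to.
STAGES: dict[str, Phase] = {
    "Funnel": Phase.EXPLORATION,
    "Exploring": Phase.EXPLORATION,
    "Analyzing": Phase.EXPLORATION,
    "PI Backlog": Phase.EXPLORATION,
    "Committed": Phase.DEV_AND_INTEGRATION,
    "Implementing": Phase.DEV_AND_INTEGRATION,
    "SIT": Phase.DEV_AND_INTEGRATION,
    "Verification in Stage": Phase.DEV_AND_INTEGRATION,
    "Deploying to Production": Phase.DEPLOYMENT,
    "Releasing": Phase.RELEASE_ON_DEMAND,
    "Done": Phase.RELEASE_ON_DEMAND,  # exit state, not an active stage
}
```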


Flow Stages

1. Funnel

Every feature starts here. The Funnel holds all candidate work — new ideas, stakeholder requests, technical needs, system gap analysis findings, and enabler proposals. Entering the Funnel is not a commitment to deliver, or even to invest analysis effort. It is a deliberate signal that something may be worth the DVS’s attention.

The only question asked in the Funnel is whether an idea is worth exploring further. This is an intentionally low-cost filter. Ideas that are clearly not viable — too expensive, technically implausible, or of insufficient strategic relevance — are rejected here before any significant time is spent on them. An explicit rejection is a better outcome than silent drift into the next stage.

In: Feature and enabler ideas; exploration candidates; system gap analysis findings; stakeholder requests
Work: Assess viability: is this worth exploring further?
Who: Product Owners, System / Solution Architect, Product Manager
Decision: Go/no-go — Product Owners, Product Manager, System / Solution Architect
Out: Move to Exploring (go), or reject

2. Exploring

Exploring is where the DVS builds the understanding that Analyzing will later formalize. The distinction matters. Analyzing refines and defines — but only once the fundamental questions about a feature have been answered: Is this technically feasible? What are the architectural implications? What do customers and the business actually need here? What capability gaps does this expose?

These questions cannot be answered at a desk by a single person reviewing a brief. They require active inquiry — prototyping, spikes, stakeholder conversations, architectural assessment. Exploration groups are assembled with the specific expertise a given feature demands. There is no fixed composition; the group is built around the questions.

A feature entering Exploring is a hypothesis. It may emerge confirmed, reshaped, split into several features, or replaced by different features entirely. Some explorations will return new candidates to the Funnel rather than a single refined feature moving to Analyzing. This is not failure — it is the upstream filter working as intended.

In: Feature candidate with go decision from Funnel; responsible Product Owner identified; exploration group assembled
Work: Explore: technical and architectural implications, business needs, customer needs, system gap analysis. Summarize findings. Detail, split, or replace the feature as understanding develops.
Who: Responsible Product Owner; architects, domain experts, UX, Business Analysts with relevant knowledge
Decision: Involved Product Owners, System / Solution Architect, Product Manager — go/no-go to proceed to Analyzing
Out: Solution design; confirmed feature(s) for Analyzing; new feature ideas returned to Funnel if identified during exploration

3. Analyzing

By the time a feature enters Analyzing, the question of whether to build it is settled. The question now is what — defined precisely enough that teams can commit to it at PI Planning and begin implementation without rework.

This is where the benefit hypothesis is written, acceptance criteria defined, and Non-Functional Requirements identified. It is also where the feature is sliced to fit within a single PI and its dependencies are made explicit. Unresolved dependencies at PI Planning are the primary cause of execution failures during the PI — Analyzing is the last practical opportunity to surface them.

Economic inputs are also established here: Cost of Delay components are estimated, and the feature’s relationship to its parent initiative, the product vision, and the DVS’s strategic objectives is documented. It is important to be precise about what this is and is not. WSJF is a relative prioritization method — it ranks features against each other, and a ranking requires multiple features to compare. A WSJF score cannot be meaningfully calculated for a single feature in isolation. What Analyzing produces are the inputs to that ranking: the Cost of Delay components and the job size estimate. The ranking itself happens continuously in the PI Backlog as features accumulate.

WIP is limited in Analyzing. Analysis capacity is a real constraint — if too many features are in Analyzing simultaneously, none of them reach the required quality before PI Planning. Managing WIP here is not a process preference; it is an economic decision.

In: Feature candidate from Exploring with go decision; responsible Product Owner identified
Work: Define benefit hypothesis. Identify high-level solution and architectural intent. Define acceptance criteria. Identify Non-Functional Requirements. Identify dependencies. Slice and estimate size. Document strategic alignment. Estimate Cost of Delay components.
Who: Product Owners, Business Analysts, System / Solution Architect, exploration group members as needed
Decision: Involved Product Owners go/no-go; Product Management approval to enter PI Backlog
Out: Feature meeting all Definition of Ready criteria; Cost of Delay components estimated; strategic alignment documented

4. PI Backlog

The PI Backlog holds features that are ready to be committed — analyzed, approved, and waiting for the next PI Planning event. This is not a passive queue. It is an actively managed pool, continuously re-prioritized using WSJF as new features complete Analyzing and as business context shifts.

WSJF ranking is meaningful here precisely because the PI Backlog contains multiple features that can be compared against each other. The ranking answers a specific question: given the team capacity available at the next PI Planning, which combination of features delivers the most economic value? A feature that has been waiting in the PI Backlog for several PIs is not necessarily low-priority — it may have been consistently outranked by higher-urgency work. The backlog makes this visible and explicit.
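As a sketch of how this ranking works: WSJF divides Cost of Delay (the sum of its components) by job size, and the resulting scores are only meaningful in comparison with each other. The feature names and all component scores below are invented for illustration; in practice the inputs come from Analyzing.

```python
def wsjf(user_business_value: int, time_criticality: int, rr_oe: int, job_size: int) -> float:
    """WSJF = Cost of Delay / job size; CoD is the sum of its three components."""
    return (user_business_value + time_criticality + rr_oe) / job_size

# Hypothetical PI Backlog; all scores are invented for illustration.
backlog = {
    "Feature A": wsjf(8, 5, 3, 8),    # 2.0
    "Feature B": wsjf(5, 8, 2, 3),    # 5.0 (small job, high time criticality)
    "Feature C": wsjf(13, 2, 1, 8),   # 2.0
}

# Rank highest WSJF first. A single score in isolation means nothing;
# the ranking exists only relative to the other candidates.
ranked = sorted(backlog, key=backlog.get, reverse=True)
print(ranked)
```

Note how Feature B outranks A and C despite a lower raw value score: a small, time-critical job delivers its Cost of Delay reduction sooner, which is exactly the behavior WSJF is designed to reward.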

In: Feature meeting all Definition of Ready criteria; Product Management approval
Work: Continuous WSJF re-prioritization as new features arrive and context changes
Who: Product Manager, Product Owners, System / Solution Architect
Decision: Continuous prioritization by Product Manager and Product Owners; commit decision at PI Planning
Out: Highest-priority features enter PI Planning as candidates for commitment

PI Planning

PI Planning is the boundary event between Continuous Exploration and Continuous Dev & Integration. It is not a kanban stage — no development work happens here on individual features. It is the event at which teams collectively review the highest-priority features from the PI Backlog, assess capacity and dependencies, and produce a PI plan.

The features that enter the PI plan move to the Committed backlog. The features that do not are re-prioritized in the PI Backlog for the next PI. The cost of changing course after PI Planning rises significantly — a committed feature mid-implementation carries far more disruption to redirect than a feature sitting in the PI Backlog. This is why the upstream stages exist: not as bureaucratic gates, but as the mechanism that makes commitment meaningful.

PI Planning is described in detail in DVS Ways of Working.


5. Committed

The Committed backlog is the PI plan made explicit on the kanban board. It contains the features sequenced at PI Planning for the current PI. No active development work is in progress — Committed is a pull queue. Teams pull from it when capacity becomes available.

Committed is a distinct backlog from the PI Backlog, and the distinction is meaningful. A feature in the PI Backlog is a candidate — it has not been committed to by any team. A feature in Committed is a team commitment for the current PI, sequenced against real team capacity, with dependencies assessed. The PI Planning event created this list; it does not exist before that event.

In: Features sequenced into current PI at PI Planning
Work: None — this is a pull queue
Who: None — no active work happens in this stage
Decision: Teams pull when capacity is available
Out: Feature moves to Implementing when a team begins active development

6. Implementing

Active development. Teams decompose features into stories, build, and test the solution. Implementing is typically the longest active stage in the downstream flow, and the one most likely to surface the hidden cost of decisions made upstream — unclear acceptance criteria, underspecified NFRs, and unidentified dependencies all show up here.

Features spanning more than one PI are a signal worth examining. The cause is usually one of two things: the feature was not sliced sufficiently during Analyzing, or there is a systemic capacity or dependency problem at DVS level. Both are worth understanding. Extended implementation times reflect the system, not the teams.

In: Feature pulled from Committed; team capacity available
Work: Decompose into stories. Build and test: design, implementation, automated testing, security review, and integration.
Who: Development teams
Decision: Team Definition of Done governs exit
Out: Feature implementation complete; integration testing can begin

7. System Integration Test (SIT)

SIT verifies that features work correctly together — not just in isolation within a single team’s environment. Features are integrated with other features and components in a shared integration environment, and tested across the full system.

Long times in SIT are one of the most informative signals in the downstream flow. They rarely indicate that integration testing itself is slow — they indicate integration issues, environment instability, or quality problems introduced earlier that only become visible when features are combined. The root cause is almost always upstream.

Sub-states:

  • In Integration Test — active integration testing underway
  • Ready for VER — integration testing complete; no open blockers

In: Implementation complete per team Definition of Done
Work: Integration testing in shared environment across all features in the PI
Who: Development teams, QA / Test Lead
Decision: Pull policy governs exit — integration testing complete, no open blockers
Out: Feature integrated and tested across the system; ready for Verification in Stage

8. Verification in Stage

The feature is deployed to a production-like staging environment and verified against its defined acceptance criteria. This is the final confirmation that all quality gate criteria have been met before production deployment.

It is worth being precise about what this stage is not. It is not a separate acceptance test phase sitting at the end of the delivery process — a pattern inherited from waterfall thinking where a distinct group validates the system after development completes. Acceptance of functionality is built into the gate criteria throughout the flow. Verification in Stage confirms that those criteria have been met, in an environment that mirrors production closely enough to surface any remaining integration or environment-specific issues before they reach the customer.

Long times in Verification in Stage are a signal of quality issues introduced upstream, environment instability, or acceptance criteria that were insufficiently defined during Analyzing.

Sub-states:

  • In Verification — active verification underway in staging environment
  • Ready for PROD — verified and approved; awaiting production deployment

In: Integration testing complete; no open blockers
Work: Deploy to staging. Verify feature against acceptance criteria. Product Management approval.
Who: Test engineers, Product Owners, Product Management
Decision: Product Management approval; DoD 1 confirmation governs exit
Out: Feature verified; DoD 1 met; ready for Deploying to Production

9. Deploying to Production

The feature is deployed to the production environment. It may not yet be visible to customers — a feature toggle can hold it in a deployed but unreleased state until a business release decision is made. This decoupling of deployment from release is one of the more powerful capabilities a mature delivery system can develop. It allows the technical act of getting software to production to happen continuously and safely, independent of go-to-market timing, staged rollouts, or business readiness.
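The deploy/release decoupling described above can be illustrated with a minimal toggle sketch. This is a simplification with invented names; real systems typically use a toggle service or configuration store rather than an in-memory dict.

```python
# Minimal feature-toggle sketch (hypothetical names; real systems use a
# toggle service). Deploying and releasing are two separate operations.

toggles: dict[str, bool] = {}

def deploy(feature: str) -> None:
    """Deploying puts the code in production but leaves it dark (toggle off)."""
    toggles[feature] = False

def release(feature: str) -> None:
    """Releasing is the separate, business-driven decision: flip the toggle on."""
    toggles[feature] = True

def is_visible(feature: str) -> bool:
    return toggles.get(feature, False)

deploy("new-checkout")
assert not is_visible("new-checkout")  # in production, not customer-visible (DoD 2)
release("new-checkout")
assert is_visible("new-checkout")      # customer-visible (DoD 3)
```

The technical act (deploy) can happen continuously and safely; the business act (release) happens whenever go-to-market timing says it should.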

This stage should be short. Extended times here are a signal that the deployment pipeline needs investment — not that the stage itself needs more time.

Sub-states:

  • In Progress — deployment and deployment testing underway
  • Ready for Release — deployed to production; feature toggle in place where applicable; release decision pending

In: DoD 1 met; Product Management approval confirmed
Work: Deploy to production. Deployment testing. Activate feature toggle if release is deferred.
Who: Development teams, platform and operations
Decision: Business release decision — determines whether the feature moves immediately to Releasing or waits behind a toggle
Out: Feature live in production; DoD 2 met when release decision is confirmed

10. Releasing

The feature is made available to customers — immediately, incrementally, or on a business-driven schedule. Releasing is the point at which the delivery system’s work becomes visible to the people it exists to serve.

The distinction between Deploying and Releasing is not a technicality. It reflects a deliberate design choice: the technical decision to put software in production and the business decision to make it available to customers are separate concerns, made by different people at potentially different times. A feature can sit in production for days or weeks before release — behind a feature toggle, staged to a subset of users, or pending a coordinated go-to-market event. The delivery system’s job ends at Done. What the business does with that capability is a separate question.

In: DoD 2 met; release decision confirmed
Work: Enable feature for customers: toggle on, progressive rollout, or full release.
Who: Product Management, platform and operations
Decision: Release scope and timing — Product Management
Out: Feature customer-visible; DoD 3 met; benefit hypothesis evaluation can begin

Done

The feature has exited active DVS governance. It is available to customers, and the benefit hypothesis evaluation can begin.

Done marks the point at which the feature is released — not the point at which the benefit hypothesis is confirmed. Benefit hypothesis evaluation is a subsequent activity, typically tracked at portfolio level against the initiative the feature belongs to. A released feature that fails to deliver its expected outcome is not a delivery failure — it is a learning that should feed back into how future features in the same initiative are shaped and prioritized.


Policies

Definition of Ready

A feature must meet all of the following criteria before it enters PI Planning as a candidate for commitment. The Definition of Ready is not a checklist for its own sake — each criterion removes a specific category of risk from the PI. Features that arrive at PI Planning without meeting it create noise in the planning process: time spent estimating what should have been resolved upstream, commitments made on insufficient understanding, and execution problems that surface mid-PI when they are most expensive to resolve.

  • Benefit hypothesis: Defines the expected outcome — the why behind the feature. Teams need to know what value they are building toward.
  • High-level solution and architectural intent: Establishes a shared technical direction sufficient for teams to plan and identify dependencies. Does not need to be a full design.
  • Acceptance criteria: Defines how the team will know the feature is done. Prevents ambiguity at implementation time.
  • Non-Functional Requirements: Identifies the system-level constraints (performance, security, availability, compliance) the feature must satisfy.
  • Dependencies: Surfaces all known dependencies on other features, teams, or external systems. Unidentified dependencies are the primary cause of PI execution failures.
  • Sliced and estimated: Feature is sized to be deliverable within a single PI. Size estimate is available for PI Planning capacity planning.
  • Cost of Delay components estimated: Establishes the economic inputs needed for WSJF ranking in the PI Backlog — user and business value, time criticality, risk reduction and opportunity enablement, and job size estimate. Without these, the feature cannot be meaningfully ranked against other candidates.

Definitions of Done — Delivery Perspective

Three cumulative definitions of done mark the meaningful thresholds in a feature’s journey from verified to customer-available. Each represents a distinct state with its own decision logic.

  • DoD 1 — Ready for Deploy to Prod. Criteria: Implementation complete; integration testing passed; Verification in Stage passed; all acceptance criteria met. Meaning: The feature is technically ready for production. Deployment may be deferred — for example, pending feature toggle infrastructure.
  • DoD 2 — Ready for Release. Criteria: Feature deployed to production; feature toggle in place where applicable; release decision confirmed. Meaning: The feature is in production and awaiting customer activation. No technical blockers remain.
  • DoD 3 — Released. Criteria: Feature available to customers. Meaning: The feature has reached the customer. Benefit hypothesis evaluation can begin.

Kanban Pull Policies

A feature is not pulled forward until its pull policy is met. Making these policies explicit — visible on the board, agreed by the team — is one of the most effective ways to reduce variability and remove the ambiguity that otherwise leads to different people applying different standards to the same question.

  • Funnel → Exploring: Go decision confirmed
  • Exploring → Analyzing: Exploration complete; go decision confirmed
  • Analyzing → PI Backlog: All Definition of Ready criteria met; Product Management approval
  • PI Backlog → Committed: Sequenced at PI Planning
  • Committed → Implementing: Team capacity available
  • Implementing → SIT: Implementation complete per team Definition of Done
  • SIT → Verification in Stage: Integration testing complete; no open blockers
  • Verification in Stage → Deploying: DoD 1 met; Product Management approval
  • Deploying → Releasing: DoD 2 met; release decision confirmed
  • Releasing → Done: DoD 3 met
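One way to appreciate what "explicit" means here is to imagine the policies encoded as named predicates that must pass before a transition is allowed. The sketch below is hypothetical: the criterion flags and the Feature shape are invented simplifications of the real gate criteria, covering only a few downstream transitions.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    stage: str
    flags: set[str] = field(default_factory=set)  # gate criteria met so far

# Hypothetical encoding of a few downstream pull policies as named criteria.
PULL_POLICIES: dict[tuple[str, str], set[str]] = {
    ("Verification in Stage", "Deploying"): {"dod1", "pm_approval"},
    ("Deploying", "Releasing"): {"dod2", "release_decision"},
    ("Releasing", "Done"): {"dod3"},
}

def pull(feature: Feature, to_stage: str) -> bool:
    """Move the feature forward only if every criterion in the policy is met."""
    required = PULL_POLICIES.get((feature.stage, to_stage))
    if required is not None and required <= feature.flags:
        feature.stage = to_stage
        return True
    return False

f = Feature(stage="Verification in Stage", flags={"dod1"})
assert not pull(f, "Deploying")   # blocked: Product Management approval missing
f.flags.add("pm_approval")
assert pull(f, "Deploying")       # policy met: the feature moves
```

The value of the encoding is that there is exactly one answer to "can this move?" for everyone on the board, which is the same effect an explicit written policy has for the team.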

Flow Visualization

The ten stages span four continuous phases, separated by the PI Planning event. In the accompanying diagram, stages with a blue border are active work stages with WIP limits, and PI Planning (diamond) is the boundary event between Continuous Exploration and Continuous Dev & Integration — not a kanban stage.


Sources

  • Anderson, D.J., Kanban (2010). Foundational source for upstream kanban, WIP limits, and explicit policies. The principle that upstream flow is about managing options — not maximizing throughput — is central to how the Continuous Exploration phase is designed.
  • Reinertsen, D.G., The Principles of Product Development Flow (2009). Economic framework for Cost of Delay, WSJF as a relative prioritization method, the cost of queues, and the options thinking that underpins the upstream stages.
  • Humble, J. & Farley, D., Continuous Delivery (2010). Source for the deployment vs. release distinction and the role of feature toggles in decoupling technical delivery from business release decisions.
  • Benson, J. & DeMaria Barry, T., Personal Kanban (2011). The principle that making policies explicit reduces variability and removes ambiguity at stage transitions.

ASP Content Rules v1.9 — ArdorX Consulting AB