An Initiative is not just a project; it is a collaborative vehicle designed to navigate uncertainty. This guide defines the primitives, roles, and dimensions required to steward an idea from genesis to impact.
The Vehicle: the structured container for collaboration, subject to strict governance of Mission, Roles, and Roadmap.
The Territory: the broader context where the Initiative operates, containing stakeholders, other systems, and ambient constraints.
Traditional project management assumes a known path to a fixed target. Real innovation is a walk through fog. We replace "Management" with Stewardship—acknowledging that while we cannot control the Territory (Uncertainty), we can rigorously define the Vehicle (Agency) used to navigate it.
Every Initiative must have exactly one Driver. If two people are driving, no one is driving. The Driver has final decision authority and accountability.
An Initiative must operate within a clearly defined Territory Boundary. We cannot "fix everything." We must define what is IN and what is OUT of scope.
Define the "What" and "Why" (Mission), but trust the Team with the "How" (Roadmap). Empowerment requires clear intent, not micromanagement.
We do not build "Requirements"; we test "Hypotheses". Every feature is a bet. State the expected outcome, and measure it.
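To make this concrete, a Hypothesis can be captured as a small structured record. The sketch below is illustrative only: the field names (intervention, expectedOutcome, measure) and the example bet are assumptions chosen to mirror the "If we do X, then Y will happen, measurable by Z" formulation used later in this guide.

```typescript
// Illustrative shape of a Hypothesis; field names are assumptions, not prescribed.
interface Hypothesis {
  intervention: string;     // "If we do X..."
  expectedOutcome: string;  // "...then Y will happen..."
  measure: string;          // "...measurable by Z."
  result?: "supported" | "refuted" | "inconclusive"; // recorded after testing
}

// A hypothetical bet, stated so that it can be falsified.
const checkoutBet: Hypothesis = {
  intervention: "Reduce the checkout flow from five steps to two",
  expectedOutcome: "Fewer carts are abandoned",
  measure: "Cart abandonment rate over a two-week A/B test",
};
```

Writing the bet this way forces the expected outcome and its measurement to exist before the work starts.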
Treating a button change like a platform rewrite is bureaucratic suicide. Treating a platform rewrite like a button change is negligence. We calibrate our governance to the Physics of the Territory—scaling investment and scrutiny proportional to the system's gravity.
| Scale | Time Horizon | Typical Scope |
|---|---|---|
| Element | Days/Weeks | Login Button, Search Input |
| Feature | Weeks/Months | User Auth, Checkout Flow |
| Product | Months/Years | Mobile App, Admin Dashboard |
| Platform | Years | Identity Service, Design System |
| Ecosystem | Decades | Developer Community, App Store |
The first dimension, Scale, asks: how big is the territory? An Element, for example, is a single building block or atomic unit (e.g., "The Login Button") with a time horizon of days to weeks.
The second dimension, Epistemic State, asks: how much do we know?
Activity is not progress. Teams burn 100% of their energy building the wrong thing. We replace "Task Completion" with Epistemic State—measuring progress by the reduction of uncertainty. We do not ask "Are you done?"; we ask "How confident are you?"
Exploring: Mapping the territory. High uncertainty. Problem space definition.
Shaping: Defining the bet. Bounding the system. Writing the pitch.
Proving: Testing the core hypothesis. Prototyping. "Tracer bullets".
Scaling: Production execution. Building the full system. High investment.
Observing: Live in production. Gathering feedback. Maintenance mode.
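A minimal sketch of how this progression might be represented, assuming that phase order encodes rising confidence and that a transition requires a documented justification (both ideas come from this guide; the code itself is illustrative, not a prescribed implementation).

```typescript
// Lifecycle phases, ordered from low to high epistemic confidence.
const PHASES = ["Exploring", "Shaping", "Proving", "Scaling", "Observing"] as const;
type Phase = (typeof PHASES)[number];

// Progress is confidence gain: position on this scale, not tasks completed.
function confidenceRank(phase: Phase): number {
  return PHASES.indexOf(phase);
}

// A transition is deliberate: it requires a documented justification
// (the Driver's phase-transition artifact described later in this guide).
function canAdvance(current: Phase, rationaleRecorded: boolean): boolean {
  return rationaleRecorded && confidenceRank(current) < PHASES.length - 1;
}
```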
Distributed responsibility is no responsibility. The "Committee" is where accountability goes to die. We define roles by their Relationship to the Work, enforcing a single-threaded Driver for every initiative to ensure that difficult trade-offs are made, not avoided.
The Driver is the primary owner and decision-maker for the initiative.
Territory Map: A visual or written representation of the problem space boundaries.
"Because Exploring produces understanding of the problem space, the Driver must synthesize discoveries into a coherent map of the territory. This artifact defines the boundaries (in-scope vs. out-of-scope) and provides the foundation for all subsequent decisions. Without a map, the team wanders."
Mission Statement: The articulated purpose and success criteria for the initiative.
"Because an initiative without a clear "why" drifts into feature factories, the Driver must articulate the Mission early (Exploring) and refine it as constraints emerge (Shaping). The Mission is the North Star that guides trade-off decisions."
Hypothesis: A falsifiable prediction about what change will achieve the desired outcome.
"Because Shaping is about "betting" on a direction, the Driver must formulate falsifiable hypotheses. A hypothesis transforms vague intent into testable predictions: "If we do X, then Y will happen, measurable by Z." This is the core output of Shaping."
Roadmap: A prioritized sequence of hypotheses to test.
"Because hypotheses must be sequenced by value and dependency, the Driver owns the Roadmap. It is not a feature list but a sequence of bets ordered by learning value. The Roadmap evolves as hypotheses are proven or disproven."
Decision Record: Documentation of significant decisions with context and rationale.
"Because the Driver makes final decisions, they must capture the rationale for future Thinkers. A decision without recorded reasoning becomes tribal knowledge—invisible, fragile, and lost when people leave. Decision Records are the antidote to amnesia."
Status Update: Regular communication of progress, blockers, and next steps.
"Because stakeholders need visibility without micromanaging, the Driver provides regular Status Updates during active phases. These communicate progress, blockers, and upcoming milestones. Status Updates are supporting material, not deliverables."
Phase Transition Rationale: Justification for moving between lifecycle phases.
"Because lifecycle transitions are significant epistemic shifts, the Driver must document why the initiative is ready to move forward (or backward). This prevents premature scaling and ensures transitions are deliberate, not accidental."
The Contributor is an active participant who does the work.
Spike Report: Findings from time-boxed technical or domain investigations.
"Because Exploring requires hands-on investigation of unknowns, Contributors conduct "spikes"—time-boxed explorations of technical or domain uncertainty. Spike Reports capture findings that feed into the Territory Map. They answer: "Can we do this? How?""
Technical Advisory: Expert input on feasibility, risks, and constraints.
"Because the Driver needs expert input to shape realistic hypotheses, Contributors provide Technical Advisories during Shaping. These capture constraints, risks, and recommendations that inform the bet. "Here is what we know about the cost of X.""
Prototype: A minimal implementation to test the core hypothesis.
"Because Proving tests hypotheses with minimal investment, Contributors build Prototypes. A Prototype is not production code—it is a "tracer bullet" designed to validate or invalidate the core assumption. Speed and learning matter more than polish."
Hypothesis Results: Documented outcomes of hypothesis testing.
"Because hypotheses are bets, we must record outcomes. Contributors document Hypothesis Results: what was tested, what was observed, and whether the hypothesis was supported. This is the empirical record that informs the next iteration."
Production Code: Production-quality implementation of the validated design.
"Because Scaling is full-investment execution, Contributors produce Production Code. This is the real system, built to production standards. It is the culmination of validated hypotheses, not speculative features."
Documentation: Technical and operational documentation for the system.
"Because systems outlive their creators, Contributors produce Documentation during Scaling and Observing. This includes API docs, runbooks, and architectural decisions. Good documentation enables the next generation of Contributors."
Blocker Report: Structured documentation of obstacles requiring escalation.
"Because Contributors encounter obstacles that require escalation, they produce Blocker Reports. These are not complaints but structured requests for help: "I am blocked by X, I need Y to proceed." This enables the Sponsor to remove organizational friction."
The Observer is a stakeholder who watches progress and provides feedback.
Requirements Insights: External perspective on user needs and constraints.
"Because Observers represent external interests, they contribute Requirements Insights during early phases. These are not traditional "requirements" but observations about user needs, market conditions, or regulatory constraints that inform the hypothesis."
Acceptance Criteria: Criteria that define successful hypothesis validation.
"Because hypotheses need success criteria, Observers define Acceptance Criteria during Shaping. These answer: "How will we know the hypothesis succeeded?" Observers bring the outside-in perspective that prevents teams from declaring victory prematurely."
Feedback: Structured input on system behavior and user experience.
"Because systems exist to serve users, Observers provide Feedback during active phases. This is structured input on whether the system meets expectations. Feedback is a record that accumulates, informing future iterations."
Bug Report: Documentation of unexpected system behavior.
"Because production systems have defects, Observers file Bug Reports. These are distinct from Feedback: a bug is an unexpected behavior, not a feature request. Bug Reports enable prioritization of fixes vs. enhancements."
Escalation Record: Formal documentation of critical concerns requiring intervention.
"Because Observers may detect critical issues that require urgent attention, they produce Escalation Records. This is not the same as Feedback—it is a formal signal that something is seriously wrong and needs Driver or Sponsor intervention."
Validation Report: External confirmation that the prototype meets acceptance criteria.
"Because Proving requires external validation, Observers produce Validation Reports. These confirm (or refute) that the Prototype meets Acceptance Criteria from the stakeholder perspective. This is the outside-in seal of approval to proceed."
The Sponsor is a resource provider who enables the work.
Charter: Formal authorization to pursue the initiative.
"Because initiatives need organizational legitimacy, Sponsors produce Charters during Exploring. A Charter is the formal authorization to investigate a problem space. It grants the team permission to spend time and resources on discovery."
Budget Allocation: Documented resource commitment for hypothesis testing.
"Because hypotheses require investment, Sponsors allocate Budget during Shaping. This is not a blank check but a bounded commitment: "We will invest X to test Y." Budget Allocation forces explicit trade-offs and prevents scope creep."
Strategic Alignment: Documentation of how the initiative serves organizational goals.
"Because initiatives must fit organizational strategy, Sponsors document Strategic Alignment during Shaping. This answers: "How does this initiative serve our broader goals?" Misaligned initiatives drain resources without strategic return."
Go/No-Go Decision: Formal decision to proceed to Scaling or to pivot/stop.
"Because Proving is a gate before Scaling, Sponsors make the Go/No-Go Decision. This is the formal commitment to proceed with full investment or to pivot/stop. It prevents zombie initiatives that limp along without clear authorization."
Blocker Resolution: Documentation of organizational blockers removed.
"Because organizational friction blocks progress, Sponsors produce Blocker Resolutions. When Contributors escalate blockers, Sponsors navigate organizational politics, secure approvals, and remove obstacles. This is their core enabling function."
Stakeholder Map: Identification of key stakeholders and engagement strategies.
"Because initiatives exist in a political landscape, Sponsors provide Stakeholder Maps. These identify who cares about the initiative, what they need, and how to engage them. Navigating stakeholders is organizational knowledge that Sponsors bring."
Resource Allocation: Formal assignment of people, infrastructure, and budget for Scaling.
"Because Scaling requires full resources (people, infrastructure, budget), Sponsors formally allocate them. Resource Allocation during Scaling is broader than Budget Allocation during Shaping—it includes team assignments and infrastructure provisioning."
Process degenerates into "Process Theater"—creating documents no one reads. We prevent this by temporally binding artifacts to Lifecycle Phases. We do not ask for a Roadmap in the Exploring phase; we ask for a Map. The output must match the epistemic need.
Traditional role definitions list artifacts without context. A "Prototype" makes sense during Proving, not Exploring. By aligning artifacts to phases, we answer the question every Thinker asks: "What should I produce right now?"
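One way to make "What should I produce right now?" answerable at a glance is a lookup from role and phase to expected artifacts. The partial mapping below is a sketch reconstructed from the artifact rationales above (with phases inferred where a rationale only implies them); it is illustrative, not an official matrix.

```typescript
type Role = "Driver" | "Contributor" | "Observer" | "Sponsor";
type Phase = "Exploring" | "Shaping" | "Proving" | "Scaling" | "Observing";

// Partial, illustrative mapping from (role, phase) to expected artifacts,
// assembled from the rationales above. Omitted cells are simply not shown here.
const expectedArtifacts: Partial<Record<Role, Partial<Record<Phase, string[]>>>> = {
  Driver: {
    Exploring: ["Territory Map", "Mission Statement"],
    Shaping: ["Hypothesis", "Roadmap"],
  },
  Contributor: {
    Exploring: ["Spike Report"],
    Proving: ["Prototype", "Hypothesis Results"],
    Scaling: ["Production Code", "Documentation"],
  },
  Observer: {
    Shaping: ["Acceptance Criteria"],
    Proving: ["Validation Report"],
  },
  Sponsor: {
    Exploring: ["Charter"],
    Shaping: ["Budget Allocation", "Strategic Alignment"],
    Proving: ["Go/No-Go Decision"],
  },
};

function whatShouldIProduce(role: Role, phase: Phase): string[] {
  return expectedArtifacts[role]?.[phase] ?? [];
}
```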
Each artifact has a rationale explaining why it exists and when it matters.
Initiatives often fail not because of what was decided, but because why it was decided is lost. We combat "Tribal Knowledge" with an immutable Decision Log.
In most organizations, context lives in people's heads. When the Driver leaves, the rationale evaporates. This leads to Chesterton's Fence violations: future teams removing constraints they don't understand, causing regressions.
"A decision without rationale is just a guess waiting to be reverted."
Example decision: We are deferring the "User Profile" features to V2 to ensure we hit the Q3 security audit deadline.
Rationale: The security audit requires a stable auth implementation 2 weeks prior to review. Adding profile complexity now puts this stability at risk.
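A minimal sketch of what one entry in such a log might look like, using the deferral example above. The field names, the log class, and the illustrative date and alternative are assumptions; the guide calls for rationale, alternatives considered, constraints at decision time, and an append-only history, not any particular format.

```typescript
// Illustrative shape of a Decision Record; field names are assumptions.
interface DecisionRecord {
  id: number;
  date: string;                     // when the decision was made
  decision: string;                 // what was decided
  rationale: string;                // why, given the constraints at the time
  alternativesConsidered: string[]; // paths not taken
  decidedBy: string;                // the Driver
}

// Append-only: records are added, never edited or deleted, so future
// Thinkers inherit context instead of mystery.
class DecisionLog {
  private readonly records: DecisionRecord[] = [];

  append(record: DecisionRecord): void {
    this.records.push(record);
  }

  all(): readonly DecisionRecord[] {
    return this.records;
  }
}

const log = new DecisionLog();
log.append({
  id: 1,
  date: "2024-06-01", // hypothetical date, for illustration only
  decision: "Defer the User Profile features to V2 to hit the Q3 security audit deadline.",
  rationale:
    "The security audit requires a stable auth implementation 2 weeks prior to review; " +
    "adding profile complexity now puts that stability at risk.",
  alternativesConsidered: ["Ship User Profile in V1 and request an audit extension"], // hypothetical
  decidedBy: "The Driver", // a named individual in a real record
});
```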
Conceptual drift is silent and expensive. If I say "Project" and you hear "Feature", we are fighting a phantom war. We enforce a Ubiquitous Language to prevent the friction that arises when precise technical realities map to ambiguous colloquialisms.
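As a concrete illustration (an assumption for this guide, not something it prescribes), the same word can legitimately mean different things inside different Bounded Contexts, provided each boundary is explicit and crossing it requires deliberate translation. The contexts and fields below are hypothetical.

```typescript
// "User" means different things in different Bounded Contexts.
// The Billing and Support contexts and their fields are hypothetical examples.
namespace Billing {
  // In Billing, a User is an account that can be invoiced.
  export interface User {
    accountId: string;
    paymentMethod: string;
    outstandingBalance: number;
  }
}

namespace Support {
  // In Support, a User is someone who files tickets.
  export interface User {
    ticketHistory: string[];
    preferredChannel: "email" | "chat";
  }
}

// Crossing the boundary is an explicit translation, never a silent reuse of the
// other context's model.
function toSupportUser(billingUser: Billing.User): Support.User {
  // Hypothetical mapping: a real translation would fetch the ticket history
  // for billingUser.accountId from the Support system.
  return { ticketHistory: [], preferredChannel: "email" };
}
```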
High-minded philosophy dies on Monday morning. These protocols bridge the gap between Theory and Practice—forcing the abstract definition of the Vehicle to occur before a single line of code is written.
Bottom-up. Transforming an insight or artifact into an initiative.
Top-down. Defining a strategic vehicle and assigning ownership.
Lateral. Spinning off a sub-territory into a focused vehicle.
Before writing code or assigning tasks, answer these four questions to establish the Vehicle.
The inventory of our shared reality. Definitions are not pedantry; they are the compilation of our Ontological Commitments.
Initiative: A temporary or permanent organization of Thinkers around a shared Mission to evolve a specific Territory.
The fundamental unit of collaborative action.
Vehicle: The Initiative conceived as transport through uncertainty. The structured container (Mission, Territory, Team) that carries intent toward impact.
Reframes "project management" as navigation, not control.
The "North Star" intent that defines why the Initiative exists and what success looks like.
Without a clear Mission, efforts disperse and entropy increases.
Territory: The bounded problem space where the Initiative operates. The system, domain, or area subject to change.
Defines where we have agency. No territory, no focus.
Territory Boundary: The explicit definition of what is "Inside" (controlled) vs. "Outside" (context) the Initiative's scope.
Prevents scope creep and clarifies agency.
Scale: The magnitude dimension of a Territory (Element, Feature, Product, Platform, or Ecosystem). Determines investment and governance calibration.
Mismatched scale destroys either velocity or quality.
Driver: The single individual accountable for the Initiative's outcomes and decision-making.
Distributed responsibility is no responsibility. Every Initiative needs one Driver.
Role: A Thinker's relationship to the Initiative, one of Driver (accountable), Contributor (executes), Observer (informs), or Sponsor (enables). Roles are not exclusive.
Explicit roles prevent ambiguity about who decides, who does, and who watches.
Hypothesis: A proposed change to the Territory, phrased as a falsifiable prediction (If we do X, then Y will happen).
Moves from "Building Features" to "Testing Beliefs".
Roadmap: A sequence of Hypotheses ordered by value and dependency.
The tactical plan to achieve the strategic Mission.
Artifact: A tangible output produced during a lifecycle phase. Categorized as deliverable (marks progress), supporting (enables work), or record (captures decisions).
Artifacts are evidence of epistemic state, not bureaucratic checkboxes.
Decision Log: An append-only record of significant decisions with rationale, alternatives considered, and constraints at decision time.
Kills tribal knowledge. Future Thinkers inherit context, not mystery.
Epistemic State: The current level of confidence about the problem and solution. Measured as lifecycle phase: Exploring (low) → Scaling (high).
Progress is confidence gain, not task completion.
Bounded Context: A linguistic boundary within which a specific model is valid.
Ensures terms like "User" or "Product" mean the same thing to everyone in the Initiative.
Ubiquitous Language: The rigorous vocabulary shared by the Team and the Code.
Eliminates translation cost between Thinkers and the Territory.
Novelty is forgotten history. We reject "Not Invented Here" in favor of Lineage. Every primitive in this model traces back to its academic or industrial origin—proving that our methods are grounded in surviving theory, not current fashion.
| Primitive | Description | Source (Origin) | Term Mapping | Status |
|---|---|---|---|---|
| Bounded Context | Defining explicit linguistic boundaries to prevent conceptual corruption. | Domain-Driven Design, Evans (2003) | "Bounded Context" | Adopted |
| The Driver (DRI) | Single-threaded ownership ensures decisions are made. | Apple / GitLab, "Directly Responsible Individual" | "Accountability" | Adapted |
| Hypothesis-Driven Development | Treating product changes as experiments rather than requirements. | Lean Startup / Scientific Method, Ries (2011) | "Empiricism" | Adopted |
| Territory Scale | Understanding that systems nest (Element -> Feature -> Product -> Ecosystem). | Systems Thinking, Meadows (2008) | "Hierarchy of Systems" | Adapted |
| Mission Command | Defining the "What" and "Why" (Mission), leaving the "How" to the team. | Military Doctrine (Auftragstaktik) | "Intent-Based Leadership" | Adapted |