Structured Argumentation

Theory

Making Thought Computable

Raw thought is messy and non-linear. To make it computable, we must structure it into discrete units of Rationale—claims that can be linked, weighed, and traced back to their origins.

This domain explores how to capture, link, and weigh arguments (Support, Attack, Evidence) to build a robust Knowledge Graph that resists bias and encourages intellectual honesty.

Domain Axioms

  • Explicit Linkage: No claim stands alone; it must connect to evidence.
  • Weighted Truth: Confidence is a vector, not a boolean (see the sketch below).
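
A minimal TypeScript sketch of how these axioms might be modeled. The names here (Relation, RationaleNode, weight, confidenceVector) are illustrative assumptions, not an API defined by this domain:

// Hypothetical model of the two axioms (all names are assumptions)
type Relation = "Supports" | "Justifies" | "Evidence" | "Challenges";

interface RationaleNode {
  id: string;                // e.g. "C1-1"
  relation: Relation;        // Explicit Linkage: how this claim bears on its parent
  weight: number;            // Weighted Truth: strength in 0..1, not true/false
  text: string;              // the claim itself
  children: RationaleNode[]; // further support, evidence, or challenges
}

// A claim's confidence is the vector of its weighted links,
// never a single boolean verdict.
const confidenceVector = (n: RationaleNode): Array<[Relation, number]> =>
  n.children.map((c): [Relation, number] => [c.relation, c.weight]);

Storing the relation and weight on every node makes an unlinked, unweighted claim unrepresentable, which is the point of both axioms.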

Interactive Argument Tree

Live Prototype

  • ROOT (Supports, 95%): The "Void" system should prioritize epistemic transparency over raw computational speed.
      • C1 (Justifies, 80%): Transparency builds trust, which is the primary metric for long-term user adoption in cognitive tools.
          • C1-1 (Evidence, 90%): User surveys (N=500) indicate "Fear of Black Box" as top churn reason.
      • C2 (Challenges, 60%): However, real-time feedback loops require <50ms latency, which deep introspection layers may compromise.
          • C2-1 (Supports, 100%): Engineering benchmarks show introspection adds ~120ms overhead per query.
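
Serialized, the prototype above reduces to one recursive shape. This sketch reuses the hypothetical RationaleNode type from the Theory section; the netSupport rule (supporters add weight, challengers subtract, one level deep) is a naive assumption for illustration, not the prototype's actual scoring algorithm:

// Hypothetical serialization of the tree above
const root: RationaleNode = {
  id: "ROOT", relation: "Supports", weight: 0.95,
  text: 'The "Void" system should prioritize epistemic transparency over raw computational speed.',
  children: [
    {
      id: "C1", relation: "Justifies", weight: 0.8,
      text: "Transparency builds trust, which is the primary metric for long-term user adoption in cognitive tools.",
      children: [{
        id: "C1-1", relation: "Evidence", weight: 0.9,
        text: 'User surveys (N=500) indicate "Fear of Black Box" as top churn reason.',
        children: [],
      }],
    },
    {
      id: "C2", relation: "Challenges", weight: 0.6,
      text: "However, real-time feedback loops require <50ms latency, which deep introspection layers may compromise.",
      children: [{
        id: "C2-1", relation: "Supports", weight: 1.0,
        text: "Engineering benchmarks show introspection adds ~120ms overhead per query.",
        children: [],
      }],
    },
  ],
};

// Naive one-level score: supporters add weight, challengers subtract it.
const netSupport = (n: RationaleNode): number =>
  n.children.reduce(
    (acc, c) => acc + (c.relation === "Challenges" ? -c.weight : c.weight),
    0,
  );
// netSupport(root) ≈ 0.8 - 0.6 = 0.2 (contested, leaning supported)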

Design Analysis

A breakdown of the UI affordances that enable this structured thinking.

Component 01: Atomic Unit

The Rationale Node

Analysis of the atomic unit of argumentation. Notice how color and shape afford rapid scanning.

Trigger: User encounters subjective claim
Action: Reification (turning text into object)
Success: Claim can be linked/weighted
// Live Component Instance
  • RAT-001 (Challenges, 85%): While efficiency increases, the loss of human oversight creates a "Black Box" risk that violates our core transparency principles.
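
In code, reification might look like the sketch below, again under the assumed RationaleNode type; the reify helper, its default weight, and the ID scheme are all hypothetical:

// Hypothetical reification helper: selected text becomes a linkable,
// weighable object, satisfying the success criterion above.
let nextId = 0;
const reify = (
  text: string,
  relation: Relation,
  weight = 0.5, // start mid-scale; the author adjusts once linked
): RationaleNode => ({
  id: `RAT-${String(++nextId).padStart(3, "0")}`,
  relation,
  weight,
  text,
  children: [],
});

const rat001 = reify(
  'While efficiency increases, the loss of human oversight creates a "Black Box" risk that violates our core transparency principles.',
  "Challenges",
  0.85,
);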
Component 02: Composite Structure

The Argument Tree

Analysis of recursive nesting. The indentation and guide lines afford 'Drill-Down' behavior.

Trigger: User needs to trace logic
Action: Drill-Down / Navigation
Success: Root cause identified
// Recursive Structure: the same nested instance as the Interactive Argument Tree above (ROOT → C1 → C1-1; ROOT → C2 → C2-1).
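
Drill-down reads naturally as a traversal. The sketch below walks the same assumed structure; the "follow the heaviest child" rule is one illustrative heuristic for surfacing a root cause, not the component's specified behavior:

// Hypothetical drill-down: descend along the heaviest-weighted child
// until a leaf, yielding the most load-bearing chain of reasoning.
const drillDown = (n: RationaleNode): RationaleNode[] => {
  if (n.children.length === 0) return [n];
  const heaviest = n.children.reduce((a, b) => (b.weight > a.weight ? b : a));
  return [n, ...drillDown(heaviest)];
};

console.log(drillDown(root).map((n) => n.id).join(" -> "));
// "ROOT -> C1 -> C1-1": the strongest chain bottoms out at the survey evidence.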