Generate a personalized route through an unfamiliar problem space. Discover the concepts you need to acquire, ordered to respect dependencies, adapting as your understanding evolves.
We've moved the prototype to a dedicated full-screen tool. Use the Knowledge Space Explorer to chart dynamic learning paths through unfamiliar domains.
The Concept Card is the atomic unit of this interface. It is designed to bridge the gap between machine reasoning and human learning by separating content from metadata.
We use a Progressive Disclosure pattern to handle the density of AI-generated metadata. The primary view remains focused on learning content, while the "Meta Layer" provides access to the system's reasoning.
The atomic unit of the Exploration Interface. It encapsulates a single domain concept, separating the primary learning content from the AI's meta-analysis and provenance data.
Hovering over the ScanEye icon reveals the Analysis Protocol HUD. This "heads-up display" surfaces the confidence score, relevance score, and the AI's generation rationale without cluttering the reading flow.
The component is backed by a strict Zod schema (GeneratedConceptLLM) that enforces provenance. Every concept must carry its own justification.
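The actual Zod schema lives in the codebase; as a dependency-free sketch, the invariants it enforces look roughly like the following (the `isValidConcept` helper is illustrative, not the real API):

```typescript
// Illustrative sketch only: the real GeneratedConceptLLM is a Zod schema.
// This hand-rolled interface and check mirror the same invariants.
interface GeneratedConceptLLM {
  id: string;
  label: string;
  category: "foundation" | "core" | "advanced";
  epistemicType: string;   // "term" | "fact" | "principle" | ...
  confidence: number;      // 0.0 - 1.0
  relevanceScore: number;  // 0.0 - 1.0
  rationale: string;       // provenance: why this concept was included
  description: string;
  prerequisites: string[]; // ids of concepts that must come first
}

function isValidConcept(c: GeneratedConceptLLM): boolean {
  const inUnit = (n: number) => n >= 0 && n <= 1;
  return (
    c.id.length > 0 &&
    c.label.length > 0 &&
    inUnit(c.confidence) &&
    inUnit(c.relevanceScore) &&
    c.rationale.length > 0 // every concept must carry its own justification
  );
}
```

Note that `rationale` is required and non-empty: a concept with no justification fails validation, which is what "enforces provenance" means in practice.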
```jsonc
{
  "id": "concept-id",
  "label": "Concept Name",
  "category": "core",        // foundation | core | advanced
  "epistemicType": "term",   // term | fact | principle | ...

  // AI Provenance (The Meta Layer)
  "confidence": 0.95,        // 0.0 - 1.0
  "relevanceScore": 0.88,    // 0.0 - 1.0
  "rationale": "Included because X is a prerequisite for Y...",

  // Content
  "description": "...",
  "prerequisites": ["other-concept-id"]
}
```

A Thinker arrives with a goal but lacks the conceptual vocabulary to achieve it. They don't just need information—they need to restructure how they think about the domain.
The Core Problem
"I don't know the answer" is solvable with search.
"I don't know what questions to ask" requires a different intervention.
1. Where is the Thinker now? Diagnose the current knowledge state.
2. Where do they want to go? Define the goal state requirements.
3. Generate an optimal path through the domain's concept structure.
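At its core, this loop is a set computation: the gap is everything the goal state transitively requires that the diagnosed knowledge state does not already contain. A minimal sketch (the `epistemicGap` helper is hypothetical, not the tool's actual code, and assumes an acyclic prerequisite graph):

```typescript
// concept id -> ids of its prerequisite concepts
type Prereqs = Record<string, string[]>;

// The "epistemic gap": every concept the goals transitively require
// that the Thinker does not already know. Prerequisites are visited
// first, so the set's insertion order is itself a valid study order.
function epistemicGap(
  known: Set<string>, // diagnosed current knowledge state
  goals: string[],    // goal-state concepts
  prereqs: Prereqs
): Set<string> {
  const gap = new Set<string>();
  const visit = (id: string) => {
    if (known.has(id) || gap.has(id)) return;
    for (const p of prereqs[id] ?? []) visit(p); // prerequisites first
    gap.add(id);
  };
  goals.forEach(visit);
  return gap;
}
```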
Most tools either give answers (assuming you know the question) or give structure (assuming you know the concepts). This experiment does something different: it helps you discover what the structure should be before you have it.
The minimal set of concepts required to understand this experiment's domain. Each term traces to established research.
The set of all possible states a problem can be in, including the initial state, goal state, and all intermediate states reachable through valid operators.
Defines the "terrain" the Thinker must navigate. Without a map of the problem space, movement is random.
An internal cognitive representation of how something works. Also called a Schema (Bartlett) or Frame (Minsky).
The Thinker's current mental model determines what they can perceive, what questions they can ask, and what solutions they can imagine.
The felt sense of discontinuity between what a person currently understands and what they need to understand to achieve their goal (Dervin).
The gap is the problem. The experiment exists to help identify and bridge this gap systematically.
The process of restructuring existing knowledge when new information conflicts with prior understanding. Not addition—transformation.
Learning often requires unlearning. The Thinker's naive model may need to be dismantled before a better one can form.
Temporary support structures that help a learner accomplish tasks beyond their current independent capability (Wood, Bruner & Ross).
The tool acts as scaffolding—providing structure that can be removed once the Thinker has internalized the domain.
The specific set of concepts a person currently understands, represented as a subset of the domain ontology (Doignon & Falmagne).
Assessing the current knowledge state is prerequisite to generating a valid learning path.
A sequence of concepts to acquire, ordered to respect dependencies (prerequisites) and optimize for the goal.
The core output of the experiment: a dynamic, personalized route through the problem space.
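One standard way to realize such a route is a topological sort of the concepts by their prerequisite edges. A sketch using Kahn's algorithm (not the experiment's actual planner; assumes the prerequisite graph is acyclic):

```typescript
interface ConceptNode {
  id: string;
  prerequisites: string[]; // ids that must be learned before this one
}

// Order concepts so every prerequisite precedes its dependents
// (Kahn's algorithm). Prerequisites outside the given set are
// treated as already known and ignored.
function learningPath(concepts: ConceptNode[]): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const c of concepts) {
    indegree.set(c.id, 0);
    dependents.set(c.id, []);
  }
  for (const c of concepts) {
    for (const p of c.prerequisites) {
      if (!indegree.has(p)) continue; // outside this set: assume known
      indegree.set(c.id, (indegree.get(c.id) ?? 0) + 1);
      dependents.get(p)!.push(c.id);
    }
  }
  const queue = [...indegree.entries()]
    .filter(([, d]) => d === 0)
    .map(([id]) => id);
  const path: string[] = [];
  while (queue.length) {
    const id = queue.shift()!;
    path.push(id);
    for (const next of dependents.get(id) ?? []) {
      const d = (indegree.get(next) ?? 1) - 1;
      indegree.set(next, d);
      if (d === 0) queue.push(next);
    }
  }
  return path; // shorter than the input if a cycle exists
}
```

The "dynamic" part of the real route comes from re-running this kind of ordering as the knowledge state is re-diagnosed, not from a one-shot sort.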
A formal specification of the concepts in a domain and the relationships between them. Taxonomy emphasizes hierarchical classification; ontology includes richer relations.
The "road network" that enables pathfinding. Without domain structure, there is no map to navigate.
A typed edge between two concepts in a domain ontology. Types include: prerequisite (A before B), contrast (A differs from B), analogy (A is like B), hierarchy (A contains B), and sibling (A parallels B).
Relationships determine valid learning paths and enable strategic generation moves. Contrastive relationships clarify boundaries; analogical relationships bootstrap understanding; prerequisite relationships constrain sequencing.
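These edge types map naturally onto a discriminated union. A sketch (field names are illustrative, not the actual ontology format):

```typescript
// Sketch of a typed edge in the domain ontology.
type RelationType =
  | "prerequisite" // A before B
  | "contrast"     // A differs from B
  | "analogy"      // A is like B
  | "hierarchy"    // A contains B
  | "sibling";     // A parallels B

interface ConceptRelation {
  type: RelationType;
  from: string; // concept id (the "A" side)
  to: string;   // concept id (the "B" side)
}

// Only prerequisite edges are hard sequencing constraints; the other
// types guide generation strategy (clarifying boundaries, bootstrapping
// understanding) without constraining order.
function sequencingConstraints(
  relations: ConceptRelation[]
): ConceptRelation[] {
  return relations.filter((r) => r.type === "prerequisite");
}
```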
Every primitive in this experiment traces to established research. This table maps our vocabulary to its academic origins, ensuring intellectual honesty and enabling deeper exploration.
| Primitive | Source (Origin) | Term Mapping | Status |
|---|---|---|---|
| **Problem Space.** The foundational model: problems as navigation through a space of states connected by operators. | *Human Problem Solving*, Newell & Simon (1972) | "State Space Search" | Adopted |
| **Mental Model.** Establishes that humans reason using internal simulations, not formal logic. The model constrains what can be thought. | *Mental Models*, Johnson-Laird (1983) | "Internal Representation" | Adopted |
| **Schema Theory.** Memory and understanding are constructive processes shaped by prior schemas. New information is assimilated into existing structures. | *Remembering*, Bartlett (1932) | "Schema" | Adopted |
| **Gap-Bridging.** Information seeking as bridging discontinuities. The "gap" is the core unit of analysis. | *Sense-Making Methodology*, Dervin (1983) | "Epistemic Gap" | Adopted |
| **Sensemaking Loop.** HCI model of sensemaking: generate representations, test against data, shift when they fail. | *The Cost Structure of Sensemaking*, Russell et al. (1993) | "Representational Shift" | Adopted |
| **Zone of Proximal Development.** Learning happens in the zone between independent capability and capability with guidance. The tool operates in this zone. | *Mind in Society*, Vygotsky (1978) | "ZPD" | Adopted |
| **Scaffolding.** Temporary, adjustable support that enables performance beyond current ability. Removed as competence grows. | *The Role of Tutoring in Problem Solving*, Wood, Bruner & Ross (1976) | "Scaffolding" | Adopted |
| **Conceptual Change.** Learning as belief revision. Four conditions: dissatisfaction, intelligibility, plausibility, fruitfulness. | *Accommodation of a Scientific Conception*, Posner et al. (1982) | "Conceptual Change Theory" | Adopted |
| **Ontological Shift.** Distinguishes belief revision from category reassignment. Some misconceptions require ontological recategorization. | *Three Types of Conceptual Change*, Chi (2008) | "Categorical Shift" | Adopted |
| **Expert-Novice Differences.** Experts organize knowledge by deep principles; novices by surface features. Taxonomy discovery builds expert-like chunking. | *Categorization and Representation of Physics Problems*, Chi, Feltovich & Glaser (1981) | "Chunking / Deep Structure" | Adopted |
| **Knowledge Space Theory.** Mathematical framework for adaptive assessment and curriculum sequencing. Defines valid learning paths through prerequisite structures. | *Spaces for the Assessment of Knowledge*, Doignon & Falmagne (1985) | "Knowledge State / Learning Path" | Adopted |
| **Concept Maps.** Visual externalization of conceptual structure. Based on Ausubel's assimilation theory of meaningful learning. | *The Theory Underlying Concept Maps*, Novak & Cañas (2008) | "Meaningful Learning" | Adopted |
| **Epistemic Cognition.** How people think about knowing itself. May inform how we help Thinkers clarify what kind of understanding they seek. | *The Development of Epistemological Theories*, Hofer & Pintrich (1997) | "Epistemic Beliefs" | Auditioning |