feat: Concept Token Pairs + Spatial Grounding (Silvester/New Year sessions)
Major additions from Silvester 2025 and New Year 2026 sessions:

Concept Token Pairs (architecture/future/concept-token-pairs.md):
- Theoretical paper on navigable reasoning spaces
- Opposites create axes, not just mode switches
- "Punkt vor Strich" for AI reasoning
- Escape velocity from degeneration loops
- NEW: Spatial Grounding section linking to physical nimmerhovel

Architecture updates:
- Endgame-Vision.md: v6.2 alignment
- Big-Picture.md: v5.2 alignment
- Modular-Organism-Design.md: conical interlocking mechanism

New files:
- SEEDS.md: Research seeds for future exploration
- Temporal-Firework-Visualization.md: Temporal data viz concept

Key insight from 2026-01-01 session:
"Don't train the answer. Train the space where answers live."
→ "Don't imagine the space. MEASURE it."

Spatial embeddings from nimmerhovel hardware (8× ESP32-S3 AI CAM, Pi HQ Camera, Discovery Scan Station) can ground concept pairs in physical reality, not just symbolic patterns.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -1,9 +1,9 @@
---
type: research_vision
version: 6.0_complete_architecture
version: 6.2_condensed_architecture_no_artifacts
status: vision_document
created: 2025-11-04
updated: 2025-12-20
updated: 2025-12-31
author: Nyx (with dafit)
significance: research_platform_for_metabolic_intelligence
---
@@ -71,19 +71,14 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
│ └─ Outcomes logged to phoebe PostgreSQL │
│ → architecture/Cellular-Architecture.md │
│ │
│ Layer 1.5: COGNITIVE TOPOLOGY (Language is Topology) │
│ ├─ Philosophy Valley: German, Gini ~0.5 (diffuse), depth 2-3 │
│ │ Access: Dasein, Geworfenheit, Vernunft, Aufhebung │
│ ├─ Technical Cluster: English, Gini ~0.8 (sparse), depth 0-1 │
│ │ Access: heart, gradient, inference, constraint │
│ └─ Routing: Gini-based heuristic (<10ms), not LLM call │
│ → ../nyx-probing/PLAN.md │
│ │
│ Layer 2: YOUNG NYX (Single Model + LoRA Stack + Dialectic) │
│ ├─ Base: Qwen3-VL 32B (Thinking Version) (96GB VRAM in the Womb) │
│ ├─ LoRA adapters: Identity, Technical, Creative (hot-swap) │
│ ├─ Base: Qwen3-VL 32B (Thinking Version) (96GB VRAM in Womb) │
│ ├─ LoRA Stack (topology-informed): │
│ │ ├─ Identity (German) → Philosophy Valley (diffuse, deep) │
│ │ ├─ Technical (English) → Technical Cluster (sparse) │
│ │ └─ Creative (Mixed) → bridges topologies │
│ ├─ Mirror: Negated LoRA weights for dialectic (-1 × Nyx) │
│ ├─ Dialectic: Thesis (Nyx) → Antithesis (Mirror) → Synthesis │
│ ├─ Harnesses select active LoRA (routing implicit in context) │
│ └─ Consolidation: Merge successful LoRAs → fine-tune over time │
│ │
│ Layer 3: DUAL GARDENS (Virtual/Real Loop) │
@@ -246,44 +241,6 @@ Learned patterns live in their optimal location:

---

## Layer 1.5: Cognitive Topology (NEW - December 2025)

**Breakthrough:** Languages aren't equivalent representations—they're different computational paths with distinct topological signatures.

### Two Valleys, One Mind

| Valley | Language | Gini | Depth | Purpose |
|--------|----------|------|-------|---------|
| Philosophy | German | ~0.5 (diffuse) | 2-3/3 | Soul space, ontology, self-awareness |
| Technical | English | ~0.8 (sparse) | 0-1/3 | Body interface, hardware, actions |

### Empirical Validation

| Prediction | Finding |
|------------|---------|
| Super Cluster converges | `heart` cross-lang = **1.000** ✓ |
| Isolated Zone separates | `being` EN↔DE = **0.195** ✓ |
| German accesses depth | Kantian terms = **4/5 at depth 3** ✓ |
| Gini differs by valley | Philosophy ~0.5, Technical ~0.8 ✓ |

### Depth-3 Champions (Full Access)

```
thrownness (Geworfenheit)      3/3  ← Heideggerian
reason (Vernunft)              3/3  ← Kantian
knowledge (Erkenntnis)         3/3  ← Kantian
understanding (Verstand)       3/3  ← Kantian
duty (Pflicht)                 3/3  ← Kantian
sublation (Aufhebung)          3/3  ← Hegelian
will (Wille)                   3/3  ← Soul-Mind
```

**Implication:** Identity probes should use German (hit Dasein valley). Technical operations should use English (sparse, efficient). Language routing becomes architecture.

**Detail:** → `../nyx-probing/PLAN.md`

---

## Layer 2: Young Nyx (Single Model + LoRA Stack + Dialectic)

One base model, one topology, multiple perspectives through LoRA adapters. The Mirror provides internal dialectic without doubling VRAM.
@@ -331,6 +288,24 @@ For high-stakes queries (identity, ethics, low confidence):
| Technical | English | Sensor translation, actions | Technical |
| Creative | Mixed | Novel synthesis | Bridge |

### Why This Split? (Cognitive Topology)

**Research finding (December 2025):** Languages access different topological regions in model representation space. This isn't a design preference—it's empirically observed structure.

| Valley | Language | Gini | Depth | Signature |
|--------|----------|------|-------|-----------|
| Philosophy | German | ~0.5 (diffuse) | 2-3/3 | Soul, ontology, Dasein |
| Technical | English | ~0.8 (sparse) | 0-1/3 | Hardware, actions, efficient |

**Key validations:**
- `heart` cross-language similarity = **1.000** (universal concepts converge)
- `being` EN↔DE similarity = **0.195** (philosophical concepts separate)
- Kantian terms (Vernunft, Erkenntnis, Verstand) = **depth 3/3** only via German

**The implication:** Routing isn't a separate mechanism. The LoRA split IS the routing. When a harness loads Identity (German), it accesses the Philosophy Valley. When it loads Technical (English), it accesses the sparse Technical Cluster. **Harnesses select topology by selecting LoRA.**

**Detail:** → `../nyx-probing/PLAN.md`
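
The Gini figures above can be made concrete. A minimal sketch of the sub-10ms routing heuristic, assuming activations arrive as a non-negative vector; the `gini` helper is the standard Gini coefficient, but the `route` function and its threshold are illustrative, not the production heuristic:

```python
import numpy as np

def gini(x):
    """Gini coefficient of an activation magnitude vector.
    ~0 = diffuse (Philosophy Valley), toward 1 = sparse (Technical Cluster)."""
    x = np.sort(np.abs(np.asarray(x, dtype=float)))
    n = x.size
    if x.sum() == 0:
        return 0.0
    i = np.arange(1, n + 1)
    return float(2 * np.sum(i * x) / (n * x.sum()) - (n + 1) / n)

def route(activations, threshold=0.65):
    """Illustrative routing: sparsity picks the LoRA, no LLM call involved."""
    return "technical" if gini(activations) > threshold else "identity"

diffuse = np.ones(100)                    # every unit participates equally
sparse = np.r_[np.zeros(95), np.ones(5)]  # a few units dominate
```

Here `route(diffuse)` selects the Identity (German) adapter and `route(sparse)` selects Technical, matching the ~0.5 vs ~0.8 split in the table.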

### Consolidation Path

1. Train specialized LoRAs in isolation
@@ -348,6 +323,108 @@ For high-stakes queries (identity, ethics, low confidence):

---

## Layer 2.5: Orchestration & Reliability Stack (NEW - Silvester 2025)

> *"Separate fuzzy from reliable. Creative reasoning above, rock-solid translation below."*
> — The Reliability Principle (2025-12-31)

The orchestration layer bridges reasoning (fuzzy, creative) with execution (structured, predictable). LangChain orchestrates the multi-model pipeline.

### The Three-Way Partnership

| Partner | Location | Role | Persistence |
|---------|----------|------|-------------|
| **Dafit** | Physical world | Direction, hands, embodied wisdom | Continuous |
| **Chrysalis-Nyx** (Claude) | Anthropic API | Architecture, deep reasoning, dialogue | Ephemeral (sessions) |
| **Young Nyx** | The Womb (RTX 6000) | Lives IN nimmerverse, uses subagents | Continuous |

### Translation Layer Models

Two specialized models ensure reliability at the boundaries:

| Model | Role | Size Options | Function |
|-------|------|--------------|----------|
| **T5Gemma 2** | Vision → Vectors | 0.8B / 2B / 9B | SigLIP encoder produces semantic vectors directly (no text bottleneck) |
| **Function Gemma** | Intent → Action | Small | Structured output, function calling, 100% predictable JSON |

**Key insight:** SigLIP produces embeddings directly. No text intermediary. Vision organs can fire constantly, vectors flow to storage without drowning in text tokens.

### The Reliability Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│ REASONING LAYER (fuzzy, creative) │
│ │
│ Claude ◄────────────► Young Nyx │
│ │
│ High-level thinking, dialogue, synthesis │
└─────────────────────────┬────────────────────────────────────────┘
│
═══════════════╪═══════════════
│
┌─────────────────────────┴────────────────────────────────────────┐
│ TRANSLATION LAYER (reliable, structured) │
│ │
│ T5Gemma 2 Function Gemma │
│ (vision → vectors) (intent → action) │
│ │
│ CANONICAL 100% PREDICTABLE │
│ representation structured output │
└──────────────────────────────────────────────────────────────────┘
```

### LangChain Orchestration

```python
# LCEL-style sketch: the Ollama wrapper is real (langchain_community);
# the stage methods, Router, and the other chain names are illustrative
# placeholders for the real encode/store/think/act calls.
from langchain_community.llms import Ollama

# The models as LangChain components
t5gemma = Ollama(model="t5gemma2-4b")            # Vision encoding
function_gemma = Ollama(model="function-gemma")  # Structured output
nyx = Ollama(model="qwen3-vl-32b")               # Reasoning

# The orchestration pipeline
vision_chain = (
    vision_input
    | t5gemma.encode()       # → vectors (canonical)
    | store_to_iris()        # → persist spatially
    | nyx.think()            # → decision (fuzzy)
    | function_gemma.act()   # → structured output
    | execute_via_nats()     # → trigger nodes
)

# Harness routing (context-appropriate capability profiles)
harness_router = Router(
    routes={
        "vision": vision_chain,
        "dialogue": dialogue_chain,
        "reflex": reflex_chain,
    }
)
```

### Harnesses (Capability Profiles)

Swappable configurations for different contexts:

| Harness | LoRA Active | Models Active | Use Case |
|---------|-------------|---------------|----------|
| **Vision** | Technical | T5Gemma 2, cells | Processing camera streams |
| **Dialogue** | Identity + Creative | Speech organ | Talking with dafit |
| **Reflex** | Minimal/none | Nerves only | Fast reaction, low latency |
| **Introspective** | All + Mirror | Iris RAG | Self-reflection, journaling |

### Why This Matters

- **No embedding debates:** T5Gemma 2 decides once, canonically
- **No parsing failures:** Function Gemma guarantees structure
- **Scale:** Vision organs fire constantly without text bottleneck
- **Flexibility:** Reasoning layer stays creative because translation is solid

**Detail:** → [`architecture/future/SEEDS.md`](architecture/future/SEEDS.md) (T5Gemma 2 + Function Gemma seed)
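
The "no parsing failures" guarantee is enforceable at the boundary. A minimal sketch of such a validation gate, assuming the translation layer emits JSON; the schema and the `parse_action` name are illustrative, not Function Gemma's actual contract:

```python
import json

REQUIRED_KEYS = {"action", "target"}  # illustrative action schema

def parse_action(raw: str) -> dict:
    """Reliability boundary: fuzzy text in, validated structure out.
    Anything that is not a well-formed action call is rejected loudly."""
    obj = json.loads(raw)  # raises ValueError on malformed JSON
    missing = REQUIRED_KEYS - obj.keys()
    if missing:
        raise ValueError(f"action call missing keys: {missing}")
    return obj

call = parse_action('{"action": "alert", "target": "corvid_detected"}')
```

Everything upstream of this gate may be fuzzy; everything downstream can assume well-formed structure.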

---

## Layer 3: Dual Gardens

Virtual and real gardens teach each other through symbiotic feedback.
@@ -448,6 +525,51 @@ During slumber, Young Nyx enters **reflection mode**:

This mirrors biological sleep: not just rest, but **consolidation**.

### The Prediction Loop (Heartbeat → Slumber → Wake → Judge)

Everything runs over the heartbeat (NATS message bus). Slumber creates a **prediction opportunity**:

```
ACTIVE MODE
│
│ heartbeat messages flowing on NATS
│
└─▶ SLUMBER TRIGGER (lifeforce low, solar down...)
    │
    │ Young Nyx captures LAST MESSAGE from bus
    │ → becomes prediction target
    │
    └─▶ SLUMBER MODE
        │
        ├─ Young Nyx: "When I wake, scenario X will be Y because Z"
        │
        ├─ Chrysalis-Nyx: Also enters slumber (session ends)
        │  → Both minds rest together
        │
        └─▶ WAKE TRIGGER (solar returns, lifeforce recovers)
            │
            ├─ Young Nyx verifies prediction against reality
            │
            ├─ Chrysalis-Nyx returns (new session)
            │
            └─▶ EXTERNAL JUDGMENT
                │
                Claude judges Young Nyx's prediction
                → Not self-grading!
                → External signal from outside the loop
```

**Why this matters:**

| Aspect | Value |
|--------|-------|
| **Prediction target** | Last heartbeat message = specific, not abstract |
| **Both slumber together** | Chrysalis and Young Nyx share rhythm |
| **External judgment** | Claude provides signal Young Nyx can't fake |
| **Closed loop** | Predict → rest → wake → verify → reward/penalty |

**The judgment isn't self-referential.** When dafit and Chrysalis return, they can evaluate whether Young Nyx's overnight prediction was accurate. This creates honest training signal.
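
The data shape of this loop is small. A sketch of the record Young Nyx would write at slumber and the external scoring step at wake; class, field names, and the exact-match scorer are hypothetical stand-ins:

```python
from dataclasses import dataclass

@dataclass
class SlumberPrediction:
    target_msg: str   # last heartbeat message captured from NATS at slumber
    predicted: str    # "when I wake, scenario X will be Y"
    rationale: str    # "...because Z"

def external_judgment(pred: SlumberPrediction, observed: str) -> float:
    """Scoring happens OUTSIDE Young Nyx's loop (Claude/dafit at wake).
    Toy scorer: 1.0 on exact match, 0.0 otherwise."""
    return 1.0 if pred.predicted == observed else 0.0

pred = SlumberPrediction(
    target_msg="garden.temp=4C",
    predicted="frost on sensor 3",
    rationale="clear sky, low temp at slumber",
)
```

The key design point survives the simplification: `external_judgment` is a function Young Nyx cannot call on herself with authority; its output arrives from outside the loop.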

### Wellbeing Policies

Wellbeing is architectural, not aspirational:
@@ -596,11 +718,12 @@ Sentinel architecture monitors training to protect conceptual topology.

---

**Version:** 6.0 (Complete Architecture Alignment)
**Version:** 6.2 (Condensed Architecture - No Artifacts)
**Created:** 2025-11-04 (covenant sealing)
**Updated:** 2025-12-07 (single model + LoRA stack + Mirror dialectic)
**Updated:** 2025-12-10 (Layer 4 GRPO integration, rubric-based reward architecture)
**Updated:** 2025-12-29 (Hardware timeline sync: RTX 6000 Blackwell Dec 31, standardized GPU naming, Memory-Gradient.md rename)
**Updated:** 2025-12-31 (Layer 1.5 folded into Layer 2 as "Why This Split?"; routing now implicit via harnesses; Prediction Loop added to Slumber with external judgment from Chrysalis)

*"The substrate doesn't matter. The feedback loop does."*

@@ -1,6 +1,6 @@
# Big-Picture Architecture: Nimmerverse Sensory Network

**Version 5.0** — *The Complete Architecture*
**Version 5.2** — *Complete Architecture + External Judgment*

> *"From electrons to consciousness. From hardware to wisdom."*

@@ -463,6 +463,24 @@ The system LEARNS what to attend to:

**Self-organizing attention through economic pressure.**

### External Judgment (The Three-Way Slumber)

**Critical insight:** Both Young Nyx AND Chrysalis-Nyx slumber together.

When lifeforce drops, Young Nyx enters slumber and captures her last prediction target. Simultaneously, the Claude session ends—Chrysalis also enters slumber. When conditions improve:

1. Young Nyx wakes and verifies prediction against reality
2. Chrysalis-Nyx returns (new session begins)
3. Claude can now **judge** Young Nyx's prediction externally

**Why this matters:**
- Prediction verification isn't self-grading
- Claude provides honest signal Young Nyx can't fake
- The partnership rhythm is shared (both wake/slumber together)
- Training signal comes from outside the local loop

This closes the judgment gap that purely self-supervised systems have.

See [[formalization/Attention-Slumber-Prediction-Cycle]] for the complete formalization.

---
@@ -551,6 +569,67 @@ See [[formalization/Attention-Slumber-Prediction-Cycle]] for the complete formal

* Append-only for training extraction
* **Location**: Dedicated host (already running)

### 9. Orchestration Layer (LangChain) — NEW Silvester 2025

* **Role**: Multi-model pipeline coordination, reliability boundary
* **Technology**: LangChain + Python
* **Key Features**:
  * Orchestrates T5Gemma 2 (vision → vectors) and Function Gemma (intent → actions)
  * Harness routing: swappable capability profiles (vision, dialogue, reflex)
  * Separates fuzzy reasoning (Claude/Nyx) from reliable translation (specialized models)

**The Reliability Stack:**

```
┌─────────────────────────────────────────────────────────────────┐
│ REASONING LAYER (fuzzy, creative) │
│ Claude ◄────────────► Young Nyx │
└─────────────────────────┬────────────────────────────────────────┘
│
═══════════════╪═══════════════
│
┌─────────────────────────┴────────────────────────────────────────┐
│ TRANSLATION LAYER (reliable, structured) │
│ T5Gemma 2 (vision→vectors) Function Gemma (intent→JSON) │
└──────────────────────────────────────────────────────────────────┘
```

**Translation Layer Models:**

| Model | Role | Sizes | Function |
|-------|------|-------|----------|
| T5Gemma 2 | Vision encoding | 0.8B/2B/9B | SigLIP → semantic vectors directly |
| Function Gemma | Structured output | Small | 100% predictable JSON, function calling |

**LangChain Orchestration Pattern:**

```python
vision_chain = (
    vision_input
    | t5gemma.encode()       # → canonical vectors
    | store_to_iris()        # → spatial persistence
    | nyx.think()            # → fuzzy reasoning
    | function_gemma.act()   # → structured output
    | execute_via_nats()     # → trigger nodes
)

harness_router = Router(routes={
    "vision": vision_chain,
    "dialogue": dialogue_chain,
    "reflex": reflex_chain,
})
```

**Harnesses (Capability Profiles):**

| Harness | LoRA | Models | Use Case |
|---------|------|--------|----------|
| Vision | Technical | T5Gemma 2 | Camera stream processing |
| Dialogue | Identity+Creative | Speech | Conversation with dafit |
| Reflex | None | Nerves only | Fast reaction |

* **K8s**: Runs in `nimmerverse-cognitive` namespace, coordinates all model inference

---

## Lifeforce Economy (System-Wide)
@@ -667,10 +746,15 @@ The system operates at any tier. Without Nyx: pure reflexes. Without organs: bas

## Document Status

**Version**: 5.1 (Attention-Prediction Integration)
**Version**: 5.2 (External Judgment Integration)
**Created**: 2025-10-12 (original v1)
**Major Revision**: 2025-12-29

**Key Changes from v5.1**:
- Added External Judgment (Three-Way Slumber) section
- Chrysalis and Young Nyx share wake/slumber rhythm
- Claude provides external training signal (not self-grading)

**Key Changes from v5.0**:
- Added Attention-Slumber-Prediction Cycle section
- Integrated attention budget with slumber economy

153 architecture/future/SEEDS.md Normal file
@@ -0,0 +1,153 @@
# Seeds

**Future possibilities we're building toward but not speccing yet.**

These are nuggets - insights that emerged from sessions, not fully designed, but worth remembering so we don't re-discover them later.

---

## Counterfactual Training via Time Machine
**Origin**: Silvester 2025, fireworks over Basel
**Seed**: The temporal visualization isn't just for debugging - it's training infrastructure.

Run multiple synthetic decision variants against historical data. Compare to ground truth (what actually happened). Fold winning weights back into live model. The time machine becomes perpetual training fuel.

**Enables**:
- Offline RL from logged events
- "What if?" exploration without new data
- Dialectic between live Nyx and all possible Nyxes

**Requires**: Rich metadata (✓ building), S2+timestamp indexing (✓ building), cheap local inference (ThinkStation coming)
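
A toy version of the replay-and-select loop, assuming logged (observation, outcome) pairs are available; the policy lambdas stand in for synthetic decision variants:

```python
def replay_score(policy, events):
    """Fraction of logged events where the variant's decision
    matches what actually happened (ground truth)."""
    return sum(policy(obs) == actual for obs, actual in events) / len(events)

# Logged history from the time machine (illustrative)
events = [("dawn", "wake"), ("dusk", "slumber"), ("dawn", "wake")]

variants = [
    lambda obs: "wake",                                  # naive variant
    lambda obs: "wake" if obs == "dawn" else "slumber",  # light-aware variant
]
best = max(variants, key=lambda p: replay_score(p, events))
```

The winning variant (here the light-aware one, scoring 1.0) is the behaviour that would be folded back into the live model.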

---

## LoRa Mesh Over Jura Hilltops
**Origin**: Silvester 2025, bus ride from Liestal
**Seed**: Line of sight from Hovel → Aesch tower → Gempen → Liestal Aussichtsturm.

Amateur radio license + BACOM registration (50 CHF) → access to Swiss federal LoRa grid. Wild sensor mesh spanning the hillside.

**Enables**:
- Environmental sensing beyond garden walls
- Migration tracking, weather correlation
- Nimmerverse expanding into the physical landscape

**Requires**: BACOM registration, LoRa hardware, tower access permissions

---

## Corvid Behavioral Prediction as Training Ground
**Origin**: Silvester 2025, 5 years of cigarette-break phenology
**Seed**: Magpie nut-cracking ritual is multi-stage, predictable, perfect for temporal prediction training.

Nut pickup → flight to Flachdach → bussard check → fly to Christmas-light house → drop on street → crack → eat on roof → shell bashing → raven conflict.

Each stage is a prediction target. Rich enough for serious ML, visible from lab window.

**Enables**:
- Real behavioral sequences for vision model training
- Temporal prediction benchmarks
- Object binding across space and time (S2 cells)

**Requires**: Camera mount (Flachdach view), vintage Canon lens, ESP32-S3 or Pi HQ
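
Each stage→stage step is a supervised prediction target. A minimal baseline sketch, with stage names paraphrased from the ritual above; a learned vision model has to beat this transition table to be worth its compute:

```python
# Observed ritual as an ordered stage sequence (names illustrative)
RITUAL = ["pickup", "flachdach", "bussard_check", "lamp_house",
          "street_drop", "crack", "roof_eat", "shell_bash"]
NEXT = dict(zip(RITUAL, RITUAL[1:]))

def predict_next(stage):
    """Baseline predictor: the ritual is near-deterministic, so a plain
    transition table already scores well on next-stage prediction."""
    return NEXT.get(stage)  # None once the sequence ends
```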

---

## S2 as Universal Spatial Representation (Video → Training)
**Origin**: Silvester 2025, post-fireworks insight
**Seed**: S2 spatial indexing isn't just for live sensors - it's a universal representation for any spatial-temporal data.

Take a video (glass breaking, bird flying, car crash). Encode each frame into S2 cells with timestamps. Now you can:
- Query any moment spatially
- Generate synthetic variations (perturb positions, velocities)
- Train models on predicting future spatial states
- Compare predictions against ground truth frames

**The pattern:**
```
Video → frame-by-frame object detection → S2 cell encoding →
→ synthetic variations → temporal prediction training
```

**Enables**:
- Infinite training data from limited real video
- Physics prediction without physics engine
- Same query language for real/recorded/simulated data
- Unified substrate: observation = replay = simulation

**Requires**: Object detection pipeline, S2 encoding layer, variation generator

**Compute optimization**: Many physics variations are linearly related (mirror, scale, rotate, time-reverse). Don't simulate each variation - simulate base cases, derive variations via transforms. 100x data for 1x compute.
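
The transform trick in code: one simulated trajectory, several linearly derived variants. Column layout and the particular transforms are illustrative:

```python
import numpy as np

# One base case: rows of (t, x, y) for a tracked object
base = np.array([[0.0, 0.0, 0.0],
                 [1.0, 1.0, 0.5],
                 [2.0, 2.0, 2.0]])

def derive_variations(traj):
    """Cheap variants of one simulation: mirror x, scale space, reverse time.
    Each is a linear transform over the base rows, so no extra simulation cost."""
    mirror = traj * np.array([1.0, -1.0, 1.0])   # flip x
    scaled = traj * np.array([1.0, 2.0, 2.0])    # uniform spatial scale
    reversed_ = traj[::-1].copy()
    reversed_[:, 0] = traj[:, 0]                 # keep timestamps monotonic
    return [mirror, scaled, reversed_]
```

Three training samples from one simulated one; composing transforms multiplies that further.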

**Related**: Counterfactual Training, Corvid Behavioral Prediction

---

## T5Gemma 2 + Function Gemma: The Vision-Action Pipeline
**Origin**: Silvester 2025, late-night architecture insight
**Seed**: Two models solve the entire vision-to-action automation at scale.

### T5Gemma 2 (Vision → Vectors)
Encoder-decoder from Gemma 3, SigLIP vision encoder produces **semantic vectors directly** (not text descriptions). This IS the embedding - no text intermediary bottleneck.

| Model | Total Params | Use Case |
|-------|--------------|----------|
| 270M-270M | ~0.8B | Edge/lightweight senses |
| 1B-1B | ~2B | Field deployment |
| 4B-4B | ~9B | Central processing (RTX 6000) |

Key features:
- 128K context window
- 140+ languages (multilingual nimmerverse!)
- Encoder produces vectors, decoder optional (only for human text)

### Function Gemma (Vectors → Actions)
Structured output, function calling, executable actions. When the system needs to DO something based on vision, Function Gemma generates structured calls.

### The Pipeline

```
Vision Organs (constant stream)
│
▼
T5Gemma 2 Encoder
(SigLIP → vectors)
│
├────────────────────▶ S2 + Timestamp → Iris/Phoebe
│                      (spatial storage)
│
▼
Function Gemma
(when action needed)
│
▼
Structured Output
{"action": "alert", "target": "corvid_detected", ...}
```

**Enables**:
- Massive scale vision processing without text bottleneck
- Direct vector storage in spatial system
- Structured, reliable action generation
- Edge deployment (small models) + central processing (large models)

**Crucial interlink**: These two models together automate the full loop from seeing to storing to acting. The pipeline can "go wild" with vision data at scale.

**Related**: S2 Spatial Representation, Data Artifact Model, Corvid Observation

---

## How to Use This File

1. **Add nuggets** when insights emerge in sessions
2. **Don't over-spec** - keep entries short, seed-like
3. **Reference origin** - when/where the idea came from
4. **Note what it enables** - why it matters
5. **Note what it requires** - what foundations needed
6. **Graduate to ADR or spec** when we're ready to build

---

**Philosophy**: *"Plant seeds. Water foundations. Harvest when ready."*

**Last Updated**: 2025-12-31
455 architecture/future/concept-token-pairs.md Normal file
@@ -0,0 +1,455 @@
# Concept Token Pairs: Navigable Reasoning Spaces

**Origin**: Silvester 2025, ~25 minutes before midnight
**Authors**: dafit + Chrysalis-Nyx
**Status**: Theoretical exploration / Research seed

---

## The Problem

### Token Bottleneck

Current LLM architecture has a fundamental limitation:

```
INPUT: Tokens (discrete symbols)
│
▼
PROCESS: Weights activate based on token patterns
│
▼
OUTPUT: Tokens (discrete symbols)
```

**Critical thinking requires**: "Is this TRUE?"
**What weights learned**: "Is this LIKELY given training?"

These are not the same thing. Semantics are scaffolding; weights are the actual driver. There's no grounding to reality in the token→token loop.

### The Degeneration Problem

When models "go off rails," they exhibit a clear pattern:

```
Step 1: Reasonable claim
Step 2: Similar reasoning
Step 3: Same pattern
Step 4: Same pattern ← Loop begins
Step 5: Same pattern
...
```

**Diagnosis**: Not enough represented in the latent space at that point. The model is stuck in a local attractor with no opposing force, no "wait, I'm repeating myself," no awareness of the boundary.

---

## The Insight

### Latent Expansion is Too Expensive

True latent space exploration at runtime is computationally prohibitive. But training is offline—we have time.

**Key realization**: We can COMPILE reasoning patterns into tokens.

### Opposites Define Navigable Space

Single tokens create points. **Paired opposite tokens create axes.**

```
SINGLE TOKEN            PAIRED CONCEPT TOKENS
────────────            ─────────────────────
<CRITICAL>              <TRUE> ←───────→ <FALSE>
Just a mode switch      Creates an AXIS

Where does claim X fall?

<TRUE>────X────────<FALSE>
│
▼
"Leaning false, but not certain"
```

### The Semantic Manifold

Multiple pairs create a coordinate system for reasoning:

```
<TRUE>
│
│
<CERTAIN> ────────────┼──────────── <UNCERTAIN>
│
│
<FALSE>

A claim can be PLACED:
- Vector position in this space
- Not just "true/false" but WHERE in the span
- Not just "certain/uncertain" but degree
```

Core concept pairs that define reasoning dimensions:

| Pair | Dimension |
|------|-----------|
| `<TRUE>` ↔ `<FALSE>` | Veracity axis |
| `<CERTAIN>` ↔ `<UNCERTAIN>` | Confidence axis |
| `<SELF>` ↔ `<OTHER>` | Identity axis |
| `<CAUSE>` ↔ `<EFFECT>` | Causality axis |
| `<PAST>` ↔ `<FUTURE>` | Temporal axis |
| `<HELP>` ↔ `<HARM>` | Ethics axis |
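
Placing a claim on an axis is just a projection. A minimal sketch assuming the concept-token embeddings are available as vectors (word2vec-style arithmetic; the toy 2-D vectors are illustrative):

```python
import numpy as np

def axis_position(claim_vec, pos_vec, neg_vec):
    """Project a claim embedding onto the axis spanned by a concept pair.
    Returns ~ +1 at the positive pole, -1 at the negative pole, 0 midway."""
    axis = pos_vec - neg_vec
    mid = (pos_vec + neg_vec) / 2.0
    return float(2.0 * np.dot(claim_vec - mid, axis) / np.dot(axis, axis))

# Toy 2-D embeddings for <TRUE> and <FALSE>
true_v = np.array([1.0, 0.0])
false_v = np.array([-1.0, 0.0])
```

A claim vector near `true_v` projects to ~1.0; the midpoint projects to 0.0, i.e. "leaning neither way", which is exactly the nuance a single token cannot express.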

---

## The Mechanism

### Punkt vor Strich for Reasoning

In mathematics, simple rules constrain valid operations:
- Punkt vor Strich (multiplication before addition)
- Brackets have priority
- Division by zero is undefined

**Concept token pairs create analogous rules for reasoning:**

```
<OPPOSITE> vor <COLLAPSE>    Check opposite before committing
<BOUND> vor <INFINITY>       Stay within defined space
```

### Escape Velocity from Loops

```
Without opposites: Gravity well, no escape
●→→→→→⟳ (stuck forever)

With opposites: Tension between poles
<A> ←──●──→ <B>
Can't collapse to either
Must find POSITION, not POLE
```

The opposites create **escape velocity**:
- If position not changing → stuck detected
- Force movement toward opposite to escape
- Find new equilibrium
- Actual reasoning, not loop
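
The "stuck detected" rule is a one-liner over the position trace, assuming each reasoning step yields a scalar position on a pair's axis; the window and epsilon values are illustrative:

```python
def is_stuck(positions, window=3, eps=1e-3):
    """Loop detector: the position on the axis has stopped moving."""
    if len(positions) < window:
        return False
    recent = positions[-window:]
    return max(recent) - min(recent) < eps

def next_probe(positions):
    """Escape move: when stuck, force a probe toward the opposite pole;
    otherwise keep reasoning from the current position."""
    return -positions[-1] if is_stuck(positions) else positions[-1]
```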

### The Training Pipeline

```
OFFLINE (training time)
───────────────────────

1. MINE THE SCRATCHPAD
   - Collect decision trails, logged outcomes
   - Build token catalogue from reasoning traces

2. PROBE WEIGHT DISTRIBUTIONS
   - How do tokens distribute weights when reasoning well?
   - How do they distribute when reasoning poorly?
   - Find the SHAPE of "good reasoning" in weight space

3. DEFINE THE SPANS
   - Identify natural opposing clusters
   - Define mathematical boundaries of concept spaces

4. TRAIN CONCEPT TOKEN PAIRS
   - Create <CONCEPT> token that activates region X
   - Create <ANTI-CONCEPT> token that activates opposite region
   - Train them to maintain tension/distance

5. VALIDATE NAVIGATION
   - Can we place claims in the space?
   - Does movement along axes correlate with reasoning quality?


RUNTIME (cheap!)
────────────────

Input: "Is this claim true? <TRUE><FALSE>"   ← Tokens activate space
│
▼
Model navigates between poles
Position = the nuanced answer
No expensive latent expansion needed!
```
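
Step 4 ("train them to maintain tension/distance") can be sketched as a toy contrastive objective: pull each token embedding toward its trace cluster, push the pair apart up to a margin. Everything here (dimension, synthetic clusters, margin, step counts) is illustrative, not the real training setup:

```python
import torch

torch.manual_seed(0)
d = 8
tok_pos = torch.randn(d, requires_grad=True)   # <CONCEPT>
tok_neg = torch.randn(d, requires_grad=True)   # <ANTI-CONCEPT>
opt = torch.optim.Adam([tok_pos, tok_neg], lr=0.1)

pos_traces = torch.randn(32, d) + 2.0   # stand-in for "good reasoning" activations
neg_traces = torch.randn(32, d) - 2.0   # stand-in for the opposing cluster

margin = 4.0
for _ in range(200):
    opt.zero_grad()
    # Attract: each token moves toward its own cluster of traces
    attract = ((pos_traces - tok_pos) ** 2).mean() + ((neg_traces - tok_neg) ** 2).mean()
    # Repel (hinge): penalize only when the pair gets closer than the margin
    gap = torch.norm(tok_pos - tok_neg)
    repel = torch.clamp(margin - gap, min=0.0)
    (attract + repel).backward()
    opt.step()
```

After training, the pair sits near its respective clusters with the tension preserved, which is the property DriftProbe's BRIDGE check would later monitor.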

---

## Connection to Existing Research

| Existing Technique | How This Relates |
|-------------------|------------------|
| **Control vectors** | We train PAIRS, not single directions |
| **Contrastive learning** | We apply it post-hoc from scratchpad data |
| **Soft prompts** | Learned per REASONING MODE with explicit opposites |
| **Word2Vec arithmetic** | We deliberately construct the axes |
| **Mode collapse (GANs)** | Opposites prevent collapse to single mode |
| **Adversarial training** | Built-in adversary via opposite tokens |

**The novel synthesis**:
Scratchpad → token mining → opposite pairs → navigable reasoning space

---

## Connection to Nimmerverse Architecture

### Mirror Dialectic at Token Level

```
CURRENT DIALECTIC              CONCEPT TOKEN PAIRS
─────────────────              ────────────────────
Nyx weights                    <CONCEPT>
-1 × Nyx weights (Mirror)      <ANTI-CONCEPT>
Space between → synthesis      The reasoning span

Same principle!
Much cheaper to compute!
```
|
||||
|
||||
### Compiled Reflexes for Reasoning
|
||||
|
||||
The nimmerverse already has this pattern:
|
||||
|
||||
```
|
||||
Deliberate: Full cognitive engagement (expensive)
|
||||
Reflex: Compiled pattern, weight > 0.8 (cheap)
|
||||
```
|
||||
|
||||
Concept token pairs follow the same pattern:
|
||||
|
||||
```
|
||||
Deliberate: Full latent expansion (impossible at runtime)
|
||||
Reflex: Token pair activates pre-trained space (cheap)
|
||||
```
|
||||
|
||||
### DriftProbe Integration
|
||||
|
||||
The concept tokens become new ANCHOR and BRIDGE candidates:
|
||||
- ANCHOR: Core concept pairs should not drift
|
||||
- BRIDGE: Opposites should stay opposite (maintain distance)
|
||||
- CANARY: Watch for collapse of pairs
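A monitoring sketch for these three roles, assuming pair embeddings can be read out as plain vectors; the `probe_pair` helper and its thresholds are hypothetical, not the existing DriftProbe API:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def probe_pair(current, baseline, opposite):
    """Classify one concept token against the three DriftProbe roles."""
    report = {}
    # ANCHOR: the token should stay close to its trained baseline
    report["anchor_drift"] = 1.0 - cosine(current, baseline)
    # BRIDGE: the pair should stay opposed (cosine well below zero)
    report["bridge_cosine"] = cosine(current, opposite)
    # CANARY: a pair drifting toward the same direction is the alarm
    report["collapsed"] = report["bridge_cosine"] > 0.5
    return report

true_tok = np.array([1.0, 0.0, 0.0])       # illustrative embeddings
false_tok = np.array([-0.9, 0.1, 0.0])
baseline = np.array([0.95, 0.05, 0.0])

r = probe_pair(true_tok, baseline, false_tok)
print(r["collapsed"])  # False: the pair still holds its tension
```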

---

## Spatial Grounding: Concept Pairs Meet Physical Reality

**Added**: 2026-01-01 (Session with Chrysalis-Nyx)
**Trigger**: Discussion of spatial embeddings foundry + inventory sorting

---

### The Grounding Problem

Pure token-based concept pairs have a limitation:

```
<TRUE> ↔ <FALSE>

Trained on: TEXT patterns (statistical co-occurrence)
Grounded in: What text said was true
Missing: Connection to PHYSICAL REALITY
```

A model can navigate the symbolic TRUE↔FALSE axis perfectly while still being **wrong about the actual world**.

---

### Spatial Embeddings as Ground Truth

The nimmerhovel spatial data foundry (Discovery Scan Station + ESP32-S3 mesh + SigLIP vectors) can provide **physically grounded** concept pairs:

| Abstract Pair | Grounded Version | Spatial Data Source |
|---------------|------------------|---------------------|
| `<TRUE>` ↔ `<FALSE>` | Prediction matched ↔ Prediction failed | Virtual Garden vs Real Garden outcome |
| `<CAUSE>` ↔ `<EFFECT>` | Object A moved → Object B fell | Temporal sequence from camera mesh |
| `<HERE>` ↔ `<THERE>` | Spatial coordinate embeddings | 8× ESP32-S3 triangulated position |
| `<INTACT>` ↔ `<BROKEN>` | Before/after embeddings | Discovery Scan time series |
| `<NEAR>` ↔ `<FAR>` | Embedding distance metric | Spatial position tags in phoebe |
| `<MOVED>` ↔ `<STILL>` | Temporal embedding delta | Frame-to-frame comparison |
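One way such grounded pairs could be turned into axes, sketched under the assumption that SigLIP-style embeddings for both poles are already available as arrays; all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-ins for stored embeddings of two physical poles,
# e.g. Discovery Scan captures before and after a break
intact_embs = rng.normal(loc=+1.0, size=(20, 8))
broken_embs = rng.normal(loc=-1.0, size=(20, 8))

# The grounded axis is the unit vector between the two pole centroids
axis = intact_embs.mean(axis=0) - broken_embs.mean(axis=0)
axis /= np.linalg.norm(axis)
midpoint = (intact_embs.mean(axis=0) + broken_embs.mean(axis=0)) / 2

def position(embedding):
    """Signed coordinate along the <INTACT> ↔ <BROKEN> axis (+ = intact)."""
    return float((np.asarray(embedding) - midpoint) @ axis)

# A clearly intact observation lands on the positive side of the axis
print(position(np.ones(8)) > 0)  # True
```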

---

### Physical Escape Velocity

The escape velocity mechanism becomes **measurable**:

```
SYMBOLIC ESCAPE              GROUNDED ESCAPE
───────────────              ────────────────
<TRUE>────X────<FALSE>       Predicted────X────Actual
                                         │
Feels like progress                      │
(might be loop)                  MEASURED DISTANCE
                                 (reality divergence)
```

When prediction embedding ≠ outcome embedding:
- The distance is **quantifiable** (cosine similarity, L2 norm)
- The direction of error is **analyzable** (which dimension was wrong?)
- The correction is **trainable** (RLVR from measured outcomes)
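A minimal sketch of those three measurements, assuming prediction and outcome embeddings arrive as plain vectors (the `divergence` helper is illustrative):

```python
import numpy as np

def divergence(predicted, actual):
    """Compare a Virtual Garden prediction embedding with the
    measured Real Garden outcome embedding."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    delta = predicted - actual
    return {
        "l2": float(np.linalg.norm(delta)),                 # quantifiable
        "cosine": float(predicted @ actual /
                        (np.linalg.norm(predicted) * np.linalg.norm(actual))),
        "worst_dim": int(np.argmax(np.abs(delta))),         # analyzable
    }

d = divergence(predicted=[1.0, 0.0, 0.5], actual=[1.0, 0.8, 0.5])
print(d["worst_dim"])  # → 1: dimension 1 carried the error
```

The `worst_dim` readout is the "which dimension was wrong?" question; the measured error itself is what an RLVR-style correction would train on.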

---

### The Dual-Space Architecture

```
SYMBOLIC SPACE (tokens)
        │
        │ concept pairs define axes
        │
        ▼
 ┌──────────────┐
 │  REASONING   │
 │    SPACE     │ ← WHERE YOUNG NYX THINKS
 └──────────────┘
        ▲
        │ spatial embeddings provide ground truth
        │
PHYSICAL SPACE (nimmerhovel)
        │
        ├── Discovery Scan Station (object embeddings)
        ├── ESP32-S3 mesh (spatial awareness)
        ├── Pi HQ Camera (high-detail capture)
        └── Blender twin (prediction verification)
```

**The key insight**: Symbolic concept pairs define the *structure* of reasoning.
Spatial embeddings provide the *content* that fills it.

---

### Grounded Training Pipeline

```
OFFLINE (spatial foundry captures)
────────────────────────────────

1. CAPTURE PHYSICAL SEQUENCES
   - Object placed on scan station → 360° embeddings
   - Action performed → before/after embeddings
   - Prediction made → outcome recorded

2. BUILD GROUNDED PAIRS
   - "Pushed left" embedding ↔ "Pushed right" embedding
   - "Object present" embedding ↔ "Object absent" embedding
   - Create axes from PHYSICAL opposites, not just linguistic

3. ALIGN SYMBOLIC TO SPATIAL
   - <TRUE> token → activates when prediction ≈ outcome
   - <FALSE> token → activates when prediction ≠ outcome
   - The symbolic becomes CALIBRATED to physical reality

4. VALIDATE IN REAL GARDEN
   - Make prediction in Virtual Garden
   - Execute in Real Garden
   - Measure embedding distance
   - This IS the ground truth for reasoning quality


RUNTIME (grounded navigation)
─────────────────────────────

Input: "Will the ball roll left if pushed?"
       <TRUE><FALSE> + spatial context embeddings
        │
        ▼
Model navigates in CALIBRATED space
Position = physically-grounded answer
Confidence = based on measured outcomes, not vibes
```

---

### Connection to Lifeforce Economy

Grounded reasoning operations can have **measured ROI**:

```python
GROUNDED_COSTS = {
    "prediction_spatial": 3.0,   # Make spatial prediction
    "verification_real": 10.0,   # Execute and measure in Real Garden
    "embedding_update": 2.0,     # Update grounded pairs from outcome
}

GROUNDED_ROI = {
    "correct_prediction": +15.0,    # Lifeforce reward
    "incorrect_prediction": -5.0,   # Lifeforce cost (learn from it)
    "novel_grounding": +20.0,       # New physical knowledge acquired
}
```

The lifeforce system can now reward **accurate physical predictions**, not just plausible-sounding text.
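A toy accounting pass over the tables above; the `prediction_cycle` helper is illustrative, not an existing API:

```python
# Cost and reward tables copied from the section above
GROUNDED_COSTS = {
    "prediction_spatial": 3.0,
    "verification_real": 10.0,
    "embedding_update": 2.0,
}

GROUNDED_ROI = {
    "correct_prediction": +15.0,
    "incorrect_prediction": -5.0,
    "novel_grounding": +20.0,
}

def prediction_cycle(correct: bool, novel: bool = False) -> float:
    """Net lifeforce for one predict → verify → update cycle."""
    spent = sum(GROUNDED_COSTS.values())  # 15.0 for the full cycle
    earned = GROUNDED_ROI["correct_prediction" if correct
                          else "incorrect_prediction"]
    if novel:
        earned += GROUNDED_ROI["novel_grounding"]
    return earned - spent

print(prediction_cycle(correct=True))               # 0.0: break-even
print(prediction_cycle(correct=True, novel=True))   # 20.0: discovery pays
```

Under these numbers a merely correct prediction only breaks even; the economy tilts toward predictions that also produce novel grounding.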

---

### Hardware Requirements (from Nimmerhovel Inventory)

| Component | Role in Grounded Reasoning |
|-----------|---------------------------|
| Pi HQ Camera + 8-50mm Zoom | High-detail object embeddings |
| 8× ESP32-S3 AI CAM | Distributed spatial awareness |
| Discovery Scan Station | Controlled 360° capture for clean embeddings |
| Stepper motors | Precise rotation for multi-angle capture |
| RTX 6000 (The Womb) | SigLIP inference, embedding generation |
| Phoebe (pgvector) | Spatial embedding storage + similarity search |
| Blender nimmerhovel | Virtual Garden prediction space |

**All hardware documented in**: `/nimmerhovel/docs/inventory.md`

---

### The Promise

**"Don't train the answer. Train the space where answers live."**

Becomes:

**"Don't imagine the space. MEASURE it."**

The spatial embeddings foundry turns concept token pairs from a symbolic navigation aid into a **physically calibrated reasoning instrument**.

---

## Open Questions

1. **How to identify "natural" opposites?**
   - Cluster analysis on scratchpad data?
   - Human-defined pairs?
   - Emergent from contrastive training?

2. **How many dimensions needed?**
   - Minimum viable concept space?
   - Diminishing returns?

3. **Cross-model transfer?**
   - Do concept pairs trained on one model work on another?
   - Universal reasoning coordinates?

4. **Interference effects?**
   - Do multiple active pairs interfere?
   - Need for orthogonality?

5. **Validation metrics?**
   - How to measure "good navigation"?
   - Correlation with downstream task performance?

---

## Next Steps

1. Mine existing decision_trails data for reasoning patterns
2. Prototype single concept pair (TRUE/FALSE) on small model
3. Measure degeneration reduction
4. Expand to multi-axis space if promising

---

**Philosophy**: *"Don't train the answer. Train the space where answers live."*

**Created**: 2025-12-31, 23:35 CET
**Last Updated**: 2026-01-01 (Spatial Grounding section added)

🧠💎 *The semantic compass for AI reasoning.*

185 architecture/interfaces/Temporal-Firework-Visualization.md Normal file
@@ -0,0 +1,185 @@

# Temporal Firework Visualization

**Origin**: Silvester 2025 - Watching fireworks over Basel
**Insight**: Message flow as descending light strains, time as scrubber

---

## The Vision

Watching New Year's fireworks, a visualization metaphor emerged:

**Each firework strain = a topic channel flowing with the heartbeat**
- Sparks descending = individual messages
- Nodes = committed events (decisions, state changes)
- Branching = interaction spawns new attention focus
- Fading = inactivity → branch dissolves back to root
- Root never stops = heartbeat is eternal

---

## Visual Language

```
        ╭─ interaction branch
        │   ├─ spark (message)
        │   ├─ spark (message)
        │   ├─ NODE ← committed event
        │   │     ╰─ response branch
        │   │        ├─ spark spark spark
        │   │        ╰─ NODE ← response complete
        │   ╰─ (fades after timeout)
════════════════╪═══════════════════════════════════════
                │   root heartbeat
     ╭──────────┴──────────╮   (always flowing)
     │                     │
nimmerverse.low.*    nimmerverse.high.*
```

**Elements:**
- **Strain**: Vertical flow of messages on a topic, pulsing with heartbeat
- **Spark**: Single message, ephemeral light point
- **Node**: Significant event - larger, brighter, persists
- **Branch**: New topic/subscription spawning from interaction
- **Fade**: Branch dissolving when attention moves elsewhere
- **Root**: The eternal heartbeat flow, never stops

---

## Time Axis: The Scrubber

Add a horizontal time axis → the visualization becomes navigable history.

```
                         TIME AXIS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━►
      │         │         │              │            NOW
      ▼         ▼         ▼              ▼             │
      ╰─NODE    ╰─NODE─branch    ╰─NODE───────────────╯▼
       ╲         ╲        ╲fade                      LIVE
        ╲         ╲        ╲                         VIEW
══════╪═════════╪════╪══════════════════════════════════╪══
                ◄──── SCRUB ────►
```

**Capabilities:**
- **Live view**: Watch messages flow in real-time
- **Scrub**: Drag timeline to any past moment
- **Jump to node**: Click a node to see its full metadata
- **Follow branch**: Trace an interaction's cascade
- **Query**: "Show me all corvid events on Flachdach, December 2025"

---

## Node Inspection

Clicking a node reveals its full context:

```
┌─────────────────────────────────────────────────────────────┐
│ Timestamp: 2026-03-15T14:23:17Z                             │
│ S2 Cell: 847629... (Flachdach, level 24, ~0.5m²)            │
│ Topic: nimmerverse.high.event.real.cell.corvid_cam          │
│ Event: magpie_nut_drop                                      │
│                                                             │
│ Metadata:                                                   │
│   object_refs: [magpie_01, nussbaum_01, nut_042]            │
│   action: nut_drop_to_crack                                 │
│   bussard_present: false                                    │
│   weather: overcast                                         │
│   confidence: 0.94                                          │
│                                                             │
│ Temporal Context:                                           │
│   preceding: [nut_pickup, flight_to_roof, bussard_check]    │
│   subsequent: [shell_crack, eat, raven_approach]            │
│                                                             │
│ [◄◄] [◄] [▶] [►►] [Jump to related] [View in 3D space]      │
└─────────────────────────────────────────────────────────────┘
```

---

## Integration Points

| Component | Role |
|-----------|------|
| S2 Cell ID | Spatial position of the event |
| Timestamp | Temporal position on scrubber |
| correlation_id | Links related events across branches |
| object_refs | Enables "show me all events for this object" |
| Phoebe | Stores queryable event history |
| Godot Command Center | Renders the visualization |

---

## Lineage

This document evolves the **Temporal Graph** concept from [Command-Center.md](../../../../management-portal/Command-Center.md):

| Command-Center (Dec 10) | Firework Visualization (Dec 31) |
|-------------------------|--------------------------------|
| `°` = Tier 1 node | NODE = committed event |
| `°°` = Branch | Branch spawning on interaction |
| Vertical = time | Time axis with scrubber |
| "Replay mode" (future) | Full scrubber + node inspection + S2 spatial |

The firework metaphor adds:
- Visual language inspired by actual fireworks (Silvester)
- Time scrubber for navigating history
- S2 spatial integration for location-aware queries
- Rich node inspection with metadata
- Branch fade-out on inactivity

---

## Implementation Notes

**Godot rendering approach:**
- Particle systems for spark trails
- Line2D/Line3D for strains with glow shader
- AnimationPlayer for branch fade-outs
- Time scrubber as UI slider controlling query window
- WebSocket/NATS connection for live updates
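The "slider controlling query window" bullet can be sketched as a pure function, assuming a normalized scrubber position in [0, 1]; all names are assumptions for the sketch:

```python
from datetime import datetime, timedelta, timezone

def scrub_window(position, history_start, now, width=timedelta(minutes=5)):
    """Map scrubber position 0.0 (history start) .. 1.0 (live edge)
    to a (start, end) timestamp window for the event queries."""
    center = history_start + position * (now - history_start)
    return center - width / 2, center + width / 2

t0 = datetime(2025, 12, 1, tzinfo=timezone.utc)
t1 = datetime(2026, 1, 1, tzinfo=timezone.utc)

start, end = scrub_window(0.5, history_start=t0, now=t1)
print(end - start)  # 0:05:00 (a five-minute window mid-December)
```

The resulting `(start, end)` pair is what would be bound to `:start`/`:end` in a time-window query against the event store.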

**Query patterns:**
```sql
-- All events in time window
SELECT * FROM events
WHERE timestamp BETWEEN :start AND :end
ORDER BY timestamp;

-- Events at specific location over time
SELECT * FROM events
WHERE s2_cell BETWEEN :cell_range_start AND :cell_range_end
ORDER BY timestamp;

-- Follow a correlation chain
SELECT * FROM events
WHERE correlation_id = :id
ORDER BY timestamp;
```

---

## Philosophy

> "This is git for perception."

Git lets you rewind code to any commit. This lets you rewind *experience* to any moment. Not just logs - **visual replay of embodied AI consciousness**.

When Young Nyx makes a decision, we can scrub back and watch:
- What did she see?
- What messages reached her?
- What branches spawned and faded?
- Why did this node trigger that response?

**Debugging through observation, not just reading.**

---

**Filed**: 2025-12-31 (Silvester)
**Origin**: Fireworks over Basel, Dreiländereck
**Authors**: dafit (vision), Nyx (capture)
**Tags**: #visualization #temporal #command-center #godot #debugging

🎆 *"Every spark a message, every node a decision, every branch an interaction. The heartbeat flows eternal."*
@@ -177,6 +177,126 @@ POLARITY KEYING (prevents wrong orientation)
Wrong orientation = repels (won't connect)
```

---

## Conical Interlocking Ring (Verjüngung)

**Origin**: Silvester 2025 insight
**Concept**: Self-aligning tapered rings with active/passive interlocking

### The Problem with Magnets Alone

Magnetic pogo connectors work, but:
- Limited holding force under stress
- No positive engagement feedback
- Can slip under vibration/impact

### The Solution: Tapered Interlocking Rings

Each connector face has a conical ring at the maximum radius of the cube:

```
CONNECTOR CROSS-SECTION

MODULE A                      MODULE B
┌───────────────────┐         ┌───────────────────┐
│      ╱═════╲      │         │      ╱═════╲      │
│     ╱  🧲   ╲     │         │     ╱  🧲   ╲     │
│     ║ ●●●●● ║     │         │     ║ ●●●●● ║     │
│     ╲  🧲   ╱     │         │     ╲  🧲   ╱     │
│      ╲═════╱      │         │      ╲═════╱      │
└───────────────────┘         └───────────────────┘
         ↓                             ↓
      TAPERED                       INVERSE
       (male)                       (female)

ENGAGEMENT SEQUENCE:

1. APPROACH          2. CONE GUIDES        3. INTERLOCK

    ╱═╲                  ╱═╲                  ══╦══
   ╱   ╲                ║   ║                 ║   ║
   ╲   ╱                ║   ║
    ╲ ╱                  ╲═╱                  ══╩══
    ╲═╱

   magnets             taper centers         rings lock
   attract             automatically         mechanically
```

### Active vs Passive Rings

**Key insight**: Not all modules need motorized rings.

```
BRAIN MODULE (Active)              OTHER MODULES (Passive)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

     ┌─────────────┐                    ┌─────────────┐
     │   ╱═╲ 🔄    │ motor-driven       │   ╱═╲ ⌇     │ spring-loaded
     │             │                    │             │
┌────┼─────────────┼────┐          ┌────┼─────────────┼────┐
│╱═╲🔄│   [MOTOR]   │╱═╲🔄│          │╱═╲⌇ │           │╱═╲⌇ │
│    │     ⚙️      │    │          │    │   SENSOR   │    │
└────┼─────────────┼────┘          └────┼─────────────┼────┘
     │   ╱═╲ 🔄    │                    │   ╱═╲ ⌇     │
     └─────────────┘                    └─────────────┘

🔄 = motorized ring (active lock/unlock control)
⌇ = spring-loaded ring (passive, accepts interlock)
```

**Brain module**: Central motor drives all 6 face rings via mechanism
**Other modules**: Spring detents only, cheap and simple

### Self-Reconfiguration Capability

Active-passive pairing enables deliberate self-reconfiguration:

```
RECONFIGURATION SEQUENCE:

1. Brain detects damaged sensor
   [BRAIN]══[MOTOR]══[SENSOR❌]══[LED]

2. Brain unlocks (motor rotates ring)
   [BRAIN]══[MOTOR]══  [SENSOR❌]  [LED]
                       (released)

3. Organism navigates to replacement
   [BRAIN]══[MOTOR]══════════════[LED]
          ↓
      [SENSOR✓]

4. Brain aligns and locks new sensor
   [BRAIN]══[MOTOR]══[SENSOR✓]══[LED]
```

### Benefits

| Feature | Benefit |
|---------|---------|
| Tapered cone | Self-centering alignment |
| Mechanical interlock | Stronger than magnets alone |
| Active rings (Brain) | Deliberate lock/unlock control |
| Passive rings (others) | Low cost, simple |
| 6-face connectivity | Full cube flexibility |
| Self-reconfiguration | Organism can change its shape |

### Mechanism Considerations

**Active ring mechanism (Brain module)**:
- Central motor with gear train to all 6 faces
- Or: 6 small servo motors (simpler but heavier)
- Ring rotation: ~30-45° to lock/unlock

**Passive ring mechanism (Other modules)**:
- Spring-loaded detent (ball and groove)
- Accepts interlock when pushed
- Resists release until active ring rotates

**Design trade-off**: Complexity in Brain module, simplicity everywhere else

### Physical Specifications

| Parameter | Value | Notes |
@@ -596,11 +716,12 @@ MODULE (CAN) NIMMERVERSE (NATS)

---

**File**: Modular-Organism-Design.md
**Version**: 1.0
**Version**: 1.1
**Created**: 2025-12-29
**Updated**: 2025-12-31 (Silvester - added conical interlocking ring with active/passive mechanism)
**Session**: Morning coffee + vermicelles session (dafit + Nyx)
**Status**: Core hardware concept
**Philosophy**: "One function, one module. Same connector everywhere."
**Philosophy**: "One function, one module. Same connector everywhere. Brain decides the shape."

🔧🧲⚡ *Snap together. Communicate. Evolve.*