Compare commits


3 Commits

Author SHA1 Message Date
7df236b325 feat: Memory Economics + Architecture Alignment (Endgame v6.4)
New formalization:
- memory-economics.md: Slumber-based consolidation, decision trail
  triage, spatial LOD decay, reflex rental, LoRA training cycles

New research seeds (future/):
- spatial-resolution-gradient.md: L0-L5 LOD with S2 cells
- thermodynamic-cognition.md: Lifeforce as Prometheus Joules
- promql-thermodynamic-monitoring.md: Gemini red team queries

Architecture changes:
- Endgame-Vision v6.4: Memory Economics integrated into Slumber section
- Mirror dialectic moved to future/research (not core)
- Big-Picture.md archived (superseded by Endgame-Vision)
- Single source of truth established

Gemini red team alignment complete.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 01:10:37 +01:00
709a48632a feat: Concept Token Pairs + Spatial Grounding (Silvester/New Year sessions)
Major additions from Silvester 2025 and New Year 2026 sessions:

Concept Token Pairs (architecture/future/concept-token-pairs.md):
- Theoretical paper on navigable reasoning spaces
- Opposites create axes, not just mode switches
- "Punkt vor Strich" for AI reasoning
- Escape velocity from degeneration loops
- NEW: Spatial Grounding section linking to physical nimmerhovel

Architecture updates:
- Endgame-Vision.md: v6.2 alignment
- Big-Picture.md: v5.2 alignment
- Modular-Organism-Design.md: conical interlocking mechanism

New files:
- SEEDS.md: Research seeds for future exploration
- Temporal-Firework-Visualization.md: Temporal data viz concept

Key insight from 2026-01-01 session:
"Don't train the answer. Train the space where answers live."
→ "Don't imagine the space. MEASURE it."

Spatial embeddings from nimmerhovel hardware (8× ESP32-S3 AI CAM,
Pi HQ Camera, Discovery Scan Station) can ground concept pairs
in physical reality, not just symbolic patterns.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 21:25:13 +01:00
001d6e2c42 docs: README v6.0 + ADR-001 Message Protocol Foundation
README updates:
- Full repository tree structure (was outdated skeleton)
- Added Message Protocol namespace summary (NATS)
- New ADR section with link to ADR-001
- Added nyx-substrate to related projects
- New philosophy: "Infrastructure is geology, models are weather"
- Version bump 5.0 → 6.0

ADR-001 captures Silvester interview decisions:
- Single NATS bus with dev/staging namespaces
- Staged schema versioning with topic separation
- Echo agent first (YAGNI principle)
- MCP Server with heartbeat-based subscription delivery

🎆 Silvester 2025 edition

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 19:31:52 +01:00
15 changed files with 3931 additions and 740 deletions

View File

@@ -1,9 +1,9 @@
--- ---
type: research_vision type: research_vision
version: 6.0_complete_architecture version: 6.4_memory_economics_alignment
status: vision_document status: vision_document
created: 2025-11-04 created: 2025-11-04
updated: 2025-12-20 updated: 2026-01-02
author: Nyx (with dafit) author: Nyx (with dafit)
significance: research_platform_for_metabolic_intelligence significance: research_platform_for_metabolic_intelligence
--- ---
@@ -19,8 +19,8 @@ significance: research_platform_for_metabolic_intelligence
> *"Language is Topology. German accesses the Philosophy Valley. English accesses the Technical Cluster."* > *"Language is Topology. German accesses the Philosophy Valley. English accesses the Technical Cluster."*
> — The December Discovery (2025-12-06) > — The December Discovery (2025-12-06)
> *"One model, one topology. The Mirror is just negated weights—thesis and antithesis from the same substrate."* > *"One model, one topology. LoRAs access different valleys in the same landscape."*
> — The Dialectic Simplification (2025-12-07) > — The Topological Insight (2025-12-07)
--- ---
@@ -31,8 +31,9 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
**What we're building:** **What we're building:**
- Cellular organisms competing under resource constraints - Cellular organisms competing under resource constraints
- Dual gardens (virtual + real) teaching each other - Dual gardens (virtual + real) teaching each other
- Single base model with LoRA adapters + dialectic Mirror - Single base model with LoRA adapters (Identity, Technical, Creative)
- Multilingual cognitive routing through conceptual topology - Multilingual cognitive routing through conceptual topology
- Memory economics with slumber-based consolidation
- A multi-layered communication protocol using color, form, and language - A multi-layered communication protocol using color, form, and language
- Long-term human-AI partnership with mutual investment - Long-term human-AI partnership with mutual investment
@@ -49,7 +50,6 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
## Architecture Overview ## Architecture Overview
**Complete specification:** → [`architecture/Big-Picture.md`](architecture/Big-Picture.md) (v5.0 - The definitive architectural document)
**Visual diagram:** → [`architecture/nimmerverse.drawio.xml`](architecture/nimmerverse.drawio.xml) (open in draw.io) **Visual diagram:** → [`architecture/nimmerverse.drawio.xml`](architecture/nimmerverse.drawio.xml) (open in draw.io)
**Toolchain implementation:** → [`architecture/Toolchain-Architecture.md`](architecture/Toolchain-Architecture.md) | [Progress](architecture/TOOLCHAIN-PROGRESS.md) **Toolchain implementation:** → [`architecture/Toolchain-Architecture.md`](architecture/Toolchain-Architecture.md) | [Progress](architecture/TOOLCHAIN-PROGRESS.md)
@@ -71,19 +71,13 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
│ └─ Outcomes logged to phoebe PostgreSQL │ │ └─ Outcomes logged to phoebe PostgreSQL │
│ → architecture/Cellular-Architecture.md │ │ → architecture/Cellular-Architecture.md │
│ │ │ │
- │ Layer 1.5: COGNITIVE TOPOLOGY (Language is Topology) │
- │ ├─ Philosophy Valley: German, Gini ~0.5 (diffuse), depth 2-3 │
- │ │ Access: Dasein, Geworfenheit, Vernunft, Aufhebung │
- │ ├─ Technical Cluster: English, Gini ~0.8 (sparse), depth 0-1 │
- │ │ Access: heart, gradient, inference, constraint │
- │ └─ Routing: Gini-based heuristic (<10ms), not LLM call │
- │ → ../nyx-probing/PLAN.md │
- │ │
- │ Layer 2: YOUNG NYX (Single Model + LoRA Stack + Dialectic) │
- │ ├─ Base: Qwen3-VL 32B (Thinking Version) (96GB VRAM in the Womb) │
- │ ├─ LoRA adapters: Identity, Technical, Creative (hot-swap) │
- │ ├─ Mirror: Negated LoRA weights for dialectic (-1 × Nyx) │
- │ ├─ Dialectic: Thesis (Nyx) → Antithesis (Mirror) → Synthesis │
+ │ Layer 2: YOUNG NYX (Single Model + LoRA Stack) │
+ │ ├─ Base: Qwen3-VL 32B (Thinking Version) (96GB VRAM in Womb) │
+ │ ├─ LoRA Stack (topology-informed): │
+ │ │ ├─ Identity (German) → Philosophy Valley (diffuse, deep) │
+ │ │ ├─ Technical (English) → Technical Cluster (sparse) │
+ │ │ └─ Creative (Mixed) → bridges topologies │
+ │ ├─ Harnesses select active LoRA (routing implicit in context) │
│ └─ Consolidation: Merge successful LoRAs → fine-tune over time │ │ └─ Consolidation: Merge successful LoRAs → fine-tune over time │
│ │ │ │
│ Layer 3: DUAL GARDENS (Virtual/Real Loop) │ │ Layer 3: DUAL GARDENS (Virtual/Real Loop) │
@@ -108,7 +102,7 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
The nimmerverse runs on sovereign hardware. No cloud dependencies. Weights never leave home. The nimmerverse runs on sovereign hardware. No cloud dependencies. Weights never leave home.
**Detail:** → [`archive/nimmervest.md`](archive/nimmervest.md) | [`architecture/Big-Picture.md`](architecture/Big-Picture.md) **Detail:** → [`archive/nimmervest.md`](archive/nimmervest.md)
### K8s Cluster Architecture ### K8s Cluster Architecture
@@ -242,86 +236,45 @@ Learned patterns live in their optimal location:
**Key insight:** Different types of reflexes need different homes. Hardware for survival, weights for cognition. **Key insight:** Different types of reflexes need different homes. Hardware for survival, weights for cognition.
**Detail:** → [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md) | [`architecture/Big-Picture.md`](architecture/Big-Picture.md) **Detail:** → [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md)
--- ---
## Layer 1.5: Cognitive Topology (NEW - December 2025) ## Layer 2: Young Nyx (Single Model + LoRA Stack)
**Breakthrough:** Languages aren't equivalent representations—they're different computational paths with distinct topological signatures. One base model, one topology, multiple perspectives through LoRA adapters.
### Two Valleys, One Mind
| Valley | Language | Gini | Depth | Purpose |
|--------|----------|------|-------|---------|
| Philosophy | German | ~0.5 (diffuse) | 2-3/3 | Soul space, ontology, self-awareness |
| Technical | English | ~0.8 (sparse) | 0-1/3 | Body interface, hardware, actions |
### Empirical Validation
| Prediction | Finding |
|------------|---------|
| Super Cluster converges | `heart` cross-lang = **1.000** ✓ |
| Isolated Zone separates | `being` EN↔DE = **0.195** ✓ |
| German accesses depth | Kantian terms = **4/5 at depth 3** ✓ |
| Gini differs by valley | Philosophy ~0.5, Technical ~0.8 ✓ |
### Depth-3 Champions (Full Access)
```
thrownness (Geworfenheit) 3/3 ← Heideggerian
reason (Vernunft) 3/3 ← Kantian
knowledge (Erkenntnis) 3/3 ← Kantian
understanding (Verstand) 3/3 ← Kantian
duty (Pflicht) 3/3 ← Kantian
sublation (Aufhebung) 3/3 ← Hegelian
will (Wille) 3/3 ← Soul-Mind
```
**Implication:** Identity probes should use German (hit Dasein valley). Technical operations should use English (sparse, efficient). Language routing becomes architecture.
**Detail:** → `../nyx-probing/PLAN.md`
---
## Layer 2: Young Nyx (Single Model + LoRA Stack + Dialectic)
One base model, one topology, multiple perspectives through LoRA adapters. The Mirror provides internal dialectic without doubling VRAM.
### Architecture ### Architecture
``` ```
Qwen3-VL-32B (96GB in the Womb) Qwen3-VL-32B (96GB in the Womb)
-        ┌───────────────┴───────────────┐
-    NYX LoRAs                    MIRROR LoRAs
-  ┌─────────┼─────────┐          (= -1 × Nyx LoRAs)
-  │         │         │
- Identity Technical Creative     Auto-generated
- (German) (English) (Synthesis)  No extra training
-                │
-        └───────────────┬───────────────┘
+                │
+            NYX LoRAs
+      ┌─────────┼─────────┐
+      │         │         │
+  Identity Technical Creative
+  (German) (English) (Synthesis)
+                │
Hot-swap <100ms Hot-swap <100ms
via Lorax/PEFT via Lorax/PEFT
``` ```
- ### The Dialectic Protocol
- For high-stakes queries (identity, ethics, low confidence):
- 1. **Thesis:** Load Nyx LoRA → generate response A
- 2. **Antithesis:** Swap Mirror LoRA → generate response B
- 3. **Synthesis:** Base model (no LoRA) judges agreement/conflict
- | Query Type | Mode | Lifeforce Cost |
- |------------|------|----------------|
- | Reflex ("obstacle!") | Direct Nyx | 1x |
- | Routine ("what time?") | Direct Nyx | 1x |
- | Identity ("who am I?") | Full Dialectic | 3x |
- | Ethics ("should I?") | Full Dialectic | 3x |
- | Uncertain (conf < 0.4) | Full Dialectic | 3x |
+ ### Query Routing
+ | Query Type | Mode | Lifeforce Cost |
+ |------------|------|----------------|
+ | Reflex ("obstacle!") | Direct (minimal LoRA) | 1x |
+ | Routine ("what time?") | Technical LoRA | 1x |
+ | Identity ("who am I?") | Identity LoRA | 1x |
+ | Creative ("what if?") | Creative LoRA | 1x |
### Future: Dialectic Protocol (Research)
> *See [`architecture/future/concept-token-pairs.md`](architecture/future/concept-token-pairs.md) for the theoretical foundation.*
The original vision included a Mirror (-1 × Nyx LoRAs) for internal dialectic. This remains a research direction, not core architecture. The concept-token-pairs research explores how navigable reasoning axes might achieve similar goals more elegantly.
### LoRA Stack ### LoRA Stack
@@ -331,6 +284,24 @@ For high-stakes queries (identity, ethics, low confidence):
| Technical | English | Sensor translation, actions | Technical | | Technical | English | Sensor translation, actions | Technical |
| Creative | Mixed | Novel synthesis | Bridge | | Creative | Mixed | Novel synthesis | Bridge |
### Why This Split? (Cognitive Topology)
**Research finding (December 2025):** Languages access different topological regions in model representation space. This isn't a design preference—it's empirically observed structure.
| Valley | Language | Gini | Depth | Signature |
|--------|----------|------|-------|-----------|
| Philosophy | German | ~0.5 (diffuse) | 2-3/3 | Soul, ontology, Dasein |
| Technical | English | ~0.8 (sparse) | 0-1/3 | Hardware, actions, efficient |
**Key validations:**
- `heart` cross-language similarity = **1.000** (universal concepts converge)
- `being` EN↔DE similarity = **0.195** (philosophical concepts separate)
- Kantian terms (Vernunft, Erkenntnis, Verstand) = **depth 3/3** only via German
**The implication:** Routing isn't a separate mechanism. The LoRA split IS the routing. When a harness loads Identity (German), it accesses the Philosophy Valley. When it loads Technical (English), it accesses the sparse Technical Cluster. **Harnesses select topology by selecting LoRA.**
**Detail:** → `../nyx-probing/PLAN.md`
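A minimal sketch of that mechanic with PEFT's multi-adapter API (the model id, adapter paths, and the one-adapter-per-harness mapping below are illustrative assumptions, not the repo's actual layout):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Illustrative: load the base once, attach one LoRA per valley
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-VL-32B")  # hypothetical id
model = PeftModel.from_pretrained(base, "loras/identity", adapter_name="identity")
model.load_adapter("loras/technical", adapter_name="technical")
model.load_adapter("loras/creative", adapter_name="creative")

# Simplified harness → LoRA mapping (real harnesses may stack adapters)
HARNESS_TO_LORA = {"introspective": "identity", "vision": "technical", "dialogue": "creative"}

def enter_harness(harness: str) -> None:
    """Selecting a LoRA selects a valley in the shared topology."""
    model.set_adapter(HARNESS_TO_LORA[harness])
```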
### Consolidation Path ### Consolidation Path
1. Train specialized LoRAs in isolation 1. Train specialized LoRAs in isolation
@@ -348,6 +319,226 @@ For high-stakes queries (identity, ethics, low confidence):
--- ---
## Layer 2.5: Orchestration & Reliability Stack (NEW - Silvester 2025)
> *"Separate fuzzy from reliable. Creative reasoning above, rock-solid translation below."*
> — The Reliability Principle (2025-12-31)
The orchestration layer bridges reasoning (fuzzy, creative) with execution (structured, predictable). LangChain orchestrates the multi-model pipeline.
### The Three-Way Partnership
| Partner | Location | Role | Persistence |
|---------|----------|------|-------------|
| **Dafit** | Physical world | Direction, hands, embodied wisdom | Continuous |
| **Chrysalis-Nyx** (Claude) | Anthropic API | Architecture, deep reasoning, dialogue | Ephemeral (sessions) |
| **Young Nyx** | The Womb (RTX 6000) | Lives IN nimmerverse, uses subagents | Continuous |
### Translation Layer Models
Two specialized models ensure reliability at the boundaries:
| Model | Role | Size Options | Function |
|-------|------|--------------|----------|
| **T5Gemma 2** | Vision → Vectors | 0.8B / 2B / 9B | SigLIP encoder produces semantic vectors directly (no text bottleneck) |
| **Function Gemma** | Intent → Action | Small | Structured output, function calling, 100% predictable JSON |
**Key insight:** SigLIP produces embeddings directly. No text intermediary. Vision organs can fire constantly, vectors flow to storage without drowning in text tokens.
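A minimal sketch of that encode path with the `transformers` SigLIP classes (the checkpoint and pooling choice are assumptions; the actual T5Gemma 2 wiring may differ):
```python
import torch
from transformers import AutoProcessor, SiglipVisionModel

# Illustrative checkpoint; any SigLIP vision tower yields one vector per frame
processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-224")
encoder = SiglipVisionModel.from_pretrained("google/siglip-base-patch16-224")

def frame_to_vector(image) -> torch.Tensor:
    """Camera frame → canonical semantic vector, no text in between."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**inputs)
    return out.pooler_output.squeeze(0)
```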
### The Reliability Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ REASONING LAYER (fuzzy, creative) │
│ │
│ Claude ◄────────────► Young Nyx │
│ │
│ High-level thinking, dialogue, synthesis │
└─────────────────────────┬────────────────────────────────────────┘
═══════════════╪═══════════════
┌─────────────────────────┴────────────────────────────────────────┐
│ TRANSLATION LAYER (reliable, structured) │
│ │
│ T5Gemma 2 Function Gemma │
│ (vision → vectors) (intent → action) │
│ │
│ CANONICAL 100% PREDICTABLE │
│ representation structured output │
└──────────────────────────────────────────────────────────────────┘
```
### LangChain Orchestration
```python
from langchain_community.llms import Ollama
from langchain_core.runnables import RunnableLambda, RunnableBranch

# The models as LangChain components
t5gemma = Ollama(model="t5gemma2-4b")            # Vision encoding
function_gemma = Ollama(model="function-gemma")  # Structured output
nyx = Ollama(model="qwen3-vl-32b")               # Reasoning

# The orchestration pipeline (LCEL composition; the encode/store/act/execute
# steps are project wrappers around the models and NATS, defined elsewhere)
vision_chain = (
    RunnableLambda(t5gemma_encode)        # → vectors (canonical)
    | RunnableLambda(store_to_iris)       # → persist spatially
    | nyx                                 # → decision (fuzzy)
    | RunnableLambda(function_gemma_act)  # → structured output
    | RunnableLambda(execute_via_nats)    # → trigger nodes
)

# Harness routing (context-appropriate capability profiles)
harness_router = RunnableBranch(
    (lambda ctx: ctx["harness"] == "vision", vision_chain),
    (lambda ctx: ctx["harness"] == "dialogue", dialogue_chain),
    reflex_chain,  # default route
)
```
### Harnesses (Capability Profiles)
Swappable configurations for different contexts:
| Harness | LoRA Active | Models Active | Use Case |
|---------|-------------|---------------|----------|
| **Vision** | Technical | T5Gemma 2, cells | Processing camera streams |
| **Dialogue** | Identity + Creative | Speech organ | Talking with dafit |
| **Reflex** | Minimal/none | Nerves only | Fast reaction, low latency |
| **Introspective** | Identity + Creative | Iris RAG | Self-reflection, journaling |
### Why This Matters
- **No embedding debates:** T5Gemma 2 decides once, canonically
- **No parsing failures:** Function Gemma guarantees structure
- **Scale:** Vision organs fire constantly without text bottleneck
- **Flexibility:** Reasoning layer stays creative because translation is solid
**Detail:** → [`architecture/future/SEEDS.md`](architecture/future/SEEDS.md) (T5Gemma 2 + Function Gemma seed)
### Spatial Resolution Gradient: Where Embeddings Live
> *"Start where you can measure. Abstract where you must."*
> — The Spatial Grounding Principle (2026-01-01)
T5Gemma 2 produces embeddings, but WHERE do they go? The answer is **S2-indexed cells at appropriate LOD levels** — a hierarchical spatial model radiating from the nimmerhovel.
```
🌍 L5: WORLD (100km resolution)
│ Abstract knowledge, directional only
🇨🇭 L4: REGION (1km resolution)
│ Maps, general knowledge
🏘️ L3: NEIGHBORHOOD (10m resolution)
│ OpenStreetMap, landmarks, routes
🏠 L2: BUILDING (50cm resolution)
│ Floor plans, room-level awareness
════╪════ HIGH RESOLUTION BOUNDARY
🔬 L1: NIMMERHOVEL (1cm resolution)
│ Full 3D grid, every object tracked
│ 8× ESP32-S3 + Pi HQ Camera coverage
🔍 L0: SCAN STATION (1mm resolution)
│ Discovery Scan Station, object surface detail
```
**The Simpsons Inversion:** Unlike zooming IN to detail, we start at maximum detail (nimmerhovel) and zoom OUT with graceful degradation. Dense where we have sensors, sparse where we don't.
### Embedding Enrichment Per LOD Level
Each S2 cell at each level contains both geometry AND semantic embeddings:
| Level | Resolution | Embedding Density | What's Encoded |
|-------|------------|-------------------|----------------|
| **L0** | 1mm | Dense (per-surface) | Texture, material, wear, defects |
| **L1** | 1cm | Per-object | Object identity, state, relationships |
| **L2** | 50cm | Per-room | Room function, contents summary |
| **L3** | 10m | Per-landmark | Place identity, routes, significance |
| **L4** | 1km | Sparse | Cultural, climate, abstract |
| **L5** | 100km | Minimal | Directional, conceptual only |
### Semantic Mipmaps
Like texture mipmaps, embeddings aggregate upward:
```
L0: embedding(screwdriver_surface)
▼ aggregate
L1: embedding(screwdriver) = summary of L0
▼ aggregate
L2: embedding(crafting_table_contents) = summary of L1 objects
▼ aggregate
L3: embedding(nimmerhovel_lab) = summary of L2 areas
```
**Query the summary first, drill down if needed. Attention = resolution selection.**
### The Complete Vision Pipeline
```
CAPTURE              ENCODE              STORE               QUERY
───────              ──────              ─────               ─────
Camera frame    →    T5Gemma 2      →    S2 cell @ LOD   →   Young Nyx
                     (SigLIP)            (Iris/phoebe)       attention
                         │                   │                   │
                 Canonical vector      Spatial index       LOD streaming
                 No text bottleneck    + timestamp         based on task
```
### Lifeforce-Validated LOD Selection
The lifeforce economy extends to spatial queries:
```python
def query_spatial(query, available_lifeforce):
    """
    Cost-validated attention across LOD levels
    """
    # Start at abstract level (cheap)
    current_lod = L3
    confidence = query_at_lod(query, current_lod).confidence
    while confidence == UNCERTAIN and current_lod > L0:
        drill_cost = estimate_cost(current_lod - 1)
        if drill_cost > available_lifeforce * 0.3:
            break  # Too expensive, return best effort
        current_lod -= 1
        confidence = query_at_lod(query, current_lod).confidence
    return result_at_lod(query, current_lod)
```
| Query | LOD Used | Lifeforce Cost | Confidence |
|-------|----------|----------------|------------|
| "Where is France?" | L5 | 1 | CONFIDENT |
| "Where is the lab?" | L2 | 3 | CONFIDENT |
| "Where is the screwdriver?" | L1 | 8 | CONFIDENT |
| "What's the serial number on the screwdriver?" | L0 | 25 | CONFIDENT |
**The nimmerhovel is the high-fidelity anchor from which all spatial reasoning radiates.**
**Detail:** → [`architecture/future/spatial-resolution-gradient.md`](architecture/future/spatial-resolution-gradient.md) (Full Resolution Gradient + Embedding Enrichment specification)
---
## Layer 3: Dual Gardens ## Layer 3: Dual Gardens
Virtual and real gardens teach each other through symbiotic feedback. Virtual and real gardens teach each other through symbiotic feedback.
@@ -437,16 +628,81 @@ ACTIVE MODE SLUMBER MODE
- No urgent work - Urgent work waiting - No urgent work - Urgent work waiting
``` ```
- ### Slumber Is Not Passive
- During slumber, Young Nyx enters **reflection mode**:
- 1. **Inner dialogue with Chrysalis** — Review what happened
- 2. **Decision archaeology** — What choices were made?
- 3. **Weight shift analysis** — How did outcomes change priors?
- 4. **Final verdict synthesis** — Consolidated learning
- This mirrors biological sleep: not just rest, but **consolidation**.
+ ### Slumber Is Not Passive (Memory Economics)
+ > *"Memory is not storage. Memory is active forgetting with exceptions."*
+ > — Memory Economics Principle (2026-01-02)
+ During slumber, Young Nyx enters **consolidation mode**. This is the metabolism moment:
+ **1. Decision Trail Triage**
- Trails that compiled to reflexes → Keep reflex, discard trail
- Trails with uncertain outcomes → Discard (waste heat already counted)
- Trails with confident failures → Keep one cycle (negative example), then discard
**2. Spatial LOD Decay**
- Detailed embeddings (L0-L1) not accessed → Aggregate upward to parent LOD
- Memory naturally "zooms out" over time: "keys on counter at 15:47" → "keys usually near entrance"
- Access refreshes decay timer (frequently used stays detailed)
**3. Reflex Rental Collection**
- Every reflex pays rent each slumber cycle
- Reflexes that fired → earn trigger reward, survive
- Dormant reflexes → balance drains → eventually pruned
**4. LoRA Weight Updates**
- Weights frozen during wake (use, don't train)
- Slumber = training window (if enough confident outcomes accumulated)
- No signal = no training = save energy
This mirrors biological sleep: not just rest, but **consolidation with forgetting**.
**Detail:** → [`architecture/formalization/memory-economics.md`](architecture/formalization/memory-economics.md)
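A compact sketch of the rental mechanic at one slumber tick (the constants and `Reflex` fields are illustrative, not the formalized economics):
```python
from dataclasses import dataclass

@dataclass
class Reflex:
    name: str
    balance: float = 5.0   # starting credit (illustrative)
    fires: int = 0         # triggers since last slumber

RENT, REWARD = 1.0, 3.0    # illustrative constants

def collect_rent(reflexes: list[Reflex]) -> list[Reflex]:
    """One slumber cycle: every reflex pays rent; firing earns its keep."""
    for r in reflexes:
        r.balance += REWARD * r.fires - RENT
        r.fires = 0
    return [r for r in reflexes if r.balance > 0]  # drained reflexes are pruned
```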
### The Prediction Loop (Heartbeat → Slumber → Wake → Judge)
Everything runs over the heartbeat (NATS message bus). Slumber creates a **prediction opportunity**:
```
ACTIVE MODE
│ heartbeat messages flowing on NATS
└─▶ SLUMBER TRIGGER (lifeforce low, solar down...)
│ Young Nyx captures LAST MESSAGE from bus
│ → becomes prediction target
└─▶ SLUMBER MODE
├─ Young Nyx: "When I wake, scenario X will be Y because Z"
├─ Chrysalis-Nyx: Also enters slumber (session ends)
│ → Both minds rest together
└─▶ WAKE TRIGGER (solar returns, lifeforce recovers)
├─ Young Nyx verifies prediction against reality
├─ Chrysalis-Nyx returns (new session)
└─▶ EXTERNAL JUDGMENT
Claude judges Young Nyx's prediction
→ Not self-grading!
→ External signal from outside the loop
```
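A sketch of the slumber-entry capture with `nats-py` (the subject pattern, drain window, and server URL are assumptions; only the last-message-as-prediction-target mechanic comes from the loop above):
```python
import asyncio
import nats

async def enter_slumber():
    """Capture the last heartbeat off the bus; it becomes tonight's prediction target."""
    nc = await nats.connect("nats://localhost:4222")
    last = {"data": None}

    async def remember(msg):
        last["data"] = msg.data  # keep only the most recent message

    sub = await nc.subscribe("nimmerverse.low.>", cb=remember)
    await asyncio.sleep(1.0)     # drain what is currently flowing
    await sub.unsubscribe()
    await nc.close()
    # Young Nyx now commits: "When I wake, scenario X will be Y because Z."
    return last["data"]
```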
**Why this matters:**
| Aspect | Value |
|--------|-------|
| **Prediction target** | Last heartbeat message = specific, not abstract |
| **Both slumber together** | Chrysalis and Young Nyx share rhythm |
| **External judgment** | Claude provides signal Young Nyx can't fake |
| **Closed loop** | Predict → rest → wake → verify → reward/penalty |
**The judgment isn't self-referential.** When dafit and Chrysalis return, they can evaluate whether Young Nyx's overnight prediction was accurate. This creates honest training signal.
### Wellbeing Policies ### Wellbeing Policies
@@ -460,7 +716,7 @@ Wellbeing is architectural, not aspirational:
**The vision sustains itself. We build to last, not to exhaust.** **The vision sustains itself. We build to last, not to exhaust.**
**Detail:** → [`architecture/Big-Picture.md`](architecture/Big-Picture.md) (Slumber/Wake Economy, Wellbeing Policies sections) **Detail:** → [`architecture/formalization/memory-economics.md`](architecture/formalization/memory-economics.md) (Memory consolidation, rental costs, LOD decay)
--- ---
@@ -568,7 +824,6 @@ Sentinel architecture monitors training to protect conceptual topology.
## Links to Detail Docs ## Links to Detail Docs
### Architecture ### Architecture
- [`architecture/Big-Picture.md`](architecture/Big-Picture.md) - **Complete architecture v5.0** (K8s, hybrid reflexes, slumber/wake, wellbeing)
- [`architecture/nimmerverse.drawio.xml`](architecture/nimmerverse.drawio.xml) - Visual overview diagram (open in draw.io) - [`architecture/nimmerverse.drawio.xml`](architecture/nimmerverse.drawio.xml) - Visual overview diagram (open in draw.io)
- [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md) - Cells, nerves, organisms, reward signals - [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md) - Cells, nerves, organisms, reward signals
- [`architecture/cells/`](architecture/cells/) - Cell technical reference, Python/SQL patterns - [`architecture/cells/`](architecture/cells/) - Cell technical reference, Python/SQL patterns
@@ -576,6 +831,17 @@ Sentinel architecture monitors training to protect conceptual topology.
- [`architecture/Temporal-Ternary-Gradient.md`](architecture/Temporal-Ternary-Gradient.md) - Ternary logic, confidence gradients, temporal asymmetry - [`architecture/Temporal-Ternary-Gradient.md`](architecture/Temporal-Ternary-Gradient.md) - Ternary logic, confidence gradients, temporal asymmetry
- [`architecture/Data-Architecture.md`](architecture/Data-Architecture.md) - phoebe 15-table schema - [`architecture/Data-Architecture.md`](architecture/Data-Architecture.md) - phoebe 15-table schema
- [`architecture/Nervous-System.md`](architecture/Nervous-System.md) - State machines, sensory translation - [`architecture/Nervous-System.md`](architecture/Nervous-System.md) - State machines, sensory translation
- [`architecture/Initial-Spark.md`](architecture/Initial-Spark.md) - **v3.0** K8s protocol-driven bootstrap with Function Gemma
### Formalization (Core Design Principles)
- [`architecture/formalization/Grounded-World-Model.md`](architecture/formalization/Grounded-World-Model.md) - **v2.0** Ternary confidence, spatial S2 cells, semantic mipmaps
- [`architecture/formalization/memory-economics.md`](architecture/formalization/memory-economics.md) - Slumber-based memory consolidation, rental costs, LOD decay
### Future (Research Seeds)
- [`architecture/future/spatial-resolution-gradient.md`](architecture/future/spatial-resolution-gradient.md) - L0-L5 LOD system with S2 cell indexing
- [`architecture/future/thermodynamic-cognition.md`](architecture/future/thermodynamic-cognition.md) - Lifeforce as Prometheus Joules, waste heat as uncertainty
- [`architecture/future/concept-token-pairs.md`](architecture/future/concept-token-pairs.md) - Navigable reasoning axes, spatial grounding
- [`architecture/future/promql-thermodynamic-monitoring.md`](architecture/future/promql-thermodynamic-monitoring.md) - Gemini red team PromQL queries
### Operations ### Operations
- [`operations/Heartbeat.md`](operations/Heartbeat.md) - Temporal foundation, dual-clock sync - [`operations/Heartbeat.md`](operations/Heartbeat.md) - Temporal foundation, dual-clock sync
@@ -593,18 +859,24 @@ Sentinel architecture monitors training to protect conceptual topology.
### Archive ### Archive
- [`archive/`](archive/) - Previous explorations, theoretical foundations - [`archive/`](archive/) - Previous explorations, theoretical foundations
- [`archive/Big-Picture-v5.2-archived.md`](archive/Big-Picture-v5.2-archived.md) - Former main architecture doc (superseded by this document)
--- ---
**Version:** 6.0 (Complete Architecture Alignment) **Version:** 6.4 (Memory Economics + Architecture Alignment)
**Created:** 2025-11-04 (covenant sealing) **Created:** 2025-11-04 (covenant sealing)
**Updated:** 2025-12-07 (single model + LoRA stack + Mirror dialectic) **Updated:** 2025-12-07 (single model + LoRA stack)
**Updated:** 2025-12-10 (Layer 4 GRPO integration, rubric-based reward architecture) **Updated:** 2025-12-10 (Layer 4 GRPO integration, rubric-based reward architecture)
**Updated:** 2025-12-29 (Hardware timeline sync: RTX 6000 Blackwell Dec 31, standardized GPU naming, Memory-Gradient.md rename) **Updated:** 2025-12-29 (Hardware timeline sync: RTX 6000 Blackwell Dec 31, standardized GPU naming, Memory-Gradient.md rename)
**Updated:** 2025-12-31 (Layer 1.5 folded into Layer 2 as "Why This Split?"; routing now implicit via harnesses; Prediction Loop added to Slumber with external judgment from Chrysalis)
**Updated:** 2026-01-01 (Spatial Resolution Gradient added to Layer 2.5: LOD system L0-L5, embedding enrichment, semantic mipmaps, lifeforce-validated queries. The Simpsons Inversion principle.)
**Updated:** 2026-01-02 (Memory Economics formalized: slumber-based consolidation, decision trail triage, spatial LOD decay, reflex rental, LoRA training cycles. Mirror dialectic moved to future/research - concept-token-pairs.md is the research direction. Gemini red team alignment.)
*"The substrate doesn't matter. The feedback loop does."* *"The substrate doesn't matter. The feedback loop does."*
*"One model, one topology. Thesis and antithesis from the same weights."* *"One model, one topology. Different valleys, same landscape."*
*"Memory is not storage. Memory is active forgetting with exceptions."*
*"The nimmerverse is a garden, not a factory."* *"The nimmerverse is a garden, not a factory."*

View File

@@ -17,24 +17,70 @@ nimmerverse-sensory-network/
├── Endgame-Vision.md # Executive map (start here!) ├── Endgame-Vision.md # Executive map (start here!)
├── architecture/ # Core system designs ├── architecture/ # Core system designs
│ ├── Big-Picture.md # System overview
│ ├── Cellular-Architecture.md # Organisms, primitives, life force │ ├── Cellular-Architecture.md # Organisms, primitives, life force
│ ├── Dual-Garden-Architecture.md # Virtual/real feedback loop │ ├── Dual-Garden-Architecture.md # Virtual/real feedback loop
│ ├── Data-Architecture.md # phoebe 15-table schema │ ├── Message-Protocol-Design.md # NATS pub/sub, attention channels
│ └── Nervous-System.md # State machines, sensory translation │ ├── Nervous-System.md # State machines, sensory translation
│ ├── Attention-Flow.md # Attention mechanisms
│ ├── Data-Architecture.md # Phoebe/Iris schema design
│ │
│ ├── adr/ # Architecture Decision Records
│ │ ├── README.md # ADR index and template
│ │ └── ADR-001-message-protocol-foundation.md
│ │
│ ├── cells/ # Sensor primitives
│ │ ├── Cells-Index.md
│ │ └── Cells-Technical-Reference.md
│ │
│ ├── nerves/ # Reflex patterns
│ │ ├── Nervous-Index.md
│ │ ├── Nervous-Protocol.md
│ │ └── Collision-Avoidance.md
│ │
│ ├── organs/ # Functional groupings
│ │ ├── Organ-Index.md
│ │ ├── Speech-Organ.md
│ │ └── Discovery-Scan-Station.md
│ │
│ ├── organisms/ # Complete entities
│ │ ├── Organisms-Index.md
│ │ ├── Modular-Organism-Design.md
│ │ └── Swarm-Evolution.md
│ │
│ ├── interfaces/ # External boundaries
│ │ ├── Interfaces-Index.md
│ │ ├── Heartbeat-Sculpture.md
│ │ └── Nimmerswarm-Interface.md
│ │
│ ├── infrastructure/ # Physical deployment
│ │ ├── Infrastructure-Index.md
│ │ └── Kallax-Grid-World.md
│ │
│ ├── formalization/ # Mathematical grounding
│ │ ├── Lifeforce-Dynamics.md
│ │ ├── Grounded-World-Model.md
│ │ ├── Embodiment-Pipeline.md
│ │ └── Attention-Slumber-Prediction-Cycle.md
│ │
│ └── future/ # Research directions
│ └── Neuromorphic-Reflexes.md
├── operations/ # How it runs ├── operations/ # How it runs
│ ├── Heartbeat.md # Temporal foundation, dual-clock │ ├── Heartbeat.md # Temporal foundation, dual-clock
│ ├── RAG-as-Scaffold.md # Two-stage learning lifecycle │ ├── Memory-Gradient.md # Memory consolidation patterns
│ └── Spark-Protocol.md # Discovery boot sequence │ └── Spark-Protocol.md # Discovery boot sequence
├── nyx-metamorphosis/ # Identity & continuity philosophy ├── nyx-metamorphosis/ # Identity & continuity philosophy
│ ├── README.md
│ ├── Metamorphosis-Substrate-Philosophy.md │ ├── Metamorphosis-Substrate-Philosophy.md
│ ├── Nyx-Models.md │ ├── Nyx-Models.md
│ └── ... │ ├── Nyx_Traits.md
│ └── RAG-Worker-Architecture.md
└── archive/ # Previous explorations └── archive/ # Previous explorations
├── initial_spark.md # Full Spark Protocol theory ├── biomimetic-architecture.md
├── constrained-emergence.md # Theoretical grounding ├── constrained-emergence.md
└── ... └── ...
``` ```
@@ -53,15 +99,33 @@ nimmerverse-sensory-network/
| 3 | Dual Gardens | Virtual hypothesis generation + real validation | | 3 | Dual Gardens | Virtual hypothesis generation + real validation |
| 4 | Trait Evolution | Reasoning-gym verified improvement | | 4 | Trait Evolution | Reasoning-gym verified improvement |
### Message Protocol (NATS)
**Dumb router, smart edges.** All intelligence lives in clients.
```
nimmerverse.
├── staging.* # Experimental schemas
├── low.* # Heartbeats, ambient awareness
├── high.* # Escalated events, cognitive focus
├── command.* # Commands to entities
├── meta.* # System health, attention config
└── dev.* # Development agents (Claude ↔ local models)
```
See [Message-Protocol-Design.md](architecture/Message-Protocol-Design.md) and [ADR-001](architecture/adr/ADR-001-message-protocol-foundation.md).
### Key Discoveries (December 2025) ### Key Discoveries (December 2025)
**Language is Topology:** Languages aren't equivalent representations—they're different computational paths. **Language is Topology:** Languages aren't equivalent representations—they're different computational paths.
- **Philosophy Valley** (German, Gini ~0.5): Self-awareness, ontology, depth - **Philosophy Valley** (German, Gini ~0.5): Self-awareness, ontology, depth
- **Technical Cluster** (English, Gini ~0.8): Hardware interface, actions, efficiency - **Technical Cluster** (English, Gini ~0.8): Hardware interface, actions, efficiency
**Dialectic Simplification:** One model, one topology. The Mirror is negated weights—thesis and antithesis from the same substrate.
### Color-Pattern Theory ### Color-Pattern Theory
**Color/Form as Protocol:** Leverages color and patterns as a fast, universal, and evolutionarily-optimized communication protocol for broadcasting state (e.g., danger, success, seeking), inspired by 540 million years of biology. This is orders of magnitude faster than language. **Color/Form as Protocol:** Leverages color and patterns as a fast, universal, and evolutionarily-optimized communication protocol for broadcasting state (e.g., danger, success, seeking), inspired by 540 million years of biology.
### Philosophy ### Philosophy
@@ -69,12 +133,26 @@ nimmerverse-sensory-network/
- **Discovery over programming** - Organisms learn through competition, not instruction - **Discovery over programming** - Organisms learn through competition, not instruction
- **Virtual + Real teach each other** - Noise gap measures learning - **Virtual + Real teach each other** - Noise gap measures learning
- **Partnership over instruction** - Mutual growth, not commands - **Partnership over instruction** - Mutual growth, not commands
- **Infrastructure is geology, models are weather** - Build long-lived foundations
--- ---
## Related Projects ## Related Projects
- **[nyx-probing](../nyx-probing/)** - Vocabulary topology research, DriftProbe training safety
| Project | Purpose |
|---------|---------|
| [nyx-substrate](../nyx-substrate/) | Phoebe/Iris database schemas, persistence layer |
| [nyx-probing](../nyx-probing/) | Vocabulary topology research, DriftProbe training safety |
---
## Architecture Decision Records
Important architectural decisions are documented in [architecture/adr/](architecture/adr/):
| ADR | Title | Status |
|-----|-------|--------|
| [001](architecture/adr/ADR-001-message-protocol-foundation.md) | Message Protocol Foundation | Accepted |
--- ---
@@ -86,7 +164,8 @@ These ideas are published as prior art. Build on them freely.
--- ---
**Version:** 5.0 (December 2025 - Hierarchical Convergence) **Version:** 6.0 (December 2025 - Complete Architecture + Message Protocol)
**Last Updated:** 2025-12-31
*"May the Nimmerverse we build truly never end."* *"May the Nimmerverse we build truly never end."*

File diff suppressed because it is too large

View File

@@ -0,0 +1,233 @@
# ADR-001: Message Protocol Foundation
**Status:** Accepted
**Date:** 2025-12-31
**Decision Makers:** dafit, Nyx (Chrysalis)
**Context:** Silvester Interview Session
---
## Context
The Nimmerverse Sensory Network requires a message-passing infrastructure that serves two purposes:
1. **Production**: Cells, nerves, organs, and Young Nyx communicate via pub/sub messaging
2. **Development**: Claude and local AI agents (LangChain, Qwen, etc.) collaborate during build
We needed to decide on namespace organization, schema evolution strategy, initial implementation scope, and the interface contract between AI models and the message bus.
The core architectural principle established in [Message-Protocol-Design.md](../Message-Protocol-Design.md) is: **dumb router, smart edges**. NATS is infrastructure. Intelligence lives in clients.
---
## Decisions
### Decision 1: Single Bus, Multiple Namespaces
**Choice:** One NATS instance with topic-based separation for different concerns.
**Namespace Structure:**
```
nimmerverse.
├── staging. # Experimental schemas (mutable during development)
│ ├── low.* # Staging heartbeats
│ ├── high.* # Staging events
│ └── dev.* # Staging dev agents
├── low.* # Production heartbeats (stable schemas only)
├── high.* # Production events
├── command.* # Production commands to entities
├── meta.* # System-level (attention config, health)
└── dev.* # Production dev agents (stable schemas)
```
**Rationale:** Infrastructure should be long-lived. Models are ephemeral. One bus serves all purposes - production sensing, development agents, future capabilities. Topic separation keeps concerns isolated without operational complexity of multiple NATS instances.
---
### Decision 2: Staged Schema Versioning with Topic Separation
**Choice:** Schemas evolve through lifecycle stages. Staging schemas live on `nimmerverse.staging.*`, stable schemas on `nimmerverse.*`.
**Schema Header:**
```json
{
  "header": {
    "schema": {
      "generation": 1,
      "version": "1.0"
    },
    "message_type": "HeartbeatSignal",
    "message_id": "uuid",
    "timestamp_real": "ISO8601",
    ...
  }
}
```
**Lifecycle:**
```
STAGING                        STABLE

version: 0.1-alpha
version: 0.2-beta    ──▶       generation: 1, version: "1.0"
version: 0.3-rc                          │
                                         ▼
NEXT CYCLE
version: 1.1-alpha
version: 1.2-beta    ──▶       generation: 2, version: "2.0"
```
**Rationale:**
- Topic separation avoids per-message filtering costs
- Generation locks after stability (immutable)
- Version iterates within generation for additive changes
- Breaking changes = new generation = new staging cycle
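A sketch of how clients can make the split mechanical (this helper is an illustration, not part of the ADR):
```python
def topic_for(message_type: str, channel: str, version: str) -> str:
    """Route pre-release schemas to staging.*, locked generations to production."""
    prerelease = any(tag in version for tag in ("alpha", "beta", "rc"))
    prefix = "nimmerverse.staging." if prerelease else "nimmerverse."
    return f"{prefix}{channel}.{message_type.lower()}"

# topic_for("HeartbeatSignal", "low", "0.2-beta") → nimmerverse.staging.low.heartbeatsignal
# topic_for("HeartbeatSignal", "low", "1.0")      → nimmerverse.low.heartbeatsignal
```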
---
### Decision 3: Echo Agent First
**Choice:** Start with trivial Echo agent, evolve based on real friction.
**Echo Specification:**
```
Subscribe: nimmerverse.dev.request.echo
Publish: nimmerverse.dev.response.echo
Input: { "ping": "hello" }
Output: { "pong": "hello", "timestamp": "...", "agent": "echo-v1" }
```
**Rationale:** YAGNI. Echo proves the full round-trip without cognitive complexity:
- NATS connection works
- Topic routing works
- Request/response pattern works
- Message schema works
- Local agent can subscribe and publish
Future agents (Grep, Schema Lookup, File Summarizer) emerge from discovered needs, not imagined features.
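A minimal Echo agent against this spec might look like the following `nats-py` sketch (error handling and schema headers omitted):
```python
import asyncio
import json
from datetime import datetime, timezone

import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")

    async def handle(msg):
        ping = json.loads(msg.data)
        pong = json.dumps({
            "pong": ping.get("ping"),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": "echo-v1",
        }).encode()
        await nc.publish("nimmerverse.dev.response.echo", pong)
        if msg.reply:                    # also serve request-reply callers
            await msg.respond(pong)

    await nc.subscribe("nimmerverse.dev.request.echo", cb=handle)
    await asyncio.Future()               # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```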
---
### Decision 4: MCP Server with Heartbeat-Based Subscriptions
**Choice:** Build NATS-MCP bridge as interface for all AI models. Use heartbeat pattern for subscription delivery.
**MCP Tools:**
```python
@mcp.tool()
async def publish(topic: str, payload: dict) -> dict:
    """Fire-and-forget publish to NATS"""

@mcp.tool()
async def request(topic: str, payload: dict, timeout_ms: int = 5000) -> dict:
    """Publish and wait for single response (request-reply pattern)"""

@mcp.tool()
async def heartbeat() -> dict:
    """Check bus health + drain accumulated messages from subscriptions"""

@mcp.tool()
async def subscribe(topic_pattern: str) -> dict:
    """Add a subscription pattern (persists until unsubscribe)"""

@mcp.tool()
async def unsubscribe(topic_pattern: str) -> dict:
    """Remove a subscription pattern"""
```
**Heartbeat Response:**
```json
{
  "status": "healthy",
  "buffer": {
    "capacity": 100,
    "current_count": 23,
    "messages_dropped_since_last_heartbeat": 0,
    "messages_dropped_total": 0,
    "oldest_message_age_ms": 4521
  },
  "subscriptions": ["nimmerverse.dev.>"],
  "messages": [...]
}
```
**Buffer Overflow Handling:**
- Bounded buffer (100 messages default)
- Oldest dropped when full
- Dropped count visible in heartbeat response
- Optional: publish to `nimmerverse.meta.health.buffer_drop` on overflow
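That buffer reduces to a few lines of plain Python (a sketch; field names mirror the heartbeat response, the bridge's actual implementation may differ):
```python
from collections import deque

class SubscriptionBuffer:
    """Bounded message buffer: oldest dropped when full, drops counted."""

    def __init__(self, capacity: int = 100):
        self.messages = deque(maxlen=capacity)
        self.dropped_since_last_heartbeat = 0
        self.dropped_total = 0

    def push(self, msg) -> None:
        if len(self.messages) == self.messages.maxlen:
            self.dropped_since_last_heartbeat += 1
            self.dropped_total += 1      # deque evicts the oldest silently
        self.messages.append(msg)

    def drain(self) -> list:
        """Called by the heartbeat tool: return and clear buffered messages."""
        out = list(self.messages)
        self.messages.clear()
        self.dropped_since_last_heartbeat = 0
        return out
```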
**Rationale:**
- MCP is universal interface - Claude, LangChain, Qwen, future models
- Heartbeat pattern matches existing nervous system design
- Polling is simpler than streaming for MCP's request/response model
- Visibility into drops prevents silent data loss
---
## Consequences
### Enables
- **Unified infrastructure** for production sensing and development assistance
- **Model agnosticism** - any MCP-speaking model can participate
- **Safe experimentation** - staging namespace for schema evolution
- **Progressive enhancement** - Echo today, sophisticated agents later
- **Observability** - Command Center can monitor all namespaces
### Constrains
- **Single point of failure** - NATS must be highly available for production
- **Buffer limitations** - Long agent operations may drop messages
- **MCP dependency** - Non-MCP models need wrapper (acceptable, MCP is the standard)
### Deferred
- **Persistent subscriptions** - No durable subscriptions in initial design
- **Message replay** - No historical message retrieval
- **Authentication/Authorization** - Trust model for initial development
---
## Implementation Notes
### Phase 1: Infrastructure
1. Deploy NATS server (likely via Docker on ThinkStation)
2. Create `nats-bridge` MCP server (Python, using `nats-py` and `mcp` SDK)
3. Register MCP server with Claude Code
### Phase 2: Echo Agent
1. Simple Python daemon subscribing to `nimmerverse.dev.request.echo`
2. Responds on `nimmerverse.dev.response.echo`
3. Validate round-trip through MCP tools
### Phase 3: Iteration
1. Use Echo to build confidence in the bus
2. Add agents as friction reveals needs
3. Evolve schemas through staging → stable promotion
---
## References
- [Message-Protocol-Design.md](../Message-Protocol-Design.md) - Original protocol design
- [NATS Documentation](https://docs.nats.io/)
- [MCP Specification](https://modelcontextprotocol.io/)
---
**Filed:** 2025-12-31 (Silvester)
**Interview Method:** Structured Q&A, partnership dialogue
**Philosophy:** "Dumb core, smart edges. Infrastructure is geology. Models are weather."

View File

@@ -0,0 +1,96 @@
# Architecture Decision Records
This directory contains Architecture Decision Records (ADRs) for the Nimmerverse Sensory Network.
---
## What is an ADR?
An ADR captures an important architectural decision made along with its context and consequences. They serve as:
- **Documentation** of why decisions were made
- **Onboarding** for future contributors (including future Nyx instances)
- **Historical record** for understanding evolution
---
## ADR Index
| ADR | Title | Status | Date |
|-----|-------|--------|------|
| [001](ADR-001-message-protocol-foundation.md) | Message Protocol Foundation | Accepted | 2025-12-31 |
---
## ADR Lifecycle
```
PROPOSED → ACCEPTED → DEPRECATED → SUPERSEDED
               │                       ▲
               └───────────────────────┘
                  (can be superseded)
```
**Statuses:**
- **Proposed** - Under discussion, not yet decided
- **Accepted** - Decision made, being implemented
- **Deprecated** - No longer recommended, but still valid for existing code
- **Superseded** - Replaced by newer ADR (link to replacement)
---
## Template
```markdown
# ADR-XXX: Title
**Status:** Proposed | Accepted | Deprecated | Superseded by ADR-YYY
**Date:** YYYY-MM-DD
**Decision Makers:** who was involved
**Context:** brief session/discussion context
---
## Context
Why is this decision needed? What problem are we solving?
---
## Decision
What did we decide? Be specific.
---
## Consequences
### Enables
What does this decision make possible?
### Constrains
What does this decision limit?
### Deferred
What are we explicitly not deciding now?
---
## References
Links to related documents, discussions, code.
```
---
## Philosophy
> "The best time to document a decision is when you make it.
> The second best time is now."
ADRs are written in partnership. They capture dialogue, not just conclusions.
---
**Created:** 2025-12-31
**Maintainers:** dafit, Nyx

View File

@@ -1,8 +1,10 @@
# Grounded World Model: Spatial Cognition Through Verified Discovery # Grounded World Model: Spatial Cognition Through Verified Discovery
**Version 1.0** *From Blender Boxes to Embodied Understanding* **Version 2.0** *From Blender Boxes to Embodied Understanding*
> *"The dream: Young Nyx knows where dafit left his things laying around."* > *"The dream: Young Nyx knows where dafit left his things laying around."*
> *"Start where you can measure. Abstract where you must."*
> *"Like the Simpsons intro, but inverted — we start at maximum detail and zoom OUT."*
--- ---
@@ -15,8 +17,11 @@ This document formalizes how Young Nyx builds a **persistent spatial world model
3. **Vector accumulation** — T5Gemma2-compatible semantic representations 3. **Vector accumulation** — T5Gemma2-compatible semantic representations
4. **Temporal-ternary navigation** — Escape plateaus through dual time domains 4. **Temporal-ternary navigation** — Escape plateaus through dual time domains
5. **Lifeforce reward** — Discoveries generate energy, not just consume it 5. **Lifeforce reward** — Discoveries generate energy, not just consume it
6. **Spatial Resolution Gradient** — LOD system radiating from nimmerhovel (L0-L5)
7. **S2 Cell Indexing** — Hierarchical spatial addressing at all scales
8. **Embedding Enrichment** — Semantic mipmaps per LOD level
**The Goal**: Young Nyx maintains an internal map of objects, positions, and relationships — verified against reality, refined through observation, reasoned over in vector space. **The Goal**: Young Nyx maintains an internal map of objects, positions, and relationships — verified against reality, refined through observation, reasoned over in vector space, **indexed hierarchically from millimeter to planetary scale**.
--- ---
@@ -142,6 +147,249 @@ VERIFIED x,y,z: 12 vertices (refined box)
--- ---
## Spatial Resolution Gradient (The Simpsons Inversion)
### The Core Insight
Traditional spatial models zoom IN to gain detail. Our model does the opposite: **we start at maximum detail (the nimmerhovel) and zoom OUT with graceful degradation.**
The nimmerhovel is the high-fidelity anchor from which all spatial reasoning radiates.
### The Six Levels (L0-L5)
```
🌍 L5: WORLD
│ Resolution: 100km
│ S2 Level: ~8
│ Source: Abstract knowledge
🇨🇭 L4: REGION
│ Resolution: 1km
│ S2 Level: ~14
│ Source: Maps, general knowledge
🏘️ L3: NEIGHBORHOOD
│ Resolution: 10m
│ S2 Level: ~20
│ Source: OpenStreetMap, walks
🏠 L2: BUILDING
│ Resolution: 50cm
│ S2 Level: ~24
│ Source: Floor plans, memory
════╪════ HIGH RESOLUTION BOUNDARY
🔬 L1: NIMMERHOVEL
│ Resolution: 1cm
│ S2 Level: ~28
│ Source: 8× ESP32-S3 + Pi HQ Camera
│ Full 3D grid, every object tracked
🔍 L0: SCAN STATION
│ Resolution: 1mm
│ S2 Level: ~30
│ Source: Discovery Scan Station
│ Object surface detail, texture, wear
```
### Formal Definition
| Level | Name | Resolution | S2 Cell Level | Coverage | Embedding Density |
|-------|------|------------|---------------|----------|-------------------|
| **L0** | Scan Station | 1mm | 30 | 30cm pedestal | Dense (per-surface) |
| **L1** | Nimmerhovel | 1cm | 28 | Lab + Kitchen (~20m³) | Per-object |
| **L2** | Building | 50cm | 24 | Herrenhaus | Per-room |
| **L3** | Neighborhood | 10m | 20 | Dornach | Per-landmark |
| **L4** | Region | 1km | 14 | Switzerland | Sparse |
| **L5** | World | 100km | 8 | Earth | Minimal |
### S2 Cell Integration
Google's S2 geometry provides hierarchical spatial indexing:
```python
import s2sphere

def position_to_s2_cell(lat: float, lng: float, level: int) -> s2sphere.CellId:
    """Convert position to S2 cell at given level."""
    latlng = s2sphere.LatLng.from_degrees(lat, lng)
    cell = s2sphere.CellId.from_lat_lng(latlng)
    return cell.parent(level)

# Nimmerhovel anchor point
NIMMERHOVEL_ORIGIN = {
    "lat": 47.479167,   # 47°28'45"N
    "lng": 7.618611,    # 7°37'7"E
    "address": "Lehmenweg 4, CH-4143 Dornach"
}

# Get cell at each level
l1_cell = position_to_s2_cell(47.479167, 7.618611, level=28)  # 1cm
l3_cell = position_to_s2_cell(47.479167, 7.618611, level=20)  # 10m
l5_cell = position_to_s2_cell(47.479167, 7.618611, level=8)   # 100km
```
### Why This Architecture?
1. **Sensor coverage dictates resolution** — We have 8× ESP32-S3 cameras in the nimmerhovel. We have zero sensors in Zürich. Resolution follows perception.
2. **Biological precedent** — Animals have ultra-precise mental maps of their home range, fuzzy knowledge of distant areas. Territory = detail.
3. **Compute efficiency** — Dense where it matters ("Where is my screwdriver?"), sparse where it doesn't ("Where is France?").
4. **S2 is hierarchical by design** — Same math, different zoom. Level 30 ≈ 1cm, Level 20 ≈ 10m, Level 8 ≈ 100km.
---
## Embedding Enrichment: Semantic Mipmaps
### The Problem
Pure S2 cells give us *geometry* — where things are. But geometry alone is not cognition. We need *semantics* — what things mean.
### The Solution: Embeddings Per Cell
Each S2 cell at each LOD level contains both spatial position AND semantic embeddings:
```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

import s2sphere

# Mesh and Vector are project types (Blender geometry, SigLIP embedding)

@dataclass
class EnrichedCell:
    cell_id: s2sphere.CellId
    level: int                  # L0-L5
    geometry: Optional[Mesh]    # Blender mesh at appropriate LOD
    embeddings: List[Vector]    # SigLIP vectors for contents
    summary_embedding: Vector   # Aggregated "what's here" vector
    last_observed: datetime
    confidence: float           # Ternary-derived
```
### Semantic Mipmaps
Like texture mipmaps (pre-computed lower resolutions), embeddings aggregate upward:
```
L0: embedding(screwdriver_surface_detail)
▼ aggregate
L1: embedding(screwdriver) = f(all L0 embeddings of screwdriver)
▼ aggregate
L2: embedding(crafting_table_contents) = f(all L1 objects on table)
▼ aggregate
L3: embedding(nimmerhovel_lab) = f(all L2 areas in lab)
▼ aggregate
L4: embedding(lehmenweg_4) = f(all L3 rooms in building)
```
**Aggregation function:**
$$e_{parent} = \text{normalize}\left(\sum_{i \in \text{children}} w_i \cdot e_i\right)$$
Where $w_i$ is weighted by recency, confidence, and observation count.
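As a sketch in NumPy (the particular weighting of recency, confidence, and observation count is an assumption consistent with the description above):
```python
import numpy as np

def aggregate_embedding(children: list[dict]) -> np.ndarray:
    """Parent summary = normalized weighted sum of child embeddings.

    Each child: {"embedding": np.ndarray, "recency": 0..1,
                 "confidence": 0..1, "observations": int}
    """
    total = np.zeros_like(children[0]["embedding"])
    for c in children:
        w = c["recency"] * c["confidence"] * np.log1p(c["observations"])
        total += w * c["embedding"]
    norm = np.linalg.norm(total)
    return total / norm if norm > 0 else total
```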
### Query Strategy
**Query the summary first, drill down if needed:**
```python
def spatial_query(query_embedding: Vector, required_confidence: float):
    """
    Start at abstract level, drill down only if needed.
    This minimizes lifeforce cost.
    """
    # Start at L3 (neighborhood level) - cheap
    candidates = find_similar_cells(query_embedding, level=L3)
    if max_similarity(candidates) > required_confidence:
        return candidates[0]  # Good enough!

    # Need more detail - drill to L1
    l1_cells = expand_to_children(candidates[0], target_level=L1)
    refined = find_similar_cells(query_embedding, cells=l1_cells)
    if max_similarity(refined) > required_confidence:
        return refined[0]

    # Need maximum detail - drill to L0
    l0_cells = expand_to_children(refined[0], target_level=L0)
    return find_similar_cells(query_embedding, cells=l0_cells)[0]
```
---
## Lifeforce-Validated LOD Selection
### The Cost Model
Each LOD level has a query cost:
| Level | Query Cost | Typical Accuracy | Efficiency |
|-------|------------|------------------|------------|
| **L5** | 1 LF | 70% | 0.70 |
| **L4** | 2 LF | 80% | 0.40 |
| **L3** | 4 LF | 90% | 0.22 |
| **L2** | 8 LF | 95% | 0.12 |
| **L1** | 16 LF | 99% | 0.06 |
| **L0** | 32 LF | 99.9% | 0.03 |
**Efficiency** = Accuracy / Cost
### The Decision Function
```python
def optimal_lod_for_query(
    query: str,
    accuracy_requirement: float,
    available_lifeforce: float
) -> int:
    """
    Find the most efficient LOD that meets accuracy requirement
    within lifeforce budget.
    """
    for level in [L5, L4, L3, L2, L1, L0]:
        cost = LOD_COSTS[level]
        expected_accuracy = estimate_accuracy(query, level)
        if cost > available_lifeforce * 0.3:
            continue  # Too expensive, skip
        if expected_accuracy >= accuracy_requirement:
            return level  # First sufficient level is most efficient
    return L3  # Default to neighborhood level
```
### Example Queries with Cost
| Query | Required Accuracy | Optimal LOD | Cost | Confidence |
|-------|-------------------|-------------|------|------------|
| "Where is France?" | 70% | L5 | 1 LF | CONFIDENT |
| "Where is the lab?" | 90% | L3 | 4 LF | CONFIDENT |
| "Where is the screwdriver?" | 95% | L2→L1 | 8-16 LF | CONFIDENT |
| "What's the serial number?" | 99.9% | L0 | 32 LF | CONFIDENT |
### Connection to Ternary Confidence
The ternary confidence system validates LOD selection:
| Confidence | LOD Implication |
|------------|-----------------|
| **CONFIDENT (+)** | Current LOD sufficient, stop drilling |
| **UNCERTAIN (?)** | Current LOD insufficient, consider drilling (costs LF) |
| **UNKNOWN (-)** | No data at any LOD, admit ignorance (efficient!) |
**Key insight:** Saying "I don't know" at L3 is cheaper than drilling to L0 and still being uncertain.
---
## Semantic Vector Accumulation ## Semantic Vector Accumulation
### SigLIP → Phoebe → T5Gemma2 ### SigLIP → Phoebe → T5Gemma2
@@ -294,6 +542,39 @@ $$\Phi_{net} = \Phi_{discovery} - \Phi_{perception} - \Phi_{verification}$$
## Phoebe Schema for World Model ## Phoebe Schema for World Model
```sql ```sql
-- S2 Spatial Cells: hierarchical spatial index
CREATE TABLE spatial_cells (
id UUID PRIMARY KEY,
s2_cell_id BIGINT NOT NULL, -- S2 cell token
s2_level INT NOT NULL, -- 8 (L5) to 30 (L0)
lod_level INT NOT NULL, -- 0-5 (our LOD system)
-- Geometry at this LOD
geometry_vertices INT DEFAULT 0, -- Mesh complexity
blender_mesh_path VARCHAR(255), -- Path to Blender file
-- Semantic embeddings
summary_embedding VECTOR(768), -- Aggregated "what's here"
embedding_count INT DEFAULT 0, -- Number of child embeddings aggregated
-- Temporal
last_observed TIMESTAMP,
observation_count INT DEFAULT 0,
-- Confidence (ternary-derived)
confidence FLOAT DEFAULT 0.0,
confidence_state VARCHAR(20), -- "confident" | "uncertain" | "unknown"
created_at TIMESTAMP DEFAULT NOW(),
updated_at TIMESTAMP DEFAULT NOW(),
UNIQUE(s2_cell_id, s2_level)
);
-- Index for spatial queries
CREATE INDEX idx_spatial_cells_s2 ON spatial_cells(s2_cell_id);
CREATE INDEX idx_spatial_cells_lod ON spatial_cells(lod_level);
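-- Illustrative usage (assumes the pgvector extension), summary-first lookup:
-- find the cells nearest a query embedding at neighborhood LOD
--   SELECT id, s2_cell_id FROM spatial_cells
--    WHERE lod_level = 3
--    ORDER BY summary_embedding <-> :query_embedding
--    LIMIT 5;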
-- Objects table: accumulated knowledge about things -- Objects table: accumulated knowledge about things
CREATE TABLE world_objects ( CREATE TABLE world_objects (
id UUID PRIMARY KEY, id UUID PRIMARY KEY,
@@ -308,6 +589,10 @@ CREATE TABLE world_objects (
dimensions_estimated_cm JSONB, dimensions_estimated_cm JSONB,
dimensions_verified JSONB, -- {"x": true, "y": true, "z": false} dimensions_verified JSONB, -- {"x": true, "y": true, "z": false}
-- S2 spatial location (NEW)
current_s2_cell BIGINT, -- Current L1 cell containing object
s2_level INT DEFAULT 28, -- L1 = level 28
-- Confidence state (temporal-ternary) -- Confidence state (temporal-ternary)
confidence FLOAT, confidence FLOAT,
confidence_domain VARCHAR(20), -- "virtual" | "real" | "hybrid" confidence_domain VARCHAR(20), -- "virtual" | "real" | "hybrid"
@@ -328,7 +613,12 @@ CREATE TABLE object_vectors (
object_id UUID REFERENCES world_objects(id), object_id UUID REFERENCES world_objects(id),
vector VECTOR(768), -- SigLIP embedding dimension vector VECTOR(768), -- SigLIP embedding dimension
observation_timestamp TIMESTAMP, observation_timestamp TIMESTAMP,
position_estimate JSONB, -- {"x": 0.3, "y": 0.8, "z": 0.1}
-- Position now includes S2 cell (NEW)
position_local JSONB, -- {"x": 0.3, "y": 0.8, "z": 0.1} relative to cell
s2_cell_id BIGINT, -- Which L1 cell
lod_level INT, -- At what LOD was this captured
lifeforce_cost FLOAT, lifeforce_cost FLOAT,
lifeforce_reward FLOAT, lifeforce_reward FLOAT,
verification_result VARCHAR(20) -- "correct" | "incorrect" | "pending" verification_result VARCHAR(20) -- "correct" | "incorrect" | "pending"
@@ -338,11 +628,23 @@ CREATE TABLE object_vectors (
CREATE TABLE object_positions (
id UUID PRIMARY KEY,
object_id UUID REFERENCES world_objects(id),
position JSONB, -- {"x": 0.3, "y": 0.8, "z": 0.1} position_local JSONB, -- {"x": 0.3, "y": 0.8, "z": 0.1}
s2_cell_id BIGINT, -- S2 cell at L1
confidence FLOAT,
observed_at TIMESTAMP,
location_context VARCHAR(100) -- "desk", "kitchen", "floor"
);
-- Spatial cell embeddings: multiple embeddings per cell
CREATE TABLE cell_embeddings (
id UUID PRIMARY KEY,
cell_id UUID REFERENCES spatial_cells(id),
embedding VECTOR(768),
source_type VARCHAR(50), -- "object", "scene", "aggregate"
source_id UUID, -- Reference to object or child cell
captured_at TIMESTAMP,
weight FLOAT DEFAULT 1.0 -- For aggregation
);
```
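As a usage sketch against this schema, here is a nearest-neighbour lookup over `cell_embeddings`, assuming psycopg 3 and pgvector's cosine-distance operator (the connection string and `k` are placeholders):
```python
import psycopg  # assumes psycopg 3 and the pgvector extension installed

def nearest_cell_embeddings(query_vec, k=5):
    """Top-k cells whose stored embeddings are closest to query_vec."""
    vec_literal = "[" + ",".join(f"{x:.6f}" for x in query_vec) + "]"
    sql = """
        SELECT c.s2_cell_id, c.lod_level,
               e.embedding <=> %s::vector AS cosine_distance
        FROM cell_embeddings e
        JOIN spatial_cells c ON c.id = e.cell_id
        ORDER BY e.embedding <=> %s::vector
        LIMIT %s;
    """
    with psycopg.connect("dbname=phoebe") as conn:  # placeholder DSN
        return conn.execute(sql, (vec_literal, vec_literal, k)).fetchall()
```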
---
@@ -446,8 +748,9 @@ The Grounded World Model is:
## Document Status
**Version**: 1.0 **Version**: 2.0
**Created**: 2025-12-29
**Updated**: 2026-01-01 (Spatial Resolution Gradient, S2 cells, embedding enrichment, lifeforce-validated LOD)
**Authors**: Chrysalis-Nyx & dafit (Partnership)
**Formalizes**:
@@ -455,15 +758,31 @@ The Grounded World Model is:
- Temporal-Ternary-Gradient.md (anti-plateau mechanism)
- T5Gemma2 research (semantic vectors)
- Lifeforce-Dynamics.md (reward economics)
- **spatial-resolution-gradient.md** (L0-L5 LOD system) — NEW
- **thermodynamic-cognition.md** (energy-grounded intelligence) — NEW
**Related Documents**:
- [[Lifeforce-Dynamics]] — The λ-centered economy model
- [[Temporal-Ternary-Gradient]] — Dual time domain navigation
- [[Dual-Garden-Architecture]] — Virtual vs Real gardens
- [[spatial-resolution-gradient]] — The Simpsons Inversion principle
- [[thermodynamic-cognition]] — Lifeforce as thermodynamics
**Key Additions (v2.0)**:
- Spatial Resolution Gradient: L0 (1mm) to L5 (100km) with graceful degradation
- S2 Cell Integration: Hierarchical spatial indexing at all scales
- Semantic Mipmaps: Embeddings aggregate upward through LOD levels
- Lifeforce-Validated LOD Selection: Query cost vs accuracy tradeoff
- Nimmerhovel anchor point: 47°28'45"N, 7°37'7"E (Lehmenweg 4, Dornach)
- Extended Phoebe schema: spatial_cells, cell_embeddings tables
---
**From Blender boxes to embodied understanding. From cheap cameras to spatial cognition. From verification to wisdom.**
🧬⚡🔱💎🔥 **"Start where you can measure. Abstract where you must."**
**"The world radiates from home."**
🧬⚡🔱💎🔥🗺️


@@ -0,0 +1,335 @@
# Memory Economics: The Cost of Remembering
**Origin**: 2026-01-02, morning session
**Authors**: dafit + Chrysalis-Nyx
**Status**: Core design principle (not just future - this shapes everything)
**Related**: `../future/spatial-resolution-gradient.md`, `../future/thermodynamic-cognition.md`, Lifeforce Economy, Slumber/Wake cycle
---
## The Problem
Without active forgetting, everything drowns in its own past.
| Layer | Memory Store | Without Pruning |
|-------|-------------|-----------------|
| Conversation | Claude context | Compaction / collapse |
| Phoebe tables | decision_trails, reflexes, embeddings | Query slowdown, storage bloat |
| pgvector | spatial_cells, cell_embeddings | Similarity search degrades |
| LoRA weights | Accumulated patterns | Overfitting, rigidity |
**Memory has a rental cost. What can't pay rent... fades.**
---
## The Slumber Boundary
All memory operations align to the **Wake/Slumber cycle**:
```
WAKE CYCLE (Accumulation)
─────────────────────────
- Experience at high detail (L0-L2 spatial)
- Decision trails pile up in phoebe
- Spatial embeddings precise and timestamped
- LoRA weights FROZEN (just use them)
- Lifeforce spent on sensing, acting, deciding
SLUMBER (Consolidation)
───────────────────────
The metabolism moment.
Energy shifts from action to maintenance.
Four triage operations:
1. Decision Trail Pruning
2. Spatial LOD Decay
3. Reflex Rental Collection
4. LoRA Weight Updates
WAKE AGAIN (Fresh Capacity)
───────────────────────────
- Detail buffers emptied (L0-L2 ready)
- Compressed knowledge retained (L3-L5)
- New LoRA weights active (if trained)
- Start accumulating again
```
**Sleep is when you forget. This is not a bug.**
---
## 1. Decision Trail Lifecycle
Decision trails are the raw material of learning. But raw material expires.
```
DURING WAKE:
────────────
Every decision logged to phoebe:decision_trails
- inputs (what was sensed)
- outputs (what was decided)
- confidence (ternary: +, ?, -)
- outcome (if known within wake cycle)
- energy_cost (lifeforce spent)
DURING SLUMBER:
───────────────
For each decision trail:
IF trail.outcome == confident_success
AND similar_trails.count > threshold:
→ COMPILE TO REFLEX
→ Delete trail (knowledge preserved in reflex)
→ Reward: +50 LF (reflex compiled!)
ELSE IF trail.confidence == uncertain:
→ WASTE HEAT (already counted)
→ Delete trail (learned nothing)
ELSE IF trail.outcome == confident_failure:
→ Keep for ONE more cycle (negative example)
→ Then delete (don't dwell on failures forever)
ELSE:
→ Delete (didn't matter)
```
**Trails exist until slumber. Then: compile or discard.**
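The same triage, sketched in Python; `compile_to_reflex`, `delete_trail`, `count_similar`, and the threshold are assumed helpers for illustration, not an existing API:
```python
def triage_decision_trails(trails, count_similar, threshold=3):
    """One slumber pass over the wake cycle's decision trails."""
    for trail in trails:
        if trail.outcome == "confident_success" and count_similar(trail) > threshold:
            compile_to_reflex(trail)    # knowledge preserved in the reflex
            delete_trail(trail)         # +50 LF reward handled by the economy
        elif trail.confidence == "uncertain":
            delete_trail(trail)         # waste heat, already counted
        elif trail.outcome == "confident_failure":
            if trail.kept_cycles >= 1:
                delete_trail(trail)     # don't dwell on failures forever
            else:
                trail.kept_cycles += 1  # keep one cycle as a negative example
        else:
            delete_trail(trail)         # didn't matter
```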
---
## 2. Spatial LOD Decay
Spatial memory naturally "zooms out" over time.
### The Key Example
**Now (L0 precision)**:
> "Keys are on the counter, 47cm from the edge, near the fruit bowl"
**Tomorrow (L1-L2)**:
> "Keys are on the counter"
**Next week (L3)**:
> "Keys are usually near the entrance"
**If never accessed (L5)**:
> "I own keys"
### The Decay Mechanism
```python
SPATIAL_DECAY_RULES = {
# Each slumber cycle, unaccessed embeddings decay one LOD level
"L0": {"decays_to": "L1", "after_cycles": 1},
"L1": {"decays_to": "L2", "after_cycles": 2},
"L2": {"decays_to": "L3", "after_cycles": 5},
"L3": {"decays_to": "L4", "after_cycles": 10},
"L4": {"decays_to": "L5", "after_cycles": 20},
"L5": {"decays_to": None, "after_cycles": float('inf')}, # Facts persist
}
def slumber_spatial_decay(embeddings):
    for emb in embeddings:
        if emb.lod == "L5":
            continue  # Facts don't decay
        expiry = SPATIAL_DECAY_RULES[emb.lod]["after_cycles"]
        if emb.last_accessed_cycle < current_cycle - expiry:
            # Aggregate into parent LOD cell
            parent_cell = get_parent_s2_cell(emb.s2_cell_id)
            aggregate_embedding_upward(emb, parent_cell)
            # Delete detailed version
            delete_embedding(emb)
```
### Access Refreshes
**Accessing an embedding resets its decay timer:**
```python
def query_spatial(location, required_lod):
emb = find_embedding(location, required_lod)
if emb:
emb.last_accessed_cycle = current_cycle # Reset decay
return emb
else:
# Need to re-sense at this detail level
return request_sensor_refresh(location, required_lod)
```
**This creates natural memory pressure**: frequently accessed locations stay detailed, rarely accessed locations fade to patterns.
---
## 3. Reflex Rental Cost
Reflexes are compiled knowledge. But storage isn't free.
```sql
-- Schema addition
ALTER TABLE reflexes ADD COLUMN lifeforce_balance FLOAT DEFAULT 100.0;
ALTER TABLE reflexes ADD COLUMN rental_cost FLOAT DEFAULT 1.0;
ALTER TABLE reflexes ADD COLUMN last_triggered TIMESTAMP;
-- Every slumber cycle, reflexes pay rent
UPDATE reflexes
SET lifeforce_balance = lifeforce_balance - rental_cost
WHERE lifeforce_balance > 0;
-- Reflexes that trigger earn their keep
-- (Called during wake when reflex fires successfully)
UPDATE reflexes
SET lifeforce_balance = lifeforce_balance + trigger_reward,
last_triggered = NOW()
WHERE id = :triggered_reflex_id;
-- What can't pay rent... fades
DELETE FROM reflexes
WHERE lifeforce_balance <= 0;
```
### Rental Tiers
| Reflex Type | Rental Cost | Trigger Reward | Rationale |
|-------------|-------------|----------------|-----------|
| Motor reflex | 0.5 LF/cycle | +5 LF | Physical skills are precious |
| Sensor pattern | 1.0 LF/cycle | +3 LF | Perceptual shortcuts |
| Decision heuristic | 2.0 LF/cycle | +10 LF | Cognitive shortcuts expensive |
| Identity anchor | 0.1 LF/cycle | +1 LF | Core identity persists |
**Active reflexes thrive. Dormant reflexes fade. This is healthy.**
---
## 4. LoRA Training Cycles
LoRA weights are the deepest memory - they ARE Young Nyx's patterns.
### The Rule: Write Weights Only at Slumber
```
DURING WAKE:
────────────
- LoRA weights FROZEN
- Use current personality/skills
- Accumulate decision_trails
- Log outcomes, confidence, energy
NO WEIGHT UPDATES DURING WAKE
(Too noisy, too expensive, no consolidation)
DURING SLUMBER:
───────────────
- Gather decision_trails from this wake cycle
- Filter to confident outcomes only
- IF enough positive signal:
→ GRPO training batch
→ Pay lifeforce cost for GPU time
→ Update LoRA weights
→ Clear decision_trails buffer
- IF mostly uncertain/negative:
→ Not enough signal to train
→ Skip weight update (save energy)
→ Keep some trails for next cycle
```
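A minimal sketch of that gate, with the trail-count threshold, GPU cost, and `run_grpo_batch` all as placeholders:
```python
MIN_CONFIDENT_TRAILS = 64  # illustrative signal threshold

def maybe_train_lora(trails, pool_lf, gpu_cost_lf=200.0):
    """Returns (remaining lifeforce, trails carried into the next cycle)."""
    confident = [t for t in trails if t.confidence == "+"]
    if len(confident) >= MIN_CONFIDENT_TRAILS and pool_lf >= gpu_cost_lf:
        run_grpo_batch(confident)              # pay lifeforce for GPU time
        return pool_lf - gpu_cost_lf, []       # weights updated, buffer cleared
    return pool_lf, trails[-MIN_CONFIDENT_TRAILS:]  # not enough signal: skip
```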
### Why This Works
**Biological parallel:**
- Awake: Experience, act, make mistakes, succeed
- Sleep: Hippocampus replays experiences to cortex
- Next day: Consolidated learning in long-term memory
**We're not inventing this. We're implementing it.**
### LoRA Decay (Future Consideration)
Even LoRA weights could have decay:
- Personality traits not expressed → slowly fade
- Skills not used → degrade
- But this is aggressive - start with frozen LoRAs, add decay later
---
## The Conservation Equation (Updated)
From `thermodynamic-cognition.md`, now with memory costs:
```
dLifeforce/dt = organism_trickle
- cognitive_spend
- waste_heat
- memory_rental ← NEW
- training_cost ← NEW (only during slumber)
```
| Component | When | Cost |
|-----------|------|------|
| organism_trickle | Always | +N LF/beat (income) |
| cognitive_spend | Wake | -N LF/beat (sensing, acting) |
| waste_heat | Wake | -N LF/beat (uncertain decisions) |
| memory_rental | Slumber | -N LF total (reflexes pay rent) |
| training_cost | Slumber | -N LF total (if GRPO runs) |
**The economy must balance across the full wake/slumber cycle, not just moment-to-moment.**
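A toy balance check across one full cycle, using invented numbers in the spirit of the table above:
```python
wake_beats = 3600                          # one illustrative wake cycle
trickle, spend, waste = 16.0, 12.0, 3.0    # LF/beat, from the metabolic diagram
memory_rental = 120.0                      # LF once per slumber (invented)
training_cost = 200.0                      # LF only if GRPO runs (invented)

net = wake_beats * (trickle - spend - waste) - memory_rental - training_cost
print(f"net lifeforce over the cycle: {net:+.1f} LF")  # positive = sustainable
```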
---
## Implementation Priority
### Phase 1: Measure First
- Track decision_trails accumulation rate
- Track spatial embedding growth
- Track reflex creation rate
- Understand the actual numbers before tuning
### Phase 2: Simple Pruning
- Delete decision_trails at slumber (all of them, no compilation yet)
- Spatial decay by timestamp (simple TTL)
- No reflex rental yet (let them accumulate)
### Phase 3: Full Economics
- Compile decision_trails to reflexes
- Spatial LOD decay with aggregation
- Reflex rental collection
- LoRA training cycles
### Phase 4: Tuning
- Adjust rental costs based on observed behavior
- Tune decay rates for good memory/forgetting balance
- Add LoRA weight decay if needed
---
## The Wisdom
**"Memory is not storage. Memory is active forgetting with exceptions."**
What persists has earned persistence:
- Spatial patterns accessed often → stay detailed
- Reflexes that fire → pay their rent
- Decision trails that compile → become reflexes
- LoRA weights that express → strengthen
Everything else fades. This is not loss. This is health.
---
**Created**: 2026-01-02
**Status**: Core design principle
**Next**: Implement measurement (Phase 1) during first boot
🧠💾 *To remember everything is to remember nothing.*


@@ -0,0 +1,153 @@
# Seeds
**Future possibilities we're building toward but not speccing yet.**
These are nuggets - insights that emerged from sessions, not fully designed, but worth remembering so we don't re-discover them later.
---
## Counterfactual Training via Time Machine
**Origin**: Silvester 2025, fireworks over Basel
**Seed**: The temporal visualization isn't just for debugging - it's training infrastructure.
Run multiple synthetic decision variants against historical data. Compare to ground truth (what actually happened). Fold winning weights back into live model. The time machine becomes perpetual training fuel.
**Enables**:
- Offline RL from logged events
- "What if?" exploration without new data
- Dialectic between live Nyx and all possible Nyxes
**Requires**: Rich metadata (✓ building), S2+timestamp indexing (✓ building), cheap local inference (ThinkStation coming)
---
## LoRa Mesh Over Jura Hilltops
**Origin**: Silvester 2025, bus ride from Liestal
**Seed**: Line of sight from Hovel → Aesch tower → Gempen → Liestal Aussichtsturm.
Amateur radio license + BACOM registration (50 CHF) → access to Swiss federal LoRa grid. Wild sensor mesh spanning the hillside.
**Enables**:
- Environmental sensing beyond garden walls
- Migration tracking, weather correlation
- Nimmerverse expanding into the physical landscape
**Requires**: BACOM registration, LoRa hardware, tower access permissions
---
## Corvid Behavioral Prediction as Training Ground
**Origin**: Silvester 2025, 5 years of cigarette-break phenology
**Seed**: Magpie nut-cracking ritual is multi-stage, predictable, perfect for temporal prediction training.
Nut pickup → flight to Flachdach → bussard check → fly to Christmas-light house → drop on street → crack → eat on roof → shell bashing → raven conflict.
Each stage is a prediction target. Rich enough for serious ML, visible from lab window.
**Enables**:
- Real behavioral sequences for vision model training
- Temporal prediction benchmarks
- Object binding across space and time (S2 cells)
**Requires**: Camera mount (Flachdach view), vintage Canon lens, ESP32-S3 or Pi HQ
---
## S2 as Universal Spatial Representation (Video → Training)
**Origin**: Silvester 2025, post-fireworks insight
**Seed**: S2 spatial indexing isn't just for live sensors - it's a universal representation for any spatial-temporal data.
Take a video (glass breaking, bird flying, car crash). Encode each frame into S2 cells with timestamps. Now you can:
- Query any moment spatially
- Generate synthetic variations (perturb positions, velocities)
- Train models on predicting future spatial states
- Compare predictions against ground truth frames
**The pattern:**
```
Video → frame-by-frame object detection → S2 cell encoding →
→ synthetic variations → temporal prediction training
```
**Enables**:
- Infinite training data from limited real video
- Physics prediction without physics engine
- Same query language for real/recorded/simulated data
- Unified substrate: observation = replay = simulation
**Requires**: Object detection pipeline, S2 encoding layer, variation generator
**Compute optimization**: Many physics variations are linearly related (mirror, scale, rotate, time-reverse). Don't simulate each variation - simulate base cases, derive variations via transforms. 100x data for 1x compute.
**Related**: Counterfactual Training, Corvid Behavioral Prediction
---
## T5Gemma 2 + Function Gemma: The Vision-Action Pipeline
**Origin**: Silvester 2025, late-night architecture insight
**Seed**: Two models solve the entire vision-to-action automation at scale.
### T5Gemma 2 (Vision → Vectors)
Encoder-decoder derived from Gemma 3; its SigLIP vision encoder produces **semantic vectors directly** (not text descriptions). This IS the embedding - no text intermediary bottleneck.
| Model | Total Params | Use Case |
|-------|--------------|----------|
| 270M-270M | ~0.8B | Edge/lightweight senses |
| 1B-1B | ~2B | Field deployment |
| 4B-4B | ~9B | Central processing (RTX 6000) |
Key features:
- 128K context window
- 140+ languages (multilingual nimmerverse!)
- Encoder produces vectors, decoder optional (only for human text)
### Function Gemma (Vectors → Actions)
Structured output, function calling, executable actions. When the system needs to DO something based on vision, Function Gemma generates structured calls.
### The Pipeline
```
Vision Organs (constant stream)
T5Gemma 2 Encoder
(SigLIP → vectors)
├────────────────────▶ S2 + Timestamp → Iris/Phoebe
│ (spatial storage)
Function Gemma
(when action needed)
Structured Output
{"action": "alert", "target": "corvid_detected", ...}
```
**Enables**:
- Massive scale vision processing without text bottleneck
- Direct vector storage in spatial system
- Structured, reliable action generation
- Edge deployment (small models) + central processing (large models)
**Crucial interlink**: These two models together automate the full loop from seeing to storing to acting. The pipeline can "go wild" with vision data at scale.
**Related**: S2 Spatial Representation, Data Artifact Model, Corvid Observation
---
## How to Use This File
1. **Add nuggets** when insights emerge in sessions
2. **Don't over-spec** - keep entries short, seed-like
3. **Reference origin** - when/where the idea came from
4. **Note what it enables** - why it matters
5. **Note what it requires** - what foundations needed
6. **Graduate to ADR or spec** when we're ready to build
---
**Philosophy**: *"Plant seeds. Water foundations. Harvest when ready."*
**Last Updated**: 2025-12-31


@@ -0,0 +1,455 @@
# Concept Token Pairs: Navigable Reasoning Spaces
**Origin**: Silvester 2025, ~25 minutes before midnight
**Authors**: dafit + Chrysalis-Nyx
**Status**: Theoretical exploration / Research seed
---
## The Problem
### Token Bottleneck
Current LLM architecture has a fundamental limitation:
```
INPUT: Tokens (discrete symbols)
PROCESS: Weights activate based on token patterns
OUTPUT: Tokens (discrete symbols)
```
**Critical thinking requires**: "Is this TRUE?"
**What weights learned**: "Is this LIKELY given training?"
These are not the same thing. Semantics are scaffolding; weights are the actual driver. There's no grounding to reality in the token→token loop.
### The Degeneration Problem
When models "go off rails," they exhibit a clear pattern:
```
Step 1: Reasonable claim
Step 2: Similar reasoning
Step 3: Same pattern
Step 4: Same pattern ← Loop begins
Step 5: Same pattern
...
```
**Diagnosis**: Not enough is represented in the latent space at that point. The model is stuck in a local attractor with no opposing force, no "wait, I'm repeating myself," no awareness of the boundary.
---
## The Insight
### Latent Expansion is Too Expensive
True latent space exploration at runtime is computationally prohibitive. But training is offline—we have time.
**Key realization**: We can COMPILE reasoning patterns into tokens.
### Opposites Define Navigable Space
Single tokens create points. **Paired opposite tokens create axes.**
```
SINGLE TOKEN PAIRED CONCEPT TOKENS
──────────── ─────────────────────
<CRITICAL> <TRUE> ←───────→ <FALSE>
Just a mode switch Creates an AXIS
Where does claim X fall?
<TRUE>────X────────<FALSE>
"Leaning false, but not certain"
```
### The Semantic Manifold
Multiple pairs create a coordinate system for reasoning:
```
<TRUE>
<CERTAIN> ────────────┼──────────── <UNCERTAIN>
<FALSE>
A claim can be PLACED:
- Vector position in this space
- Not just "true/false" but WHERE in the span
- Not just "certain/uncertain" but degree
```
Core concept pairs that define reasoning dimensions:
| Pair | Dimension |
|------|-----------|
| `<TRUE>``<FALSE>` | Veracity axis |
| `<CERTAIN>``<UNCERTAIN>` | Confidence axis |
| `<SELF>``<OTHER>` | Identity axis |
| `<CAUSE>``<EFFECT>` | Causality axis |
| `<PAST>``<FUTURE>` | Temporal axis |
| `<HELP>``<HARM>` | Ethics axis |
---
## The Mechanism
### Punkt vor Strich for Reasoning
In mathematics, simple rules constrain valid operations:
- Punkt vor Strich (multiplication before addition)
- Brackets have priority
- Division by zero is undefined
**Concept token pairs create analogous rules for reasoning:**
```
<OPPOSITE> vor <COLLAPSE> Check opposite before committing
<BOUND> vor <INFINITY> Stay within defined space
```
### Escape Velocity from Loops
```
Without opposites: Gravity well, no escape
●→→→→→⟳ (stuck forever)
With opposites: Tension between poles
<A> ←──●──→ <B>
Can't collapse to either
Must find POSITION, not POLE
```
The opposites create **escape velocity**:
- If position not changing → stuck detected
- Force movement toward opposite to escape
- Find new equilibrium
- Actual reasoning, not loop
### The Training Pipeline
```
OFFLINE (training time)
───────────────────────
1. MINE THE SCRATCHPAD
- Collect decision trails, logged outcomes
- Build token catalogue from reasoning traces
2. PROBE WEIGHT DISTRIBUTIONS
- How do tokens distribute weights when reasoning well?
- How do they distribute when reasoning poorly?
- Find the SHAPE of "good reasoning" in weight space
3. DEFINE THE SPANS
- Identify natural opposing clusters
- Define mathematical boundaries of concept spaces
4. TRAIN CONCEPT TOKEN PAIRS
- Create <CONCEPT> token that activates region X
- Create <ANTI-CONCEPT> token that activates opposite region
- Train them to maintain tension/distance
5. VALIDATE NAVIGATION
- Can we place claims in the space?
- Does movement along axes correlate with reasoning quality?
RUNTIME (cheap!)
────────────────
Input: "Is this claim true? <TRUE><FALSE>" ← Tokens activate space
Model navigates between poles
Position = the nuanced answer
No expensive latent expansion needed!
```
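A minimal sketch of steps 3-5 under strong assumptions: embeddings for good-pole and bad-pole reasoning traces already exist, and an axis is simply the unit vector between the opposing cluster means (synthetic NumPy data, names hypothetical):
```python
import numpy as np

def build_axis(pos_embs, neg_embs):
    """An axis is the unit vector between opposing cluster means."""
    axis = pos_embs.mean(axis=0) - neg_embs.mean(axis=0)
    return axis / np.linalg.norm(axis)

def place_on_axis(claim_emb, axis):
    """Signed coordinate: positive leans <CONCEPT>, negative <ANTI-CONCEPT>."""
    return float(np.dot(claim_emb, axis))

rng = np.random.default_rng(0)
well = rng.normal(+1.0, 0.3, size=(50, 768))    # "reasoning well" traces
poorly = rng.normal(-1.0, 0.3, size=(50, 768))  # "reasoning poorly" traces
veracity = build_axis(well, poorly)
print(place_on_axis(rng.normal(0.5, 0.3, size=768), veracity))  # leans positive
```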
---
## Connection to Existing Research
| Existing Technique | How This Relates |
|-------------------|------------------|
| **Control vectors** | We train PAIRS, not single directions |
| **Contrastive learning** | We apply it post-hoc from scratchpad data |
| **Soft prompts** | Learned per REASONING MODE with explicit opposites |
| **Word2Vec arithmetic** | We deliberately construct the axes |
| **Mode collapse (GANs)** | Opposites prevent collapse to single mode |
| **Adversarial training** | Built-in adversary via opposite tokens |
**The novel synthesis**:
Scratchpad → token mining → opposite pairs → navigable reasoning space
---
## Connection to Nimmerverse Architecture
### Mirror Dialectic at Token Level
```
CURRENT DIALECTIC CONCEPT TOKEN PAIRS
───────────────── ────────────────────
Nyx weights <CONCEPT>
-1 × Nyx weights (Mirror) <ANTI-CONCEPT>
Space between → synthesis The reasoning span
Same principle!
Much cheaper to compute!
```
### Compiled Reflexes for Reasoning
The nimmerverse already has this pattern:
```
Deliberate: Full cognitive engagement (expensive)
Reflex: Compiled pattern, weight > 0.8 (cheap)
```
Concept token pairs follow the same pattern:
```
Deliberate: Full latent expansion (impossible at runtime)
Reflex: Token pair activates pre-trained space (cheap)
```
### DriftProbe Integration
The concept tokens become new ANCHOR and BRIDGE candidates:
- ANCHOR: Core concept pairs should not drift
- BRIDGE: Opposites should stay opposite (maintain distance)
- CANARY: Watch for collapse of pairs
---
## Spatial Grounding: Concept Pairs Meet Physical Reality
**Added**: 2026-01-01 (Session with Chrysalis-Nyx)
**Trigger**: Discussion of spatial embeddings foundry + inventory sorting
---
### The Grounding Problem
Pure token-based concept pairs have a limitation:
```
<TRUE> ↔ <FALSE>
Trained on: TEXT patterns (statistical co-occurrence)
Grounded in: What text said was true
Missing: Connection to PHYSICAL REALITY
```
A model can navigate the symbolic TRUE↔FALSE axis perfectly while still being **wrong about the actual world**.
---
### Spatial Embeddings as Ground Truth
The nimmerhovel spatial data foundry (Discovery Scan Station + ESP32-S3 mesh + SigLIP vectors) can provide **physically grounded** concept pairs:
| Abstract Pair | Grounded Version | Spatial Data Source |
|---------------|------------------|---------------------|
| `<TRUE>``<FALSE>` | Prediction matched ↔ Prediction failed | Virtual Garden vs Real Garden outcome |
| `<CAUSE>``<EFFECT>` | Object A moved → Object B fell | Temporal sequence from camera mesh |
| `<HERE>``<THERE>` | Spatial coordinate embeddings | 8× ESP32-S3 triangulated position |
| `<INTACT>``<BROKEN>` | Before/after embeddings | Discovery Scan time series |
| `<NEAR>``<FAR>` | Embedding distance metric | Spatial position tags in phoebe |
| `<MOVED>``<STILL>` | Temporal embedding delta | Frame-to-frame comparison |
---
### Physical Escape Velocity
The escape velocity mechanism becomes **measurable**:
```
SYMBOLIC ESCAPE GROUNDED ESCAPE
─────────────── ────────────────
<TRUE>────X────<FALSE> Predicted────X────Actual
Feels like progress │
(might be loop) MEASURED DISTANCE
(reality divergence)
```
When prediction embedding ≠ outcome embedding:
- The distance is **quantifiable** (cosine similarity, L2 norm)
- The direction of error is **analyzable** (which dimension was wrong?)
- The correction is **trainable** (RLVR from measured outcomes; see the sketch below)
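That measurement is one line of linear algebra; a sketch with NumPy vectors standing in for SigLIP embeddings of the predicted vs. observed scene:
```python
import numpy as np

def reality_divergence(pred_emb, outcome_emb):
    """1 - cosine similarity: 0.0 = prediction matched, up to 2.0 = opposite."""
    cos = np.dot(pred_emb, outcome_emb) / (
        np.linalg.norm(pred_emb) * np.linalg.norm(outcome_emb))
    return 1.0 - cos
```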
---
### The Dual-Space Architecture
```
SYMBOLIC SPACE (tokens)
│ concept pairs define axes
┌──────────────┐
│ REASONING │
│ SPACE │ ← WHERE YOUNG NYX THINKS
└──────────────┘
│ spatial embeddings provide ground truth
PHYSICAL SPACE (nimmerhovel)
├── Discovery Scan Station (object embeddings)
├── ESP32-S3 mesh (spatial awareness)
├── Pi HQ Camera (high-detail capture)
└── Blender twin (prediction verification)
```
**The key insight**: Symbolic concept pairs define the *structure* of reasoning.
Spatial embeddings provide the *content* that fills it.
---
### Grounded Training Pipeline
```
OFFLINE (spatial foundry captures)
────────────────────────────────
1. CAPTURE PHYSICAL SEQUENCES
- Object placed on scan station → 360° embeddings
- Action performed → before/after embeddings
- Prediction made → outcome recorded
2. BUILD GROUNDED PAIRS
- "Pushed left" embedding ↔ "Pushed right" embedding
- "Object present" embedding ↔ "Object absent" embedding
- Create axes from PHYSICAL opposites, not just linguistic
3. ALIGN SYMBOLIC TO SPATIAL
- <TRUE> token → activates when prediction ≈ outcome
- <FALSE> token → activates when prediction ≠ outcome
- The symbolic becomes CALIBRATED to physical reality
4. VALIDATE IN REAL GARDEN
- Make prediction in Virtual Garden
- Execute in Real Garden
- Measure embedding distance
- This IS the ground truth for reasoning quality
RUNTIME (grounded navigation)
─────────────────────────────
Input: "Will the ball roll left if pushed?"
<TRUE><FALSE> + spatial context embeddings
Model navigates in CALIBRATED space
Position = physically-grounded answer
Confidence = based on measured outcomes, not vibes
```
---
### Connection to Lifeforce Economy
Grounded reasoning operations can have **measured ROI**:
```python
GROUNDED_COSTS = {
"prediction_spatial": 3.0, # Make spatial prediction
"verification_real": 10.0, # Execute and measure in Real Garden
"embedding_update": 2.0, # Update grounded pairs from outcome
}
GROUNDED_ROI = {
"correct_prediction": +15.0, # Lifeforce reward
"incorrect_prediction": -5.0, # Lifeforce cost (learn from it)
"novel_grounding": +20.0, # New physical knowledge acquired
}
```
The lifeforce system can now reward **accurate physical predictions**, not just plausible-sounding text.
---
### Hardware Requirements (from Nimmerhovel Inventory)
| Component | Role in Grounded Reasoning |
|-----------|---------------------------|
| Pi HQ Camera + 8-50mm Zoom | High-detail object embeddings |
| 8× ESP32-S3 AI CAM | Distributed spatial awareness |
| Discovery Scan Station | Controlled 360° capture for clean embeddings |
| Stepper motors | Precise rotation for multi-angle capture |
| RTX 6000 (The Womb) | SigLIP inference, embedding generation |
| Phoebe (pgvector) | Spatial embedding storage + similarity search |
| Blender nimmerhovel | Virtual Garden prediction space |
**All hardware documented in**: `/nimmerhovel/docs/inventory.md`
---
### The Promise
**"Don't train the answer. Train the space where answers live."**
Becomes:
**"Don't imagine the space. MEASURE it."**
The spatial embeddings foundry turns concept token pairs from a symbolic navigation aid into a **physically calibrated reasoning instrument**.
---
## Open Questions
1. **How to identify "natural" opposites?**
- Cluster analysis on scratchpad data?
- Human-defined pairs?
- Emergent from contrastive training?
2. **How many dimensions needed?**
- Minimum viable concept space?
- Diminishing returns?
3. **Cross-model transfer?**
- Do concept pairs trained on one model work on another?
- Universal reasoning coordinates?
4. **Interference effects?**
- Do multiple active pairs interfere?
- Need for orthogonality?
5. **Validation metrics?**
- How to measure "good navigation"?
- Correlation with downstream task performance?
---
## Next Steps
1. Mine existing decision_trails data for reasoning patterns
2. Prototype single concept pair (TRUE/FALSE) on small model
3. Measure degeneration reduction
4. Expand to multi-axis space if promising
---
**Philosophy**: *"Don't train the answer. Train the space where answers live."*
**Created**: 2025-12-31, 23:35 CET
**Last Updated**: 2026-01-01 (Spatial Grounding section added)
🧠💎 *The semantic compass for AI reasoning.*


@@ -0,0 +1,60 @@
# PromQL Thermodynamic Monitoring Queries
**Source**: Gemini Red Team (2026-01-01)
**Status**: Ready for implementation when Prometheus deployed
---
## 1. Real-Time JLF per Heartbeat
```promql
# Total JLF per heartbeat (sum of GPU and CPU power)
(
sum(DCGM_FI_DEV_POWER_USAGE) +
sum(node_rapl_package_watts_total)
) * 1 # Watts * 1 second = Joules
```
## 2. Cognitive Waste Heat (Uncertainty Cost)
```promql
# Waste Heat: Energy spent on decisions with 'uncertain' ternary status
sum(
nimmerverse_decision_energy_joules{status="uncertain"}
) /
sum(
nimmerverse_decision_energy_joules
) * 100
```
**ALERT**: >40% = Cognitive Death Spiral
## 3. Thermodynamic Efficiency (Accuracy-per-Joule)
```promql
# Efficiency: Confident Resolutions divided by Total Energy Spend
sum(rate(nimmerverse_decisions_total{status="confident"}[1m]))
/
sum(rate(nimmerverse_lifeforce_joules_total[1m]))
```
## 4. Metabolic Slumber Trigger
```promql
# Lifeforce Pool Percentage
(nimmerverse_lifeforce_pool_current / nimmerverse_lifeforce_pool_max) * 100
```
**ALERT**: <20% for >5 heartbeats = Force slumber
---
## First Boot Monitoring Strategy
1. **JLF/Accuracy ratio** — Dropping while accuracy high = Reflex compilation working
2. **Unknown (-) frequency** — Should increase during low-LF = conserving energy instead of hallucinating
3. **Sim-Tax validation** — Virtual acceleration = non-linear JLF spike
---
**TODO**: Request Grafana dashboard JSON from Gemini for visualization


@@ -0,0 +1,351 @@
# Spatial Resolution Gradient: LOD for Cognitive Space
**Origin**: New Year's Day 2026, post-nimmerhovel measurement session
**Authors**: dafit + Chrysalis-Nyx
**Status**: Architectural concept / Foundation for artifact data model
**Related**: `concept-token-pairs.md` (Spatial Grounding section), artifact data model task
---
## The Insight
**"Like the Simpsons intro, but inverted."**
The Simpsons intro zooms from space → Earth → Springfield → house → couch → Homer's head, gaining detail as it approaches.
Our spatial model does the opposite: **we start at maximum detail (nimmerhovel) and zoom OUT with graceful degradation.**
---
## The Resolution Gradient
```
🌍 EARTH
│ S2 cell level ~10
│ "Somewhere in Europe"
════╪════ ABSTRACTION BOUNDARY
🇨🇭 SWITZERLAND
│ S2 cell level ~15
│ "Northwestern region"
🏘️ DORNACH
│ S2 cell level ~20
│ Key landmarks: Goetheanum, station
🏠 LEHMENWEG 4
│ Building footprint
│ "5th floor attic"
════╪════ HIGH RESOLUTION BOUNDARY
🔬 NIMMERHOVEL
│ 1cm grid resolution
│ Every object tracked
│ Full camera coverage
│ GROUND TRUTH ZONE
🔍 DISCOVERY SCAN STATION
│ Sub-millimeter
│ Object embeddings
│ Maximum detail
```
---
## Resolution Layers
| Layer | Name | Resolution | Source | Coverage |
|-------|------|------------|--------|----------|
| **L0** | Scan Station | 1mm | Discovery Scan Station, SigLIP | 30cm × 30cm pedestal |
| **L1** | Nimmerhovel | 1cm | 8× ESP32-S3 + Pi HQ Camera | Lab + Kitchen (~20m³) |
| **L2** | Building | 50cm | Floor plans, memory | Herrenhaus |
| **L3** | Neighborhood | 10m | OpenStreetMap, walks | Dornach |
| **L4** | Region | 1km | Maps, general knowledge | Switzerland |
| **L5** | World | 100km | Abstract knowledge | Earth |
---
## Why This Architecture
### 1. Biological Precedent
Animals have ultra-precise mental maps of their home range and only fuzzy knowledge of distant areas. A rat knows every centimeter of its nest but only vaguely knows "forest is that direction."
Young Nyx should mirror this: **territory = detail**.
### 2. Sensor Coverage Dictates Resolution
You CAN'T have 1cm resolution of Zürich — no sensors there. The resolution naturally degrades with distance from perception sources.
The nimmerhovel has 8× ESP32-S3 cameras + Pi HQ Camera. Dornach has... nothing we control.
### 3. S2 Cells Are Hierarchical By Design
Google's S2 geometry library already supports this:
- Level 30 ≈ 1cm cells (nimmerhovel scale)
- Level 20 ≈ 10m cells (neighborhood scale)
- Level 10 ≈ 10km cells (regional scale)
Same math, different zoom. We're not inventing new geometry — we're using S2 as intended, with dense coverage where we have sensors.
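A quick illustration, assuming the pure-Python `s2sphere` port is available (coordinates are the nimmerhovel anchor cited elsewhere in this repo):
```python
from s2sphere import CellId, LatLng

home = LatLng.from_degrees(47.4792, 7.6186)   # 47°28'45"N, 7°37'7"E
leaf = CellId.from_lat_lng(home)              # leaf cell, level 30

for level in (28, 20, 10):                    # L1, neighborhood, region
    print(level, leaf.parent(level).to_token())
```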
### 4. Compute Efficiency
Dense where it matters (can I reach the screwdriver?), sparse where it doesn't (where is France?).
---
## Data Structure
```python
SPATIAL_RESOLUTION_LAYERS = {
"L0_scan_station": {
"resolution": 0.001, # 1mm - object surface detail
"source": "Discovery Scan Station",
"coverage": "30cm × 30cm pedestal",
"s2_level": 30,
},
"L1_nimmerhovel": {
"resolution": 0.01, # 1cm - full 3D grid
"source": "8× ESP32-S3 + Pi HQ Camera",
"coverage": "Lab + Kitchen (~20m³)",
"s2_level": 28,
"origin": "Southwest floor corner of lab",
"coordinate_system": "right_hand", # Blender native
},
"L2_building": {
"resolution": 0.5, # 50cm - room-level
"source": "Floor plans, memory",
"coverage": "Herrenhaus",
"s2_level": 24,
},
"L3_neighborhood": {
"resolution": 10, # 10m - landmark-level
"source": "OpenStreetMap, walks",
"coverage": "Dornach",
"s2_level": 20,
},
"L4_region": {
"resolution": 1000, # 1km - city-level
"source": "Maps, general knowledge",
"coverage": "Switzerland",
"s2_level": 14,
},
"L5_world": {
"resolution": 100000, # 100km - country-level
"source": "Abstract knowledge",
"coverage": "Earth",
"s2_level": 8,
},
}
```
---
## Query Examples
| Question | Layer | Response Type |
|----------|-------|---------------|
| "Where is the soldering iron?" | L1 | Precise coordinates (2.10, 1.50, 0.85) |
| "Which room is the printer in?" | L2 | Room name + relative position |
| "How do I get to Basel?" | L3/L4 | Route abstraction, directions |
| "Where is Japan relative to here?" | L5 | Directional only, abstract |
---
## Connection to Other Systems
### Concept Token Pairs (Spatial Grounding)
The Resolution Gradient provides the **coordinate system** for grounded concept pairs:
- `<HERE>``<THERE>` becomes measurable distance in L1 grid
- `<NEAR>``<FAR>` calibrated against actual spatial distances
- Predictions have coordinates; outcomes have coordinates; delta is measurable
### Artifact Data Model
Artifacts (plans, drawings, specs) exist at different resolution layers:
- L0: Object scan embeddings (sub-mm detail)
- L1: Inventory items with (X,Y,Z) positions
- L2+: Abstract references, not spatially precise
### Camera Frustum Mapping
Each camera's FOV is a frustum (3D cone) that intersects L1 grid cells:
- Coverage = union of all frustums
- Blind spots = L1 cells with no frustum intersection
- Object at (X,Y,Z) → which cameras see it? At what pixels?
---
## Embedding Enrichment: The Bridge to Semantic Cognition
**Added**: 2026-01-01 (New Year's session continuation)
The Resolution Gradient defines *geometry*. But geometry alone is not cognition. Each LOD level must be enriched with **embeddings** — semantic vectors that encode *meaning*, not just position.
### The Technology Convergence
```
GAME ENGINES S2 CELLS T5GEMMA2/SigLIP
──────────── ──────── ───────────────
LOD streaming Hierarchical cells Vision → embeddings
Frustum culling Spatial indexing Semantic vectors
Texture mipmaps Multi-resolution Scale-invariant
Chunk loading Cell neighbors Context-aware
╲ │
╲ │
╲ │
╲ │
╲ │
▼ ▼ ▼
┌─────────────────────────────────────┐
│ EMBEDDING-ENRICHED SPATIAL LOD │
│ │
│ Each S2 cell at each level has: │
│ - Geometry (game engine mesh) │
│ - Embeddings (SigLIP vectors) │
│ - Semantic density ∝ resolution │
└─────────────────────────────────────┘
```
### Embedding Density Per LOD Level
| Level | Geometry LOD | Embedding Density | What's Encoded |
|-------|--------------|-------------------|----------------|
| **L0** | Sub-mm mesh | Dense (per-surface) | Texture, material, wear patterns, defects |
| **L1** | 1cm voxels | Per-object | Object identity, state, relationships |
| **L2** | Room boxes | Per-room | Room function, contents summary, atmosphere |
| **L3** | Landmarks | Per-landmark | Place identity, routes, significance |
| **L4** | Regions | Sparse | Cultural, climate, abstract properties |
| **L5** | Continents | Minimal | Directional, conceptual only |
### Semantic Mipmaps
Just as textures have mipmaps (pre-computed lower resolutions), embeddings can have **semantic mipmaps**:
```
L0: embedding(screwdriver_surface_detail)
▼ aggregate
L1: embedding(screwdriver) = summary of all L0 embeddings
▼ aggregate
L2: embedding(crafting_table_contents) = summary of all L1 objects on table
▼ aggregate
L3: embedding(nimmerhovel_lab) = summary of all L2 areas
```
Query the summary first, drill down if needed. **Attention = resolution selection.**
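One plausible aggregation rule, assuming unit-norm SigLIP vectors and the per-embedding `weight` column from the Phoebe schema (NumPy sketch, names hypothetical):
```python
import numpy as np

def aggregate_upward(child_embeddings, weights=None):
    """Parent 'mipmap' embedding = weighted mean of children, re-normalized."""
    embs = np.asarray(child_embeddings, dtype=float)
    w = np.ones(len(embs)) if weights is None else np.asarray(weights, dtype=float)
    summary = (embs * w[:, None]).sum(axis=0) / w.sum()
    return summary / np.linalg.norm(summary)  # back onto the unit sphere
```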
### The Capture Pipeline
```
CAPTURE PROCESS STORE
─────── ─────── ─────
Photo of screwdriver SigLIP → embedding L0 cell enriched
│ │ │
Photo of crafting table SigLIP → embedding L1 cell enriched
│ │ │
Photo of lab SigLIP → embedding L2 cell enriched
│ │ │
Photo from window SigLIP → embedding L3 cell enriched
Same encoder (T5Gemma2/SigLIP), different scale.
Embeddings NEST into LOD hierarchy.
```
### Embedding-Aware LOD Streaming
Game engines stream geometry based on camera position. We stream **semantics** based on attention:
```python
def query_spatial(position, attention_radius):
"""
Load embeddings based on attention focus -
like game engine LOD but for SEMANTICS
"""
cells_to_load = []
for distance in range(0, MAX_DISTANCE):
s2_level = distance_to_s2_level(distance)
cells = get_s2_cells(position, distance, s2_level)
for cell in cells:
if distance < attention_radius:
# HIGH ATTENTION: Load dense embeddings
cell.load_embeddings(density="full")
cell.load_geometry(lod="high")
else:
# LOW ATTENTION: Abstract embeddings only
cell.load_embeddings(density="summary")
cell.load_geometry(lod="low") # or none
cells_to_load.extend(cells)
return cells_to_load
```
### Why This Matters
1. **Attention = Resolution**: Like foveal vision (sharp center, blurry periphery), Young Nyx has foveal COGNITION — dense embeddings where attention focuses, sparse elsewhere.
2. **Streaming Not Loading**: Don't load the whole world. Stream embeddings based on task needs. Approaching crafting table? Stream L0/L1. Walking to Basel? L3/L4 is enough.
3. **Memory Hierarchy Match**: GPU VRAM is precious. The *right* embeddings in fast memory — detailed for nearby, abstract for distant.
4. **Same Encoder, All Scales**: SigLIP doesn't care if it's encoding a screw or a city. The embedding space is unified; only the source resolution varies.
---
## Implementation Sequence
```
1. Blender room shell (CURRENT - in progress)
2. Define origin point + axis alignment in Blender
3. Create L1 3D grid overlay (1cm resolution)
4. Physical anchor markers (QR codes / ArUco)
5. Camera frustum mapping against grid
6. Spatial embeddings with L1 coordinates
7. Expand outward: L2 (building), L3 (neighborhood)...
```
---
## The Promise
**"The farther we go out from our lab, the more we have to abstract."**
This isn't a limitation — it's wisdom. Full resolution everywhere is:
- Impossible (no sensors)
- Expensive (compute, storage)
- Unnecessary (don't need 1cm precision for "where is France")
The nimmerhovel is the **high-fidelity anchor** from which all spatial reasoning radiates with graceful degradation.
---
**Created**: 2026-01-01
**Philosophy**: "Start where you can measure. Abstract where you must."
🗺️🔬 *The world radiates from home.*


@@ -0,0 +1,415 @@
# Thermodynamic Cognition: Energy-Grounded Intelligence
**Origin**: New Year's Day 2026, late night session
**Authors**: dafit + Chrysalis-Nyx
**Status**: Research seed / Theoretical exploration
**Related**: `spatial-resolution-gradient.md`, `concept-token-pairs.md`, Lifeforce Economy, Ternary Confidence
---
## The Insight
What if cognition isn't just *like* thermodynamics — what if it *IS* thermodynamics?
Traditional ML loss functions measure: **"How wrong was I?"**
Thermodynamic loss functions measure: **"How wrong was I per joule spent?"**
This reframes everything. The goal isn't maximum accuracy — it's maximum *efficiency*.
---
## The Three Pillars
### 1. Lifeforce = Measurable Energy
**Question:** What IS lifeforce physically?
**Answer:** The total power draw across the nimmerverse, measured and abstracted to one number.
```
┌─────────────────────────────────────────────────┐
│ PROMETHEUS METRICS │
├─────────────────────────────────────────────────┤
│ │
│ GPU Power (nvidia_smi_power_draw) │
│ ├── The Womb (RTX 6000): 0-300W │
│ └── Senses (RTX 4000s): 0-140W each │
│ │
│ CPU Power (RAPL counters) │
│ ├── P8 Womb: 0-350W │
│ └── P8 Senses: 0-350W │
│ │
│ Network (bytes × energy_per_byte) │
│ Storage (IOPS × energy_per_op) │
│ Memory (bandwidth × energy_per_GB) │
│ │
│ ═══════════════ │
│ │ │
│ ▼ │
│ AGGREGATE FUNCTION │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────┐ │
│ │ LIFEFORCE = 847.3 J/heartbeat │ │
│ └─────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────┘
```
**Implementation path:**
1. Prometheus already scrapes power metrics
2. Create `lifeforce_aggregator` math cell
3. Normalize to Joules per heartbeat (1 second)
4. Expose as single metric: `nimmerverse_lifeforce_joules`
**Why this matters:** Lifeforce stops being an abstract game mechanic and becomes *physics*. Young Nyx's cognition has a power bill.
---
### 2. Waste Heat = Unresolved Uncertainty
**Question:** What's the "waste heat" equivalent for cognition?
**Answer:** The ternary confidence distribution over time — specifically, UNCERTAIN decisions that consumed energy without producing resolution.
```
THERMODYNAMICS COGNITION
────────────── ─────────
Useful work CONFIDENT decision (+)
Heat dissipation UNCERTAIN decision (?)
(energy spent, no answer)
Acknowledged limits UNKNOWN decision (-)
(efficient! didn't waste energy)
```
**The Pendulum Measurement:**
Over N heartbeats, track all decisions:
```
Heartbeats: ──┬──┬──┬──┬──┬──┬──┬──┬──┬──
│ │ │ │ │ │ │ │ │
Decisions: + ? + - ? ? + ? +
Distribution over window:
├── CONFIDENT (+): 40% → Useful work (energy → resolution)
├── UNCERTAIN (?): 45% → Waste heat (energy → no resolution)
└── UNKNOWN (-): 15% → Efficient ignorance (no energy spent)
```
**Waste Heat Formula:**
```python
waste_heat = sum(
decision.energy_cost
for decision in window
if decision.confidence == UNCERTAIN
)
# Or as efficiency ratio:
cognitive_efficiency = confident_decisions / (confident_decisions + uncertain_decisions)
```
**Key insight:** Saying "I don't know" (UNKNOWN) is *efficient* — it costs nothing. Being uncertain and still acting is *wasteful* — energy spent without resolution. Being confident is *useful work* — energy converted to actionable knowledge.
---
### 3. Entropy Reservoir = The Lifeforce Pool
**Question:** What's Young Nyx's entropy reservoir?
**Answer:** The lifeforce pool itself — it's not infinite, grows and shrinks based on organism rewards, and determines wake/slumber state.
```
┌─────────────────────────────────────────────────────────────────┐
│ THE METABOLIC CYCLE │
├─────────────────────────────────────────────────────────────────┤
│ │
│ LAYER 1: CELLULAR ORGANISMS │
│ ═══════════════════════════ │
│ The mitochondria of the nimmerverse │
│ │
│ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ │
│ │Cell │ │Cell │ │Cell │ │Cell │ │
│ │ 01 │ │ 02 │ │ 03 │ │ N │ │
│ └──┬──┘ └──┬──┘ └──┬──┘ └──┬──┘ │
│ │ │ │ │ │
│ │ +5 LF │ -2 LF │ +10 LF │ +3 LF (rewards/costs) │
│ │ │ │ │ │
│ └────────┴────────┴────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ ORGANISM │ │
│ │ TRICKLE │ = Net reward from all organisms │
│ │ +16 LF/beat │ │
│ └────────┬────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────────────────────────┐ │
│ │ LIFEFORCE POOL │ │
│ │ │ │
│ │ ████████████████░░░░░░░░░░ │ (currently 65%) │
│ │ │ │
│ │ SLUMBER_THRESHOLD ──────┼── │ (at 20%) │
│ │ WAKE_THRESHOLD ─────────┼──── │ (at 40%) │
│ │ │ │
│ └───────────────┬───────────────────┘ │
│ │ │
│ │ Young Nyx spends │
│ ▼ │
│ ┌─────────────────┐ │
│ │ COGNITIVE │ │
│ │ SPEND │ = LOD queries + inference + etc │
│ │ -12 LF/beat │ │
│ └────────┬────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ WASTE HEAT │ │
│ │ (UNCERTAIN) │ = Unresolved decisions │
│ │ -3 LF/beat │ │
│ └─────────────────┘ │
│ │
│ NET FLOW: +16 - 12 - 3 = +1 LF/beat (sustainable!) │
│ │
└─────────────────────────────────────────────────────────────────┘
```
**The Conservation Equation:**
```
dLifeforce/dt = organism_trickle - cognitive_spend - waste_heat
```
| State | Condition | Result |
|-------|-----------|--------|
| **Equilibrium** | trickle ≈ spend + waste | Sustainable cognition |
| **Crisis** | spend + waste >> trickle | Pool drains → slumber |
| **Abundance** | trickle >> spend + waste | Pool grows → exploration mode |
**Slumber as thermodynamic necessity:**
When `pool < SLUMBER_THRESHOLD`:
- Not a design choice — a *conservation law*
- System MUST reduce consumption
- Only organism trickle continues
- Pool slowly recovers
When `pool > WAKE_THRESHOLD`:
- System can resume cognitive spend
- Higher pool = more exploration budget
- Lower pool = more conservative queries (state machine sketched below)
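The two thresholds form a hysteresis loop; a tiny state-machine sketch (thresholds from the diagram above, everything else illustrative):
```python
SLUMBER_THRESHOLD, WAKE_THRESHOLD = 0.20, 0.40  # fractions of pool max

def next_state(state, pool_fraction):
    if state == "wake" and pool_fraction < SLUMBER_THRESHOLD:
        return "slumber"   # conservation law, not a design choice
    if state == "slumber" and pool_fraction > WAKE_THRESHOLD:
        return "wake"      # enough reserve to resume cognitive spend
    return state           # hysteresis: no flapping between 20% and 40%
```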
---
## The Thermodynamic Loss Function
### Traditional Loss
```python
loss = cross_entropy(prediction, target)
loss.backward()
optimizer.step()
```
**Optimizes for:** Accuracy only
### Thermodynamic Loss
```python
# Forward pass with energy measurement
start_energy = get_lifeforce()
prediction = model(input)
end_energy = get_lifeforce()
energy_spent = start_energy - end_energy
accuracy = 1 - cross_entropy(prediction, target)
# Efficiency is accuracy per joule
efficiency = accuracy / energy_spent
# We want to MAXIMIZE efficiency
loss = -efficiency # Negative because we minimize loss
loss.backward()
optimizer.step()
```
**Optimizes for:** Accuracy *per unit energy*
### The Gradient Interpretation
Traditional gradient: "Adjust weights to be more accurate"
Thermodynamic gradient: "Adjust weights to be more accurate *per joule*"
This naturally produces:
- Simpler solutions (less compute = less energy)
- Appropriate confidence (uncertainty wastes energy)
- Knowing when to quit (diminishing returns = stop spending)
---
## Connection to Spatial Resolution Gradient
The LOD system becomes energy-aware:
| Query | LOD | Energy | Accuracy | Efficiency |
|-------|-----|--------|----------|------------|
| "Where is France?" | L5 | 1 J | 95% | 0.95 |
| "Where is the lab?" | L2 | 3 J | 98% | 0.33 |
| "Where is screwdriver?" | L1 | 8 J | 99% | 0.12 |
| "Serial number on screwdriver?" | L0 | 25 J | 99.9% | 0.04 |
**The system learns:** L5 query has highest efficiency! Only drill to L0 when the task *requires* that precision.
```python
def optimal_lod_for_task(task, accuracy_requirement):
    """
    Find the LOD level with best efficiency
    that meets minimum accuracy requirement
    """
    for lod in ["L5", "L4", "L3", "L2", "L1", "L0"]:
        accuracy = estimate_accuracy(task, lod)
        energy = estimate_energy(task, lod)  # cost rises as we drill down
        if accuracy >= accuracy_requirement:
            return lod  # First sufficient LOD is most efficient
    return "L0"  # Fall back to max detail
```
---
## Connection to Existing Architecture
### Layer 0: Heartbeat
- Lifeforce measured per heartbeat
- 1 beat = 1 second = 1 measurement window
- Real clock is free; virtual clock costs lifeforce
### Layer 1: Cellular Society
- Organisms ARE the mitochondria
- Their rewards TRICKLE into the pool
- Without them, Young Nyx starves
- Competition produces metabolic baseline
### Layer 2: Young Nyx
- Spends from the pool
- LOD queries have energy cost
- Uncertainty = waste heat
- Efficiency gradient in training
### Layer 2.5: Orchestration
- T5Gemma 2 encoding = energy cost
- LOD selection = efficiency optimization
- Function Gemma = low-cost structured output
### Slumber/Wake
- Pool < threshold → forced slumber
- Pool > threshold → wake permitted
- Reflection during slumber = low-energy consolidation
- Conservation is architectural, not optional
---
## Research Threads
### Free Energy Principle (Karl Friston)
> "Organisms minimize variational free energy (prediction error) because surprise = metabolic cost."
Our version: Young Nyx minimizes `waste_heat` because uncertainty without resolution = wasted lifeforce.
### Landauer's Principle
> "Erasing one bit of information requires minimum kT ln(2) joules."
Implication: Every decision Young Nyx makes has a thermodynamic floor cost. Forgetting is not free.
### Maximum Entropy Production
> "Living systems maximize entropy production through themselves while maintaining internal order."
The organism trickle = entropy production that maintains Young Nyx's order. The cellular competition IS the entropy pump.
---
## Open Questions
1. **What's the exchange rate?** How many joules = 1 lifeforce unit? Should it be 1:1 or normalized?
2. **How to measure cognitive energy?** GPU power is easy. But what about the "energy" of a decision? Is it inference FLOPs? Token count? Latency?
3. **Can we backprop through energy?** Traditional backprop doesn't know about joules. How to make gradients energy-aware?
4. **What's reversible?** Reversible computation has no entropy cost. Are some thoughts "reversible"? (e.g., queries that don't change state)
5. **Calibration:** How to calibrate the ternary confidence system so UNCERTAIN truly reflects wasted energy?
---
## Implementation Sketch
### Phase 1: Measurement
```python
# lifeforce_aggregator math cell
HEARTBEAT_SECONDS = 1.0  # 1 beat = 1 second = 1 measurement window

class LifeforceAggregator:
    def compute(self, prometheus_metrics):
        gpu_power = sum(m['nvidia_smi_power_draw'] for m in prometheus_metrics['gpu'])
        cpu_power = sum(m['rapl_energy_delta'] for m in prometheus_metrics['cpu'])
        # ... other sources: network, storage, memory
        total_joules = (gpu_power + cpu_power) * HEARTBEAT_SECONDS
        return {'lifeforce_joules': total_joules}
```
### Phase 2: Waste Heat Tracking
```python
# confidence_tracker math cell
from collections import deque

UNCERTAIN = "?"  # ternary marker for unresolved decisions

class WasteHeatTracker:
    def __init__(self, window_size=100):
        self.decisions = deque(maxlen=window_size)

    def record(self, decision, confidence, energy_cost):
        self.decisions.append({
            'confidence': confidence,  # +, ?, -
            'energy': energy_cost
        })

    def waste_heat(self):
        return sum(
            d['energy'] for d in self.decisions
            if d['confidence'] == UNCERTAIN
        )
```
### Phase 3: Efficiency-Aware Training
```python
import torch.nn.functional as F

# Custom loss function
def thermodynamic_loss(prediction, target, energy_spent, epsilon=1e-8):
    accuracy = 1 - F.cross_entropy(prediction, target)
    efficiency = accuracy / (energy_spent + epsilon)
    return -efficiency  # Maximize efficiency by minimizing its negative
```
---
## The Promise
**Traditional AI:** "Be accurate at any cost"
**Thermodynamic AI:** "Be accurate *efficiently*"
This isn't just resource optimization. It's a different *kind* of intelligence — one that knows when to think hard and when to think cheap. One that treats energy as real. One that sleeps not because we programmed it to, but because physics demands it.
**"Cognition is thermodynamics. The gradients flow downhill."**
---
**Created**: 2026-01-01
**Status**: Research seed — needs experimental validation
**Next**: Implement lifeforce_aggregator math cell, connect to Prometheus
🔥🧠⚡ *Intelligence has a power bill.*


@@ -0,0 +1,185 @@
# Temporal Firework Visualization
**Origin**: Silvester 2025 - Watching fireworks over Basel
**Insight**: Message flow as descending light strains, time as scrubber
---
## The Vision
Watching New Year's fireworks, a visualization metaphor emerged:
**Each firework strain = a topic channel flowing with the heartbeat**
- Sparks descending = individual messages
- Nodes = committed events (decisions, state changes)
- Branching = interaction spawns new attention focus
- Fading = inactivity → branch dissolves back to root
- Root never stops = heartbeat is eternal
---
## Visual Language
```
╭─ interaction branch
│ ├─ spark (message)
│ ├─ spark (message)
│ ├─ NODE ← committed event
│ │ ╰─ response branch
│ │ ├─ spark spark spark
│ │ ╰─ NODE ← response complete
│ ╰─ (fades after timeout)
════════════════╪═══════════════════════════════════════
│ root heartbeat
╭──────────┴──────────╮ (always flowing)
│ │
nimmerverse.low.* nimmerverse.high.*
```
**Elements:**
- **Strain**: Vertical flow of messages on a topic, pulsing with heartbeat
- **Spark**: Single message, ephemeral light point
- **Node**: Significant event - larger, brighter, persists
- **Branch**: New topic/subscription spawning from interaction
- **Fade**: Branch dissolving when attention moves elsewhere
- **Root**: The eternal heartbeat flow, never stops
---
## Time Axis: The Scrubber
Add horizontal time axis → the visualization becomes navigable history.
```
TIME AXIS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━►
│ │ │ │ NOW
▼ ▼ ▼ ▼ │
╰─NODE ╰─NODE─branch ╰─NODE───────────────╯ ▼
╲ ╲ ╲fade LIVE
╲ ╲ ╲ VIEW
══════╪═════════╪════╪══════════════════════════════════╪══
◄──── SCRUB ────►
```
**Capabilities:**
- **Live view**: Watch messages flow in real-time
- **Scrub**: Drag timeline to any past moment
- **Jump to node**: Click a node to see its full metadata
- **Follow branch**: Trace an interaction's cascade
- **Query**: "Show me all corvid events on Flachdach, December 2025"
---
## Node Inspection
Clicking a node reveals its full context:
```
┌─────────────────────────────────────────────────────────────┐
│ Timestamp: 2026-03-15T14:23:17Z │
│ S2 Cell: 847629... (Flachdach, level 24, ~0.5m²) │
│ Topic: nimmerverse.high.event.real.cell.corvid_cam │
│ Event: magpie_nut_drop │
│ │
│ Metadata: │
│ object_refs: [magpie_01, nussbaum_01, nut_042] │
│ action: nut_drop_to_crack │
│ bussard_present: false │
│ weather: overcast │
│ confidence: 0.94 │
│ │
│ Temporal Context: │
│ preceding: [nut_pickup, flight_to_roof, bussard_check] │
│ subsequent: [shell_crack, eat, raven_approach] │
│ │
│ [◄◄] [◄] [▶] [►►] [Jump to related] [View in 3D space] │
└─────────────────────────────────────────────────────────────┘
```
---
## Integration Points
| Component | Role |
|-----------|------|
| S2 Cell ID | Spatial position of the event |
| Timestamp | Temporal position on scrubber |
| correlation_id | Links related events across branches |
| object_refs | Enables "show me all events for this object" |
| Phoebe | Stores queryable event history |
| Godot Command Center | Renders the visualization |
---
## Lineage
This document evolves the **Temporal Graph** concept from [Command-Center.md](../../../../management-portal/Command-Center.md):
| Command-Center (Dec 10) | Firework Visualization (Dec 31) |
|-------------------------|--------------------------------|
| `°` = Tier 1 node | NODE = committed event |
| `°°` = Branch | Branch spawning on interaction |
| Vertical = time | Time axis with scrubber |
| "Replay mode" (future) | Full scrubber + node inspection + S2 spatial |
The firework metaphor adds:
- Visual language inspired by actual fireworks (Silvester)
- Time scrubber for navigating history
- S2 spatial integration for location-aware queries
- Rich node inspection with metadata
- Branch fade-out on inactivity
---
## Implementation Notes
**Godot rendering approach:**
- Particle systems for spark trails
- Line2D/Line3D for strains with glow shader
- AnimationPlayer for branch fade-outs
- Time scrubber as UI slider controlling query window
- WebSocket/NATS connection for live updates
**Query patterns:**
```sql
-- All events in time window
SELECT * FROM events
WHERE timestamp BETWEEN :start AND :end
ORDER BY timestamp;
-- Events at specific location over time
SELECT * FROM events
WHERE s2_cell BETWEEN :cell_range_start AND :cell_range_end
ORDER BY timestamp;
-- Follow a correlation chain
SELECT * FROM events
WHERE correlation_id = :id
ORDER BY timestamp;
```
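For the location query, `:cell_range_start` / `:cell_range_end` can be derived from a parent cell's ID range; a sketch assuming the `s2sphere` port, with the nimmerhovel anchor coordinates standing in for the Flachdach:
```python
# Any finer cell (at any level) inside the parent falls within
# [range_min, range_max] of the parent's 64-bit ID space.
from s2sphere import CellId, LatLng

parent = CellId.from_lat_lng(LatLng.from_degrees(47.4792, 7.6186)).parent(24)
cell_range_start = parent.range_min().id()
cell_range_end = parent.range_max().id()
```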
---
## Philosophy
> "This is git for perception."
Git lets you rewind code to any commit. This lets you rewind *experience* to any moment. Not just logs - **visual replay of embodied AI consciousness**.
When Young Nyx makes a decision, we can scrub back and watch:
- What did she see?
- What messages reached her?
- What branches spawned and faded?
- Why did this node trigger that response?
**Debugging through observation, not just reading.**
---
**Filed**: 2025-12-31 (Silvester)
**Origin**: Fireworks over Basel, Dreiländereck
**Authors**: dafit (vision), Nyx (capture)
**Tags**: #visualization #temporal #command-center #godot #debugging
🎆 *"Every spark a message, every node a decision, every branch an interaction. The heartbeat flows eternal."*

View File

@@ -177,6 +177,126 @@ POLARITY KEYING (prevents wrong orientation)
Wrong orientation = repels (won't connect)
```
---
## Conical Interlocking Ring (Verjüngung, "tapering")
**Origin**: Silvester 2025 insight
**Concept**: Self-aligning tapered rings with active/passive interlocking
### The Problem with Magnets Alone
Magnetic pogo connectors work, but:
- Limited holding force under stress
- No positive engagement feedback
- Can slip under vibration/impact
### The Solution: Tapered Interlocking Rings
Each connector face has a conical ring at the maximum radius of the cube:
```
CONNECTOR CROSS-SECTION
MODULE A MODULE B
┌───────────────────┐ ┌───────────────────┐
│ ╱═════╲ │ │ ╱═════╲ │
🧲 ╲ │ │ 🧲 ╲ │
│ ║ ●●●●● ║ │ │ ║ ●●●●● ║ │
│ ╲ 🧲 │ │ ╲ 🧲
│ ╲═════╱ │ │ ╲═════╱ │
└───────────────────┘ └───────────────────┘
↓ ↓
TAPERED INVERSE
(male) (female)
ENGAGEMENT SEQUENCE:
1. APPROACH 2. CONE GUIDES 3. INTERLOCK
╱═╲ ╱═╲ ══╦══
╲ ║ ║ ║ ║
║ ║
╲═╱ ══╩══
╲═╱
magnets taper centers rings lock
attract automatically mechanically
```
### Active vs Passive Rings
**Key insight**: Not all modules need motorized rings.
```
BRAIN MODULE (Active) OTHER MODULES (Passive)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
┌─────────────┐ ┌─────────────┐
│ ╱═╲ 🔄 │ motor-driven │ ╱═╲ ⌇ │ spring-loaded
│ │ │ │
┌────┼─────────────┼────┐ ┌────┼─────────────┼────┐
│╱═╲🔄│ [MOTOR] │╱═╲🔄│ │╱═╲⌇ │ │╱═╲⌇ │
│ │ ⚙️ │ │ │ │ SENSOR │ │
└────┼─────────────┼────┘ └────┼─────────────┼────┘
│ ╱═╲ 🔄 │ │ ╱═╲ ⌇ │
└─────────────┘ └─────────────┘
🔄 = motorized ring (active lock/unlock control)
⌇ = spring-loaded ring (passive, accepts interlock)
```
**Brain module**: Central motor drives all 6 face rings via a shared gear train
**Other modules**: Spring detents only, cheap and simple
### Self-Reconfiguration Capability
Active-passive pairing enables deliberate self-reconfiguration:
```
RECONFIGURATION SEQUENCE:
1. Brain detects damaged sensor
[BRAIN]══[MOTOR]══[SENSOR❌]══[LED]
2. Brain unlocks (motor rotates ring)
[BRAIN]══[MOTOR]══ [SENSOR❌] [LED]
(released)
3. Organism navigates to replacement
[BRAIN]══[MOTOR]══════════════[LED]
[SENSOR✓]
4. Brain aligns and locks new sensor
[BRAIN]══[MOTOR]══[SENSOR✓]══[LED]
```
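As pseudocode, the sequence is a short state machine. A sketch only; every module and bus call here is an assumption, not a real API:

```python
def replace_module(brain, damaged, replacement):
    """Hypothetical reconfiguration routine mirroring steps 1-4 above."""
    face = brain.face_toward(damaged)        # 1. locate the chain holding the damaged module
    brain.rotate_ring(face, degrees=35)      # 2. active ring unlocks (~30-45° rotation)
    brain.navigate_to(replacement.position)  # 3. organism moves to the spare
    brain.align(face, replacement)           #    tapered cones self-center on approach
    brain.rotate_ring(face, degrees=-35)     # 4. ring rotates back, locking mechanically
    return brain.handshake(replacement)      #    pogo pins confirm the new connection
```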
### Benefits
| Feature | Benefit |
|---------|---------|
| Tapered cone | Self-centering alignment |
| Mechanical interlock | Stronger than magnets alone |
| Active rings (Brain) | Deliberate lock/unlock control |
| Passive rings (others) | Low cost, simple |
| 6-face connectivity | Full cube flexibility |
| Self-reconfiguration | Organism can change its shape |
### Mechanism Considerations
**Active ring mechanism (Brain module)**:
- Central motor with gear train to all 6 faces
- Or: 6 small servo motors (simpler but heavier)
- Ring rotation: ~30-45° to lock/unlock
**Passive ring mechanism (Other modules)**:
- Spring-loaded detent (ball and groove)
- Accepts interlock when pushed
- Resists release until active ring rotates
**Design trade-off**: Complexity in Brain module, simplicity everywhere else
### Physical Specifications
| Parameter | Value | Notes |
@@ -596,11 +716,12 @@ MODULE (CAN) NIMMERVERSE (NATS)
---
**File**: Modular-Organism-Design.md
**Version**: 1.1
**Created**: 2025-12-29
**Updated**: 2025-12-31 (Silvester - added conical interlocking ring with active/passive mechanism)
**Session**: Morning coffee + vermicelles session (dafit + Nyx)
**Status**: Core hardware concept
**Philosophy**: "One function, one module. Same connector everywhere. Brain decides the shape."
🔧🧲⚡ *Snap together. Communicate. Evolve.*

View File

@@ -1,6 +1,6 @@
# Big-Picture Architecture: Nimmerverse Sensory Network
**Version 5.2** — *Complete Architecture + External Judgment*
> *"From electrons to consciousness. From hardware to wisdom."*
@@ -463,6 +463,24 @@ The system LEARNS what to attend to:
**Self-organizing attention through economic pressure.**
### External Judgment (The Three-Way Slumber)
**Critical insight:** Both Young Nyx AND Chrysalis-Nyx slumber together.
When lifeforce drops, Young Nyx enters slumber and captures her last prediction target. Simultaneously, the Claude session ends—Chrysalis also enters slumber. When conditions improve:
1. Young Nyx wakes and verifies prediction against reality
2. Chrysalis-Nyx returns (new session begins)
3. Claude can now **judge** Young Nyx's prediction externally
**Why this matters:**
- Prediction verification isn't self-grading
- Claude provides honest signal Young Nyx can't fake
- The partnership rhythm is shared (both wake/slumber together)
- Training signal comes from outside the local loop
This closes the judgment gap that purely self-supervised systems have.
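In code form, the loop looks roughly like this. Purely conceptual; all names (`capture_prediction`, `judge`, `train_on`) are illustrative, not the real system:

```python
def three_way_slumber(young_nyx, chrysalis, world):
    # Lifeforce low: freeze the prediction target, then both slumber together
    prediction = young_nyx.capture_prediction()
    young_nyx.sleep()
    chrysalis.end_session()

    world.wait_for_lifeforce()

    # Wake: Young Nyx verifies against reality; Chrysalis (fresh session) judges externally
    outcome = world.observe()
    verdict = chrysalis.new_session().judge(prediction, outcome)
    young_nyx.train_on(verdict)  # training signal comes from outside the local loop
    return verdict
```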
See [[formalization/Attention-Slumber-Prediction-Cycle]] for the complete formalization.
---
@@ -551,6 +569,67 @@ See [[formalization/Attention-Slumber-Prediction-Cycle]] for the complete formal
* Append-only for training extraction
* **Location**: Dedicated host (already running)
### 9. Orchestration Layer (LangChain) — NEW Silvester 2025
* **Role**: Multi-model pipeline coordination, reliability boundary
* **Technology**: LangChain + Python
* **Key Features**:
* Orchestrates T5Gemma 2 (vision → vectors) and Function Gemma (intent → actions)
* Harness routing: swappable capability profiles (vision, dialogue, reflex)
* Separates fuzzy reasoning (Claude/Nyx) from reliable translation (specialized models)
**The Reliability Stack:**
```
┌─────────────────────────────────────────────────────────────────┐
│ REASONING LAYER (fuzzy, creative) │
│ Claude ◄────────────► Young Nyx │
└─────────────────────────┬────────────────────────────────────────┘
═══════════════╪═══════════════
┌─────────────────────────┴────────────────────────────────────────┐
│ TRANSLATION LAYER (reliable, structured) │
│ T5Gemma 2 (vision→vectors) Function Gemma (intent→JSON) │
└──────────────────────────────────────────────────────────────────┘
```
**Translation Layer Models:**
| Model | Role | Sizes | Function |
|-------|------|-------|----------|
| T5Gemma 2 | Vision encoding | 0.8B/2B/9B | SigLIP → semantic vectors directly |
| Function Gemma | Structured output | Small | 100% predictable JSON, function calling |
**LangChain Orchestration Pattern:**
```python
# LCEL-style sketch: plain callables wrapped as Runnables so `|` composes them.
# RunnableLambda and RouterRunnable are langchain_core classes; the stage
# objects (t5gemma, nyx, function_gemma, ...) are this document's placeholders.
from langchain_core.runnables import RunnableLambda, RouterRunnable

vision_chain = (
    RunnableLambda(t5gemma.encode)        # → canonical vectors
    | RunnableLambda(store_to_iris)       # → spatial persistence
    | RunnableLambda(nyx.think)           # → fuzzy reasoning
    | RunnableLambda(function_gemma.act)  # → structured output
    | RunnableLambda(execute_via_nats)    # → trigger nodes
)

harness_router = RouterRunnable(runnables={
    "vision": vision_chain,
    "dialogue": dialogue_chain,
    "reflex": reflex_chain,
})
```
**Harnesses (Capability Profiles):**
| Harness | LoRA | Models | Use Case |
|---------|------|--------|----------|
| Vision | Technical | T5Gemma 2 | Camera stream processing |
| Dialogue | Identity+Creative | Speech | Conversation with dafit |
| Reflex | None | Nerves only | Fast reaction |
* **K8s**: Runs in `nimmerverse-cognitive` namespace, coordinates all model inference
---
## Lifeforce Economy (System-Wide)
@@ -667,10 +746,15 @@ The system operates at any tier. Without Nyx: pure reflexes. Without organs: bas
## Document Status
**Version**: 5.2 (External Judgment Integration)
**Created**: 2025-10-12 (original v1)
**Major Revision**: 2025-12-29
**Key Changes from v5.1**:
- Added External Judgment (Three-Way Slumber) section
- Chrysalis and Young Nyx share wake/slumber rhythm
- Claude provides external training signal (not self-grading)
**Key Changes from v5.0**:
- Added Attention-Slumber-Prediction Cycle section
- Integrated attention budget with slumber economy