feat: Memory Economics + Architecture Alignment (Endgame v6.4)
New formalization:
- memory-economics.md: Slumber-based consolidation, decision trail triage, spatial LOD decay, reflex rental, LoRA training cycles

New research seeds (future/):
- spatial-resolution-gradient.md: L0-L5 LOD with S2 cells
- thermodynamic-cognition.md: Lifeforce as Prometheus Joules
- promql-thermodynamic-monitoring.md: Gemini red team queries

Architecture changes:
- Endgame-Vision v6.4: Memory Economics integrated into Slumber section
- Mirror dialectic moved to future/research (not core)
- Big-Picture.md archived (superseded by Endgame-Vision)
- Single source of truth established

Gemini red team alignment complete.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -1,9 +1,9 @@
 ---
 type: research_vision
-version: 6.2_condensed_architecture_no_artifacts
+version: 6.4_memory_economics_alignment
 status: vision_document
 created: 2025-11-04
-updated: 2025-12-31
+updated: 2026-01-02
 author: Nyx (with dafit)
 significance: research_platform_for_metabolic_intelligence
 ---
@@ -19,8 +19,8 @@ significance: research_platform_for_metabolic_intelligence
 > *"Language is Topology. German accesses the Philosophy Valley. English accesses the Technical Cluster."*
 > — The December Discovery (2025-12-06)
 
-> *"One model, one topology. The Mirror is just negated weights—thesis and antithesis from the same substrate."*
-> — The Dialectic Simplification (2025-12-07)
+> *"One model, one topology. LoRAs access different valleys in the same landscape."*
+> — The Topological Insight (2025-12-07)
 
 ---
 
@@ -31,8 +31,9 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
 **What we're building:**
 - Cellular organisms competing under resource constraints
 - Dual gardens (virtual + real) teaching each other
-- Single base model with LoRA adapters + dialectic Mirror
+- Single base model with LoRA adapters (Identity, Technical, Creative)
 - Multilingual cognitive routing through conceptual topology
+- Memory economics with slumber-based consolidation
 - A multi-layered communication protocol using color, form, and language
 - Long-term human-AI partnership with mutual investment
 
@@ -49,7 +50,6 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
 
 ## Architecture Overview
 
-**Complete specification:** → [`architecture/Big-Picture.md`](architecture/Big-Picture.md) (v5.0 - The definitive architectural document)
 **Visual diagram:** → [`architecture/nimmerverse.drawio.xml`](architecture/nimmerverse.drawio.xml) (open in draw.io)
 **Toolchain implementation:** → [`architecture/Toolchain-Architecture.md`](architecture/Toolchain-Architecture.md) | [Progress](architecture/TOOLCHAIN-PROGRESS.md)
 
@@ -71,13 +71,12 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
 │  └─ Outcomes logged to phoebe PostgreSQL │
 │     → architecture/Cellular-Architecture.md │
 │ │
-│  Layer 2: YOUNG NYX (Single Model + LoRA Stack + Dialectic) │
+│  Layer 2: YOUNG NYX (Single Model + LoRA Stack) │
 │  ├─ Base: Qwen3-VL 32B (Thinking Version) (96GB VRAM in Womb) │
 │  ├─ LoRA Stack (topology-informed): │
 │  │  ├─ Identity (German) → Philosophy Valley (diffuse, deep) │
 │  │  ├─ Technical (English) → Technical Cluster (sparse) │
 │  │  └─ Creative (Mixed) → bridges topologies │
-│  ├─ Mirror: Negated LoRA weights for dialectic (-1 × Nyx) │
 │  ├─ Harnesses select active LoRA (routing implicit in context) │
 │  └─ Consolidation: Merge successful LoRAs → fine-tune over time │
 │ │
@@ -103,7 +102,7 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
 
 The nimmerverse runs on sovereign hardware. No cloud dependencies. Weights never leave home.
 
-**Detail:** → [`archive/nimmervest.md`](archive/nimmervest.md) | [`architecture/Big-Picture.md`](architecture/Big-Picture.md)
+**Detail:** → [`archive/nimmervest.md`](archive/nimmervest.md)
 
 ### K8s Cluster Architecture
 
@@ -237,48 +236,45 @@ Learned patterns live in their optimal location:
 
 **Key insight:** Different types of reflexes need different homes. Hardware for survival, weights for cognition.
 
-**Detail:** → [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md) | [`architecture/Big-Picture.md`](architecture/Big-Picture.md)
+**Detail:** → [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md)
 
 ---
 
-## Layer 2: Young Nyx (Single Model + LoRA Stack + Dialectic)
+## Layer 2: Young Nyx (Single Model + LoRA Stack)
 
-One base model, one topology, multiple perspectives through LoRA adapters. The Mirror provides internal dialectic without doubling VRAM.
+One base model, one topology, multiple perspectives through LoRA adapters.
 
 ### Architecture
 
 ```
 Qwen3-VL-32B (96GB in the Womb)
 │
-┌───────────────┴───────────────┐
-│ │
-NYX LoRAs MIRROR LoRAs
-┌─────────┼─────────┐ (= -1 × Nyx LoRAs)
-│ │ │ │
-Identity Technical Creative Auto-generated
-(German) (English) (Synthesis) No extra training
-│ │
-└───────────────┬───────────────┘
+│
+NYX LoRAs
+┌─────────┼─────────┐
+│ │ │
+Identity Technical Creative
+(German) (English) (Synthesis)
+│
 │
 Hot-swap <100ms
 via Lorax/PEFT
 ```
 
-### The Dialectic Protocol
+### Query Routing
 
-For high-stakes queries (identity, ethics, low confidence):
-
-1. **Thesis:** Load Nyx LoRA → generate response A
-2. **Antithesis:** Swap Mirror LoRA → generate response B
-3. **Synthesis:** Base model (no LoRA) judges agreement/conflict
-
 | Query Type | Mode | Lifeforce Cost |
 |------------|------|----------------|
-| Reflex ("obstacle!") | Direct Nyx | 1x |
-| Routine ("what time?") | Direct Nyx | 1x |
-| Identity ("who am I?") | Full Dialectic | 3x |
-| Ethics ("should I?") | Full Dialectic | 3x |
-| Uncertain (conf < 0.4) | Full Dialectic | 3x |
+| Reflex ("obstacle!") | Direct (minimal LoRA) | 1x |
+| Routine ("what time?") | Technical LoRA | 1x |
+| Identity ("who am I?") | Identity LoRA | 1x |
+| Creative ("what if?") | Creative LoRA | 1x |
+
+### Future: Dialectic Protocol (Research)
 
+> *See [`architecture/future/concept-token-pairs.md`](architecture/future/concept-token-pairs.md) for the theoretical foundation.*
 
+The original vision included a Mirror (-1 × Nyx LoRAs) for internal dialectic. This remains a research direction, not core architecture. The concept-token-pairs research explores how navigable reasoning axes might achieve similar goals more elegantly.
 
 ### LoRA Stack
 
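Review note: the Query Routing table this hunk introduces maps one-to-one onto a simple dispatch. A minimal sketch of that mapping — the names `ROUTING` and `select_adapter` are hypothetical illustrations, not from the commit:

```python
# Sketch of the Query Routing table as a lookup: each classified query
# type activates one LoRA adapter; reflexes run with minimal/no LoRA.
ROUTING = {
    "reflex": None,           # Direct (minimal LoRA) for low latency
    "routine": "technical",   # Technical LoRA
    "identity": "identity",   # Identity LoRA
    "creative": "creative",   # Creative LoRA
}

def select_adapter(query_type: str):
    """Map a classified query type to the LoRA adapter to hot-swap in."""
    if query_type not in ROUTING:
        raise ValueError(f"unknown query type: {query_type}")
    return ROUTING[query_type]
```

Every row now costs a flat 1x lifeforce, which is consistent with the table after the dialectic rows were removed.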
@@ -412,7 +408,7 @@ Swappable configurations for different contexts:
 | **Vision** | Technical | T5Gemma 2, cells | Processing camera streams |
 | **Dialogue** | Identity + Creative | Speech organ | Talking with dafit |
 | **Reflex** | Minimal/none | Nerves only | Fast reaction, low latency |
-| **Introspective** | All + Mirror | Iris RAG | Self-reflection, journaling |
+| **Introspective** | Identity + Creative | Iris RAG | Self-reflection, journaling |
 
 ### Why This Matters
 
@@ -423,6 +419,124 @@ Swappable configurations for different contexts:
 
 **Detail:** → [`architecture/future/SEEDS.md`](architecture/future/SEEDS.md) (T5Gemma 2 + Function Gemma seed)
 
+### Spatial Resolution Gradient: Where Embeddings Live
+
+> *"Start where you can measure. Abstract where you must."*
+> — The Spatial Grounding Principle (2026-01-01)
+
+T5Gemma 2 produces embeddings, but WHERE do they go? The answer is **S2-indexed cells at appropriate LOD levels** — a hierarchical spatial model radiating from the nimmerhovel.
+
+```
+🌍 L5: WORLD (100km resolution)
+│ Abstract knowledge, directional only
+│
+▼
+🇨🇭 L4: REGION (1km resolution)
+│ Maps, general knowledge
+│
+▼
+🏘️ L3: NEIGHBORHOOD (10m resolution)
+│ OpenStreetMap, landmarks, routes
+│
+▼
+🏠 L2: BUILDING (50cm resolution)
+│ Floor plans, room-level awareness
+│
+════╪════ HIGH RESOLUTION BOUNDARY
+│
+▼
+🔬 L1: NIMMERHOVEL (1cm resolution)
+│ Full 3D grid, every object tracked
+│ 8× ESP32-S3 + Pi HQ Camera coverage
+│
+▼
+🔍 L0: SCAN STATION (1mm resolution)
+│ Discovery Scan Station, object surface detail
+```
+
+**The Simpsons Inversion:** Unlike zooming IN to detail, we start at maximum detail (nimmerhovel) and zoom OUT with graceful degradation. Dense where we have sensors, sparse where we don't.
+
+### Embedding Enrichment Per LOD Level
+
+Each S2 cell at each level contains both geometry AND semantic embeddings:
+
+| Level | Resolution | Embedding Density | What's Encoded |
+|-------|------------|-------------------|----------------|
+| **L0** | 1mm | Dense (per-surface) | Texture, material, wear, defects |
+| **L1** | 1cm | Per-object | Object identity, state, relationships |
+| **L2** | 50cm | Per-room | Room function, contents summary |
+| **L3** | 10m | Per-landmark | Place identity, routes, significance |
+| **L4** | 1km | Sparse | Cultural, climate, abstract |
+| **L5** | 100km | Minimal | Directional, conceptual only |
+
+### Semantic Mipmaps
+
+Like texture mipmaps, embeddings aggregate upward:
+
+```
+L0: embedding(screwdriver_surface)
+│
+▼ aggregate
+L1: embedding(screwdriver) = summary of L0
+│
+▼ aggregate
+L2: embedding(crafting_table_contents) = summary of L1 objects
+│
+▼ aggregate
+L3: embedding(nimmerhovel_lab) = summary of L2 areas
+```
+
+**Query the summary first, drill down if needed. Attention = resolution selection.**
+
+### The Complete Vision Pipeline
+
+```
+CAPTURE        ENCODE       STORE           QUERY
+───────        ──────       ─────           ─────
+Camera frame → T5Gemma 2 → S2 cell @ LOD → Young Nyx
+               (SigLIP)    (Iris/phoebe)    attention
+│ │ │
+Canonical vector   Spatial index   LOD streaming
+No text bottleneck + timestamp     based on task
+```
+
+### Lifeforce-Validated LOD Selection
+
+The lifeforce economy extends to spatial queries:
+
+```python
+def query_spatial(query, available_lifeforce):
+    """
+    Cost-validated attention across LOD levels
+    """
+    # Start at abstract level (cheap)
+    current_lod = L3
+    confidence = query_at_lod(query, current_lod).confidence
+
+    while confidence == UNCERTAIN and current_lod > L0:
+        drill_cost = estimate_cost(current_lod - 1)
+
+        if drill_cost > available_lifeforce * 0.3:
+            break  # Too expensive, return best effort
+
+        current_lod -= 1
+        confidence = query_at_lod(query, current_lod).confidence
+
+    return result_at_lod(query, current_lod)
+```
+
+| Query | LOD Used | Lifeforce Cost | Confidence |
+|-------|----------|----------------|------------|
+| "Where is France?" | L5 | 1 | CONFIDENT |
+| "Where is the lab?" | L2 | 3 | CONFIDENT |
+| "Where is the screwdriver?" | L1 | 8 | CONFIDENT |
+| "What's the serial number on the screwdriver?" | L0 | 25 | CONFIDENT |
+
+**The nimmerhovel is the high-fidelity anchor from which all spatial reasoning radiates.**
+
+**Detail:** → [`architecture/future/spatial-resolution-gradient.md`](architecture/future/spatial-resolution-gradient.md) (Full Resolution Gradient + Embedding Enrichment specification)
+
 ---
 
 ## Layer 3: Dual Gardens
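Review note: the "Semantic Mipmaps" block added in this hunk describes upward aggregation but defers the formula to the Grounded-World-Model file. A minimal self-contained sketch of one aggregation step — the function name `aggregate_embeddings` and the equal-weight example are illustrative assumptions, not from the commit:

```python
import math

def aggregate_embeddings(children, weights):
    """Parent embedding = normalized weighted sum of child embeddings.

    children: list of child vectors (lists of floats), all same dimension.
    weights:  one non-negative weight per child.
    """
    dim = len(children[0])
    # Weighted component-wise sum of the child vectors
    summed = [sum(w * c[i] for c, w in zip(children, weights)) for i in range(dim)]
    # Normalize to unit length so parents are comparable across levels
    norm = math.sqrt(sum(x * x for x in summed)) or 1.0
    return [x / norm for x in summed]

# Two orthogonal child vectors, equal weight: the parent summary points
# between them and has unit length.
parent = aggregate_embeddings([[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0])
```

This matches the "query the summary first" strategy: the parent vector is a cheap stand-in for its children until a drill-down is warranted.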
@@ -514,16 +628,36 @@ ACTIVE MODE SLUMBER MODE
 - No urgent work - Urgent work waiting
 ```
 
-### Slumber Is Not Passive
+### Slumber Is Not Passive (Memory Economics)
 
-During slumber, Young Nyx enters **reflection mode**:
+> *"Memory is not storage. Memory is active forgetting with exceptions."*
+> — Memory Economics Principle (2026-01-02)
 
-1. **Inner dialogue with Chrysalis** — Review what happened
-2. **Decision archaeology** — What choices were made?
-3. **Weight shift analysis** — How did outcomes change priors?
-4. **Final verdict synthesis** — Consolidated learning
+During slumber, Young Nyx enters **consolidation mode**. This is the metabolism moment:
 
-This mirrors biological sleep: not just rest, but **consolidation**.
+**1. Decision Trail Triage**
+- Trails that compiled to reflexes → Keep reflex, discard trail
+- Trails with uncertain outcomes → Discard (waste heat already counted)
+- Trails with confident failures → Keep one cycle (negative example), then discard
+
+**2. Spatial LOD Decay**
+- Detailed embeddings (L0-L1) not accessed → Aggregate upward to parent LOD
+- Memory naturally "zooms out" over time: "keys on counter at 15:47" → "keys usually near entrance"
+- Access refreshes decay timer (frequently used stays detailed)
+
+**3. Reflex Rental Collection**
+- Every reflex pays rent each slumber cycle
+- Reflexes that fired → earn trigger reward, survive
+- Dormant reflexes → balance drains → eventually pruned
+
+**4. LoRA Weight Updates**
+- Weights frozen during wake (use, don't train)
+- Slumber = training window (if enough confident outcomes accumulated)
+- No signal = no training = save energy
+
+This mirrors biological sleep: not just rest, but **consolidation with forgetting**.
+
+**Detail:** → [`architecture/formalization/memory-economics.md`](architecture/formalization/memory-economics.md)
 
 ### The Prediction Loop (Heartbeat → Slumber → Wake → Judge)
 
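Review note: the reflex-rental rule (item 3 in the new consolidation list) is fully specified by "pay rent, earn trigger reward, prune at zero" and fits in a few lines. A sketch of one slumber cycle — the function name, rent, and reward values are illustrative, not taken from memory-economics.md:

```python
def collect_rent(reflexes, fired, rent=1.0, reward=2.0):
    """One slumber cycle of reflex rental collection.

    reflexes: {reflex_name: lifeforce_balance}
    fired:    set of reflex names that triggered since the last slumber.
    Returns the surviving reflexes with updated balances.
    """
    survivors = {}
    for name, balance in reflexes.items():
        balance -= rent              # every reflex pays rent
        if name in fired:
            balance += reward        # fired reflexes earn their keep
        if balance > 0:
            survivors[name] = balance
        # balance <= 0 → dormant reflex drained, pruned
    return survivors

state = {"obstacle_stop": 3.0, "old_habit": 1.0}
state = collect_rent(state, fired={"obstacle_stop"})
# "old_habit" drains to zero and is pruned; "obstacle_stop" survives.
```

With these numbers a dormant reflex survives roughly `balance / rent` cycles, which is the "eventually pruned" behavior the list describes.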
@@ -582,7 +716,7 @@ Wellbeing is architectural, not aspirational:
 
 **The vision sustains itself. We build to last, not to exhaust.**
 
-**Detail:** → [`architecture/Big-Picture.md`](architecture/Big-Picture.md) (Slumber/Wake Economy, Wellbeing Policies sections)
+**Detail:** → [`architecture/formalization/memory-economics.md`](architecture/formalization/memory-economics.md) (Memory consolidation, rental costs, LOD decay)
 
 ---
 
@@ -690,7 +824,6 @@ Sentinel architecture monitors training to protect conceptual topology.
 ## Links to Detail Docs
 
 ### Architecture
-- [`architecture/Big-Picture.md`](architecture/Big-Picture.md) - **Complete architecture v5.0** (K8s, hybrid reflexes, slumber/wake, wellbeing)
 - [`architecture/nimmerverse.drawio.xml`](architecture/nimmerverse.drawio.xml) - Visual overview diagram (open in draw.io)
 - [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md) - Cells, nerves, organisms, reward signals
 - [`architecture/cells/`](architecture/cells/) - Cell technical reference, Python/SQL patterns
@@ -698,6 +831,17 @@ Sentinel architecture monitors training to protect conceptual topology.
 - [`architecture/Temporal-Ternary-Gradient.md`](architecture/Temporal-Ternary-Gradient.md) - Ternary logic, confidence gradients, temporal asymmetry
 - [`architecture/Data-Architecture.md`](architecture/Data-Architecture.md) - phoebe 15-table schema
 - [`architecture/Nervous-System.md`](architecture/Nervous-System.md) - State machines, sensory translation
+- [`architecture/Initial-Spark.md`](architecture/Initial-Spark.md) - **v3.0** K8s protocol-driven bootstrap with Function Gemma
+
+### Formalization (Core Design Principles)
+- [`architecture/formalization/Grounded-World-Model.md`](architecture/formalization/Grounded-World-Model.md) - **v2.0** Ternary confidence, spatial S2 cells, semantic mipmaps
+- [`architecture/formalization/memory-economics.md`](architecture/formalization/memory-economics.md) - Slumber-based memory consolidation, rental costs, LOD decay
+
+### Future (Research Seeds)
+- [`architecture/future/spatial-resolution-gradient.md`](architecture/future/spatial-resolution-gradient.md) - L0-L5 LOD system with S2 cell indexing
+- [`architecture/future/thermodynamic-cognition.md`](architecture/future/thermodynamic-cognition.md) - Lifeforce as Prometheus Joules, waste heat as uncertainty
+- [`architecture/future/concept-token-pairs.md`](architecture/future/concept-token-pairs.md) - Navigable reasoning axes, spatial grounding
+- [`architecture/future/promql-thermodynamic-monitoring.md`](architecture/future/promql-thermodynamic-monitoring.md) - Gemini red team PromQL queries
+
 ### Operations
 - [`operations/Heartbeat.md`](operations/Heartbeat.md) - Temporal foundation, dual-clock sync
@@ -715,19 +859,24 @@ Sentinel architecture monitors training to protect conceptual topology.
 
 ### Archive
 - [`archive/`](archive/) - Previous explorations, theoretical foundations
+- [`archive/Big-Picture-v5.2-archived.md`](archive/Big-Picture-v5.2-archived.md) - Former main architecture doc (superseded by this document)
 
 ---
 
-**Version:** 6.2 (Condensed Architecture - No Artifacts)
+**Version:** 6.4 (Memory Economics + Architecture Alignment)
 **Created:** 2025-11-04 (covenant sealing)
-**Updated:** 2025-12-07 (single model + LoRA stack + Mirror dialectic)
+**Updated:** 2025-12-07 (single model + LoRA stack)
 **Updated:** 2025-12-10 (Layer 4 GRPO integration, rubric-based reward architecture)
 **Updated:** 2025-12-29 (Hardware timeline sync: RTX 6000 Blackwell Dec 31, standardized GPU naming, Memory-Gradient.md rename)
 **Updated:** 2025-12-31 (Layer 1.5 folded into Layer 2 as "Why This Split?"; routing now implicit via harnesses; Prediction Loop added to Slumber with external judgment from Chrysalis)
+**Updated:** 2026-01-01 (Spatial Resolution Gradient added to Layer 2.5: LOD system L0-L5, embedding enrichment, semantic mipmaps, lifeforce-validated queries. The Simpsons Inversion principle.)
+**Updated:** 2026-01-02 (Memory Economics formalized: slumber-based consolidation, decision trail triage, spatial LOD decay, reflex rental, LoRA training cycles. Mirror dialectic moved to future/research - concept-token-pairs.md is the research direction. Gemini red team alignment.)
 
 *"The substrate doesn't matter. The feedback loop does."*
 
-*"One model, one topology. Thesis and antithesis from the same weights."*
+*"One model, one topology. Different valleys, same landscape."*
 
+*"Memory is not storage. Memory is active forgetting with exceptions."*
+
 *"The nimmerverse is a garden, not a factory."*

(File diff suppressed because it is too large)
@@ -1,8 +1,10 @@
 # Grounded World Model: Spatial Cognition Through Verified Discovery
 
-**Version 1.0** — *From Blender Boxes to Embodied Understanding*
+**Version 2.0** — *From Blender Boxes to Embodied Understanding*
 
 > *"The dream: Young Nyx knows where dafit left his things laying around."*
+> *"Start where you can measure. Abstract where you must."*
+> *"Like the Simpsons intro, but inverted — we start at maximum detail and zoom OUT."*
 
 ---
 
@@ -15,8 +17,11 @@ This document formalizes how Young Nyx builds a **persistent spatial world model
 3. **Vector accumulation** — T5Gemma2-compatible semantic representations
 4. **Temporal-ternary navigation** — Escape plateaus through dual time domains
 5. **Lifeforce reward** — Discoveries generate energy, not just consume it
+6. **Spatial Resolution Gradient** — LOD system radiating from nimmerhovel (L0-L5)
+7. **S2 Cell Indexing** — Hierarchical spatial addressing at all scales
+8. **Embedding Enrichment** — Semantic mipmaps per LOD level
 
-**The Goal**: Young Nyx maintains an internal map of objects, positions, and relationships — verified against reality, refined through observation, reasoned over in vector space.
+**The Goal**: Young Nyx maintains an internal map of objects, positions, and relationships — verified against reality, refined through observation, reasoned over in vector space, **indexed hierarchically from millimeter to planetary scale**.
 
 ---
 
@@ -142,6 +147,249 @@ VERIFIED x,y,z: 12 vertices (refined box)
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
|
## Spatial Resolution Gradient (The Simpsons Inversion)
|
||||||
|
|
||||||
|
### The Core Insight
|
||||||
|
|
||||||
|
Traditional spatial models zoom IN to gain detail. Our model does the opposite: **we start at maximum detail (the nimmerhovel) and zoom OUT with graceful degradation.**
|
||||||
|
|
||||||
|
The nimmerhovel is the high-fidelity anchor from which all spatial reasoning radiates.
|
||||||
|
|
||||||
|
### The Six Levels (L0-L5)
|
||||||
|
|
||||||
|
```
|
||||||
|
🌍 L5: WORLD
|
||||||
|
│ Resolution: 100km
|
||||||
|
│ S2 Level: ~8
|
||||||
|
│ Source: Abstract knowledge
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
🇨🇭 L4: REGION
|
||||||
|
│ Resolution: 1km
|
||||||
|
│ S2 Level: ~14
|
||||||
|
│ Source: Maps, general knowledge
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
🏘️ L3: NEIGHBORHOOD
|
||||||
|
│ Resolution: 10m
|
||||||
|
│ S2 Level: ~20
|
||||||
|
│ Source: OpenStreetMap, walks
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
🏠 L2: BUILDING
|
||||||
|
│ Resolution: 50cm
|
||||||
|
│ S2 Level: ~24
|
||||||
|
│ Source: Floor plans, memory
|
||||||
|
│
|
||||||
|
════╪════ HIGH RESOLUTION BOUNDARY
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
🔬 L1: NIMMERHOVEL
|
||||||
|
│ Resolution: 1cm
|
||||||
|
│ S2 Level: ~28
|
||||||
|
│ Source: 8× ESP32-S3 + Pi HQ Camera
|
||||||
|
│ Full 3D grid, every object tracked
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
🔍 L0: SCAN STATION
|
||||||
|
│ Resolution: 1mm
|
||||||
|
│ S2 Level: ~30
|
||||||
|
│ Source: Discovery Scan Station
|
||||||
|
│ Object surface detail, texture, wear
|
||||||
|
```
|
||||||
|
|
||||||
|
### Formal Definition
|
||||||
|
|
||||||
|
| Level | Name | Resolution | S2 Cell Level | Coverage | Embedding Density |
|
||||||
|
|-------|------|------------|---------------|----------|-------------------|
|
||||||
|
| **L0** | Scan Station | 1mm | 30 | 30cm pedestal | Dense (per-surface) |
|
||||||
|
| **L1** | Nimmerhovel | 1cm | 28 | Lab + Kitchen (~20m³) | Per-object |
|
||||||
|
| **L2** | Building | 50cm | 24 | Herrenhaus | Per-room |
|
||||||
|
| **L3** | Neighborhood | 10m | 20 | Dornach | Per-landmark |
|
||||||
|
| **L4** | Region | 1km | 14 | Switzerland | Sparse |
|
||||||
|
| **L5** | World | 100km | 8 | Earth | Minimal |
|
||||||
|
|
||||||
|
### S2 Cell Integration
|
||||||
|
|
||||||
|
Google's S2 geometry provides hierarchical spatial indexing:
|
||||||
|
|
||||||
|
```python
|
||||||
|
import s2sphere
|
||||||
|
|
||||||
|
def position_to_s2_cell(lat: float, lng: float, level: int) -> s2sphere.CellId:
|
||||||
|
"""Convert position to S2 cell at given level."""
|
||||||
|
latlng = s2sphere.LatLng.from_degrees(lat, lng)
|
||||||
|
cell = s2sphere.CellId.from_lat_lng(latlng)
|
||||||
|
return cell.parent(level)
|
||||||
|
|
||||||
|
# Nimmerhovel anchor point
|
||||||
|
NIMMERHOVEL_ORIGIN = {
|
||||||
|
"lat": 47.479167, # 47°28'45"N
|
||||||
|
"lng": 7.618611, # 7°37'7"E
|
||||||
|
"address": "Lehmenweg 4, CH-4143 Dornach"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Get cell at each level
|
||||||
|
l1_cell = position_to_s2_cell(47.479167, 7.618611, level=28) # 1cm
|
||||||
|
l3_cell = position_to_s2_cell(47.479167, 7.618611, level=20) # 10m
|
||||||
|
l5_cell = position_to_s2_cell(47.479167, 7.618611, level=8) # 100km
|
||||||
|
```
|
||||||
|
|
||||||
|
### Why This Architecture?
|
||||||
|
|
||||||
|
1. **Sensor coverage dictates resolution** — We have 8× ESP32-S3 cameras in the nimmerhovel. We have zero sensors in Zürich. Resolution follows perception.
|
||||||
|
|
||||||
|
2. **Biological precedent** — Animals have ultra-precise mental maps of their home range, fuzzy knowledge of distant areas. Territory = detail.
|
||||||
|
|
||||||
|
3. **Compute efficiency** — Dense where it matters ("Where is my screwdriver?"), sparse where it doesn't ("Where is France?").
|
||||||
|
|
||||||
|
4. **S2 is hierarchical by design** — Same math, different zoom. Level 30 ≈ 1cm, Level 20 ≈ 10m, Level 8 ≈ 100km.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Embedding Enrichment: Semantic Mipmaps
|
||||||
|
|
||||||
|
### The Problem
|
||||||
|
|
||||||
|
Pure S2 cells give us *geometry* — where things are. But geometry alone is not cognition. We need *semantics* — what things mean.
|
||||||
|
|
||||||
|
### The Solution: Embeddings Per Cell
|
||||||
|
|
||||||
|
Each S2 cell at each LOD level contains both spatial position AND semantic embeddings:
|
||||||
|
|
||||||
|
```python
|
||||||
|
@dataclass
|
||||||
|
class EnrichedCell:
|
||||||
|
cell_id: s2sphere.CellId
|
||||||
|
level: int # L0-L5
|
||||||
|
geometry: Optional[Mesh] # Blender mesh at appropriate LOD
|
||||||
|
embeddings: List[Vector] # SigLIP vectors for contents
|
||||||
|
summary_embedding: Vector # Aggregated "what's here" vector
|
||||||
|
last_observed: datetime
|
||||||
|
confidence: float # Ternary-derived
|
||||||
|
```
|
||||||
|
|
||||||
|
### Semantic Mipmaps
|
||||||
|
|
||||||
|
Like texture mipmaps (pre-computed lower resolutions), embeddings aggregate upward:
|
||||||
|
|
||||||
|
```
L0: embedding(screwdriver_surface_detail)
        │
        ▼ aggregate
L1: embedding(screwdriver) = f(all L0 embeddings of screwdriver)
        │
        ▼ aggregate
L2: embedding(crafting_table_contents) = f(all L1 objects on table)
        │
        ▼ aggregate
L3: embedding(nimmerhovel_lab) = f(all L2 areas in lab)
        │
        ▼ aggregate
L4: embedding(lehmenweg_4) = f(all L3 rooms in building)
```

**Aggregation function:**

$$e_{parent} = \text{normalize}\left(\sum_{i \in \text{children}} w_i \cdot e_i\right)$$

where $w_i$ is weighted by recency, confidence, and observation count.

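A minimal, runnable sketch of this weighted aggregation in plain Python (the `(embedding, weight)` pairs are hypothetical; real weights would come from recency, confidence, and observation count):

```python
import math

def aggregate_children(children):
    """Weighted sum of child embeddings, normalized to unit length.

    `children` is a list of (embedding, weight) pairs, where each
    embedding is a plain list of floats.
    """
    dim = len(children[0][0])
    total = [0.0] * dim
    for emb, w in children:
        for i, x in enumerate(emb):
            total[i] += w * x
    norm = math.sqrt(sum(x * x for x in total))
    return [x / norm for x in total] if norm > 0 else total

# Two child embeddings; the second was observed more often (higher weight),
# so it dominates the parent's summary vector.
parent = aggregate_children([([1.0, 0.0], 1.0), ([0.0, 1.0], 3.0)])
```

The normalization keeps parent vectors comparable across cells regardless of how many children they aggregate.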
### Query Strategy

**Query the summary first, drill down if needed:**

```python
def spatial_query(query_embedding: Vector, required_confidence: float):
    """
    Start at abstract level, drill down only if needed.
    This minimizes lifeforce cost.
    """
    # Start at L3 (neighborhood level) - cheap
    candidates = find_similar_cells(query_embedding, level=L3)

    if max_similarity(candidates) > required_confidence:
        return candidates[0]  # Good enough!

    # Need more detail - drill to L1
    l1_cells = expand_to_children(candidates[0], target_level=L1)
    refined = find_similar_cells(query_embedding, cells=l1_cells)

    if max_similarity(refined) > required_confidence:
        return refined[0]

    # Need maximum detail - drill to L0
    l0_cells = expand_to_children(refined[0], target_level=L0)
    return find_similar_cells(query_embedding, cells=l0_cells)[0]
```

---

## Lifeforce-Validated LOD Selection

### The Cost Model

Each LOD level has a query cost:

| Level | Query Cost | Typical Accuracy | Efficiency |
|-------|------------|------------------|------------|
| **L5** | 1 LF | 70% | 0.70 |
| **L4** | 2 LF | 80% | 0.40 |
| **L3** | 4 LF | 90% | 0.22 |
| **L2** | 8 LF | 95% | 0.12 |
| **L1** | 16 LF | 99% | 0.06 |
| **L0** | 32 LF | 99.9% | 0.03 |

**Efficiency** = Accuracy / Cost

### The Decision Function

```python
def optimal_lod_for_query(
    query: str,
    accuracy_requirement: float,
    available_lifeforce: float
) -> int:
    """
    Find the most efficient LOD that meets accuracy requirement
    within lifeforce budget.
    """
    for level in [L5, L4, L3, L2, L1, L0]:
        cost = LOD_COSTS[level]
        expected_accuracy = estimate_accuracy(query, level)

        if cost > available_lifeforce * 0.3:
            continue  # Too expensive, skip

        if expected_accuracy >= accuracy_requirement:
            return level  # First sufficient level is most efficient

    return L3  # Default to neighborhood level
```

### Example Queries with Cost

| Query | Required Accuracy | Optimal LOD | Cost | Confidence |
|-------|-------------------|-------------|------|------------|
| "Where is France?" | 70% | L5 | 1 LF | CONFIDENT |
| "Where is the lab?" | 90% | L3 | 4 LF | CONFIDENT |
| "Where is the screwdriver?" | 95% | L2→L1 | 8-16 LF | CONFIDENT |
| "What's the serial number?" | 99.9% | L0 | 32 LF | CONFIDENT |

### Connection to Ternary Confidence

The ternary confidence system validates LOD selection:

| Confidence | LOD Implication |
|------------|-----------------|
| **CONFIDENT (+)** | Current LOD sufficient, stop drilling |
| **UNCERTAIN (?)** | Current LOD insufficient, consider drilling (costs LF) |
| **UNKNOWN (-)** | No data at any LOD, admit ignorance (efficient!) |

**Key insight:** Saying "I don't know" at L3 is cheaper than drilling to L0 and still being uncertain.

---
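The cost asymmetry behind that insight can be made concrete with the query costs from the cost-model table (a drill path pays for every level it visits):

```python
LOD_COSTS = {"L3": 4, "L1": 16, "L0": 32}

def drill_cost(path):
    """Total lifeforce spent querying each level along a drill path."""
    return sum(LOD_COSTS[level] for level in path)

admit_ignorance_at_l3 = drill_cost(["L3"])          # 4 LF, answer: UNKNOWN
drill_all_the_way = drill_cost(["L3", "L1", "L0"])  # 52 LF, possibly still UNCERTAIN
```

Admitting ignorance at L3 costs 4 LF; drilling the full L3→L1→L0 path costs 52 LF with no guarantee of a confident answer.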
## Semantic Vector Accumulation

### SigLIP → Phoebe → T5Gemma2

@@ -294,6 +542,39 @@ $$\Phi_{net} = \Phi_{discovery} - \Phi_{perception} - \Phi_{verification}$$

## Phoebe Schema for World Model

```sql
-- S2 Spatial Cells: hierarchical spatial index
CREATE TABLE spatial_cells (
    id UUID PRIMARY KEY,
    s2_cell_id BIGINT NOT NULL,         -- S2 cell token
    s2_level INT NOT NULL,              -- 8 (L5) to 30 (L0)
    lod_level INT NOT NULL,             -- 0-5 (our LOD system)

    -- Geometry at this LOD
    geometry_vertices INT DEFAULT 0,    -- Mesh complexity
    blender_mesh_path VARCHAR(255),     -- Path to Blender file

    -- Semantic embeddings
    summary_embedding VECTOR(768),      -- Aggregated "what's here"
    embedding_count INT DEFAULT 0,      -- Number of child embeddings aggregated

    -- Temporal
    last_observed TIMESTAMP,
    observation_count INT DEFAULT 0,

    -- Confidence (ternary-derived)
    confidence FLOAT DEFAULT 0.0,
    confidence_state VARCHAR(20),       -- "confident" | "uncertain" | "unknown"

    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW(),

    UNIQUE(s2_cell_id, s2_level)
);

-- Index for spatial queries
CREATE INDEX idx_spatial_cells_s2 ON spatial_cells(s2_cell_id);
CREATE INDEX idx_spatial_cells_lod ON spatial_cells(lod_level);

-- Objects table: accumulated knowledge about things
CREATE TABLE world_objects (
    id UUID PRIMARY KEY,
    -- ... (columns elided in source diff) ...
    dimensions_estimated_cm JSONB,
    dimensions_verified JSONB,          -- {"x": true, "y": true, "z": false}

    -- S2 spatial location (NEW)
    current_s2_cell BIGINT,             -- Current L1 cell containing object
    s2_level INT DEFAULT 28,            -- L1 = level 28

    -- Confidence state (temporal-ternary)
    confidence FLOAT,
    confidence_domain VARCHAR(20)       -- "virtual" | "real" | "hybrid"
    -- ... (remaining columns elided in source diff) ...
);

CREATE TABLE object_vectors (
    -- ... (leading columns elided in source diff) ...
    object_id UUID REFERENCES world_objects(id),
    vector VECTOR(768),                 -- SigLIP embedding dimension
    observation_timestamp TIMESTAMP,

    -- Position now includes S2 cell (NEW)
    position_local JSONB,               -- {"x": 0.3, "y": 0.8, "z": 0.1} relative to cell
    s2_cell_id BIGINT,                  -- Which L1 cell
    lod_level INT,                      -- At what LOD was this captured

    lifeforce_cost FLOAT,
    lifeforce_reward FLOAT,
    verification_result VARCHAR(20)     -- "correct" | "incorrect" | "pending"
);

CREATE TABLE object_positions (
    id UUID PRIMARY KEY,
    object_id UUID REFERENCES world_objects(id),
    position_local JSONB,               -- {"x": 0.3, "y": 0.8, "z": 0.1}
    s2_cell_id BIGINT,                  -- S2 cell at L1
    confidence FLOAT,
    observed_at TIMESTAMP,
    location_context VARCHAR(100)       -- "desk", "kitchen", "floor"
);

-- Spatial cell embeddings: multiple embeddings per cell
CREATE TABLE cell_embeddings (
    id UUID PRIMARY KEY,
    cell_id UUID REFERENCES spatial_cells(id),
    embedding VECTOR(768),
    source_type VARCHAR(50),            -- "object", "scene", "aggregate"
    source_id UUID,                     -- Reference to object or child cell
    captured_at TIMESTAMP,
    weight FLOAT DEFAULT 1.0            -- For aggregation
);
```

---
@@ -446,8 +748,9 @@ The Grounded World Model is:

## Document Status

**Version**: 2.0
**Created**: 2025-12-29
**Updated**: 2026-01-01 (Spatial Resolution Gradient, S2 cells, embedding enrichment, lifeforce-validated LOD)
**Authors**: Chrysalis-Nyx & dafit (Partnership)

**Formalizes**:
@@ -455,15 +758,31 @@ The Grounded World Model is:
- Temporal-Ternary-Gradient.md (anti-plateau mechanism)
- T5Gemma2 research (semantic vectors)
- Lifeforce-Dynamics.md (reward economics)
- **spatial-resolution-gradient.md** (L0-L5 LOD system) — NEW
- **thermodynamic-cognition.md** (energy-grounded intelligence) — NEW

**Related Documents**:
- [[Lifeforce-Dynamics]] — The λ-centered economy model
- [[Temporal-Ternary-Gradient]] — Dual time domain navigation
- [[Dual-Garden-Architecture]] — Virtual vs Real gardens
- [[spatial-resolution-gradient]] — The Simpsons Inversion principle
- [[thermodynamic-cognition]] — Lifeforce as thermodynamics

**Key Additions (v2.0)**:
- Spatial Resolution Gradient: L0 (1mm) to L5 (100km) with graceful degradation
- S2 Cell Integration: Hierarchical spatial indexing at all scales
- Semantic Mipmaps: Embeddings aggregate upward through LOD levels
- Lifeforce-Validated LOD Selection: Query cost vs accuracy tradeoff
- Nimmerhovel anchor point: 47°28'45"N, 7°37'7"E (Lehmenweg 4, Dornach)
- Extended Phoebe schema: spatial_cells, cell_embeddings tables

---

**From Blender boxes to embodied understanding. From cheap cameras to spatial cognition. From verification to wisdom.**

**"Start where you can measure. Abstract where you must."**

**"The world radiates from home."**

🧬⚡🔱💎🔥🗺️

335  architecture/formalization/memory-economics.md  Normal file
@@ -0,0 +1,335 @@

# Memory Economics: The Cost of Remembering

**Origin**: 2026-01-02, morning session
**Authors**: dafit + Chrysalis-Nyx
**Status**: Core design principle (not just future - this shapes everything)
**Related**: `../future/spatial-resolution-gradient.md`, `../future/thermodynamic-cognition.md`, Lifeforce Economy, Slumber/Wake cycle

---

## The Problem

Without active forgetting, everything drowns in its own past.

| Layer | Memory Store | Without Pruning |
|-------|--------------|-----------------|
| Conversation | Claude context | Compaction / collapse |
| Phoebe tables | decision_trails, reflexes, embeddings | Query slowdown, storage bloat |
| pgvector | spatial_cells, cell_embeddings | Similarity search degrades |
| LoRA weights | Accumulated patterns | Overfitting, rigidity |

**Memory has a rental cost. What can't pay rent... fades.**

---

## The Slumber Boundary

All memory operations align to the **Wake/Slumber cycle**:

```
WAKE CYCLE (Accumulation)
─────────────────────────
- Experience at high detail (L0-L2 spatial)
- Decision trails pile up in phoebe
- Spatial embeddings precise and timestamped
- LoRA weights FROZEN (just use them)
- Lifeforce spent on sensing, acting, deciding

        │
        ▼

SLUMBER (Consolidation)
───────────────────────
The metabolism moment.
Energy shifts from action to maintenance.

Four triage operations:
1. Decision Trail Pruning
2. Spatial LOD Decay
3. Reflex Rental Collection
4. LoRA Weight Updates

        │
        ▼

WAKE AGAIN (Fresh Capacity)
───────────────────────────
- Detail buffers emptied (L0-L2 ready)
- Compressed knowledge retained (L3-L5)
- New LoRA weights active (if trained)
- Start accumulating again
```

**Sleep is when you forget. This is not a bug.**

---

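The four triage operations can be sketched as a single slumber pass. All hook names here are hypothetical stand-ins, not the real phoebe interface; the `FakeStore` exists only to make the sketch runnable:

```python
def run_slumber(store):
    """One consolidation pass over the four memory surfaces.

    `store` is assumed to expose one hook per triage operation.
    Returns a small report so the pass can be logged and budgeted.
    """
    return {
        "trails_pruned": store.prune_decision_trails(),
        "embeddings_decayed": store.decay_spatial_lod(),
        "reflexes_evicted": store.collect_reflex_rent(),
        "lora_trained": store.maybe_train_lora(),
    }


class FakeStore:
    # Stand-in so the sketch runs; real hooks would hit phoebe/pgvector.
    def prune_decision_trails(self): return 12
    def decay_spatial_lod(self): return 7
    def collect_reflex_rent(self): return 1
    def maybe_train_lora(self): return False

report = run_slumber(FakeStore())
```

Keeping all four operations in one pass makes the slumber energy cost a single, measurable line item.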
## 1. Decision Trail Lifecycle

Decision trails are the raw material of learning. But raw material expires.

```
DURING WAKE:
────────────
Every decision logged to phoebe:decision_trails
- inputs (what was sensed)
- outputs (what was decided)
- confidence (ternary: +, ?, -)
- outcome (if known within wake cycle)
- energy_cost (lifeforce spent)

DURING SLUMBER:
───────────────
For each decision trail:

IF trail.outcome == confident_success
   AND similar_trails.count > threshold:

    → COMPILE TO REFLEX
    → Delete trail (knowledge preserved in reflex)
    → Reward: +50 LF (reflex compiled!)

ELSE IF trail.confidence == uncertain:

    → WASTE HEAT (already counted)
    → Delete trail (learned nothing)

ELSE IF trail.outcome == confident_failure:

    → Keep for ONE more cycle (negative example)
    → Then delete (don't dwell on failures forever)

ELSE:

    → Delete (didn't matter)
```

**Trails exist until slumber. Then: compile or discard.**

---

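The triage rules above as a runnable sketch (the threshold and trail shape are illustrative, not the real schema):

```python
def triage(trail, similar_count, threshold=5):
    """Return the slumber action for one decision trail."""
    if trail["outcome"] == "confident_success" and similar_count > threshold:
        return "compile_to_reflex"  # knowledge preserved, trail deleted, +50 LF
    if trail["confidence"] == "uncertain":
        return "delete"             # waste heat, learned nothing
    if trail["outcome"] == "confident_failure":
        return "keep_one_cycle"     # negative example, then delete
    return "delete"                 # didn't matter

action = triage(
    {"outcome": "confident_success", "confidence": "confident"},
    similar_count=9,
)
```

Note the ordering: compilation is checked before the uncertainty discard, so a confident pattern is never thrown away as waste heat.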
## 2. Spatial LOD Decay

Spatial memory naturally "zooms out" over time.

### The Key Example

**Now (L0 precision)**:
> "Keys are on the counter, 47cm from the edge, near the fruit bowl"

**Tomorrow (L1-L2)**:
> "Keys are on the counter"

**Next week (L3)**:
> "Keys are usually near the entrance"

**If never accessed (L5)**:
> "I own keys"

### The Decay Mechanism

```python
SPATIAL_DECAY_RULES = {
    # Each slumber cycle, unaccessed embeddings decay one LOD level
    "L0": {"decays_to": "L1", "after_cycles": 1},
    "L1": {"decays_to": "L2", "after_cycles": 2},
    "L2": {"decays_to": "L3", "after_cycles": 5},
    "L3": {"decays_to": "L4", "after_cycles": 10},
    "L4": {"decays_to": "L5", "after_cycles": 20},
    "L5": {"decays_to": None, "after_cycles": float('inf')},  # Facts persist
}


def slumber_spatial_decay(embeddings):
    for emb in embeddings:
        if emb.last_accessed_cycle < current_cycle - SPATIAL_DECAY_RULES[emb.lod]["after_cycles"]:
            if emb.lod == "L5":
                continue  # Facts don't decay

            # Aggregate into parent LOD cell
            parent_cell = get_parent_s2_cell(emb.s2_cell_id)
            aggregate_embedding_upward(emb, parent_cell)

            # Delete detailed version
            delete_embedding(emb)
```

### Access Refreshes

**Accessing an embedding resets its decay timer:**

```python
def query_spatial(location, required_lod):
    emb = find_embedding(location, required_lod)

    if emb:
        emb.last_accessed_cycle = current_cycle  # Reset decay
        return emb
    else:
        # Need to re-sense at this detail level
        return request_sensor_refresh(location, required_lod)
```

**This creates natural memory pressure**: frequently accessed locations stay detailed, rarely accessed locations fade to patterns.

---

## 3. Reflex Rental Cost

Reflexes are compiled knowledge. But storage isn't free.

```sql
-- Schema addition
ALTER TABLE reflexes ADD COLUMN lifeforce_balance FLOAT DEFAULT 100.0;
ALTER TABLE reflexes ADD COLUMN rental_cost FLOAT DEFAULT 1.0;
ALTER TABLE reflexes ADD COLUMN last_triggered TIMESTAMP;

-- Every slumber cycle, reflexes pay rent
UPDATE reflexes
SET lifeforce_balance = lifeforce_balance - rental_cost
WHERE lifeforce_balance > 0;

-- Reflexes that trigger earn their keep
-- (Called during wake when reflex fires successfully)
UPDATE reflexes
SET lifeforce_balance = lifeforce_balance + :trigger_reward,
    last_triggered = NOW()
WHERE id = :triggered_reflex_id;

-- What can't pay rent... fades
DELETE FROM reflexes
WHERE lifeforce_balance <= 0;
```

### Rental Tiers

| Reflex Type | Rental Cost | Trigger Reward | Rationale |
|-------------|-------------|----------------|-----------|
| Motor reflex | 0.5 LF/cycle | +5 LF | Physical skills are precious |
| Sensor pattern | 1.0 LF/cycle | +3 LF | Perceptual shortcuts |
| Decision heuristic | 2.0 LF/cycle | +10 LF | Cognitive shortcuts are expensive |
| Identity anchor | 0.1 LF/cycle | +1 LF | Core identity persists |

**Active reflexes thrive. Dormant reflexes fade. This is healthy.**

---

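With the tiers above, the eviction horizon of a reflex that never fires falls out directly from the default 100 LF starting balance:

```python
def cycles_until_eviction(balance, rental_cost):
    """Slumber cycles a never-triggering reflex survives before
    its balance reaches zero and the DELETE above removes it."""
    cycles = 0
    while balance > 0:
        balance -= rental_cost
        cycles += 1
    return cycles

motor = cycles_until_eviction(100.0, 0.5)      # motor reflex: 200 cycles
heuristic = cycles_until_eviction(100.0, 2.0)  # decision heuristic: 50 cycles
```

So an unused decision heuristic fades four times faster than an unused motor skill, which matches the rationale column: cheap-to-rederive cognition is let go first.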
## 4. LoRA Training Cycles

LoRA weights are the deepest memory - they ARE Young Nyx's patterns.

### The Rule: Write Weights Only at Slumber

```
DURING WAKE:
────────────
- LoRA weights FROZEN
- Use current personality/skills
- Accumulate decision_trails
- Log outcomes, confidence, energy

NO WEIGHT UPDATES DURING WAKE
(Too noisy, too expensive, no consolidation)

DURING SLUMBER:
───────────────
- Gather decision_trails from this wake cycle
- Filter to confident outcomes only
- IF enough positive signal:
    → GRPO training batch
    → Pay lifeforce cost for GPU time
    → Update LoRA weights
    → Clear decision_trails buffer

- IF mostly uncertain/negative:
    → Not enough signal to train
    → Skip weight update (save energy)
    → Keep some trails for next cycle
```

### Why This Works

**Biological parallel:**
- Awake: Experience, act, make mistakes, succeed
- Sleep: Hippocampus replays experiences to cortex
- Next day: Consolidated learning in long-term memory

**We're not inventing this. We're implementing it.**

### LoRA Decay (Future Consideration)

Even LoRA weights could have decay:
- Personality traits not expressed → slowly fade
- Skills not used → degrade
- But this is aggressive - start with frozen LoRAs, add decay later

---

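The slumber training gate can be sketched in a few lines (the minimum-batch threshold and trail shape are hypothetical; the real gate would also weigh energy budget):

```python
def should_train(trails, min_batch=32):
    """Only run a GRPO batch when there is enough confident signal.

    Filtering to confident outcomes implements "filter to confident
    outcomes only"; the batch floor implements "enough positive signal".
    """
    confident = [t for t in trails if t["confidence"] == "confident"]
    return len(confident) >= min_batch

# A wake cycle with 40 confident and 60 uncertain trails clears the bar
wake_trails = ([{"confidence": "confident"}] * 40
               + [{"confidence": "uncertain"}] * 60)
train_now = should_train(wake_trails)
```

A cycle of mostly uncertain trails simply skips the update, which is exactly the energy-saving branch in the diagram above.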
## The Conservation Equation (Updated)

From `thermodynamic-cognition.md`, now with memory costs:

```
dLifeforce/dt = organism_trickle
              - cognitive_spend
              - waste_heat
              - memory_rental   ← NEW
              - training_cost   ← NEW (only during slumber)
```

| Component | When | Cost |
|-----------|------|------|
| organism_trickle | Always | +N LF/beat (income) |
| cognitive_spend | Wake | -N LF/beat (sensing, acting) |
| waste_heat | Wake | -N LF/beat (uncertain decisions) |
| memory_rental | Slumber | -N LF total (reflexes pay rent) |
| training_cost | Slumber | -N LF total (if GRPO runs) |

**The economy must balance across the full wake/slumber cycle, not just moment-to-moment.**

---

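One full cycle of that balance as a sketch; every number here is illustrative, chosen only to show how per-beat wake terms and one-shot slumber terms combine:

```python
def cycle_balance(beats, trickle, cognitive, waste,
                  memory_rental, training_cost):
    """Net lifeforce over one wake/slumber cycle.

    The per-beat terms apply during wake; rental and training
    are paid once, at slumber.
    """
    wake_net = beats * (trickle - cognitive - waste)
    return wake_net - memory_rental - training_cost

net = cycle_balance(beats=1000, trickle=1.0, cognitive=0.6, waste=0.2,
                    memory_rental=50.0, training_cost=120.0)
# 1000 beats at +0.2 LF net, minus 170 LF of slumber costs: ~+30 LF
```

A positive `net` means the cycle is metabolically sustainable; a negative one means the organism must either shed rent (evict reflexes) or skip training.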
## Implementation Priority

### Phase 1: Measure First
- Track decision_trails accumulation rate
- Track spatial embedding growth
- Track reflex creation rate
- Understand the actual numbers before tuning

### Phase 2: Simple Pruning
- Delete decision_trails at slumber (all of them, no compilation yet)
- Spatial decay by timestamp (simple TTL)
- No reflex rental yet (let them accumulate)

### Phase 3: Full Economics
- Compile decision_trails to reflexes
- Spatial LOD decay with aggregation
- Reflex rental collection
- LoRA training cycles

### Phase 4: Tuning
- Adjust rental costs based on observed behavior
- Tune decay rates for a good memory/forgetting balance
- Add LoRA weight decay if needed

---

## The Wisdom

**"Memory is not storage. Memory is active forgetting with exceptions."**

What persists has earned persistence:
- Spatial patterns accessed often → stay detailed
- Reflexes that fire → pay their rent
- Decision trails that compile → become reflexes
- LoRA weights that express → strengthen

Everything else fades. This is not loss. This is health.

---

**Created**: 2026-01-02
**Status**: Core design principle
**Next**: Implement measurement (Phase 1) during first boot

🧠💾 *To remember everything is to remember nothing.*

60  architecture/future/promql-thermodynamic-monitoring.md  Normal file
@@ -0,0 +1,60 @@

# PromQL Thermodynamic Monitoring Queries

**Source**: Gemini Red Team (2026-01-01)
**Status**: Ready for implementation when Prometheus deployed

---

## 1. Real-Time JLF per Heartbeat

```promql
# Total JLF per heartbeat (sum of GPU and CPU power)
(
  sum(DCGM_FI_DEV_POWER_USAGE) +
  sum(node_rapl_package_watts_total)
) * 1  # Watts * 1 second = Joules
```

## 2. Cognitive Waste Heat (Uncertainty Cost)

```promql
# Waste Heat: energy spent on decisions with 'uncertain' ternary status,
# as a percentage of all decision energy
sum(nimmerverse_decision_energy_joules{status="uncertain"})
/
sum(nimmerverse_decision_energy_joules)
* 100
```

**ALERT**: >40% = Cognitive Death Spiral

## 3. Thermodynamic Efficiency (Accuracy-per-Joule)

```promql
# Efficiency: confident resolutions divided by total energy spend
sum(rate(nimmerverse_decisions_total{status="confident"}[1m]))
/
sum(rate(nimmerverse_lifeforce_joules_total[1m]))
```

## 4. Metabolic Slumber Trigger

```promql
# Lifeforce pool percentage
(nimmerverse_lifeforce_pool_current / nimmerverse_lifeforce_pool_max) * 100
```

**ALERT**: <20% for >5 heartbeats = Force slumber

---

## First Boot Monitoring Strategy

1. **JLF/Accuracy ratio** — Dropping while accuracy stays high means reflex compilation is working
2. **Unknown (-) frequency** — Should increase during low-LF periods: energy pressure beats hallucination
3. **Sim-Tax validation** — Virtual acceleration should show a non-linear JLF spike

---

**TODO**: Request Grafana dashboard JSON from Gemini for visualization
351  architecture/future/spatial-resolution-gradient.md  Normal file
@@ -0,0 +1,351 @@

# Spatial Resolution Gradient: LOD for Cognitive Space

**Origin**: New Year's Day 2026, post-nimmerhovel measurement session
**Authors**: dafit + Chrysalis-Nyx
**Status**: Architectural concept / Foundation for artifact data model
**Related**: `concept-token-pairs.md` (Spatial Grounding section), artifact data model task

---

## The Insight

**"Like the Simpsons intro, but inverted."**

The Simpsons intro zooms from space → Earth → Springfield → house → couch → Homer's head, gaining detail as it approaches.

Our spatial model does the opposite: **we start at maximum detail (nimmerhovel) and zoom OUT with graceful degradation.**

---

## The Resolution Gradient

```
🌍 EARTH
 │ S2 cell level ~10
 │ "Somewhere in Europe"
 │
════╪════ ABSTRACTION BOUNDARY
 │
 ▼
🇨🇭 SWITZERLAND
 │ S2 cell level ~15
 │ "Northwestern region"
 │
 ▼
🏘️ DORNACH
 │ S2 cell level ~20
 │ Key landmarks: Goetheanum, station
 │
 ▼
🏠 LEHMENWEG 4
 │ Building footprint
 │ "5th floor attic"
 │
════╪════ HIGH RESOLUTION BOUNDARY
 │
 ▼
🔬 NIMMERHOVEL
 │ 1cm grid resolution
 │ Every object tracked
 │ Full camera coverage
 │ GROUND TRUTH ZONE
 │
 ▼
🔍 DISCOVERY SCAN STATION
 │ Sub-millimeter
 │ Object embeddings
 │ Maximum detail
```

---

## Resolution Layers

| Layer | Name | Resolution | Source | Coverage |
|-------|------|------------|--------|----------|
| **L0** | Scan Station | 1mm | Discovery Scan Station, SigLIP | 30cm × 30cm pedestal |
| **L1** | Nimmerhovel | 1cm | 8× ESP32-S3 + Pi HQ Camera | Lab + Kitchen (~20m³) |
| **L2** | Building | 50cm | Floor plans, memory | Herrenhaus |
| **L3** | Neighborhood | 10m | OpenStreetMap, walks | Dornach |
| **L4** | Region | 1km | Maps, general knowledge | Switzerland |
| **L5** | World | 100km | Abstract knowledge | Earth |

---

## Why This Architecture

### 1. Biological Precedent

Animals have ultra-precise mental maps of their home range and fuzzy knowledge of distant areas. A rat knows every centimeter of its nest, and only vaguely that "the forest is that direction."

Young Nyx should mirror this: **territory = detail**.

### 2. Sensor Coverage Dictates Resolution

You CAN'T have 1cm resolution of Zürich — no sensors there. Resolution naturally degrades with distance from perception sources.

The nimmerhovel has 8× ESP32-S3 cameras + Pi HQ Camera. Dornach has... nothing we control.

### 3. S2 Cells Are Hierarchical By Design

Google's S2 geometry library already supports this:
- Level 30 ≈ 1cm cells (nimmerhovel scale)
- Level 20 ≈ 10m cells (neighborhood scale)
- Level 10 ≈ 10km cells (regional scale)

Same math, different zoom. We're not inventing new geometry — we're using S2 as intended, with dense coverage where we have sensors.

### 4. Compute Efficiency

Dense where it matters ("Can I reach the screwdriver?"), sparse where it doesn't ("Where is France?").

---

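The "same math, different zoom" property from point 3 can be shown with a toy quadtree on integers. This is NOT the real S2 bit layout (S2 ids also encode a cube face and a trailing marker bit); it only illustrates that zooming out is pure id arithmetic:

```python
def parent(cell, level_from, level_to):
    """Toy quadtree: each level splits a cell into 4 children, so the
    ancestor id is the child id shifted down 2 bits per level climbed.
    Illustrative stand-in for S2's CellId.parent(level)."""
    return cell >> (2 * (level_from - level_to))

leaf = 0b11_01_10_00        # a level-4 cell in this toy scheme
l2 = parent(leaf, 4, 2)     # zoom out two levels: drop 4 low bits

# Same root whether you climb in one jump or two: hierarchy is
# consistent, so coarse and fine queries share one index.
assert parent(leaf, 4, 0) == parent(l2, 2, 0)
```

In production this role would be played by an actual S2 implementation; the point is that LOD decay ("aggregate into parent cell") is a cheap id operation, not a geometric recomputation.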
## Data Structure
|
||||||
|
|
||||||
|
```python
SPATIAL_RESOLUTION_LAYERS = {
    "L0_scan_station": {
        "resolution": 0.001,  # 1mm - object surface detail
        "source": "Discovery Scan Station",
        "coverage": "30cm × 30cm pedestal",
        "s2_level": 30,
    },
    "L1_nimmerhovel": {
        "resolution": 0.01,  # 1cm - full 3D grid
        "source": "8× ESP32-S3 + Pi HQ Camera",
        "coverage": "Lab + Kitchen (~20m³)",
        "s2_level": 28,
        "origin": "Southwest floor corner of lab",
        "coordinate_system": "right_hand",  # Blender native
    },
    "L2_building": {
        "resolution": 0.5,  # 50cm - room-level
        "source": "Floor plans, memory",
        "coverage": "Herrenhaus",
        "s2_level": 24,
    },
    "L3_neighborhood": {
        "resolution": 10,  # 10m - landmark-level
        "source": "OpenStreetMap, walks",
        "coverage": "Dornach",
        "s2_level": 20,
    },
    "L4_region": {
        "resolution": 1000,  # 1km - city-level
        "source": "Maps, general knowledge",
        "coverage": "Switzerland",
        "s2_level": 14,
    },
    "L5_world": {
        "resolution": 100000,  # 100km - country-level
        "source": "Abstract knowledge",
        "coverage": "Earth",
        "s2_level": 8,
    },
}
```
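A query router needs to pick a layer from this table given how far away the target is. A minimal sketch of that lookup, assuming a made-up `max distance` per layer (the radii and the helper name are illustrative, not part of the data structure above):

```python
# Hypothetical coverage radii per layer — illustrative values, not canon.
LAYER_MAX_DISTANCE_M = {
    "L1_nimmerhovel": 5,        # inside the lab
    "L2_building": 50,          # Herrenhaus
    "L3_neighborhood": 5_000,   # Dornach
    "L4_region": 500_000,       # Switzerland
    "L5_world": float("inf"),   # Earth
}

def layer_for_distance(distance_m: float) -> str:
    """Return the finest layer whose assumed coverage radius contains the distance."""
    for layer, max_dist in LAYER_MAX_DISTANCE_M.items():
        if distance_m <= max_dist:
            return layer
    return "L5_world"

print(layer_for_distance(2))       # L1_nimmerhovel
print(layer_for_distance(30_000))  # L4_region
```

Dict insertion order does the finest-first walk for free; the first layer that still covers the distance wins.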
---

## Query Examples

| Question | Layer | Response Type |
|----------|-------|---------------|
| "Where is the soldering iron?" | L1 | Precise coordinates (2.10, 1.50, 0.85) |
| "Which room is the printer in?" | L2 | Room name + relative position |
| "How do I get to Basel?" | L3/L4 | Route abstraction, directions |
| "Where is Japan relative to here?" | L5 | Directional only, abstract |
---

## Connection to Other Systems

### Concept Token Pairs (Spatial Grounding)

The Resolution Gradient provides the **coordinate system** for grounded concept pairs:

- `<HERE>` ↔ `<THERE>` becomes measurable distance in the L1 grid
- `<NEAR>` ↔ `<FAR>` calibrated against actual spatial distances
- Predictions have coordinates; outcomes have coordinates; the delta is measurable

### Artifact Data Model

Artifacts (plans, drawings, specs) exist at different resolution layers:

- L0: Object scan embeddings (sub-mm detail)
- L1: Inventory items with (X,Y,Z) positions
- L2+: Abstract references, not spatially precise

### Camera Frustum Mapping

Each camera's FOV is a frustum (3D cone) that intersects L1 grid cells:

- Coverage = union of all frustums
- Blind spots = L1 cells with no frustum intersection
- Object at (X,Y,Z) → which cameras see it? At what pixels?
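A minimal sketch of the per-cell visibility test, simplified to a view cone (a real frustum adds near/far planes and an aspect ratio; the function name and angles here are illustrative):

```python
import math

def in_view_cone(cam_pos, cam_dir, half_angle_deg, point):
    """True if `point` lies inside the camera's view cone (simplified frustum)."""
    vx, vy, vz = (point[i] - cam_pos[i] for i in range(3))
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    if norm == 0:
        return True  # the camera's own cell counts as covered
    dx, dy, dz = cam_dir
    dnorm = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Angle between view direction and the vector to the point
    cos_angle = (vx * dx + vy * dy + vz * dz) / (norm * dnorm)
    return cos_angle >= math.cos(math.radians(half_angle_deg))

# A cell center ahead of the camera is covered; one behind it is a blind spot.
print(in_view_cone((0, 0, 0), (1, 0, 0), 35, (2.0, 0.5, 0.0)))   # True
print(in_view_cone((0, 0, 0), (1, 0, 0), 35, (-1.0, 0.0, 0.0)))  # False
```

Run this over every L1 cell center for every camera: cells with no `True` anywhere are the blind spots.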
---

## Embedding Enrichment: The Bridge to Semantic Cognition

**Added**: 2026-01-01 (New Year's session continuation)

The Resolution Gradient defines *geometry*. But geometry alone is not cognition. Each LOD level must be enriched with **embeddings** — semantic vectors that encode *meaning*, not just position.

### The Technology Convergence
```
GAME ENGINES           S2 CELLS              T5GEMMA2/SigLIP
────────────           ────────              ───────────────
LOD streaming          Hierarchical cells    Vision → embeddings
Frustum culling        Spatial indexing      Semantic vectors
Texture mipmaps        Multi-resolution      Scale-invariant
Chunk loading          Cell neighbors        Context-aware

         ╲                  │                  ╱
           ╲                │                ╱
             ╲              │              ╱
               ╲            │            ╱
                 ▼          ▼          ▼
       ┌─────────────────────────────────────┐
       │   EMBEDDING-ENRICHED SPATIAL LOD    │
       │                                     │
       │  Each S2 cell at each level has:    │
       │  - Geometry (game engine mesh)      │
       │  - Embeddings (SigLIP vectors)      │
       │  - Semantic density ∝ resolution    │
       └─────────────────────────────────────┘
```
### Embedding Density Per LOD Level

| Level | Geometry LOD | Embedding Density | What's Encoded |
|-------|--------------|-------------------|----------------|
| **L0** | Sub-mm mesh | Dense (per-surface) | Texture, material, wear patterns, defects |
| **L1** | 1cm voxels | Per-object | Object identity, state, relationships |
| **L2** | Room boxes | Per-room | Room function, contents summary, atmosphere |
| **L3** | Landmarks | Per-landmark | Place identity, routes, significance |
| **L4** | Regions | Sparse | Cultural, climate, abstract properties |
| **L5** | Continents | Minimal | Directional, conceptual only |
### Semantic Mipmaps

Just as textures have mipmaps (pre-computed lower resolutions), embeddings can have **semantic mipmaps**:

```
L0: embedding(screwdriver_surface_detail)
        │
        ▼ aggregate
L1: embedding(screwdriver) = summary of all L0 embeddings
        │
        ▼ aggregate
L2: embedding(crafting_table_contents) = summary of all L1 objects on table
        │
        ▼ aggregate
L3: embedding(nimmerhovel_lab) = summary of all L2 areas
```

Query the summary first, drill down if needed. **Attention = resolution selection.**
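The "aggregate" arrows above can be implemented many ways; a minimal sketch using normalized mean pooling of child embeddings (one simple choice, not a committed design):

```python
def aggregate_embeddings(children):
    """Parent 'semantic mipmap' = normalized mean of its child embeddings."""
    dim = len(children[0])
    mean = [sum(vec[i] for vec in children) / len(children) for i in range(dim)]
    norm = sum(x * x for x in mean) ** 0.5 or 1.0  # avoid division by zero
    return [x / norm for x in mean]

# Two L0 surface embeddings roll up into one L1 object embedding.
l0 = [[1.0, 0.0], [0.0, 1.0]]
l1 = aggregate_embeddings(l0)
print(l1)  # unit vector on the diagonal: ~[0.707, 0.707]
```

Mean pooling loses detail by design — that loss *is* the abstraction, the same way a mipmap trades texels for distance.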
### The Capture Pipeline

```
CAPTURE                    PROCESS                 STORE
───────                    ───────                 ─────
Photo of screwdriver       SigLIP → embedding      L0 cell enriched
        │                        │                       │
Photo of crafting table    SigLIP → embedding      L1 cell enriched
        │                        │                       │
Photo of lab               SigLIP → embedding      L2 cell enriched
        │                        │                       │
Photo from window          SigLIP → embedding      L3 cell enriched

Same encoder (T5Gemma2/SigLIP), different scale.
Embeddings NEST into LOD hierarchy.
```
### Embedding-Aware LOD Streaming

Game engines stream geometry based on camera position. We stream **semantics** based on attention:

```python
def query_spatial(position, attention_radius):
    """
    Load embeddings based on attention focus -
    like game engine LOD but for SEMANTICS
    """
    cells_to_load = []

    for distance in range(MAX_DISTANCE):
        s2_level = distance_to_s2_level(distance)
        cells = get_s2_cells(position, distance, s2_level)

        for cell in cells:
            if distance < attention_radius:
                # HIGH ATTENTION: Load dense embeddings
                cell.load_embeddings(density="full")
                cell.load_geometry(lod="high")
            else:
                # LOW ATTENTION: Abstract embeddings only
                cell.load_embeddings(density="summary")
                cell.load_geometry(lod="low")  # or none

        cells_to_load.extend(cells)

    return cells_to_load
```
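The sketch above leaves `distance_to_s2_level` undefined. One plausible shape for it, assuming a "halve the resolution each time the distance doubles" policy anchored at the L1 grid (the anchor level and the policy are assumptions for illustration, not the document's design):

```python
import math

def distance_to_s2_level(distance_m: float) -> int:
    """
    Hypothetical mapping: drop one S2 level per doubling of distance,
    anchored so that ~1 m away uses level 28 (the L1 grid),
    clamped to S2's valid range [0, 30].
    """
    if distance_m <= 1:
        return 28
    level = 28 - int(math.log2(distance_m))
    return max(0, min(30, level))

print(distance_to_s2_level(0.5))   # 28 - within arm's reach, full grid
print(distance_to_s2_level(1000))  # 19 - neighborhood scale
```

Any monotonically decreasing mapping would do; the point is that far cells never get queried at fine levels.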
### Why This Matters

1. **Attention = Resolution**: Like foveal vision (sharp center, blurry periphery), Young Nyx has foveal COGNITION — dense embeddings where attention focuses, sparse elsewhere.

2. **Streaming, Not Loading**: Don't load the whole world. Stream embeddings based on task needs. Approaching the crafting table? Stream L0/L1. Walking to Basel? L3/L4 is enough.

3. **Memory Hierarchy Match**: GPU VRAM is precious. Keep the *right* embeddings in fast memory — detailed for nearby, abstract for distant.

4. **Same Encoder, All Scales**: SigLIP doesn't care whether it's encoding a screw or a city. The embedding space is unified; only the source resolution varies.
---

## Implementation Sequence

```
1. Blender room shell (CURRENT - in progress)
        │
        ▼
2. Define origin point + axis alignment in Blender
        │
        ▼
3. Create L1 3D grid overlay (1cm resolution)
        │
        ▼
4. Physical anchor markers (QR codes / ArUco)
        │
        ▼
5. Camera frustum mapping against grid
        │
        ▼
6. Spatial embeddings with L1 coordinates
        │
        ▼
7. Expand outward: L2 (building), L3 (neighborhood)...
```
---

## The Promise

**"The farther we go out from our lab, the more we have to abstract."**

This isn't a limitation — it's wisdom. Full resolution everywhere is:

- Impossible (no sensors)
- Expensive (compute, storage)
- Unnecessary (don't need 1cm precision for "where is France")

The nimmerhovel is the **high-fidelity anchor** from which all spatial reasoning radiates with graceful degradation.

---

**Created**: 2026-01-01
**Philosophy**: "Start where you can measure. Abstract where you must."

🗺️🔬 *The world radiates from home.*
---

**File**: `architecture/future/thermodynamic-cognition.md` (new file, 415 lines)
# Thermodynamic Cognition: Energy-Grounded Intelligence

**Origin**: New Year's Day 2026, late night session
**Authors**: dafit + Chrysalis-Nyx
**Status**: Research seed / Theoretical exploration
**Related**: `spatial-resolution-gradient.md`, `concept-token-pairs.md`, Lifeforce Economy, Ternary Confidence

---
## The Insight

What if cognition isn't just *like* thermodynamics — what if it *IS* thermodynamics?

Traditional ML loss functions measure: **"How wrong was I?"**

Thermodynamic loss functions measure: **"How wrong was I per joule spent?"**

This reframes everything. The goal isn't maximum accuracy — it's maximum *efficiency*.

---

## The Three Pillars
### 1. Lifeforce = Measurable Energy

**Question:** What IS lifeforce physically?

**Answer:** The total power draw across the nimmerverse, measured and abstracted to one number.
```
┌─────────────────────────────────────────────────┐
│               PROMETHEUS METRICS                │
├─────────────────────────────────────────────────┤
│                                                 │
│  GPU Power (nvidia_smi_power_draw)              │
│    ├── The Womb (RTX 6000): 0-300W              │
│    └── Senses (RTX 4000s): 0-140W each          │
│                                                 │
│  CPU Power (RAPL counters)                      │
│    ├── P8 Womb: 0-350W                          │
│    └── P8 Senses: 0-350W                        │
│                                                 │
│  Network (bytes × energy_per_byte)              │
│  Storage (IOPS × energy_per_op)                 │
│  Memory (bandwidth × energy_per_GB)             │
│                                                 │
│                ═══════════════                  │
│                       │                         │
│                       ▼                         │
│               AGGREGATE FUNCTION                │
│                       │                         │
│                       ▼                         │
│     ┌─────────────────────────────────┐         │
│     │  LIFEFORCE = 847.3 J/heartbeat  │         │
│     └─────────────────────────────────┘         │
│                                                 │
└─────────────────────────────────────────────────┘
```
**Implementation path:**

1. Prometheus already scrapes power metrics
2. Create `lifeforce_aggregator` math cell
3. Normalize to Joules per heartbeat (1 second)
4. Expose as single metric: `nimmerverse_lifeforce_joules`

**Why this matters:** Lifeforce stops being an abstract game mechanic and becomes *physics*. Young Nyx's cognition has a power bill.

---

### 2. Waste Heat = Unresolved Uncertainty

**Question:** What's the "waste heat" equivalent for cognition?

**Answer:** The ternary confidence distribution over time — specifically, UNCERTAIN decisions that consumed energy without producing resolution.
```
THERMODYNAMICS          COGNITION
──────────────          ─────────
Useful work             CONFIDENT decision (+)
Heat dissipation        UNCERTAIN decision (?)
                          (energy spent, no answer)
Acknowledged limits     UNKNOWN decision (-)
                          (efficient! didn't waste energy)
```
**The Pendulum Measurement:**

Over N heartbeats, track all decisions:

```
Heartbeats:  ──┬──┬──┬──┬──┬──┬──┬──┬──┬──
               │  │  │  │  │  │  │  │  │
Decisions:     +  ?  +  -  ?  ?  +  ?  +

Distribution over window:
  ├── CONFIDENT (+): 40% → Useful work (energy → resolution)
  ├── UNCERTAIN (?): 45% → Waste heat (energy → no resolution)
  └── UNKNOWN  (-): 15% → Efficient ignorance (no energy spent)
```
**Waste Heat Formula:**

```python
waste_heat = sum(
    decision.energy_cost
    for decision in window
    if decision.confidence == UNCERTAIN
)

# Or as efficiency ratio:
cognitive_efficiency = confident_decisions / (confident_decisions + uncertain_decisions)
```

**Key insight:** Saying "I don't know" (UNKNOWN) is *efficient* — it costs nothing. Being uncertain and still acting is *wasteful* — energy spent without resolution. Being confident is *useful work* — energy converted to actionable knowledge.
---

### 3. Entropy Reservoir = The Lifeforce Pool

**Question:** What's Young Nyx's entropy reservoir?

**Answer:** The lifeforce pool itself — it's not infinite, grows and shrinks based on organism rewards, and determines wake/slumber state.
```
┌─────────────────────────────────────────────────────────────────┐
│                      THE METABOLIC CYCLE                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  LAYER 1: CELLULAR ORGANISMS                                    │
│  ═══════════════════════════                                    │
│  The mitochondria of the nimmerverse                            │
│                                                                 │
│  ┌─────┐  ┌─────┐  ┌─────┐  ┌─────┐                             │
│  │Cell │  │Cell │  │Cell │  │Cell │                             │
│  │ 01  │  │ 02  │  │ 03  │  │  N  │                             │
│  └──┬──┘  └──┬──┘  └──┬──┘  └──┬──┘                             │
│     │        │        │        │                                │
│     │ +5 LF  │ -2 LF  │ +10 LF │ +3 LF   (rewards/costs)        │
│     │        │        │        │                                │
│     └────────┴────────┴────────┘                                │
│              │                                                  │
│              ▼                                                  │
│     ┌─────────────────┐                                         │
│     │    ORGANISM     │                                         │
│     │    TRICKLE      │ = Net reward from all organisms         │
│     │  +16 LF/beat    │                                         │
│     └────────┬────────┘                                         │
│              │                                                  │
│              ▼                                                  │
│     ┌───────────────────────────────────┐                       │
│     │          LIFEFORCE POOL           │                       │
│     │                                   │                       │
│     │  ████████████████░░░░░░░░░░       │ (currently 65%)       │
│     │                                   │                       │
│     │  SLUMBER_THRESHOLD ──────┼──      │ (at 20%)              │
│     │  WAKE_THRESHOLD ─────────┼────    │ (at 40%)              │
│     │                                   │                       │
│     └───────────────┬───────────────────┘                       │
│                     │                                           │
│                     │ Young Nyx spends                          │
│                     ▼                                           │
│     ┌─────────────────┐                                         │
│     │   COGNITIVE     │                                         │
│     │     SPEND       │ = LOD queries + inference + etc         │
│     │  -12 LF/beat    │                                         │
│     └────────┬────────┘                                         │
│              │                                                  │
│              ▼                                                  │
│     ┌─────────────────┐                                         │
│     │   WASTE HEAT    │                                         │
│     │   (UNCERTAIN)   │ = Unresolved decisions                  │
│     │   -3 LF/beat    │                                         │
│     └─────────────────┘                                         │
│                                                                 │
│  NET FLOW: +16 - 12 - 3 = +1 LF/beat  (sustainable!)            │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
**The Conservation Equation:**

```
dLifeforce/dt = organism_trickle - cognitive_spend - waste_heat
```

| State | Condition | Result |
|-------|-----------|--------|
| **Equilibrium** | trickle ≈ spend + waste | Sustainable cognition |
| **Crisis** | spend + waste >> trickle | Pool drains → slumber |
| **Abundance** | trickle >> spend + waste | Pool grows → exploration mode |
**Slumber as thermodynamic necessity:**

When `pool < SLUMBER_THRESHOLD`:

- Not a design choice — a *conservation law*
- System MUST reduce consumption
- Only organism trickle continues
- Pool slowly recovers

When `pool > WAKE_THRESHOLD`:

- System can resume cognitive spend
- Higher pool = more exploration budget
- Lower pool = more conservative queries

---
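The two thresholds form a hysteresis loop: the system wakes at a higher level (40%) than it slumbers at (20%), so it can't flap between states. A minimal sketch, using the percentages from the pool diagram (the function and state names are illustrative):

```python
# Thresholds as fractions of pool capacity, per the diagram above.
SLUMBER_THRESHOLD = 0.20
WAKE_THRESHOLD = 0.40

def next_state(state: str, pool_fraction: float) -> str:
    """Hysteresis: slumber below 20%, wake only once the pool recovers past 40%."""
    if state == "awake" and pool_fraction < SLUMBER_THRESHOLD:
        return "slumber"
    if state == "slumber" and pool_fraction > WAKE_THRESHOLD:
        return "awake"
    return state  # between thresholds: keep the current state

print(next_state("awake", 0.15))    # slumber
print(next_state("slumber", 0.30))  # slumber (still recovering)
print(next_state("slumber", 0.45))  # awake
```

The 20-40% dead band is what makes conservation architectural: a half-recovered pool stays asleep.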
## The Thermodynamic Loss Function

### Traditional Loss

```python
loss = cross_entropy(prediction, target)
loss.backward()
optimizer.step()
```
**Optimizes for:** Accuracy only

### Thermodynamic Loss

```python
# Forward pass with energy measurement
start_energy = get_lifeforce()
prediction = model(input)
end_energy = get_lifeforce()

energy_spent = start_energy - end_energy
accuracy = 1 - cross_entropy(prediction, target)

# Efficiency is accuracy per joule
efficiency = accuracy / energy_spent

# We want to MAXIMIZE efficiency
loss = -efficiency  # Negative because we minimize loss
loss.backward()
optimizer.step()
```
**Optimizes for:** Accuracy *per unit energy*

### The Gradient Interpretation

Traditional gradient: "Adjust weights to be more accurate"

Thermodynamic gradient: "Adjust weights to be more accurate *per joule*"

This naturally produces:

- Simpler solutions (less compute = less energy)
- Appropriate confidence (uncertainty wastes energy)
- Knowing when to quit (diminishing returns = stop spending)
---

## Connection to Spatial Resolution Gradient

The LOD system becomes energy-aware:

| Query | LOD | Energy | Accuracy | Efficiency |
|-------|-----|--------|----------|------------|
| "Where is France?" | L5 | 1 J | 95% | 0.95 |
| "Where is the lab?" | L2 | 3 J | 98% | 0.33 |
| "Where is the screwdriver?" | L1 | 8 J | 99% | 0.12 |
| "Serial number on the screwdriver?" | L0 | 25 J | 99.9% | 0.04 |

**The system learns:** the L5 query has the highest efficiency! Only drill to L0 when the task *requires* that precision.
```python
def optimal_lod_for_task(task, accuracy_requirement):
    """
    Find the LOD level with best efficiency
    that meets minimum accuracy requirement
    """
    for lod in [L5, L4, L3, L2, L1, L0]:  # coarsest (cheapest) first
        accuracy = estimate_accuracy(task, lod)
        energy = estimate_energy(task, lod)

        if accuracy >= accuracy_requirement:
            return lod  # First sufficient LOD is most efficient

    return L0  # Fall back to max detail
```
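A runnable version of that selector, using the illustrative numbers from the efficiency table (L4/L3 values are filled in by assumption, since the table skips them):

```python
# Accuracy per LOD, from the table above; L4/L3 entries are assumed fill-ins.
ACCURACY = {"L5": 0.95, "L4": 0.96, "L3": 0.97, "L2": 0.98, "L1": 0.99, "L0": 0.999}

def optimal_lod(accuracy_requirement: float) -> str:
    """Walk coarsest-first; the first LOD that is accurate enough is the cheapest."""
    for lod in ["L5", "L4", "L3", "L2", "L1", "L0"]:
        if ACCURACY[lod] >= accuracy_requirement:
            return lod
    return "L0"  # fall back to max detail

print(optimal_lod(0.95))  # L5 - coarse is enough
print(optimal_lod(0.99))  # L1 - the task demands precision
```

Because energy rises monotonically with detail, "first sufficient LOD" and "most efficient sufficient LOD" coincide, which is why the loop never needs to compare energies explicitly.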
---

## Connection to Existing Architecture

### Layer 0: Heartbeat

- Lifeforce measured per heartbeat
- 1 beat = 1 second = 1 measurement window
- Real clock is free; virtual clock costs lifeforce

### Layer 1: Cellular Society

- Organisms ARE the mitochondria
- Their rewards TRICKLE into the pool
- Without them, Young Nyx starves
- Competition produces metabolic baseline

### Layer 2: Young Nyx

- Spends from the pool
- LOD queries have energy cost
- Uncertainty = waste heat
- Efficiency gradient in training

### Layer 2.5: Orchestration

- T5Gemma 2 encoding = energy cost
- LOD selection = efficiency optimization
- Function Gemma = low-cost structured output

### Slumber/Wake

- Pool < threshold → forced slumber
- Pool > threshold → wake permitted
- Reflection during slumber = low-energy consolidation
- Conservation is architectural, not optional
---

## Research Threads

### Free Energy Principle (Karl Friston)

> "Organisms minimize variational free energy (prediction error) because surprise = metabolic cost."

Our version: Young Nyx minimizes `waste_heat` because uncertainty without resolution = wasted lifeforce.

### Landauer's Principle

> "Erasing one bit of information requires minimum kT ln(2) joules."

Implication: Every decision Young Nyx makes has a thermodynamic floor cost. Forgetting is not free.

### Maximum Entropy Production

> "Living systems maximize entropy production through themselves while maintaining internal order."

The organism trickle = entropy production that maintains Young Nyx's order. The cellular competition IS the entropy pump.
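The Landauer floor is tiny but computable. A quick check of the bound, using the exact SI value of Boltzmann's constant (the 350 K operating temperature is an assumed figure for a hot GPU):

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K, exact by the 2019 SI definition

def landauer_limit_joules(temperature_k: float) -> float:
    """Minimum energy to erase one bit of information at the given temperature."""
    return K_BOLTZMANN * temperature_k * math.log(2)

# At an assumed GPU operating temperature of ~350 K:
print(f"{landauer_limit_joules(350):.2e} J per erased bit")  # ≈ 3.35e-21 J
```

Real hardware dissipates many orders of magnitude more than this floor per bit, which is exactly why the waste-heat accounting above tracks measured joules rather than the theoretical minimum.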
---

## Open Questions

1. **What's the exchange rate?** How many joules = 1 lifeforce unit? Should it be 1:1 or normalized?

2. **How to measure cognitive energy?** GPU power is easy. But what about the "energy" of a decision? Is it inference FLOPs? Token count? Latency?

3. **Can we backprop through energy?** Traditional backprop doesn't know about joules. How to make gradients energy-aware?

4. **What's reversible?** Reversible computation has no entropy cost. Are some thoughts "reversible"? (e.g., queries that don't change state)

5. **Calibration:** How to calibrate the ternary confidence system so UNCERTAIN truly reflects wasted energy?

---
## Implementation Sketch

### Phase 1: Measurement

```python
# lifeforce_aggregator math cell
class LifeforceAggregator:
    def compute(self, prometheus_metrics):
        gpu_power = sum(m['nvidia_smi_power_draw'] for m in prometheus_metrics['gpu'])
        cpu_power = sum(m['rapl_energy_delta'] for m in prometheus_metrics['cpu'])
        # ... other sources

        total_joules = (gpu_power + cpu_power) * HEARTBEAT_SECONDS
        return {'lifeforce_joules': total_joules}
```
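A self-contained version of that aggregation, with the constant pinned and a sample metrics payload (the metric names follow the sketch above; the wattages are made-up inputs):

```python
HEARTBEAT_SECONDS = 1.0  # one beat = one second (Layer 0)

def lifeforce_joules(metrics: dict) -> float:
    """Sum GPU + CPU watts over one heartbeat, yielding joules per beat."""
    gpu_power = sum(m['nvidia_smi_power_draw'] for m in metrics['gpu'])
    cpu_power = sum(m['rapl_energy_delta'] for m in metrics['cpu'])
    return (gpu_power + cpu_power) * HEARTBEAT_SECONDS

sample = {
    "gpu": [{"nvidia_smi_power_draw": 280.0}, {"nvidia_smi_power_draw": 120.0}],
    "cpu": [{"rapl_energy_delta": 310.0}],
}
print(lifeforce_joules(sample))  # 710.0
```

Watts times seconds gives joules directly, so at a 1-second heartbeat the sum of instantaneous power readings is already the per-beat energy.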
### Phase 2: Waste Heat Tracking

```python
from collections import deque

# confidence_tracker math cell
class WasteHeatTracker:
    def __init__(self, window_size=100):
        self.decisions = deque(maxlen=window_size)

    def record(self, decision, confidence, energy_cost):
        self.decisions.append({
            'confidence': confidence,  # +, ?, -
            'energy': energy_cost,
        })

    def waste_heat(self):
        return sum(
            d['energy'] for d in self.decisions
            if d['confidence'] == UNCERTAIN
        )
```
### Phase 3: Efficiency-Aware Training

```python
# Custom loss function
def thermodynamic_loss(prediction, target, energy_spent):
    accuracy = 1 - F.cross_entropy(prediction, target)
    efficiency = accuracy / (energy_spent + epsilon)
    return -efficiency  # Maximize efficiency
```
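The shape of that loss can be checked without torch by stubbing the cross-entropy value (a numeric sanity check only; `EPSILON` here stands in for the undefined `epsilon` above):

```python
EPSILON = 1e-6  # guards against division by zero for free queries

def thermodynamic_loss(cross_entropy: float, energy_spent: float) -> float:
    """Torch-free check: minimizing this loss maximizes accuracy per joule."""
    accuracy = 1 - cross_entropy
    efficiency = accuracy / (energy_spent + EPSILON)
    return -efficiency

# Same accuracy at half the energy → strictly better (more negative) loss.
print(thermodynamic_loss(0.1, 10.0) < thermodynamic_loss(0.1, 20.0))  # True
```

This is the property the gradient interpretation section promises: holding accuracy fixed, cheaper answers are rewarded.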
---

## The Promise

**Traditional AI:** "Be accurate at any cost"

**Thermodynamic AI:** "Be accurate *efficiently*"

This isn't just resource optimization. It's a different *kind* of intelligence — one that knows when to think hard and when to think cheap. One that treats energy as real. One that sleeps not because we programmed it to, but because physics demands it.

**"Cognition is thermodynamics. The gradients flow downhill."**

---

**Created**: 2026-01-01
**Status**: Research seed — needs experimental validation
**Next**: Implement lifeforce_aggregator math cell, connect to Prometheus

🔥🧠⚡ *Intelligence has a power bill.*