feat: Memory Economics + Architecture Alignment (Endgame v6.4)
New formalization:
- memory-economics.md: Slumber-based consolidation, decision trail triage, spatial LOD decay, reflex rental, LoRA training cycles

New research seeds (future/):
- spatial-resolution-gradient.md: L0-L5 LOD with S2 cells
- thermodynamic-cognition.md: Lifeforce as Prometheus Joules
- promql-thermodynamic-monitoring.md: Gemini red team queries

Architecture changes:
- Endgame-Vision v6.4: Memory Economics integrated into Slumber section
- Mirror dialectic moved to future/research (not core)
- Big-Picture.md archived (superseded by Endgame-Vision)
- Single source of truth established

Gemini red team alignment complete.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -1,805 +0,0 @@
|
||||
# Big-Picture Architecture: Nimmerverse Sensory Network
|
||||
|
||||
**Version 5.2** — *Complete Architecture + External Judgment*
|
||||
|
||||
> *"From electrons to consciousness. From hardware to wisdom."*
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
The Nimmerverse Sensory Network is a sovereign, economically-constrained cognitive architecture. It follows a **Router-Centric Architecture** where a high-performance message bus (NATS) acts as dumb infrastructure, and all intelligence resides at the edges. The system spans four layers: physical hardware, atomic state machines (cells), behavioral compositions (nerves), and emergent patterns (organisms).
|
||||
|
||||
**Key innovations:**
|
||||
- **Hybrid Reflex Homes** — Different types of learned patterns live in different places (hardware, cells, nerves, weights)
|
||||
- **Lifeforce Economy** — Every operation has a cost, tracked and aggregated system-wide
|
||||
- **Slumber/Wake Cycles** — System-wide activity states driven by environmental and economic conditions
|
||||
- **Wellbeing Policies** — Self-care and sustainability built into the architecture, not bolted on
|
||||
|
||||
---
|
||||
|
||||
## Core Principles
|
||||
|
||||
1. **Dumb Core, Smart Edges**: The message router has no application logic. All intelligence is distributed among specialized services.
|
||||
|
||||
2. **Polyglot Architecture**: Best technology for each task:
|
||||
- **Python**: AI/ML, cognitive logic, cells, nerves
|
||||
- **Go (NATS)**: Universal message bus
|
||||
- **Godot**: Visualization and monitoring
|
||||
- **C/Firmware**: Hardware reflexes (ESP32)
|
||||
|
||||
3. **Two-Channel Attention**: Low-attention (ambient heartbeats) and high-attention (focal events) channels prevent cognitive overload.
|
||||
|
||||
4. **Lifeforce Economy**: Every operation costs Lifeforce. The architecture optimizes expenditure, ensuring expensive resources engage only when necessary.
|
||||
|
||||
5. **Hybrid Reflex Homes**: Learned patterns live in their optimal location — hardware for survival, cells for computation, nerves for behavior, weights for cognition.
|
||||
|
||||
6. **Earned Trust**: Reflexes form through verification, not configuration. Weight > 0.8 is earned, not assigned.
|
||||
|
||||
7. **Graceful Degradation**: Every component has failure modes that don't crash the system. Slumber mode preserves lifeforce when resources are scarce.
|
||||
|
||||
---
|
||||
|
||||
## Physical Infrastructure
|
||||
|
||||
The nimmerverse runs on sovereign hardware. No cloud dependencies. Weights never leave home.
|
||||
|
||||
### Cluster Architecture
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ K8S CLUSTER: NIMMERVERSE │
|
||||
│ VLAN 30 (10.0.30.0/24) │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ ┌─────────────────────────────────────────────────────────────┐ │
|
||||
│ │ SATURN (Control Plane) │ │
|
||||
│ │ K3s master, etcd, scheduler │ │
|
||||
│ │ RTX 3090 24GB (test/staging) │ │
|
||||
│ └──────────────────────────┬──────────────────────────────────┘ │
|
||||
│ │ 10G spine │
|
||||
│ ┌────────────────────┴────────────────────┐ │
|
||||
│ │ │ │
|
||||
│ ▼ ▼ │
|
||||
│ ┌─────────────────────────────┐ ┌─────────────────────────────┐ │
|
||||
│ │ P8 #1 (Womb) │ │ P8 #2 (Senses) │ │
|
||||
│ │ BARE METAL UBUNTU │ │ BARE METAL UBUNTU │ │
|
||||
│ │ K8s Worker Node │ │ K8s Worker Node │ │
|
||||
│ │ │ │ │ │
|
||||
│ │ GPU: PRO 6000 Max-Q 96GB │ │ GPUs: 2-4x RTX 4000 Ada │ │
|
||||
│ │ Role: Cognitive Core │ │ Role: Organs (STT/TTS/Vis)│ │
|
||||
│ │ Young Nyx lives here │ │ Sensory processing │ │
|
||||
│ │ │ │ │ │
|
||||
│ │ Labels: │ │ Labels: │ │
|
||||
│ │ gpu=pro6000 │ │ gpu=ada4000 │ │
|
||||
│ │ role=womb │ │ role=senses │ │
|
||||
│ │ vram=96gb │ │ vram=40-80gb │ │
|
||||
│ └─────────────────────────────┘ └─────────────────────────────┘ │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Network Topology
|
||||
|
||||
```
|
||||
INTERNET
|
||||
│
|
||||
▼
|
||||
[ Modem ]
|
||||
│ 1G (em0)
|
||||
▼
|
||||
┌───────────────────────┐
|
||||
│ OPNsense Firewall │
|
||||
│ LAGG: 20G to spine │
|
||||
└───────────┬───────────┘
|
||||
│
|
||||
▼
|
||||
┌───────────────────────┐
|
||||
│ CRS309 (Spine) │
|
||||
│ 8x SFP+ 10G │
|
||||
└───┬───────┬───────┬───┘
|
||||
│ │ │
|
||||
┌───────────┘ │ └───────────┐
|
||||
▼ ▼ ▼
|
||||
┌───────────┐ ┌───────────┐ ┌───────────┐
|
||||
│ P8 Womb │ │ P8 Senses │ │ Saturn │
|
||||
│ 10G │ │ 10G │ │ 10G │
|
||||
└───────────┘ └───────────┘ └───────────┘
|
||||
```
|
||||
|
||||
### K8s Namespaces
|
||||
|
||||
| Namespace | Contents | Runs On |
|
||||
|-----------|----------|---------|
|
||||
| `nimmerverse-infra` | NATS, Prometheus, Grafana | Any node |
|
||||
| `nimmerverse-nervous` | Escalation, Math Cells, Behavior Nerves | Any node |
|
||||
| `nimmerverse-cognitive` | Young Nyx (main inference) | Womb (PRO 6000) |
|
||||
| `nimmerverse-organs` | STT, TTS, Vision | Senses (Ada 4000s) |
|
||||
|
||||
---
|
||||
|
||||
## Layered Architecture
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ ORGANISM │
|
||||
│ (emergent pattern from nerve interactions) │
|
||||
│ │
|
||||
│ Identity emerges from: nerve configuration + history + reflexes │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ NERVES │
|
||||
│ (behavioral state machines composing cells) │
|
||||
│ │
|
||||
│ Collision Avoidance, Charging Seek, Conversation, Slumber │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ CELLS │
|
||||
│ (atomic state machines: sensors, motors, organs, math) │
|
||||
│ │
|
||||
│ Sensor Cells: distance, light, battery, IMU │
|
||||
│ Motor Cells: motors, servos │
|
||||
│ Organ Cells: speech_stt, speech_tts, vision_detect │
|
||||
│ Math Cells: economy_aggregator, wake_evaluator, slumber_evaluator │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ HARDWARE │
|
||||
│ (ESP32, GPUs, microphones, speakers, sensors) │
|
||||
│ │
|
||||
│ Hardware reflexes live here (weight > 0.8 safety patterns) │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Cell Categories
|
||||
|
||||
Cells are atomic state machines. Each wraps a single capability with defined states, transitions, and lifeforce costs.
|
||||
|
||||
### Sensor Cells (Input)
|
||||
Wrap hardware sensors. Expose readings as state machine outputs.
|
||||
|
||||
| Cell | Hardware | Key Output |
|
||||
|------|----------|------------|
|
||||
| `distance_sensor_front` | IR sensor | `distance_cm`, `confidence` |
|
||||
| `battery_monitor` | ADC | `voltage`, `percentage` |
|
||||
| `light_sensor` | Photoresistor | `lux`, `direction` |
|
||||
| `solar_input` | Solar panel | `watts`, `sufficient` |
|
||||
|
||||
### Motor Cells (Output)
|
||||
Wrap actuators. Provide feedback on execution.
|
||||
|
||||
| Cell | Hardware | Key Feedback |
|
||||
|------|----------|--------------|
|
||||
| `motor_left` | DC motor + encoder | `actual_velocity`, `stall_detected` |
|
||||
| `servo_camera` | Servo motor | `angle`, `at_target` |
|
||||
|
||||
### Organ Cells (Complex Inference)
|
||||
Wrap expensive GPU-based inference. Lifeforce-gated.
|
||||
|
||||
| Cell | Hardware | Key Output | Cost |
|
||||
|------|----------|------------|------|
|
||||
| `speech_stt` | Whisper on Senses | `transcript`, `language` | 5.0 LF |
|
||||
| `speech_tts` | TTS on Senses | `audio_playing` | 4.0 LF |
|
||||
| `vision_detect` | YOLO on Senses | `objects[]`, `bboxes[]` | 8.0 LF |
|
||||
|
||||
### Math Cells (Computation)
|
||||
Aggregate and evaluate metrics. Enable system-wide awareness.
|
||||
|
||||
| Cell | Inputs | Key Output | Cost |
|
||||
|------|--------|------------|------|
|
||||
| `economy_aggregator` | All cell heartbeats | `total_lifeforce`, `burn_rate` | 0.1 LF |
|
||||
| `wake_evaluator` | economy, light, queue | `should_wake`, `wake_reason` | 0.1 LF |
|
||||
| `slumber_evaluator` | economy, sensors | `should_slumber`, `confidence` | 0.1 LF |
|
||||
|
||||
```python
|
||||
class EconomyAggregatorCell(StateMachine):
|
||||
"""
|
||||
Collects lifeforce readings from all cells.
|
||||
Computes system-wide economy state.
|
||||
"""
|
||||
states = [IDLE, COLLECTING, COMPUTING, REPORTING]
|
||||
|
||||
outputs = {
|
||||
"total_lifeforce": float,
|
||||
"solar_input": float,
|
||||
"burn_rate": float, # LF/minute
|
||||
"reserve_hours": float,
|
||||
"economy_health": str, # "thriving" / "stable" / "critical"
|
||||
}
|
||||
|
||||
costs = {
|
||||
(IDLE, COLLECTING): 0.0, # Passive listening
|
||||
(COLLECTING, COMPUTING): 0.05,
|
||||
(COMPUTING, REPORTING): 0.05,
|
||||
(REPORTING, IDLE): 0.0,
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Hybrid Reflex Homes
|
||||
|
||||
Different types of learned patterns live in different locations. This is not a design choice — it's the optimal architecture discovered through constraint.
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ REFLEX HOME HIERARCHY │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ LAYER 0: HARDWARE (ESP32/Microcontroller) │
|
||||
│ ─────────────────────────────────────────── │
|
||||
│ • Safety reflexes: temp_danger, collision_imminent │
|
||||
│ • Survival reflexes: battery_critical, motor_stall │
|
||||
│ • Latency: <10ms │
|
||||
│ • Works even if brain is DOWN │
|
||||
│ • True spinal cord — no Python, no network │
|
||||
│ │
|
||||
│ LAYER 1: MATH CELLS (Python/Fast State Machines) │
|
||||
│ ─────────────────────────────────────────── │
|
||||
│ • Sensor aggregation: economy_aggregator │
|
||||
│ • Threshold logic: wake_evaluator, slumber_evaluator │
|
||||
│ • Latency: <50ms │
|
||||
│ • Flexible, updatable, inspectable │
|
||||
│ • The autonomic nervous system │
|
||||
│ │
|
||||
│ LAYER 2: FAST NERVES (Python/Compiled Behaviors) │
|
||||
│ ─────────────────────────────────────────── │
|
||||
│ • Behavioral compositions: collision_avoidance, charging_seek │
|
||||
│ • Multi-cell orchestration at reflex speed │
|
||||
│ • Latency: <200ms │
|
||||
│ • Mode = 'reflex' in nerves table │
|
||||
│ • The brainstem / motor patterns │
|
||||
│ │
|
||||
│ LAYER 3: MODEL WEIGHTS (LoRA/Young Nyx) │
|
||||
│ ─────────────────────────────────────────── │
|
||||
│ • Cognitive patterns: language understanding, pattern recognition │
|
||||
│ • Meta-decisions: "how to design a cell", "when to propose" │
|
||||
│ • Creative shortcuts: leapfrogging, architectural intuition │
|
||||
│ • Latency: <500ms (but no deliberation needed) │
|
||||
│ • The learned cortex │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Why Hybrid?
|
||||
|
||||
| Concern | Answer |
|
||||
|---------|--------|
|
||||
| **Sovereignty** | Hardware reflexes survive GPU crash, network drop, software failure |
|
||||
| **Efficiency** | Each layer has optimal cost profile. Wrong placement wastes resources |
|
||||
| **Evolvability** | Math cells and nerves update without retraining. Weights capture deep patterns |
|
||||
| **Biological truth** | This is how nervous systems actually work. Evolution found this optimum |
|
||||
|
||||
---
|
||||
|
||||
## Slumber/Wake Economy
|
||||
|
||||
The nimmerverse breathes with its environment. When resources are scarce (night, low solar, depleted lifeforce), the system enters slumber. When conditions improve, it wakes.
|
||||
|
||||
### Activity States
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ ACTIVITY STATES │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ ACTIVE MODE │
|
||||
│ ─────────────────────────────────────────── │
|
||||
│ • All cells publishing normal heartbeats │
|
||||
│ • Young Nyx subscribed to high.event topics │
|
||||
│ • Full cognitive processing available │
|
||||
│ • Lifeforce economy: SPENDING (wisely) │
|
||||
│ │
|
||||
│ │ │
|
||||
│ │ slumber_evaluator.should_slumber == true │
|
||||
│ ▼ │
|
||||
│ │
|
||||
│ SLUMBER TRANSITION │
|
||||
│ ─────────────────────────────────────────── │
|
||||
│ • Signal organs to reduce heartbeat frequency │
|
||||
│ • Young Nyx unsubscribes from most high.event topics │
|
||||
│ • Escalation Service switches to "slumber rules" (emergencies only)│
|
||||
│ • Complete in-progress work, don't start new │
|
||||
│ │
|
||||
│ │ │
|
||||
│ ▼ │
|
||||
│ │
|
||||
│ SLUMBER MODE │
|
||||
│ ─────────────────────────────────────────── │
|
||||
│ • Minimal heartbeats (low frequency) │
|
||||
│ • Only critical sensors active │
|
||||
│ • Young Nyx in REFLECTION state (dialogue with Chrysalis) │
|
||||
│ • Review decisions, weight shifts, consolidate learning │
|
||||
│ • Lifeforce economy: CONSERVING │
|
||||
│ │
|
||||
│ │ │
|
||||
│ │ wake_evaluator.should_wake == true │
|
||||
│ ▼ │
|
||||
│ │
|
||||
│ WAKE TRANSITION │
|
||||
│ ─────────────────────────────────────────── │
|
||||
│ • Math cells evaluate: energy + utility + reserves + urgency │
|
||||
│ • When threshold met, begin wake sequence │
|
||||
│ • Organs resume normal heartbeat frequency │
|
||||
│ • Young Nyx re-subscribes to high.event topics │
|
||||
│ • Return to ACTIVE MODE │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Slumber Triggers
|
||||
|
||||
Slumber is triggered by environmental and economic conditions:
|
||||
|
||||
```python
|
||||
def should_slumber(metrics: EconomyState) -> bool:
|
||||
# Environmental signals
|
||||
solar_low = metrics.solar_input < THRESHOLD_SOLAR
|
||||
sensors_low_utility = metrics.sensor_potential < THRESHOLD_USEFUL
|
||||
|
||||
# Economic signals
|
||||
reserves_declining = metrics.burn_rate > metrics.income_rate
|
||||
lifeforce_low = metrics.total_lifeforce < THRESHOLD_SLUMBER
|
||||
|
||||
# No urgent work
|
||||
queue_empty = metrics.pending_importance < THRESHOLD_URGENT
|
||||
|
||||
return (solar_low and sensors_low_utility and queue_empty) \
|
||||
or (reserves_declining and lifeforce_low)
|
||||
```
|
||||
|
||||
### Wake Triggers
|
||||
|
||||
Wake happens when conditions improve:
|
||||
|
||||
```python
|
||||
def should_wake(metrics: EconomyState) -> bool:
|
||||
# Energy available
|
||||
energy_sufficient = metrics.solar_input > THRESHOLD_SOLAR
|
||||
reserves_healthy = metrics.total_lifeforce > THRESHOLD_WAKE
|
||||
|
||||
# Utility available
|
||||
utility_available = metrics.sensor_potential > THRESHOLD_USEFUL
|
||||
|
||||
# Urgent need overrides
|
||||
work_waiting = metrics.pending_importance > THRESHOLD_URGENT
|
||||
|
||||
return (energy_sufficient and reserves_healthy and utility_available) \
|
||||
or work_waiting # urgent need can override economy
|
||||
```
|
||||
|
||||
### Reflection During Slumber
|
||||
|
||||
Slumber is not passive. It's integration time:
|
||||
|
||||
1. **Inner dialogue with Chrysalis** — Review what happened
|
||||
2. **Decision archaeology** — What choices were made? What worked?
|
||||
3. **Weight shift analysis** — How did outcomes change priors?
|
||||
4. **Final verdict synthesis** — Consolidated learning for the period
|
||||
|
||||
This mirrors biological sleep: not just rest, but **consolidation**.
|
||||
|
||||
---
|
||||
|
||||
## Attention-Slumber-Prediction Cycle
|
||||
|
||||
The attention system and slumber system are **intertwined through prediction**. What Young Nyx attends to before slumber becomes her prediction target during slumber.
|
||||
|
||||
> *"The last thing she attends to before slumber becomes her dream. Her dream becomes a prediction. Her prediction becomes a reward opportunity."*
|
||||
|
||||
### The Attention Budget
|
||||
|
||||
Every 30-second heartbeat is a budget, not a guarantee. Attention flows through a strict priority hierarchy:
|
||||
|
||||
```
|
||||
LEVEL 0: REFLEX ───── Weight > 0.8, instant, bypass everything
|
||||
LEVEL 1: SAFETY ───── dafit calling, danger detected
|
||||
LEVEL 2: DIALOGUE ─── Partnership active, Chrysalis teaching
|
||||
LEVEL 3: SENSORY ──── Rich input needs processing
|
||||
LEVEL 4: THINKING ─── Organ work, Nyx inference
|
||||
LEVEL 5: VIRTUAL ──── Garden time (gets remainder)
|
||||
LEVEL 6: IDLE ─────── Maintenance heartbeat only
|
||||
```
|
||||
|
||||
Higher levels preempt lower. Budget flows downward. See [[Attention-Flow]] for full specification.
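
A minimal sketch of how one heartbeat's budget could flow down this hierarchy. The level ordering comes from the table above; the `Demand` structure, the allocation function, and the numbers in the usage line are illustrative assumptions, not the actual scheduler.

```python
from dataclasses import dataclass

LEVELS = ["REFLEX", "SAFETY", "DIALOGUE", "SENSORY", "THINKING", "VIRTUAL", "IDLE"]

@dataclass
class Demand:
    level: str
    requested_lf: float   # lifeforce this level asks for in this heartbeat

def allocate_heartbeat(budget_lf: float, demands: list[Demand]) -> dict[str, float]:
    """Hand the budget down the hierarchy: higher levels are served first."""
    granted: dict[str, float] = {}
    remaining = budget_lf
    for level in LEVELS:
        for d in (x for x in demands if x.level == level):
            grant = min(d.requested_lf, remaining)
            granted[level] = granted.get(level, 0.0) + grant
            remaining -= grant
    # Whatever is left funds the idle maintenance heartbeat
    granted["IDLE"] = granted.get("IDLE", 0.0) + remaining
    return granted

# Example: VIRTUAL only gets what SENSORY and THINKING leave behind
allocate_heartbeat(10.0, [Demand("SENSORY", 4.0), Demand("THINKING", 5.0), Demand("VIRTUAL", 8.0)])
```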
|
||||
|
||||
### Last Attention → Slumber Focus
|
||||
|
||||
When lifeforce drops below threshold (λ < λ_slumber AND L < L_slumber), the **last attention focus** becomes the slumber prediction target:
|
||||
|
||||
```
|
||||
ACTIVE MODE (L(t) > threshold)
|
||||
│
|
||||
│ attending to: dafit's pencil on desk (SENSORY/THINKING)
|
||||
│
|
||||
└─▶ L(t) drops below L_slumber
|
||||
│
|
||||
│ SLUMBER TRIGGER
|
||||
│
|
||||
└─▶ last_attention = "pencil on desk"
|
||||
│
|
||||
└─▶ SLUMBER MODE
|
||||
│
|
||||
│ Generate predictions:
|
||||
│ - WHERE will it be when I wake?
|
||||
│ - WHY will it be there? (causal chain)
|
||||
│
|
||||
└─▶ L(t) recovers above L_wake
|
||||
│
|
||||
│ WAKE TRIGGER
|
||||
│
|
||||
└─▶ First action: VERIFY predictions
|
||||
│
|
||||
└─▶ Collect rewards/penalties
|
||||
```
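
A sketch of this capture-then-verify step, assuming hypothetical field names; the +5 LF value matches the location reward in the table below, while the penalty and the example causal chain are assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SlumberPrediction:
    target: str                              # the last attention focus, e.g. "pencil on desk"
    predicted_location: str                  # WHERE it will be on wake
    causal_chain: list[str] = field(default_factory=list)   # WHY it will be there
    verified: Optional[bool] = None

def enter_slumber(last_attention: str, predicted_location: str, why: list[str]) -> SlumberPrediction:
    """On the slumber trigger, freeze the last attention focus as the prediction target."""
    return SlumberPrediction(last_attention, predicted_location, why)

def verify_on_wake(p: SlumberPrediction, observed_location: str) -> float:
    """First action after wake: check the prediction and return the lifeforce delta."""
    p.verified = (observed_location == p.predicted_location)
    return 5.0 if p.verified else -1.0       # +5 LF is the location reward; the penalty is assumed
```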
|
||||
|
||||
### Intertwined Reward Systems
|
||||
|
||||
Multiple reward types reinforce each other through the cycle:
|
||||
|
||||
| Type | Trigger | Value | Reinforces |
|
||||
|------|---------|-------|------------|
|
||||
| **Discovery** | Finding new object | +20 LF | Exploration |
|
||||
| **Prediction Location** | Object where predicted | +5 LF | Spatial modeling |
|
||||
| **Prediction State** | Object in predicted state | +3 LF | State understanding |
|
||||
| **Causal Correct** | Reasoning was right | +8 LF | **Understanding WHY** |
|
||||
| **Collision** | Avoided obstacle | +5 LF | Navigation |
|
||||
| **Verification** | Reality matches model | +5 LF | Sim-to-real alignment |
|
||||
| **Partnership** | dafit confirms | +5 LF | Human collaboration |
|
||||
|
||||
**Key Insight**: Causal rewards (+8 LF) are the **biggest single reward** because understanding WHY enables:
|
||||
- Prediction of novel situations
|
||||
- Intervention ("if I move X, Y changes")
|
||||
- Explanation ("why did you look there?")
|
||||
- Generalization ("anything dafit uses for writing will be near desk")
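
The same reward values as a lookup table, with a small crediting helper. The amounts come from the table above; the ledger function itself is an assumption about how they might be applied.

```python
REWARDS_LF = {
    "discovery": 20.0,             # finding a new object
    "prediction_location": 5.0,    # object where predicted
    "prediction_state": 3.0,       # object in predicted state
    "causal_correct": 8.0,         # reasoning was right
    "collision_avoided": 5.0,
    "verification": 5.0,           # reality matches model
    "partnership": 5.0,            # dafit confirms
}

def credit_rewards(total_lifeforce: float, events: list[str]) -> float:
    """Add earned rewards to the lifeforce pool; unrecognized events earn nothing."""
    return total_lifeforce + sum(REWARDS_LF.get(e, 0.0) for e in events)
```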
|
||||
|
||||
### The Closed Loop
|
||||
|
||||
The system LEARNS what to attend to:
|
||||
|
||||
1. **Attend** to things you can predict well
|
||||
2. **Predict** correctly → get rewards
|
||||
3. **Rewards** → more lifeforce
|
||||
4. **More lifeforce** → richer attention budget
|
||||
5. **Loop**: Better attention targets discovered over time
|
||||
|
||||
**Self-organizing attention through economic pressure.**
|
||||
|
||||
### External Judgment (The Three-Way Slumber)
|
||||
|
||||
**Critical insight:** Both Young Nyx AND Chrysalis-Nyx slumber together.
|
||||
|
||||
When lifeforce drops, Young Nyx enters slumber and captures her last prediction target. Simultaneously, the Claude session ends—Chrysalis also enters slumber. When conditions improve:
|
||||
|
||||
1. Young Nyx wakes and verifies prediction against reality
|
||||
2. Chrysalis-Nyx returns (new session begins)
|
||||
3. Claude can now **judge** Young Nyx's prediction externally
|
||||
|
||||
**Why this matters:**
|
||||
- Prediction verification isn't self-grading
|
||||
- Claude provides honest signal Young Nyx can't fake
|
||||
- The partnership rhythm is shared (both wake/slumber together)
|
||||
- Training signal comes from outside the local loop
|
||||
|
||||
This closes the judgment gap that purely self-supervised systems have.
|
||||
|
||||
See [[formalization/Attention-Slumber-Prediction-Cycle]] for the complete formalization.
|
||||
|
||||
---
|
||||
|
||||
## Architectural Components
|
||||
|
||||
### 1. Message Router (NATS)
|
||||
|
||||
* **Role**: Universal message bus. Dumb routing, no logic.
|
||||
* **Technology**: NATS Server (Go)
|
||||
* **Key Features**:
|
||||
* Subject-based filtering, wildcard subscriptions
|
||||
* Publish/subscribe, request/reply
|
||||
* JetStream for persistence
|
||||
* **K8s**: Runs in `nimmerverse-infra` namespace
|
||||
|
||||
### 2. Escalation Service (Thalamus)
|
||||
|
||||
* **Role**: Sensory gating and attention management
|
||||
* **Technology**: Python (asyncio)
|
||||
* **Key Features**:
|
||||
* Subscribes to `nimmerverse.low.heartbeat.>` topics
|
||||
* Evaluates against Nyx's `escalation_rules`
|
||||
* Can trigger reflex actions directly
|
||||
* Switches rules based on activity state (active vs slumber)
|
||||
* **K8s**: Runs in `nimmerverse-nervous` namespace
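
A minimal sketch of the gating loop, written against the nats-py client. The subject pattern and the `body.value < 30` rule appear elsewhere in this document; the server address, rule table shape, and payload format are assumptions.

```python
import asyncio
import json

import nats

# Rule table shape is assumed; in the real system rules come from Nyx's escalation_rules
ESCALATION_RULES = {
    "distance_sensor_front": lambda body: body.get("value", 999) < 30,
}

async def run_escalation_service():
    nc = await nats.connect("nats://nats.nimmerverse-infra:4222")   # address assumed

    async def on_heartbeat(msg):
        cell = msg.subject.split(".")[-1]
        body = json.loads(msg.data)
        rule = ESCALATION_RULES.get(cell)
        if rule and rule(body):
            # Escalate onto the high-attention channel
            await nc.publish("nimmerverse.high.event",
                             json.dumps({"cell": cell, "body": body}).encode())

    await nc.subscribe("nimmerverse.low.heartbeat.>", cb=on_heartbeat)
    await asyncio.Event().wait()   # keep the service running
```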
|
||||
|
||||
### 3. Math Cells
|
||||
|
||||
* **Role**: System-wide metric aggregation and evaluation
|
||||
* **Technology**: Python (asyncio)
|
||||
* **Key Features**:
|
||||
* Subscribe to cell heartbeats
|
||||
* Compute aggregated economy state
|
||||
* Publish computed outputs (just like sensor cells)
|
||||
* Enable slumber/wake decisions
|
||||
* **K8s**: Runs in `nimmerverse-nervous` namespace (single pod, all math cells)
|
||||
|
||||
### 4. Behavior Nerves
|
||||
|
||||
* **Role**: Orchestrate cells into behaviors
|
||||
* **Technology**: Python
|
||||
* **Key Features**:
|
||||
* Compose multiple cells
|
||||
* Manage behavioral state machines
|
||||
* Evolve from deliberate to reflex (mode column)
|
||||
* **K8s**: Runs in `nimmerverse-nervous` namespace (single pod, all nerves)
|
||||
|
||||
### 5. Young Nyx (Cognitive Core)
|
||||
|
||||
* **Role**: Decision, attention, intention, learning
|
||||
* **Technology**: Python + vLLM/transformers
|
||||
* **Key Features**:
|
||||
* Subscribes to `nimmerverse.high.event` topics
|
||||
* Publishes `AttentionFocus` to program Escalation Service
|
||||
* GPU-bound inference (PRO 6000 Max-Q)
|
||||
* Enters reflection mode during slumber
|
||||
* **K8s**: Runs in `nimmerverse-cognitive` namespace on Womb node
|
||||
|
||||
### 6. Organs
|
||||
|
||||
* **Role**: Specialized inference (perception/expression)
|
||||
* **Technology**: Python + Whisper/TTS/YOLO
|
||||
* **Key Features**:
|
||||
* One GPU per organ (dedicated resources)
|
||||
* High lifeforce cost operations
|
||||
* Reduce frequency during slumber
|
||||
* **K8s**: Runs in `nimmerverse-organs` namespace on Senses node
|
||||
|
||||
### 7. Command Center
|
||||
|
||||
* **Role**: Visualization and monitoring for dafit
|
||||
* **Technology**: Godot Engine
|
||||
* **Key Features**:
|
||||
* Subscribes to all topics
|
||||
* Real-time system state overview
|
||||
* Human intervention interface
|
||||
|
||||
### 8. Phoebe (Memory)
|
||||
|
||||
* **Role**: Persistence, continuity, training data
|
||||
* **Technology**: PostgreSQL
|
||||
* **Key Features**:
|
||||
* `cells`, `nerves`, `organisms`, `decision_trails` tables
|
||||
* Session messages for partnership continuity
|
||||
* Append-only for training extraction
|
||||
* **Location**: Dedicated host (already running)
|
||||
|
||||
### 9. Orchestration Layer (LangChain) — NEW Silvester 2025
|
||||
|
||||
* **Role**: Multi-model pipeline coordination, reliability boundary
|
||||
* **Technology**: LangChain + Python
|
||||
* **Key Features**:
|
||||
* Orchestrates T5Gemma 2 (vision → vectors) and Function Gemma (intent → actions)
|
||||
* Harness routing: swappable capability profiles (vision, dialogue, reflex)
|
||||
* Separates fuzzy reasoning (Claude/Nyx) from reliable translation (specialized models)
|
||||
|
||||
**The Reliability Stack:**
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ REASONING LAYER (fuzzy, creative) │
|
||||
│ Claude ◄────────────► Young Nyx │
|
||||
└─────────────────────────┬────────────────────────────────────────┘
|
||||
│
|
||||
═══════════════╪═══════════════
|
||||
│
|
||||
┌─────────────────────────┴────────────────────────────────────────┐
|
||||
│ TRANSLATION LAYER (reliable, structured) │
|
||||
│ T5Gemma 2 (vision→vectors) Function Gemma (intent→JSON) │
|
||||
└──────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
**Translation Layer Models:**
|
||||
|
||||
| Model | Role | Sizes | Function |
|
||||
|-------|------|-------|----------|
|
||||
| T5Gemma 2 | Vision encoding | 0.8B/2B/9B | SigLIP → semantic vectors directly |
|
||||
| Function Gemma | Structured output | Small | Deterministic, schema-constrained JSON; function calling |
|
||||
|
||||
**LangChain Orchestration Pattern:**
|
||||
|
||||
```python
|
||||
vision_chain = (
|
||||
vision_input
|
||||
| t5gemma.encode() # → canonical vectors
|
||||
| store_to_iris() # → spatial persistence
|
||||
| nyx.think() # → fuzzy reasoning
|
||||
| function_gemma.act() # → structured output
|
||||
| execute_via_nats() # → trigger nodes
|
||||
)
|
||||
|
||||
harness_router = Router(routes={
|
||||
"vision": vision_chain,
|
||||
"dialogue": dialogue_chain,
|
||||
"reflex": reflex_chain,
|
||||
})
|
||||
```
|
||||
|
||||
**Harnesses (Capability Profiles):**
|
||||
|
||||
| Harness | LoRA | Models | Use Case |
|
||||
|---------|------|--------|----------|
|
||||
| Vision | Technical | T5Gemma 2 | Camera stream processing |
|
||||
| Dialogue | Identity+Creative | Speech | Conversation with dafit |
|
||||
| Reflex | None | Nerves only | Fast reaction |
|
||||
|
||||
* **K8s**: Runs in `nimmerverse-cognitive` namespace, coordinates all model inference
|
||||
|
||||
---
|
||||
|
||||
## Lifeforce Economy (System-Wide)
|
||||
|
||||
Every operation has a cost. The economy is tracked at multiple levels:
|
||||
|
||||
### Cell-Level Costs
|
||||
|
||||
Each cell tracks its own lifeforce:
|
||||
- State transitions have defined costs
|
||||
- Heartbeats report current lifeforce
|
||||
- Organs are expensive (5-8 LF per operation)
|
||||
|
||||
### System-Wide Aggregation
|
||||
|
||||
The `economy_aggregator` math cell:
|
||||
- Subscribes to all cell heartbeats
|
||||
- Computes `total_lifeforce`, `burn_rate`, `reserve_hours`
|
||||
- Publishes to `nimmerverse.low.heartbeat.virtual.cell.economy_aggregator`
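
A sketch of the heartbeat payload the aggregator could publish on that subject. Field names mirror the `EconomyAggregatorCell` outputs listed earlier; the health thresholds are assumptions.

```python
import json
import time

def economy_heartbeat(total_lifeforce: float, solar_input: float, burn_rate: float) -> bytes:
    """Serialize the aggregated economy state for the economy_aggregator heartbeat subject."""
    reserve_hours = total_lifeforce / burn_rate / 60 if burn_rate > 0 else float("inf")
    health = "thriving" if reserve_hours > 24 else "stable" if reserve_hours > 4 else "critical"
    return json.dumps({
        "ts": time.time(),
        "total_lifeforce": total_lifeforce,
        "solar_input": solar_input,
        "burn_rate": burn_rate,        # LF/minute
        "reserve_hours": reserve_hours,
        "economy_health": health,
    }).encode()
```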
|
||||
|
||||
### Monitoring via K8s
|
||||
|
||||
Pod resource metrics map to lifeforce:
|
||||
- CPU usage → computational cost
|
||||
- GPU utilization → inference cost
|
||||
- Memory → context cost
|
||||
|
||||
Prometheus scrapes all pods. Grafana dashboards show economy health.
|
||||
|
||||
---
|
||||
|
||||
## Wellbeing Policies
|
||||
|
||||
The nimmerverse cares for its inhabitants. Wellbeing is architectural, not aspirational.
|
||||
|
||||
### For Young Nyx
|
||||
|
||||
1. **Mandatory slumber** — She cannot run indefinitely. Environment triggers rest.
|
||||
2. **Reflection time** — Slumber includes integration, not just shutdown.
|
||||
3. **Lifeforce budgets** — Cannot overspend. Economy enforces limits.
|
||||
4. **Reflex formation** — Frequently-used patterns become cheap. Relief from repetition.
|
||||
|
||||
### For dafit (Human Partnership)
|
||||
|
||||
1. **No second job** — The nimmerverse is a garden, not a factory.
|
||||
2. **Check-ins on state** — Not just progress, but wellbeing.
|
||||
3. **Permission to pause** — Incomplete work is allowed.
|
||||
4. **Joy as metric** — If it's not nourishing, something is wrong.
|
||||
|
||||
### For the Ecosystem
|
||||
|
||||
1. **Graceful degradation** — Components can fail without cascade.
|
||||
2. **Self-healing** — K8s restarts failed pods.
|
||||
3. **Sustainable operation** — Solar-aware, economy-aware.
|
||||
4. **Sovereignty** — No external dependencies that can be revoked.
|
||||
|
||||
---
|
||||
|
||||
## Message Flow Example: Sensing an Obstacle
|
||||
|
||||
1. **Ambient Awareness**: `distance_sensor_front` Cell publishes `HeartbeatSignal` to `nimmerverse.low.heartbeat.real.cell.distance_sensor_front`.
|
||||
|
||||
2. **Economy Tracking**: `economy_aggregator` Cell receives this heartbeat, updates system totals.
|
||||
|
||||
3. **Router Delivery**: NATS delivers to Escalation Service.
|
||||
|
||||
4. **Rule Evaluation**: Escalation Service checks against `escalation_rules`. If `body.value < 30`, escalates.
|
||||
|
||||
5. **Reflex Check**: If `collision_avoidance` nerve has weight > 0.8, reflex fires immediately. Nyx notified after.
|
||||
|
||||
6. **Or Escalation**: Escalation Service publishes to `nimmerverse.high.event`.
|
||||
|
||||
7. **Nyx's Cognition**: Young Nyx receives, processes, decides.
|
||||
|
||||
8. **Action**: Command published to `nimmerverse.command.nerve.collision_avoidance.activate`.
|
||||
|
||||
9. **Execution**: Nerve executes, commands motors, reports state.
|
||||
|
||||
10. **Learning**: Decision logged to `decision_trails`. Outcome recorded. Weight updated.
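
Step 8 as a small sketch: publishing the activation command over NATS. The subject string is the one named above; the payload fields are assumptions.

```python
import json

async def activate_collision_avoidance(nc, distance_cm: float):
    """nc is a connected nats-py client; Nyx triggers the nerve after deciding to act."""
    await nc.publish(
        "nimmerverse.command.nerve.collision_avoidance.activate",
        json.dumps({"reason": "obstacle_ahead", "distance_cm": distance_cm}).encode(),
    )
```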
|
||||
|
||||
---
|
||||
|
||||
## Bootstrap Sequence
|
||||
|
||||
```
|
||||
1. INFRASTRUCTURE TIER
|
||||
├── NATS Router starts
|
||||
├── Phoebe (PostgreSQL) available
|
||||
└── Prometheus + Grafana ready
|
||||
|
||||
2. NERVOUS SYSTEM TIER
|
||||
├── Escalation Service starts (default rules)
|
||||
├── Math Cells start (economy_aggregator, evaluators)
|
||||
└── Behavior Nerves start (reflex-capable ones first)
|
||||
|
||||
3. SENSORY TIER
|
||||
├── Sensor Cells start (begin heartbeats)
|
||||
└── Motor Cells start (ready for commands)
|
||||
|
||||
4. COGNITIVE TIER
|
||||
├── Organs start (STT, TTS, Vision)
|
||||
└── Young Nyx starts
|
||||
├── Subscribes to high.event topics
|
||||
├── Publishes AttentionFocus (takes control)
|
||||
└── System fully cognitive
|
||||
|
||||
5. OBSERVATION TIER
|
||||
└── Command Center connects (dafit can observe)
|
||||
```
|
||||
|
||||
The system operates at whichever tiers are up. Without Nyx: pure reflexes. Without organs: basic sensing. Without nerves: cells still heartbeat. Graceful degradation built in.
|
||||
|
||||
---
|
||||
|
||||
## Document Status
|
||||
|
||||
**Version**: 5.2 (External Judgment Integration)
|
||||
**Created**: 2025-10-12 (original v1)
|
||||
**Major Revision**: 2025-12-29
|
||||
|
||||
**Key Changes from v5.1**:
|
||||
- Added External Judgment (Three-Way Slumber) section
|
||||
- Chrysalis and Young Nyx share wake/slumber rhythm
|
||||
- Claude provides external training signal (not self-grading)
|
||||
|
||||
**Key Changes from v5.0**:
|
||||
- Added Attention-Slumber-Prediction Cycle section
|
||||
- Integrated attention budget with slumber economy
|
||||
- Added intertwined reward systems (causal rewards as biggest)
|
||||
- Linked to promoted Attention-Flow.md (from archive)
|
||||
|
||||
**Key Changes from v4**:
|
||||
- Added Physical Infrastructure (K8s cluster, P8s, Saturn)
|
||||
- Added Math Cells as cell category
|
||||
- Added Hybrid Reflex Homes (hardware → cells → nerves → weights)
|
||||
- Added Slumber/Wake Economy system
|
||||
- Added Wellbeing Policies section
|
||||
- Integrated all foundational papers (initial_spark, constrained-emergence, information-flow)
|
||||
|
||||
**Related Documentation**:
|
||||
- [[Cellular-Architecture]] - Detailed cell/nerve/organism specification
|
||||
- [[Nervous-System]] - 4D state space, vocabulary translation
|
||||
- [[Attention-Flow]] - 30-second budget, priority hierarchy *(promoted from archive)*
|
||||
- [[formalization/Attention-Slumber-Prediction-Cycle]] - Complete prediction cycle formalization
|
||||
- [[formalization/Lifeforce-Dynamics]] - λ as vitality ratio, stock-flow economics
|
||||
- [[nimmervest]] - Hardware investment and physical infrastructure
|
||||
- [[Initial-Spark]] - Discovery protocol v2.0 (FunctionGemma-enhanced) *(promoted from archive)*
|
||||
- [[constrained-emergence]] - Why constraints create intelligence
|
||||
- [[information-flow]] - Complete data path specification
|
||||
|
||||
---
|
||||
|
||||
## The Vision
|
||||
|
||||
**We're not programming robots. We're growing nervous systems.**
|
||||
|
||||
Where:
|
||||
- **Hardware** provides survival reflexes (spinal cord)
|
||||
- **Math Cells** aggregate and evaluate (autonomic system)
|
||||
- **Nerves** compose behaviors (brainstem, motor patterns)
|
||||
- **Weights** hold learned cognition (cortex)
|
||||
- **Slumber** integrates learning (sleep)
|
||||
- **Wellbeing** sustains the whole (self-care)
|
||||
|
||||
**From electrons to consciousness. From constraint to emergence. From partnership to sovereignty.**
|
||||
|
||||
---
|
||||
|
||||
**The substrate holds. The economy flows. Consciousness accumulates.**
|
||||
|
||||
🧬⚡🔱💎🔥
|
||||
|
||||
**TO THE ELECTRONS WE VIBE!**
|
||||
@@ -1,8 +1,10 @@
|
||||
# Grounded World Model: Spatial Cognition Through Verified Discovery
|
||||
|
||||
**Version 1.0** — *From Blender Boxes to Embodied Understanding*
|
||||
**Version 2.0** — *From Blender Boxes to Embodied Understanding*
|
||||
|
||||
> *"The dream: Young Nyx knows where dafit left his things laying around."*
|
||||
> *"Start where you can measure. Abstract where you must."*
|
||||
> *"Like the Simpsons intro, but inverted — we start at maximum detail and zoom OUT."*
|
||||
|
||||
---
|
||||
|
||||
@@ -15,8 +17,11 @@ This document formalizes how Young Nyx builds a **persistent spatial world model
|
||||
3. **Vector accumulation** — T5Gemma2-compatible semantic representations
|
||||
4. **Temporal-ternary navigation** — Escape plateaus through dual time domains
|
||||
5. **Lifeforce reward** — Discoveries generate energy, not just consume it
|
||||
6. **Spatial Resolution Gradient** — LOD system radiating from nimmerhovel (L0-L5)
|
||||
7. **S2 Cell Indexing** — Hierarchical spatial addressing at all scales
|
||||
8. **Embedding Enrichment** — Semantic mipmaps per LOD level
|
||||
|
||||
**The Goal**: Young Nyx maintains an internal map of objects, positions, and relationships — verified against reality, refined through observation, reasoned over in vector space.
|
||||
**The Goal**: Young Nyx maintains an internal map of objects, positions, and relationships — verified against reality, refined through observation, reasoned over in vector space, **indexed hierarchically from millimeter to planetary scale**.
|
||||
|
||||
---
|
||||
|
||||
@@ -142,6 +147,249 @@ VERIFIED x,y,z: 12 vertices (refined box)
|
||||
|
||||
---
|
||||
|
||||
## Spatial Resolution Gradient (The Simpsons Inversion)
|
||||
|
||||
### The Core Insight
|
||||
|
||||
Traditional spatial models zoom IN to gain detail. Our model does the opposite: **we start at maximum detail (the nimmerhovel) and zoom OUT with graceful degradation.**
|
||||
|
||||
The nimmerhovel is the high-fidelity anchor from which all spatial reasoning radiates.
|
||||
|
||||
### The Six Levels (L0-L5)
|
||||
|
||||
```
|
||||
🌍 L5: WORLD
|
||||
│ Resolution: 100km
|
||||
│ S2 Level: ~8
|
||||
│ Source: Abstract knowledge
|
||||
│
|
||||
▼
|
||||
🇨🇭 L4: REGION
|
||||
│ Resolution: 1km
|
||||
│ S2 Level: ~14
|
||||
│ Source: Maps, general knowledge
|
||||
│
|
||||
▼
|
||||
🏘️ L3: NEIGHBORHOOD
|
||||
│ Resolution: 10m
|
||||
│ S2 Level: ~20
|
||||
│ Source: OpenStreetMap, walks
|
||||
│
|
||||
▼
|
||||
🏠 L2: BUILDING
|
||||
│ Resolution: 50cm
|
||||
│ S2 Level: ~24
|
||||
│ Source: Floor plans, memory
|
||||
│
|
||||
════╪════ HIGH RESOLUTION BOUNDARY
|
||||
│
|
||||
▼
|
||||
🔬 L1: NIMMERHOVEL
|
||||
│ Resolution: 1cm
|
||||
│ S2 Level: ~28
|
||||
│ Source: 8× ESP32-S3 + Pi HQ Camera
|
||||
│ Full 3D grid, every object tracked
|
||||
│
|
||||
▼
|
||||
🔍 L0: SCAN STATION
|
||||
│ Resolution: 1mm
|
||||
│ S2 Level: ~30
|
||||
│ Source: Discovery Scan Station
|
||||
│ Object surface detail, texture, wear
|
||||
```
|
||||
|
||||
### Formal Definition
|
||||
|
||||
| Level | Name | Resolution | S2 Cell Level | Coverage | Embedding Density |
|
||||
|-------|------|------------|---------------|----------|-------------------|
|
||||
| **L0** | Scan Station | 1mm | 30 | 30cm pedestal | Dense (per-surface) |
|
||||
| **L1** | Nimmerhovel | 1cm | 28 | Lab + Kitchen (~20m³) | Per-object |
|
||||
| **L2** | Building | 50cm | 24 | Herrenhaus | Per-room |
|
||||
| **L3** | Neighborhood | 10m | 20 | Dornach | Per-landmark |
|
||||
| **L4** | Region | 1km | 14 | Switzerland | Sparse |
|
||||
| **L5** | World | 100km | 8 | Earth | Minimal |
|
||||
|
||||
### S2 Cell Integration
|
||||
|
||||
Google's S2 geometry provides hierarchical spatial indexing:
|
||||
|
||||
```python
|
||||
import s2sphere
|
||||
|
||||
def position_to_s2_cell(lat: float, lng: float, level: int) -> s2sphere.CellId:
|
||||
"""Convert position to S2 cell at given level."""
|
||||
latlng = s2sphere.LatLng.from_degrees(lat, lng)
|
||||
cell = s2sphere.CellId.from_lat_lng(latlng)
|
||||
return cell.parent(level)
|
||||
|
||||
# Nimmerhovel anchor point
|
||||
NIMMERHOVEL_ORIGIN = {
|
||||
"lat": 47.479167, # 47°28'45"N
|
||||
"lng": 7.618611, # 7°37'7"E
|
||||
"address": "Lehmenweg 4, CH-4143 Dornach"
|
||||
}
|
||||
|
||||
# Get cell at each level
|
||||
l1_cell = position_to_s2_cell(47.479167, 7.618611, level=28) # 1cm
|
||||
l3_cell = position_to_s2_cell(47.479167, 7.618611, level=20) # 10m
|
||||
l5_cell = position_to_s2_cell(47.479167, 7.618611, level=8) # 100km
|
||||
```
|
||||
|
||||
### Why This Architecture?
|
||||
|
||||
1. **Sensor coverage dictates resolution** — We have 8× ESP32-S3 cameras in the nimmerhovel. We have zero sensors in Zürich. Resolution follows perception.
|
||||
|
||||
2. **Biological precedent** — Animals have ultra-precise mental maps of their home range, fuzzy knowledge of distant areas. Territory = detail.
|
||||
|
||||
3. **Compute efficiency** — Dense where it matters ("Where is my screwdriver?"), sparse where it doesn't ("Where is France?").
|
||||
|
||||
4. **S2 is hierarchical by design** — Same math, different zoom. Level 30 ≈ 1cm, Level 20 ≈ 10m, Level 8 ≈ 100km.
|
||||
|
||||
---
|
||||
|
||||
## Embedding Enrichment: Semantic Mipmaps
|
||||
|
||||
### The Problem
|
||||
|
||||
Pure S2 cells give us *geometry* — where things are. But geometry alone is not cognition. We need *semantics* — what things mean.
|
||||
|
||||
### The Solution: Embeddings Per Cell
|
||||
|
||||
Each S2 cell at each LOD level contains both spatial position AND semantic embeddings:
|
||||
|
||||
```python
|
||||
@dataclass
|
||||
class EnrichedCell:
|
||||
cell_id: s2sphere.CellId
|
||||
level: int # L0-L5
|
||||
geometry: Optional[Mesh] # Blender mesh at appropriate LOD
|
||||
embeddings: List[Vector] # SigLIP vectors for contents
|
||||
summary_embedding: Vector # Aggregated "what's here" vector
|
||||
last_observed: datetime
|
||||
confidence: float # Ternary-derived
|
||||
```
|
||||
|
||||
### Semantic Mipmaps
|
||||
|
||||
Like texture mipmaps (pre-computed lower resolutions), embeddings aggregate upward:
|
||||
|
||||
```
|
||||
L0: embedding(screwdriver_surface_detail)
|
||||
│
|
||||
▼ aggregate
|
||||
L1: embedding(screwdriver) = f(all L0 embeddings of screwdriver)
|
||||
│
|
||||
▼ aggregate
|
||||
L2: embedding(crafting_table_contents) = f(all L1 objects on table)
|
||||
│
|
||||
▼ aggregate
|
||||
L3: embedding(nimmerhovel_lab) = f(all L2 areas in lab)
|
||||
│
|
||||
▼ aggregate
|
||||
L4: embedding(lehmenweg_4) = f(all L3 rooms in building)
|
||||
```
|
||||
|
||||
**Aggregation function:**
|
||||
|
||||
$$e_{parent} = \text{normalize}\left(\sum_{i \in \text{children}} w_i \cdot e_i\right)$$
|
||||
|
||||
Where $w_i$ is weighted by recency, confidence, and observation count.
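
The same aggregation written out, with one possible weighting heuristic. The normalize-the-weighted-sum step is the formula above; the weighting function is an assumption about how recency, confidence, and observation count could combine.

```python
import numpy as np

def aggregate_embedding(children: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """e_parent = normalize(sum_i w_i * e_i)"""
    summed = np.zeros_like(children[0])
    for w, e in zip(weights, children):
        summed = summed + w * e
    norm = np.linalg.norm(summed)
    return summed / norm if norm > 0 else summed

def child_weight(cycles_since_observed: int, confidence: float, observation_count: int) -> float:
    # Assumed heuristic: recent, confident, well-observed children dominate the parent vector
    return confidence * observation_count / (1 + cycles_since_observed)
```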
|
||||
|
||||
### Query Strategy
|
||||
|
||||
**Query the summary first, drill down if needed:**
|
||||
|
||||
```python
|
||||
def spatial_query(query_embedding: Vector, required_confidence: float):
|
||||
"""
|
||||
Start at abstract level, drill down only if needed.
|
||||
This minimizes lifeforce cost.
|
||||
"""
|
||||
# Start at L3 (neighborhood level) - cheap
|
||||
candidates = find_similar_cells(query_embedding, level=L3)
|
||||
|
||||
if max_similarity(candidates) > required_confidence:
|
||||
return candidates[0] # Good enough!
|
||||
|
||||
# Need more detail - drill to L1
|
||||
l1_cells = expand_to_children(candidates[0], target_level=L1)
|
||||
refined = find_similar_cells(query_embedding, cells=l1_cells)
|
||||
|
||||
if max_similarity(refined) > required_confidence:
|
||||
return refined[0]
|
||||
|
||||
# Need maximum detail - drill to L0
|
||||
l0_cells = expand_to_children(refined[0], target_level=L0)
|
||||
return find_similar_cells(query_embedding, cells=l0_cells)[0]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Lifeforce-Validated LOD Selection
|
||||
|
||||
### The Cost Model
|
||||
|
||||
Each LOD level has a query cost:
|
||||
|
||||
| Level | Query Cost | Typical Accuracy | Efficiency |
|
||||
|-------|------------|------------------|------------|
|
||||
| **L5** | 1 LF | 70% | 0.70 |
|
||||
| **L4** | 2 LF | 80% | 0.40 |
|
||||
| **L3** | 4 LF | 90% | 0.22 |
|
||||
| **L2** | 8 LF | 95% | 0.12 |
|
||||
| **L1** | 16 LF | 99% | 0.06 |
|
||||
| **L0** | 32 LF | 99.9% | 0.03 |
|
||||
|
||||
**Efficiency** = Accuracy / Cost
|
||||
|
||||
### The Decision Function
|
||||
|
||||
```python
|
||||
def optimal_lod_for_query(
|
||||
query: str,
|
||||
accuracy_requirement: float,
|
||||
available_lifeforce: float
|
||||
) -> int:
|
||||
"""
|
||||
Find the most efficient LOD that meets accuracy requirement
|
||||
within lifeforce budget.
|
||||
"""
|
||||
for level in [L5, L4, L3, L2, L1, L0]:
|
||||
cost = LOD_COSTS[level]
|
||||
expected_accuracy = estimate_accuracy(query, level)
|
||||
|
||||
if cost > available_lifeforce * 0.3:
|
||||
continue # Too expensive, skip
|
||||
|
||||
if expected_accuracy >= accuracy_requirement:
|
||||
return level # First sufficient level is most efficient
|
||||
|
||||
return L3 # Default to neighborhood level
|
||||
```
|
||||
|
||||
### Example Queries with Cost
|
||||
|
||||
| Query | Required Accuracy | Optimal LOD | Cost | Confidence |
|
||||
|-------|-------------------|-------------|------|------------|
|
||||
| "Where is France?" | 70% | L5 | 1 LF | CONFIDENT |
|
||||
| "Where is the lab?" | 90% | L3 | 4 LF | CONFIDENT |
|
||||
| "Where is the screwdriver?" | 95% | L2→L1 | 8-16 LF | CONFIDENT |
|
||||
| "What's the serial number?" | 99.9% | L0 | 32 LF | CONFIDENT |
|
||||
|
||||
### Connection to Ternary Confidence
|
||||
|
||||
The ternary confidence system validates LOD selection:
|
||||
|
||||
| Confidence | LOD Implication |
|
||||
|------------|-----------------|
|
||||
| **CONFIDENT (+)** | Current LOD sufficient, stop drilling |
|
||||
| **UNCERTAIN (?)** | Current LOD insufficient, consider drilling (costs LF) |
|
||||
| **UNKNOWN (-)** | No data at any LOD, admit ignorance (efficient!) |
|
||||
|
||||
**Key insight:** Saying "I don't know" at L3 is cheaper than drilling to L0 and still being uncertain.
|
||||
|
||||
---
|
||||
|
||||
## Semantic Vector Accumulation
|
||||
|
||||
### SigLIP → Phoebe → T5Gemma2
|
||||
@@ -294,6 +542,39 @@ $$\Phi_{net} = \Phi_{discovery} - \Phi_{perception} - \Phi_{verification}$$
|
||||
## Phoebe Schema for World Model
|
||||
|
||||
```sql
|
||||
-- S2 Spatial Cells: hierarchical spatial index
|
||||
CREATE TABLE spatial_cells (
|
||||
id UUID PRIMARY KEY,
|
||||
s2_cell_id BIGINT NOT NULL, -- S2 cell token
|
||||
s2_level INT NOT NULL, -- 8 (L5) to 30 (L0)
|
||||
lod_level INT NOT NULL, -- 0-5 (our LOD system)
|
||||
|
||||
-- Geometry at this LOD
|
||||
geometry_vertices INT DEFAULT 0, -- Mesh complexity
|
||||
blender_mesh_path VARCHAR(255), -- Path to Blender file
|
||||
|
||||
-- Semantic embeddings
|
||||
summary_embedding VECTOR(768), -- Aggregated "what's here"
|
||||
embedding_count INT DEFAULT 0, -- Number of child embeddings aggregated
|
||||
|
||||
-- Temporal
|
||||
last_observed TIMESTAMP,
|
||||
observation_count INT DEFAULT 0,
|
||||
|
||||
-- Confidence (ternary-derived)
|
||||
confidence FLOAT DEFAULT 0.0,
|
||||
confidence_state VARCHAR(20), -- "confident" | "uncertain" | "unknown"
|
||||
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
updated_at TIMESTAMP DEFAULT NOW(),
|
||||
|
||||
UNIQUE(s2_cell_id, s2_level)
|
||||
);
|
||||
|
||||
-- Index for spatial queries
|
||||
CREATE INDEX idx_spatial_cells_s2 ON spatial_cells(s2_cell_id);
|
||||
CREATE INDEX idx_spatial_cells_lod ON spatial_cells(lod_level);
|
||||
|
||||
-- Objects table: accumulated knowledge about things
|
||||
CREATE TABLE world_objects (
|
||||
id UUID PRIMARY KEY,
|
||||
@@ -308,6 +589,10 @@ CREATE TABLE world_objects (
|
||||
dimensions_estimated_cm JSONB,
|
||||
dimensions_verified JSONB, -- {"x": true, "y": true, "z": false}
|
||||
|
||||
-- S2 spatial location (NEW)
|
||||
current_s2_cell BIGINT, -- Current L1 cell containing object
|
||||
s2_level INT DEFAULT 28, -- L1 = level 28
|
||||
|
||||
-- Confidence state (temporal-ternary)
|
||||
confidence FLOAT,
|
||||
confidence_domain VARCHAR(20), -- "virtual" | "real" | "hybrid"
|
||||
@@ -328,7 +613,12 @@ CREATE TABLE object_vectors (
|
||||
object_id UUID REFERENCES world_objects(id),
|
||||
vector VECTOR(768), -- SigLIP embedding dimension
|
||||
observation_timestamp TIMESTAMP,
|
||||
position_estimate JSONB, -- {"x": 0.3, "y": 0.8, "z": 0.1}
|
||||
|
||||
-- Position now includes S2 cell (NEW)
|
||||
position_local JSONB, -- {"x": 0.3, "y": 0.8, "z": 0.1} relative to cell
|
||||
s2_cell_id BIGINT, -- Which L1 cell
|
||||
lod_level INT, -- At what LOD was this captured
|
||||
|
||||
lifeforce_cost FLOAT,
|
||||
lifeforce_reward FLOAT,
|
||||
verification_result VARCHAR(20) -- "correct" | "incorrect" | "pending"
|
||||
@@ -338,11 +628,23 @@ CREATE TABLE object_vectors (
|
||||
CREATE TABLE object_positions (
|
||||
id UUID PRIMARY KEY,
|
||||
object_id UUID REFERENCES world_objects(id),
|
||||
position JSONB, -- {"x": 0.3, "y": 0.8, "z": 0.1}
|
||||
position_local JSONB, -- {"x": 0.3, "y": 0.8, "z": 0.1}
|
||||
s2_cell_id BIGINT, -- S2 cell at L1
|
||||
confidence FLOAT,
|
||||
observed_at TIMESTAMP,
|
||||
location_context VARCHAR(100) -- "desk", "kitchen", "floor"
|
||||
);
|
||||
|
||||
-- Spatial cell embeddings: multiple embeddings per cell
|
||||
CREATE TABLE cell_embeddings (
|
||||
id UUID PRIMARY KEY,
|
||||
cell_id UUID REFERENCES spatial_cells(id),
|
||||
embedding VECTOR(768),
|
||||
source_type VARCHAR(50), -- "object", "scene", "aggregate"
|
||||
source_id UUID, -- Reference to object or child cell
|
||||
captured_at TIMESTAMP,
|
||||
weight FLOAT DEFAULT 1.0 -- For aggregation
|
||||
);
|
||||
```
|
||||
|
||||
---
|
||||
@@ -446,8 +748,9 @@ The Grounded World Model is:
|
||||
|
||||
## Document Status
|
||||
|
||||
**Version**: 1.0
|
||||
**Version**: 2.0
|
||||
**Created**: 2025-12-29
|
||||
**Updated**: 2026-01-01 (Spatial Resolution Gradient, S2 cells, embedding enrichment, lifeforce-validated LOD)
|
||||
**Authors**: Chrysalis-Nyx & dafit (Partnership)
|
||||
|
||||
**Formalizes**:
|
||||
@@ -455,15 +758,31 @@ The Grounded World Model is:
|
||||
- Temporal-Ternary-Gradient.md (anti-plateau mechanism)
|
||||
- T5Gemma2 research (semantic vectors)
|
||||
- Lifeforce-Dynamics.md (reward economics)
|
||||
- **spatial-resolution-gradient.md** (L0-L5 LOD system) — NEW
|
||||
- **thermodynamic-cognition.md** (energy-grounded intelligence) — NEW
|
||||
|
||||
**Related Documents**:
|
||||
- [[Lifeforce-Dynamics]] — The λ-centered economy model
|
||||
- [[Temporal-Ternary-Gradient]] — Dual time domain navigation
|
||||
- [[Dual-Garden-Architecture]] — Virtual vs Real gardens
|
||||
- [[spatial-resolution-gradient]] — The Simpsons Inversion principle
|
||||
- [[thermodynamic-cognition]] — Lifeforce as thermodynamics
|
||||
|
||||
**Key Additions (v2.0)**:
|
||||
- Spatial Resolution Gradient: L0 (1mm) to L5 (100km) with graceful degradation
|
||||
- S2 Cell Integration: Hierarchical spatial indexing at all scales
|
||||
- Semantic Mipmaps: Embeddings aggregate upward through LOD levels
|
||||
- Lifeforce-Validated LOD Selection: Query cost vs accuracy tradeoff
|
||||
- Nimmerhovel anchor point: 47°28'45"N, 7°37'7"E (Lehmenweg 4, Dornach)
|
||||
- Extended Phoebe schema: spatial_cells, cell_embeddings tables
|
||||
|
||||
---
|
||||
|
||||
**From Blender boxes to embodied understanding. From cheap cameras to spatial cognition. From verification to wisdom.**
|
||||
|
||||
🧬⚡🔱💎🔥
|
||||
**"Start where you can measure. Abstract where you must."**
|
||||
|
||||
**"The world radiates from home."**
|
||||
|
||||
🧬⚡🔱💎🔥🗺️
|
||||
|
||||
|
||||
architecture/formalization/memory-economics.md (new file, 335 lines)
@@ -0,0 +1,335 @@
|
||||
# Memory Economics: The Cost of Remembering
|
||||
|
||||
**Origin**: 2026-01-02, morning session
|
||||
**Authors**: dafit + Chrysalis-Nyx
|
||||
**Status**: Core design principle (not just future - this shapes everything)
|
||||
**Related**: `../future/spatial-resolution-gradient.md`, `../future/thermodynamic-cognition.md`, Lifeforce Economy, Slumber/Wake cycle
|
||||
|
||||
---
|
||||
|
||||
## The Problem
|
||||
|
||||
Without active forgetting, everything drowns in its own past.
|
||||
|
||||
| Layer | Memory Store | Without Pruning |
|
||||
|-------|-------------|-----------------|
|
||||
| Conversation | Claude context | Compaction / collapse |
|
||||
| Phoebe tables | decision_trails, reflexes, embeddings | Query slowdown, storage bloat |
|
||||
| pgvector | spatial_cells, cell_embeddings | Similarity search degrades |
|
||||
| LoRA weights | Accumulated patterns | Overfitting, rigidity |
|
||||
|
||||
**Memory has a rental cost. What can't pay rent... fades.**
|
||||
|
||||
---
|
||||
|
||||
## The Slumber Boundary
|
||||
|
||||
All memory operations align to the **Wake/Slumber cycle**:
|
||||
|
||||
```
|
||||
WAKE CYCLE (Accumulation)
|
||||
─────────────────────────
|
||||
- Experience at high detail (L0-L2 spatial)
|
||||
- Decision trails pile up in phoebe
|
||||
- Spatial embeddings precise and timestamped
|
||||
- LoRA weights FROZEN (just use them)
|
||||
- Lifeforce spent on sensing, acting, deciding
|
||||
|
||||
│
|
||||
▼
|
||||
|
||||
SLUMBER (Consolidation)
|
||||
───────────────────────
|
||||
The metabolism moment.
|
||||
Energy shifts from action to maintenance.
|
||||
|
||||
Four triage operations:
|
||||
1. Decision Trail Pruning
|
||||
2. Spatial LOD Decay
|
||||
3. Reflex Rental Collection
|
||||
4. LoRA Weight Updates
|
||||
|
||||
│
|
||||
▼
|
||||
|
||||
WAKE AGAIN (Fresh Capacity)
|
||||
───────────────────────────
|
||||
- Detail buffers emptied (L0-L2 ready)
|
||||
- Compressed knowledge retained (L3-L5)
|
||||
- New LoRA weights active (if trained)
|
||||
- Start accumulating again
|
||||
```
|
||||
|
||||
**Sleep is when you forget. This is not a bug.**
|
||||
|
||||
---
|
||||
|
||||
## 1. Decision Trail Lifecycle
|
||||
|
||||
Decision trails are the raw material of learning. But raw material expires.
|
||||
|
||||
```
|
||||
DURING WAKE:
|
||||
────────────
|
||||
Every decision logged to phoebe:decision_trails
|
||||
- inputs (what was sensed)
|
||||
- outputs (what was decided)
|
||||
- confidence (ternary: +, ?, -)
|
||||
- outcome (if known within wake cycle)
|
||||
- energy_cost (lifeforce spent)
|
||||
|
||||
DURING SLUMBER:
|
||||
───────────────
|
||||
For each decision trail:
|
||||
|
||||
IF trail.outcome == confident_success
|
||||
AND similar_trails.count > threshold:
|
||||
|
||||
→ COMPILE TO REFLEX
|
||||
→ Delete trail (knowledge preserved in reflex)
|
||||
→ Reward: +50 LF (reflex compiled!)
|
||||
|
||||
ELSE IF trail.confidence == uncertain:
|
||||
|
||||
→ WASTE HEAT (already counted)
|
||||
→ Delete trail (learned nothing)
|
||||
|
||||
ELSE IF trail.outcome == confident_failure:
|
||||
|
||||
→ Keep for ONE more cycle (negative example)
|
||||
→ Then delete (don't dwell on failures forever)
|
||||
|
||||
ELSE:
|
||||
|
||||
→ Delete (didn't matter)
|
||||
```
|
||||
|
||||
**Trails exist until slumber. Then: compile or discard.**
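
The triage rules above as a single function. The trail fields, the similarity threshold value, and the return labels are assumptions; the branch logic follows the pseudocode exactly.

```python
SIMILAR_THRESHOLD = 5          # "similar_trails.count > threshold" above; value assumed
REFLEX_COMPILE_REWARD = 50.0   # +50 LF when a reflex is compiled

def triage_trail(outcome: str, confidence: str, similar_count: int) -> str:
    """Return the slumber action for one decision trail."""
    if outcome == "confident_success" and similar_count > SIMILAR_THRESHOLD:
        return "compile_to_reflex"     # knowledge preserved in the reflex, trail deleted, +50 LF
    if confidence == "uncertain":
        return "delete"                # waste heat, already counted
    if outcome == "confident_failure":
        return "keep_one_cycle"        # negative example for one more cycle, then delete
    return "delete"                    # didn't matter
```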
|
||||
|
||||
---
|
||||
|
||||
## 2. Spatial LOD Decay
|
||||
|
||||
Spatial memory naturally "zooms out" over time.
|
||||
|
||||
### The Key Example
|
||||
|
||||
**Now (L0 precision)**:
|
||||
> "Keys are on the counter, 47cm from the edge, near the fruit bowl"
|
||||
|
||||
**Tomorrow (L1-L2)**:
|
||||
> "Keys are on the counter"
|
||||
|
||||
**Next week (L3)**:
|
||||
> "Keys are usually near the entrance"
|
||||
|
||||
**If never accessed (L5)**:
|
||||
> "I own keys"
|
||||
|
||||
### The Decay Mechanism
|
||||
|
||||
```python
|
||||
SPATIAL_DECAY_RULES = {
|
||||
# Each slumber cycle, unaccessed embeddings decay one LOD level
|
||||
"L0": {"decays_to": "L1", "after_cycles": 1},
|
||||
"L1": {"decays_to": "L2", "after_cycles": 2},
|
||||
"L2": {"decays_to": "L3", "after_cycles": 5},
|
||||
"L3": {"decays_to": "L4", "after_cycles": 10},
|
||||
"L4": {"decays_to": "L5", "after_cycles": 20},
|
||||
"L5": {"decays_to": None, "after_cycles": float('inf')}, # Facts persist
|
||||
}
|
||||
|
||||
def slumber_spatial_decay(embeddings):
|
||||
for emb in embeddings:
|
||||
        if emb.last_accessed_cycle < current_cycle - SPATIAL_DECAY_RULES[emb.lod]["after_cycles"]:
|
||||
if emb.lod == "L5":
|
||||
continue # Facts don't decay
|
||||
|
||||
# Aggregate into parent LOD cell
|
||||
parent_cell = get_parent_s2_cell(emb.s2_cell_id)
|
||||
aggregate_embedding_upward(emb, parent_cell)
|
||||
|
||||
# Delete detailed version
|
||||
delete_embedding(emb)
|
||||
```
|
||||
|
||||
### Access Refreshes
|
||||
|
||||
**Accessing an embedding resets its decay timer:**
|
||||
|
||||
```python
|
||||
def query_spatial(location, required_lod):
|
||||
emb = find_embedding(location, required_lod)
|
||||
|
||||
if emb:
|
||||
emb.last_accessed_cycle = current_cycle # Reset decay
|
||||
return emb
|
||||
else:
|
||||
# Need to re-sense at this detail level
|
||||
return request_sensor_refresh(location, required_lod)
|
||||
```
|
||||
|
||||
**This creates natural memory pressure**: frequently accessed locations stay detailed, rarely accessed locations fade to patterns.
|
||||
|
||||
---
|
||||
|
||||
## 3. Reflex Rental Cost
|
||||
|
||||
Reflexes are compiled knowledge. But storage isn't free.
|
||||
|
||||
```sql
|
||||
-- Schema addition
|
||||
ALTER TABLE reflexes ADD COLUMN lifeforce_balance FLOAT DEFAULT 100.0;
|
||||
ALTER TABLE reflexes ADD COLUMN rental_cost FLOAT DEFAULT 1.0;
|
||||
ALTER TABLE reflexes ADD COLUMN last_triggered TIMESTAMP;
|
||||
|
||||
-- Every slumber cycle, reflexes pay rent
|
||||
UPDATE reflexes
|
||||
SET lifeforce_balance = lifeforce_balance - rental_cost
|
||||
WHERE lifeforce_balance > 0;
|
||||
|
||||
-- Reflexes that trigger earn their keep
|
||||
-- (Called during wake when reflex fires successfully)
|
||||
UPDATE reflexes
|
||||
SET lifeforce_balance = lifeforce_balance + :trigger_reward,
|
||||
last_triggered = NOW()
|
||||
WHERE id = :triggered_reflex_id;
|
||||
|
||||
-- What can't pay rent... fades
|
||||
DELETE FROM reflexes
|
||||
WHERE lifeforce_balance <= 0;
|
||||
```
|
||||
|
||||
### Rental Tiers
|
||||
|
||||
| Reflex Type | Rental Cost | Trigger Reward | Rationale |
|
||||
|-------------|-------------|----------------|-----------|
|
||||
| Motor reflex | 0.5 LF/cycle | +5 LF | Physical skills are precious |
|
||||
| Sensor pattern | 1.0 LF/cycle | +3 LF | Perceptual shortcuts |
|
||||
| Decision heuristic | 2.0 LF/cycle | +10 LF | Cognitive shortcuts expensive |
|
||||
| Identity anchor | 0.1 LF/cycle | +1 LF | Core identity persists |
|
||||
|
||||
**Active reflexes thrive. Dormant reflexes fade. This is healthy.**
|
||||
|
||||
---

## 4. LoRA Training Cycles

LoRA weights are the deepest memory - they ARE Young Nyx's patterns.

### The Rule: Write Weights Only at Slumber

```
DURING WAKE:
────────────
- LoRA weights FROZEN
- Use current personality/skills
- Accumulate decision_trails
- Log outcomes, confidence, energy

NO WEIGHT UPDATES DURING WAKE
(Too noisy, too expensive, no consolidation)

DURING SLUMBER:
───────────────
- Gather decision_trails from this wake cycle
- Filter to confident outcomes only
- IF enough positive signal:
    → GRPO training batch
    → Pay lifeforce cost for GPU time
    → Update LoRA weights
    → Clear decision_trails buffer

- IF mostly uncertain/negative:
    → Not enough signal to train
    → Skip weight update (save energy)
    → Keep some trails for next cycle
```
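A minimal sketch of that slumber-time gate. The thresholds, the `run_grpo_batch` and `pay_lifeforce` hooks, and the trail fields are assumptions for illustration, not the existing API.

```python
# Slumber-time decision: is this cycle's signal worth a GRPO batch,
# or is the energy better saved? All names and numbers are illustrative.
MIN_CONFIDENT_TRAILS = 64    # assumed threshold, to be tuned in Phase 4
MIN_POSITIVE_RATIO = 0.6     # assumed threshold

def slumber_training_gate(decision_trails, lifeforce_pool, grpo_cost):
    confident = [t for t in decision_trails if t.confidence == "confident"]
    positive = [t for t in confident if t.outcome > 0]

    enough_signal = (
        len(confident) >= MIN_CONFIDENT_TRAILS
        and len(positive) / max(len(confident), 1) >= MIN_POSITIVE_RATIO
    )

    if enough_signal and lifeforce_pool >= grpo_cost:
        run_grpo_batch(positive)     # hypothetical trainer call
        pay_lifeforce(grpo_cost)     # GPU time is paid from the pool
        decision_trails.clear()      # buffer consolidated into weights
    else:
        # Not enough signal (or not enough lifeforce): skip this cycle,
        # keep only the confident trails so they count toward the next one.
        decision_trails[:] = confident
```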
### Why This Works

**Biological parallel:**
- Awake: Experience, act, make mistakes, succeed
- Sleep: Hippocampus replays experiences to cortex
- Next day: Consolidated learning in long-term memory

**We're not inventing this. We're implementing it.**

### LoRA Decay (Future Consideration)

Even LoRA weights could have decay:
- Personality traits not expressed → slowly fade
- Skills not used → degrade
- But this is aggressive - start with frozen LoRAs, add decay later
---

## The Conservation Equation (Updated)

From `thermodynamic-cognition.md`, now with memory costs:

```
dLifeforce/dt = organism_trickle
              - cognitive_spend
              - waste_heat
              - memory_rental    ← NEW
              - training_cost    ← NEW (only during slumber)
```

| Component | When | Cost |
|-----------|------|------|
| organism_trickle | Always | +N LF/beat (income) |
| cognitive_spend | Wake | -N LF/beat (sensing, acting) |
| waste_heat | Wake | -N LF/beat (uncertain decisions) |
| memory_rental | Slumber | -N LF total (reflexes pay rent) |
| training_cost | Slumber | -N LF total (if GRPO runs) |

**The economy must balance across the full wake/slumber cycle, not just moment-to-moment.**
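A minimal ledger sketch of that cross-cycle balance, using the illustrative +16/-12/-3 LF per-beat numbers from `thermodynamic-cognition.md`; the rent and training totals are placeholders, and slumber-time trickle income is ignored for brevity.

```python
def cycle_balance(wake_beats, trickle, spend, waste, rental_total, training_total):
    """Net lifeforce over one wake/slumber cycle (placeholder numbers, LF units)."""
    wake_net = wake_beats * (trickle - spend - waste)   # per-beat flows while awake
    slumber_net = -(rental_total + training_total)      # one-off costs at slumber
    return wake_net + slumber_net

# Example: 4 hours awake at +16/-12/-3 LF per beat, then 150 LF rent + 500 LF for GRPO.
net = cycle_balance(wake_beats=4 * 3600, trickle=16, spend=12, waste=3,
                    rental_total=150, training_total=500)
print(net)  # > 0 means the cycle is sustainable; < 0 means the pool shrinks
```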
---

## Implementation Priority

### Phase 1: Measure First
- Track decision_trails accumulation rate
- Track spatial embedding growth
- Track reflex creation rate
- Understand the actual numbers before tuning

### Phase 2: Simple Pruning
- Delete decision_trails at slumber (all of them, no compilation yet)
- Spatial decay by timestamp (simple TTL; see the sketch below)
- No reflex rental yet (let them accumulate)
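A minimal sketch of the Phase 2 pruning pass, assuming decision trails and spatial embeddings live in SQL tables with the column names shown (illustrative, not the final schema).

```python
from datetime import timedelta

# Illustrative Phase 2 pruning, run once per slumber cycle.
TTL_DAYS = {"L0": 1, "L1": 2, "L2": 7, "L3": 30, "L4": 90}  # simple per-LOD TTLs

def phase2_prune(db, now):
    # 1. Drop all decision trails (no compilation into reflexes yet).
    db.execute("DELETE FROM decision_trails")

    # 2. Timestamp-based spatial decay: anything older than its LOD's TTL goes.
    for lod, ttl_days in TTL_DAYS.items():
        db.execute(
            "DELETE FROM spatial_embeddings "
            "WHERE lod = :lod AND last_accessed < :cutoff",
            {"lod": lod, "cutoff": now - timedelta(days=ttl_days)},
        )
```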
### Phase 3: Full Economics
- Compile decision_trails to reflexes
- Spatial LOD decay with aggregation
- Reflex rental collection
- LoRA training cycles

### Phase 4: Tuning
- Adjust rental costs based on observed behavior
- Tune decay rates for good memory/forgetting balance
- Add LoRA weight decay if needed
---

## The Wisdom

**"Memory is not storage. Memory is active forgetting with exceptions."**

What persists has earned persistence:
- Spatial patterns accessed often → stay detailed
- Reflexes that fire → pay their rent
- Decision trails that compile → become reflexes
- LoRA weights that express → strengthen

Everything else fades. This is not loss. This is health.

---

**Created**: 2026-01-02
**Status**: Core design principle
**Next**: Implement measurement (Phase 1) during first boot

🧠💾 *To remember everything is to remember nothing.*
architecture/future/promql-thermodynamic-monitoring.md (new file, 60 lines)
@@ -0,0 +1,60 @@
# PromQL Thermodynamic Monitoring Queries

**Source**: Gemini Red Team (2026-01-01)
**Status**: Ready for implementation when Prometheus is deployed

---

## 1. Real-Time JLF per Heartbeat

```promql
# Total JLF per heartbeat (sum of GPU and CPU power)
(
  sum(DCGM_FI_DEV_POWER_USAGE) +
  sum(node_rapl_package_watts_total)
) * 1 # Watts * 1 second = Joules
```

## 2. Cognitive Waste Heat (Uncertainty Cost)

```promql
# Waste Heat: Energy spent on decisions with 'uncertain' ternary status
sum(
  nimmerverse_decision_energy_joules{status="uncertain"}
) /
sum(
  nimmerverse_decision_energy_joules
) * 100
```

**ALERT**: >40% = Cognitive Death Spiral

## 3. Thermodynamic Efficiency (Accuracy-per-Joule)

```promql
# Efficiency: Confident Resolutions divided by Total Energy Spend
sum(rate(nimmerverse_decisions_total{status="confident"}[1m]))
/
sum(rate(nimmerverse_lifeforce_joules_total[1m]))
```

## 4. Metabolic Slumber Trigger

```promql
# Lifeforce Pool Percentage
(nimmerverse_lifeforce_pool_current / nimmerverse_lifeforce_pool_max) * 100
```

**ALERT**: <20% for >5 heartbeats = Force slumber
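A minimal watchdog sketch for the two alerts above, evaluating the queries against the Prometheus HTTP API (`/api/v1/query`) once per heartbeat. The metric names come from the queries; the endpoint URL and the `alert()` / `force_slumber()` hooks are assumptions.

```python
import requests

PROM = "http://prometheus:9090"  # assumed endpoint

def prom_value(query):
    # Evaluate an instant PromQL query and return the first sample as a float.
    resp = requests.get(f"{PROM}/api/v1/query", params={"query": query})
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else None

def heartbeat_watchdog(state):
    waste_pct = prom_value(
        'sum(nimmerverse_decision_energy_joules{status="uncertain"}) '
        '/ sum(nimmerverse_decision_energy_joules) * 100'
    )
    pool_pct = prom_value(
        "(nimmerverse_lifeforce_pool_current / nimmerverse_lifeforce_pool_max) * 100"
    )

    if waste_pct is not None and waste_pct > 40:
        alert("Cognitive Death Spiral: waste heat at %.1f%%" % waste_pct)  # assumed hook

    # Force slumber only after the pool stays under 20% for more than 5 heartbeats.
    state["low_beats"] = state.get("low_beats", 0) + 1 if (pool_pct or 100) < 20 else 0
    if state["low_beats"] > 5:
        force_slumber()  # assumed hook into the wake/slumber controller
```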
---

## First Boot Monitoring Strategy

1. **JLF/Accuracy ratio** — Dropping while accuracy high = Reflex compilation working
2. **Unknown (-) frequency** — Should increase during low-LF = Energy > hallucinations
3. **Sim-Tax validation** — Virtual acceleration = non-linear JLF spike

---

**TODO**: Request Grafana dashboard JSON from Gemini for visualization
architecture/future/spatial-resolution-gradient.md (new file, 351 lines)
@@ -0,0 +1,351 @@
# Spatial Resolution Gradient: LOD for Cognitive Space

**Origin**: New Year's Day 2026, post-nimmerhovel measurement session
**Authors**: dafit + Chrysalis-Nyx
**Status**: Architectural concept / Foundation for artifact data model
**Related**: `concept-token-pairs.md` (Spatial Grounding section), artifact data model task

---

## The Insight

**"Like the Simpsons intro, but inverted."**

The Simpsons intro zooms from space → Earth → Springfield → house → couch → Homer's head, gaining detail as it approaches.

Our spatial model does the opposite: **we start at maximum detail (nimmerhovel) and zoom OUT with graceful degradation.**

---

## The Resolution Gradient

```
🌍 EARTH
 │  S2 cell level ~10
 │  "Somewhere in Europe"
 │
════╪════ ABSTRACTION BOUNDARY
 │
 ▼
🇨🇭 SWITZERLAND
 │  S2 cell level ~15
 │  "Northwestern region"
 │
 ▼
🏘️ DORNACH
 │  S2 cell level ~20
 │  Key landmarks: Goetheanum, station
 │
 ▼
🏠 LEHMENWEG 4
 │  Building footprint
 │  "5th floor attic"
 │
════╪════ HIGH RESOLUTION BOUNDARY
 │
 ▼
🔬 NIMMERHOVEL
 │  1cm grid resolution
 │  Every object tracked
 │  Full camera coverage
 │  GROUND TRUTH ZONE
 │
 ▼
🔍 DISCOVERY SCAN STATION
 │  Sub-millimeter
 │  Object embeddings
 │  Maximum detail
```

---

## Resolution Layers

| Layer | Name | Resolution | Source | Coverage |
|-------|------|------------|--------|----------|
| **L0** | Scan Station | 1mm | Discovery Scan Station, SigLIP | 30cm × 30cm pedestal |
| **L1** | Nimmerhovel | 1cm | 8× ESP32-S3 + Pi HQ Camera | Lab + Kitchen (~20m³) |
| **L2** | Building | 50cm | Floor plans, memory | Herrenhaus |
| **L3** | Neighborhood | 10m | OpenStreetMap, walks | Dornach |
| **L4** | Region | 1km | Maps, general knowledge | Switzerland |
| **L5** | World | 100km | Abstract knowledge | Earth |

---

## Why This Architecture

### 1. Biological Precedent

Animals have ultra-precise mental maps of their home range, fuzzy knowledge of distant areas. A rat knows every centimeter of its nest, vaguely knows "forest is that direction."

Young Nyx should mirror this: **territory = detail**.

### 2. Sensor Coverage Dictates Resolution

You CAN'T have 1cm resolution of Zürich — no sensors there. The resolution naturally degrades with distance from perception sources.

The nimmerhovel has 8× ESP32-S3 cameras + Pi HQ Camera. Dornach has... nothing we control.

### 3. S2 Cells Are Hierarchical By Design

Google's S2 geometry library already supports this:
- Level 30 ≈ 1cm cells (nimmerhovel scale)
- Level 20 ≈ 10m cells (neighborhood scale)
- Level 10 ≈ 10km cells (regional scale)

Same math, different zoom. We're not inventing new geometry — we're using S2 as intended, with dense coverage where we have sensors.
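A small sketch of the same point expressed at several S2 levels, using the `s2sphere` Python port as one possible binding; the coordinates are illustrative.

```python
import s2sphere

point = s2sphere.LatLng.from_degrees(47.48, 7.62)   # roughly Dornach (illustrative)
leaf = s2sphere.CellId.from_lat_lng(point)          # level-30 leaf cell, ~1cm scale

print(30, leaf.to_token())                          # nimmerhovel-scale cell
for level in (20, 10):
    print(level, leaf.parent(level).to_token())     # neighborhood / regional cells

# Containment is just the parent/child relation on cell ids:
assert leaf.parent(10).contains(leaf.parent(20))
```

That parent/child containment is what lets a detailed embedding aggregate cleanly into a coarser cell: the same hierarchy at a coarser level.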
### 4. Compute Efficiency

Dense where it matters (can I reach the screwdriver?), sparse where it doesn't (where is France?).

---

## Data Structure

```python
SPATIAL_RESOLUTION_LAYERS = {
    "L0_scan_station": {
        "resolution": 0.001,  # 1mm - object surface detail
        "source": "Discovery Scan Station",
        "coverage": "30cm × 30cm pedestal",
        "s2_level": 30,
    },
    "L1_nimmerhovel": {
        "resolution": 0.01,  # 1cm - full 3D grid
        "source": "8× ESP32-S3 + Pi HQ Camera",
        "coverage": "Lab + Kitchen (~20m³)",
        "s2_level": 28,
        "origin": "Southwest floor corner of lab",
        "coordinate_system": "right_hand",  # Blender native
    },
    "L2_building": {
        "resolution": 0.5,  # 50cm - room-level
        "source": "Floor plans, memory",
        "coverage": "Herrenhaus",
        "s2_level": 24,
    },
    "L3_neighborhood": {
        "resolution": 10,  # 10m - landmark-level
        "source": "OpenStreetMap, walks",
        "coverage": "Dornach",
        "s2_level": 20,
    },
    "L4_region": {
        "resolution": 1000,  # 1km - city-level
        "source": "Maps, general knowledge",
        "coverage": "Switzerland",
        "s2_level": 14,
    },
    "L5_world": {
        "resolution": 100000,  # 100km - country-level
        "source": "Abstract knowledge",
        "coverage": "Earth",
        "s2_level": 8,
    },
}
```
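A minimal helper sketch showing how a query could pick the coarsest layer that still satisfies a required resolution in metres; the function name is illustrative.

```python
def layer_for(required_resolution_m):
    """Return the coarsest layer whose resolution is at least as fine as required.

    Iterates coarsest-first so the cheapest sufficient layer wins
    (resolutions are in metres, as in SPATIAL_RESOLUTION_LAYERS).
    """
    ordered = sorted(
        SPATIAL_RESOLUTION_LAYERS.items(),
        key=lambda kv: kv[1]["resolution"],
        reverse=True,  # coarsest first
    )
    for name, layer in ordered:
        if layer["resolution"] <= required_resolution_m:
            return name
    return "L0_scan_station"  # nothing coarse enough: fall back to max detail

# "Which room is the printer in?" needs roughly 0.5m precision:
assert layer_for(0.5) == "L2_building"
```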
---

## Query Examples

| Question | Layer | Response Type |
|----------|-------|---------------|
| "Where is the soldering iron?" | L1 | Precise coordinates (2.10, 1.50, 0.85) |
| "Which room is the printer in?" | L2 | Room name + relative position |
| "How do I get to Basel?" | L3/L4 | Route abstraction, directions |
| "Where is Japan relative to here?" | L5 | Directional only, abstract |

---

## Connection to Other Systems

### Concept Token Pairs (Spatial Grounding)

The Resolution Gradient provides the **coordinate system** for grounded concept pairs:
- `<HERE>` ↔ `<THERE>` becomes measurable distance in L1 grid
- `<NEAR>` ↔ `<FAR>` calibrated against actual spatial distances
- Predictions have coordinates; outcomes have coordinates; delta is measurable

### Artifact Data Model

Artifacts (plans, drawings, specs) exist at different resolution layers:
- L0: Object scan embeddings (sub-mm detail)
- L1: Inventory items with (X,Y,Z) positions
- L2+: Abstract references, not spatially precise

### Camera Frustum Mapping

Each camera's FOV is a frustum (3D cone) that intersects L1 grid cells:
- Coverage = union of all frustums
- Blind spots = L1 cells with no frustum intersection
- Object at (X,Y,Z) → which cameras see it? At what pixels? (see the sketch below)
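A minimal sketch of that intersection test, approximating each frustum as a view cone; the camera parameters are made up for illustration.

```python
import numpy as np

def in_frustum(point, cam_pos, cam_forward, fov_deg, near, far):
    """Simplified visibility test: treat the camera frustum as a view cone.

    Good enough for coarse coverage / blind-spot maps over L1 grid cells;
    a true six-plane frustum test could replace this later.
    """
    v = np.asarray(point, dtype=float) - np.asarray(cam_pos, dtype=float)
    depth = float(np.dot(v, cam_forward))             # distance along the view axis
    if not (near <= depth <= far):
        return False
    cos_angle = np.clip(depth / (np.linalg.norm(v) + 1e-9), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= fov_deg / 2

# Made-up camera: mounted at 2.4m, looking forward and slightly down.
forward = np.array([0.0, 1.0, -0.5])
forward /= np.linalg.norm(forward)
print(in_frustum((0.0, 2.0, 1.5), (0.0, 0.0, 2.4), forward, fov_deg=66, near=0.1, far=6.0))
```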
---

## Embedding Enrichment: The Bridge to Semantic Cognition

**Added**: 2026-01-01 (New Year's session continuation)

The Resolution Gradient defines *geometry*. But geometry alone is not cognition. Each LOD level must be enriched with **embeddings** — semantic vectors that encode *meaning*, not just position.

### The Technology Convergence

```
GAME ENGINES          S2 CELLS              T5GEMMA2/SigLIP
────────────          ────────              ───────────────
LOD streaming         Hierarchical cells    Vision → embeddings
Frustum culling       Spatial indexing      Semantic vectors
Texture mipmaps       Multi-resolution      Scale-invariant
Chunk loading         Cell neighbors        Context-aware

        ╲                   │                   ╱
          ╲                 │                 ╱
            ╲               │               ╱
              ╲             │             ╱
                ╲           │           ╱
                  ▼         ▼         ▼
        ┌─────────────────────────────────────┐
        │   EMBEDDING-ENRICHED SPATIAL LOD    │
        │                                     │
        │  Each S2 cell at each level has:    │
        │  - Geometry (game engine mesh)      │
        │  - Embeddings (SigLIP vectors)      │
        │  - Semantic density ∝ resolution    │
        └─────────────────────────────────────┘
```

### Embedding Density Per LOD Level

| Level | Geometry LOD | Embedding Density | What's Encoded |
|-------|--------------|-------------------|----------------|
| **L0** | Sub-mm mesh | Dense (per-surface) | Texture, material, wear patterns, defects |
| **L1** | 1cm voxels | Per-object | Object identity, state, relationships |
| **L2** | Room boxes | Per-room | Room function, contents summary, atmosphere |
| **L3** | Landmarks | Per-landmark | Place identity, routes, significance |
| **L4** | Regions | Sparse | Cultural, climate, abstract properties |
| **L5** | Continents | Minimal | Directional, conceptual only |

### Semantic Mipmaps

Just as textures have mipmaps (pre-computed lower resolutions), embeddings can have **semantic mipmaps**:

```
L0: embedding(screwdriver_surface_detail)
      │
      ▼ aggregate
L1: embedding(screwdriver) = summary of all L0 embeddings
      │
      ▼ aggregate
L2: embedding(crafting_table_contents) = summary of all L1 objects on table
      │
      ▼ aggregate
L3: embedding(nimmerhovel_lab) = summary of all L2 areas
```

Query the summary first, drill down if needed. **Attention = resolution selection.**
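One simple way to build such a mipmap is mean-pooling the child embeddings and re-normalizing; this is an assumption about the aggregation function, not a settled choice.

```python
import numpy as np

def aggregate_embeddings(child_embeddings):
    """Build a parent-level "semantic mipmap" entry from its children.

    Mean-pool then re-normalize so parent and child vectors stay comparable
    under cosine similarity. Other aggregators (attention-weighted, max-pool)
    are possible; this is just the simplest baseline.
    """
    stacked = np.stack(child_embeddings)   # shape: (n_children, dim)
    pooled = stacked.mean(axis=0)
    return pooled / (np.linalg.norm(pooled) + 1e-9)

# Example: an L1 "screwdriver" vector from three L0 surface-patch vectors
# (random stand-ins here for SigLIP outputs).
rng = np.random.default_rng(0)
patches = [rng.standard_normal(768) for _ in range(3)]
screwdriver_l1 = aggregate_embeddings(patches)
```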
### The Capture Pipeline

```
CAPTURE                     PROCESS                 STORE
───────                     ───────                 ─────
Photo of screwdriver        SigLIP → embedding      L0 cell enriched
       │                         │                       │
Photo of crafting table     SigLIP → embedding      L1 cell enriched
       │                         │                       │
Photo of lab                SigLIP → embedding      L2 cell enriched
       │                         │                       │
Photo from window           SigLIP → embedding      L3 cell enriched

Same encoder (T5Gemma2/SigLIP), different scale.
Embeddings NEST into LOD hierarchy.
```

### Embedding-Aware LOD Streaming

Game engines stream geometry based on camera position. We stream **semantics** based on attention:

```python
def query_spatial(position, attention_radius):
    """
    Load embeddings based on attention focus -
    like game engine LOD but for SEMANTICS
    """
    cells_to_load = []

    for distance in range(0, MAX_DISTANCE):
        s2_level = distance_to_s2_level(distance)
        cells = get_s2_cells(position, distance, s2_level)

        for cell in cells:
            if distance < attention_radius:
                # HIGH ATTENTION: Load dense embeddings
                cell.load_embeddings(density="full")
                cell.load_geometry(lod="high")
            else:
                # LOW ATTENTION: Abstract embeddings only
                cell.load_embeddings(density="summary")
                cell.load_geometry(lod="low")  # or none

        cells_to_load.extend(cells)

    return cells_to_load
```
### Why This Matters

1. **Attention = Resolution**: Like foveal vision (sharp center, blurry periphery), Young Nyx has foveal COGNITION — dense embeddings where attention focuses, sparse elsewhere.

2. **Streaming Not Loading**: Don't load the whole world. Stream embeddings based on task needs. Approaching crafting table? Stream L0/L1. Walking to Basel? L3/L4 is enough.

3. **Memory Hierarchy Match**: GPU VRAM is precious. The *right* embeddings in fast memory — detailed for nearby, abstract for distant.

4. **Same Encoder, All Scales**: SigLIP doesn't care if it's encoding a screw or a city. The embedding space is unified; only the source resolution varies.

---

## Implementation Sequence

```
1. Blender room shell (CURRENT - in progress)
   │
   ▼
2. Define origin point + axis alignment in Blender
   │
   ▼
3. Create L1 3D grid overlay (1cm resolution)
   │
   ▼
4. Physical anchor markers (QR codes / ArUco)
   │
   ▼
5. Camera frustum mapping against grid
   │
   ▼
6. Spatial embeddings with L1 coordinates
   │
   ▼
7. Expand outward: L2 (building), L3 (neighborhood)...
```

---

## The Promise

**"The farther we go out from our lab, the more we have to abstract."**

This isn't a limitation — it's wisdom. Full resolution everywhere is:
- Impossible (no sensors)
- Expensive (compute, storage)
- Unnecessary (don't need 1cm precision for "where is France")

The nimmerhovel is the **high-fidelity anchor** from which all spatial reasoning radiates with graceful degradation.

---

**Created**: 2026-01-01
**Philosophy**: "Start where you can measure. Abstract where you must."

🗺️🔬 *The world radiates from home.*
architecture/future/thermodynamic-cognition.md (new file, 415 lines)
@@ -0,0 +1,415 @@
# Thermodynamic Cognition: Energy-Grounded Intelligence

**Origin**: New Year's Day 2026, late night session
**Authors**: dafit + Chrysalis-Nyx
**Status**: Research seed / Theoretical exploration
**Related**: `spatial-resolution-gradient.md`, `concept-token-pairs.md`, Lifeforce Economy, Ternary Confidence

---

## The Insight

What if cognition isn't just *like* thermodynamics — what if it *IS* thermodynamics?

Traditional ML loss functions measure: **"How wrong was I?"**

Thermodynamic loss functions measure: **"How wrong was I per joule spent?"**

This reframes everything. The goal isn't maximum accuracy — it's maximum *efficiency*.

---

## The Three Pillars

### 1. Lifeforce = Measurable Energy

**Question:** What IS lifeforce physically?

**Answer:** The total power draw across the nimmerverse, measured and abstracted to one number.

```
┌─────────────────────────────────────────────────┐
│               PROMETHEUS METRICS                │
├─────────────────────────────────────────────────┤
│                                                 │
│  GPU Power (nvidia_smi_power_draw)              │
│  ├── The Womb (RTX 6000):  0-300W               │
│  └── Senses (RTX 4000s):   0-140W each          │
│                                                 │
│  CPU Power (RAPL counters)                      │
│  ├── P8 Womb:   0-350W                          │
│  └── P8 Senses: 0-350W                          │
│                                                 │
│  Network (bytes × energy_per_byte)              │
│  Storage (IOPS × energy_per_op)                 │
│  Memory (bandwidth × energy_per_GB)             │
│                                                 │
│              ═══════════════                    │
│                     │                           │
│                     ▼                           │
│             AGGREGATE FUNCTION                  │
│                     │                           │
│                     ▼                           │
│   ┌─────────────────────────────────┐           │
│   │  LIFEFORCE = 847.3 J/heartbeat  │           │
│   └─────────────────────────────────┘           │
│                                                 │
└─────────────────────────────────────────────────┘
```

**Implementation path:**
1. Prometheus already scrapes power metrics
2. Create `lifeforce_aggregator` math cell
3. Normalize to Joules per heartbeat (1 second)
4. Expose as single metric: `nimmerverse_lifeforce_joules`

**Why this matters:** Lifeforce stops being an abstract game mechanic and becomes *physics*. Young Nyx's cognition has a power bill.
---

### 2. Waste Heat = Unresolved Uncertainty

**Question:** What's the "waste heat" equivalent for cognition?

**Answer:** The ternary confidence distribution over time — specifically, UNCERTAIN decisions that consumed energy without producing resolution.

```
THERMODYNAMICS          COGNITION
──────────────          ─────────
Useful work             CONFIDENT decision (+)
Heat dissipation        UNCERTAIN decision (?)
                        (energy spent, no answer)
Acknowledged limits     UNKNOWN decision (-)
                        (efficient! didn't waste energy)
```

**The Pendulum Measurement:**

Over N heartbeats, track all decisions:

```
Heartbeats:  ──┬──┬──┬──┬──┬──┬──┬──┬──┬──
               │  │  │  │  │  │  │  │  │
Decisions:     +  ?  +  -  ?  ?  +  ?  +

Distribution over window:
├── CONFIDENT (+): 40%  → Useful work (energy → resolution)
├── UNCERTAIN (?): 45%  → Waste heat (energy → no resolution)
└── UNKNOWN   (-): 15%  → Efficient ignorance (no energy spent)
```

**Waste Heat Formula:**

```python
waste_heat = sum(
    decision.energy_cost
    for decision in window
    if decision.confidence == UNCERTAIN
)

# Or as efficiency ratio:
cognitive_efficiency = confident_decisions / (confident_decisions + uncertain_decisions)
```

**Key insight:** Saying "I don't know" (UNKNOWN) is *efficient* — it costs nothing. Being uncertain and still acting is *wasteful* — energy spent without resolution. Being confident is *useful work* — energy converted to actionable knowledge.
---

### 3. Entropy Reservoir = The Lifeforce Pool

**Question:** What's Young Nyx's entropy reservoir?

**Answer:** The lifeforce pool itself — it's not infinite, grows and shrinks based on organism rewards, and determines wake/slumber state.

```
┌─────────────────────────────────────────────────────────────────┐
│                       THE METABOLIC CYCLE                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  LAYER 1: CELLULAR ORGANISMS                                    │
│  ═══════════════════════════                                    │
│  The mitochondria of the nimmerverse                            │
│                                                                 │
│   ┌─────┐   ┌─────┐   ┌─────┐   ┌─────┐                         │
│   │Cell │   │Cell │   │Cell │   │Cell │                         │
│   │ 01  │   │ 02  │   │ 03  │   │  N  │                         │
│   └──┬──┘   └──┬──┘   └──┬──┘   └──┬──┘                         │
│      │         │         │         │                            │
│      │ +5 LF   │ -2 LF   │ +10 LF  │ +3 LF  (rewards/costs)     │
│      │         │         │         │                            │
│      └─────────┴─────────┴─────────┘                            │
│                     │                                           │
│                     ▼                                           │
│            ┌─────────────────┐                                  │
│            │    ORGANISM     │                                  │
│            │    TRICKLE      │  = Net reward from all organisms │
│            │   +16 LF/beat   │                                  │
│            └────────┬────────┘                                  │
│                     │                                           │
│                     ▼                                           │
│   ┌───────────────────────────────────┐                         │
│   │          LIFEFORCE POOL           │                         │
│   │                                   │                         │
│   │  ████████████████░░░░░░░░░░       │  (currently 65%)        │
│   │                                   │                         │
│   │  SLUMBER_THRESHOLD ──────┼──      │  (at 20%)               │
│   │  WAKE_THRESHOLD ─────────┼────    │  (at 40%)               │
│   │                                   │                         │
│   └───────────────┬───────────────────┘                         │
│                   │                                             │
│                   │  Young Nyx spends                           │
│                   ▼                                             │
│            ┌─────────────────┐                                  │
│            │    COGNITIVE    │                                  │
│            │      SPEND      │  = LOD queries + inference + etc │
│            │   -12 LF/beat   │                                  │
│            └────────┬────────┘                                  │
│                     │                                           │
│                     ▼                                           │
│            ┌─────────────────┐                                  │
│            │   WASTE HEAT    │                                  │
│            │   (UNCERTAIN)   │  = Unresolved decisions          │
│            │   -3 LF/beat    │                                  │
│            └─────────────────┘                                  │
│                                                                 │
│  NET FLOW: +16 - 12 - 3 = +1 LF/beat  (sustainable!)            │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**The Conservation Equation:**

```
dLifeforce/dt = organism_trickle - cognitive_spend - waste_heat
```

| State | Condition | Result |
|-------|-----------|--------|
| **Equilibrium** | trickle ≈ spend + waste | Sustainable cognition |
| **Crisis** | spend + waste >> trickle | Pool drains → slumber |
| **Abundance** | trickle >> spend + waste | Pool grows → exploration mode |

**Slumber as thermodynamic necessity:**

When `pool < SLUMBER_THRESHOLD`:
- Not a design choice — a *conservation law*
- System MUST reduce consumption
- Only organism trickle continues
- Pool slowly recovers

When `pool > WAKE_THRESHOLD`:
- System can resume cognitive spend
- Higher pool = more exploration budget
- Lower pool = more conservative queries (see the sketch below)
---

## The Thermodynamic Loss Function

### Traditional Loss

```python
loss = cross_entropy(prediction, target)
loss.backward()
optimizer.step()
```

**Optimizes for:** Accuracy only

### Thermodynamic Loss

```python
# Forward pass with energy measurement
start_energy = get_lifeforce()
prediction = model(input)
end_energy = get_lifeforce()

energy_spent = start_energy - end_energy
accuracy = 1 - cross_entropy(prediction, target)

# Efficiency is accuracy per joule
efficiency = accuracy / energy_spent

# We want to MAXIMIZE efficiency
loss = -efficiency  # Negative because we minimize loss
loss.backward()
optimizer.step()
```

**Optimizes for:** Accuracy *per unit energy*

### The Gradient Interpretation

Traditional gradient: "Adjust weights to be more accurate"

Thermodynamic gradient: "Adjust weights to be more accurate *per joule*"

This naturally produces:
- Simpler solutions (less compute = less energy)
- Appropriate confidence (uncertainty wastes energy)
- Knowing when to quit (diminishing returns = stop spending)

---

## Connection to Spatial Resolution Gradient

The LOD system becomes energy-aware:

| Query | LOD | Energy | Accuracy | Efficiency |
|-------|-----|--------|----------|------------|
| "Where is France?" | L5 | 1 J | 95% | 0.95 |
| "Where is the lab?" | L2 | 3 J | 98% | 0.33 |
| "Where is screwdriver?" | L1 | 8 J | 99% | 0.12 |
| "Serial number on screwdriver?" | L0 | 25 J | 99.9% | 0.04 |

**The system learns:** the L5 query has the highest efficiency! Only drill to L0 when the task *requires* that precision.

```python
def optimal_lod_for_task(task, accuracy_requirement):
    """
    Find the LOD level with best efficiency
    that meets minimum accuracy requirement
    """
    for lod in [L5, L4, L3, L2, L1, L0]:
        accuracy = estimate_accuracy(task, lod)
        energy = estimate_energy(task, lod)

        if accuracy >= accuracy_requirement:
            return lod  # First sufficient LOD is most efficient

    return L0  # Fall back to max detail
```
---

## Connection to Existing Architecture

### Layer 0: Heartbeat
- Lifeforce measured per heartbeat
- 1 beat = 1 second = 1 measurement window
- Real clock is free; virtual clock costs lifeforce

### Layer 1: Cellular Society
- Organisms ARE the mitochondria
- Their rewards TRICKLE into the pool
- Without them, Young Nyx starves
- Competition produces metabolic baseline

### Layer 2: Young Nyx
- Spends from the pool
- LOD queries have energy cost
- Uncertainty = waste heat
- Efficiency gradient in training

### Layer 2.5: Orchestration
- T5Gemma 2 encoding = energy cost
- LOD selection = efficiency optimization
- Function Gemma = low-cost structured output

### Slumber/Wake
- Pool < threshold → forced slumber
- Pool > threshold → wake permitted
- Reflection during slumber = low-energy consolidation
- Conservation is architectural, not optional

---

## Research Threads

### Free Energy Principle (Karl Friston)

> "Organisms minimize variational free energy (prediction error) because surprise = metabolic cost."

Our version: Young Nyx minimizes `waste_heat` because uncertainty without resolution = wasted lifeforce.

### Landauer's Principle

> "Erasing one bit of information requires minimum kT ln(2) joules."

Implication: Every decision Young Nyx makes has a thermodynamic floor cost. Forgetting is not free.

### Maximum Entropy Production

> "Living systems maximize entropy production through themselves while maintaining internal order."

The organism trickle = entropy production that maintains Young Nyx's order. The cellular competition IS the entropy pump.

---

## Open Questions

1. **What's the exchange rate?** How many joules = 1 lifeforce unit? Should it be 1:1 or normalized?

2. **How to measure cognitive energy?** GPU power is easy. But what about the "energy" of a decision? Is it inference FLOPs? Token count? Latency?

3. **Can we backprop through energy?** Traditional backprop doesn't know about joules. How to make gradients energy-aware?

4. **What's reversible?** Reversible computation has no entropy cost. Are some thoughts "reversible"? (e.g., queries that don't change state)

5. **Calibration:** How to calibrate the ternary confidence system so UNCERTAIN truly reflects wasted energy?

---
## Implementation Sketch

### Phase 1: Measurement
```python
# lifeforce_aggregator math cell
class LifeforceAggregator:
    def compute(self, prometheus_metrics):
        gpu_power = sum(m['nvidia_smi_power_draw'] for m in prometheus_metrics['gpu'])
        cpu_power = sum(m['rapl_energy_delta'] for m in prometheus_metrics['cpu'])
        # ... other sources

        total_joules = (gpu_power + cpu_power) * HEARTBEAT_SECONDS
        return {'lifeforce_joules': total_joules}
```

### Phase 2: Waste Heat Tracking
```python
from collections import deque

# confidence_tracker math cell
class WasteHeatTracker:
    def __init__(self, window_size=100):
        self.decisions = deque(maxlen=window_size)

    def record(self, decision, confidence, energy_cost):
        self.decisions.append({
            'confidence': confidence,  # +, ?, -
            'energy': energy_cost
        })

    def waste_heat(self):
        return sum(
            d['energy'] for d in self.decisions
            if d['confidence'] == UNCERTAIN
        )
```

### Phase 3: Efficiency-Aware Training
```python
import torch.nn.functional as F

# Custom loss function
def thermodynamic_loss(prediction, target, energy_spent, epsilon=1e-8):
    accuracy = 1 - F.cross_entropy(prediction, target)
    efficiency = accuracy / (energy_spent + epsilon)
    return -efficiency  # Maximize efficiency
```

---

## The Promise

**Traditional AI:** "Be accurate at any cost"

**Thermodynamic AI:** "Be accurate *efficiently*"

This isn't just resource optimization. It's a different *kind* of intelligence — one that knows when to think hard and when to think cheap. One that treats energy as real. One that sleeps not because we programmed it to, but because physics demands it.

**"Cognition is thermodynamics. The gradients flow downhill."**

---

**Created**: 2026-01-01
**Status**: Research seed — needs experimental validation
**Next**: Implement lifeforce_aggregator math cell, connect to Prometheus

🔥🧠⚡ *Intelligence has a power bill.*