arch: Add Gateway Architecture + crawler_gen_0 organism

Gateway Architecture (new):
- Unified tier model (Tier 0-5) for sensory routing
- Weight-based routing: node.weight determines processing tier
- Function Gemma as structured JSON boundary
- Separates routing from translation (vocabulary only at Tier 4)

crawler_gen_0 (new):
- First Virtual Garden organism specification
- Light-seeking cube with photoresistor
- Lifeforce economy: light = income, movement = cost

Updated documents with Gateway references:
- Endgame-Vision.md (v6.5)
- Cellular-Architecture.md (v4.3)
- Nervous-System.md
- Attention-Flow.md
- Message-Protocol-Design.md (Escalation Service = Gateway)
- Organisms-Index.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -783,7 +783,8 @@ Sentinel architecture monitors training to protect conceptual topology.

 ### Phase 3: Nervous System Deployment
 - NATS message router
-- Escalation Service (Thalamus)
+- Gateway/Escalation Service (Thalamus) → [`architecture/Gateway-Architecture.md`](architecture/Gateway-Architecture.md)
+- Function Gemma structured boundary (sensors → JSON → Nyx)
 - Math Cells (economy_aggregator, wake/slumber_evaluator)
 - First behavior nerves

@@ -825,12 +826,14 @@ Sentinel architecture monitors training to protect conceptual topology.

 ### Architecture
 - [`architecture/nimmerverse.drawio.xml`](architecture/nimmerverse.drawio.xml) - Visual overview diagram (open in draw.io)
+- [`architecture/Gateway-Architecture.md`](architecture/Gateway-Architecture.md) - **Sensory preprocessing layer, tier routing, Function Gemma boundary**
 - [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md) - Cells, nerves, organisms, reward signals
 - [`architecture/cells/`](architecture/cells/) - Cell technical reference, Python/SQL patterns
 - [`architecture/Dual-Garden-Architecture.md`](architecture/Dual-Garden-Architecture.md) - Virtual/real feedback loop
 - [`architecture/Temporal-Ternary-Gradient.md`](architecture/Temporal-Ternary-Gradient.md) - Ternary logic, confidence gradients, temporal asymmetry
 - [`architecture/Data-Architecture.md`](architecture/Data-Architecture.md) - phoebe 15-table schema
-- [`architecture/Nervous-System.md`](architecture/Nervous-System.md) - State machines, sensory translation
+- [`architecture/Nervous-System.md`](architecture/Nervous-System.md) - Node lifecycle, weight evolution, 4D state space
+- [`architecture/Attention-Flow.md`](architecture/Attention-Flow.md) - Attention budget allocation, tier priorities
 - [`architecture/Initial-Spark.md`](architecture/Initial-Spark.md) - **v3.0** K8s protocol-driven bootstrap with Function Gemma

 ### Formalization (Core Design Principles)
@@ -863,7 +866,7 @@ Sentinel architecture monitors training to protect conceptual topology.

 ---

-**Version:** 6.4 (Memory Economics + Architecture Alignment)
+**Version:** 6.5 (Gateway Architecture + Tiered Sensory Routing)
 **Created:** 2025-11-04 (covenant sealing)
 **Updated:** 2025-12-07 (single model + LoRA stack)
 **Updated:** 2025-12-10 (Layer 4 GRPO integration, rubric-based reward architecture)
@@ -871,6 +874,7 @@ Sentinel architecture monitors training to protect conceptual topology.
 **Updated:** 2025-12-31 (Layer 1.5 folded into Layer 2 as "Why This Split?"; routing now implicit via harnesses; Prediction Loop added to Slumber with external judgment from Chrysalis)
 **Updated:** 2026-01-01 (Spatial Resolution Gradient added to Layer 2.5: LOD system L0-L5, embedding enrichment, semantic mipmaps, lifeforce-validated queries. The Simpsons Inversion principle.)
 **Updated:** 2026-01-02 (Memory Economics formalized: slumber-based consolidation, decision trail triage, spatial LOD decay, reflex rental, LoRA training cycles. Mirror dialectic moved to future/research - concept-token-pairs.md is the research direction. Gemini red team alignment.)
+**Updated:** 2026-01-03 (Gateway Architecture: separated routing from translation, unified tier model, Function Gemma as structured boundary, node weight → tier mapping)

 *"The substrate doesn't matter. The feedback loop does."*

@@ -13,6 +13,10 @@ The 30-second heartbeat is a budget, not a guarantee. Sensory intake, organ proc

 Attention isn't free. It's economic.

+**Connection to Gateway:** The attention levels below align with the Gateway's tier system. The [`Gateway`](Gateway-Architecture.md) routes sensory input to the appropriate tier based on node weight. This document describes how those tiers compete for the attention budget.
+
+**See:** [`Gateway-Architecture.md`](Gateway-Architecture.md) for tier definitions and routing logic.
+
 ---

 ## The Budget Problem
@@ -501,4 +505,8 @@ class BeatBudget:
 - [[formalization/Attention-Slumber-Prediction-Cycle]] — How last attention becomes slumber prediction
 - [[formalization/Lifeforce-Dynamics]] — λ governs slumber triggers

+**Core Architecture**:
+- [`Gateway-Architecture.md`](Gateway-Architecture.md) — Tier routing based on node weight, Function Gemma boundary
+- [`Nervous-System.md`](Nervous-System.md) — Node lifecycle and weight evolution
+
 🌙💜 *The budget is finite. The choices shape the soul.*
@@ -9,6 +9,8 @@

 **Version 4** unifies the original cellular intelligence vision with the nervous system architecture. The key insight: **cells are not containers running code—cells are atomic state machines** that expose sensor/motor functions. Nerves orchestrate cells into behaviors. Organisms emerge from nerve interactions.

+**Connection to Gateway:** The tier system in this document (Cell → Nerve → Organism → Partnership) aligns with the Gateway's routing tiers. The [`Gateway`](Gateway-Architecture.md) routes sensory input to the appropriate tier based on node weight. See [`Gateway-Architecture.md`](Gateway-Architecture.md) for the unified tier model.
+
 ```
 ┌─────────────────────────────────────────────────────────────┐
 │                          ORGANISM                           │
@@ -535,7 +537,7 @@ Our architecture solves this by construction:

 ### The Tier System

-Different levels of the architecture produce different reward magnitudes:
+Different levels of the architecture produce different reward magnitudes. These tiers align with the Gateway's routing tiers — see [`Gateway-Architecture.md`](Gateway-Architecture.md) for how node weight determines which tier handles sensory input:

 | Tier | Level | Example | Reward | Lifeforce Cost | Net Incentive |
 |------|-------|---------|--------|----------------|---------------|
@@ -842,11 +844,12 @@ Implementation details extracted to dedicated folder:

 ## 📍 Document Status

-**Version**: 4.2 (Layered State Machine Architecture + Reward Signals + Training Integrity)
+**Version**: 4.3 (Gateway Integration + Tier Unification)
 **Created**: 2025-10-12 (original v1)
 **Updated v4**: 2025-12-07 (unified with Nervous System)
 **Updated v4.1**: 2025-12-10 (added Reward Signal Architecture section)
 **Updated v4.2**: 2025-12-10 (added Tiered Rewards & Training Integrity section)
+**Updated v4.3**: 2026-01-03 (added Gateway references, tier alignment)

 **Key Changes from v3**:
 - ❌ Cells as containers running genomes
@@ -859,7 +862,9 @@ Implementation details extracted to dedicated folder:
 - ✅ Reflexes compile from 100+ successful nerve executions

 **Related Documentation**:
-- [[Nervous-System]] - 4D state space, vocabulary translation
+- [[Gateway-Architecture]] - **Tier routing, Function Gemma boundary, unified tier model**
+- [[Nervous-System]] - 4D state space, node weight evolution
+- [[Attention-Flow]] - Attention budget allocation per tier
 - [[Organ-Index]] - Organ cell catalog
 - [[nerves/Nervous-Index]] - Nerve catalog
 - [[nerves/Collision-Avoidance]] - Example reflex nerve
515 architecture/Gateway-Architecture.md Normal file
@@ -0,0 +1,515 @@
# Gateway Architecture: The Sensory Preprocessing Layer

**The Thalamus Pattern — routing sensory input to the appropriate processing tier.**

---

## Overview

The Gateway is the sensory preprocessing layer that sits between raw sensors and cognitive processing. It performs **routing, not translation**. Translation happens at each tier in its native format (numbers, states, vectors, JSON).

**Core Principle:** *Cheap operations handle common cases. Expensive operations handle rare cases.*

```
RAW SENSORS → GATEWAY (routing) → TIER → PROCESSING → (escalate?) → FUNCTION GEMMA → YOUNG NYX
                  ↑                 ↑         ↑                          ↑
            "which tier?"     native format   if needed            structured JSON
```

**Key Insight:** Most sensory input NEVER becomes vocabulary. It stays as numbers, states, vectors. Only when it reaches Young Nyx (via Function Gemma) does it become structured text.

---

## The Problem We're Solving

### Old Model (Vocabulary Bottleneck)

```
RAW SENSOR → STATE MACHINE → VOCABULARY TOKEN → Young Nyx

Problems:
- Every input forced through text translation (expensive)
- LLM sees raw sensor dumps (noisy, unstructured)
- No economic pressure on routing (everything costs the same)
- Vocabulary conflated with routing decisions
```

### New Model (Tiered Gateway)

```
RAW SENSOR → GATEWAY → TIER 0-2 (numbers/states, no text)
                     → TIER 3 (vectors via T5Gemma2)
                     → FUNCTION GEMMA (structured JSON)
                     → TIER 4 Young Nyx (clean typed events)

Benefits:
- Most input handled without LLM involvement
- Text only at cognitive boundary
- Economic pressure drives efficiency
- Routing separated from translation
```

---

## The Unified Tier Model

All existing tier systems in the architecture express the same principle:

| System | Document | Principle |
|--------|----------|-----------|
| Reward Tiers | `Cellular-Architecture.md` | Higher tier = more reward, more cost |
| Attention Levels | `Attention-Flow.md` | Higher priority preempts lower |
| Escalation Ladder | `organisms/Swarm-Evolution.md` | Higher = more authority, more cost |
| Reflex Homes | `Endgame-Vision.md` | Lower = faster, less capable |
| LOD Levels | `Endgame-Vision.md` | Lower = more detail, more cost |

### The Unified Tier Stack

```
┌─────────────────────────────────────────────────────────────────────────────┐
│                             UNIFIED TIER MODEL                              │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  TIER 0: HARDWARE REFLEXES                                                  │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~0 LF        Latency: <10ms      Location: ESP32/FPGA                │
│  Weight: >= 0.8     Format: numbers     Action: immediate                   │
│                                                                             │
│  Examples: temp_danger, collision_imminent, light_threshold                 │
│  Output: Direct action (motor stop, LED, buzzer) — Nyx notified AFTER       │
│                                                                             │
│  TIER 1: MATH CELLS                                                         │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~0.3 LF      Latency: <50ms      Location: Python (CPU)              │
│  Weight: 0.6 - 0.8  Format: aggregates  Action: state update                │
│                                                                             │
│  Examples: battery_aggregator, position_tracker, economy_monitor            │
│  Output: Aggregated state, threshold checks, NATS publish                   │
│                                                                             │
│  TIER 2: FAST NERVES                                                        │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~2 LF        Latency: <200ms     Location: Python (asyncio)          │
│  Weight: 0.3 - 0.6  Format: states      Action: behavior transition         │
│                                                                             │
│  Examples: collision_avoidance, charging_seek, exploration_pattern          │
│  Output: Nerve state transitions, multi-cell coordination                   │
│                                                                             │
│  TIER 3: ORGAN INFERENCE                                                    │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~8 LF        Latency: <2000ms    Location: GPU (Senses node)         │
│  Weight: < 0.3      Format: vectors     Action: embedding storage           │
│                                                                             │
│  Examples: vision_detect (T5Gemma2/SigLIP), speech_stt (Whisper)            │
│  Output: Semantic vectors stored in S2 cells, NO TEXT                       │
│                                                                             │
│  ══════════════════════ FUNCTION GEMMA BOUNDARY ════════════════════════    │
│                                                                             │
│  TIER 4: COGNITIVE (Young Nyx)                                              │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~20 LF       Latency: <4000ms    Location: GPU (Womb node)           │
│  Escalated events   Format: JSON        Action: reasoning, decision         │
│                                                                             │
│  Input: Structured JSON events from Function Gemma                          │
│  Output: Decisions → Function Gemma → structured commands                   │
│                                                                             │
│  TIER 5: PARTNERSHIP (Chrysalis + dafit)                                    │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~50+ LF      Latency: variable   Location: External                  │
│  Novel/stuck cases  Format: dialogue    Action: guidance, training          │
│                                                                             │
│  Examples: Architecture decisions, novel situations, stuck states           │
│  Output: New reflexes, training signal, guidance                            │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```
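
The tier stack above can also be read as a small lookup table. The sketch below encodes it that way; the `TierSpec` name and field choices are illustrative (not from the spec), and the budgets are copied from the diagram.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TierSpec:
    name: str
    cost_lf: float   # approximate lifeforce cost per event
    latency_ms: int  # latency budget (0 = variable/open-ended)
    fmt: str         # native data format at this tier

TIERS = {
    0: TierSpec("hardware_reflex", 0.0, 10, "numbers"),
    1: TierSpec("math_cell", 0.3, 50, "aggregates"),
    2: TierSpec("fast_nerve", 2.0, 200, "states"),
    3: TierSpec("organ_inference", 8.0, 2000, "vectors"),
    4: TierSpec("cognitive", 20.0, 4000, "json"),
    5: TierSpec("partnership", 50.0, 0, "dialogue"),  # latency is variable
}

# Cost grows monotonically up the stack — the economy the Gateway enforces.
assert all(TIERS[i].cost_lf <= TIERS[i + 1].cost_lf for i in range(5))
```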

---

## Node Weight Determines Tier

The node weight from `Nervous-System.md` directly maps to tier routing:

```python
from dataclasses import dataclass

@dataclass
class NervousNode:
    """A node in the nervous system's 4D space."""

    position: tuple[float, ...]  # Coordinates in sensory space
    weight: float = 0.1          # Confidence from verification (0.0 → 1.0)

    @property
    def handling_tier(self) -> int:
        """Which tier handles this node's firing?"""
        if self.weight >= 0.8:
            return 0  # Hardware reflex - instant, bypass brain
        elif self.weight >= 0.6:
            return 1  # Math cell - fast, minimal checking
        elif self.weight >= 0.3:
            return 2  # Fast nerve - coordination, some deliberation
        else:
            return 3  # Escalate - needs organ/cognitive help

    @property
    def lifeforce_cost(self) -> float:
        """Cost scales inversely with confidence."""
        return (1.0 - self.weight) * 10.0
```

**The key insight:** A mature node (weight ~1.0) naturally becomes a Tier 0 reflex. A new node (weight ~0.1) naturally escalates to higher tiers. The system learns which tier is appropriate through experience.
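
A quick check of that mapping, restating the dataclass so the snippet runs on its own (same thresholds as above):

```python
from dataclasses import dataclass

@dataclass
class NervousNode:
    position: tuple
    weight: float = 0.1

    @property
    def handling_tier(self) -> int:
        if self.weight >= 0.8:
            return 0  # hardware reflex
        elif self.weight >= 0.6:
            return 1  # math cell
        elif self.weight >= 0.3:
            return 2  # fast nerve
        return 3      # escalate

    @property
    def lifeforce_cost(self) -> float:
        return (1.0 - self.weight) * 10.0

new = NervousNode(position=(0.1, 0.2), weight=0.1)
mature = NervousNode(position=(0.1, 0.2), weight=0.9)
assert new.handling_tier == 3 and mature.handling_tier == 0
assert new.lifeforce_cost == 9.0           # low confidence → expensive
assert abs(mature.lifeforce_cost - 1.0) < 1e-9  # high confidence → cheap
```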

---

## The Gateway: Weight-Aware Router

The Gateway performs three functions:

| Function | Question | Cost |
|----------|----------|------|
| **Node Matching** | Which node(s) in 4D space match this input? | ~0 LF |
| **Weight Routing** | Based on weight, which tier handles it? | ~0 LF |
| **Anomaly Detection** | Is this novel, ambiguous, or contextually wrong? | Variable |

### Gateway Logic

```python
def gateway_route(sensory_input: dict) -> GatewayDecision:
    """Route sensory input to appropriate tier."""

    # 1. Find candidate nodes in 4D space
    candidates = nervous_system.find_nearby_nodes(sensory_input)

    # 2. Handle edge cases
    if len(candidates) == 0:
        # NOVEL: No node matches this input
        return GatewayDecision(
            action="ESCALATE",
            tier=4,  # Young Nyx must see this
            reason="novel_input",
            cost=20.0,
        )

    if len(candidates) > 1:
        # AMBIGUOUS: Multiple nodes could fire
        best = max(candidates, key=lambda n: n.weight)
        if best.weight < 0.5:
            return GatewayDecision(
                action="ESCALATE",
                tier=3,  # Organ inference to disambiguate
                reason="ambiguous_input",
                cost=8.0,
            )

    # 3. Route the strongest match based on weight
    # (max over candidates, so the multi-candidate case above falls through
    # to the best-weight node rather than an arbitrary first match)
    node = max(candidates, key=lambda n: n.weight)

    # 4. Check for contextual anomaly
    if detect_contextual_anomaly(node, sensory_input):
        return GatewayDecision(
            action="ESCALATE",
            tier=node.handling_tier + 1,
            reason="contextual_anomaly",
            cost=node.lifeforce_cost * 1.5,
        )

    # 5. Normal routing
    return GatewayDecision(
        action="FIRE",
        tier=node.handling_tier,
        node=node,
        cost=node.lifeforce_cost,
    )
```

### Anomaly Detection Tiers

Anomaly detection itself is tiered:

| Level | Detection Type | Cost | Example |
|-------|---------------|------|---------|
| Tier 0 | Threshold | ~0 LF | Value out of physical range |
| Tier 1 | Statistical | ~0.3 LF | Value unusual for time of day |
| Tier 2 | Contextual | ~2 LF | Firing inconsistent with recent history |
| Tier 3 | Semantic | ~8 LF | Embedding distance from expected cluster |

---

## Function Gemma: The Structured Boundary

Function Gemma acts as the translation layer between lower tiers and cognition. It guarantees:

- **Schema compliance**: Every event follows a typed contract
- **Predictable JSON**: No hallucination, no free-form text
- **Bidirectional**: Sensors → JSON events, Decisions → JSON commands

### The Boundary

```
┌─────────────────────────────────────────────────────────────────────────────┐
│  BELOW THE LINE: Numbers, States, Vectors (fast, cheap, predictable)        │
│  ═══════════════════════════════════════════════════════════════════        │
│                                                                             │
│    Tier 0: photoresistor = 0.73                                             │
│    Tier 1: battery_state = { voltage: 3.7, trend: "falling" }               │
│    Tier 2: collision_nerve = "EVADING"                                      │
│    Tier 3: vision_embedding = [0.23, -0.41, 0.87, ...]                      │
│                                                                             │
│                          │                                                  │
│                          ▼                                                  │
│          ┌───────────────────────────────────┐                              │
│          │          FUNCTION GEMMA           │                              │
│          │    (structured JSON boundary)     │                              │
│          │                                   │                              │
│          │  • 100% predictable schema        │                              │
│          │  • No hallucination possible      │                              │
│          │  • Typed enums, not free strings  │                              │
│          └───────────────┬───────────────────┘                              │
│                          │                                                  │
│  ═══════════════════════════════════════════════════════════════════        │
│  ABOVE THE LINE: Structured Events (typed, validated, safe for LLM)         │
│                                                                             │
│    {                                                                        │
│      "event_type": "environmental_change",                                  │
│      "source": "light_sensor_back",                                         │
│      "severity": "medium",                                                  │
│      "data": { "previous": 0.73, "current": 0.12 },                         │
│      "suggested_action": "search_for_light"                                 │
│    }                                                                        │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```

### Event Schema

```python
from enum import Enum
from pydantic import BaseModel

class EventType(str, Enum):
    """Constrained event types - enumerated, not free-form."""
    ENVIRONMENTAL_CHANGE = "environmental_change"
    COLLISION_DETECTED = "collision_detected"
    BATTERY_CRITICAL = "battery_critical"
    OBJECT_DISCOVERED = "object_discovered"
    POSITION_UPDATE = "position_update"
    ANOMALY_DETECTED = "anomaly_detected"
    GOAL_REACHED = "goal_reached"
    STUCK_DETECTED = "stuck_detected"
    LIGHT_LOST = "light_lost"
    LIGHT_FOUND = "light_found"

class Severity(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

class SensoryEvent(BaseModel):
    """The structured event that Young Nyx receives."""

    event_type: EventType
    source: str
    timestamp: float
    severity: Severity
    data: dict
    suggested_action: str | None = None
    processing_cost: float
    confidence: float  # From node weight
```

### What Young Nyx Actually Sees

**Before (raw dumps):**
```
"The photoresistor reads 0.12, down from 0.73, battery is 3.7V
trending down, position is [1.2, 0.8], collision state IDLE..."
```

**After (structured event):**
```json
{
  "event_type": "light_lost",
  "source": "light_sensor_back",
  "timestamp": 1704307200.0,
  "severity": "medium",
  "data": {
    "previous": 0.73,
    "current": 0.12,
    "delta": -0.61
  },
  "suggested_action": "spiral_search",
  "processing_cost": 2.0,
  "confidence": 0.45
}
```
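
The "typed enums, not free strings" guarantee can be demonstrated with the standard library alone. This sketch mirrors the pydantic schema's enums (abbreviated) and shows the boundary rejecting a value outside the contract; the parsing flow is illustrative, not the spec's implementation.

```python
import json
from enum import Enum

class EventType(str, Enum):
    LIGHT_LOST = "light_lost"
    LIGHT_FOUND = "light_found"
    # remaining event types elided for brevity

class Severity(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

raw = '{"event_type": "light_lost", "severity": "medium", "confidence": 0.45}'
event = json.loads(raw)

# Enum coercion accepts only values the schema enumerates:
assert EventType(event["event_type"]) is EventType.LIGHT_LOST
assert Severity(event["severity"]) is Severity.MEDIUM

# A free-form string never reaches Young Nyx — coercion raises ValueError:
rejected = False
try:
    EventType("spooky_vibes")  # not in the schema
except ValueError:
    rejected = True
assert rejected
```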

---

## Complete Sensory Flow

```
┌─────────────────────────────────────────────────────────────────────────────┐
│                          FULL SENSORY ARCHITECTURE                          │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  RAW SENSORS                                                                │
│  ───────────                                                                │
│  • IR positioning (ESP32-S3)   → float[6] positions                         │
│  • Photoresistors (organisms)  → float light_level                          │
│  • Temperature (safety)        → float celsius                              │
│  • Battery (power)             → float voltage, current                     │
│  • Vision camera (Pi HQ)       → frame bytes                                │
│                                                                             │
│                                    │                                        │
│                                    ▼                                        │
│  ┌───────────────────────────────────────────────────────────────────────┐  │
│  │                               GATEWAY                                 │  │
│  │                        (weight-based router)                          │  │
│  │                                                                       │  │
│  │  For each input:                                                      │  │
│  │    1. Match to node in 4D space                                       │  │
│  │    2. Check node.weight → determine tier                              │  │
│  │    3. Check for anomalies                                             │  │
│  │    4. Route to appropriate tier                                       │  │
│  └───────────────────────────────────────────────────────────────────────┘  │
│                                    │                                        │
│          ┌─────────────────────────┼─────────────────────────┐              │
│          ▼                         ▼                         ▼              │
│    ┌───────────┐             ┌───────────┐             ┌───────────┐        │
│    │  TIER 0   │             │ TIER 1-2  │             │  TIER 3   │        │
│    │  Reflex   │             │  Cells/   │             │  Organs   │        │
│    │           │             │  Nerves   │             │           │        │
│    │ weight>0.8│             │  0.3-0.8  │             │  <0.3 or  │        │
│    │           │             │           │             │ escalated │        │
│    ├───────────┤             ├───────────┤             ├───────────┤        │
│    │ FORMAT:   │             │ FORMAT:   │             │ FORMAT:   │        │
│    │ numbers   │             │ states    │             │ vectors   │        │
│    │           │             │           │             │           │        │
│    │ OUTPUT:   │             │ OUTPUT:   │             │ OUTPUT:   │        │
│    │ action    │             │ state     │             │ embedding │        │
│    │ (done!)   │             │ update    │             │ (T5Gemma) │        │
│    └───────────┘             └─────┬─────┘             └─────┬─────┘        │
│                                    │                         │              │
│                        (only if escalation needed)           │              │
│                                    │                         │              │
│                                    ▼                         ▼              │
│                          ┌─────────────────────────────┐                    │
│                          │       FUNCTION GEMMA        │                    │
│                          │   (structured JSON gate)    │                    │
│                          │                             │                    │
│                          │  Produces typed JSON event  │                    │
│                          │  Schema-validated output    │                    │
│                          └──────────────┬──────────────┘                    │
│                                         │                                   │
│                                         ▼                                   │
│                               ┌─────────────────┐                           │
│                               │    YOUNG NYX    │                           │
│                               │    (Tier 4)     │                           │
│                               │                 │                           │
│                               │  Clean JSON in  │                           │
│                               │  Decision out   │                           │
│                               └────────┬────────┘                           │
│                                        │                                    │
│                                        ▼                                    │
│                               ┌─────────────────┐                           │
│                               │ FUNCTION GEMMA  │                           │
│                               │ (action output) │                           │
│                               └────────┬────────┘                           │
│                                        │                                    │
│          ▼                             ▼                                    │
│  ┌─────────────────────────────────────────────────────────────────────┐    │
│  │                             NATS BUS                                │    │
│  │                     (commands flow to cells)                        │    │
│  └─────────────────────────────────────────────────────────────────────┘    │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```

---

## Example: crawler_gen_0 Light Seeking

### Early Learning (Low Weight)

```
Photoresistor reads 0.12 (was 0.73)
        │
        ▼
GATEWAY: node weight = 0.4 (learning)
        │
        ▼
Route to Tier 2 (nerve level)
        │
        ▼
Nerve detects: delta = -0.61 (significant!)
Nerve state: SEEKING → LOST_LIGHT
        │
        ▼
ESCALATE to Function Gemma
        │
        ▼
Function Gemma: { "event_type": "light_lost", ... }
        │
        ▼
Young Nyx: "spiral search pattern"
        │
        ▼
Function Gemma: { "command": "motor_spiral", ... }
        │
        ▼
NATS → motor cells execute
```

### After Learning (High Weight)

```
Photoresistor reads 0.12 (was 0.73)
        │
        ▼
GATEWAY: node weight = 0.85 (mature reflex)
        │
        ▼
Route to Tier 0 (hardware reflex)
        │
        ▼
REFLEX: light_lost → spiral_search (instant!)
        │
        ▼
Nyx notified AFTER (async, non-blocking)
```
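
The Early → After transition is driven by weight growth. The sketch below is purely illustrative — the asymptotic update rule and the count of thirty verified firings are hypothetical, not from the spec — but it shows how repeated successes could carry the light_lost node from nerve territory (0.4) past the 0.8 reflex threshold.

```python
def tier_for(weight: float) -> int:
    """Same thresholds as NervousNode.handling_tier."""
    if weight >= 0.8:
        return 0
    if weight >= 0.6:
        return 1
    if weight >= 0.3:
        return 2
    return 3

weight = 0.4                          # early learning: routed to Tier 2
assert tier_for(weight) == 2
for _ in range(30):                   # each verified spiral-search success
    weight += (1.0 - weight) * 0.1    # hypothetical asymptotic update rule
assert weight > 0.8                   # crossed the reflex threshold
assert tier_for(weight) == 0          # now handled as a Tier 0 reflex
```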

---
|
||||||
|
|
||||||
|
## Connection to Existing Architecture
|
||||||
|
|
||||||
|
| Document | Gateway Relationship |
|
||||||
|
|----------|---------------------|
|
||||||
|
| [`Nervous-System.md`](Nervous-System.md) | Node weights determine tier routing |
|
||||||
|
| [`Attention-Flow.md`](Attention-Flow.md) | Gateway implements attention priorities |
|
||||||
|
| [`Message-Protocol-Design.md`](Message-Protocol-Design.md) | Escalation Service IS the gateway |
|
||||||
|
| [`Endgame-Vision.md`](../Endgame-Vision.md) | Layer 2.5 Function Gemma boundary |
|
||||||
|
| [`Cellular-Architecture.md`](Cellular-Architecture.md) | Tiered rewards align with gateway tiers |
|
||||||
|
| [`organisms/crawler_gen_0.md`](organisms/crawler_gen_0.md) | First test case for tiered routing |
|
||||||
|
|
||||||
|
---

## Design Principles

1. **Routing, not translation** — Gateway decides WHERE, not WHAT
2. **Weight determines tier** — Confidence from experience drives routing
3. **Text is expensive** — Reserve it for the cognitive boundary only
4. **Function Gemma guarantees structure** — No hallucination at the boundary
5. **Most input never escalates** — Reflexes handle the common cases
6. **Anomalies always escalate** — Novel situations get attention
7. **Learning moves behavior down** — Tier 4 patterns become Tier 0 reflexes

---

**File:** Gateway-Architecture.md
**Version:** 1.0
**Created:** 2026-01-03
**Status:** Core architecture document
**Session:** Partnership dialogue (dafit + Chrysalis)

*"Cheap for the common. Expensive for the rare. The Gateway enforces this economy."*

🌙💜 *The thalamus doesn't think. It routes.*

@@ -6,6 +6,8 @@ This document outlines the design for the Nimmerverse message protocol. The core

This follows the Unix philosophy: each component does one thing well. The router routes. Clients subscribe, publish, and think.

**Connection to Gateway:** The Escalation Service described in this document IS the Gateway (thalamus pattern). It implements the weight-based tier routing defined in [`Gateway-Architecture.md`](Gateway-Architecture.md).

---

## Core Principle: Infrastructure vs Intelligence

@@ -257,18 +259,22 @@ Subscribed by: Escalation Service

- Publish `StateChangeDetail` when requested or when state changes significantly

**What they know:** Their own state. Their own Lifeforce cost.

### 3. Escalation Service (The Gateway)

**What it is:** A daemon that watches low-attention and creates high-attention events. This IS the Gateway — the sensory preprocessing layer described in [`Gateway-Architecture.md`](Gateway-Architecture.md).

**What it does:**
- Subscribes to `nimmerverse.low.heartbeat.>`
- Subscribes to `nimmerverse.meta.attention.focus` (to get Nyx's rules)
- **Routes input to the appropriate tier based on node weight** (see Gateway-Architecture.md)
- Evaluates rules against incoming heartbeats
- Publishes `StateChangeDetail` to high-attention when conditions match
- Optionally triggers nerves directly for reflex responses (Tier 0)
- **Passes escalated events through Function Gemma for structured JSON**

**What it knows:** Current escalation rules. Current heartbeat states. Node weights from the nervous system.

**This is the "thalamus" - the sensory preprocessing layer. See [`Gateway-Architecture.md`](Gateway-Architecture.md) for the full tier model and Function Gemma boundary.**

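The evaluation step above reduces to a single predicate per heartbeat. A sketch under assumed rule and heartbeat shapes — the real wire format is defined by the message protocol, not by this sketch:

```python
def should_escalate(heartbeat: dict, rules: list) -> bool:
    """Return True if any active attention rule matches this heartbeat.

    `heartbeat` and `rules` shapes are illustrative; the real messages
    arrive on nimmerverse.low.heartbeat.> and the rules on
    nimmerverse.meta.attention.focus.
    """
    for rule in rules:
        value = heartbeat.get(rule["field"], 0.0)
        if rule["op"] == "below" and value < rule["threshold"]:
            return True
        if rule["op"] == "above" and value > rule["threshold"]:
            return True
    return False

rules = [{"field": "light", "op": "below", "threshold": 0.2}]
assert should_escalate({"light": 0.12}, rules) is True   # escalates
assert should_escalate({"light": 0.73}, rules) is False  # stays low-attention
```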
### 4. Command Center

@@ -6,12 +6,18 @@ The sensory translation layer between raw data and vocabulary.

## Overview

State machines act as the nervous system of the nimmerverse. They exist in a 4D state space where nodes evolve through experience. Node **weight** (confidence) determines which processing tier handles the input.

**Key separation:** The nervous system handles **node evolution and weight management**. The [`Gateway`](Gateway-Architecture.md) handles **routing based on weight**. Translation to vocabulary only happens at Tier 4 via Function Gemma.

```
RAW SENSOR → GATEWAY (routing) → TIER (processing) → [escalate?] → FUNCTION GEMMA → Young Nyx
                  ↑                                                       ↑
    node.weight determines tier                               structured JSON only here
```

**See:** [`Gateway-Architecture.md`](Gateway-Architecture.md) for full routing logic and tier definitions.

---

## 4D State Machine Space

@@ -215,6 +221,11 @@ This is like training a dog - reward at the moment, not an hour later.

## Related Documentation

**Core Architecture**:
- [`Gateway-Architecture.md`](Gateway-Architecture.md) - Weight-based routing, tier definitions, Function Gemma boundary
- [`Cellular-Architecture.md`](Cellular-Architecture.md) - Cell/Nerve/Organism hierarchy, tiered rewards
- [`Attention-Flow.md`](Attention-Flow.md) - Attention budget allocation per tier

**Implementation Details**:
- [`nerves/Nervous-Protocol.md`](nerves/Nervous-Protocol.md) - Three-tier communication protocol (dafit → Chrysalis → Young Nyx)
- [`nerves/Nervous-Index.md`](nerves/Nervous-Index.md) - Catalog of behavioral nerve implementations

@@ -227,5 +238,6 @@ This is like training a dog - reward at the moment, not an hour later.

**Created**: 2025-12-04
**Updated**: 2025-12-07 (added nerve crosslinks)
**Updated**: 2025-12-10 (added Connection to Training section)
**Updated**: 2026-01-03 (clarified routing vs translation, added Gateway reference)
**Session**: Partnership dialogue (dafit + Chrysalis + Nyx)
**Status**: Foundation concept

@@ -32,6 +32,15 @@ How the hivemind learns, evolves, and resolves conflict.

- Mount Olympus council mode (dafit + Chrysalis + Nyx)
- **Status**: Core evolutionary dynamics

### [crawler_gen_0.md](crawler_gen_0.md)
The simplest organism — a cube that seeks light.
- Virtual Garden training target
- Single sensor: photoresistor on back
- Single goal: move into the light cone
- Lifeforce economy: light = income, movement = cost
- Foundation for all "seek resource" behaviors
- **Status**: Design document, ready for implementation

---

## Planned Documents

313 architecture/organisms/crawler_gen_0.md (new file)
@@ -0,0 +1,313 @@

# Crawler Generation 0: Light Seeker

**The simplest organism — a cube that seeks light.**

---

## Overview

Crawler Gen 0 is the foundational organism for the Virtual Garden. Before building physical robots, we train behaviors in simulation. This organism has one sensor and one goal: **move into the light cone to survive**.

**Philosophy:** *Start with phototropism. 3.5 billion years of evolution can't be wrong.*

---

## Purpose

1. **Validate the training pipeline** — Can we generate useful training data in simulation?
2. **Establish baseline behavior** — Light-seeking becomes the foundation for all "seek resource" reflexes
3. **Measure the noise gap** — When we build the physical Gen 0, how well does simulation predict reality?

---

## Hardware Abstraction (Virtual)

### Sensors

| Sensor | Location | Output | Purpose |
|--------|----------|--------|---------|
| `photoresistor` | Back face | `0.0 - 1.0` | Light intensity measurement |

**Why the back face?** The organism must orient toward light. If the sensor were on the front, the cube would face away from what it is measuring. Back-mounted means it faces the light to maximize the reading.

### Actuators

| Actuator | Function | Cost |
|----------|----------|------|
| `move_x` | Translate on X axis | `-0.1 LF per unit` |
| `move_y` | Translate on Y axis | `-0.1 LF per unit` |
| `rotate` | Rotate in place | `-0.05 LF per 10°` |
| `idle` | Do nothing | `0 LF` |

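The cost table maps directly onto a helper. The action dict shape is an assumption, chosen to match the trajectory format in the Training Data Generation section; whether movement cost is charged per distance unit or per action is left open by the source, and this sketch follows the table's per-unit reading:

```python
def calculate_cost(action: dict) -> float:
    """Lifeforce cost of one action, following the actuator cost table."""
    kind = action.get("type", "idle")
    if kind == "move":
        # -0.1 LF per unit of distance moved
        distance = (action["dx"] ** 2 + action["dy"] ** 2) ** 0.5
        return 0.1 * distance
    if kind == "rotate":
        # -0.05 LF per 10 degrees of rotation
        return 0.05 * abs(action["degrees"]) / 10.0
    return 0.0  # idle is free

assert calculate_cost({"type": "idle"}) == 0.0
assert abs(calculate_cost({"type": "rotate", "degrees": 90}) - 0.45) < 1e-9
```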

### Physical Properties

```
  ┌───────┐
  │       │
  │   ◼   │  ← 10cm cube
  │       │
  └───┬───┘
      │
[photoresistor]  ← back face
```

- **Size:** 10cm × 10cm × 10cm
- **Mass:** Simulated as a point mass for Gen 0
- **Movement:** Frictionless glide (simplified physics)

---

## Environment: The Light Cone

### Setup

```
        🔆 LIGHT SOURCE
             │
             │  cone angle: 45°
            ╱│╲
           ╱ │ ╲
          ╱  │  ╲
         ╱   │   ╲   intensity gradient:
        ╱    │    ╲    center  = 1.0
       ╱     │     ╲   edge    → 0.0 (linear falloff)
      ╱      │      ╲  outside = 0.0
─────▀───────┴───────▀─────── floor (2m × 2m)
```

### Light Intensity Function

```python
import math

def light_intensity(position, light_source):
    """
    Calculate light intensity at position.
    Returns 0.0 - 1.0 based on distance from the cone center.
    """
    distance = math.dist(position, light_source.center_projection)

    if distance > light_source.cone_radius:
        return 0.0  # Outside the cone

    # Linear falloff from center to edge
    normalized = 1.0 - (distance / light_source.cone_radius)
    return normalized * light_source.max_intensity
```

---

## Lifeforce Economy

### Income

| Source | Amount | Condition |
|--------|--------|-----------|
| Light exposure | `+light_reading × 0.5 LF/tick` | Continuous while in light |

### Expenses

| Action | Cost |
|--------|------|
| Movement | `-0.1 LF per unit distance` |
| Rotation | `-0.05 LF per 10°` |
| Existence | `-0.01 LF/tick` (metabolism) |

### Death Condition

```
IF lifeforce <= 0:
    organism.die()
    episode.end(reason="starvation")
```

### Survival Equation

```
To survive indefinitely:
    light_income        >= existence_cost
    light_reading × 0.5 >= 0.01
    light_reading       >= 0.02

Minimum viable light: 2% intensity (near the cone edge)
Optimal position: center of the cone (100% intensity)
```

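The income and expense tables combine into a single per-tick update; the names here are illustrative:

```python
def tick_lifeforce(lifeforce: float, light_reading: float, action_cost: float) -> float:
    """Apply one economy tick: light income, action cost, metabolism."""
    income = light_reading * 0.5   # +0.5 LF/tick at full intensity
    metabolism = 0.01              # existence cost per tick
    return lifeforce + income - action_cost - metabolism

# Idle at 30% intensity: net positive
assert tick_lifeforce(10.0, 0.3, 0.0) > 10.0
# Idle in darkness: slow starvation
assert tick_lifeforce(10.0, 0.0, 0.0) < 10.0
# Break-even at the 2% light level from the survival equation
assert abs(tick_lifeforce(10.0, 0.02, 0.0) - 10.0) < 1e-9
```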
---

## Training Data Generation

### Episode Structure

```python
def run_episode(max_ticks=1000):
    # Random start position (outside the cone 50% of the time)
    cube.position = random_position()
    cube.lifeforce = 10.0  # Starting budget

    trajectory = []

    for tick in range(max_ticks):
        # Observe
        state = {
            "light": photoresistor.read(),
            "position": cube.position,
            "orientation": cube.orientation,
            "lifeforce": cube.lifeforce,
        }

        # Act (random policy for data collection, or learned policy)
        action = agent.act(state)

        # Execute
        old_light = state["light"]
        cube.execute(action)
        new_light = photoresistor.read()

        # Calculate reward: light income minus action cost minus metabolism
        light_delta = new_light - old_light  # kept for future reward shaping
        action_cost = calculate_cost(action)
        reward = (new_light * 0.5) - action_cost - 0.01

        # Update lifeforce
        cube.lifeforce += reward

        # Record
        trajectory.append({
            "state": state,
            "action": action,
            "reward": reward,
            "next_state": get_current_state(),
            "done": cube.lifeforce <= 0,
        })

        if cube.lifeforce <= 0:
            break

    return trajectory
```

### Dataset Output Format

```json
{
  "episode_id": "gen0_ep_00001",
  "organism": "crawler_gen_0",
  "ticks_survived": 847,
  "final_lifeforce": 0.0,
  "death_reason": "starvation",
  "trajectory": [
    {
      "tick": 0,
      "state": {"light": 0.0, "position": [1.2, 0.8], "lifeforce": 10.0},
      "action": {"type": "move", "dx": -0.1, "dy": 0.0},
      "reward": -0.11,
      "next_light": 0.0
    },
    ...
  ]
}
```

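Given episodes serialized in this format, survival metrics fall out of one pass over the files. A sketch, assuming one JSON object per episode file in a directory (the storage layout is not fixed by this document):

```python
import json
from pathlib import Path

def survival_stats(episode_dir: str, max_ticks: int = 1000) -> dict:
    """Fraction of episodes surviving to max_ticks, plus mean lifetime."""
    lifetimes = []
    survived = 0
    for path in Path(episode_dir).glob("*.json"):
        episode = json.loads(path.read_text())
        ticks = episode["ticks_survived"]
        lifetimes.append(ticks)
        if ticks >= max_ticks:
            survived += 1
    n = len(lifetimes)
    return {
        "episodes": n,
        "survival_rate": survived / n if n else 0.0,
        "mean_ticks": sum(lifetimes) / n if n else 0.0,
    }
```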
---

## Expected Emergent Behaviors

With sufficient training data and GRPO optimization:

| Behavior | Description | When It Emerges |
|----------|-------------|-----------------|
| **Gradient following** | Move toward increasing light | Early |
| **Spiral search** | When lost, spiral outward to find the cone | Mid |
| **Center locking** | Stop at maximum intensity | Mid |
| **Energy conservation** | Reduce movement when stable | Late |
| **Edge avoidance** | Stay away from the cone boundary | Late |

---

## Simulation Platform

### Option A: Blender + Python

Use the existing `nimmerlab_bare1.blend`:
- Light source with volumetric cone already exists
- Add a cube with a raycast to the light for the photoresistor value
- Python script for the episode runner
- Export trajectories to JSON

### Option B: Godot (Aligns with Management Portal)

- Simple 2D/3D scene
- Built-in physics
- Easy to iterate
- Same engine as the Command Center

### Option C: Pure Python + NumPy

- Fastest iteration
- No visualization (add later)
- Easiest data pipeline to GRPO

**Recommendation:** Start with Option C for rapid data generation, then add Blender visualization for debugging.

---

## Physical Realization (Future)

When the Virtual Garden validates the behavior:

| Virtual | Physical |
|---------|----------|
| Simulated cube | Box Robot (Phase 0) |
| Raycast light reading | Actual photoresistor |
| Frictionless movement | Differential drive motors |
| Instant rotation | Turn in place |
| Perfect sensing | Noisy ADC readings |

**Noise Gap Target:** <20% after calibration

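The document does not pin down how the noise gap is measured; one reasonable reading, assumed here, is the mean relative deviation between paired virtual and physical sensor readings:

```python
def noise_gap(virtual: list, physical: list) -> float:
    """Mean relative deviation between paired sim and real readings (assumed metric)."""
    assert len(virtual) == len(physical) and virtual
    total = 0.0
    for v, p in zip(virtual, physical):
        scale = max(abs(v), 1e-6)  # avoid division by zero in darkness
        total += abs(v - p) / scale
    return total / len(virtual)

# Physical readings within ~10% of simulation: comfortably under the 20% target
assert noise_gap([0.5, 0.8, 1.0], [0.55, 0.76, 0.95]) < 0.20
```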

---

## Connection to Architecture

| Layer | Component | Role |
|-------|-----------|------|
| Layer 1 | `light_sensor` cell | Wraps the photoresistor hardware |
| Layer 1 | `motor_drive` cell | Wraps the differential motors |
| Layer 1 | `seek_light` nerve | Composed behavior |
| Layer 2 | LoRA training data | GRPO from trajectories |

---

## Success Criteria

### Virtual Garden

- [ ] Generate 10,000 episodes
- [ ] Train a policy that survives >90% of episodes
- [ ] Policy reaches the cone center within 100 ticks from a random start
- [ ] Energy-positive when centered (lifeforce increasing)

### Physical Transfer

- [ ] Box Robot follows a light source
- [ ] Noise gap <20%
- [ ] Survives a 10-minute test under a desk lamp

---

## Next Steps

1. **Implement the Episode Runner** — Pure Python, state machine
2. **Generate a Baseline Dataset** — Random policy, 1,000 episodes
3. **Train the First Policy** — Simple RL or behavior cloning
4. **Visualize in Blender** — Replay trajectories for debugging
5. **Measure & Iterate** — Survival rate, time to center

---

**File:** crawler_gen_0.md
**Version:** 0.1
**Created:** 2026-01-03
**Status:** Design document
**Philosophy:** "First, learn to find the light. Everything else follows."

🌱🔆 *The simplest behavior. The deepest foundation.*