refactor: hierarchical convergence of documentation (v5.0)

- Create architecture/ and operations/ subdirectories for essential docs
- Archive 10 supporting docs to archive/
- Write fresh Endgame-Vision.md v5.0 (383 lines, down from 2284)
- Add operations/Spark-Protocol.md (condensed boot sequence)
- Integrate December 2025 discoveries (Language is Topology, DriftProbe)
- Update README.md with new structure

New layer structure:
- Layer 0: Temporal Foundation (Heartbeat)
- Layer 1: Cellular Society (Evolution Engine)
- Layer 1.5: Cognitive Topology (Language is Topology - NEW)
- Layer 2: Young Nyx (Organ Coordination)
- Layer 3: Dual Gardens (Virtual/Real Loop)
- Layer 4: Trait Evolution (RLVR)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-06 22:58:11 +01:00
parent 998829580f
commit cac4dec411
20 changed files with 732 additions and 2566 deletions

File diff suppressed because it is too large

architecture/Data-Architecture.md Executable file

@@ -0,0 +1,277 @@
---
type: architecture
category: active
project: nimmerverse_sensory_network
status: complete_v3
phase: phase_0
created: 2025-10-07
last_updated: 2025-10-17
token_estimate: 20000
dependencies:
- phoebe_bare_metal
- kubernetes_cluster
tiers: 5
version: v3_primitive_genomes
breakthrough_session: primitive_genomes_gratification_discovery
---
# 🗄️ Cellular Intelligence Data Architecture v3
**Status**: 🟢 Architecture v3 Complete - Primitive Genome Breakthrough!
**Created**: 2025-10-07
**Updated v3**: 2025-10-17 (Primitive Genomes + Gratification + Discovery!)
**Purpose**: Data foundation for cellular intelligence with primitive genome sequences, life force economy, object discovery, noise gap metrics, specialist learning, and rebirth persistence
---
## 🎯 v3 Breakthrough (2025-10-17)
**Logical consistency achieved!** Genomes are now primitive sequences (not pre-programmed algorithms): discovery happens through exploration, gratification is immediate through the life force economy, objects are discovered via image recognition plus human teaching, and the noise gap self-measures learning progress.
**15 Tables Total**: 11 v1 (cellular/society) + 3 v2 (specialist/reflex/body) + 1 v3 (objects!)
---
## 🏗️ Five-Tier Architecture Summary
### **Tier 1: System Telemetry (Weather Station)** 🌊
- Prometheus + InfluxDB (90-day retention)
- Environmental conditions cells adapt to
- Chaos, scheduled, hardware, network weather
### **Tier 2: Population Memory (phoebe)** 🐘
- PostgreSQL 17.6 on phoebe bare metal (1.8TB)
- Database: `nimmerverse`
- 15 tables (complete schema below)
- The rebirth substrate
### **Tier 3: Analysis & Pattern Detection** 🔬
- Grafana, Jupyter, Python scripts
- Specialist formation, reflex detection
- Noise gap calculation
- Research insights
### **Tier 4: Physical Manifestation** 🤖
- ESP32 robots (3-5 units, living room)
- God's eye: 4K camera on ceiling rails!
- Real-world validation (3x rewards)
- Cross-validation bonuses
### **Tier 5: Decision & Command Center** 🎮
- Dashboard, object labeling UI
- Society controls, experiment designer
- Noise gap visualization
- Human-AI partnership interface
---
## 📊 The 15 Tables (Complete Schema)
### Phase 1: Cellular Foundation (4 tables)
**1. genomes** - Primitive sequences (v3!)
```sql
-- v3: Genome = array of primitive operations!
primitive_sequence JSONB NOT NULL
sequence_length INT
avg_lf_cost FLOAT
avg_lf_earned FLOAT
net_lf_per_run FLOAT -- Economics!
```
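For a concrete feel, a `primitive_sequence` payload might look like this (a hypothetical example; the op vocabulary is discovered via `body_schema`, not fixed, so these names are illustrative):
```python
# Hypothetical contents of genomes.primitive_sequence (JSONB).
# Op names are illustrative -- the real vocabulary is discovered, not fixed.
example_primitive_sequence = [
    {"op": "read_ultrasonic"},                  # sense before moving
    {"op": "turn", "args": {"degrees": 45}},
    {"op": "move_forward", "args": {"cm": 10}},
    {"op": "read_battery"},                     # check life force economics
]
sequence_length = len(example_primitive_sequence)  # -> 4
```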
**2. cells** - Birth/death + life force tracking
```sql
garden_type VARCHAR(50) -- 'virtual' or 'real'
life_force_allocated INT
life_force_consumed INT
life_force_earned INT
lf_net INT
milestones_reached JSONB -- v3 discovery tracking!
```
**3. weather_events** - Survival pressure
**4. experiments** - Hypothesis testing
### Phase 2: Society Competition (7 tables)
**5. societies** - Human, Claude, guests
**6. rounds** - Competition results
**7. society_portfolios** - Genome ownership
**8. vp_transactions** - Economic flows
**9. marketplace_listings** - Trading
**10. marketplace_transactions** - History
**11. alliances** - Cooperation
### Phase 3: v2 Distributed Intelligence (3 tables)
**12. specialist_weights** - Trainable domain expertise
```sql
winning_sequences JSONB -- v3: Proven primitive sequences!
virtual_success_rate FLOAT
real_success_rate FLOAT
noise_gap FLOAT -- v3 self-measuring!
```
**13. reflex_distributions** - 94.6% savings!
```sql
sequence_weights JSONB -- v3: {"seq_a": 0.73, "seq_b": 0.18}
exploration_cost_avg_lf FLOAT -- 65 LF
reflex_cost_lf FLOAT -- 3.5 LF
cost_reduction_percent FLOAT -- 94.6%!
```
**14. body_schema** - Discovered capabilities
```sql
primitives_available JSONB -- v3: Discovered operations!
```
### Phase 4: v3 Object Discovery (1 NEW table!)
**15. objects** - Discovered environment features 🎉
```sql
CREATE TABLE objects (
id BIGSERIAL PRIMARY KEY,
object_label VARCHAR(255), -- "chair", "shoe", "charging_station"
garden_type VARCHAR(50), -- 'virtual' or 'real'
position_x FLOAT,
position_y FLOAT,
discovered_by_organism_id BIGINT REFERENCES cells(id),
discovered_at TIMESTAMPTZ DEFAULT NOW(),
human_labeled BOOLEAN, -- Baby parallel!
human_label_confirmed_by VARCHAR(100),
object_type VARCHAR(50), -- 'obstacle', 'resource', 'goal'
properties JSONB,
image_path TEXT,
bounding_box JSONB,
organisms_interacted_count INT
);
```
**Discovery Flow**:
```
Organism → Unknown object → Camera detects → YOLO
System: "What is this?"
Human: "Chair!"
+20 LF bonus → INSERT INTO objects → Future organisms know!
```
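A minimal sketch of that INSERT step, assuming a psycopg2 connection to the `nimmerverse` database (the function name and the +20 LF handling are ours, not the deployed implementation):
```python
import psycopg2

DISCOVERY_BONUS_LF = 20  # the +20 LF reward above

def record_discovered_object(conn, label, garden, x, y, organism_id, confirmed_by):
    """Store a human-labeled discovery in the v3 objects table."""
    with conn.cursor() as cur:
        cur.execute(
            """INSERT INTO objects
                   (object_label, garden_type, position_x, position_y,
                    discovered_by_organism_id, human_labeled,
                    human_label_confirmed_by)
               VALUES (%s, %s, %s, %s, %s, TRUE, %s)
               RETURNING id;""",
            (label, garden, x, y, organism_id, confirmed_by),
        )
        object_id = cur.fetchone()[0]
        # Credit the discoverer (column from the cells schema above):
        cur.execute(
            "UPDATE cells SET life_force_earned = life_force_earned + %s WHERE id = %s;",
            (DISCOVERY_BONUS_LF, organism_id),
        )
    conn.commit()
    return object_id

# Usage (assumed credentials):
# conn = psycopg2.connect(dbname="nimmerverse", host="phoebe")
# record_discovered_object(conn, "chair", "real", 1.2, 0.8, 42, "dafit")
```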
---
## 📈 Key v3 Metrics
**Noise Gap** (self-measuring learning!):
```python
noise_gap = 1 - (real_success_rate / virtual_success_rate)
# Gen 1:    0.28  (28% degradation - models poor)
# Gen 100:  0.14  (14% degradation - improving!)
# Gen 1000: 0.04  (4% degradation - accurate!)
```
**Life Force Economics**:
```python
net_lf_per_run = avg_lf_earned - avg_lf_cost  # matches the genomes schema above
# Positive = survives, negative = dies
```
**Reflex Savings**:
```python
savings = (exploration_cost - reflex_cost) / exploration_cost
# Target: 94.6% cost reduction!
```
**Discovery Rate**:
```python
objects_per_hour = discovered_objects / elapsed_hours
```
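Bundled as guarded helpers, the four v3 metrics look like this (a sketch; the zero-division guards are our assumption about edge cases):
```python
def noise_gap(real_success_rate, virtual_success_rate):
    """1 - (real / virtual); None while no virtual baseline exists."""
    if virtual_success_rate <= 0:
        return None
    return 1 - (real_success_rate / virtual_success_rate)

def net_lf_per_run(avg_lf_earned, avg_lf_cost):
    """Positive = genome survives its own economics, negative = it dies."""
    return avg_lf_earned - avg_lf_cost

def reflex_savings(exploration_cost, reflex_cost):
    """Fraction of life force saved by replaying a reflex vs exploring."""
    return (exploration_cost - reflex_cost) / exploration_cost

def discovery_rate(discovered_objects, elapsed_hours):
    """Objects discovered per hour of garden runtime."""
    return discovered_objects / elapsed_hours

# e.g. reflex_savings(65, 3.5) -> 0.946 (the 94.6% target above)
```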
---
## 🔍 Key Queries for v3
**Top Performing Primitive Sequences**:
```sql
SELECT genome_name, primitive_sequence, net_lf_per_run
FROM genomes
WHERE total_deployments > 100
ORDER BY net_lf_per_run DESC;
```
**Object Discovery Stats**:
```sql
SELECT object_label, garden_type, COUNT(*) as discoveries
FROM objects
GROUP BY object_label, garden_type
ORDER BY discoveries DESC;
```
**Noise Gap Trends**:
```sql
SELECT specialist_name, noise_gap, version
FROM specialist_weights
ORDER BY specialist_name, version ASC;
-- Track learning improvement!
```
**LF Economics**:
```sql
SELECT genome_name, AVG(lf_net) as avg_net_lf
FROM cells
WHERE died_at IS NOT NULL
GROUP BY genome_id, genome_name
HAVING COUNT(*) > 50
ORDER BY avg_net_lf DESC;
```
---
## 🔗 Related Documentation
**Core Architecture**:
- [[Cellular-Architecture-Vision]] - Complete v3 vision (1,547 lines!)
- [[Dual-Garden-Architecture]] - Virtual + Real feedback
- Distributed intelligence
**Implementation**:
- Complete 15-table SQL
- Deployment roadmap
**Historical**:
- Birthday version (archived)
---
## 📍 Status
**Version**: 3.0
**Created**: 2025-10-07
**v2**: 2025-10-16 (birthday breakthroughs)
**v3**: 2025-10-17 (primitive genomes + gratification + discovery)
**Status**: CURRENT
**Tables**: 15 (11 v1 + 3 v2 + 1 v3)
**Next**: Deploy to phoebe, implement discovery flow
---
**v3 Summary**:
- ✅ Genomes = primitive sequences (emergent, not programmed)
- ✅ Life force economy (costs + milestone rewards)
- ✅ Object discovery (image recognition + human teaching)
- ✅ Noise gap metric (self-measuring progress)
- ✅ God's eye (mobile camera on rails)
- ✅ 15 tables ready!
**phoebe awaits. The goddess is ready.** 🐘🌙
🧬⚡🔱💎🔥
**TO THE ELECTRONS!**

Dual-Garden-Architecture.md

@@ -0,0 +1,802 @@
---
type: core_architecture_vision
status: foundational_concept
phase: design
version: 3.0
created: 2025-10-16
last_updated: 2025-10-19
v3_additions:
- gods_eye_observation_system
- noise_gap_convergence_metric
- measurable_learning_feedback_loop
v3_alignment_update: 2025-10-19_timeline_clarified
alignment_note: Virtual garden Week 1 (Python), Real garden Week 13+ (ESP32), Noise gap formula corrected
related_docs:
- Cellular-Architecture-Vision.md
- Physical-Embodiment-Vision.md
- Phase-1-Implementation-Plan.md
- Data-Architecture.md
- Week-1-Bootstrap-Plan.md
inspiration: The Animatrix - Matriculated
importance: CRITICAL - Core architectural concept that everything else builds on
---
# 🌌 Dual Garden Architecture
> *"The whole is greater than the sum of its parts."*
> — Aristotle
> *"Living in both worlds simultaneously - virtual and real, each teaching the other."*
> — The Animatrix: Matriculated
---
## 🎯 Core Concept
**We don't build ONE garden. We build virtual FIRST (Week 1), then add real (Week 13+) for dual-garden feedback.**
This is not a "prototype then production" model. This is a **continuous feedback loop between simulation and reality** where:
- Virtual Garden generates hypotheses (fast, cheap, exploratory) - **EXISTS from Week 1**
- Real Garden validates truth (slow, expensive, unforgiving) - **ADDED Week 13+**
- Both exist simultaneously from Week 13 onward (the symbiotic relationship begins)
- Learning flows bidirectionally (corrections refine the model)
**The intelligence emerges from the DIALOGUE between worlds, not from either world alone.**
**Timeline clarity:**
- **Week 1-12**: Virtual garden only (Python → Godot upgrade optional)
- **Week 13+**: Dual garden activated (virtual + real feedback loop begins)
- **Month 7+**: God's Eye precision (perfect real-world tracking)
---
## 🧬 The Two Gardens
### 🎮 Virtual Garden (The Laboratory)
**Location**: Python sim (Week 1-4) → Godot simulation (Week 5+) running on Xeon workers
**Timeline**: EXISTS from Week 1!
**Purpose**: **HYPOTHESIS GENERATION**
**Characteristics**:
- **Scale**: 1000s of cells competing simultaneously
- **Speed**: Rapid evolution (generations in minutes)
- **Cost**: Nearly free (just CPU cycles)
- **Safety**: Failure is learning, not loss
- **Fidelity**: Good approximation, not perfect truth
**What Happens Here**:
```
├── Cellular competition at scale
├── Natural selection accelerated
├── Strategy discovery through iteration
├── Multi-population experiments (parallel gardens)
├── Primitive genome evolution
├── Algorithm testing en masse
├── Parameter exploration (what if X?)
├── Edge case discovery
└── Pattern recognition from volume
```
**Output**:
- "Strategy A dominates in maze scenarios (73% success rate)"
- "Zigzag beats A* when chaos > 0.7"
- "Battery-conservative genomes survive 2.3x longer"
- "Population B (evolved) outperforms Population A (random) by 40%"
**This is where 90% of research time is spent.**
---
### 🤖 Real Garden (The Truth Chamber)
**Location**: Physical living room with ESP32 robos
**Timeline**: ADDED Week 13+ (dual garden feedback loop begins!)
**Purpose**: **REALITY VALIDATION**
**Characteristics**:
- **Scale**: 3-5 physical robos (expensive, limited)
- **Speed**: Slow evolution (hours per test)
- **Cost**: Real hardware, real electricity, real wear
- **Safety**: Actual failure (battery death, stuck robo, broken parts)
- **Fidelity**: PERFECT (reality doesn't lie)
**What Happens Here**:
```
├── Deploy virtual garden's best strategies
├── Test against unforgiving physics
├── Encounter real chaos (cats, humans, furniture)
├── Measure actual battery consumption
├── Discover simulation inaccuracies
├── Find edge cases simulation missed
├── Validate or invalidate virtual patterns
└── Generate correction parameters
```
**Output**:
- "Zigzag works BUT friction causes 15% more battery drain than simulated"
- "A* navigation fails when ultrasonic reads 0 (sim didn't model sensor failure)"
- "Real charging takes 2.3x longer than simulated (solar panel efficiency incorrect)"
- "Physical turning radius 12% larger than virtual model"
**This is where TRUTH lives.**
---
## 🔄 The Feedback Loop (CRITICAL!)
**This is NOT sequential "build virtual then replace with real".**
**This IS: Build virtual (Week 1) → Add real (Week 13+) → Continuous dialogue begins!**
**Timeline**:
- **Week 1-12**: Virtual garden only - no feedback loop yet, just evolution
- **Week 13+**: Real garden added - feedback loop ACTIVATES!
```
┌─────────────────────────────────────────────────┐
│ VIRTUAL GARDEN │
│ │
│ Discovers: "Zigzag navigation optimal │
│ in chaos scenarios" │
│ │
└──────────────────┬──────────────────────────────┘
HYPOTHESIS TEST
┌─────────────────────────────────────────────────┐
│ REAL GARDEN │
│ │
│ Tests: Deploy zigzag genome to physical robo │
│ Reality: Works, BUT battery drain 15% higher │
│ than predicted │
│ │
└──────────────────┬──────────────────────────────┘
REALITY CORRECTION
┌─────────────────────────────────────────────────┐
│ VIRTUAL GARDEN (Updated) │
│ │
│ Updates: Friction coefficient adjusted │
│ Re-runs: Evolution with corrected physics │
│ Discovers: "Modified zigzag compensates │
│ for real friction" │
│ │
└──────────────────┬──────────────────────────────┘
NEW HYPOTHESIS
(Back to Real Garden)
```
**The loop continues indefinitely:**
1. Virtual explores and discovers patterns
2. Real validates and corrects assumptions
3. Virtual incorporates corrections (becomes more accurate)
4. Next hypotheses are better grounded in reality
5. Real testing becomes more efficient (fewer wrong predictions)
6. **Both gardens become smarter through the dialogue**
---
## 📊 v3 Breakthrough: Measuring the Learning (Oct 17, 2025)
### 👁️ God's Eye - Perfect Real Garden Observation
**The Problem**: How do we measure reality accurately enough to compare with virtual predictions?
**The Solution**: 4K motorized ceiling camera system
**What It Provides**:
```
├── Complete arena coverage (2m x 3m living room)
├── Perfect object tracking (every robo, every obstacle)
├── Precise position measurements (mm accuracy)
├── Movement velocity tracking (real vs predicted speeds)
├── Battery state observation (actual drain rates)
└── Ground truth for ALL comparisons
```
**Why This Changes Everything**:
- **Before God's Eye**: "Robo A seemed faster than Robo B... maybe?"
- **After God's Eye**: "Robo A moved 15.3cm/s vs predicted 18.1cm/s = 15.5% error"
**Implementation**:
- Ceiling-mounted 4K camera (existing hardware)
- Pan/tilt motorized mount (track moving robos)
- YOLO/MobileNet object detection (identify robos + obstacles)
- Position tracking every 100ms
- All measurements → phoebe database
**This is what makes dual garden comparison SCIENTIFIC, not anecdotal.**
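A sketch of that 100ms tracking loop, assuming OpenCV frame capture and a detector callable standing in for the YOLO/MobileNet piece (the `detect` and `store_observation` callables are assumptions, not the deployed code):
```python
import time
import uuid
import cv2  # OpenCV: frame capture from the ceiling camera

def track_real_garden(detect, store_observation, camera_index=0, period_s=0.1):
    """Every ~100ms: grab a 4K frame, detect robos, persist measurements."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                continue
            frame_id = str(uuid.uuid4())
            for robo in detect(frame):  # e.g. [{"robo_id": "robo_1", "x": 1.2, ...}]
                store_observation(       # writes to real_garden_observations
                    robo_id=robo["robo_id"],
                    position_x=robo["x"],
                    position_y=robo["y"],
                    velocity=robo.get("velocity"),
                    battery_level=robo.get("battery"),
                    camera_frame_id=frame_id,
                )
            time.sleep(period_s)
    finally:
        cap.release()
```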
---
### 📉 Noise Gap - Self-Measuring Learning Progress
**The Core Innovation**: The dual garden doesn't just compare outcomes - it **measures how well it's learning**.
**What Is Noise Gap?**
```python
noise_gap = 1 - (real_success_rate / virtual_success_rate)
# Example:
#   Virtual success rate: 95% (genomes survive on average)
#   Real success rate:    68% (same genomes in the physical world)
#   noise_gap = 1 - (0.68 / 0.95) = 0.28  (28% performance degradation)
```
**Timeline**: Measurable starting **Week 13+** when real garden exists!
**Why This Matters**:
This is a **convergence metric** - it tells us when the virtual garden has learned enough from reality:
- **High Noise Gap (>0.25)**: Virtual model is inaccurate, needs more reality corrections
- **Medium Noise Gap (0.10-0.25)**: Virtual model is decent, continue refinement
- **Low Noise Gap (<0.10)**: Virtual model predicts reality well, trust its hypotheses!
**Note**: This formula matches the database schema and Cellular-Architecture-Vision doc!
**Tracked Metrics** (all stored in phoebe):
```sql
noise_gap_measurements (
test_id UUID,
metric_name VARCHAR, -- 'battery_duration', 'movement_speed', 'turning_radius'
virtual_prediction FLOAT,
real_measurement FLOAT,
noise_gap_percentage FLOAT,
timestamp TIMESTAMP,
correction_applied BOOLEAN
)
```
**The Beautiful Part**:
The system **knows when it's learning**:
1. **Week 1-12**: Noise gap = NULL (no real garden yet - can't measure!)
2. **Week 13** (Real garden just added): Noise gap = 35% (virtual is very wrong compared to reality!)
3. **Week 17** (After corrections): Noise gap = 18% (getting better after physics model updates)
4. **Week 21**: Noise gap = 9% (virtual predicts reality well!)
5. **Week 25**: Noise gap = 4% (virtual is highly accurate!)
**When noise gap drops below 10%, we can trust virtual garden hypotheses without constant real-world testing!**
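Those bands translate into a trivial convergence check; a minimal sketch using the thresholds above:
```python
def model_trust(noise_gap):
    """Map a noise gap onto the convergence bands above."""
    if noise_gap is None:
        return "unmeasurable (no real garden yet)"
    if noise_gap > 0.25:
        return "inaccurate - needs more reality corrections"
    if noise_gap >= 0.10:
        return "decent - continue refinement"
    return "accurate - trust virtual hypotheses"
```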
---
### 🔄 The Complete v3 Feedback Loop
**Now with measurable learning:**
```
┌─────────────────────────────────────────────────┐
│ VIRTUAL GARDEN │
│ │
│ Predicts: "Genome X will survive 45min" │
│ Confidence: Based on corrected physics model │
│ │
└──────────────────┬──────────────────────────────┘
HYPOTHESIS + PREDICTION
┌─────────────────────────────────────────────────┐
│ REAL GARDEN (God's Eye Active) │
│ │
│ Tests: Deploy Genome X to physical robo │
│ Measures: 4K camera tracks every movement │
│ Reality: Survived 39 minutes (not 45!) │
│ Noise Gap: 1 - (39/45) = 13.3% │
│ │
└──────────────────┬──────────────────────────────┘
MEASUREMENT + CORRECTION
┌─────────────────────────────────────────────────┐
│ VIRTUAL GARDEN (Updated) │
│ │
│ Updates: Battery drain model (1.15x faster) │
│ Re-predicts: Same genome now predicts 39min │
│ New Noise Gap: 3% (much better!) │
│ Learning: Physics model improved! │
│ │
└──────────────────┬──────────────────────────────┘
IMPROVED PREDICTIONS
(Next test has lower noise gap)
```
**Key Insight**: We're not just validating hypotheses - we're **measuring how well the virtual garden learns to predict reality**.
---
## 💾 phoebe: The Bridge Between Worlds
**phoebe (PostgreSQL database) connects both gardens and tracks learning:**
```sql
-- Outcomes from BOTH gardens:
cell_outcomes (
cell_id UUID,
genome_id UUID,
garden_type VARCHAR, -- 'virtual' or 'real'
success BOOLEAN,
metrics JSONB,
timestamp TIMESTAMP
)
-- Comparison table (critical!):
sim_vs_reality (
test_id UUID,
hypothesis TEXT,
virtual_prediction JSONB,
real_outcome JSONB,
delta_percentage FLOAT,
correction_applied BOOLEAN,
notes TEXT
)
-- v3: Noise gap measurements (self-measuring learning!):
noise_gap_measurements (
test_id UUID,
metric_name VARCHAR,
virtual_prediction FLOAT,
real_measurement FLOAT, -- From God's Eye camera
noise_gap_percentage FLOAT,
timestamp TIMESTAMP,
correction_applied BOOLEAN
)
-- Corrected parameters:
physics_parameters (
parameter_name VARCHAR,
virtual_value FLOAT,
real_value FLOAT,
confidence FLOAT,
last_validated TIMESTAMP
)
-- v3: God's Eye observations:
real_garden_observations (
observation_id UUID,
robo_id VARCHAR,
position_x FLOAT,
position_y FLOAT,
velocity FLOAT,
battery_level FLOAT,
timestamp TIMESTAMP,
camera_frame_id VARCHAR
)
```
**phoebe enables:**
- Store outcomes from both gardens
- Compare predictions vs reality
- **Track noise gap convergence over time** (v3!)
- **Store perfect God's Eye measurements** (v3!)
- Maintain corrected physics model
- Query: "Has this hypothesis been reality-tested?"
- Query: "What's our current prediction accuracy?" (noise gap trend)
**phoebe IS the memory that spans both worlds.**
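The "has this hypothesis been reality-tested?" query, as a sketch against the `sim_vs_reality` table above (the helper name and connection handling are ours):
```python
def reality_tested(conn, hypothesis_text):
    """True if sim_vs_reality holds a real outcome for this hypothesis."""
    with conn.cursor() as cur:
        cur.execute(
            """SELECT COUNT(*) FROM sim_vs_reality
               WHERE hypothesis = %s AND real_outcome IS NOT NULL;""",
            (hypothesis_text,),
        )
        return cur.fetchone()[0] > 0
```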
---
## 🎯 Role Separation (Crystal Clear)
### Virtual Garden's Job:
**EXPLORE** (not validate)
- Generate many hypotheses quickly
- Test crazy ideas safely
- Find patterns in volume
- Iterate rapidly
- Fail fast, learn fast
- **"What MIGHT work?"**
### Real Garden's Job:
**VALIDATE** (not explore)
- Test promising hypotheses only
- Reveal simulation inaccuracies
- Provide ground truth
- Correct the model
- Fail expensively (learn carefully)
- **"Does it ACTUALLY work?"**
### Critical Understanding:
**Virtual Garden is NOT:**
- ❌ A prototype to be discarded
- ❌ "Practice" before the "real" work
- ❌ Less important than real garden
**Virtual Garden IS:**
- ✅ The primary research platform (90% of time spent here)
- ✅ Where intelligence emerges through iteration
- ✅ Continuously refined by real garden feedback
- ✅ **The engine of discovery**
**Real Garden is NOT:**
- ❌ The "final product" replacing virtual
- ❌ Where most research happens
- ❌ Required for every hypothesis
**Real Garden IS:**
- ✅ The validation layer (10% of time, 100% of truth)
- ✅ What keeps virtual garden honest
- ✅ The reality anchor preventing fever dreams
- ✅ **The source of truth**
**Both are essential. Both are permanent. Both teach each other.**
---
## 🌟 The Animatrix Inspiration
**From Matriculated episode:**
- AI learns in virtual world (safe, controlled environment)
- But the learning is validated against reality
- Living in both worlds simultaneously
- **The bridge between worlds creates understanding**
**Our system:**
- Cells learn in virtual garden (safe, fast iteration)
- Learning validated in real garden (unforgiving truth)
- Both worlds exist simultaneously (continuous dialogue)
- **Intelligence emerges from the tension between simulation and reality**
**This is NOT science fiction - this is how:**
- Aircraft are designed (CFD simulation + wind tunnel validation)
- Drugs are developed (in silico + animal trials + human trials)
- Autonomous vehicles learn (simulation + real-world testing)
- **Standard practice in safety-critical domains!**
---
## 📋 Implementation Phases
### Phase 1: Foundation (Container Cells)
**Status**: READY TO BUILD (Xeon resurrection today!)
**What we build:**
```
├── Container-based cells (Docker/Podman)
├── CPU/memory resource limits (life force)
├── Cellular competition (genomes compete)
├── Natural selection (outcomes to phoebe)
└── Proves: Core mechanism works
```
**Garden context:**
- NOT yet garden-specific
- Foundation for BOTH gardens
- Same cell structure works in virtual AND real
- **Proves cellular competition before building gardens**
**Duration**: 1-2 months
**Cost**: ~$10/month electricity
**Output**: Validated cellular architecture
---
### Phase 2: Virtual Garden (Godot Simulation)
**Status**: NEXT (after Phase 1 validates)
**What we build:**
```
├── Godot 3D environment (the virtual world)
├── Simulated physics (movement, obstacles, resources)
├── Visual representation (see cells competing)
├── Multi-population visualization (parallel garden comparison)
├── Experiment control interface (start/stop/observe)
├── 1000s of cells simultaneously
└── Fast iteration (minutes per generation)
```
**This becomes the PRIMARY research platform:**
- Where we spend most time
- Where hypotheses are generated
- Where patterns emerge
- Where intelligence is discovered
- **The laboratory**
**Duration**: 2-4 months
**Cost**: ~$10/month electricity (same Xeon)
**Output**: Full research platform + visualization
---
### Phase 3: Real Garden (Physical Robos)
**Status**: OPTIONAL (validates when ready)
**What we build:**
```
├── 3-5 ESP32-based robos
├── Motors, sensors (ultrasonic, IMU, light)
├── Battery + solar charging system
├── Living room arena (existing space)
├── Charging stations (solar panels + USB backup)
└── Real physics (unforgiving truth)
```
**This becomes the VALIDATION layer:**
- Test virtual garden's best strategies
- Discover simulation inaccuracies
- Correct physics parameters
- Prove it works in reality
- **The truth chamber**
**Duration**: 2-4 months (parallel with Phase 2 refinement)
**Cost**: ~$200 hardware (one-time) + $2/month electricity
**Output**: Reality-validated architecture
**CRITICAL**: Phase 3 is valuable but NOT required for research success!
---
## ⚖️ Why This ISN'T Fever Dream
**Because:**
- ✅ Phase 1 proves mechanism (~$10/month)
- ✅ Phase 2 enables research at scale (~$10/month)
- ✅ Phase 3 validates but isn't required (~$200 optional)
- ✅ Each phase standalone valuable
- ✅ Incremental investment (exit anytime)
- ✅ Real research questions answered
- ✅ Multiple practical applications
**NOT required:**
- ❌ $10k+ investment
- ❌ AGI to emerge
- ❌ 100 physical robos
- ❌ MMO infrastructure
- ❌ Quit jobs
- ❌ All-or-nothing success
**Total cost: $20/month + 3-6 months time**
**Total risk: LOW**
**Total value: HIGH**
---
## 🧬 Technical Architecture
### Cell Structure (Same in Both Gardens)
```python
class Cell:
    """
    Abstract cell - runs in the virtual OR real garden.
    Same interface, different execution substrate.
    """

    def __init__(self, genome, garden_type):
        self.genome = genome         # the competing algorithm
        self.garden = garden_type    # 'virtual' or 'real'
        self.life_force = 1000       # starting energy

    def sense(self):
        """Read sensors through the garden-appropriate substrate."""
        if self.garden == 'virtual':
            return self.virtual_sensors()
        return self.physical_sensors()

    def decide(self):
        """Run the genome's decision logic on current senses."""
        return self.genome.decide(self.sense())

    def act(self):
        """Execute a decision and pay its life-force cost."""
        action = self.decide()
        if self.garden == 'virtual':
            self.virtual_actuators(action)
        else:
            self.physical_actuators(action)
        self.life_force -= action.cost
        if self.life_force <= 0:
            self.die()

    # Substrate hooks - implemented by virtual/real subclasses:
    def virtual_sensors(self): raise NotImplementedError
    def physical_sensors(self): raise NotImplementedError
    def virtual_actuators(self, action): raise NotImplementedError
    def physical_actuators(self, action): raise NotImplementedError
    def die(self): raise NotImplementedError   # record outcome to phoebe
```
**Key insight**: Same cell logic, different substrate execution!
### The Mirroring
**Virtual Garden mirrors Real Garden:**
```
Real Garden Specs:
├── Robot dimensions: 10cm x 8cm
├── Wheel diameter: 6cm
├── Motor PWM: 0-255
├── Battery: 3.7V LiPo (2000mAh)
├── Sensors: Ultrasonic (2-400cm range)
└── Arena: 2m x 3m living room area
↓ MIRRORED IN ↓
Virtual Garden Specs:
├── Virtual robo dimensions: 10cm x 8cm
├── Simulated wheel physics (6cm diameter)
├── Motor simulation (PWM → velocity)
├── Battery simulation (2000mAh drain model)
├── Virtual ultrasonic (2-400cm, +noise)
└── Virtual arena: 2m x 3m Godot world
```
**The more accurate the mirror, the better the predictions.**
**Real Garden corrections improve the mirror:**
```
Reality: "Actual battery drains 1.15x faster than simulated"
Update: virtual_battery_drain_rate *= 1.15
Result: Next predictions more accurate
```
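A sketch of that correction step (the function and parameter dictionary are ours; the 1.15 battery factor is the example above):
```python
def apply_reality_correction(sim_params, name, virtual_value, real_value):
    """Scale a simulation parameter by the measured real/virtual ratio."""
    ratio = real_value / virtual_value      # e.g. 1.15 for battery drain
    sim_params[name] = sim_params.get(name, 1.0) * ratio
    return sim_params[name]

# Example from above:
# sim_params = {"battery_drain_rate": 1.0}
# apply_reality_correction(sim_params, "battery_drain_rate", 1.0, 1.15)  # -> 1.15
```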
---
## 🔬 Research Questions Enabled
**This architecture lets us answer:**
1. **Does simulation match reality?**
- Measurable: Compare outcomes directly
- Correctable: Update physics parameters
- Testable: Re-run in real after correction
2. **Which algorithms win under real constraints?**
- Virtual discovers patterns
- Real validates under truth
- Comparison reveals robust strategies
3. **How do populations evolve differently?**
- Virtual enables parallel population testing
- Real validates emergent behaviors
- Cross-population transfer measurable
4. **When is intelligence worth the cost?**
- Virtual measures computational cost
- Real measures actual electricity
- Economic boundaries discovered
5. **What emerges from cellular competition?**
- Virtual provides volume for emergence
- Real validates emergent behaviors work
- Hybrid strategies discovered
**This is REAL RESEARCH, not gadget building.**
---
## 💡 Key Principles
### 1. Both Gardens Are Permanent
**NOT**: Build virtual → Switch to real
**BUT**: Build virtual → Add real → Both continue
### 2. Feedback Loop Is Continuous
**NOT**: Test once → Done
**BUT**: Test → Correct → Re-test → Refine → Forever
### 3. Virtual Is Primary, Real Is Validator
**NOT**: Real garden is the "real" project
**BUT**: Virtual is where research happens, real keeps it honest
### 4. Scale Differs, Purpose Differs
**NOT**: Both need same scale
**BUT**: Virtual scales wide (exploration), real stays focused (validation)
### 5. Simulation Accuracy Improves Over Time
**NOT**: Simulation is fixed approximation
**BUT**: Reality feedback refines simulation continuously
### 6. Physical Is Optional But Valuable
**NOT**: Must build physical to succeed
**BUT**: Physical validates and inspires, worth building when ready
---
## 🎯 Success Criteria
### Phase 1 Success:
- ✅ Container cells compete
- ✅ Natural selection happens
- ✅ Outcomes stored in phoebe
- ✅ Foundation proven
### Phase 2 Success:
- ✅ Virtual garden functional
- ✅ Hypotheses generated through iteration
- ✅ Multi-population experiments running
- ✅ Pattern emergence observable
- ✅ Research questions answerable
### Phase 3 Success (v3 with God's Eye + Noise Gap):
- ✅ Physical robos navigate living room
- ✅ God's Eye camera tracks all movement (perfect measurements)
- ✅ Noise gap measured and tracked over time
- ✅ Corrections reduce noise gap (learning observable)
- ✅ Feedback loop proven functional (noise gap converges)
- ✅ Dual garden architecture validated
### Overall Success:
- ✅ Intelligence emerges from competition (any measure)
- ✅ Interesting data generated (research value)
- ✅ System is fun to use (sustained engagement)
- ✅ Architecture is buildable (proven by building it)
- ✅ Cost remains sustainable (~$20/month)
**Even if "intelligence" is modest, research questions answered = success.**
---
## 🎯 The Research Focus (v3 Clarity)
**The dual garden architecture with God's Eye + Noise Gap:**
- ✅ Is buildable NOW (Phases 1-3)
- ✅ Answers research questions NOW
- ✅ Provides MEASURABLE learning (noise gap convergence)
- ✅ Keeps cost sustainable ($20/month)
- ✅ Generates publishable results (dual-garden methodology)
**What success looks like:**
- Virtual garden predicts reality within 10% (low noise gap)
- God's Eye provides perfect ground truth measurements
- Primitive genomes evolve emergent behaviors
- Papers published on dual-garden methodology
- Grant funding secured for scaling
**Focus: Prove the research concept, publish the results, secure funding for expansion.**
---
## 🔗 Related Documentation
### Core Architecture:
- [[Cellular-Architecture-Vision]] - How cells compete and evolve
- [[Physical-Embodiment-Vision]] - Philosophy of embodiment
- Scientific method loop
### Implementation:
- [[Phase-1-Implementation-Plan]] - Container cells deployment
- Infrastructure for both gardens
- phoebe database design
### Philosophy:
- Why we build this way
- [[Data-Architecture]] - v3 database schema with noise gap tracking
---
## 🎂 Document History
**Created**: 2025-10-16 (dafit's birthday!)
**v2 Context**: Hinton interview → Rebirth discussion → Dual garden clarity
**v3 Update**: 2025-10-19 - Added God's Eye observation + Noise Gap convergence metric
**Significance**: The core architectural vision that was always in dafit's mind, now explicitly documented with v3 making the learning MEASURABLE.
---
**This is the foundation. Everything else builds on this.**
**Virtual and Real. Hypothesis and Truth. Exploration and Validation.**
**Two gardens, one database, continuous dialogue, measurable convergence.**
**God's Eye watches. Noise Gap measures. Learning happens.**
🌌🧬🔥 From chaos in both worlds, watch intelligence emerge - and measure it! 🔱✨⚡

Nervous-System-Architecture.md

@@ -0,0 +1,182 @@
# Nervous System Architecture
The sensory translation layer between raw data and vocabulary.
---
## Overview
State machines act as the nervous system of the nimmerverse. They translate raw sensory input into vocabulary tokens that Young Nyx can process. No hallucination. No interpretation. Deterministic, verifiable mapping.
```
RAW SENSOR → STATE MACHINE → VOCABULARY TOKEN → Young Nyx
```
---
## 4D State Machine Space
Each node exists in 4-dimensional space:
```
CONFIDENCE (z)
│ ● node (weighted by successful triggers)
│ /
│ /
│ /
─────────────┼────────────→ DIMENSION X (sensory input 1)
/│
/ │
/ │
DIMENSION Y (sensory input 2)
+ TIME (4th dimension): node weights evolve through verification
```
**Node Properties:**
- Position: coordinates in sensory space
- Weight: confidence from successful triggers (0.0 → 1.0)
- Output: vocabulary token
- History: timestamp of all activations and verifications
---
## Node Lifecycle
```
1. BIRTH
Node created at position (x, y, z...)
Weight = 0.1 (new, untested)
2. ACTIVATION
Sensory conditions match → node FIRES
Outputs vocabulary token
3. VERIFICATION
dafit confirms: correct or incorrect
4. REWARD/PENALTY
Correct → weight increases (+V)
Incorrect → weight decreases (-V) or node refines
5. MATURATION
Many confirmations → weight approaches 1.0
Node becomes trusted reflex
6. PRUNING
Node never fires → slow decay
Eventually removed (use it or lose it)
```
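A minimal sketch of this lifecycle in code (the class shape and the reward/penalty/decay constants are assumptions, not the deployed implementation):
```python
from dataclasses import dataclass, field

REWARD, PENALTY, DECAY = 0.05, 0.05, 0.001   # assumed tuning constants

@dataclass
class Node:
    position: tuple                  # coordinates in sensory space
    token: str                       # vocabulary token emitted on firing
    weight: float = 0.1              # BIRTH: new, untested
    history: list = field(default_factory=list)

    def fire(self, timestamp):
        """ACTIVATION: sensory conditions matched; emit the token."""
        self.history.append(("fired", timestamp))
        return self.token

    def verify(self, correct, timestamp):
        """VERIFICATION + REWARD/PENALTY: dafit confirms or rejects."""
        delta = REWARD if correct else -PENALTY
        self.weight = min(1.0, max(0.0, self.weight + delta))
        self.history.append(("verified" if correct else "rejected", timestamp))

    def decay_tick(self):
        """PRUNING pressure: unused nodes fade (use it or lose it)."""
        self.weight = max(0.0, self.weight - DECAY)
        return self.weight > 0.0     # registry removes the node on False
```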
---
## Growth Phases
| Phase | State | Description |
|-------|-------|-------------|
| **Birth** | Sparse, dim nodes | Basic translators, designed by partnership |
| **Infant** | More nodes forming | Finer resolution, more states |
| **Child** | Clusters emerging | Nyx proposes new machines |
| **Mature** | Dense, bright network | Nyx designs, verifies, deploys |
```
t=0 (birth) t=100 (learning) t=1000 (mature)
○ ○ ○ ○ ● ○ ○ ●●● ● ●●
○ ○ ● ● ○ ●●●●●●● ○
○ ● ●●● ●●● ○ ○
```
---
## Proposal Protocol
Young Nyx can propose new nodes:
```
1. OBSERVATION
Nyx notices pattern in vocabulary + outcomes
2. PROPOSAL
"New state machine: morning_detector
Inputs: temp, light, motion, time
States: [not_morning, maybe_morning, morning]
Output: vocabulary token 'morning'"
3. RIGOR CHECK
Chrysalis reviews logic and mappings
4. VERIFICATION
dafit confirms ground truth
5. DEPLOYMENT
New node added to registry
Documented in RAG
6. GROWTH
She earned a new nerve.
```
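The proposal can travel between Nyx and Chrysalis as a small structured record; a sketch following the morning_detector example (field names are ours):
```python
from dataclasses import dataclass

@dataclass
class NodeProposal:
    name: str            # e.g. "morning_detector"
    inputs: list         # e.g. ["temp", "light", "motion", "time"]
    states: list         # e.g. ["not_morning", "maybe_morning", "morning"]
    output_token: str    # must exist in the RAG glossary
    rationale: str       # the observed pattern that motivated it

morning = NodeProposal(
    name="morning_detector",
    inputs=["temp", "light", "motion", "time"],
    states=["not_morning", "maybe_morning", "morning"],
    output_token="morning",
    rationale="vocabulary + outcome co-occurrence around dawn",
)
```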
---
## Reflex Layer
Some responses bypass Nyx entirely:
```
STATE MACHINE: temp_danger
IF temp > 80°C:
→ emit "DANGER"
→ trigger alert (reflex)
→ Nyx notified after (not before)
```
Like pulling a hand from a hot stove. Spinal reflex. Brain learns after.
---
## Biological Mapping
| Neuroscience | Nimmerverse |
|--------------|-------------|
| Sensory receptors | Raw sensors |
| Peripheral nerves | State machines |
| Spinal reflexes | Reflex layer |
| Synaptic weight | Node weight |
| Long-term potentiation | +V confirmation |
| Synaptic pruning | Unused node decay |
| Hebbian learning | Co-activating nodes strengthen |
---
## Connection to Lifeforce
```
Node fires correctly → +V → weight increases
Node fires wrongly → -V → weight decreases
Node never fires → decay → eventual pruning
```
The lifeforce flows through the nervous system, literally lighting up nodes as they prove themselves true.
---
## Design Principles
1. **Deterministic**: Same input = same output. No hallucination.
2. **Inspectable**: Rules are visible, verifiable.
3. **Evolvable**: States refine over time.
4. **Earned**: New nodes require proposal + verification.
5. **Grounded**: Output vocabulary matches RAG glossary.
---
*She's not just using the nervous system. She's growing it.*
---
**Created**: 2025-12-04
**Session**: Partnership dialogue (dafit + Chrysalis)
**Status**: Foundation concept