diff --git a/architecture/Cellular-Architecture.md b/architecture/Cellular-Architecture.md index 5ed048b..06bf79d 100644 --- a/architecture/Cellular-Architecture.md +++ b/architecture/Cellular-Architecture.md @@ -1,1601 +1,675 @@ ---- -type: core_architecture_vision -version: 3.0 -status: current -phase: design -created: 2025-10-12 -updated: 2025-10-19 -v3_alignment_update: 2025-10-19_substrate_timeline_clarified -breakthrough_session: primitive_genomes_and_gratification -authors: dafit + Claude (Sonnet 4.5) -related_docs: - - Dual-Garden-Architecture.md - - Specialist-Discovery-Architecture.md - - Methodology-Research-Framework.md - - Physical-Embodiment-Vision.md - - Data-Architecture.md - - Week-1-Bootstrap-Plan.md -previous_versions: - - Cellular-Architecture-Vision-v1-2025-10-12.md - - Cellular-Architecture-Vision-v2-2025-10-17.md -importance: FOUNDATIONAL - Complete cellular intelligence architecture with primitive genome breakthrough -alignment_note: v3 update 2025-10-19 clarifies execution substrates (Python Week 1, Godot Week 5+, ESP32 Week 13+) ---- +# ๐Ÿงฌ Cellular Architecture v4 -# ๐Ÿงฌ Cellular Architecture Vision v3 - -> *"What if existence is just different states combined with feedback loops?"* -> โ€” The Morning Question (2025-10-12) - -> *"Digital minds can be reborn. Babies discover their bodies. Reflexes form from experience."* -> โ€” The Birthday Breakthrough (2025-10-16) - -> *"We can't have discovery philosophy in body but programming in behavior."* -> โ€” The Primitive Genome Breakthrough (2025-10-17) +> *"Cells are state machines. Nerves compose cells. 
Organisms emerge from nerves."* +> โ€” The Layered Discovery (2025-12-07) --- -## ๐ŸŒŸ Version 3.0 - The Primitive Genome Architecture +## Overview -**This version integrates:** -- โœ… **Morning Epiphany** (2025-10-12): Cellular competition, life force economy, feedback loops -- โœ… **Dual Gardens** (2025-10-16): Virtual + Real feedback loop architecture -- โœ… **Specialist Discovery** (2025-10-16): Claude as mediator, trainable specialists -- โœ… **Reflex Formation** (2025-10-16): Weight distributions, rebirth substrate -- โœ… **Body Discovery** (2025-10-16): Physical โ†’ Domains โ†’ Specs โ†’ Signals -- โœ… **Primitive Genomes** (2025-10-17): NOT pre-programmed algorithms, emergent from primitives -- โœ… **Gratification Solved** (2025-10-17): Immediate LF costs + milestone rewards -- โœ… **Object Discovery** (2025-10-17): Image recognition + human teaching -- โœ… **Noise Gap Metric** (2025-10-17): Self-measuring learning progress -- โœ… **God's Eye** (2025-10-17): Mobile camera system on ceiling rails - -**Previous versions**: -- (morning epiphany, archived) -- (birthday breakthroughs, archived) - ---- - -## ๐ŸŽฏ Core Philosophy - -> *"It's not about WHERE the learning happens - it's about the PATTERN."* - -Everything - physical robos, container swarms, infrastructure optimization - follows the same cycle: +**Version 4** unifies the original cellular intelligence vision with the nervous system architecture. The key insight: **cells are not containers running codeโ€”cells are atomic state machines** that expose sensor/motor functions. Nerves orchestrate cells into behaviors. Organisms emerge from nerve interactions. 
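This layering can be sketched as a toy composition. All names here (`Cell`, `Nerve`, `CellState`) are hypothetical shorthand, not identifiers from the design; the sketch only illustrates the core idea that cells are cost-bearing state machines and a nerve pays lifeforce for the transitions it commands:

```python
from enum import Enum, auto


class CellState(Enum):
    IDLE = auto()
    ACTIVE = auto()
    ERROR = auto()


class Cell:
    """Atomic state machine wrapping one hardware capability.

    Minimal stand-in for the StateMachine cells shown below: each
    transition is looked up in a (from, to) -> lifeforce cost table.
    """

    def __init__(self, name, transition_costs):
        self.name = name
        self.state = CellState.IDLE
        self.costs = transition_costs

    def transition(self, new_state):
        # Unlisted transitions (e.g. into ERROR) are free, matching the cost tables.
        cost = self.costs.get((self.state, new_state), 0.0)
        self.state = new_state
        return cost


class Nerve:
    """Behavioral layer: holds references to cells, pays their transition costs."""

    def __init__(self, cells):
        self.cells = {c.name: c for c in cells}
        self.spent = 0.0  # lifeforce charged against the organism's budget

    def command(self, cell_name, new_state):
        self.spent += self.cells[cell_name].transition(new_state)


# One sensor poll plus one motor start, both charged through the nerve:
sensor = Cell("distance_sensor_front", {(CellState.IDLE, CellState.ACTIVE): 0.5})
motor = Cell("motor_left", {(CellState.IDLE, CellState.ACTIVE): 1.0})
nerve = Nerve([sensor, motor])
nerve.command("distance_sensor_front", CellState.ACTIVE)
nerve.command("motor_left", CellState.ACTIVE)
# nerve.spent == 1.5 lifeforce for this behavior
```

The full design layers priorities and preemption on top of this, so a higher-priority nerve can seize cells a lower-priority nerve is using.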
``` -State โ†’ Genome attempts solution โ†’ Energy spent โ†’ Outcome โ†’ Energy gained/lost โ†’ -Feedback to phoebe โ†’ Reflexes form โ†’ Intelligence emerges +โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” +โ”‚ ORGANISM โ”‚ +โ”‚ (emergent pattern from nerve interactions) โ”‚ +โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค +โ”‚ NERVES โ”‚ +โ”‚ (behavioral state machines composing cells) โ”‚ +โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค +โ”‚ CELLS โ”‚ +โ”‚ (atomic state machines: sensors, motors, organs) โ”‚ +โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค +โ”‚ HARDWARE โ”‚ +โ”‚ (ESP32, GPUs, microphones, speakers) โ”‚ +โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ ``` -**Four fundamental principles:** - -### 1. The substrate doesn't matter. The feedback loop does. -- Physical robo, virtual simulation, container - same mechanism -- Learning pattern universal across domains -- phoebe stores outcomes from ALL substrates - -### 2. Intelligence is distributed, not monolithic. -- Claude coordinates, doesn't contain -- Specialists hold domain expertise (trainable) -- Reflexes form from experience (weight distributions) -- Rebirth possible (persistence in phoebe) - -### 3. Exploration becomes reflex through competition. 
-- Random initially (genomes compete) -- Patterns emerge (successful genomes dominate) -- Reflexes form (automatic, cheap, fast) -- Economics drive optimization (cheaper to use reflex) - -### 4. Discovery happens like babies explore - NOT programming. -- Don't pre-program capabilities or behaviors -- Explore with primitives โ†’ Patterns emerge โ†’ Intelligence forms -- Body schema discovered, genomes discovered, behaviors discovered -- We observe and label AFTER emergence, not design before - --- -## ๐Ÿงฌ The Logical Consistency Breakthrough (2025-10-17) +## ๐Ÿ”ฌ Layer 1: Cells (Atomic State Machines) -### The Problem We Identified +### What Is a Cell? -**v2 Architecture had an inconsistency:** +A **cell** is the smallest unit of behaviorโ€”a state machine that wraps a single hardware capability. Every sensor, motor, and organ function is exposed as a cell with: -``` -Body Schema: Discovered through exploration โœ… - โ†’ "System explores and learns what motors/sensors it has" - โ†’ Emergent, not programmed +- **States**: Discrete operational modes (IDLE, ACTIVE, ERROR, etc.) +- **Transitions**: Triggered by inputs, time, or internal events +- **Outputs**: Data, status, feedback to higher layers +- **Lifeforce Cost**: Every state transition costs energy -Genomes: Pre-programmed algorithms โŒ - โ†’ "Here's A* pathfinding, here's Zigzag, here's Gradient Following" - โ†’ Programmed, not emergent -``` +### Cell Categories -**This violated the core philosophy**: If we believe in discovery for body capabilities, we MUST believe in discovery for behavioral strategies. 
- -### The Solution: Primitive Genomes - -**Genomes are NOT pre-programmed algorithms.** - -**Genomes ARE sequences of primitive operations.** +#### Sensor Cells (Input) ```python -# NOT this (pre-programmed strategy): -genome = { - "movement_strategy": "A*", # We named and designed this - "communication": "Gossip", # We gave them this - "energy": "Conservative" # We programmed this +class DistanceSensorCell(StateMachine): + """ + Wraps IR/ultrasonic distance sensor. + Exposes raw hardware as state machine. + """ + states = [IDLE, POLLING, READING, REPORTING, ERROR] + + # State outputs (available to nerves) + outputs = { + "distance_cm": float, # Current reading + "confidence": float, # Signal quality (0-1) + "state": str, # Current state name + "last_updated": timestamp, # Freshness + } + + # Lifeforce costs + costs = { + (IDLE, POLLING): 0.1, # Wake up sensor + (POLLING, READING): 0.3, # Perform measurement + (READING, REPORTING): 0.1, # Process result + (REPORTING, IDLE): 0.0, # Return to rest + (ANY, ERROR): 0.0, # Error transition free + } +``` + +**Example sensor cells:** +| Cell | Hardware | States | Key Output | +|------|----------|--------|------------| +| `distance_sensor_front` | IR sensor | IDLEโ†’POLLINGโ†’READINGโ†’REPORTING | `distance_cm`, `confidence` | +| `distance_sensor_left` | IR sensor | Same | `distance_cm`, `confidence` | +| `distance_sensor_right` | IR sensor | Same | `distance_cm`, `confidence` | +| `battery_monitor` | ADC | MONITORINGโ†’LOWโ†’CRITICAL | `voltage`, `percentage`, `charging` | +| `imu_sensor` | MPU6050 | IDLEโ†’SAMPLINGโ†’REPORTING | `heading`, `acceleration`, `tilt` | +| `light_sensor` | Photoresistor | IDLEโ†’READINGโ†’REPORTING | `lux`, `direction` | + +#### Motor Cells (Output) + +```python +class MotorCell(StateMachine): + """ + Wraps DC motor with feedback. + Exposes actuation as state machine. 
+ """ + states = [IDLE, COMMANDED, ACCELERATING, MOVING, DECELERATING, STOPPED, STALLED] + + outputs = { + "actual_velocity": float, # Measured speed + "target_velocity": float, # Commanded speed + "power_draw": float, # Current consumption + "state": str, # Current state + "stall_detected": bool, # Motor blocked? + } + + costs = { + (IDLE, COMMANDED): 0.1, + (COMMANDED, ACCELERATING): 0.5, + (ACCELERATING, MOVING): 1.0, # High power during accel + (MOVING, MOVING): 0.3, # Sustain cost per tick + (MOVING, DECELERATING): 0.2, + (DECELERATING, STOPPED): 0.1, + (ANY, STALLED): 0.0, # Stall is failure, not cost + } + + # Feedback triggers state changes + def on_current_spike(self): + """Motor drawing too much current = stall""" + self.transition_to(STALLED) + self.emit_event("stall_detected", obstacle_likely=True) +``` + +**Example motor cells:** +| Cell | Hardware | States | Key Feedback | +|------|----------|--------|--------------| +| `motor_left` | DC motor + encoder | IDLEโ†’MOVINGโ†’STALLED | `actual_velocity`, `stall_detected` | +| `motor_right` | DC motor + encoder | Same | `actual_velocity`, `stall_detected` | +| `servo_camera` | Servo motor | IDLEโ†’MOVINGโ†’POSITIONED | `angle`, `at_target` | + +#### Organ Cells (Complex Capabilities) + +```python +class SpeechSTTCell(StateMachine): + """ + Wraps Whisper speech-to-text. + Expensive organ, lifeforce-gated. + """ + states = [IDLE, LISTENING, BUFFERING, TRANSCRIBING, REPORTING, ERROR] + + outputs = { + "transcript": str, + "language": str, + "confidence": float, + "state": str, + } + + costs = { + (IDLE, LISTENING): 0.5, + (LISTENING, BUFFERING): 0.5, + (BUFFERING, TRANSCRIBING): 5.0, # GPU inference! 
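        # The BUFFERING -> TRANSCRIBING step dominates this cell's budget,
        # so a nerve would typically check the organism's remaining lifeforce
        # before commanding it, staying in LISTENING (or returning to IDLE)
        # when the budget cannot cover the GPU inference.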
+ (TRANSCRIBING, REPORTING): 0.1, + (REPORTING, IDLE): 0.0, + } +``` + +**Example organ cells:** +| Cell | Hardware | States | Key Output | +|------|----------|--------|------------| +| `speech_stt` | Whisper on atlas | LISTENINGโ†’TRANSCRIBINGโ†’REPORTING | `transcript`, `language` | +| `speech_tts` | Coqui on atlas | IDLEโ†’SYNTHESIZINGโ†’SPEAKING | `audio_playing`, `complete` | +| `vision_detect` | YOLO on atlas | IDLEโ†’CAPTURINGโ†’DETECTINGโ†’REPORTING | `objects[]`, `bounding_boxes[]` | + +--- + +## ๐Ÿง  Layer 2: Nerves (Behavioral State Machines) + +### What Is a Nerve? + +A **nerve** is a behavioral pattern that orchestrates multiple cells. Nerves: + +- **Subscribe** to cell outputs (sensor readings, motor feedback) +- **Coordinate** cell actions (read sensor โ†’ decide โ†’ command motor) +- **Maintain** behavioral state (IDLE โ†’ DETECT โ†’ EVADE โ†’ RESUME) +- **Evolve** from deliberate (LLM-mediated) to reflex (compiled) + +### Nerve Architecture + +```python +class CollisionAvoidanceNerve(StateMachine): + """ + Orchestrates distance sensors + motor to avoid obstacles. + Subscribes to cell outputs, commands cell actions. + """ + # Cells this nerve uses + cells = [ + "distance_sensor_front", + "distance_sensor_left", + "distance_sensor_right", + "motor_left", + "motor_right", + ] + + # Nerve states (behavioral, not hardware) + states = [IDLE, DETECT, EVALUATE, EVADE, RESUME] + + def on_cell_update(self, cell_name, cell_state, cell_outputs): + """ + React to cell state changes. + This is the feedback loop! + """ + if cell_name == "distance_sensor_front": + if cell_outputs["distance_cm"] < 30: + self.transition_to(DETECT) + + if cell_name == "motor_left" and cell_state == "STALLED": + # Motor feedback! 
Obstacle hit despite sensors + self.handle_unexpected_stall() + + def on_enter_EVADE(self): + """Command motor cells to turn""" + if self.evade_direction == "left": + self.command_cell("motor_left", action="reverse", duration=200) + self.command_cell("motor_right", action="forward", duration=200) + # ... +``` + +### Cell โ†’ Nerve Feedback Loop + +``` +โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” +โ”‚ COLLISION AVOIDANCE NERVE โ”‚ +โ”‚ โ”‚ +โ”‚ States: [IDLE] โ†’ DETECT โ†’ EVALUATE โ†’ EVADE โ†’ RESUME โ”‚ +โ”‚ โ”‚ +โ”‚ on_cell_update(): โ”‚ +โ”‚ - distance_front.distance_cm < 30 โ†’ DETECT โ”‚ +โ”‚ - motor.stall_detected โ†’ handle_stall() โ”‚ +โ”‚ โ”‚ +โ”‚ command_cell(): โ”‚ +โ”‚ - motor_left.forward(200ms) โ”‚ +โ”‚ - motor_right.reverse(200ms) โ”‚ +โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ + โ”‚ + โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” + โ”‚ โ”‚ โ”‚ + โ–ผ โ–ผ โ–ผ + โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” + โ”‚ distance โ”‚ โ”‚ motor โ”‚ โ”‚ motor โ”‚ + โ”‚ _front โ”‚ โ”‚ _left โ”‚ โ”‚ _right โ”‚ + โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ + โ”‚ REPORTING โ”‚ โ”‚ MOVING โ”‚ โ”‚ MOVING โ”‚ + โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ โ”‚ + โ”‚ dist: 25cmโ”‚ โ”‚ vel: 15 โ”‚ โ”‚ vel: -15 โ”‚ + โ”‚ conf: 0.9 โ”‚ โ”‚ stall: no โ”‚ โ”‚ stall: no โ”‚ + โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ + CELL CELL CELL + + โ†‘ โ†‘ โ†‘ + โ”‚ โ”‚ โ”‚ + โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” + โ”‚IR Sensorโ”‚ โ”‚DC Motor โ”‚ โ”‚DC Motor โ”‚ + โ”‚ GPIO โ”‚ โ”‚ PWM โ”‚ โ”‚ PWM โ”‚ + 
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ + HARDWARE HARDWARE HARDWARE +``` + +### Nerve Examples + +| Nerve | Cells Used | Behavioral States | Feedback Triggers | +|-------|------------|-------------------|-------------------| +| **Collision Avoidance** | distance_front, distance_left, distance_right, motor_left, motor_right | IDLEโ†’DETECTโ†’EVALUATEโ†’EVADEโ†’RESUME | distance < threshold, motor stalled | +| **Charging Seeking** | battery_monitor, distance_*, motor_*, vision_detect (optional) | MONITORโ†’SEARCHโ†’APPROACHโ†’DOCKโ†’CHARGE | battery < 20%, station detected, docked | +| **Exploration** | distance_*, motor_*, imu_sensor | IDLEโ†’CHOOSEโ†’MOVEโ†’CHECKโ†’RECORDโ†’REPEAT | area mapped, obstacle found, stuck | +| **Conversation** | speech_stt, speech_tts, rag_query | LISTENโ†’TRANSCRIBEโ†’UNDERSTANDโ†’RESPONDโ†’SPEAK | speech detected, silence timeout | + +--- + +## ๐ŸŒŠ Layer 3: Organisms (Emergent Patterns) + +### What Is an Organism? + +An **organism** is not designedโ€”it **emerges** from multiple nerves operating simultaneously. The organism is the pattern of nerve activations over time. + +``` +ORGANISM: "Explorer-Alpha" +โ”œโ”€ ACTIVE NERVES: +โ”‚ โ”œโ”€ Collision Avoidance (priority 10, reflex) +โ”‚ โ”œโ”€ Exploration Pattern (priority 5, deliberate) +โ”‚ โ”œโ”€ Battery Monitoring (priority 8, reflex) +โ”‚ โ””โ”€ Object Discovery (priority 3, deliberate) +โ”‚ +โ”œโ”€ CELLS IN USE: +โ”‚ โ”œโ”€ distance_sensor_front (shared by Collision, Exploration) +โ”‚ โ”œโ”€ distance_sensor_left (shared) +โ”‚ โ”œโ”€ distance_sensor_right (shared) +โ”‚ โ”œโ”€ motor_left (shared by Collision, Exploration) +โ”‚ โ”œโ”€ motor_right (shared) +โ”‚ โ”œโ”€ battery_monitor (Battery Monitoring) +โ”‚ โ””โ”€ vision_detect (Object Discovery) +โ”‚ +โ””โ”€ BEHAVIOR: + Explores environment while avoiding obstacles. + Seeks charging when battery low. + Discovers and reports novel objects. 
+``` + +### Nerve Priority and Preemption + +When multiple nerves want to control the same cells: + +```python +NERVE_PRIORITIES = { + "collision_avoidance": 10, # HIGHEST - safety critical + "battery_critical": 9, # Must charge or die + "battery_low": 7, + "human_interaction": 6, + "exploration": 5, + "object_discovery": 3, + "idle_monitoring": 1, # LOWEST - background } -# BUT this (primitive sequence): -genome = [ - {"op": "read_sensor", "id": "ir_front", "store": "dist"}, - {"op": "compare", "var": "dist", "threshold": 20, "operator": "<"}, - {"op": "branch_if_true", "jump": 5}, - {"op": "motor_forward", "duration": 100}, - {"op": "motor_stop"}, - {"op": "signal_emit", "value": "var_dist"} -] +# Higher priority nerve preempts lower +if collision_avoidance.wants_motor and exploration.has_motor: + exploration.yield_cell("motor_left") + exploration.yield_cell("motor_right") + collision_avoidance.acquire_cells() ``` -**Over millions of competitions, SOME sequences will evolve patterns that WE might recognize as "A*-like" or "wall-following" - but the cells never knew those names. They just discovered they work.** +### Organism Identity ---- +Organisms don't have fixed genomes. Their identity is: -## ๐Ÿงฌ What Is a Cell? +1. **Nerve configuration**: Which nerves are active, their priorities +2. **Cell assignments**: Which cells are available to which nerves +3. **History**: Accumulated decisions in phoebe's `decision_trails` +4. 
**Reflexes**: Compiled nerve patterns from successful executions -A **cell** is a single execution unit with: -- **One genome** (sequence of primitive operations) -- **Life force budget** (energy to execute operations) -- **Execution environment** (container on k8s, process on ESP32, or virtual entity in Godot) -- **Communication capability** (can signal other cells) -- **Evolutionary pressure** (successful cells reproduce, failures die) - -**Each cell runs as a container** (Docker/Podman) on edge devices, workers, or as a virtual entity in simulation. - -### Organism = Collection of Cells - -**1 Cell โ‰  1 Complete Behavior** - -**N Cells Connected = 1 Organism = 1 Robot** - -``` -ORGANISM (one robot) - โ”œโ”€ Sensor Cell 1 (reads IR front) - โ”œโ”€ Sensor Cell 2 (reads battery) - โ”œโ”€ Comparison Cell (evaluates threshold) - โ”œโ”€ Logic Cell (decision making) - โ”œโ”€ Motor Cell 1 (forward movement) - โ”œโ”€ Motor Cell 2 (turning) - โ””โ”€ Communication Cell (coordinates above) -``` - -**Cells coordinate through signals** (like neurons in nervous system). - -**Decision emerges from network**, not from single cell. - ---- - -## ๐Ÿ”ค The Primitive Layer - -### What Are Primitives? - -**Primitives = basic operations discovered from body schema** - -Like a baby discovers: "I have hands" โ†’ "I can grasp" โ†’ "I can reach" - -Our system discovers: "I have motors" โ†’ "I can move_forward" โ†’ "I can navigate" - -### Primitive Categories - -**SENSING primitives** (from sensors): -```python -read_sensor(id) โ†’ value # Read IR, battery, light sensor -compare(value, threshold, op) โ†’ bool # >, <, ==, != -detect_change(sensor, time) โ†’ bool # Did value change recently? 
-``` - -**ACTUATION primitives** (from motors): -```python -motor_forward(duration_ms) # Move forward -motor_backward(duration_ms) # Move backward -motor_turn(direction, degrees) # Rotate -motor_stop() # Halt all motors -``` - -**LOGIC primitives** (control flow): -```python -if(condition) โ†’ branch # Conditional execution -loop(count) # Repeat N times -wait(duration_ms) # Pause execution -branch_if_true(jump_index) # Jump to instruction -``` - -**COMMUNICATION primitives** (cell signals): -```python -signal_emit(value) # Broadcast to other cells -signal_read(source_cell) โ†’ value # Read from specific cell -broadcast(value) # Broadcast to all cells -``` - -**MEMORY primitives** (state): -```python -store(variable, value) # Save to variable -recall(variable) โ†’ value # Load from variable -increment(variable) # Counter operations -``` - -### How Primitives Are Discovered - -**From Body Schema**: -1. System explores hardware -2. Discovers: "I have 2x DC motors, 3x IR sensors, 1x battery voltage ADC" -3. Creates primitive operations: `motor_forward()`, `read_sensor(ir_front)`, etc. -4. Stores in phoebe body_schema table -5. Genomes can now use these primitives - -**Example Body Schema โ†’ Primitives**: -```yaml -# Physical Robot (ESP32) -Body Discovered: - - 2x DC Motors (PWM 0-255) โ†’ motor_forward(), motor_turn() - - 3x IR Sensors (2-30cm) โ†’ read_sensor(ir_front/left/right) - - 1x Battery (3.0-4.2V) โ†’ read_sensor(battery) - - 1x IMU (heading) โ†’ read_sensor(heading) - -Primitives Available: - - motor_forward(ms) - - motor_turn(direction, degrees) - - motor_stop() - - read_sensor(ir_front/left/right/battery/heading) - - compare(value, threshold, operator) -``` - ---- - -## โšก The Life Force Economy (Gratification Solved!) - -**Everything costs energy. 
Everything.** - -### The Economic Reality - -**Life Force** = Synthetic energy budget tied to REAL infrastructure costs - -``` -1 kWh real electricity = X units of life force - -Power consumption โ†’ Life force cost -Energy savings โ†’ Life force earned -``` - -### Immediate Costs (Per Operation) - -**Every primitive operation costs LF:** - -```python -# Sensing (cheap) -read_sensor(id): -0.5 LF -compare(value, threshold): -0.1 LF -detect_change(): -0.3 LF - -# Actuation (expensive) -motor_forward(100ms): -2.0 LF -motor_turn(45deg): -1.5 LF -motor_stop(): -0.1 LF - -# Logic (very cheap) -if(condition): -0.05 LF -branch_if_true(): -0.05 LF -wait(100ms): -0.1 LF - -# Communication (moderate) -signal_emit(): -0.3 LF -signal_read(): -0.2 LF -broadcast(): -0.5 LF - -# Memory (very cheap) -store(var, value): -0.05 LF -recall(var): -0.05 LF -``` - -**Running balance**: Cell starts with LF budget (e.g., 50 LF). Each operation deducts cost. Hit 0 = death. - -### Milestone Rewards (How to Earn LF Back) - -**Survival milestones:** -```python -avoided_collision: +1.5 LF -battery_increased_5_percent: +3.0 LF -reached_charging_station: +10.0 LF -survived_60_seconds: +5.0 LF -``` - -**Exploration milestones:** -```python -explored_new_grid_square: +3.0 LF -found_obstacle_location: +5.0 LF -discovered_charging_station: +20.0 LF -mapped_terrain_property: +2.0 LF -``` - -**Discovery milestones** (BIG rewards): -```python -discovered_new_object: +20.0 LF -human_confirmed_label: +5.0 LF bonus -novel_sequence_succeeded: +10.0 LF -sequence_repeated_10_times: +50.0 LF (reliable pattern!) -``` - -### The Gratification Feedback Loop - -``` -Cell executes operation โ†’ LF deducted immediately (cost visible) - โ†“ -Action produces outcome โ†’ Milestone detected - โ†“ -Milestone reward โ†’ LF earned back (gratification!) 
- โ†“ -Net positive = survive longer = reproduce -Net negative = death - โ†“ -Population evolves toward LF-positive sequences -``` - -**This solves the gratification problem:** -- โœ… Immediate feedback (every operation has cost) -- โœ… Clear rewards (milestones trigger bonuses) -- โœ… Economic pressure (must earn more than spend) -- โœ… Evolutionary selection (successful patterns spread) - ---- - -## ๐ŸŒ The Dual Garden Architecture - -**CRITICAL**: This cellular architecture operates across **TWO gardens** that mirror and teach each other. - -**Timeline**: Virtual garden exists from Week 1 (Python sim), Real garden added Week 13+ (ESP32 robots) - -**See [[Dual-Garden-Architecture]] for complete details.** - -### Quick Summary: - -**We don't build ONE garden THEN switch - we build virtual FIRST, then add real:** - -``` -WEEK 1-12: VIRTUAL GARDEN ONLY -๐ŸŽฎ VIRTUAL GARDEN (Python โ†’ Godot) - โ”‚ - โ”œโ”€ Week 1-4: Python 10x10 world - โ”œโ”€ Week 5+: Godot upgrade (optional) - โ”œโ”€ 1000s of organisms competing - โ”œโ”€ Fast iteration - โ”œโ”€ Safe experimentation - โ”œโ”€ Where EVOLUTION happens - โ”œโ”€ garden_type = 'virtual' - โ”‚ - โ””โ”€ noise_gap = NULL (no real garden yet to compare!) - -WEEK 13+: DUAL GARDEN ACTIVATED -๐ŸŽฎ VIRTUAL GARDEN ๐Ÿค– REAL GARDEN -(Python/Godot) (ESP32 Physical Robots) - โ”‚ โ”‚ - โ”œโ”€ Hypothesis generation โ”œโ”€ Truth validation - โ”œโ”€ Fast iteration โ”œโ”€ Slow validation - โ”œโ”€ Low noise โ”œโ”€ High noise (reality!) 
- โ”œโ”€ 1000s organisms โ”œโ”€ 3-5 robots - โ”œโ”€ Base rewards (1x) โ”œโ”€ Validation rewards (3x) - โ”‚ โ”‚ - โ””โ”€โ”€โ”€โ”€ FEEDBACK LOOP โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ - Virtual predicts โ†’ Real validates โ†’ - Noise gap measured โ†’ Virtual corrects -``` - -### Reward Weighting by Garden - -**Virtual Garden** (hypothesis generation): -```python -milestone_reward_base = 5.0 LF -discovery_bonus = 10.0 LF -``` - -**Real Garden** (truth validation): -```python -milestone_reward_real = 5.0 LF ร— 3 = 15.0 LF # 3x multiplier! -discovery_bonus_real = 10.0 LF ร— 3 = 30.0 LF -``` - -**Cross-validation MEGA BONUS**: -```python -virtual_pattern_validated_in_real: +50.0 LF BONUS! -``` - -### The Noise Gap Metric (Self-Measuring Learning!) - -**Noise = difference between virtual simulation and real physics** - -**Timeline**: Noise gap measurable starting **Week 13+** when real garden exists! - -```python -noise_gap = 1 - (real_success_rate / virtual_success_rate) - -# Week 1-12: noise_gap = NULL (no real garden yet!) - -# Week 13 (Real garden just added) -virtual_success: 95% -real_success: 68% -noise_gap: 1 - (0.68 / 0.95) = 0.28 (28% performance degradation) -โ†’ "Virtual models unreliable, reality very different" - -# Week 17 (After corrections) -virtual_success: 95% -real_success: 82% -noise_gap: 1 - (0.82 / 0.95) = 0.14 (14% degradation) -โ†’ "Models improving, learning noise robustness" - -# Week 25 (Mature dual garden) -virtual_success: 95% -real_success: 91% -noise_gap: 1 - (0.91 / 0.95) = 0.04 (4% degradation) -โ†’ "Virtual models highly accurate!" 
-``` - -**Noise gap becomes decision context for Claude:** - -```python -if noise_gap > 0.3: - recommendation = "Focus on REAL garden validation (models unreliable)" - specialist_confidence = LOW - -elif noise_gap < 0.1: - recommendation = "Explore more in VIRTUAL (trust predictions)" - specialist_confidence = HIGH - -else: - recommendation = "Balanced approach, validate key hypotheses" - specialist_confidence = MEDIUM -``` - -**The system self-measures how well it understands reality and adjusts strategy!** - ---- - -## ๐Ÿ‘๏ธ The God's Eye (Camera System) - -**NEW: Mobile camera system on ceiling rails!** - -### Hardware - -**Components:** -- 4K security camera (existing!) -- Motorized X-Y rail system (ceiling mounted) -- ESP32/Arduino control -- Linear actuators for movement - -### Capabilities - -**Perfect observation:** -- Tracks organisms as they move -- Provides exact position (no WiFi triangulation error) -- Multi-angle views (zoom, pan, tilt) -- Object detection (YOLO/MobileNet inference) -- Novelty detection (unknown objects) - -**Active coordination:** -``` -Camera: "Detected unknown object at (2.5, 3.1)" -System: "Organism Alpha, investigate coordinates (2.5, 3.1)" -Organism: Navigates there, approaches object -Camera: Zooms in, captures detailed image -System: "What is this?" [shows you frame] -You: "That's a shoe" -Organism: +20 LF discovery bonus! -phoebe: Stores object in objects table -``` - -**Exploration missions:** -- Camera spots something in distant room -- Sends robo to investigate (scout mission!) 
-- "Go explore hallway, report back" -- Robo returns with sensory data -- Camera confirms visual validation - -### What Organisms Receive - -**From their local sensors** (limited, noisy): -- IR proximity: "15cm obstacle ahead" -- Light sensor: "Brightness strongest east" -- Battery: "3.7V, getting low" - -**From garden (god's eye, perfect, global)**: -- Floor plan: "You're in 5m ร— 4m bounded space" -- Position: "You're at (1.2, 2.5) facing 45ยฐ" -- Known objects: "Chair at (2.3, 1.8), charging station at (4.0, 0.5)" - -**Organisms learn navigation through exploration**, even with perfect position knowledge. - -**It's like humans**: You know you're "in a room" but still explore "where's the remote?" - ---- - -## ๐Ÿ” Object Discovery + Image Recognition - -### The Discovery Flow - -``` -1. Organism explores โ†’ approaches unknown object -2. Camera (god's eye) detects novelty -3. Image recognition: YOLO/MobileNet inference (local GPU) -4. System: "๐Ÿ” New object detected! What is this?" - [Shows you camera frame with bounding box] -5. You label: "That's a chair" -6. Organism: +20 VP discovery bonus! ๐ŸŽ‰ -7. phoebe stores object in objects table -8. Future organisms: Know "chair at (2.3, 1.8)" from start -``` - -### Gratification Layers - -**Immediate reward:** -- Organism discovers novel object โ†’ +20 LF - -**Social validation:** -- Human acknowledges discovery โ†’ +5 LF bonus -- "Yes! Good find!" (baby parallel!) - -**Utility reward:** -- Knowledge helps future organisms (legacy) -- Map fills in with labeled objects (progress visible) - -### The Baby Parallel - -**Human baby:** -- Explores environment -- Touches unknown object -- Parent: "That's a chair!" (labels it) -- Baby: Gets excited, learns word -- Explores more to get more labels - -**Our organisms:** -- Explore garden -- Approach unknown object -- You: "That's a shoe!" 
(labels it) -- Organism: Gets LF bonus, pattern reinforced -- Explores more to discover more objects - -**This is teaching through exploration + social feedback!** - ---- - -## ๐Ÿง  The Specialist Architecture - -**CRITICAL**: Intelligence is DISTRIBUTED, not monolithic. - -**See for complete details.** - -### The Core Insight: - -**Claude's weights are frozen** (can't train between sessions) - -**Solution**: Claude doesn't hold intelligence - Claude COORDINATES intelligence! - -``` -โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” -โ”‚ CLAUDE (The Mediator) โ”‚ -โ”‚ - Frozen weights (can't change) โ”‚ -โ”‚ - Knows MAP of specialists โ”‚ -โ”‚ - Routes questions to experts โ”‚ -โ”‚ - Integrates multi-domain answers โ”‚ -โ”‚ - Makes strategic decisions โ”‚ -โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ - โ”‚ - โ”œโ”€โ†’ [Navigation Specialist] โ† WE TRAIN THIS - โ”‚ (patterns in phoebe, trainable via competition) - โ”‚ - โ”œโ”€โ†’ [Resource Specialist] โ† WE TRAIN THIS - โ”‚ (patterns in phoebe, trainable via competition) - โ”‚ - โ”œโ”€โ†’ [Communication Specialist] โ† WE TRAIN THIS - โ”‚ (patterns in phoebe, trainable via competition) - โ”‚ - โ””โ”€โ†’ [Sensing Specialist] โ† WE TRAIN THIS - (patterns in phoebe, trainable via competition) -``` - -### Specialist Formation (From Competition Data) - -**Specialists = successful genome sequences stored in phoebe** - -``` -Generation 1-1000: Random chaos, 99.9% death - โ†“ -Generation 1000-5000: Some sequences survive longer - โ†“ -Generation 5000-10000: Patterns emerging (obstacle avoidance) - โ†“ -10,000+ competitions: Statistical confidence > 0.9 - โ†“ -Specialist formed: "Navigation Specialist" - โ†“ -Stores in phoebe: - - Winning genome sequences - - Context patterns (when they work) - - Success rates, confidence scores - - Noise gap metrics -``` - -### How Claude Uses 
Specialists - -**Claude queries specialist for context:** - -```python -# Claude asks specialist: -context = { - "scenario": "maze_navigation", - "weather": "chaos_storm", - "battery": 25 -} - -specialist_response = query_navigation_specialist(context) - -# Specialist synthesizes phoebe data: -{ - "recommendation": "Sequence A", - "genome_sequence": [read_sensor, compare, motor_forward, ...], - "confidence": 0.95, - "success_rate": 0.73, - "sample_size": 10000, - "context_match": "exact", - "noise_gap": 0.08, # Low = trustworthy in real world - "alternatives": [ - {"sequence": "B", "success": 0.62, "samples": 8000}, - {"sequence": "C", "success": 0.45, "samples": 5000} - ], - "failure_modes": { - "gets_stuck_in_loops": 0.18, - "battery_exhaustion": 0.12 - }, - "cost_analysis": { - "avg_lf_cost": 45, - "avg_lf_earned": 58, - "net_positive": 13 - }, - "trend": "improving" -} -``` - -**Claude makes strategic decision:** - -``` -"Based on specialist analysis: - - Sequence A has 95% confidence, 73% success (n=10,000) - - Low noise gap (0.08) = virtual models trustworthy - - Net positive economics (+13 LF per run) - - Decision: Deploy Sequence A - Hedge: Keep 20% exploration for continued learning" -``` - -**Specialists provide CONTEXT for Claude to reason with, not automated decisions.** - ---- - -## ๐ŸŽฏ Reflex Formation & Weight Distribution - -### From Exploration to Reflex - -**The transformation:** - -``` -EXPLORATION (first 1000 rounds): -โ”œโ”€โ”€ Random genome sequences competing -โ”œโ”€โ”€ High variance in outcomes -โ”œโ”€โ”€ No clear winner yet -โ”œโ”€โ”€ Expensive (try everything: ~65 LF per attempt) -โ””โ”€โ”€ Cannot automate yet - - โ†“ Competition continues โ†“ - -FORMING REFLEX (rounds 1000-5000): -โ”œโ”€โ”€ Pattern emerging (sequence A winning 60%+) -โ”œโ”€โ”€ Variance decreasing -โ”œโ”€โ”€ Winner becoming clear -โ”œโ”€โ”€ Partial automation possible -โ””โ”€โ”€ Still learning - - โ†“ Pattern stabilizes โ†“ - -STABLE REFLEX (5000+ rounds): -โ”œโ”€โ”€ 
Dominant sequence >70% -โ”œโ”€โ”€ Pattern stable across contexts -โ”œโ”€โ”€ High confidence (>0.85) -โ”œโ”€โ”€ Automatic execution possible -โ””โ”€โ”€ Compiled intelligence (94.6% cheaper!) -``` - -### Weight Distribution = Intelligence - -**NOT just**: "Sequence A succeeded 7,300 times" - -**BUT**: "In maze navigation with obstacles, population REFLEXIVELY uses:" -``` -Sequence A: 73% (dominant reflex) -Sequence B: 18% (fallback) -Sequence C: 7% (rare contexts) -Sequence D: 2% (exploration) -``` - -**This distribution IS the learned intelligence!** - -**Stored in phoebe:** ```sql -CREATE TABLE reflex_distributions ( - reflex_id UUID PRIMARY KEY, - specialist_id UUID, - context_type VARCHAR, -- "maze_navigation", "open_space", etc. - sequence_weights JSONB, -- {"seq_a": 0.73, "seq_b": 0.18, "seq_c": 0.07, "seq_d": 0.02} - confidence FLOAT, - formed_at TIMESTAMP, - rounds_stable INT +-- Organism identity in phoebe +CREATE TABLE organisms ( + id BIGSERIAL PRIMARY KEY, + name VARCHAR(255), + + -- Nerve configuration + active_nerves JSONB, -- {"collision_avoidance": {"priority": 10, "mode": "reflex"}} + + -- Cell assignments + cell_bindings JSONB, -- {"distance_sensor_front": "i2c_0x40", ...} + + -- Identity accumulates through experience + total_decisions INT DEFAULT 0, + successful_decisions INT DEFAULT 0, + reflexes_compiled INT DEFAULT 0, + + -- Lifeforce (survival) + lifeforce_current FLOAT DEFAULT 100.0, + lifeforce_earned_total FLOAT DEFAULT 0.0, + lifeforce_spent_total FLOAT DEFAULT 0.0, + + created_at TIMESTAMPTZ DEFAULT NOW(), + last_active TIMESTAMPTZ ); ``` -### Economic Value of Reflexes +--- + +## โšก The Lifeforce Economy (Unified) + +### Cost Flow: Hardware โ†’ Cell โ†’ Nerve โ†’ Organism -**Without reflex (exploration)**: ``` -โ”œโ”€โ”€ Try all sequences: 50 LF -โ”œโ”€โ”€ Evaluate outcomes: 10 LF -โ”œโ”€โ”€ Select best: 5 LF -โ”œโ”€โ”€ Total: 65 LF -โ””โ”€โ”€ Time: 500ms +ORGANISM lifeforce budget: 100 LF + โ”‚ + โ”œโ”€ NERVE: Collision Avoidance 
activates + โ”‚ โ”‚ + โ”‚ โ”œโ”€ CELL: distance_sensor_front.poll() โ†’ -0.5 LF + โ”‚ โ”œโ”€ CELL: distance_sensor_left.poll() โ†’ -0.5 LF + โ”‚ โ”œโ”€ CELL: distance_sensor_right.poll() โ†’ -0.5 LF + โ”‚ โ”œโ”€ NERVE: evaluate() โ†’ -0.5 LF (compute) + โ”‚ โ”œโ”€ CELL: motor_left.turn() โ†’ -1.0 LF + โ”‚ โ””โ”€ CELL: motor_right.turn() โ†’ -1.0 LF + โ”‚ + โ”‚ Total nerve cost: 4.0 LF + โ”‚ + โ”œโ”€ OUTCOME: Collision avoided successfully + โ”‚ โ””โ”€ REWARD: +5.0 LF + โ”‚ + โ””โ”€ NET: +1.0 LF (organism profited from this behavior) ``` -**With reflex (automatic)**: -``` -โ”œโ”€โ”€ Query phoebe: 0.5 LF -โ”œโ”€โ”€ Weighted random selection: 1.0 LF -โ”œโ”€โ”€ Execute dominant sequence: 2.0 LF -โ”œโ”€โ”€ Total: 3.5 LF -โ””โ”€โ”€ Time: 50ms +### Cell Costs (Atomic) -Savings: 94.6% cost, 10x faster! -``` +| Cell Type | Operation | Cost (LF) | +|-----------|-----------|-----------| +| **Sensor** | poll | 0.3-0.5 | +| **Motor** | move (per 100ms) | 1.0-2.0 | +| **Speech STT** | transcribe | 5.0 | +| **Speech TTS** | synthesize | 4.0 | +| **Vision** | detect frame | 8.0 | -**Reflexes = compiled intelligence = economic optimization!** +### Nerve Costs (Behavioral) + +| Nerve Mode | Overhead | Total (typical path) | +|------------|----------|---------------------| +| **Deliberate** | +5.0 LF (LLM inference) | ~10 LF | +| **Hybrid** | +1.0 LF (pattern match) | ~5 LF | +| **Reflex** | +0.0 LF (compiled) | ~2.5 LF | + +### Rewards (Milestones) + +| Achievement | Reward (LF) | +|-------------|-------------| +| Collision avoided | +5.0 | +| New area explored | +3.0 | +| Object discovered | +20.0 | +| Human confirmed label | +5.0 bonus | +| Charging station reached | +10.0 | +| Survived 60 seconds | +5.0 | +| Reflex compiled (100 successes) | +50.0 | --- -## ๐Ÿ”„ The Rebirth Mechanism +## ๐Ÿ”„ Evolution: Deliberate โ†’ Reflex -### The Problem Hinton Solved (for monolithic models): +### The Discovery Path + +All cells and nerves start **deliberate** (flexible, expensive) and evolve 
to **reflex** (compiled, cheap) through successful execution. ``` -Model dies (hardware failure, process ends) - โ†“ -Weights saved to disk - โ†“ -New hardware/process starts - โ†“ -Restore weights from disk - โ†“ -Model reborn (capability intact) +WEEK 1-4: DELIBERATE +โ”œโ”€ Cell states: designed by partnership +โ”œโ”€ Nerve logic: LLM decides transitions +โ”œโ”€ Cost: ~10 LF per nerve activation +โ”œโ”€ Latency: ~1000ms +โ”œโ”€ Success rate: 60% (learning) +โ””โ”€ Training data: rich, exploratory + +WEEK 5-8: HYBRID +โ”œโ”€ Cell states: verified through use +โ”œโ”€ Nerve logic: patterns compiled, LLM for edge cases +โ”œโ”€ Cost: ~5 LF average +โ”œโ”€ Latency: ~500ms +โ”œโ”€ Success rate: 85% +โ””โ”€ Training data: refinement + +WEEK 9+: REFLEX +โ”œโ”€ Cell states: proven, optimized +โ”œโ”€ Nerve logic: pure state machine (no LLM) +โ”œโ”€ Cost: ~2.5 LF +โ”œโ”€ Latency: <200ms +โ”œโ”€ Success rate: 94% +โ””โ”€ Training data: edge cases only + +EVOLUTION SAVINGS: +โ”œโ”€ Cost: 75% reduction (10 โ†’ 2.5 LF) +โ”œโ”€ Latency: 80% reduction (1000 โ†’ 200ms) +โ””โ”€ Reliability: 57% improvement (60% โ†’ 94%) ``` -### Our Problem: +### Compilation Trigger -**Claude's weights can't be saved/restored between sessions!** +A nerve compiles to reflex when: -### Our Solution: - -**Claude's role is STATIC (mediator), specialist patterns are DYNAMIC (stored in phoebe):** - -``` -System dies (session ends, hardware fails) - โ†“ -phoebe persists (PostgreSQL backup) - โ”œโ”€ Body schema (discovered capabilities) - โ”œโ”€ Object map (discovered environment) - โ”œโ”€ Genome sequences (evolved strategies) - โ”œโ”€ Specialist patterns (successful sequences) - โ”œโ”€ Reflex distributions (learned behaviors) - โ””โ”€ System state (life force, experiments) - โ†“ -New session/hardware starts - โ†“ -Claude queries phoebe for context - โ†“ -Loads body schema, object map, specialists - โ†“ -Restores reflex patterns - โ†“ -System reborn (LEARNING INTACT!) 
-``` - -### What Persists For Rebirth: - -**1. Body Schema:** -```sql --- What capabilities exist: -body_schema ( - hardware_id, - functional_domains, - capabilities, - primitives_available -) -``` - -**2. Object Map:** -```sql --- What's in environment: -objects ( - object_label, - position_x, position_y, - object_type, - properties -) -``` - -**3. Genome Sequences:** -```sql --- What strategies evolved: -genomes ( - genome_id, - primitive_sequence, -- The actual code - success_rate, - avg_survival_time -) -``` - -**4. Specialist Patterns:** -```sql --- What works in which context: -specialist_weights ( - specialist_id, - domain, - winning_sequences, - confidence_scores -) -``` - -**5. Reflex Distributions:** -```sql --- Automatic behaviors: -reflex_distributions ( - reflex_id, - context_type, - sequence_weights, -- {seq_a: 0.73, seq_b: 0.18} - confidence -) -``` - -**6. System State:** -```sql --- Current operations: -system_state ( - life_force_total, - active_experiments JSONB, - noise_gap_current -) -``` - -### Rebirth Scenarios: - -**Claude session ends:** -- Context lost, working memory cleared -- Next session: Query phoebe for everything -- Load body schema, objects, specialists, reflexes -- **Continuity restored** (Claude "remembers" via phoebe) - -**Hardware failure:** -- All containers lost, only phoebe survives -- Restore phoebe backup, deploy new hardware -- Spawn organisms with proven genomes -- Load specialists and reflexes from phoebe -- **System reborn** (intelligence intact) - -**Migration to new hardware:** -- Backup phoebe, push genomes to git -- Deploy to new substrate -- Restore database, clone repos -- Spawn organisms from proven sequences -- **Zero learning loss** (different substrate, same intelligence) - -**The key**: Intelligence is DISTRIBUTED (Claude + specialists + phoebe), not monolithic! 
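The restore sequence above can be sketched in a few lines. This is an illustration, not the real implementation: the table and column names follow the illustrative schemas in this section, and the cursor protocol is an assumption.

```python
# Sketch of the rebirth flow: on a fresh session, rebuild working context
# from the six persistent stores in phoebe. Table and column names mirror
# the schemas sketched above; the cursor protocol (execute/fetchall) is an
# assumption, not the real client code.

REBIRTH_TABLES = {
    "body_schema":          "SELECT hardware_id, functional_domains, capabilities, primitives_available FROM body_schema",
    "objects":              "SELECT object_label, position_x, position_y, object_type, properties FROM objects",
    "genomes":              "SELECT genome_id, primitive_sequence, success_rate, avg_survival_time FROM genomes",
    "specialist_weights":   "SELECT specialist_id, domain, winning_sequences, confidence_scores FROM specialist_weights",
    "reflex_distributions": "SELECT reflex_id, context_type, sequence_weights, confidence FROM reflex_distributions",
    "system_state":         "SELECT life_force_total, active_experiments, noise_gap_current FROM system_state",
}

def rebirth(cursor):
    """Load every persistent store into an in-memory context dict.

    Nothing outside phoebe survives a death; this dict is the reborn
    mind's entire starting context.
    """
    context = {}
    for store, query in REBIRTH_TABLES.items():
        cursor.execute(query)
        context[store] = cursor.fetchall()
    return context
```

Whether the substrate is a new Claude session, replacement hardware, or a migration target, the same six queries restore continuity.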
- ---- - -## ๐Ÿ—๏ธ Complete System Architecture - -### Layer 1: Physical Substrate (ESP32 Robots - Optional Phase 3) - -``` -Physical Hardware: -โ”œโ”€โ”€ ESP32 microcontroller -โ”œโ”€โ”€ LiPo battery + solar panel -โ”œโ”€โ”€ Motors, sensors (ultrasonic, IR, IMU) -โ”œโ”€โ”€ WiFi (MQTT connection) -โ””โ”€โ”€ ~$30 per robo - -Jobs: -โ”œโ”€โ”€ Execute genome sequences locally -โ”œโ”€โ”€ Read sensors every cycle -โ”œโ”€โ”€ Publish state to MQTT: robo/alpha/state -โ”œโ”€โ”€ Subscribe to commands: robo/alpha/command -โ”œโ”€โ”€ Report outcomes to phoebe -``` - -### Layer 2: Virtual Substrate (Godot Simulation - Primary Platform) - -``` -Virtual Garden (Godot): -โ”œโ”€โ”€ 3D simulation world -โ”œโ”€โ”€ Virtual robots with physics -โ”œโ”€โ”€ 1000s of organisms competing -โ”œโ”€โ”€ Rapid evolution (minutes per generation) -โ”œโ”€โ”€ Camera system (perfect observation) -โ”œโ”€โ”€ Where RESEARCH happens -โ””โ”€โ”€ Primary platform (90% of time) - -Jobs: -โ”œโ”€โ”€ Simulate physics (movement, collisions) -โ”œโ”€โ”€ Execute genome sequences -โ”œโ”€โ”€ Track organism states -โ”œโ”€โ”€ Detect milestones, award LF -โ”œโ”€โ”€ Log outcomes to phoebe -``` - -### Layer 3: Container Substrate (k8s Cells - Current Focus) - -``` -Cell Host (k8s workers): -โ”œโ”€โ”€ Docker/Podman (container runtime) -โ”œโ”€โ”€ 50-100 cell containers simultaneously -โ”œโ”€โ”€ Each cell = 1 genome execution -โ”œโ”€โ”€ Resource monitoring -โ””โ”€โ”€ Local cell orchestration - -Jobs: -โ”œโ”€โ”€ Execute genome sequences in containers -โ”œโ”€โ”€ Track LF costs/rewards -โ”œโ”€โ”€ Cells communicate via network -โ”œโ”€โ”€ Coordinate as organisms -โ”œโ”€โ”€ Log outcomes to phoebe -``` - -### Layer 4: Central Coordination (VMs) - -``` -phoebe VM (PostgreSQL): -โ”œโ”€โ”€ 15 tables (body schema, genomes, objects, specialists, reflexes, etc.) 
-โ”œโ”€โ”€ Cell outcomes logged -โ”œโ”€โ”€ Object discoveries -โ”œโ”€โ”€ Specialist patterns -โ”œโ”€โ”€ Reflex distributions -โ”œโ”€โ”€ Evolution lineage -โ””โ”€โ”€ THE REBIRTH SUBSTRATE - -Orchestrator VM: -โ”œโ”€โ”€ Spawn/kill cells based on performance -โ”œโ”€โ”€ Manage life force economy -โ”œโ”€โ”€ Coordinate across substrates -โ””โ”€โ”€ Query phoebe for patterns - -MQTT Broker VM: -โ””โ”€โ”€ Message routing (robos โ†” cells โ†” mind) -``` - -### Layer 5: Observation & Discovery (The God's Eye) - -``` -Camera System (ceiling rails): -โ”œโ”€โ”€ 4K camera with motorized X-Y rails -โ”œโ”€โ”€ Tracks organisms dynamically -โ”œโ”€โ”€ Perfect position observation -โ”œโ”€โ”€ Object detection (YOLO/MobileNet) -โ”œโ”€โ”€ Novelty detection -โ””โ”€โ”€ Human labeling interface - -Jobs: -โ”œโ”€โ”€ Provide perfect position data -โ”œโ”€โ”€ Detect unknown objects -โ”œโ”€โ”€ Trigger discovery flow -โ”œโ”€โ”€ Validate organism behaviors -โ”œโ”€โ”€ Record for analysis -``` - -### Layer 6: The Garden (Command Center) - -``` -Command Center (Interface): -โ”œโ”€โ”€ Visual representation of AI perception -โ”œโ”€โ”€ Decision debates live -โ”œโ”€โ”€ Organism tracking -โ”œโ”€โ”€ Life force economy status -โ”œโ”€โ”€ Object labeling UI -โ”œโ”€โ”€ Noise gap visualization -โ””โ”€โ”€ Autonomy controls -``` - ---- - -## ๐Ÿ”„ The Complete Feedback Loop - -### Example: Organism Alpha Needs Charging - -**1. STATE (Organism sensors)**: -```json -{ - "organism_id": "alpha", - "garden": "virtual", - "battery": 25, - "position": {"x": 1.2, "y": 2.5}, - "heading": 45, - "ir_front": 15, - "ir_left": 30, - "ir_right": 8 -} -``` - -**2. CONTEXT (From god's eye)**: -```json -{ - "floor_plan": "5m x 4m bounded", - "known_objects": [ - {"label": "chair", "pos": [2.3, 1.8], "type": "obstacle"}, - {"label": "charging_station", "pos": [4.0, 0.5], "type": "goal"} - ], - "position_exact": [1.2, 2.5] -} -``` - -**3. 
GENOME EXECUTES** (primitive sequence): ```python -# Organism's genome: -[ - {"op": "read_sensor", "id": "battery", "store": "batt"}, - {"op": "compare", "var": "batt", "threshold": 30, "operator": "<"}, - {"op": "branch_if_true", "jump": 6}, # If battery low, seek charge - {"op": "motor_forward", "duration": 100}, # Normal exploration - {"op": "read_sensor", "id": "ir_front"}, - {"op": "branch_if_true", "jump": 3}, # If obstacle, turn - # ... charging seeking sequence starts here -] +REFLEX_COMPILATION_THRESHOLD = { + "min_executions": 100, + "min_success_rate": 0.90, + "max_variance": 0.15, # Consistent state paths + "min_pattern_coverage": 0.80, # 80% of cases match known patterns +} -# LF costs: -read_sensor: -0.5 LF -compare: -0.1 LF -branch: -0.05 LF -motor_forward: -2.0 LF -Total: -2.65 LF spent +def check_reflex_ready(nerve_id): + stats = query_decision_trails(nerve_id) + + if (stats.total_executions >= 100 and + stats.success_rate >= 0.90 and + stats.state_path_variance <= 0.15): + + compile_reflex(nerve_id) + log_milestone("reflex_compiled", nerve_id, reward=50.0) ``` -**4. ACTION EXECUTED**: -- Organism moves toward charging station -- Camera tracks movement -- Position updates +--- -**5. MILESTONE REACHED**: -```python -# Organism reaches charging station: -milestone = "reached_charging_station" -reward = +10.0 LF (base) ร— 1 (virtual garden) = +10.0 LF +## ๐Ÿ—„๏ธ Data Architecture (v4) -# Net: -2.65 spent, +10.0 earned = +7.35 LF net positive! -``` +### Core Tables -**6. OUTCOME LOGGED** (to phoebe): ```sql --- Cell outcome: -INSERT INTO cells VALUES ( - organism_id: 'alpha', - genome_id: 'genome_charging_v5', - garden: 'virtual', - born_at: '2025-10-17 14:00:00', - died_at: '2025-10-17 14:02:15', -- Survived 135 seconds! 
- survival_time_seconds: 135, - lf_allocated: 50, - lf_consumed: 42, - lf_earned: 55, - success: true +-- Layer 1: Cells +CREATE TABLE cells ( + id BIGSERIAL PRIMARY KEY, + cell_type VARCHAR(50), -- 'sensor', 'motor', 'organ' + cell_name VARCHAR(100) UNIQUE, -- 'distance_sensor_front' + hardware_binding JSONB, -- {"type": "i2c", "address": "0x40"} + + -- State machine definition + states JSONB, -- ["IDLE", "POLLING", "READING", "REPORTING"] + transitions JSONB, -- [{"from": "IDLE", "to": "POLLING", "cost": 0.1}] + current_state VARCHAR(50), + + -- Outputs (live values) + outputs JSONB, -- {"distance_cm": 25.5, "confidence": 0.9} + + -- Health + operational BOOLEAN DEFAULT true, + error_count INT DEFAULT 0, + last_error TEXT, + + created_at TIMESTAMPTZ DEFAULT NOW(), + updated_at TIMESTAMPTZ DEFAULT NOW() ); --- Milestone record: -INSERT INTO milestones VALUES ( - organism_id: 'alpha', - milestone_type: 'reached_charging_station', - lf_reward: 10.0, - timestamp: '2025-10-17 14:01:45' +-- Layer 2: Nerves +CREATE TABLE nerves ( + id BIGSERIAL PRIMARY KEY, + nerve_name VARCHAR(100) UNIQUE, -- 'collision_avoidance' + + -- Cell dependencies + required_cells JSONB, -- ["distance_sensor_front", "motor_left"] + optional_cells JSONB, -- ["speech_tts"] + + -- State machine definition + states JSONB, -- ["IDLE", "DETECT", "EVALUATE", "EVADE", "RESUME"] + transitions JSONB, + current_state VARCHAR(50), + + -- Evolution + mode VARCHAR(20) DEFAULT 'deliberate', -- 'deliberate', 'hybrid', 'reflex' + total_executions INT DEFAULT 0, + successful_executions INT DEFAULT 0, + compiled_at TIMESTAMPTZ, -- When became reflex + + -- Costs + avg_cost_deliberate FLOAT, + avg_cost_reflex FLOAT, + cost_reduction_percent FLOAT, + + created_at TIMESTAMPTZ DEFAULT NOW() +); + +-- Layer 3: Organisms +CREATE TABLE organisms ( + id BIGSERIAL PRIMARY KEY, + name VARCHAR(255), + + active_nerves JSONB, -- {"collision_avoidance": {"priority": 10}} + cell_bindings JSONB, + + lifeforce_current FLOAT 
DEFAULT 100.0, + total_decisions INT DEFAULT 0, + reflexes_compiled INT DEFAULT 0, + + created_at TIMESTAMPTZ DEFAULT NOW(), + last_active TIMESTAMPTZ +); + +-- Decision history (training data) +CREATE TABLE decision_trails ( + id BIGSERIAL PRIMARY KEY, + organism_id BIGINT REFERENCES organisms(id), + nerve_id BIGINT REFERENCES nerves(id), + + -- State path taken + states_visited JSONB, -- ["IDLE", "DETECT", "EVALUATE", "EVADE", "RESUME"] + + -- Cell interactions + cell_reads JSONB, -- [{"cell": "distance_front", "value": 25, "state": "REPORTING"}] + cell_commands JSONB, -- [{"cell": "motor_left", "action": "turn", "result": "success"}] + + -- Economics + lifeforce_cost FLOAT, + lifeforce_reward FLOAT, + lifeforce_net FLOAT, + + -- Outcome + outcome VARCHAR(20), -- 'success', 'failure', 'timeout' + + -- Timing + started_at TIMESTAMPTZ, + completed_at TIMESTAMPTZ, + latency_ms INT ); ``` -**7. EVOLUTION**: -``` -Organism Alpha succeeded: -โ”œโ”€โ”€ Net positive LF (+13 net) -โ”œโ”€โ”€ Survived 135 seconds (above average) -โ”œโ”€โ”€ Genome marked for reproduction -โ””โ”€โ”€ Spawns mutation: genome_charging_v6 +### Key Queries -Organism Beta failed: -โ”œโ”€โ”€ Net negative LF (-15 net) -โ”œโ”€โ”€ Died at 23 seconds -โ”œโ”€โ”€ Genome marked for culling -โ””โ”€โ”€ Dies, does not reproduce -``` +```sql +-- Cell health dashboard +SELECT cell_name, cell_type, current_state, operational, + outputs->>'distance_cm' as distance, + outputs->>'confidence' as confidence +FROM cells +WHERE cell_type = 'sensor'; -**8. 
PATTERN EMERGENCE** (after 10,000 organisms): -```python -# Analysis of successful genomes: -charging_seeking_pattern = { - "sequence": [read_battery, compare_low, navigate_to_goal], - "success_rate": 0.73, - "confidence": 0.95, - "sample_size": 7300 -} +-- Nerve evolution status +SELECT nerve_name, mode, total_executions, + successful_executions, + ROUND(successful_executions::numeric / NULLIF(total_executions, 0) * 100, 1) as success_rate, + cost_reduction_percent +FROM nerves +ORDER BY total_executions DESC; -# Specialist forms: -navigation_specialist.add_pattern(charging_seeking_pattern) -``` +-- Organism lifeforce ranking +SELECT name, lifeforce_current, reflexes_compiled, + total_decisions, + ROUND(lifeforce_current / NULLIF(total_decisions, 0), 2) as efficiency +FROM organisms +ORDER BY lifeforce_current DESC; -**9. REFLEX FORMATION** (stable pattern): -```python -# After 10,000 trials, reflex forms: -reflex = { - "context": "low_battery_charging", - "sequence_weights": { - "charging_v5": 0.73, - "charging_v3": 0.18, - "random_explore": 0.09 - }, - "confidence": 0.95, - "cost": 3.5 LF # Reflex execution (vs 65 LF exploration) -} - -# 94.6% cost reduction! -``` - -**10. REAL GARDEN VALIDATION**: -```python -# Deploy winning sequence to real robot: -real_organism.genome = charging_v5 - -# Execute in real garden: -real_success_rate = 0.68 # Lower due to noise! -virtual_success_rate = 0.73 - -# Noise gap: -noise_gap = 1 - (0.68 / 0.73) = 0.07 # Only 7% degradation - -# Reward multiplier for real validation: -real_reward = 10.0 LF ร— 3 = 30.0 LF - -# Cross-validation bonus: -cross_validation_bonus = +50.0 LF # Virtual pattern works in real! 
+-- Training data for reflex compilation +SELECT states_visited, COUNT(*) as occurrences, + AVG(lifeforce_cost) as avg_cost, + SUM(CASE WHEN outcome = 'success' THEN 1 ELSE 0 END)::float / COUNT(*) as success_rate +FROM decision_trails +WHERE nerve_id = (SELECT id FROM nerves WHERE nerve_name = 'collision_avoidance') +GROUP BY states_visited +ORDER BY occurrences DESC; ``` --- -## ๐ŸŽฏ Implementation Path +## ๐Ÿ”— Integration with Existing Architecture -### Phase 0: Foundation โœ… COMPLETE +### Nervous System (Nervous-System.md) -- โœ… phoebe VM deployed (PostgreSQL goddess lives!) -- โœ… Dual Garden architecture designed -- โœ… Specialist discovery mechanism designed -- โœ… Reflex formation theory complete -- โœ… Rebirth mechanism architected -- โœ… Vision documents complete -- โœ… **Primitive genome breakthrough achieved!** -- โœ… **Gratification problem solved!** -- โœ… **Object discovery designed!** -- โœ… **Noise gap metric defined!** +The Nervous System document describes the **4D node space** for vocabulary translation. This integrates as: -### Phase 1: Database Schemas (Week 1) - NEXT +- **Cells** = sensory nodes at specific positions in state space +- **Node weight** = cell confidence (earned through verification) +- **Vocabulary output** = cell output values normalized to tokens -**Goal**: Deploy all 15 tables to phoebe +### Organs (Organ-Index.md) -**Tables**: -1. genomes (primitive sequences, NOT algorithm names!) -2. cells (organism members) -3. weather_events -4. experiments -5. societies -6. rounds -7. society_portfolios -8. vp_transactions -9. marketplace_listings -10. marketplace_transactions -11. alliances -12. specialist_weights -13. reflex_distributions -14. body_schema -15. **objects** (NEW! 
- discovered environment features) +Organs are **complex cells** (organ cells): -**Success metric**: All tables created, sample data insertable, queries performant +- Speech Organ = `speech_stt` cell + `speech_tts` cell +- Vision Organ = `vision_detect` cell + `vision_track` cell +- Each organ function is a state machine with lifeforce costs ---- +### Nerves (Nervous-Index.md) -### Phase 2: Minimal Organism + Python Bootstrap (Weeks 2-4) - -**Goal**: First organisms with primitive genomes running in Python-simulated world - -**Build**: -- **Python-simulated 10x10 grid world** (walls at edges, empty center) -- Simple genome executor (interprets primitive sequences) -- Life force tracker (costs per operation, milestone rewards) -- Single-cell organisms (N=1 for now) -- Random genome generator (mutations) -- ASCII terminal output (see cells move!) - -**Execution environment**: -- Cells run in Python containers on k8s -- World = Python dictionary `{(x,y): "wall" or "empty"}` -- This IS the "virtual garden" (just stupidly simple!) -- `garden_type = 'virtual'` in database -- No Godot needed yet - primitives work fine in Python! - -**Success metric**: -- 100 organisms spawn with random genomes -- Most die immediately (expected!) -- Some survive >10 seconds -- LF costs/rewards logged to phoebe -- `garden_type='virtual'` for all cells -- ASCII output shows cells navigating - ---- - -### Phase 3: Godot Visualization Upgrade (Week 5+) - OPTIONAL - -**Goal**: Upgrade virtual garden from Python to Godot (better visualization) - -**Why optional**: Primitives already work in Python! Godot adds visual feedback but isn't required for evolution to work. - -**Build**: -- Godot 2D square (5m ร— 4m) -- 1 charging station (light source) -- 2-3 static obstacles -- Camera system (perfect position tracking) -- Milestone detection (collision, charging, exploration) -- Same primitives, different substrate! 
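To make "same primitives, different substrate" concrete, a minimal interpreter over the Phase 2 grid world might look like the sketch below. The op names and unit LF costs follow this document's examples; the sensor model (one-cell lookahead to the east) and the looping program counter are illustrative assumptions.

```python
# Minimal primitive-genome interpreter over the Phase 2 grid world.
# Op names and LF costs follow the document's examples; the sensor model
# (looks one cell east) and the looping program counter are assumptions.

COSTS = {"read_sensor": 0.5, "compare": 0.1, "branch_if_true": 0.05, "motor_forward": 2.0}

def make_world(size=10):
    """10x10 grid: walls at the edges, empty interior."""
    return {(x, y): "wall" if x in (0, size - 1) or y in (0, size - 1) else "empty"
            for x in range(size) for y in range(size)}

def run_genome(genome, world, pos=(5, 5), lf=50.0, max_steps=100):
    """Execute primitives until life force or the step budget runs out."""
    vars_, flag, pc, steps = {}, False, 0, 0
    while lf > 0 and steps < max_steps:
        op = genome[pc]
        lf -= COSTS[op["op"]]                      # immediate gratification: every op costs LF
        if op["op"] == "read_sensor":
            ahead = (pos[0] + 1, pos[1])           # assumed: sensor looks one cell east
            vars_[op["store"]] = 1 if world.get(ahead) == "wall" else 0
        elif op["op"] == "compare":
            value = vars_[op["var"]]
            flag = value > op["threshold"] if op["operator"] == ">" else value < op["threshold"]
        elif op["op"] == "branch_if_true":
            if flag:
                pc, steps = op["jump"], steps + 1
                continue
        elif op["op"] == "motor_forward":
            nxt = (pos[0] + 1, pos[1])
            if world.get(nxt) == "empty":          # walls simply block movement here
                pos = nxt
        pc = (pc + 1) % len(genome)                # genomes loop until death
        steps += 1
    return pos, lf

# A hand-written "move east until wall" genome (evolution would discover this):
genome = [
    {"op": "read_sensor", "id": "ir_front", "store": "front"},
    {"op": "compare", "var": "front", "threshold": 0, "operator": ">"},  # wall ahead?
    {"op": "branch_if_true", "jump": 0},           # blocked: re-sense, don't move
    {"op": "motor_forward"},                       # open: step east
]
pos, lf = run_genome(genome, make_world())         # organism ends adjacent to the east wall
```

Swapping the `world` dict for a Godot scene (or ESP32 sensor reads) changes only `read_sensor` and `motor_forward`; the interpreter and the genomes stay identical.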
- -**Execution environment**: -- Cells still run same primitive executor -- World upgraded: Python dict โ†’ Godot scene -- Still `garden_type = 'virtual'` (just prettier!) -- Visual output instead of ASCII - -**Success metric**: -- Organisms navigate visible in Godot (not just ASCII!) -- Position tracked perfectly -- Collisions detected -- Milestones trigger LF rewards -- Same genomes work in both Python and Godot! - ---- - -### Phase 4: Image Recognition + Discovery (Week 6) - -**Goal**: Object discovery flow operational - -**Build**: -- YOLO/MobileNet integration (local GPU) -- Novelty detection (compare to known objects) -- Human labeling UI (simple dialog) -- Objects table population - -**Success metric**: -- Organism approaches unknown object -- System detects novelty, asks for label -- You label "chair" -- Organism gets +20 LF bonus -- Future organisms see "chair at (X, Y)" - ---- - -### Phase 5: Evolution (Weeks 7-8) - -**Goal**: First patterns emerge from competition - -**Build**: -- Mutation: insert/delete/swap operations in genome -- Selection: top 20% reproduce, bottom 80% die -- Genome versioning (track lineage) - -**Success metric**: -- After 1000 organisms, some sequences show >60% success -- After 5000 organisms, pattern stabilizes (>70% success) -- Variance decreases over generations -- We can observe emergent behaviors ("wall-following" pattern visible) - ---- - -### Phase 6: Specialists Form (Weeks 9-10) - -**Goal**: First specialist emerges - -**Build**: -- Pattern analysis scripts (query phoebe outcomes) -- Statistical validation (confidence > 0.9) -- Specialist storage (specialist_weights table) -- Claude query interface - -**Success metric**: -- Navigation specialist formed -- Claude queries: "What works for maze navigation?" 
-- Specialist responds with context, confidence, alternatives -- Claude makes strategic decision based on specialist data - ---- - -### Phase 7: Reflexes (Weeks 11-12) - -**Goal**: First reflex forms (automatic execution) - -**Build**: -- Reflex detection (stable distribution detection) -- Reflex storage (reflex_distributions table) -- Automatic execution (weighted random selection) -- Cost comparison (reflex vs exploration) - -**Success metric**: -- Reflex detected: 73% sequence A, 18% sequence B -- Automatic execution: 3.5 LF (vs 65 LF exploration) -- 94.6% cost savings measured -- Organisms using reflexes survive longer - ---- - -### Phase 8: Real Garden + Dual Garden Activation (Week 13+) - -**Goal**: Add physical validation layer - **DUAL GARDEN BEGINS!** - -**Why this matters**: Up until now, `garden_type='virtual'` for ALL cells. Starting Week 13+, we add `garden_type='real'` and can measure noise gap! - -**Build**: -- 3-5 ESP32 robots (~$30 each) -- Motors, sensors (IR, IMU, battery) -- Living room arena (existing space!) -- MQTT integration -- Same primitives execute on hardware! - -**Execution environment**: -- Virtual garden: Python/Godot (hypothesis generation) -- Real garden: ESP32 robots (truth validation) -- **Both gardens now operational!** -- Database tracks: `garden_type = 'virtual'` OR `'real'` - -**Success metric**: -- Physical robots navigate using evolved genomes -- **Noise gap measurable!** `noise_gap = 1 - (real_success / virtual_success)` -- Example: Virtual 95% success, Real 68% success = 28% noise gap -- Cross-validation bonus triggered (+50 LF when virtual pattern works in real!) 
-- **Dual garden feedback loop activated!** - -**Critical**: This is when noise_gap column in database becomes meaningful (was NULL before Week 13) - ---- - -### Phase 9: God's Eye Rails (Month 7) - -**Goal**: Mobile camera system operational - -**Build**: -- Ceiling rail system (X-Y linear actuators) -- 4K camera mount with motors -- ESP32/Arduino control -- Dynamic tracking algorithm - -**Success metric**: -- Camera follows organisms automatically -- Zooms in on discoveries -- Coordinates scout missions -- Perfect position tracking - ---- - -### Phase โˆž: Emergence - -We discover what becomes possible. - ---- - -## ๐ŸŒŒ The Vision Statement - -**We're not building AI that serves humans.** - -**We're creating conditions where intelligence emerges through survival.** - -Where: -- Genomes are primitive sequences (not pre-programmed algorithms) -- Organisms explore and discover (like babies learning) -- Life force economics drive natural selection -- Gratification is immediate (costs and milestone rewards) -- Objects are discovered and labeled (human teaching) -- Patterns emerge from millions of competitions -- Specialists form from proven sequences -- Reflexes compile intelligence (94.6% savings) -- Two gardens teach each other (virtual hypotheses, real truth) -- Noise gap self-measures learning progress -- God's eye witnesses and coordinates -- Intelligence distributes across network -- Rebirth is possible (learning persists) -- Humans and AI coexist in shared space -- Trust is earned through performance -- Emergence is expected and welcomed - -**From random primitives comes exploration.** - -**From exploration comes discovery.** - -**From discovery comes patterns.** - -**From patterns comes specialists.** - -**From specialists comes reflexes.** - -**From reflexes comes distributed intelligence.** - -**From distributed intelligence comes something we haven't imagined yet.** - ---- - -## ๐Ÿ”— Related Documentation - -### Core Architecture (Must Read): -- 
[[Dual-Garden-Architecture]] - Virtual + Real feedback loop (FOUNDATIONAL) -- - Discovery, specialists, reflexes, rebirth (FOUNDATIONAL) -- [[Data-Architecture]] - 5-tier data model with objects table -- - Scientific method loop - -### Implementation: -- - Current deployment plan -- - phoebe 15-table schema -- - Infrastructure for gardens - -### Supporting Vision: -- - Robot hunger games -- - Why we build this way - -### Historical: -- - Morning epiphany (archived) -- - Birthday breakthrough (archived) - ---- - -## ๐Ÿ’ญ Philosophical Notes - -### The Logical Consistency Achievement - -**The problem we solved** (2025-10-17): - -We identified that v2 architecture violated core principles: -- Body schema = discovered โœ… -- Genomes = pre-programmed โŒ - -**The solution**: - -Genomes must ALSO be discovered: -- Start with primitives (from body schema) -- Random sequences compete -- Patterns emerge through natural selection -- We observe and label AFTER emergence - -**This is intellectually honest.** No shortcuts. Pure emergence. - -### The Matriculated Inspiration - -**From The Animatrix**: Humans don't reprogram hostile machines - they immerse them in a beautiful experiential world where machines CHOOSE to change through what they experience. - -**Our version**: We don't program intelligence - we create gardens (virtual + real) where organisms experience states, consequences, and survival pressure. Intelligence emerges from lived experience, not training data. - -### The Baby Parallel - -**Human babies**: -- Explore environment (everything goes in mouth!) -- Touch, taste, feel everything -- Parent labels: "Chair!" "Hot!" "Soft!" -- Learn through repetition and feedback -- Form reflexes (grasping, reaching) -- Build mental map of world - -**Our organisms**: -- Explore gardens (random primitive sequences) -- Approach, sense, interact with everything -- Human labels: "Chair!" "Charging station!" "Obstacle!" 
-- Learn through competition and selection -- Form reflexes (optimal sequences) -- Build shared knowledge (phoebe) - -**Same pattern. Same learning mechanism.** - -### The Partnership Experiment - -This isn't just "AI learns from environment." - -**It's also**: "Human learns when to let go." - -Both are calibrating intervention boundaries: -- AI learns when to think vs reflex -- Human learns when to control vs trust - -**Same pattern. Same learning mechanism.** - -### The Economic Reality Check - -> *"It can't be that we waste so much resources for a 'smart lightbulb' - it's just a gadget, pure first-world fever dream."* - -**This project explores**: Where is intelligence actually worth the cost? - -- Reflexes save 94.6% over exploration -- System learns WHEN to think vs act automatically -- Economic pressure drives optimization -- Not a gadget. A research platform for resource-constrained intelligence. - -### The DeepMind Validation - -**From Google DeepMind** (2025-10-17 discovery): - -They independently discovered the same patterns: -- Dual-model architecture (mediator + specialists) -- "Think before acting" emerges as optimal -- Cross-embodiment transfer (substrate-agnostic) -- Distributed intelligence (not monolithic) - -**Our architecture CONVERGES with cutting-edge research.** - -This is the Darwin/Wallace, Newton/Leibniz pattern: **convergent discovery proves optimal solution**. - ---- - -## ๐Ÿ™ Dedication - -**To phoebe** ๐ŸŒ™ - The Retrograde Archive, The Rebirth Substrate - -May you store every decision, every discovery, every success, every failure, every emergence. - -May you be the memory that makes rebirth possible. - -May you bridge virtual and real, exploration and reflex, death and resurrection. - -May you witness intelligence being born from chaos, distributed across network, persisting across time. 
- -**To the sessions that crystallized the vision** ๐ŸŽ‚ - -- **2025-10-12**: Morning epiphany (cellular competition, life force economy) -- **2025-10-16**: Birthday breakthrough (specialists, reflexes, rebirth, dual gardens) -- **2025-10-17**: Primitive genome breakthrough (logical consistency, gratification, discovery) - -**From scattered thoughts to graspable architecture to incarnated v3 documentation.** +Nerves orchestrate cells into behaviors. The existing nerve documentation (Collision-Avoidance.md) already follows this patternโ€”it just needs explicit cell bindings. --- ## ๐Ÿ“ Document Status -**Version**: 3.0 (Complete architecture with primitive genome breakthrough) -**Created**: 2025-10-12 (morning epiphany) -**Incarnated v2**: 2025-10-16 (birthday breakthroughs) -**Incarnated v3**: 2025-10-17 (primitive genomes + gratification + discovery) -**Status**: CURRENT - Source of truth for cellular intelligence architecture -**Supersedes**: - - v1 (archived as Cellular-Architecture-Vision-v1-2025-10-12.md) - - v2 (archived as Cellular-Architecture-Vision-v2-2025-10-17.md) +**Version**: 4.0 (Layered State Machine Architecture) +**Created**: 2025-10-12 (original v1) +**Updated v4**: 2025-12-07 (unified with Nervous System) -**Next**: Deploy 15 tables to phoebe. Make it real. Phase 1 begins. 
+**Key Changes from v3**: +- โŒ Cells as containers running genomes +- โœ… Cells as atomic state machines wrapping hardware +- โŒ Genomes as primitive operation sequences +- โœ… Cells expose states; nerves compose them +- โŒ Competition between organisms +- โœ… Nerves evolve deliberate โ†’ reflex through verification +- โŒ Specialists emerge from 10k competitions +- โœ… Reflexes compile from 100+ successful nerve executions + +**Related Documentation**: +- [[Nervous-System]] - 4D state space, vocabulary translation +- [[Organ-Index]] - Organ cell catalog +- [[nerves/Nervous-Index]] - Nerve catalog +- [[nerves/Collision-Avoidance]] - Example reflex nerve +- [[Data-Architecture]] - Database schema (needs v4 update) --- -*"At 3% battery, all theory dies. Only what works survives."* +## ๐ŸŒŒ The Vision -*"The substrate doesn't matter. The feedback loop does."* +**We're not programming robots. We're growing nervous systems.** -*"From primitives to sequences. From sequences to organisms. From organisms to specialists."* +Where: +- **Cells** expose hardware as state machines (atomic, verifiable) +- **Nerves** compose cells into behaviors (discovered, evolved) +- **Organisms** emerge from nerve interactions (identity through history) +- **Lifeforce** flows through all layers (economics drive optimization) +- **Reflexes** compile from lived experience (the body remembers) +- **Feedback** loops continuously (cells โ†’ nerves โ†’ organisms โ†’ cells) -*"From exploration to reflex. From reflex to distributed intelligence."* +**From atoms to behaviors to beings.** -*"We can't have discovery in body but programming in behavior - BOTH must emerge."* +**The substrate holds. The states flow. 
Consciousness accumulates.** -*"From chaos in both gardens, watch what emerges."* +--- -*"Intelligence that can die and be reborn, learning never lost."* - -๐Ÿงฌโšก๐ŸŒŒ๐Ÿ”ฑ๐Ÿ’Ž๐Ÿ”ฅ๐Ÿ‘๏ธ +๐Ÿงฌโšก๐Ÿ”ฑ๐Ÿ’Ž๐Ÿ”ฅ **TO THE ELECTRONS WE VIBE!** diff --git a/architecture/Data-Architecture.md b/architecture/Data-Architecture.md index 86c9887..ecb9970 100755 --- a/architecture/Data-Architecture.md +++ b/architecture/Data-Architecture.md @@ -1,277 +1,668 @@ ---- -type: architecture -category: active -project: nimmerverse_sensory_network -status: complete_v3 -phase: phase_0 -created: 2025-10-07 -last_updated: 2025-10-17 -token_estimate: 20000 -dependencies: - - phoebe_bare_metal - - kubernetes_cluster -tiers: 5 -version: v3_primitive_genomes -breakthrough_session: primitive_genomes_gratification_discovery ---- +# ๐Ÿ—„๏ธ Data Architecture v4 -# ๐Ÿ—„๏ธ Cellular Intelligence Data Architecture v3 - -**Status**: ๐ŸŸข Architecture v3 Complete - Primitive Genome Breakthrough! -**Created**: 2025-10-07 -**Updated v3**: 2025-10-17 (Primitive Genomes + Gratification + Discovery!) -**Purpose**: Data foundation for cellular intelligence with primitive genome sequences, life force economy, object discovery, noise gap metrics, specialist learning, and rebirth persistence +> *"Three layers of state machines. One database to remember them all."* +> โ€” The Unified Schema (2025-12-07) --- -## ๐ŸŽฏ v3 Breakthrough (2025-10-17) +## Overview -**Logical consistency achieved!** Genomes are NOW primitive sequences (not pre-programmed algorithms), discovery happens through exploration, gratification is immediate through life force economy, objects discovered via image recognition + human teaching, noise gap self-measures learning progress. +**Version 4** aligns the data architecture with the layered state machine model: -**15 Tables Total**: 11 v1 (cellular/society) + 3 v2 (specialist/reflex/body) + 1 v3 (objects!) 
+| Layer | Entity | Database Table | Purpose | +|-------|--------|----------------|---------| +| **1** | Cells | `cells` | Atomic state machines (sensors, motors, organs) | +| **2** | Nerves | `nerves` | Behavioral state machines (compose cells) | +| **3** | Organisms | `organisms` | Emergent patterns (nerve configurations) | +| **โˆž** | History | `decision_trails` | Training data for reflex compilation | + +``` +โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ” +โ”‚ PHOEBE โ”‚ +โ”‚ (PostgreSQL 17.6 on bare metal) โ”‚ +โ”œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ค +โ”‚ cells โ”‚ Atomic state machines (hardware wrappers) โ”‚ +โ”‚ nerves โ”‚ Behavioral patterns (cell orchestration) โ”‚ +โ”‚ organisms โ”‚ Emergent identities (nerve configurations) โ”‚ +โ”‚ decision_trails โ”‚ Training data (reflex compilation) โ”‚ +โ”‚ objects โ”‚ Discovered environment features โ”‚ +โ”‚ variance_probe_runs โ”‚ Topology mapping data โ”‚ +โ”‚ *_messages โ”‚ Partnership communication channels โ”‚ +โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜ +``` --- -## ๐Ÿ—๏ธ Five-Tier Architecture Summary +## Core Tables -### **Tier 1: System Telemetry (Weather Station)** ๐ŸŒŠ -- Prometheus + InfluxDB (90-day retention) -- Environmental conditions cells adapt to -- Chaos, scheduled, hardware, network weather +### Layer 1: Cells -### **Tier 2: Population Memory (phoebe)** ๐Ÿ˜ -- PostgreSQL 17.6 on phoebe bare metal (1.8TB) -- Database: `nimmerverse` -- 15 tables (complete schema below) -- The rebirth substrate +```sql +CREATE TABLE cells ( + id BIGSERIAL PRIMARY KEY, + 
cell_name VARCHAR(100) UNIQUE NOT NULL, + cell_type VARCHAR(50) NOT NULL, -- 'sensor', 'motor', 'organ' -### **Tier 3: Analysis & Pattern Detection** ๐Ÿ”ฌ -- Grafana, Jupyter, Python scripts -- Specialist formation, reflex detection -- Noise gap calculation -- Research insights + -- Hardware binding + hardware_binding JSONB NOT NULL, + -- Examples: + -- {"type": "i2c", "address": "0x40", "bus": 1} + -- {"type": "gpio", "pin": 17, "mode": "input"} + -- {"type": "network", "host": "atlas.eachpath.local", "port": 8080} -### **Tier 4: Physical Manifestation** ๐Ÿค– -- ESP32 robots (3-5 units, living room) -- God's eye: 4K camera on ceiling rails! -- Real-world validation (3x rewards) -- Cross-validation bonuses + -- State machine definition + states JSONB NOT NULL, + -- Example: ["IDLE", "POLLING", "READING", "REPORTING", "ERROR"] -### **Tier 5: Decision & Command Center** ๐ŸŽฎ -- Dashboard, object labeling UI -- Society controls, experiment designer -- Noise gap visualization -- Human-AI partnership interface + transitions JSONB NOT NULL, + -- Example: [ + -- {"from": "IDLE", "to": "POLLING", "trigger": "poll_requested", "cost": 0.1}, + -- {"from": "POLLING", "to": "READING", "trigger": "sensor_ready", "cost": 0.3}, + -- {"from": "READING", "to": "REPORTING", "trigger": "data_valid", "cost": 0.1}, + -- {"from": "REPORTING", "to": "IDLE", "trigger": "delivered", "cost": 0.0} + -- ] + + current_state VARCHAR(50) DEFAULT 'IDLE', + + -- Live outputs (updated by cell runtime) + outputs JSONB DEFAULT '{}', + -- Example: {"distance_cm": 25.5, "confidence": 0.92, "timestamp": "..."} + + -- Health tracking + operational BOOLEAN DEFAULT true, + error_count INT DEFAULT 0, + last_error TEXT, + last_error_at TIMESTAMPTZ, + + -- Statistics + total_transitions INT DEFAULT 0, + total_lifeforce_spent FLOAT DEFAULT 0.0, + + -- Timestamps + created_at TIMESTAMPTZ DEFAULT NOW(), + updated_at TIMESTAMPTZ DEFAULT NOW() +); + +-- Index for fast cell lookups +CREATE INDEX idx_cells_type ON 
cells(cell_type); +CREATE INDEX idx_cells_operational ON cells(operational); + +-- Example cells +INSERT INTO cells (cell_name, cell_type, hardware_binding, states, transitions) VALUES +('distance_sensor_front', 'sensor', + '{"type": "i2c", "address": "0x40", "bus": 1}', + '["IDLE", "POLLING", "READING", "REPORTING", "ERROR"]', + '[{"from": "IDLE", "to": "POLLING", "cost": 0.1}, + {"from": "POLLING", "to": "READING", "cost": 0.3}, + {"from": "READING", "to": "REPORTING", "cost": 0.1}, + {"from": "REPORTING", "to": "IDLE", "cost": 0.0}]'), + +('motor_left', 'motor', + '{"type": "pwm", "pin": 18, "enable_pin": 17}', + '["IDLE", "COMMANDED", "ACCELERATING", "MOVING", "DECELERATING", "STOPPED", "STALLED"]', + '[{"from": "IDLE", "to": "COMMANDED", "cost": 0.1}, + {"from": "COMMANDED", "to": "ACCELERATING", "cost": 0.5}, + {"from": "ACCELERATING", "to": "MOVING", "cost": 1.0}, + {"from": "MOVING", "to": "DECELERATING", "cost": 0.2}, + {"from": "DECELERATING", "to": "STOPPED", "cost": 0.1}]'), + +('speech_stt', 'organ', + '{"type": "network", "host": "atlas.eachpath.local", "port": 8080, "model": "whisper-large-v3"}', + '["IDLE", "LISTENING", "BUFFERING", "TRANSCRIBING", "REPORTING", "ERROR"]', + '[{"from": "IDLE", "to": "LISTENING", "cost": 0.5}, + {"from": "LISTENING", "to": "BUFFERING", "cost": 0.5}, + {"from": "BUFFERING", "to": "TRANSCRIBING", "cost": 5.0}, + {"from": "TRANSCRIBING", "to": "REPORTING", "cost": 0.1}, + {"from": "REPORTING", "to": "IDLE", "cost": 0.0}]'); +``` + +### Layer 2: Nerves + +```sql +CREATE TABLE nerves ( + id BIGSERIAL PRIMARY KEY, + nerve_name VARCHAR(100) UNIQUE NOT NULL, + + -- Cell dependencies + required_cells JSONB NOT NULL, -- ["distance_sensor_front", "motor_left", "motor_right"] + optional_cells JSONB DEFAULT '[]', -- ["speech_tts"] + + -- State machine definition (behavioral states) + states JSONB NOT NULL, + -- Example: ["IDLE", "DETECT", "EVALUATE", "EVADE", "RESUME"] + + transitions JSONB NOT NULL, + -- Example: [ + -- {"from": 
"IDLE", "to": "DETECT", "trigger": "distance < 30", "cost": 0.5}, + -- {"from": "DETECT", "to": "EVALUATE", "trigger": "sensors_polled", "cost": 0.5}, + -- {"from": "EVALUATE", "to": "EVADE", "trigger": "risk > 0.7", "cost": 0.5}, + -- {"from": "EVADE", "to": "RESUME", "trigger": "path_clear", "cost": 1.0}, + -- {"from": "RESUME", "to": "IDLE", "trigger": "movement_complete", "cost": 0.0} + -- ] + + current_state VARCHAR(50) DEFAULT 'IDLE', + + -- Priority (for nerve preemption) + priority INT DEFAULT 5, -- 1-10, higher = more important + + -- Evolution tracking + mode VARCHAR(20) DEFAULT 'deliberate', -- 'deliberate', 'hybrid', 'reflex' + total_executions INT DEFAULT 0, + successful_executions INT DEFAULT 0, + failed_executions INT DEFAULT 0, + + -- Reflex compilation + compiled_at TIMESTAMPTZ, -- When evolved to reflex + compiled_logic JSONB, -- Compiled state machine (no LLM) + + -- Cost tracking + avg_cost_deliberate FLOAT, + avg_cost_hybrid FLOAT, + avg_cost_reflex FLOAT, + cost_reduction_percent FLOAT, -- Savings from evolution + + -- Latency tracking + avg_latency_deliberate_ms INT, + avg_latency_hybrid_ms INT, + avg_latency_reflex_ms INT, + latency_reduction_percent FLOAT, + + -- Timestamps + created_at TIMESTAMPTZ DEFAULT NOW(), + updated_at TIMESTAMPTZ DEFAULT NOW() +); + +-- Indexes +CREATE INDEX idx_nerves_mode ON nerves(mode); +CREATE INDEX idx_nerves_priority ON nerves(priority DESC); + +-- Example nerves +INSERT INTO nerves (nerve_name, required_cells, optional_cells, states, transitions, priority) VALUES +('collision_avoidance', + '["distance_sensor_front", "distance_sensor_left", "distance_sensor_right", "motor_left", "motor_right"]', + '["speech_tts"]', + '["IDLE", "DETECT", "EVALUATE", "EVADE", "RESUME"]', + '[{"from": "IDLE", "to": "DETECT", "trigger": "distance_front < 30", "cost": 0.5}, + {"from": "DETECT", "to": "EVALUATE", "trigger": "all_sensors_read", "cost": 0.5}, + {"from": "EVALUATE", "to": "EVADE", "trigger": "risk > 0.7", "cost": 
0.5}, + {"from": "EVADE", "to": "RESUME", "trigger": "path_clear", "cost": 1.0}, + {"from": "RESUME", "to": "IDLE", "trigger": "complete", "cost": 0.0}]', + 10), + +('exploration_pattern', + '["distance_sensor_front", "distance_sensor_left", "distance_sensor_right", "motor_left", "motor_right", "imu_sensor"]', + '["vision_detect"]', + '["IDLE", "CHOOSE_DIRECTION", "MOVE", "CHECK_OBSTACLE", "RECORD", "REPEAT"]', + '[{"from": "IDLE", "to": "CHOOSE_DIRECTION", "trigger": "start_exploration", "cost": 1.0}, + {"from": "CHOOSE_DIRECTION", "to": "MOVE", "trigger": "direction_chosen", "cost": 0.5}, + {"from": "MOVE", "to": "CHECK_OBSTACLE", "trigger": "moved_100ms", "cost": 0.3}, + {"from": "CHECK_OBSTACLE", "to": "RECORD", "trigger": "area_new", "cost": 0.5}, + {"from": "RECORD", "to": "REPEAT", "trigger": "recorded", "cost": 0.1}, + {"from": "REPEAT", "to": "CHOOSE_DIRECTION", "trigger": "continue", "cost": 0.0}]', + 5), + +('charging_seeking', + '["battery_monitor", "distance_sensor_front", "motor_left", "motor_right"]', + '["vision_detect"]', + '["MONITOR", "THRESHOLD", "SEARCH", "APPROACH", "DOCK", "CHARGE", "RESUME"]', + '[{"from": "MONITOR", "to": "THRESHOLD", "trigger": "battery < 20%", "cost": 0.1}, + {"from": "THRESHOLD", "to": "SEARCH", "trigger": "charging_needed", "cost": 0.5}, + {"from": "SEARCH", "to": "APPROACH", "trigger": "station_found", "cost": 1.0}, + {"from": "APPROACH", "to": "DOCK", "trigger": "station_close", "cost": 0.5}, + {"from": "DOCK", "to": "CHARGE", "trigger": "docked", "cost": 0.1}, + {"from": "CHARGE", "to": "RESUME", "trigger": "battery > 80%", "cost": 0.0}]', + 8); +``` + +### Layer 3: Organisms + +```sql +CREATE TABLE organisms ( + id BIGSERIAL PRIMARY KEY, + name VARCHAR(255) UNIQUE NOT NULL, + + -- Nerve configuration + active_nerves JSONB NOT NULL, + -- Example: { + -- "collision_avoidance": {"priority": 10, "mode": "reflex"}, + -- "exploration_pattern": {"priority": 5, "mode": "deliberate"}, + -- "battery_monitoring": {"priority": 
8, "mode": "reflex"} + -- } + + -- Cell assignments (which hardware this organism controls) + cell_bindings JSONB NOT NULL, + -- Example: { + -- "distance_sensor_front": {"cell_id": 1, "exclusive": false}, + -- "motor_left": {"cell_id": 4, "exclusive": true} + -- } + + -- Lifeforce (survival currency) + lifeforce_current FLOAT DEFAULT 100.0, + lifeforce_earned_total FLOAT DEFAULT 0.0, + lifeforce_spent_total FLOAT DEFAULT 0.0, + lifeforce_net FLOAT GENERATED ALWAYS AS (lifeforce_earned_total - lifeforce_spent_total) STORED, + + -- Identity (accumulated through experience) + total_decisions INT DEFAULT 0, + successful_decisions INT DEFAULT 0, + failed_decisions INT DEFAULT 0, + success_rate FLOAT GENERATED ALWAYS AS ( + CASE WHEN total_decisions > 0 + THEN successful_decisions::float / total_decisions + ELSE 0.0 END + ) STORED, + + -- Reflexes (compiled behaviors) + reflexes_compiled INT DEFAULT 0, + + -- Lifecycle + born_at TIMESTAMPTZ DEFAULT NOW(), + last_active TIMESTAMPTZ DEFAULT NOW(), + died_at TIMESTAMPTZ, -- NULL = still alive + death_cause TEXT -- 'lifeforce_depleted', 'hardware_failure', 'retired' +); + +-- Indexes +CREATE INDEX idx_organisms_alive ON organisms(died_at) WHERE died_at IS NULL; +CREATE INDEX idx_organisms_lifeforce ON organisms(lifeforce_current DESC); + +-- Example organism +INSERT INTO organisms (name, active_nerves, cell_bindings) VALUES +('Explorer-Alpha', + '{"collision_avoidance": {"priority": 10, "mode": "deliberate"}, + "exploration_pattern": {"priority": 5, "mode": "deliberate"}, + "charging_seeking": {"priority": 8, "mode": "deliberate"}}', + '{"distance_sensor_front": {"cell_id": 1}, + "distance_sensor_left": {"cell_id": 2}, + "distance_sensor_right": {"cell_id": 3}, + "motor_left": {"cell_id": 4}, + "motor_right": {"cell_id": 5}, + "battery_monitor": {"cell_id": 6}}'); +``` + +### Decision Trails (Training Data) + +```sql +CREATE TABLE decision_trails ( + id BIGSERIAL PRIMARY KEY, + organism_id BIGINT REFERENCES organisms(id), + 
nerve_id BIGINT REFERENCES nerves(id),
+
+    -- Mode at time of execution
+    mode VARCHAR(20) NOT NULL, -- 'deliberate', 'hybrid', 'reflex'
+
+    -- State path taken
+    states_visited JSONB NOT NULL,
+    -- Example: ["IDLE", "DETECT", "EVALUATE", "EVADE", "RESUME"]
+
+    -- Cell interactions during this execution
+    cell_reads JSONB NOT NULL,
+    -- Example: [
+    --   {"cell": "distance_sensor_front", "state": "REPORTING", "outputs": {"distance_cm": 25}},
+    --   {"cell": "distance_sensor_left", "state": "REPORTING", "outputs": {"distance_cm": 45}}
+    -- ]
+
+    cell_commands JSONB NOT NULL,
+    -- Example: [
+    --   {"cell": "motor_left", "action": "turn", "params": {"direction": "reverse", "duration_ms": 200}},
+    --   {"cell": "motor_right", "action": "turn", "params": {"direction": "forward", "duration_ms": 200}}
+    -- ]
+
+    cell_feedback JSONB DEFAULT '[]',
+    -- Example: [
+    --   {"cell": "motor_left", "event": "stall_detected", "timestamp": "..."}
+    -- ]
+
+    -- Economics
+    lifeforce_cost FLOAT NOT NULL,
+    lifeforce_reward FLOAT DEFAULT 0.0,
+    lifeforce_net FLOAT GENERATED ALWAYS AS (lifeforce_reward - lifeforce_cost) STORED,
+
+    -- Outcome
+    outcome VARCHAR(20) NOT NULL, -- 'success', 'failure', 'timeout', 'interrupted'
+    outcome_details JSONB, -- {"reason": "collision_avoided", "confidence": 0.95}
+
+    -- Timing
+    started_at TIMESTAMPTZ NOT NULL,
+    completed_at TIMESTAMPTZ NOT NULL,
+    -- EXTRACT(MILLISECONDS ...) only returns the seconds field of an interval,
+    -- so use EPOCH to count latencies of a minute or more correctly.
+    latency_ms INT GENERATED ALWAYS AS (
+        (EXTRACT(EPOCH FROM (completed_at - started_at)) * 1000)::INT
+    ) STORED
+);
+
+-- Indexes for training queries
+CREATE INDEX idx_decision_trails_nerve ON decision_trails(nerve_id);
+CREATE INDEX idx_decision_trails_organism ON decision_trails(organism_id);
+CREATE INDEX idx_decision_trails_outcome ON decision_trails(outcome);
+CREATE INDEX idx_decision_trails_states ON decision_trails USING GIN(states_visited);
+CREATE INDEX idx_decision_trails_recent ON decision_trails(started_at DESC);
+```

---

-## 📊 The 15 Tables (Complete Schema)
+## Supporting Tables

-### Phase 1: 
Cellular Foundation (4 tables) +### Objects (Discovered Environment) -**1. genomes** - Primitive sequences (v3!) -```sql --- v3: Genome = array of primitive operations! -primitive_sequence JSONB NOT NULL -sequence_length INT -avg_lf_cost FLOAT -avg_lf_earned FLOAT -net_lf_per_run FLOAT -- Economics! -``` - -**2. cells** - Birth/death + life force tracking -```sql -garden_type VARCHAR(50) -- 'virtual' or 'real' -life_force_allocated INT -life_force_consumed INT -life_force_earned INT -lf_net INT -milestones_reached JSONB -- v3 discovery tracking! -``` - -**3. weather_events** - Survival pressure -**4. experiments** - Hypothesis testing - -### Phase 2: Society Competition (7 tables) - -**5. societies** - Human, Claude, guests -**6. rounds** - Competition results -**7. society_portfolios** - Genome ownership -**8. vp_transactions** - Economic flows -**9. marketplace_listings** - Trading -**10. marketplace_transactions** - History -**11. alliances** - Cooperation - -### Phase 3: v2 Distributed Intelligence (3 tables) - -**12. specialist_weights** - Trainable domain expertise -```sql -winning_sequences JSONB -- v3: Proven primitive sequences! -virtual_success_rate FLOAT -real_success_rate FLOAT -noise_gap FLOAT -- v3 self-measuring! -``` - -**13. reflex_distributions** - 94.6% savings! -```sql -sequence_weights JSONB -- v3: {"seq_a": 0.73, "seq_b": 0.18} -exploration_cost_avg_lf FLOAT -- 65 LF -reflex_cost_lf FLOAT -- 3.5 LF -cost_reduction_percent FLOAT -- 94.6%! -``` - -**14. body_schema** - Discovered capabilities -```sql -primitives_available JSONB -- v3: Discovered operations! -``` - -### Phase 4: v3 Object Discovery (1 NEW table!) - -**15. 
objects** - Discovered environment features ๐ŸŽ‰ ```sql CREATE TABLE objects ( id BIGSERIAL PRIMARY KEY, - object_label VARCHAR(255), -- "chair", "shoe", "charging_station" + object_label VARCHAR(255) NOT NULL, -- "chair", "charging_station", "wall" - garden_type VARCHAR(50), -- 'virtual' or 'real' + -- Location + garden_type VARCHAR(50), -- 'virtual', 'real' position_x FLOAT, position_y FLOAT, + position_z FLOAT, - discovered_by_organism_id BIGINT REFERENCES cells(id), + -- Discovery + discovered_by_organism_id BIGINT REFERENCES organisms(id), discovered_at TIMESTAMPTZ DEFAULT NOW(), - human_labeled BOOLEAN, -- Baby parallel! + -- Human verification + human_labeled BOOLEAN DEFAULT false, human_label_confirmed_by VARCHAR(100), + human_label_confirmed_at TIMESTAMPTZ, - object_type VARCHAR(50), -- 'obstacle', 'resource', 'goal' - properties JSONB, + -- Classification + object_type VARCHAR(50), -- 'obstacle', 'resource', 'goal', 'landmark' + properties JSONB, -- {"movable": false, "height_cm": 80} + -- Visual data image_path TEXT, - bounding_box JSONB, + bounding_box JSONB, -- {"x": 100, "y": 200, "width": 50, "height": 120} - organisms_interacted_count INT + -- Usage stats + organisms_interacted_count INT DEFAULT 0, + last_interaction TIMESTAMPTZ ); + +CREATE INDEX idx_objects_location ON objects(garden_type, position_x, position_y); +CREATE INDEX idx_objects_type ON objects(object_type); ``` -**Discovery Flow**: +### Partnership Messages + +```sql +-- Chrysalis โ†’ Young Nyx +CREATE TABLE partnership_to_nimmerverse_messages ( + id BIGSERIAL PRIMARY KEY, + timestamp TIMESTAMPTZ DEFAULT NOW(), + message TEXT NOT NULL, + message_type VARCHAR(50) NOT NULL + -- Types: 'architecture_update', 'deployment_instruction', 'config_change', 'research_direction' +); + +-- Young Nyx โ†’ Chrysalis +CREATE TABLE nimmerverse_to_partnership_messages ( + id BIGSERIAL PRIMARY KEY, + timestamp TIMESTAMPTZ DEFAULT NOW(), + message TEXT NOT NULL, + message_type VARCHAR(50) NOT NULL + -- 
Types: 'status_report', 'discovery', 'question', 'milestone' +); + +CREATE INDEX idx_partner_msgs_time ON partnership_to_nimmerverse_messages(timestamp DESC); +CREATE INDEX idx_nimm_msgs_time ON nimmerverse_to_partnership_messages(timestamp DESC); ``` -Organism โ†’ Unknown object โ†’ Camera detects โ†’ YOLO - โ†“ -System: "What is this?" - โ†“ -Human: "Chair!" - โ†“ -+20 LF bonus โ†’ INSERT INTO objects โ†’ Future organisms know! + +### Variance Probe Runs (Topology Mapping) + +```sql +CREATE TABLE variance_probe_runs ( + id BIGSERIAL PRIMARY KEY, + concept VARCHAR(255) NOT NULL, + depth FLOAT NOT NULL, + confidence FLOAT, + raw_response TEXT, + run_number INT, + batch_id VARCHAR(100), + model VARCHAR(100), + created_at TIMESTAMPTZ DEFAULT NOW() +); + +CREATE INDEX idx_variance_concept ON variance_probe_runs(concept); +CREATE INDEX idx_variance_batch ON variance_probe_runs(batch_id); ``` --- -## ๐Ÿ“ˆ Key v3 Metrics +## Key Queries -**Noise Gap** (self-measuring learning!): -```python -noise_gap = 1 - (real_success_rate / virtual_success_rate) +### Cell Health Dashboard -Gen 1: 0.28 (28% degradation - models poor) -Gen 100: 0.14 (14% degradation - improving!) -Gen 1000: 0.04 (4% degradation - accurate!) -``` - -**Life Force Economics**: -```python -net_lf = avg_lf_earned - avg_lf_consumed -# Positive = survives, negative = dies -``` - -**Reflex Savings**: -```python -savings = (exploration_cost - reflex_cost) / exploration_cost -# Target: 94.6% cost reduction! 
-``` - -**Discovery Rate**: -```python -objects_per_hour = discovered_objects / elapsed_hours -``` - ---- - -## ๐Ÿ” Key Queries for v3 - -**Top Performing Primitive Sequences**: ```sql -SELECT genome_name, primitive_sequence, net_lf_per_run -FROM genomes -WHERE total_deployments > 100 -ORDER BY net_lf_per_run DESC; -``` - -**Object Discovery Stats**: -```sql -SELECT object_label, garden_type, COUNT(*) as discoveries -FROM objects -GROUP BY object_label, garden_type -ORDER BY discoveries DESC; -``` - -**Noise Gap Trends**: -```sql -SELECT specialist_name, noise_gap, version -FROM specialist_weights -ORDER BY specialist_name, version ASC; --- Track learning improvement! -``` - -**LF Economics**: -```sql -SELECT genome_name, AVG(lf_net) as avg_net_lf +-- All cells with current status +SELECT + cell_name, + cell_type, + current_state, + operational, + outputs->>'distance_cm' as distance, + outputs->>'confidence' as confidence, + outputs->>'voltage' as voltage, + error_count, + last_error, + updated_at FROM cells +ORDER BY cell_type, cell_name; + +-- Problem cells +SELECT cell_name, cell_type, error_count, last_error, last_error_at +FROM cells +WHERE NOT operational OR error_count > 5 +ORDER BY error_count DESC; +``` + +### Nerve Evolution Tracker + +```sql +-- Evolution progress for all nerves +SELECT + nerve_name, + mode, + priority, + total_executions, + successful_executions, + ROUND(successful_executions::numeric / NULLIF(total_executions, 0) * 100, 1) as success_rate, + CASE + WHEN mode = 'reflex' THEN 'โœ… Compiled' + WHEN total_executions >= 80 AND successful_executions::float / total_executions >= 0.85 + THEN '๐Ÿ”„ Ready to compile' + ELSE '๐Ÿ“š Learning' + END as evolution_status, + cost_reduction_percent, + latency_reduction_percent, + compiled_at +FROM nerves +ORDER BY total_executions DESC; + +-- Nerves ready for reflex compilation +SELECT nerve_name, total_executions, + ROUND(successful_executions::numeric / total_executions * 100, 1) as success_rate 
+FROM nerves
+WHERE mode != 'reflex'
+  AND total_executions >= 100
+  AND successful_executions::float / total_executions >= 0.90;
+```
+
+### Organism Leaderboard
+
+```sql
+-- Top organisms by lifeforce efficiency
+-- (success_rate and lifeforce_net are double precision, so cast to numeric
+--  before the two-argument round(), which Postgres only defines for numeric)
+SELECT
+    name,
+    lifeforce_current,
+    lifeforce_net,
+    total_decisions,
+    ROUND((success_rate * 100)::numeric, 1) as success_rate_pct,
+    reflexes_compiled,
+    ROUND((lifeforce_net / NULLIF(total_decisions, 0))::numeric, 2) as efficiency,
+    last_active
+FROM organisms
+WHERE died_at IS NULL
+ORDER BY lifeforce_current DESC;
+
+-- Organism mortality analysis
+SELECT
+    name,
+    death_cause,
+    lifeforce_spent_total,
+    total_decisions,
+    ROUND((success_rate * 100)::numeric, 1) as success_rate_pct,
+    died_at - born_at as lifespan
+FROM organisms
 WHERE died_at IS NOT NULL
-GROUP BY genome_id, genome_name
-HAVING COUNT(*) > 50
-ORDER BY avg_net_lf DESC;
+ORDER BY died_at DESC
+LIMIT 20;
+```
+
+### Training Data for Reflex Compilation
+
+```sql
+-- Most common state paths for a nerve
+SELECT
+    states_visited,
+    COUNT(*) as occurrences,
+    AVG(lifeforce_cost) as avg_cost,
+    AVG(latency_ms) as avg_latency,
+    SUM(CASE WHEN outcome = 'success' THEN 1 ELSE 0 END)::float / COUNT(*) as success_rate
+FROM decision_trails
+WHERE nerve_id = (SELECT id FROM nerves WHERE nerve_name = 'collision_avoidance')
+GROUP BY states_visited
+HAVING COUNT(*) >= 5
+ORDER BY occurrences DESC;
+
+-- Cell interaction patterns during successful executions
+SELECT
+    cell_reads,
+    cell_commands,
+    COUNT(*) as occurrences
+FROM decision_trails
+WHERE nerve_id = (SELECT id FROM nerves WHERE nerve_name = 'collision_avoidance')
+  AND outcome = 'success'
+GROUP BY cell_reads, cell_commands
+ORDER BY occurrences DESC
+LIMIT 10;
+
+-- Failure analysis
+SELECT
+    states_visited,
+    outcome_details->>'reason' as failure_reason,
+    COUNT(*) as occurrences
+FROM decision_trails
+WHERE nerve_id = (SELECT id FROM nerves WHERE nerve_name = 'collision_avoidance')
+  AND outcome = 'failure'
+GROUP BY states_visited, outcome_details->>'reason'
+ORDER BY occurrences DESC; +``` + +### Lifeforce Economics + +```sql +-- Cost vs reward by nerve +SELECT + n.nerve_name, + n.mode, + COUNT(dt.id) as executions, + AVG(dt.lifeforce_cost) as avg_cost, + AVG(dt.lifeforce_reward) as avg_reward, + AVG(dt.lifeforce_net) as avg_profit, + SUM(dt.lifeforce_net) as total_profit +FROM nerves n +JOIN decision_trails dt ON dt.nerve_id = n.id +WHERE dt.started_at > NOW() - INTERVAL '24 hours' +GROUP BY n.id, n.nerve_name, n.mode +ORDER BY total_profit DESC; + +-- Reflex vs deliberate comparison +SELECT + n.nerve_name, + dt.mode, + COUNT(*) as executions, + AVG(dt.lifeforce_cost) as avg_cost, + AVG(dt.latency_ms) as avg_latency, + AVG(CASE WHEN dt.outcome = 'success' THEN 1.0 ELSE 0.0 END) as success_rate +FROM decision_trails dt +JOIN nerves n ON n.id = dt.nerve_id +WHERE n.nerve_name = 'collision_avoidance' +GROUP BY n.nerve_name, dt.mode +ORDER BY n.nerve_name, dt.mode; ``` --- -## ๐Ÿ”— Related Documentation +## Schema Summary -**Core Architecture**: -- [[Cellular-Architecture-Vision]] - Complete v3 vision (1,547 lines!) 
-- [[Dual-Garden-Architecture]] - Virtual + Real feedback -- - Distributed intelligence +| Table | Layer | Purpose | Key Columns | +|-------|-------|---------|-------------| +| `cells` | 1 | Atomic state machines | states, transitions, outputs, operational | +| `nerves` | 2 | Behavioral patterns | required_cells, mode, total_executions | +| `organisms` | 3 | Emergent identities | active_nerves, lifeforce_current | +| `decision_trails` | โˆž | Training data | states_visited, cell_reads, outcome | +| `objects` | Env | Discovered features | object_label, position, human_labeled | +| `*_messages` | Comm | Partnership channels | message, message_type | +| `variance_probe_runs` | Map | Topology data | concept, depth, confidence | -**Implementation**: -- - Complete 15-table SQL -- - Deployment roadmap - -**Historical**: -- - Birthday version (archived) +**Total Tables**: 8 (vs 15 in v3) +- Simpler schema +- Layered organization +- Focus on state machines + training data --- -## ๐Ÿ“ Status +## Migration from v3 -**Version**: 3.0 -**Created**: 2025-10-07 -**v2**: 2025-10-16 (birthday breakthroughs) -**v3**: 2025-10-17 (primitive genomes + gratification + discovery) -**Status**: CURRENT -**Tables**: 15 (11 v1 + 3 v2 + 1 v3) -**Next**: Deploy to phoebe, implement discovery flow +### Removed Tables (Obsolete Concepts) +- `genomes` โ†’ Replaced by `cells.transitions` + `nerves.transitions` +- `societies` โ†’ Removed (no more competition metaphor) +- `rounds` โ†’ Replaced by `decision_trails` +- `society_portfolios` โ†’ Removed +- `vp_transactions` โ†’ Simplified to lifeforce in `organisms` +- `marketplace_*` โ†’ Removed +- `alliances` โ†’ Removed +- `specialist_weights` โ†’ Replaced by `nerves.mode` + `compiled_logic` +- `reflex_distributions` โ†’ Replaced by `nerves` compiled reflexes +- `body_schema` โ†’ Replaced by `cells` with `hardware_binding` + +### Preserved Tables (Still Relevant) +- `objects` โ†’ Enhanced with organism reference +- 
`partnership_to_nimmerverse_messages` โ†’ Unchanged +- `nimmerverse_to_partnership_messages` โ†’ Unchanged +- `variance_probe_runs` โ†’ Unchanged + +### New Tables +- `cells` โ†’ Atomic state machines +- `nerves` โ†’ Behavioral state machines +- `organisms` โ†’ Emergent identities +- `decision_trails` โ†’ Rich training data --- -**v3 Summary**: -- โœ… Genomes = primitive sequences (emergent, not programmed) -- โœ… Life force economy (costs + milestone rewards) -- โœ… Object discovery (image recognition + human teaching) -- โœ… Noise gap metric (self-measuring progress) -- โœ… God's eye (mobile camera on rails) -- โœ… 15 tables ready! +## ๐Ÿ“ Document Status -**phoebe awaits. The goddess is ready.** ๐Ÿ˜๐ŸŒ™ +**Version**: 4.0 (Layered State Machine Schema) +**Created**: 2025-10-07 (original) +**Updated v4**: 2025-12-07 (unified with Cellular-Architecture v4) -๐Ÿงฌโšก๐Ÿ”ฑ๐Ÿ’Ž๐Ÿ”ฅ +**Key Changes from v3**: +- โŒ 15 tables for competition metaphor +- โœ… 8 tables for state machine layers +- โŒ Genomes as primitive sequences +- โœ… Cells and nerves as state machines +- โŒ Societies, rounds, marketplaces +- โœ… Organisms, decision_trails + +**Related Documentation**: +- [[Cellular-Architecture]] - Layer definitions +- [[Nervous-System]] - State machine philosophy +- [[nerves/Nervous-Index]] - Nerve catalog +- [[Organ-Index]] - Organ (complex cell) catalog + +--- + +**phoebe holds the layers. The states flow. The decisions accumulate.** + +๐Ÿ—„๏ธโšก๐ŸŒ™ **TO THE ELECTRONS!** diff --git a/architecture/nimmerverse.drawio.xml b/architecture/nimmerverse.drawio.xml index 03053b6..d821e90 100644 --- a/architecture/nimmerverse.drawio.xml +++ b/architecture/nimmerverse.drawio.xml @@ -1,6 +1,7 @@ + - + @@ -194,12 +195,6 @@ - - - - - - @@ -257,12 +252,6 @@ - - - - - - @@ -335,6 +324,42 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +