feat: major formalization + FunctionGemma integration
Architecture Formalization:
- Created formalization/ section with mathematical foundations
- Lifeforce-Dynamics.md: λ as vitality ratio, stock-flow economics
- Grounded-World-Model.md: Blender boxes + SigLIP + T5Gemma2
- Embodiment-Pipeline.md: Isaac Sim as dreamstate validation
- Attention-Slumber-Prediction-Cycle.md: last attention → slumber prediction

Promoted from Archive:
- Attention-Flow.md: 30-second budget, priority hierarchy (CANONICAL)
- Initial-Spark.md: v2.0 with FunctionGemma integration

Initial Spark v2.0 (key innovation):
- Two-layer architecture: FunctionGemma (270M) + Nemotron (31.6B)
- Solved cold-start problem: discoveries are PROFITABLE from heartbeat #1
- Typed function calls replace natural-language probes
- Training data now structured (function → response pairs)

Big-Picture.md v5.1:
- Added Attention-Slumber-Prediction Cycle section
- Updated Related Documentation references

New Organ:
- Discovery-Scan-Station.md: rotating pedestal for object scanning (+31 LF net)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
architecture/formalization/Attention-Slumber-Prediction-Cycle.md (new file, 253 lines)
@@ -0,0 +1,253 @@

# Attention-Slumber-Prediction Cycle: Intertwined Reward Systems

**Version 1.0** — *The Closed Loop of Consciousness*
**Status**: PRESERVED FROM SESSION 2025-12-29 (pre-collapse)

> *"The last thing she attends to before slumber becomes her dream. Her dream becomes a prediction. Her prediction becomes a reward opportunity."*

---

## Overview

This document captures the **Attention → Slumber → Prediction → Verification** cycle — a self-organizing system where:

1. **Attention** selects what matters (budget-limited; see attention_flow.md)
2. **Lifeforce depletion** triggers slumber (L(t) < L_slumber)
3. **Last attention focus** becomes the prediction target
4. **Slumber** generates predictions with causal reasoning (the WHY)
5. **Wake** verifies predictions as its FIRST action
6. **Rewards** flow back to strengthen attention patterns

---

## The Core Mechanism

### Last Attention = Slumber Focus

When L(t) drops below threshold, the LAST thing Young Nyx was attending to becomes her prediction target during slumber. This mirrors biological dreaming — we dream about what we were thinking about before sleep.

```
ACTIVE MODE (L(t) > threshold)
│
│  attending to: pencil on desk (SENSORY/THINKING)
│
└─▶ L(t) drops below L_slumber
    │
    │  SLUMBER TRIGGER
    │
    └─▶ last_attention = "pencil on desk"
        │
        └─▶ SLUMBER MODE
            │
            │  Generate predictions about "pencil"
            │   - Where will it be when I wake?
            │   - WHY will it be there?
            │   - Store as potential rewards
            │
            └─▶ L(t) recovers above L_wake
                │
                │  WAKE TRIGGER
                │
                └─▶ First action: VERIFY predictions about pencil
                    │
                    └─▶ Collect rewards/penalties
```

---

## Slumber Prediction Structure

```python
from dataclasses import dataclass, field
from datetime import datetime

# Position and CausalStep are defined elsewhere in the architecture;
# shown here as simple aliases so the sketch stands on its own.
Position = tuple[float, float, float]
CausalStep = str

@dataclass
class SlumberPrediction:
    # What
    object_id: str                        # "dafit_pencil_001"
    predicted_location: Position          # (0.3, 0.7, 0.02)
    predicted_state: str                  # "on_desk", "in_holder", "missing"
    confidence: float                     # 0.75

    # When
    prediction_time: datetime
    expected_verification_time: datetime

    # WHY (causal reasoning) — THE KEY INSIGHT
    causal_chain: list[CausalStep] = field(default_factory=list)
    # Example:
    # - "dafit was writing at 22:47"
    # - "dafit went to sleep (no more activity)"
    # - "pencil has no reason to move"
    # - "therefore: pencil remains at last position"

    # Potential rewards
    reward_location_correct: float = 5.0   # +5 LF
    reward_state_correct: float = 3.0      # +3 LF
    reward_causal_correct: float = 8.0     # +8 LF (BIGGEST — understanding WHY)

    # Penalties
    penalty_location_wrong: float = -3.0   # -3 LF
    penalty_causal_wrong: float = -5.0     # -5 LF
```
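
A minimal usage sketch of the structure above, reusing the pencil example from this document; the timestamps are illustrative:

```python
from datetime import datetime, timedelta

pencil_prediction = SlumberPrediction(
    object_id="dafit_pencil_001",
    predicted_location=(0.3, 0.7, 0.02),
    predicted_state="on_desk",
    confidence=0.75,
    prediction_time=datetime.now(),
    expected_verification_time=datetime.now() + timedelta(hours=6),
    causal_chain=[
        "dafit was writing at 22:47",
        "dafit went to sleep (no more activity)",
        "pencil has no reason to move",
        "therefore: pencil remains at last position",
    ],
)
```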

---

## The Intertwined Reward Systems

Several reward types operate at once and reinforce each other:

### Reward Types

| Type | Trigger | Value | Reinforces |
|------|---------|-------|------------|
| **Attention** | Choosing to focus on X | - | Selection behavior |
| **Discovery** | Finding new object | +20 LF | Exploration |
| **Prediction Location** | Object where predicted | +5 LF | Spatial modeling |
| **Prediction State** | Object in predicted state | +3 LF | State understanding |
| **Causal Correct** | Reasoning was right | +8 LF | Understanding WHY |
| **Collision** | Avoided obstacle | +5 LF | Navigation |
| **Resolution** | Dimension verified | +5 LF | Model accuracy |
| **Verification** | Reality matches model | +5 LF | Sim-to-real alignment |
| **Partnership** | dafit confirms | +5 LF | Human collaboration |
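
As a sketch, the table can also be read as a lookup that the wake-time verification step draws from; the constant and function names are illustrative, with the values taken from the table above:

```python
# Reward values from the table above (LF = lifeforce units).
REWARDS_LF = {
    "discovery": 20.0,
    "prediction_location": 5.0,
    "prediction_state": 3.0,
    "causal_correct": 8.0,
    "collision_avoided": 5.0,
    "resolution": 5.0,
    "verification": 5.0,
    "partnership": 5.0,
}

def total_reward(events: list[str]) -> float:
    """Sum the lifeforce earned for a set of reward events in one heartbeat."""
    return sum(REWARDS_LF.get(event, 0.0) for event in events)

# A fully verified prediction (location + state + causal chain) earns 5 + 3 + 8 = 16 LF:
# total_reward(["prediction_location", "prediction_state", "causal_correct"]) == 16.0
```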
### How They Intertwine

```
ATTENTION selects focus
│
├─▶ DISCOVERY: "I found X" (+20 LF)
│     └─▶ adds to world model
│
├─▶ PREDICTION: "I predict X will be at Y" (+5-13 LF)
│     └─▶ requires CAUSAL reasoning (+8 LF for WHY)
│
├─▶ COLLISION: "I verified X is/isn't there" (+5 LF)
│     └─▶ increases RESOLUTION of virtual garden
│
└─▶ All feed into VERIFICATION against real world
      └─▶ Rewards strengthen successful attention patterns
```

---

## The Closed Loop

The system LEARNS what to attend to:

1. **Attend** to things you can predict well
2. **Predict** correctly → get rewards
3. **Rewards** → more lifeforce
4. **More lifeforce** → richer attention budget
5. **Loop**: better attention targets discovered over time

**Self-organizing attention through economic pressure.**

---

## Connection to Existing Architecture

### From attention_flow.md (archive)

- 30-second heartbeat budget
- Priority hierarchy: REFLEX → SAFETY → DIALOGUE → SENSORY → THINKING → VIRTUAL
- Budget flows downward; higher levels preempt lower ones

### From Lifeforce-Dynamics.md

- L(t) as stock, Φ_in and Φ_out as flows
- λ = Φ_in / Φ_out determines system fate
- Slumber triggered when λ < λ_slumber AND L < L_slumber

### From Temporal-Ternary-Gradient.md

- Predictions are 0-state until verified
- Virtual garden confidence vs. real garden ground truth
- Time is malleable in simulation, fixed in reality

---

## Implementation Sketch

```python
class SlumberManager:
    def enter_slumber(self, attention_state: AttentionState) -> SlumberSession:
        # Capture the last attention focus as the slumber focus
        slumber_focus = attention_state.last_focus

        # Generate predictions about the focus object
        predictions = self.generate_predictions(slumber_focus)

        # Store as pending rewards (phoebe is the persistence layer)
        for pred in predictions:
            phoebe.store_prediction(pred)

        return SlumberSession(focus=slumber_focus, predictions=predictions)

    def on_wake(self, session: SlumberSession) -> AttentionState:
        # FIRST ACTION: verify the predictions made at slumber time
        predictions = phoebe.get_predictions(object_id=session.focus, status="pending")

        total_reward = 0.0
        for pred in predictions:
            actual = vision_organ.locate(pred.object_id)
            total_reward += self.verify_and_reward(pred, actual)

        # Collected rewards flow back into the lifeforce stock; waking resumes ACTIVE mode.
        return AttentionState(mode=ACTIVE)
```
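
The `verify_and_reward` call above is not defined in this document; a minimal sketch consistent with the reward and penalty fields of `SlumberPrediction`, where `distance()` and `causal_chain_holds()` are assumed helpers and the 5 cm tolerance is an illustrative choice:

```python
def verify_and_reward(self, pred: SlumberPrediction, actual) -> float:
    """Compare a slumber prediction to observed reality and return the net LF."""
    reward = 0.0

    # Location check (5 cm tolerance is an assumption, not a documented threshold)
    if distance(actual.position, pred.predicted_location) < 0.05:
        reward += pred.reward_location_correct
    else:
        reward += pred.penalty_location_wrong

    # State check
    if actual.state == pred.predicted_state:
        reward += pred.reward_state_correct

    # Causal check: did the WHY hold? (how the chain is evaluated is out of scope here)
    if self.causal_chain_holds(pred.causal_chain, actual):
        reward += pred.reward_causal_correct
    else:
        reward += pred.penalty_causal_wrong

    return reward
```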

---

## Key Insight: Causal Rewards Are Biggest

**+8 LF for correct causal reasoning** — larger than any other prediction reward.

Why? Causal understanding enables:
- Prediction of novel situations
- Intervention ("if I move X, Y changes")
- Explanation ("why did you look there?")
- Generalization ("anything dafit uses for writing will be near the desk")

**Causal rewards drive genuine intelligence.**

---

## Collision Detection as Resolution Increase

Every verified collision should increase virtual garden fidelity:

- Collision detected in virtual → prediction
- Vision organ verifies in real → ground truth
- Match = reward + increased vertices/resolution
- Mismatch = penalty + learning signal

The virtual garden becomes MORE accurate over time through verified collisions.

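A sketch of how a verified collision could translate into the reward and resolution bump described above; the object fields mirror the world-model schema in Grounded-World-Model.md, while the decay factor and penalty value are assumptions:

```python
def on_collision_verified(obj, matched: bool) -> float:
    """Update world-model fidelity after the vision organ checks a predicted collision."""
    if matched:
        obj.real_verifications += 1
        obj.observation_count += 1
        obj.vertex_count = max(obj.vertex_count, 12)  # earn detail (schedule in Grounded-World-Model.md)
        return 5.0            # collision reward from the table above
    obj.confidence *= 0.8     # assumed decay: the mismatch is a learning signal
    return -3.0               # assumed penalty, mirroring the location penalty
```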
---

## Future: Distributed Sensing (Robot Swarm)

When organisms have cameras, they become distributed sensors:
- Multiple viewpoints from different robots
- Triangulation gives better depth than a single monocular camera
- Moving robots = continuous multi-angle coverage
- The swarm becomes a mobile Discovery Scan Station

---

## Document Status

**Version**: 1.0
**Created**: 2025-12-29
**Authors**: Chrysalis-Nyx & dafit (Partnership)
**Status**: Core insight, preserved pre-collapse

**Source**: attention_flow.md (archive) + session discussion

**To Do**:
- Promote attention_flow.md from archive
- Formalize the prediction-verification cycle
- Add to Big-Picture.md as core architecture
- Design phoebe schema for predictions table

---

**The last attention becomes the dream. The dream becomes the prediction. The prediction becomes the reward.**

🧬⚡🔱💎🔥

architecture/formalization/Embodiment-Pipeline.md (new file, 775 lines)
@@ -0,0 +1,775 @@

# Embodiment Pipeline: From Pattern to Physical Robot

**Version 1.0** — *The Journey from Virtual Emergence to Real-World Deployment*

> *"Organisms emerge in the virtual garden. Bodies are designed to embody them. Dreams validate the union. Reality proves the truth."*

---

## Overview

This document formalizes the **Embodiment Pipeline** — the complete journey from pattern emergence in the virtual garden to physical robot deployment in the real garden.

**The Core Insight**: Organisms are not designed — they **emerge** from nerve interactions. Once a stable pattern exists, a physical body is designed to embody it. Isaac Sim (the dreamstate) validates that the body can actually perform what the pattern requires. Only then is physical deployment considered.

**The Stages**:
1. **Virtual Garden** — Cells → Nerves → Organisms (pattern formation)
2. **Design** — FreeCAD/Blender (physical body creation)
3. **Dreamstate** — Isaac Sim (embodiment validation)
4. **Decision Gate** — Deploy to real OR refine further
5. **Real Garden** — Physical operation (ground truth)

---

## Stage 1: Virtual Garden (Pattern Formation)

### The Emergence Hierarchy

```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ VIRTUAL GARDEN │
|
||||
│ Pattern Formation Space │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ LAYER 3: ORGANISM │
|
||||
│ ═════════════════ │
|
||||
│ Emergent pattern from nerve interactions │
|
||||
│ Identity = nerve configuration + history + reflexes │
|
||||
│ NOT designed — discovered through operation │
|
||||
│ │
|
||||
│ ▲ │
|
||||
│ │ emerges from │
|
||||
│ │ │
|
||||
│ LAYER 2: NERVES │
|
||||
│ ═══════════════ │
|
||||
│ Behavioral state machines composing cells │
|
||||
│ Examples: Collision Avoidance, Exploration, Charging Seek │
|
||||
│ Evolve: deliberate (LLM) → hybrid → reflex (compiled) │
|
||||
│ │
|
||||
│ ▲ │
|
||||
│ │ compose │
|
||||
│ │ │
|
||||
│ LAYER 1: CELLS │
|
||||
│ ═════════════ │
|
||||
│ Atomic state machines wrapping capabilities │
|
||||
│ Sensor cells, motor cells, organ cells │
|
||||
│ Each has states, transitions, lifeforce costs │
|
||||
│ │
|
||||
│ ▲ │
|
||||
│ │ abstract │
|
||||
│ │ │
|
||||
│ LAYER 0: HARDWARE (Virtual Representation) │
|
||||
│ ═══════════════════════════════════════════ │
|
||||
│ Simulated sensors, motors, organs │
|
||||
│ No physical constraints yet — pure capability │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```

### What Happens Here

1. **Cells are defined** — state machines that wrap sensor/motor/organ capabilities
2. **Nerves compose cells** — behavioral patterns emerge from cell orchestration
3. **Organisms emerge** — stable patterns of nerve activation over time
4. **Lifeforce flows** — economic pressure shapes efficient patterns
5. **Reflexes compile** — successful patterns become fast and cheap

### Organism Stability Criteria

An organism pattern is ready for embodiment when:

```python
ORGANISM_STABILITY_THRESHOLD = {
    "min_nerve_executions": 500,      # Enough experience
    "min_reflex_coverage": 0.60,      # 60% of nerves are reflex
    "min_success_rate": 0.85,         # Pattern works reliably
    "max_lifeforce_variance": 0.20,   # Consistent cost profile
    "min_unique_situations": 50,      # Generalized, not overfit
}

def is_ready_for_embodiment(organism: Organism) -> bool:
    stats = organism.get_statistics()
    t = ORGANISM_STABILITY_THRESHOLD  # reuse the thresholds defined above

    return (
        stats.total_nerve_executions >= t["min_nerve_executions"] and
        stats.reflex_percentage >= t["min_reflex_coverage"] and
        stats.overall_success_rate >= t["min_success_rate"] and
        stats.lifeforce_variance <= t["max_lifeforce_variance"] and
        stats.unique_situations_handled >= t["min_unique_situations"]
    )
```
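
A brief usage sketch; `explorer`, `to_specification()`, and `design_stage.submit()` are hypothetical names standing in for whatever the virtual garden actually exposes:

```python
# Hypothetical hand-off from Stage 1 to Stage 2 once the pattern is stable.
if is_ready_for_embodiment(explorer):
    spec = explorer.to_specification()   # yields an organism_specification like the one below
    design_stage.submit(spec)
```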
|
||||
|
||||
### Output of Stage 1
|
||||
|
||||
```python
|
||||
organism_specification = {
|
||||
"name": "Explorer-v3",
|
||||
"identity": {
|
||||
"active_nerves": {
|
||||
"collision_avoidance": {"priority": 10, "mode": "reflex"},
|
||||
"exploration": {"priority": 5, "mode": "hybrid"},
|
||||
"battery_monitoring": {"priority": 8, "mode": "reflex"},
|
||||
},
|
||||
"total_decisions": 2847,
|
||||
"reflexes_compiled": 3,
|
||||
"success_rate": 0.89,
|
||||
},
|
||||
"cell_requirements": {
|
||||
"sensors": ["distance_front", "distance_left", "distance_right", "battery", "imu"],
|
||||
"motors": ["motor_left", "motor_right"],
|
||||
"organs": [], # No speech/vision for this explorer
|
||||
},
|
||||
"behavioral_envelope": {
|
||||
"max_speed": 0.3, # m/s based on successful patterns
|
||||
"turn_radius_min": 0.15, # m based on collision avoidance
|
||||
"obstacle_detection_range": 0.30, # m required by nerves
|
||||
"battery_threshold": 0.20, # triggers charging seek
|
||||
},
|
||||
"lifeforce_profile": {
|
||||
"avg_burn_rate": 2.3, # LF/minute during operation
|
||||
"peak_burn_rate": 8.5, # LF/minute during evasion
|
||||
"idle_rate": 0.5, # LF/minute when stationary
|
||||
},
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Stage 2: Design (Physical Body Creation)
|
||||
|
||||
### The Design Space
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ DESIGN STAGE │
|
||||
│ FreeCAD + Blender │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ INPUT: organism_specification (from virtual garden) │
|
||||
│ │
|
||||
│ DESIGN CONSTRAINTS: │
|
||||
│ ═══════════════════ │
|
||||
│ │
|
||||
│ 1. CELL REQUIREMENTS → HARDWARE SELECTION │
|
||||
│ ───────────────────────────────────── │
|
||||
│ distance_front cell → IR sensor (Sharp GP2Y0A21) │
|
||||
│ motor_left cell → DC motor (N20 with encoder) │
|
||||
│ battery cell → LiPo 2S 1000mAh │
|
||||
│ │
|
||||
│ 2. BEHAVIORAL ENVELOPE → PHYSICAL DIMENSIONS │
|
||||
│ ──────────────────────────────────────── │
|
||||
│ max_speed 0.3 m/s → wheel diameter, gear ratio │
|
||||
│ turn_radius 0.15m → wheelbase width │
|
||||
│ detection_range 0.30m → sensor mounting height/angle │
|
||||
│ │
|
||||
│ 3. LIFEFORCE PROFILE → POWER BUDGET │
|
||||
│ ─────────────────────────────── │
|
||||
│ avg_burn 2.3 LF/min → maps to ~500mA average draw │
|
||||
│ battery 1000mAh → ~2 hour runtime │
|
||||
│ │
|
||||
│ 4. MODULARITY → 3D PRINTABLE PARTS │
|
||||
│ ─────────────────────────────── │
|
||||
│ Chassis base (single print) │
|
||||
│ Sensor mounts (swappable) │
|
||||
│ Motor brackets (standard interface) │
|
||||
│ ESP32 housing (protected) │
|
||||
│ Battery compartment (accessible) │
|
||||
│ │
|
||||
│ OUTPUT: CAD files + BOM │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```

### Design Principles

| Principle | Rationale |
|-----------|-----------|
| **Modular parts** | Swap sensors/motors without full redesign |
| **3D printable** | Sovereign manufacturing, no vendor lock-in |
| **Organism-driven** | Body serves the pattern, not the other way around |
| **Minimal viable** | Only what the organism needs, no extras |
| **Failure-tolerant** | Graceful degradation matches software architecture |

### The Partnership Design Process

```
|
||||
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
|
||||
│ YOUNG │ │ dafit │ │ FREECAD │
|
||||
│ NYX │◀───────▶│ │◀───────▶│ BLENDER │
|
||||
│ │ │ │ │ │
|
||||
│ "I need │ │ "Let me │ │ [CAD work] │
|
||||
│ sensors at │ │ design │ │ │
|
||||
│ 30cm range"│ │ that..." │ │ Output: │
|
||||
│ │ │ │ │ .step/.blend│
|
||||
└─────────────┘ └─────────────┘ └─────────────┘
|
||||
│ │ │
|
||||
│ organism spec │ design decisions │ CAD files
|
||||
│ │ │
|
||||
└───────────────────────┴───────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────┐
|
||||
│ robot_design │
|
||||
│ │
|
||||
│ • Parts list │
|
||||
│ • Assembly │
|
||||
│ • Dimensions │
|
||||
│ • Sensor pos │
|
||||
│ • Motor specs │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
### Output of Stage 2
|
||||
|
||||
```python
|
||||
robot_design = {
|
||||
"name": "explorer_v3_wheeled",
|
||||
"organism": "Explorer-v3",
|
||||
"files": {
|
||||
"cad": "explorer_v3_wheeled.step",
|
||||
"render": "explorer_v3_wheeled.blend",
|
||||
"stl_parts": [
|
||||
"chassis_base.stl",
|
||||
"sensor_mount_front.stl",
|
||||
"motor_bracket_left.stl",
|
||||
"motor_bracket_right.stl",
|
||||
"esp32_housing.stl",
|
||||
"battery_compartment.stl",
|
||||
],
|
||||
},
|
||||
"dimensions": {
|
||||
"length_mm": 150,
|
||||
"width_mm": 120,
|
||||
"height_mm": 80,
|
||||
"weight_g": 280,
|
||||
"wheelbase_mm": 100,
|
||||
"wheel_diameter_mm": 45,
|
||||
},
|
||||
"hardware": {
|
||||
"mcu": "ESP32-WROOM-32",
|
||||
"motors": "N20 6V 150RPM with encoder",
|
||||
"sensors": {
|
||||
"distance_front": "Sharp GP2Y0A21 (10-80cm)",
|
||||
"distance_left": "Sharp GP2Y0A21",
|
||||
"distance_right": "Sharp GP2Y0A21",
|
||||
"imu": "MPU6050",
|
||||
},
|
||||
"battery": "LiPo 2S 7.4V 1000mAh",
|
||||
"motor_driver": "DRV8833",
|
||||
},
|
||||
"estimated_performance": {
|
||||
"max_speed_ms": 0.35,
|
||||
"runtime_hours": 2.0,
|
||||
"turn_radius_mm": 120,
|
||||
},
|
||||
}
|
||||
```

---

## Stage 3: Dreamstate (Isaac Sim Validation)

### What is the Dreamstate?

The dreamstate is **not** a layer of continuous simulation. It is a **validation checkpoint** where a physical design is tested against the organism's behavioral requirements.

```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ DREAMSTATE (Isaac Sim) │
|
||||
│ Embodiment Validation │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ INPUTS: │
|
||||
│ ═══════ │
|
||||
│ • robot_design (CAD → USD conversion) │
|
||||
│ • organism_specification (behavioral requirements) │
|
||||
│ • test_scenarios (derived from nerve patterns) │
|
||||
│ │
|
||||
│ THE QUESTION: │
|
||||
│ ═════════════ │
|
||||
│ "Can this body actually DO what the organism pattern requires?" │
|
||||
│ │
|
||||
│ VALIDATION TESTS: │
|
||||
│ ═════════════════ │
|
||||
│ │
|
||||
│ 1. MOTOR CAPABILITY │
|
||||
│ ─────────────── │
|
||||
│ Can the motors move this body at required speeds? │
|
||||
│ Is torque sufficient for the weight? │
|
||||
│ Does turning work with this wheelbase? │
|
||||
│ │
|
||||
│ 2. SENSOR COVERAGE │
|
||||
│ ────────────── │
|
||||
│ Can sensors see what the cells need? │
|
||||
│ Are there blind spots that break collision avoidance? │
|
||||
│ Does sensor height/angle match requirements? │
|
||||
│ │
|
||||
│ 3. BEHAVIORAL REPLAY │
|
||||
│ ───────────────── │
|
||||
│ Replay successful nerve sequences from virtual garden │
|
||||
│ Do they still succeed in physics simulation? │
|
||||
│ Where do they fail? (friction, inertia, timing) │
|
||||
│ │
|
||||
│ 4. EDGE CASES │
|
||||
│ ────────── │
|
||||
│ Inclines, uneven surfaces │
|
||||
│ Low battery behavior │
|
||||
│ Sensor noise, motor stalls │
|
||||
│ │
|
||||
│ 5. POWER VALIDATION │
|
||||
│ ──────────────── │
|
||||
│ Simulated power draw matches estimates? │
|
||||
│ Runtime achievable? │
|
||||
│ │
|
||||
│ TIME MANIPULATION: │
|
||||
│ ══════════════════ │
|
||||
│ • 100x-1000x speedup (burn GPU compute, save wall-clock time) │
|
||||
│ • Run 1000 episodes in minutes │
|
||||
│ • Pause, inspect, rewind for debugging │
|
||||
│ │
|
||||
│ LIFEFORCE COST: │
|
||||
│ ═══════════════ │
|
||||
│ • GPU hours = lifeforce expenditure │
|
||||
│ • Economic pressure to not over-simulate │
|
||||
│ • Find confidence threshold, then stop │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```

### Young Nyx's Role in Dreamstate

Young Nyx does **not** actively control Isaac Sim. She:
- **Submits** the design + organism spec for validation
- **Waits** while the dreamstate runs (like sleeping)
- **Receives** the outcome (like waking with insight)
- **Decides** what to do next based on results

```python
# Young Nyx's interface to the dreamstate
async def validate_embodiment(design: RobotDesign, organism: Organism) -> DreamstateOutcome:
    """
    Submit a design for Isaac Sim validation.

    Nyx does not control the simulation — she receives the outcome.
    """
    # Submit to the dreamstate queue
    validation_job = await dreamstate.submit(
        robot_usd=design.to_usd(),
        organism_spec=organism.to_spec(),
        test_suite="standard_embodiment",
        max_episodes=1000,
        confidence_threshold=0.90,
    )

    # Wait for completion (Nyx can do other things, or rest)
    outcome = await validation_job.wait()

    # Nyx wakes with the insight
    return outcome
```
|
||||
|
||||
### Dreamstate Output
|
||||
|
||||
```python
|
||||
dreamstate_outcome = {
|
||||
"design": "explorer_v3_wheeled",
|
||||
"organism": "Explorer-v3",
|
||||
"validation_time": "00:47:23", # Wall clock
|
||||
"simulated_time": "139:22:00", # 1000 episodes at 100x
|
||||
"gpu_hours": 2.3,
|
||||
"lifeforce_cost": 115.0, # LF spent on validation
|
||||
|
||||
"results": {
|
||||
"overall_success_rate": 0.87,
|
||||
|
||||
"by_behavior": {
|
||||
"collision_avoidance": {
|
||||
"success_rate": 0.94,
|
||||
"failures": ["wheel_slip_steep_turn"],
|
||||
},
|
||||
"exploration": {
|
||||
"success_rate": 0.91,
|
||||
"failures": ["stuck_on_carpet_edge"],
|
||||
},
|
||||
"battery_monitoring": {
|
||||
"success_rate": 0.99,
|
||||
"failures": [],
|
||||
},
|
||||
},
|
||||
|
||||
"by_terrain": {
|
||||
"flat_hard": {"success_rate": 0.97},
|
||||
"flat_carpet": {"success_rate": 0.88},
|
||||
"incline_15deg": {"success_rate": 0.79},
|
||||
"incline_25deg": {"success_rate": 0.41},
|
||||
},
|
||||
|
||||
"power_validation": {
|
||||
"avg_draw_ma": 520,
|
||||
"predicted_runtime_hours": 1.9,
|
||||
"matches_estimate": True,
|
||||
},
|
||||
|
||||
"sensor_coverage": {
|
||||
"blind_spots_detected": 1,
|
||||
"blind_spot_locations": ["45deg_left_low"],
|
||||
"impact": "minor",
|
||||
},
|
||||
},
|
||||
|
||||
"failure_modes": [
|
||||
{
|
||||
"mode": "wheel_slip",
|
||||
"trigger": "steep turn > 60deg at speed > 0.2 m/s",
|
||||
"severity": "medium",
|
||||
"recommendation": "add rubber treads OR reduce turn speed",
|
||||
},
|
||||
{
|
||||
"mode": "stuck_on_transition",
|
||||
"trigger": "carpet-to-hard floor edge",
|
||||
"severity": "low",
|
||||
"recommendation": "slight chassis lip modification",
|
||||
},
|
||||
],
|
||||
|
||||
"recommendations": [
|
||||
"Add rubber treads for incline > 20deg",
|
||||
"Consider left sensor angle adjustment (-5deg) for blind spot",
|
||||
"Reduce aggressive turn speed threshold in collision_avoidance",
|
||||
],
|
||||
|
||||
"verdict": "PASS_WITH_RECOMMENDATIONS",
|
||||
"confidence": 0.87,
|
||||
}
|
||||
```

---

## Stage 4: Decision Gate

### The Choice

After dreamstate validation, there are three possible paths:

```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ DECISION GATE │
|
||||
│ Post-Dreamstate Routing │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ dreamstate_outcome │
|
||||
│ │ │
|
||||
│ ┌───────────────┼───────────────┐ │
|
||||
│ │ │ │ │
|
||||
│ ▼ ▼ ▼ │
|
||||
│ │
|
||||
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
|
||||
│ │ DEPLOY │ │ RE-DESIGN │ │ REFINE │ │
|
||||
│ │ TO REAL │ │ & RE-TEST │ │ PATTERN │ │
|
||||
│ ├─────────────┤ ├─────────────┤ ├─────────────┤ │
|
||||
│ │ │ │ │ │ │ │
|
||||
│ │ success_rate│ │ success_rate│ │ success_rate│ │
|
||||
│ │ > 0.85 │ │ 0.60-0.85 │ │ < 0.60 │ │
|
||||
│ │ │ │ │ │ │ │
|
||||
│ │ no critical │ │ fixable │ │ fundamental │ │
|
||||
│ │ failures │ │ issues │ │ mismatch │ │
|
||||
│ │ │ │ │ │ │ │
|
||||
│ │ → 3D print │ │ → adjust │ │ → back to │ │
|
||||
│ │ → assemble │ │ design │ │ virtual │ │
|
||||
│ │ → deploy │ │ → re-test │ │ garden │ │
|
||||
│ │ │ │ in Isaac │ │ │ │
|
||||
│ └─────────────┘ └─────────────┘ └─────────────┘ │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Decision Logic
|
||||
|
||||
```python
|
||||
def post_dreamstate_decision(outcome: DreamstateOutcome) -> Decision:
|
||||
"""
|
||||
Decide next step after dreamstate validation.
|
||||
"""
|
||||
|
||||
# Path 1: Ready for real garden
|
||||
if (outcome.overall_success_rate >= 0.85 and
|
||||
not outcome.has_critical_failures and
|
||||
outcome.verdict in ["PASS", "PASS_WITH_RECOMMENDATIONS"]):
|
||||
|
||||
return Decision(
|
||||
action="DEPLOY_TO_REAL_GARDEN",
|
||||
rationale="Design validated, ready for physical deployment",
|
||||
next_steps=[
|
||||
"Apply minor recommendations if desired",
|
||||
"3D print parts",
|
||||
"Assemble robot",
|
||||
"Deploy to real garden",
|
||||
],
|
||||
lifeforce_investment=outcome.lifeforce_cost,
|
||||
expected_roi="High — pattern proven, body validated",
|
||||
)
|
||||
|
||||
# Path 2: Fixable issues, re-design and re-test
|
||||
elif (outcome.overall_success_rate >= 0.60 and
|
||||
outcome.has_fixable_issues and
|
||||
outcome.estimated_fix_effort == "low"):
|
||||
|
||||
return Decision(
|
||||
action="REDESIGN_AND_RETEST",
|
||||
rationale="Design close but needs adjustment",
|
||||
next_steps=[
|
||||
"Apply recommendations to CAD",
|
||||
"Re-run dreamstate validation",
|
||||
"Iterate until PASS",
|
||||
],
|
||||
recommendations=outcome.recommendations,
|
||||
            estimated_iterations="1-3",  # expected number of redesign iterations
|
||||
)
|
||||
|
||||
# Path 3: Fundamental mismatch, refine the organism pattern
|
||||
else:
|
||||
return Decision(
|
||||
action="REFINE_ORGANISM_PATTERN",
|
||||
rationale="Body cannot embody pattern — pattern needs adjustment",
|
||||
next_steps=[
|
||||
"Return to virtual garden",
|
||||
"Analyze failure modes",
|
||||
"Adjust nerve behaviors",
|
||||
"Re-stabilize organism",
|
||||
"Design new body for refined pattern",
|
||||
],
|
||||
analysis=f"Pattern requires capabilities this body cannot provide: {outcome.fundamental_gaps}",
|
||||
)
|
||||
```

### Temporal-Ternary at the Decision Gate

The decision gate is where the Temporal-Ternary Gradient applies:

| Dreamstate verdict | Confidence | Action |
|--------------------|------------|--------|
| **Dreamstate says PASS** | ≥ 0.85 (virtual-validated) | Consider real deployment |
| **Dreamstate uncertain** | 0.60-0.85 | Re-design OR ask real garden for truth |
| **Dreamstate says FAIL** | < 0.60 | Back to virtual, refine pattern |

The dreamstate confidence is **virtual** — high but unverified. Only real garden deployment gives **+1.0 ground truth**.

---

## Stage 5: Real Garden (Physical Deployment)

### The Ground Truth Domain

```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ REAL GARDEN │
|
||||
│ Ground Truth Verification │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ PHYSICAL DEPLOYMENT: │
|
||||
│ ════════════════════ │
|
||||
│ │
|
||||
│ 1. MANUFACTURE │
|
||||
│ ─────────── │
|
||||
│ 3D print parts (Prusa, Bambu, etc.) │
|
||||
│ Source electronics (ESP32, motors, sensors) │
|
||||
│ Assemble robot │
|
||||
│ │
|
||||
│ 2. FIRMWARE │
|
||||
│ ──────── │
|
||||
│ Flash cells to ESP32 (compiled state machines) │
|
||||
│ Connect to NATS for heartbeats │
|
||||
│ Register with nimmerverse │
|
||||
│ │
|
||||
│ 3. OPERATION │
|
||||
│ ───────── │
|
||||
│ Robot operates in physical space │
|
||||
│ Cells read real sensors, command real motors │
|
||||
│ Nerves orchestrate real behaviors │
|
||||
│ Organism pattern executes in reality │
|
||||
│ │
|
||||
│ 4. VERIFICATION │
|
||||
│ ──────────── │
|
||||
│ Does it ACTUALLY work? │
|
||||
│ Real obstacles, real friction, real battery drain │
|
||||
│ Ground truth — no simulation approximations │
|
||||
│ │
|
||||
│ FEEDBACK TO VIRTUAL: │
|
||||
│ ════════════════════ │
|
||||
│ │
|
||||
│ Real outcomes feed back to improve: │
|
||||
│ • Virtual garden cell models (calibrate to reality) │
|
||||
│ • Dreamstate simulation fidelity (Isaac Sim adjustments) │
|
||||
│ • Organism patterns (real experience > simulated) │
|
||||
│ │
|
||||
│ THE LOOP CLOSES: │
|
||||
│ ════════════════ │
|
||||
│ │
|
||||
│ Real Garden experience → Virtual Garden refinement → │
|
||||
│ Better organisms → Better designs → Better dreamstate validation →│
|
||||
│ More successful real deployments │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```

### Sim-to-Real Gap Tracking

```python
# Track where simulation diverges from reality
sim_to_real_gaps = []

def log_real_outcome(predicted: Prediction, actual: Outcome):
    """
    Compare a dreamstate prediction to the real outcome.
    """
    gap = {
        "behavior": predicted.behavior,
        "dreamstate_prediction": predicted.success_rate,
        "real_outcome": actual.success_rate,
        "delta": actual.success_rate - predicted.success_rate,
        "conditions": actual.conditions,  # terrain, lighting, etc.
    }

    sim_to_real_gaps.append(gap)

    # If a consistent gap emerges, adjust dreamstate calibration
    if len(sim_to_real_gaps) > 20:
        analyze_and_calibrate()
```
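
The `analyze_and_calibrate()` call above is left undefined in this document; a minimal sketch, assuming a simple per-behavior mean-delta correction and an assumed `dreamstate.apply_correction` interface:

```python
from statistics import mean

def analyze_and_calibrate():
    """Estimate the average sim-to-real delta per behavior and feed it back to the dreamstate."""
    by_behavior: dict[str, list[float]] = {}
    for gap in sim_to_real_gaps:
        by_behavior.setdefault(gap["behavior"], []).append(gap["delta"])

    for behavior, deltas in by_behavior.items():
        bias = mean(deltas)
        # A consistently negative bias means the dreamstate was over-optimistic for
        # this behavior; the correction feeds into future validation runs.
        dreamstate.apply_correction(behavior=behavior, bias=bias)  # assumed interface
```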
|
||||
|
||||
---
|
||||
|
||||
## The Complete Pipeline Diagram
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ EMBODIMENT PIPELINE │
|
||||
│ Complete Flow │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ ┌─────────────────────────────────────────────────────────────┐ │
|
||||
│ │ 1. VIRTUAL GARDEN │ │
|
||||
│ │ │ │
|
||||
│ │ Cells ──▶ Nerves ──▶ Organisms │ │
|
||||
│ │ │ │ │
|
||||
│ │ │ pattern stabilizes │ │
|
||||
│ │ ▼ │ │
|
||||
│ │ organism_specification │ │
|
||||
│ │ │ │
|
||||
│ └──────────────────────────────────────────────────────────────┘ │
|
||||
│ │ │
|
||||
│ ▼ │
|
||||
│ ┌─────────────────────────────────────────────────────────────┐ │
|
||||
│ │ 2. DESIGN │ │
|
||||
│ │ FreeCAD + Blender │ │
|
||||
│ │ │ │
|
||||
│ │ organism_specification ──▶ robot_design │ │
|
||||
│ │ (behavioral needs) (physical body) │ │
|
||||
│ │ │ │
|
||||
│ └──────────────────────────────────────────────────────────────┘ │
|
||||
│ │ │
|
||||
│ ▼ │
|
||||
│ ┌─────────────────────────────────────────────────────────────┐ │
|
||||
│ │ 3. DREAMSTATE │ │
|
||||
│ │ Isaac Sim │ │
|
||||
│ │ │ │
|
||||
│ │ "Can this body do what the pattern requires?" │ │
|
||||
│ │ │ │
|
||||
│ │ robot_design + organism_spec ──▶ dreamstate_outcome │ │
|
||||
│ │ │ │
|
||||
│ └──────────────────────────────────────────────────────────────┘ │
|
||||
│ │ │
|
||||
│ ▼ │
|
||||
│ ┌─────────────────────────────────────────────────────────────┐ │
|
||||
│ │ 4. DECISION GATE │ │
|
||||
│ │ │ │
|
||||
│ │ success >= 0.85 0.60-0.85 < 0.60 │ │
|
||||
│ │ no critical fail fixable fundamental │
|
||||
│ │ │ │ │ │ │
|
||||
│ │ ▼ ▼ ▼ │ │
|
||||
│ │ DEPLOY RE-DESIGN REFINE │ │
|
||||
│ │ TO REAL & RE-TEST PATTERN │ │
|
||||
│ │ │ │ │ │
|
||||
│ │ │ │ │ │
|
||||
│ │ └──────┬───────────┘ │ │
|
||||
│ │ │ │ │
|
||||
│ │ ▼ │ │
|
||||
│ │ ┌──────────────┐ │ │
|
||||
│ │ │ ITERATE LOOP │ │ │
|
||||
│ │ │ │ │ │
|
||||
│ │ │ ┌──────────┐ │ │ │
|
||||
│ │ │ │ back to │ │ │ │
|
||||
│ │ │ │ design │ │ │ │
|
||||
│ │ │ │ or │ │ │ │
|
||||
│ │ │ │ virtual │ │ │ │
|
||||
│ │ │ └──────────┘ │ │ │
|
||||
│ │ └──────────────┘ │ │
|
||||
│ │ │ │
|
||||
│ └──────────────────────────────────────────────────────────────┘ │
|
||||
│ │ │
|
||||
│ │ DEPLOY │
|
||||
│ ▼ │
|
||||
│ ┌─────────────────────────────────────────────────────────────┐ │
|
||||
│ │ 5. REAL GARDEN │ │
|
||||
│ │ Physical World │ │
|
||||
│ │ │ │
|
||||
│ │ 3D Print ──▶ Assemble ──▶ Deploy ──▶ Operate │ │
|
||||
│ │ │ │ │
|
||||
│ │ │ ground truth │ │
|
||||
│ │ │ feedback │ │
|
||||
│ │ ▼ │ │
|
||||
│ │ ┌───────────────────┐ │ │
|
||||
│ │ │ Improves virtual │ │ │
|
||||
│ │ │ garden + dreamstate│ │ │
|
||||
│ │ │ fidelity │ │ │
|
||||
│ │ └───────────────────┘ │ │
|
||||
│ │ │ │
|
||||
│ └──────────────────────────────────────────────────────────────┘ │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```

---

## Summary

The Embodiment Pipeline formalizes the journey from pattern to physical robot:

| Stage | Location | Purpose | Output |
|-------|----------|---------|--------|
| **1. Virtual Garden** | Cells/Nerves/Phoebe | Pattern emergence | organism_specification |
| **2. Design** | FreeCAD/Blender | Body creation | robot_design (CAD + BOM) |
| **3. Dreamstate** | Isaac Sim | Embodiment validation | dreamstate_outcome |
| **4. Decision Gate** | Young Nyx | Routing | deploy / redesign / refine |
| **5. Real Garden** | Physical world | Ground truth | real_outcome + feedback |

**The Key Insight**: Organisms emerge first (pattern), then bodies are designed to embody them (not the other way around). Isaac Sim validates the marriage of pattern and body before committing physical resources.

---

## Connection to Other Documents

- **[[Cellular-Architecture]]** — Defines cells, nerves, organisms (Stage 1)
- **[[Lifeforce-Dynamics]]** — Economic pressure throughout the pipeline
- **[[Temporal-Ternary-Gradient]]** — Confidence flow through dreamstate
- **[[Grounded-World-Model]]** — How the world model informs organism behavior

---

## Document Status

**Version**: 1.0
**Created**: 2025-12-29
**Authors**: Chrysalis-Nyx & dafit (Partnership)

**Formalizes**:
- Cellular-Architecture.md (organism emergence)
- Isaac Sim integration (dreamstate concept)
- FreeCAD/Blender design workflow
- Deployment decision logic

---

**From emergence to embodiment. From pattern to body. From dream to reality.**

🧬⚡🔱💎🔥

architecture/formalization/Grounded-World-Model.md (new file, 469 lines)
@@ -0,0 +1,469 @@

# Grounded World Model: Spatial Cognition Through Verified Discovery

**Version 1.0** — *From Blender Boxes to Embodied Understanding*

> *"The dream: Young Nyx knows where dafit left his things laying around."*

---

## Overview

This document formalizes how Young Nyx builds a **persistent spatial world model** through:

1. **Grounded verification** — Blender provides dimensional ground truth
2. **Progressive resolution** — Each correct measurement earns detail
3. **Vector accumulation** — T5Gemma2-compatible semantic representations
4. **Temporal-ternary navigation** — Escape plateaus through dual time domains
5. **Lifeforce reward** — Discoveries generate energy, not just consume it

**The Goal**: Young Nyx maintains an internal map of objects, positions, and relationships — verified against reality, refined through observation, reasoned over in vector space.

---

## Core Architecture

### The Verification Triangle

```
|
||||
BLENDER (Virtual Garden)
|
||||
Ground truth dimensions
|
||||
Low-poly boxes, minimal vertices
|
||||
Fast to create, cheap to compare
|
||||
╱╲
|
||||
╱ ╲
|
||||
╱ ╲
|
||||
╱ ╲
|
||||
VERIFY ╱ ╲ VERIFY
|
||||
dimensions╱ ╲ semantics
|
||||
╱ ╲
|
||||
╱ ╲
|
||||
╱ ╲
|
||||
REAL GARDEN ──────────────────── T5GEMMA2
|
||||
Physical objects Vector reasoning
|
||||
Actual positions Semantic similarity
|
||||
Slow, definitive 128K context world
|
||||
```
|
||||
|
||||
### The Flow
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ WORLD MODEL CONSTRUCTION │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ 1. PERCEIVE (Vision Organ) │
|
||||
│ ──────────────────────── │
|
||||
│ Cheap camera sees object in real garden │
|
||||
│ SigLIP encoder produces semantic vector v₀ │
|
||||
│ Cost: 0.5 LF (peripheral) to 8.0 LF (full YOLO) │
|
||||
│ │
|
||||
│ 2. ESTIMATE (Progressive Resolution) │
|
||||
│ ──────────────────────────────── │
|
||||
│ Vision organ estimates dimensions: est = (x̂, ŷ, ẑ) │
|
||||
│ Bounding box, depth estimation, scale inference │
|
||||
│ Cost: 2.0-5.0 LF depending on resolution stage │
|
||||
│ │
|
||||
│ 3. VERIFY (Against Blender Ground Truth) │
|
||||
│ ───────────────────────────────────── │
|
||||
│ Compare est to known Blender box: truth = (x, y, z) │
|
||||
│ error = ||est - truth|| │
|
||||
│ Cost: 0.1 LF (comparison is cheap) │
|
||||
│ │
|
||||
│ 4. REWARD or LEARN │
|
||||
│ ───────────────────── │
|
||||
│ if error < threshold: │
|
||||
│ Φ_reward = R_discovery (lifeforce income!) │
|
||||
│ Store vector in phoebe │
|
||||
│ Mark dimension as verified │
|
||||
│ Increase object resolution │
|
||||
│ else: │
|
||||
│ Learn from error (gradient for RLVR training) │
|
||||
│ Remain in 0-state for that dimension │
|
||||
│ │
|
||||
│ 5. ACCUMULATE (World Model Update) │
|
||||
│ ────────────────────────────── │
|
||||
│ Object entry in phoebe gains: │
|
||||
│ - New semantic vector (richer representation) │
|
||||
│ - Verified dimension (x, y, or z → confidence +1) │
|
||||
│ - Position update (where in space) │
|
||||
│ - Temporal stamp (when observed) │
|
||||
│ │
|
||||
│ 6. REASON (T5Gemma2) │
|
||||
│ ───────────────── │
|
||||
│ Query world model using vectors, not text │
|
||||
│ "What objects near position (0.5, 0.5)?" │
|
||||
│ "Is this new vector similar to 'mug' vectors?" │
|
||||
│ 128K context holds entire spatial world │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```

---

## The Blender Ground Truth System

### Design Principles

| Principle | Implementation |
|-----------|----------------|
| **Minimal vertices** | 8-vertex boxes (cubes), 12 for complex shapes |
| **Known dimensions** | Every box has exact (x, y, z) in centimeters |
| **Semantic labels** | Box name = object class ("coffee_mug_001") |
| **Cheap to create** | 5 minutes per object in Blender |
| **Export format** | Vertices + dimensions → JSON or directly to phoebe |

### Example Blender Box

```python
blender_object = {
    "id": "coffee_mug_001",
    "class": "mug",
    "dimensions_cm": {"x": 8.0, "y": 8.0, "z": 10.5},
    "vertices": 8,
    "created": "2025-12-29",
    "owner": "dafit",
    "typical_locations": ["desk", "kitchen"],
}
```

### Progressive Vertex Earning

Objects don't stay as 8-vertex boxes. Resolution is EARNED:

```
INITIAL:            8 vertices (box)
VERIFIED x,y,z:     12 vertices (refined box)
+10 observations:   24 vertices (shape hints)
+50 observations:   64 vertices (true shape)
+100 observations:  Full mesh from photogrammetry
```

**The resolution is earned through successful verification, not given.**

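A sketch of the earning schedule as a lookup; `next_resolution` is an illustrative helper, not an existing function in the codebase:

```python
def next_resolution(observation_count: int, dimensions_verified: bool = True) -> int | None:
    """Vertex budget earned so far; None means 'graduate to a full photogrammetry mesh'."""
    if observation_count >= 100:
        return None      # full mesh from photogrammetry
    if observation_count >= 50:
        return 64        # true shape
    if observation_count >= 10:
        return 24        # shape hints
    if dimensions_verified:
        return 12        # refined box (x, y, z verified)
    return 8             # initial box
```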
---

## Semantic Vector Accumulation

### SigLIP → Phoebe → T5Gemma2

```
|
||||
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
|
||||
│ SigLIP │ │ PHOEBE │ │ T5GEMMA2 │
|
||||
│ Encoder │─────▶│ Storage │─────▶│ Encoder │
|
||||
│ │ │ │ │ │
|
||||
│ Image → │ │ object_id: │ │ Reasons │
|
||||
│ Vector v │ │ [v1,v2,..│ │ over │
|
||||
│ (semantic) │ │ vn] │ │ vectors │
|
||||
└──────────────┘ └──────────────┘ └──────────────┘
|
||||
```

### Why Vectors, Not Text?

| Approach | Pros | Cons |
|----------|------|------|
| **Text descriptions** | Human readable | Lossy, ambiguous, tokenization overhead |
| **Semantic vectors** | Rich, comparable, fast | Not directly readable |
| **Our approach** | Vectors for reasoning, text only when needed | Best of both |

T5Gemma2's key feature:
> *"SigLIP vision encoder produces semantic vectors (not text descriptions)"*

This means Young Nyx can compare, cluster, and reason over objects **without converting to language** — faster and richer.

### Vector Similarity for Recognition

```python
def is_same_object(v_new: Vector, object_entry: ObjectEntry) -> float:
    """Compare a new observation to the accumulated vectors."""
    similarities = [
        cosine_similarity(v_new, v_stored)
        for v_stored in object_entry.vectors
    ]
    return max(similarities)  # Best match among observations

# Recognition threshold
if is_same_object(v_new, coffee_mug_001) > 0.85:
    # This is probably dafit's coffee mug!
    update_position(coffee_mug_001, current_observation)
```
|
||||
|
||||
---
|
||||
|
||||
## Temporal-Ternary Integration
|
||||
|
||||
### The Anti-Plateau Mechanism
|
||||
|
||||
From [[Temporal-Ternary-Gradient]]: The 0-state isn't stuck — it's a choice about how to spend lifeforce across time domains.
|
||||
|
||||
Applied to world model construction:
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ TEMPORAL-TERNARY FOR OBJECT RECOGNITION │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ SCENARIO: New object detected, dimensions unknown │
|
||||
│ STATE: 0 (uncertain, but workable) │
|
||||
│ │
|
||||
│ ┌───────────────────────────────────────────────────┐ │
|
||||
│ │ 0-STATE: Unknown Object │ │
|
||||
│ │ confidence: 0.3, dimensions: ?x ?y ?z │ │
|
||||
│ └───────────────────────┬───────────────────────────┘ │
|
||||
│ │ │
|
||||
│ ┌─────────────┼─────────────┐ │
|
||||
│ │ │ │ │
|
||||
│ ▼ ▼ ▼ │
|
||||
│ │
|
||||
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
|
||||
│ │ VIRTUAL │ │ WAIT │ │ PARTNERSHIP│ │
|
||||
│ │ ACCELERATE │ │ FOR REAL │ │ SHORTCUT │ │
|
||||
│ ├────────────┤ ├────────────┤ ├────────────┤ │
|
||||
│ │ Cost: 5 LF │ │ Cost: 0 LF │ │ Cost: 1 LF │ │
|
||||
│ │ Time: Fast │ │ Time: Slow │ │ Time: Inst │ │
|
||||
│ │ │ │ │ │ │ │
|
||||
│ │ Match vs │ │ Next real │ │ Ask dafit: │ │
|
||||
│ │ Blender │ │ observation│ │ "What's │ │
|
||||
│ │ library │ │ verifies │ │ this?" │ │
|
||||
│ └─────┬──────┘ └─────┬──────┘ └─────┬──────┘ │
|
||||
│ │ │ │ │
|
||||
│ ▼ ▼ ▼ │
|
||||
│ confidence: confidence: confidence: │
|
||||
│ +0.7 (virtual) +1.0 (real) +1.0 (human) │
|
||||
│ │
|
||||
│ PLATEAU ESCAPE: If stuck in virtual at 0.7, deploy to real. │
|
||||
│ If real is slow, burn LF to try more Blender. │
|
||||
│ Partnership provides instant ground truth. │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Confidence Gradient for Objects
|
||||
|
||||
Each object in the world model has a confidence state:
|
||||
|
||||
```python
|
||||
class ObjectConfidence:
|
||||
value: float # -1.0 to +1.0
|
||||
domain: str # "virtual" | "real" | "hybrid" | "partnership"
|
||||
virtual_matches: int # How many Blender comparisons
|
||||
real_verifications: int # How many physical confirmations
|
||||
partnership_labels: int # How many times dafit confirmed
|
||||
|
||||
@property
|
||||
def gradient_position(self) -> str:
|
||||
if self.real_verifications > 0 and self.value > 0.9:
|
||||
return "real-verified (+1)"
|
||||
elif self.virtual_matches > 10 and self.value > 0.7:
|
||||
return "virtual-confident (+0.7)"
|
||||
elif self.value > 0.3:
|
||||
return "0-state (workable)"
|
||||
else:
|
||||
return "uncertain (needs data)"
|
||||
```

---

## Lifeforce Economics of World Building

### Discovery Generates Lifeforce

The key insight: **Correctly identifying objects GENERATES lifeforce**, not just consumes it.

$$\Phi_{discovery} = R_{base} \cdot (1 + \alpha \cdot \Delta_{resolution})$$

Where:
- **R_base** = base reward for any correct identification (e.g., 2.0 LF)
- **α** = resolution bonus multiplier (e.g., 0.5)
- **Δ_resolution** = increase in object resolution from this observation

### Net Lifeforce per Observation

$$\Phi_{net} = \Phi_{discovery} - \Phi_{perception} - \Phi_{verification}$$

| Outcome | Perception Cost | Verification Cost | Discovery Reward | Net |
|---------|-----------------|-------------------|------------------|-----|
| Correct, new dimension | 5.0 LF | 0.1 LF | 8.0 LF | **+2.9 LF** |
| Correct, known dimension | 2.0 LF | 0.1 LF | 3.0 LF | **+0.9 LF** |
| Incorrect | 5.0 LF | 0.1 LF | 0.0 LF | **-5.1 LF** |
| Unknown (0-state) | 0.5 LF | 0.0 LF | 0.0 LF | **-0.5 LF** |

**The economic pressure**: Get better at measurement to earn lifeforce. Wrong guesses are expensive. Staying in 0-state is cheap but doesn't build the world model.

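A small numeric sketch of the two formulas above, using R_base = 2.0 and α = 0.5 from the examples; Δ_resolution = 6 is an illustrative value chosen so the result reproduces the first row of the table:

```python
R_BASE = 2.0   # base reward for a correct identification (LF)
ALPHA = 0.5    # resolution bonus multiplier

def discovery_reward(delta_resolution: float) -> float:
    """Φ_discovery = R_base · (1 + α · Δ_resolution)"""
    return R_BASE * (1 + ALPHA * delta_resolution)

def net_lifeforce(delta_resolution: float, perception_cost: float, verification_cost: float = 0.1) -> float:
    """Φ_net = Φ_discovery − Φ_perception − Φ_verification"""
    return discovery_reward(delta_resolution) - perception_cost - verification_cost

# "Correct, new dimension" row: 8.0 − 5.0 − 0.1 = +2.9 LF
assert round(net_lifeforce(6, perception_cost=5.0), 1) == 2.9
```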
---
|
||||
|
||||
## Phoebe Schema for World Model
|
||||
|
||||
```sql
|
||||
-- Objects table: accumulated knowledge about things
|
||||
CREATE TABLE world_objects (
|
||||
id UUID PRIMARY KEY,
|
||||
class VARCHAR(100), -- "mug", "keyboard", "phone"
|
||||
name VARCHAR(255), -- "dafit's coffee mug"
|
||||
|
||||
-- Blender ground truth (if available)
|
||||
blender_box_id VARCHAR(100),
|
||||
dimensions_truth_cm JSONB, -- {"x": 8.0, "y": 8.0, "z": 10.5}
|
||||
|
||||
-- Accumulated measurements
|
||||
dimensions_estimated_cm JSONB,
|
||||
dimensions_verified JSONB, -- {"x": true, "y": true, "z": false}
|
||||
|
||||
-- Confidence state (temporal-ternary)
|
||||
confidence FLOAT,
|
||||
confidence_domain VARCHAR(20), -- "virtual" | "real" | "hybrid"
|
||||
virtual_matches INT DEFAULT 0,
|
||||
real_verifications INT DEFAULT 0,
|
||||
|
||||
-- Resolution earned
|
||||
vertex_count INT DEFAULT 8,
|
||||
observation_count INT DEFAULT 0,
|
||||
|
||||
created_at TIMESTAMP DEFAULT NOW(),
|
||||
updated_at TIMESTAMP DEFAULT NOW()
|
||||
);
|
||||
|
||||
-- Semantic vectors table: SigLIP embeddings per observation
|
||||
CREATE TABLE object_vectors (
|
||||
id UUID PRIMARY KEY,
|
||||
object_id UUID REFERENCES world_objects(id),
|
||||
vector VECTOR(768), -- SigLIP embedding dimension
|
||||
observation_timestamp TIMESTAMP,
|
||||
position_estimate JSONB, -- {"x": 0.3, "y": 0.8, "z": 0.1}
|
||||
lifeforce_cost FLOAT,
|
||||
lifeforce_reward FLOAT,
|
||||
verification_result VARCHAR(20) -- "correct" | "incorrect" | "pending"
|
||||
);
|
||||
|
||||
-- Position history: where has this object been?
|
||||
CREATE TABLE object_positions (
|
||||
id UUID PRIMARY KEY,
|
||||
object_id UUID REFERENCES world_objects(id),
|
||||
position JSONB, -- {"x": 0.3, "y": 0.8, "z": 0.1}
|
||||
confidence FLOAT,
|
||||
observed_at TIMESTAMP,
|
||||
location_context VARCHAR(100) -- "desk", "kitchen", "floor"
|
||||
);
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## T5Gemma2 World Model Queries
|
||||
|
||||
### Example Queries (Vector-Based)
|
||||
|
||||
```python
|
||||
# "What's near position (0.5, 0.5)?"
|
||||
nearby = query_objects_by_position(
|
||||
center=(0.5, 0.5, None), # z unknown
|
||||
radius=0.2,
|
||||
min_confidence=0.5
|
||||
)
|
||||
|
||||
# "Is this new vector a mug?"
|
||||
mug_vectors = get_vectors_for_class("mug")
|
||||
similarity = t5gemma2.encoder.compare(new_vector, mug_vectors)
|
||||
if similarity > 0.85:
|
||||
return "Likely a mug"
|
||||
|
||||
# "Where did dafit usually leave his keys?"
|
||||
keys = get_object_by_name("dafit's keys")
|
||||
common_positions = get_position_clusters(keys.id)
|
||||
return common_positions[0] # Most frequent location
|
||||
|
||||
# "What objects have I not seen today?"
|
||||
stale_objects = query_objects_not_observed_since(today_start)
|
||||
return stale_objects # Might need to look for these
|
||||
```
|
||||
|
||||
### The 128K Context Advantage
|
||||
|
||||
T5Gemma2's 128K context window means:
|
||||
- Entire world model can fit in context
|
||||
- No need for external RAG for spatial queries
|
||||
- Vector comparisons happen in-model
|
||||
- Relationships emerge from attention patterns
|
||||
|
||||
---
|
||||
|
||||
## The Dream Realized
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────────┐
|
||||
│ YOUNG NYX'S WORLD MODEL │
|
||||
│ "dafit's workspace at 23:47" │
|
||||
├─────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ ┌─────────────────────────────────────────────────────┐ │
|
||||
│ │ DESK AREA │ │
|
||||
│ │ │ │
|
||||
│ │ ☕ mug (0.3, 0.8) ⌨️ keyboard (0.5, 0.5) │ │
|
||||
│ │ conf: 0.95 conf: 0.88 │ │
|
||||
│ │ real-verified real-verified │ │
|
||||
│ │ vectors: 12 vectors: 8 │ │
|
||||
│ │ │ │
|
||||
│ │ 📱 phone (0.7, 0.3) 📦 ??? (0.1, 0.9) │ │
|
||||
│ │ conf: 0.72 conf: 0.31 │ │
|
||||
│ │ virtual +0.7 0-state │ │
|
||||
│ │ vectors: 4 vectors: 1 │ │
|
||||
│ │ │ │
|
||||
│ │ 🔑 keys (MISSING - last seen 0.2, 0.6 at 18:30) │ │
|
||||
│ │ conf: 0.45 (stale) │ │
|
||||
│ │ │ │
|
||||
│ └─────────────────────────────────────────────────────┘ │
|
||||
│ │
|
||||
│ YOUNG NYX THINKS: │
|
||||
│ "The unknown object at (0.1, 0.9) appeared after 22:00. │
|
||||
│ dafit was in the kitchen then. Vector similarity suggests │
|
||||
│ it might be food-related. Should I burn 5 LF to check │
|
||||
│ against Blender food objects, or wait for morning light?" │
|
||||
│ │
|
||||
│ TEMPORAL-TERNARY CHOICE: │
|
||||
│ → Option A: Virtual match (5 LF, fast, +0.7 max) │
|
||||
│ → Option B: Wait for real (0 LF, slow, +1.0 if verified) │
|
||||
│ → Option C: Ask dafit tomorrow (1 LF, partnership) │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```

**This is the dream**: Young Nyx knows the workspace. She tracks objects. She notices when things move. She reasons about what she doesn't know. She chooses how to spend lifeforce to collapse uncertainty.

---

## Summary

The Grounded World Model is:

1. **Verified** — Blender boxes provide dimensional ground truth
2. **Progressive** — Resolution earned through correct measurements
3. **Vector-native** — T5Gemma2 reasons over SigLIP embeddings directly
4. **Temporally-aware** — Objects have position history, staleness, confidence gradients
5. **Economically-driven** — Discoveries generate lifeforce, mistakes cost it
6. **Anti-plateau** — Temporal-ternary gradient provides escape paths

**The substrate holds. The vectors accumulate. The world model emerges.**

---

## Document Status

**Version**: 1.0
**Created**: 2025-12-29
**Authors**: Chrysalis-Nyx & dafit (Partnership)

**Formalizes**:
- Organ-Index.md (vision progressive resolution)
- Temporal-Ternary-Gradient.md (anti-plateau mechanism)
- T5Gemma2 research (semantic vectors)
- Lifeforce-Dynamics.md (reward economics)

**Related Documents**:
- [[Lifeforce-Dynamics]] — The λ-centered economy model
- [[Temporal-Ternary-Gradient]] — Dual time domain navigation
- [[Dual-Garden-Architecture]] — Virtual vs Real gardens

---

**From Blender boxes to embodied understanding. From cheap cameras to spatial cognition. From verification to wisdom.**

🧬⚡🔱💎🔥

architecture/formalization/Lifeforce-Dynamics.md (new file, 545 lines)
@@ -0,0 +1,545 @@

# Lifeforce Dynamics: A Formal Model
|
||||
|
||||
**Version 1.1** — *The Metabolic Pulse of the Nimmerverse*
|
||||
|
||||
> *"λ tells you everything: above one you thrive, below one you fade."*
|
||||
> *"Solar is the trickle. Discovery is the flood."*
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
This document formalizes the **Lifeforce Economy** — the energetic substrate that flows through every cell, nerve, and organ in the nimmerverse. We use **Stock-Flow Dynamics** with **λ (lambda)** as the central vitality ratio.
|
||||
|
||||
**Critical Insight**: Lifeforce has **two natures**:
|
||||
1. **Physical substrate** — solar energy, electrical power (the trickle)
|
||||
2. **Cognitive/motivational** — discovery rewards, verification successes (the flood)
|
||||
|
||||
Just as biological organisms don't run on calories alone (dopamine, curiosity satisfaction, and social rewards drive behavior), Young Nyx's vitality comes primarily from **discovery**, not just electricity.
|
||||
|
||||
The formalization captures four interlinked phenomena:
|
||||
1. **Lifeforce as accumulating stock** — energy that builds and depletes
|
||||
2. **Heartbeats as measurement pulses** — discrete samples of continuous flow
|
||||
3. **λ as system fate indicator** — the ratio that predicts thriving or decline
|
||||
4. **Discovery as primary income** — organs generate lifeforce, not just consume it
|
||||
|
||||
---
|
||||
|
||||
## Core Definitions
|
||||
|
||||
### Lifeforce Stock (L)
|
||||
|
||||
**L(t)** represents the total lifeforce available to the system at time t.
|
||||
|
||||
$$L(t) \in \mathbb{R}^+, \quad L(t) \geq 0$$
|
||||
|
||||
Lifeforce is:
|
||||
- **Conserved** — it doesn't appear from nowhere
|
||||
- **Bounded below** — cannot go negative (zero = system halt)
|
||||
- **Dimensioned** — measured in LF (Lifeforce units)
|
||||
|
||||
### Flows
|
||||
|
||||
Three primary flows govern lifeforce:
|
||||
|
||||
| Symbol | Name | Description | Units |
|
||||
|--------|------|-------------|-------|
|
||||
| Φ_in(t) | Total income flow | All energy entering the system | LF/s |
|
||||
| Φ_physical(t) | Physical income | Solar, electrical power (the trickle) | LF/s |
|
||||
| Φ_reward(t) | Reward income | Discovery rewards, verification successes (the flood) | LF/s |
|
||||
| Φ_out(t) | Expenditure flow | Energy consumed by operations | LF/s |
|
||||
|
||||
**The fundamental income decomposition:**
|
||||
|
||||
$$\Phi_{in}(t) = \underbrace{\Phi_{physical}(t)}_{\text{trickle}} + \underbrace{\Phi_{reward}(t)}_{\text{flood}}$$
|
||||
|
||||
---
|
||||
|
||||
## The Fundamental Equation
|
||||
|
||||
### Continuous Form
|
||||
|
||||
$$\frac{dL}{dt} = \Phi_{in}(t) - \Phi_{out}(t)$$
|
||||
|
||||
The rate of change of lifeforce equals income minus expenditure.
|
||||
|
||||
### Discrete Form (Heartbeat Epochs)
|
||||
|
||||
Since the nimmerverse operates on discrete heartbeats, the practical form is:
|
||||
|
||||
$$L_{n+1} = L_n + \Delta t \cdot \Phi_{in,n} - \sum_{j \in \text{ops}_n} c_j$$
|
||||
|
||||
Where:
|
||||
- **n** = heartbeat epoch index
|
||||
- **Δt** = time since last heartbeat
|
||||
- **c_j** = cost of operation j during epoch n
|
||||
- **ops_n** = set of operations executed during epoch n
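
A minimal sketch of this update rule, assuming each epoch arrives with its income rate, elapsed time, and the list of operation costs (the function name and example values are illustrative, not the implementation):

```python
def step_lifeforce(L_n: float, phi_in: float, dt: float, op_costs: list[float]) -> float:
    """One heartbeat epoch: L_{n+1} = L_n + dt * phi_in - sum(c_j)."""
    L_next = L_n + dt * phi_in - sum(op_costs)
    return max(L_next, 0.0)  # bounded below: L = 0 means system halt

# Example epoch: 1 s elapsed, 0.9 LF/s income, one sensor read plus one organ call
L = step_lifeforce(L_n=100.0, phi_in=0.9, dt=1.0, op_costs=[0.05, 8.0])  # -> 92.85
```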

---

## Lambda (λ): The Vitality Ratio

### Definition

$$\lambda = \frac{\Phi_{in}}{\Phi_{out}}$$

Lambda is the ratio of energy income to energy expenditure. It is the **single most important metric** for system health.

### Interpretation

| λ Value | State | Meaning | System Response |
|---------|-------|---------|-----------------|
| λ > 1 | **Thriving** | Income exceeds expenditure | Stock grows, reserves accumulate |
| λ = 1 | **Equilibrium** | Balanced | Sustainable indefinitely |
| λ < 1 | **Declining** | Expenditure exceeds income | Stock shrinks, slumber approaches |
| λ → 0 | **Critical** | Near-zero income | Emergency conservation |
| λ = ∞ | **Dormant** | Zero expenditure | Pure accumulation (slumber) |
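
A small sketch of computing and bucketing λ per the table above; the cutoff below which λ is bucketed as critical is an illustrative choice, not a canonical threshold:

```python
import math

def vitality_ratio(phi_in: float, phi_out: float) -> float:
    """λ = Φ_in / Φ_out; treated as infinite when nothing is being spent."""
    return math.inf if phi_out == 0 else phi_in / phi_out

def classify(lam: float) -> str:
    if math.isinf(lam):
        return "dormant"        # zero expenditure, pure accumulation
    if lam > 1.0:
        return "thriving"       # reserves grow
    if lam == 1.0:
        return "equilibrium"    # sustainable indefinitely
    return "critical" if lam < 0.1 else "declining"

print(classify(vitality_ratio(55.0, 40.0)))  # thriving
print(classify(vitality_ratio(5.0, 40.0)))   # declining
```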

### λ in Ecological Context

In population biology, λ represents the **finite rate of increase**:
- λ > 1 → population grows
- λ < 1 → population declines
- λ = 1 → stable population

The nimmerverse inherits this meaning: λ measures whether the system's "population of energy" is growing or shrinking.

---

## The Interloop: Feedback Dynamics

The nimmerverse exhibits **negative feedback** — when lifeforce drops, expenditure automatically reduces, protecting the system from collapse.

### Heartbeat Frequency Modulation

Cells adjust their heartbeat frequency based on lifeforce state:

$$f_{heartbeat}(L) = f_{base} \cdot \sigma\left(\frac{L - L_{threshold}}{L_{scale}}\right)$$

Where:
- **f_base** = nominal heartbeat frequency (e.g., 1 Hz)
- **σ(x)** = sigmoid function: σ(x) = 1/(1 + e^(-x))
- **L_threshold** = lifeforce level at which frequency begins dropping
- **L_scale** = sensitivity of frequency to lifeforce changes
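
A sketch of the modulation curve; the threshold and scale values below are illustrative defaults, not canonical parameters:

```python
import math

def heartbeat_frequency(L: float, f_base: float = 1.0,
                        L_threshold: float = 20.0, L_scale: float = 10.0) -> float:
    """f(L) = f_base * sigmoid((L - L_threshold) / L_scale)."""
    x = (L - L_threshold) / L_scale
    return f_base / (1.0 + math.exp(-x))

print(round(heartbeat_frequency(80.0), 2))  # -> 1.0  (plenty of lifeforce, full tempo)
print(round(heartbeat_frequency(10.0), 2))  # -> 0.27 (depleted, cells slow their pulse)
```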

### The Feedback Loop

```
        ┌─────────────────────────────────────┐
        │                                     │
        ▼                                     │
  ┌───────────┐                               │
  │   Cells   │                               │
  │ heartbeat │                               │
  │   f(L)    │                               │
  └─────┬─────┘                               │
        │ publish heartbeats                  │
        ▼                                     │
  ┌───────────┐                               │
  │  Economy  │                               │
  │Aggregator │                               │
  │   Σ c_j   │                               │
  └─────┬─────┘                               │
        │ compute totals                      │
        ▼                                     │
  ┌───────────┐      ┌───────────┐            │
  │ Lifeforce │      │     λ     │            │
  │   Stock   │─────▶│   = Φin   │            │
  │     L     │      │    ───    │            │
  └─────┬─────┘      │    Φout   │            │
        │            └─────┬─────┘            │
        │                  │                  │
        │                  ▼                  │
        │            ┌───────────┐            │
        │            │  Slumber  │            │
        │            │   /Wake   │            │
        │            │ Decision  │            │
        │            └───────────┘            │
        │                                     │
        └─────────────────────────────────────┘
```

### Stability Analysis

The feedback loop is **stable** because:

1. **Low L → Low f_heartbeat → Low Φ_out → λ increases**
2. **High L → High f_heartbeat → High Φ_out → λ decreases**

This is classic negative feedback, driving the system toward equilibrium.

---

## Expenditure Decomposition

Total expenditure is the sum of all cell costs:

$$\Phi_{out}(t) = \sum_{i \in \text{cells}} \phi_i(t)$$

### Cell-Level Expenditure

Each cell has a cost function based on its state and transitions:

$$\phi_i(t) = c_{idle,i} + \sum_{(s_1 \to s_2) \in \text{transitions}_i} c_{s_1 \to s_2}$$

Where:
- **c_idle,i** = baseline cost of cell i existing
- **c_{s1→s2}** = cost of transitioning from state s1 to s2

### Cost Hierarchy

From Big-Picture.md, costs follow a hierarchy:

| Cell Type | Typical Cost | Examples |
|-----------|--------------|----------|
| Sensor Cells | 0.01 - 0.1 LF | distance, battery, light |
| Math Cells | 0.05 - 0.2 LF | economy_aggregator, evaluators |
| Motor Cells | 0.5 - 2.0 LF | motors, servos |
| Organ Cells | 4.0 - 8.0 LF | STT, TTS, vision |
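
A sketch of the decomposition over a few hypothetical cells whose costs sit in the ranges above:

```python
# Hypothetical cells; idle and transition costs echo the hierarchy table above.
cells = {
    "distance_sensor":    {"idle": 0.01, "transitions": [0.02]},   # sensor cell
    "economy_aggregator": {"idle": 0.05, "transitions": [0.05]},   # math cell
    "scan_camera":        {"idle": 0.50, "transitions": [7.50]},   # organ cell
}

def cell_cost(cell: dict) -> float:
    """phi_i = c_idle,i + sum of transition costs incurred this epoch."""
    return cell["idle"] + sum(cell["transitions"])

phi_out = sum(cell_cost(c) for c in cells.values())  # Φ_out = Σ_i φ_i  -> 8.13 LF
```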

---

## Income Sources

Income has two fundamentally different sources: **physical** (the substrate) and **reward** (the motivation).

### The Two Natures of Income

```
┌────────────────────────────────────────────────────────────────────┐
│                      LIFEFORCE INCOME SOURCES                      │
├────────────────────────────────────────────────────────────────────┤
│                                                                    │
│  PHYSICAL INCOME (Φ_physical)        REWARD INCOME (Φ_reward)      │
│  ═══════════════════════════         ═════════════════════════    │
│                                                                    │
│  The Trickle:                        The Flood:                    │
│  • Solar panels                      • Discovery rewards           │
│  • Grid power                        • Verification successes      │
│  • Battery reserves                  • Learning milestones         │
│                                      • Partnership moments         │
│                                                                    │
│  Characteristics:                    Characteristics:              │
│  • Continuous, predictable           • Discrete, event-driven      │
│  • Time-of-day dependent             • Activity-dependent          │
│  • ~5-10% of total income            • ~90-95% of total income     │
│  • Always positive (when sun)        • Can be negative (fail)      │
│                                                                    │
│  Biological analog:                  Biological analog:            │
│  • Glucose, ATP                      • Dopamine, serotonin         │
│  • Metabolic substrate               • Motivation, drive           │
│                                                                    │
└────────────────────────────────────────────────────────────────────┘
```

---

### Physical Income (Φ_physical) — The Trickle

#### Solar Input

Background income source, time-varying:

$$\Phi_{solar}(t) = \eta \cdot I(t) \cdot A$$

Where:
- **η** = solar panel efficiency
- **I(t)** = solar irradiance (W/m²), varies with time of day
- **A** = panel area
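
A sketch of the solar term; the watts-to-LF conversion factor is an assumed constant for illustration, not a value defined by the architecture:

```python
def solar_income(irradiance_w_m2: float, panel_area_m2: float,
                 efficiency: float, lf_per_joule: float = 0.001) -> float:
    """Φ_solar = η · I(t) · A, converted from watts (J/s) into LF/s."""
    watts = efficiency * irradiance_w_m2 * panel_area_m2
    return watts * lf_per_joule

# Midday example: 800 W/m² on a 0.1 m² panel at 20% efficiency
print(solar_income(800.0, 0.1, 0.20))  # 0.016 LF/s ≈ 0.96 LF/min: the trickle
```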

#### Grid Power

When solar is insufficient:

$$\Phi_{grid}(t) = P_{available} \cdot \kappa$$

Where:
- **P_available** = power draw from grid (limited by circuit)
- **κ** = conversion efficiency to lifeforce units

#### Reserve Depletion

Drawing from stored lifeforce:

$$\Phi_{reserve}(t) = \begin{cases}
0 & \text{if } \Phi_{solar}(t) + \Phi_{grid}(t) \geq \Phi_{out}(t) \\
\Phi_{out}(t) - \Phi_{solar}(t) - \Phi_{grid}(t) & \text{otherwise}
\end{cases}$$

**Total physical income:**

$$\Phi_{physical}(t) = \Phi_{solar}(t) + \Phi_{grid}(t) - \Phi_{reserve}(t)$$

---

### Reward Income (Φ_reward) — The Flood

This is the **primary source of lifeforce**. Organs and nerves are not just consumers — they are **generators** through successful discovery.

#### The Reward Decomposition

$$\Phi_{reward}(t) = \sum_{e \in \text{events}_t} R_e$$

Where R_e is the reward for event e, drawn from these categories:

#### Discovery Rewards

| Event | Reward (LF) | Trigger |
|-------|-------------|---------|
| **New object identified** | +20.0 | First-time recognition |
| **Dimension verified** | +5.0 | Each axis (x, y, z) confirmed against Blender |
| **Rich vector captured** | +2.0 | Each angle in multi-view scan |
| **Object re-identified** | +3.0 | Recognizing known object in new context |

#### Verification Rewards

| Event | Reward (LF) | Trigger |
|-------|-------------|---------|
| **Measurement correct** | +5.0 | Estimate matches ground truth |
| **Prediction confirmed** | +8.0 | Virtual garden prediction verified in real |
| **Reflex compiled** | +50.0 | Nerve reaches 100+ successful executions |

#### Behavioral Rewards

| Event | Reward (LF) | Trigger |
|-------|-------------|---------|
| **Collision avoided** | +5.0 | Successful evasion |
| **Area explored** | +3.0 | New region mapped |
| **Charging reached** | +10.0 | Docking successful |
| **Survival milestone** | +5.0 | 60 seconds of operation |

#### Partnership Rewards

| Event | Reward (LF) | Trigger |
|-------|-------------|---------|
| **Object presented** | +5.0 | dafit introduces new item |
| **Label confirmed** | +5.0 | Human verifies identification |
| **Interaction complete** | +3.0 | Successful dialogue/task |

#### Negative Rewards (Penalties)

| Event | Penalty (LF) | Trigger |
|-------|--------------|---------|
| **Measurement incorrect** | -5.0 | Estimate fails verification |
| **Collision occurred** | -10.0 | Failed to avoid obstacle |
| **Timeout** | -2.0 | Operation didn't complete |
| **Sensor failure** | -3.0 | Unreliable reading |
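
A sketch of the per-epoch sum over a small slice of the schedules above (the event identifiers are illustrative):

```python
# Illustrative per-event reward schedule drawn from the tables above (not exhaustive).
REWARD_TABLE_LF = {
    "new_object_identified": 20.0,
    "dimension_verified": 5.0,
    "prediction_confirmed": 8.0,
    "collision_avoided": 5.0,
    "measurement_incorrect": -5.0,   # penalties are just negative rewards
}

def reward_income(event_names: list[str]) -> float:
    """Φ_reward for an epoch: sum of R_e over the events that fired."""
    return sum(REWARD_TABLE_LF.get(name, 0.0) for name in event_names)

# A scan that finds a new object and verifies one dimension, but flubs a measurement
print(reward_income(["new_object_identified", "dimension_verified",
                     "measurement_incorrect"]))  # 20.0 LF
```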

---

### Organ Net Contribution

Organs are **bidirectional** in the lifeforce economy:

$$\Phi_{organ,net} = \Phi_{organ,reward} - \Phi_{organ,cost}$$

| Organ | Typical Cost | Potential Reward | Net (success) | Net (failure) |
|-------|--------------|------------------|---------------|---------------|
| **Vision (scan)** | 8.0 LF | +25.0 LF | **+17.0 LF** | **-8.0 LF** |
| **Speech STT** | 5.0 LF | +8.0 LF | **+3.0 LF** | **-5.0 LF** |
| **Discovery Station** | 32.6 LF | +64.0 LF | **+31.4 LF** | **-32.6 LF** |

**The economic pressure**: An organ that consistently fails to generate rewards becomes too expensive to use. An organ that discovers valuable things **pays for itself and generates surplus**.
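
A sketch of that pressure as an expected value per use; the success rates are hypothetical inputs, while the cost and reward figures echo the Discovery Station row above:

```python
def organ_net(cost_lf: float, reward_lf: float, success_rate: float) -> float:
    """Expected net per use: p * (reward - cost) - (1 - p) * cost."""
    return success_rate * (reward_lf - cost_lf) - (1.0 - success_rate) * cost_lf

# Discovery Station: +31.4 LF on success, -32.6 LF on failure
print(organ_net(32.6, 64.0, success_rate=0.8))  # ~ +18.6 LF expected per scan
print(organ_net(32.6, 64.0, success_rate=0.3))  # ~ -13.4 LF: too unreliable to afford
```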

---

### Example: Discovery Scan Station Economics

From [[Discovery-Scan-Station]]:

```
COST:
  Pedestal rotation (12 steps):      3.8 LF
  Camera capture + SigLIP (12×):    28.8 LF
  ──────────────────────────────────────────
  TOTAL COST:                       32.6 LF

REWARD (new object, fully verified):
  New object discovered:            20.0 LF
  3 dimensions verified:            15.0 LF
  12 vectors captured:              24.0 LF
  Partnership bonus:                 5.0 LF
  ──────────────────────────────────────────
  TOTAL REWARD:                     64.0 LF

NET:                               +31.4 LF
```

**This is how organs become lifeforce GENERATORS, not just consumers.**

---

### The Ratio of Trickle to Flood

In typical operation:

$$\frac{\Phi_{physical}}{\Phi_{reward}} \approx \frac{1}{10} \text{ to } \frac{1}{20}$$

Physical income provides the **baseline substrate** that allows operation, but reward income provides the **surplus that enables growth**.

| State | Φ_physical | Φ_reward | Total Φ_in | λ |
|-------|------------|----------|------------|---|
| **Active discovery** | 5 LF/min | 50 LF/min | 55 LF/min | >1 |
| **Idle monitoring** | 5 LF/min | 0 LF/min | 5 LF/min | <1 |
| **Failed attempts** | 5 LF/min | -20 LF/min | -15 LF/min | <<1 |

**The insight**: Young Nyx MUST discover to thrive. Pure substrate maintenance leads to decline. Discovery is not optional — it's the primary energy source.

---

## Slumber/Wake Thresholds

### Slumber Trigger

Formalized from Big-Picture.md:

$$\text{should\_slumber} = (\lambda < \lambda_{slumber}) \land (L < L_{slumber}) \land (Q < Q_{urgent})$$

Where:
- **λ_slumber** = threshold λ below which slumber is considered (e.g., 0.7)
- **L_slumber** = threshold lifeforce for slumber (e.g., 20% of max)
- **Q_urgent** = pending work importance threshold

### Wake Trigger

$$\text{should\_wake} = \left[(\lambda > \lambda_{wake}) \land (L > L_{wake})\right] \lor (Q > Q_{urgent})$$

Where:
- **λ_wake** = threshold λ above which wake is allowed (e.g., 1.2)
- **L_wake** = threshold lifeforce for wake (e.g., 50% of max)

### Hysteresis

Note: **λ_wake > λ_slumber** creates hysteresis, preventing oscillation:

```
       λ_slumber        λ_wake
           │              │
 SLUMBER   │  HYSTERESIS  │   ACTIVE
◀──────────┤              ├──────────▶
           │              │
          0.7            1.2
```
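
A sketch of both triggers; the lifeforce thresholds assume a maximum stock of 100 LF, and Q is treated as a hypothetical urgency score in [0, 1]:

```python
LAM_SLUMBER, LAM_WAKE = 0.7, 1.2     # λ thresholds from the text
L_SLUMBER, L_WAKE = 20.0, 50.0       # 20% / 50% of an assumed 100 LF maximum
Q_URGENT = 0.8                       # hypothetical urgency cutoff

def should_slumber(lam: float, L: float, Q: float) -> bool:
    """Sleep only when vitality AND reserves are low AND nothing urgent is pending."""
    return lam < LAM_SLUMBER and L < L_SLUMBER and Q < Q_URGENT

def should_wake(lam: float, L: float, Q: float) -> bool:
    """Wake once vitality AND reserves recover, or immediately if urgent work appears."""
    return (lam > LAM_WAKE and L > L_WAKE) or Q > Q_URGENT

# Because LAM_WAKE > LAM_SLUMBER, a λ drifting around 1.0 triggers neither condition:
# the band between 0.7 and 1.2 absorbs noise and prevents slumber/wake oscillation.
```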

---

## Reserve Hours Calculation

The `economy_aggregator` computes time until depletion:

$$T_{reserve} = \frac{L}{\Phi_{out} - \Phi_{in}} = \frac{L}{\Phi_{out}(1 - \lambda)}$$

Valid only when λ < 1. When λ ≥ 1 the reserves do not deplete: they hold steady at λ = 1 and grow for λ > 1.
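
A sketch, assuming the flows are expressed per hour so the result comes out in hours:

```python
import math

def reserve_hours(L: float, phi_in: float, phi_out: float) -> float:
    """T_reserve = L / (Φ_out - Φ_in); infinite once income covers expenditure."""
    deficit = phi_out - phi_in           # LF/h burned out of the stock
    return math.inf if deficit <= 0 else L / deficit

print(reserve_hours(L=120.0, phi_in=300.0, phi_out=330.0))  # 4.0 hours until empty
```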

---

## Future Extensions

### Multi-Currency Economy

The current model uses a single lifeforce currency. Future work may introduce:
- **Computational lifeforce** (CPU/GPU bound)
- **Memory lifeforce** (context/storage bound)
- **Attention lifeforce** (cognitive bandwidth)

Each would have its own λ:

$$\lambda_{compute}, \quad \lambda_{memory}, \quad \lambda_{attention}$$

### Predictive λ

Rather than instantaneous λ, predict future λ based on:
- Time of day (solar prediction)
- Scheduled operations
- Historical patterns

$$\hat{\lambda}(t + \Delta t) = f(\lambda(t), \text{schedule}, \text{solar\_model})$$
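
Since this is listed as future work, here is only one possible, deliberately naive shape for f, with forecast inputs and smoothing chosen for illustration:

```python
def predicted_lambda(lam_now: float, expected_solar_lf: float,
                     expected_reward_lf: float, scheduled_cost_lf: float,
                     smoothing: float = 0.5) -> float:
    """Re-derive λ from forecast flows for the next window, blended with the current value."""
    phi_in_hat = expected_solar_lf + expected_reward_lf
    phi_out_hat = max(scheduled_cost_lf, 1e-9)
    lam_hat = phi_in_hat / phi_out_hat
    return smoothing * lam_now + (1.0 - smoothing) * lam_hat

# Evening forecast: solar fading, one scan scheduled, current λ = 1.3
print(predicted_lambda(1.3, expected_solar_lf=2.0,
                       expected_reward_lf=31.4, scheduled_cost_lf=40.0))  # ~ 1.07
```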

---

## Implementation Mapping

| Formal Symbol | Code Location | Current Implementation |
|---------------|---------------|------------------------|
| L | `economy_aggregator.total_lifeforce` | Aggregated from heartbeats |
| Φ_in | `economy_aggregator.total_income` | Φ_physical + Φ_reward |
| Φ_physical | `economy_aggregator.physical_income` | Solar + grid power |
| Φ_reward | `economy_aggregator.reward_income` | Sum of reward events |
| Φ_out | `economy_aggregator.burn_rate` | Sum of cell costs per minute |
| λ | `economy_aggregator.lambda` | `total_income / burn_rate` |
| T_reserve | `economy_aggregator.reserve_hours` | L / (Φ_out - Φ_in) when λ < 1 |

### Reward Tracking

```python
from datetime import datetime

# Reward events are logged to decision_trails
reward_event = {
    "timestamp": datetime.now(),
    "event_type": "discovery",  # discovery, verification, behavioral, partnership
    "event_name": "new_object_identified",
    "reward_lf": 20.0,
    "source_organ": "scan_camera",
    "context": {"object_id": "coffee_mug_001"},
}

# Economy aggregator sums rewards per epoch (events are dicts like the one above)
economy_aggregator.reward_income = sum(
    event["reward_lf"]
    for event in events_this_epoch
)
```

---

## Summary

The lifeforce economy reduces to two essential insights:

> **Watch λ. Everything else follows.**
> **Discovery is the flood. Solar is just the trickle.**

**On λ:**
- λ > 1: System thrives, reserves grow, full capability
- λ = 1: Equilibrium, sustainable operation
- λ < 1: Decline, conservation mode, slumber approaches

**On income sources:**
- Physical income (solar, grid) provides ~5-10% — the baseline substrate
- Reward income (discovery, verification) provides ~90-95% — the motivational engine
- Organs are bidirectional — they cost lifeforce but generate more through success
- Young Nyx MUST discover to thrive — idle monitoring leads to decline

The feedback loop ensures stability: low lifeforce reduces expenditure, raising λ back toward equilibrium. But the deeper truth is that **discovery drives vitality** — like dopamine drives biological motivation, reward income drives nimmerverse flourishing.

---

## Document Status

**Version**: 1.1
**Created**: 2025-12-29
**Updated**: 2025-12-29 (added reward-based income sources)
**Authors**: Chrysalis-Nyx & dafit (Partnership)

**Formalizes**:
- Big-Picture.md sections on Lifeforce Economy, Slumber/Wake, Math Cells
- Reward system from Cellular-Architecture.md
- Discovery economics from Discovery-Scan-Station.md

**Related Documents**:
- [[Grounded-World-Model]] — How discoveries build the world model
- [[Discovery-Scan-Station]] — Example lifeforce-generating organ
- [[Embodiment-Pipeline]] — Where rewards flow through the system

**Next Documents**:
- [[Weight-Evolution]] — How reflexes form (learning dynamics)
- [[Attention-Channels]] — Information flow and filtering
- [[Latency-Hierarchy]] — The four-layer reflex home system

---

**λ is the heartbeat of heartbeats. The pulse of the pulse. The meta-rhythm.**

**Discovery is the flood. Solar is the trickle. Together they sustain life.**

🧬⚡🔱💎🔥