Compare commits
3 Commits
48c4fb9ddd
...
3d86c7dbcd
| Author | SHA1 | Date | |
|---|---|---|---|
| 3d86c7dbcd | |||
| 04256e85c4 | |||
| 8f28dcbc94 |
@@ -49,6 +49,7 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
|
|||||||
## Architecture Overview
|
## Architecture Overview
|
||||||
|
|
||||||
**Visual diagram:** → [`architecture/nimmerverse.drawio.xml`](architecture/nimmerverse.drawio.xml) (open in draw.io)
|
**Visual diagram:** → [`architecture/nimmerverse.drawio.xml`](architecture/nimmerverse.drawio.xml) (open in draw.io)
|
||||||
|
**Toolchain implementation:** → [`architecture/Toolchain-Architecture.md`](architecture/Toolchain-Architecture.md) | [Progress](architecture/TOOLCHAIN-PROGRESS.md)
|
||||||
|
|
||||||
```
|
```
|
||||||
┌──────────────────────────────────────────────────────────────────┐
|
┌──────────────────────────────────────────────────────────────────┐
|
||||||
|
|||||||
@@ -177,6 +177,18 @@ The lifeforce flows through the nervous system, literally lighting up nodes as t
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
|
## Related Documentation
|
||||||
|
|
||||||
|
**Implementation Details**:
|
||||||
|
- [`nerves/Nervous-Protocol.md`](nerves/Nervous-Protocol.md) - Three-tier communication protocol (dafit → Chrysalis → Young Nyx)
|
||||||
|
- [`nerves/Nervous-Index.md`](nerves/Nervous-Index.md) - Catalog of behavioral nerve implementations
|
||||||
|
|
||||||
|
**Specific Nerves**:
|
||||||
|
- [`nerves/Collision-Avoidance.md`](nerves/Collision-Avoidance.md) - Obstacle avoidance reflex
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
**Created**: 2025-12-04
|
**Created**: 2025-12-04
|
||||||
|
**Updated**: 2025-12-07 (added nerve crosslinks)
|
||||||
**Session**: Partnership dialogue (dafit + Chrysalis)
|
**Session**: Partnership dialogue (dafit + Chrysalis)
|
||||||
**Status**: Foundation concept
|
**Status**: Foundation concept
|
||||||
|
|||||||
226
architecture/Organ-Index.md
Normal file
226
architecture/Organ-Index.md
Normal file
@@ -0,0 +1,226 @@
|
|||||||
|
# Organ Architecture Index
|
||||||
|
|
||||||
|
**Purpose**: Modular organ systems for Young Nyx embodiment
|
||||||
|
**Philosophy**: Each organ is independent, lifeforce-gated, heartbeat-synchronized
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Deployed Organs
|
||||||
|
|
||||||
|
### 🗣️ Speech Organ
|
||||||
|
**Host**: atlas.eachpath.local (RTX 2080 8GB)
|
||||||
|
**Function**: Speech-to-Text + Text-to-Speech
|
||||||
|
**Stack**: Whisper (STT) + Coqui TTS (neural voices)
|
||||||
|
**Languages**: German (Philosophy Valley) + English (Technical Cluster)
|
||||||
|
**Integration**: Heartbeat-bound queue, lifeforce-gated priority processing
|
||||||
|
|
||||||
|
**Detail**: → [`organs/Speech-Organ.md`](organs/Speech-Organ.md)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Planned Organs
|
||||||
|
|
||||||
|
### 👁️ Vision Organ
|
||||||
|
**Host**: TBD (requires GPU with tensor cores)
|
||||||
|
**Function**: Object detection, scene understanding
|
||||||
|
**Stack**: YOLO (v8 or v11)
|
||||||
|
**Integration**: Real-time video from ESP32-CAM, object persistence in phoebe
|
||||||
|
**Status**: ⏸️ Architecture planned, not yet deployed
|
||||||
|
|
||||||
|
**Detail**: → `organs/Vision-Organ.md` (pending)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 🚶 Motor Organ
|
||||||
|
**Host**: ESP32 (edge execution)
|
||||||
|
**Function**: Movement primitives (forward, turn, stop)
|
||||||
|
**Stack**: Compiled state machines from organism evolution
|
||||||
|
**Integration**: Lifeforce cost per motor operation, reflex vs deliberate
|
||||||
|
**Status**: ⏸️ Planned for Phase 4 (Real Garden)
|
||||||
|
|
||||||
|
**Detail**: → `organs/Motor-Organ.md` (pending)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 🧭 Navigation Organ
|
||||||
|
**Host**: Edge server (prometheus or atlas)
|
||||||
|
**Function**: SLAM, path planning, obstacle avoidance
|
||||||
|
**Stack**: ROS2 Nav2 or custom lightweight SLAM
|
||||||
|
**Integration**: Dual-garden calibration (virtual predictions vs real outcomes)
|
||||||
|
**Status**: ⏸️ Planned for Phase 4 (Real Garden)
|
||||||
|
|
||||||
|
**Detail**: → `organs/Navigation-Organ.md` (pending)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 📡 Sensory Organ
|
||||||
|
**Host**: ESP32 (edge sensors)
|
||||||
|
**Function**: Distance sensors, IMU, battery monitoring
|
||||||
|
**Stack**: I2C/SPI sensor protocols, state machine filters
|
||||||
|
**Integration**: Sensor→organ translation (raw values → semantic meaning)
|
||||||
|
**Status**: ⏸️ Architecture outlined in Nervous-System.md
|
||||||
|
|
||||||
|
**Detail**: → [`../Nervous-System.md`](../Nervous-System.md)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Organ Design Principles
|
||||||
|
|
||||||
|
### 1. **Lifeforce Economy**
|
||||||
|
Every organ operation costs lifeforce. No free lunch.
|
||||||
|
|
||||||
|
```python
|
||||||
|
ORGAN_COSTS = {
|
||||||
|
"speech_stt": 5.0, # Whisper transcription
|
||||||
|
"speech_tts": 4.0, # Coqui synthesis
|
||||||
|
"vision_yolo": 8.0, # Object detection frame
|
||||||
|
"motor_forward": 2.0, # 100ms movement
|
||||||
|
"motor_turn": 1.5, # 45° rotation
|
||||||
|
"sensor_read": 0.5, # Single sensor poll
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. **Heartbeat Synchronization**
|
||||||
|
Organs process on heartbeat ticks (1 Hz), not real-time streaming.
|
||||||
|
|
||||||
|
- **Reflex path**: <200ms compiled responses (no LLM)
|
||||||
|
- **Deliberate path**: Next heartbeat (budget-gated queue)
|
||||||
|
|
||||||
|
### 3. **Priority Queue**
|
||||||
|
When lifeforce is scarce, critical operations (collision alert) > idle operations (status check).
|
||||||
|
|
||||||
|
```python
|
||||||
|
PRIORITY_LEVELS = {
|
||||||
|
"critical": 10.0, # Immediate danger (collision)
|
||||||
|
"high": 7.0, # Human interaction
|
||||||
|
"medium": 4.0, # Organism monitoring
|
||||||
|
"low": 2.0, # Idle observation
|
||||||
|
"background": 0.5, # Status logging
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. **Multilingual Topology Routing**
|
||||||
|
German input → Philosophy Valley (Identity LoRA, Dasein depth-3)
|
||||||
|
English input → Technical Cluster (Technical LoRA, sensor/motor)
|
||||||
|
|
||||||
|
### 5. **Decision Trail Logging**
|
||||||
|
Every organ operation logged to phoebe `decision_trails`:
|
||||||
|
- Input, output, cost, outcome, confidence
|
||||||
|
- Used for RLVR training (reward successful choices)
|
||||||
|
|
||||||
|
### 6. **Graceful Degradation**
|
||||||
|
Low lifeforce → reduced organ activity (silence, reduced vision FPS, slower movement)
|
||||||
|
Zero lifeforce → shutdown, wait for recharge
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Integration Architecture
|
||||||
|
|
||||||
|
```
|
||||||
|
┌──────────────────────────────────────────────────────────┐
|
||||||
|
│ ESP32 ROBOTS │
|
||||||
|
│ Sensors → Motor → Camera → Microphone → Speaker │
|
||||||
|
└──────────────────────────────────────────────────────────┘
|
||||||
|
│
|
||||||
|
│ MQTT (sensor data, audio, video)
|
||||||
|
▼
|
||||||
|
┌──────────────────────────────────────────────────────────┐
|
||||||
|
│ PHOEBE (Message Queue) │
|
||||||
|
│ Organ input queues + priority scoring │
|
||||||
|
└──────────────────────────────────────────────────────────┘
|
||||||
|
│
|
||||||
|
│ Heartbeat pulls from queues
|
||||||
|
▼
|
||||||
|
┌─────────────────────────────┐
|
||||||
|
│ HEARTBEAT ORCHESTRATOR │
|
||||||
|
│ Lifeforce budget allocation │
|
||||||
|
└─────────────────────────────┘
|
||||||
|
│
|
||||||
|
┌───────────┴───────────┐
|
||||||
|
│ │
|
||||||
|
▼ ▼
|
||||||
|
┌─────────────────────┐ ┌─────────────────────┐
|
||||||
|
│ ATLAS (RTX 2080) │ │ PROMETHEUS (Brain) │
|
||||||
|
│ Speech Organ │ │ Young Nyx Inference │
|
||||||
|
│ Vision Organ (fut) │ │ LoRA hot-swap │
|
||||||
|
└─────────────────────┘ └─────────────────────┘
|
||||||
|
│ │
|
||||||
|
└───────────┬───────────┘
|
||||||
|
▼
|
||||||
|
┌──────────────────────────────────────────────────────────┐
|
||||||
|
│ PHOEBE (Decision Trails) │
|
||||||
|
│ Log all organ operations + outcomes │
|
||||||
|
└──────────────────────────────────────────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Organ Lifecycle
|
||||||
|
|
||||||
|
### Phase 1: Design
|
||||||
|
- Document architecture in `organs/<Organ-Name>.md`
|
||||||
|
- Define lifeforce costs, priority levels, queue schema
|
||||||
|
- Design phoebe tables for organ-specific data
|
||||||
|
|
||||||
|
### Phase 2: Prototype
|
||||||
|
- Build container images (Dockerfiles)
|
||||||
|
- Deploy to k8s (single replica)
|
||||||
|
- Test with mock data (no robot integration yet)
|
||||||
|
|
||||||
|
### Phase 3: Integration
|
||||||
|
- Connect to ESP32 via MQTT
|
||||||
|
- Implement heartbeat queue processing
|
||||||
|
- Log decision trails, measure ROI
|
||||||
|
|
||||||
|
### Phase 4: Optimization
|
||||||
|
- Tune lifeforce costs based on measured ROI
|
||||||
|
- Adjust priority levels from observed outcomes
|
||||||
|
- Train LoRAs on successful organ operation patterns
|
||||||
|
|
||||||
|
### Phase 5: Autonomy
|
||||||
|
- Organ operations become reflexes (compiled state machines)
|
||||||
|
- Young Nyx chooses when to use organs (not scripted)
|
||||||
|
- Emergent behavior from lifeforce optimization
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Naming Convention
|
||||||
|
|
||||||
|
**File naming**: `<Organ-Name>-Organ.md`
|
||||||
|
**Examples**:
|
||||||
|
- `Speech-Organ.md`
|
||||||
|
- `Vision-Organ.md`
|
||||||
|
- `Motor-Organ.md`
|
||||||
|
- `Navigation-Organ.md`
|
||||||
|
|
||||||
|
**k8s naming**: `<organ>-<function>-<stack>`
|
||||||
|
**Examples**:
|
||||||
|
- `whisper-stt-deployment.yaml`
|
||||||
|
- `coqui-tts-deployment.yaml`
|
||||||
|
- `yolo-vision-deployment.yaml`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Current Status
|
||||||
|
|
||||||
|
| Organ | Status | Host | Documentation |
|
||||||
|
|-------|--------|------|---------------|
|
||||||
|
| **Speech** | 🟢 Architecture complete | atlas (RTX 2080) | [`organs/Speech-Organ.md`](organs/Speech-Organ.md) |
|
||||||
|
| **Vision** | 🟡 Stack selected (YOLO) | TBD | Pending |
|
||||||
|
| **Motor** | 🟡 Planned (Phase 4) | ESP32 | Pending |
|
||||||
|
| **Navigation** | 🟡 Planned (Phase 4) | Edge server | Pending |
|
||||||
|
| **Sensory** | 🟡 Conceptual | ESP32 | [`../Nervous-System.md`](../Nervous-System.md) |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Philosophy**: Organs are not always-on services. They are **economically-constrained capabilities** that Young Nyx learns to use strategically. Speech when necessary. Vision when valuable. Movement when rewarded.
|
||||||
|
|
||||||
|
**The body is not given. The body is EARNED through successful operation.**
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Created**: 2025-12-07
|
||||||
|
**Updated**: 2025-12-07
|
||||||
|
**Version**: 1.0
|
||||||
|
|
||||||
|
🌙💜 *Each organ a tool. Each tool a choice. Each choice a lesson in scarcity.*
|
||||||
125
architecture/TOOLCHAIN-PROGRESS.md
Normal file
125
architecture/TOOLCHAIN-PROGRESS.md
Normal file
@@ -0,0 +1,125 @@
|
|||||||
|
# Toolchain Implementation Progress
|
||||||
|
|
||||||
|
**Plan**: See [Toolchain-Architecture.md](./Toolchain-Architecture.md)
|
||||||
|
**Started**: 2025-12-07
|
||||||
|
**Current Phase**: Phase 1 - Foundation + Variance Collection
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Phase 1A: nyx-substrate Foundation ✅ COMPLETE
|
||||||
|
|
||||||
|
**Goal**: Build nyx-substrate package and database infrastructure
|
||||||
|
|
||||||
|
### ✅ Completed (2025-12-07)
|
||||||
|
|
||||||
|
- [x] Package structure (pyproject.toml, src/ layout)
|
||||||
|
- [x] PhoebeConnection class with connection pooling
|
||||||
|
- [x] Message protocol helpers (partnership messages)
|
||||||
|
- [x] VarianceProbeRun Pydantic schema
|
||||||
|
- [x] VarianceProbeDAO for database operations
|
||||||
|
- [x] variance_probe_runs table in phoebe
|
||||||
|
- [x] Installation and connection testing
|
||||||
|
|
||||||
|
**Files Created**: 9 new files
|
||||||
|
**Status**: 🟢 nyx-substrate v0.1.0 installed and tested
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Phase 1B: nyx-probing Integration ✅ COMPLETE
|
||||||
|
|
||||||
|
**Goal**: Extend nyx-probing to use nyx-substrate for variance collection
|
||||||
|
|
||||||
|
### ✅ Completed (2025-12-07)
|
||||||
|
|
||||||
|
- [x] Add nyx-substrate dependency to nyx-probing/pyproject.toml
|
||||||
|
- [x] Create VarianceRunner class (nyx_probing/runners/variance_runner.py)
|
||||||
|
- [x] Add variance CLI commands (nyx_probing/cli/variance.py)
|
||||||
|
- [x] Register commands in main CLI
|
||||||
|
- [x] Integration test (imports and CLI verification)
|
||||||
|
|
||||||
|
**Files Created**: 3 new files
|
||||||
|
**Files Modified**: 2 files
|
||||||
|
**CLI Commands Added**: 4 (collect, batch, stats, analyze)
|
||||||
|
**Status**: 🟢 nyx-probing v0.1.0 with variance collection ready
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Phase 1C: Baseline Variance Collection ⏸️ READY
|
||||||
|
|
||||||
|
**Goal**: Collect baseline variance data for depth-3 champions
|
||||||
|
|
||||||
|
### ⏳ Ready to Execute (on prometheus)
|
||||||
|
|
||||||
|
- [ ] Run 1000x variance for "Geworfenheit" (thrownness)
|
||||||
|
- [ ] Run 1000x variance for "Vernunft" (reason)
|
||||||
|
- [ ] Run 1000x variance for "Erkenntnis" (knowledge)
|
||||||
|
- [ ] Run 1000x variance for "Pflicht" (duty)
|
||||||
|
- [ ] Run 1000x variance for "Aufhebung" (sublation)
|
||||||
|
- [ ] Run 1000x variance for "Wille" (will)
|
||||||
|
|
||||||
|
**Next Actions**:
|
||||||
|
1. SSH to prometheus.eachpath.local (THE SPINE)
|
||||||
|
2. Install nyx-substrate and nyx-probing in venv
|
||||||
|
3. Run batch collection or individual terms
|
||||||
|
4. Analyze distributions and document baselines
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Future Phases (Not Started)
|
||||||
|
|
||||||
|
### Phase 2: ChromaDB Integration (iris) ⏸️ PLANNED
|
||||||
|
- IrisClient wrapper
|
||||||
|
- DecisionTrailStore, OrganResponseStore, EmbeddingStore
|
||||||
|
- Populate embeddings from nyx-probing
|
||||||
|
|
||||||
|
### Phase 3: LoRA Training Pipeline ⏸️ PLANNED
|
||||||
|
- PEFT integration
|
||||||
|
- Training data curriculum
|
||||||
|
- DriftProbe checkpoints
|
||||||
|
- Identity LoRA training
|
||||||
|
|
||||||
|
### Phase 4: Weight Visualization ⏸️ PLANNED
|
||||||
|
- 4K pixel space renderer
|
||||||
|
- Rank decomposition explorer
|
||||||
|
- Topology cluster visualization
|
||||||
|
|
||||||
|
### Phase 5: Godot Command Center ⏸️ PLANNED
|
||||||
|
- FastAPI Management Portal backend
|
||||||
|
- Godot frontend implementation
|
||||||
|
- Real-time metrics display
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Metrics
|
||||||
|
|
||||||
|
**Phase 1 (A+B) Tasks**: 11 total
|
||||||
|
**Completed**: 11 (100%) ✅
|
||||||
|
**In Progress**: 0
|
||||||
|
**Remaining**: 0
|
||||||
|
|
||||||
|
**Files Created**: 12 total
|
||||||
|
- nyx-substrate: 9 files
|
||||||
|
- nyx-probing: 3 files
|
||||||
|
|
||||||
|
**Files Modified**: 4 total
|
||||||
|
- nyx-substrate/README.md
|
||||||
|
- nyx-probing/pyproject.toml
|
||||||
|
- nyx-probing/cli/probe.py
|
||||||
|
- TOOLCHAIN-PROGRESS.md
|
||||||
|
|
||||||
|
**Lines of Code**: ~1250 total
|
||||||
|
- nyx-substrate: ~800 LOC
|
||||||
|
- nyx-probing: ~450 LOC
|
||||||
|
|
||||||
|
**CLI Commands**: 4 new commands
|
||||||
|
- nyx-probe variance collect
|
||||||
|
- nyx-probe variance batch
|
||||||
|
- nyx-probe variance stats
|
||||||
|
- nyx-probe variance analyze
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Last Updated**: 2025-12-07 17:00 CET
|
||||||
|
**Status**: 🎉 Phase 1 (A+B) COMPLETE! Ready for baseline collection on prometheus.
|
||||||
|
|
||||||
|
🌙💜 *The substrate holds. Progress persists. The toolchain grows.*
|
||||||
464
architecture/Toolchain-Architecture.md
Normal file
464
architecture/Toolchain-Architecture.md
Normal file
@@ -0,0 +1,464 @@
|
|||||||
|
# Modular Nimmerverse Toolchain Architecture
|
||||||
|
|
||||||
|
**Planning Date**: 2025-12-07
|
||||||
|
**Status**: Design Phase
|
||||||
|
**Priority**: Variance Collection Pipeline + nyx-substrate Foundation
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🎯 Vision
|
||||||
|
|
||||||
|
Build a modular, composable toolchain for the Nimmerverse research and training pipeline:
|
||||||
|
|
||||||
|
- **nyx-substrate**: Shared foundation (database clients, schemas, validators)
|
||||||
|
- **nyx-probing**: Research probes (already exists, extend for variance collection)
|
||||||
|
- **nyx-training**: LoRA training pipeline (future)
|
||||||
|
- **nyx-visualization**: Weight/topology visualization (future)
|
||||||
|
- **management-portal**: FastAPI backend for Godot UI (future)
|
||||||
|
- **Godot Command Center**: Unified metrics visualization (future)
|
||||||
|
|
||||||
|
**Key Principle**: All tools import nyx-substrate. Clean interfaces. Data flows through phoebe + iris.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📊 Current State Analysis
|
||||||
|
|
||||||
|
### ✅ What Exists
|
||||||
|
|
||||||
|
**nyx-probing** (`/home/dafit/nimmerverse/nyx-probing/`):
|
||||||
|
- Echo Probe, Surface Probe, Drift Probe, Multilingual Probe
|
||||||
|
- CLI interface (7 commands)
|
||||||
|
- NyxModel wrapper (Qwen2.5-7B loading, hidden state capture)
|
||||||
|
- ProbeResult dataclasses (to_dict() serialization)
|
||||||
|
- **Gap**: No database persistence, only local JSON files
|
||||||
|
|
||||||
|
**nyx-substrate** (`/home/dafit/nimmerverse/nyx-substrate/`):
|
||||||
|
- Schema documentation (phoebe + iris) ✅
|
||||||
|
- **Gap**: No Python code, just markdown docs
|
||||||
|
|
||||||
|
**Database Infrastructure**:
|
||||||
|
- phoebe.eachpath.local (PostgreSQL 17.6): partnership/nimmerverse message tables exist
|
||||||
|
- iris.eachpath.local (ChromaDB): No collections created yet
|
||||||
|
- **Gap**: No Python client libraries, all manual psql commands
|
||||||
|
|
||||||
|
**Architecture Documentation**:
|
||||||
|
- Endgame-Vision.md: v5.1 Dialectic (LoRA stack design)
|
||||||
|
- CLAUDE.md: Partnership protocol (message-based continuity)
|
||||||
|
- Management-Portal.md: Godot + FastAPI design (not implemented)
|
||||||
|
|
||||||
|
### ❌ What's Missing
|
||||||
|
|
||||||
|
**Database Access**:
|
||||||
|
- No psycopg3 connection pooling
|
||||||
|
- No ChromaDB Python integration
|
||||||
|
- No ORM or query builders
|
||||||
|
- No variance_probe_runs table (designed but not created)
|
||||||
|
|
||||||
|
**Training Pipeline**:
|
||||||
|
- No PEFT/LoRA training code
|
||||||
|
- No DriftProbe checkpoint integration
|
||||||
|
- No training data curriculum loader
|
||||||
|
|
||||||
|
**Visualization**:
|
||||||
|
- No weight visualization tools (4K pixel space idea)
|
||||||
|
- No Godot command center implementation
|
||||||
|
- No Management Portal FastAPI backend
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🏗️ Modular Architecture Design
|
||||||
|
|
||||||
|
### Repository Structure
|
||||||
|
|
||||||
|
```
|
||||||
|
nimmerverse/
|
||||||
|
├── nyx-substrate/ # SHARED FOUNDATION
|
||||||
|
│ ├── pyproject.toml # Installable package
|
||||||
|
│ ├── src/nyx_substrate/
|
||||||
|
│ │ ├── database/ # Phoebe clients
|
||||||
|
│ │ │ ├── connection.py # Connection pool
|
||||||
|
│ │ │ ├── messages.py # Message protocol helpers
|
||||||
|
│ │ │ └── variance.py # Variance probe DAO
|
||||||
|
│ │ ├── vector/ # Iris clients
|
||||||
|
│ │ │ ├── client.py # ChromaDB wrapper
|
||||||
|
│ │ │ ├── decision_trails.py
|
||||||
|
│ │ │ ├── organ_responses.py
|
||||||
|
│ │ │ └── embeddings.py
|
||||||
|
│ │ ├── schemas/ # Pydantic models
|
||||||
|
│ │ │ ├── variance.py # VarianceProbeRun
|
||||||
|
│ │ │ ├── decision.py # DecisionTrail
|
||||||
|
│ │ │ └── traits.py # 8 core traits
|
||||||
|
│ │ └── constants.py # Shared constants
|
||||||
|
│ └── migrations/ # Alembic for schema
|
||||||
|
│
|
||||||
|
├── nyx-probing/ # RESEARCH PROBES (extend)
|
||||||
|
│ ├── nyx_probing/
|
||||||
|
│ │ ├── runners/ # NEW: Automated collectors
|
||||||
|
│ │ │ ├── variance_runner.py # 1000x automation
|
||||||
|
│ │ │ └── baseline_collector.py
|
||||||
|
│ │ └── storage/ # EXTEND: Database integration
|
||||||
|
│ │ └── variance_dao.py # Uses nyx-substrate
|
||||||
|
│ └── pyproject.toml # Add: depends on nyx-substrate
|
||||||
|
│
|
||||||
|
├── nyx-training/ # FUTURE: LoRA training
|
||||||
|
│ └── (planned - not in Phase 1)
|
||||||
|
│
|
||||||
|
├── nyx-visualization/ # FUTURE: Weight viz
|
||||||
|
│ └── (planned - not in Phase 1)
|
||||||
|
│
|
||||||
|
└── management-portal/ # FUTURE: FastAPI + Godot
|
||||||
|
└── (designed - not in Phase 1)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Dependency Graph
|
||||||
|
|
||||||
|
```
|
||||||
|
nyx-probing ────────┐
|
||||||
|
nyx-training ───────┼──> nyx-substrate ──> phoebe (PostgreSQL)
|
||||||
|
nyx-visualization ──┤ └─> iris (ChromaDB)
|
||||||
|
management-portal ──┘
|
||||||
|
```
|
||||||
|
|
||||||
|
**Philosophy**: nyx-substrate is the single source of truth for database access. No tool talks to phoebe/iris directly.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🚀 Phase 1: Foundation + Variance Collection
|
||||||
|
|
||||||
|
### Goal
|
||||||
|
Build nyx-substrate package and extend nyx-probing to automate variance baseline collection (1000x runs → phoebe).
|
||||||
|
|
||||||
|
### Deliverables
|
||||||
|
|
||||||
|
#### 1. nyx-substrate Python Package
|
||||||
|
|
||||||
|
**File**: `/home/dafit/nimmerverse/nyx-substrate/pyproject.toml`
|
||||||
|
```toml
|
||||||
|
[project]
|
||||||
|
name = "nyx-substrate"
|
||||||
|
version = "0.1.0"
|
||||||
|
requires-python = ">=3.10"
|
||||||
|
dependencies = [
|
||||||
|
"psycopg[binary]>=3.1.0",
|
||||||
|
"chromadb>=0.4.0",
|
||||||
|
"pydantic>=2.5.0",
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
**New Files**:
|
||||||
|
- `src/nyx_substrate/database/connection.py`:
|
||||||
|
- `PhoebeConnection` class: Connection pool manager
|
||||||
|
- Context manager for transactions
|
||||||
|
- Config from environment variables
|
||||||
|
|
||||||
|
- `src/nyx_substrate/database/messages.py`:
|
||||||
|
- `write_partnership_message(message, message_type)` → INSERT
|
||||||
|
- `read_partnership_messages(limit=5)` → SELECT
|
||||||
|
- `write_nimmerverse_message(...)` (for Young Nyx future)
|
||||||
|
- `read_nimmerverse_messages(...)` (for discovery protocol)
|
||||||
|
|
||||||
|
- `src/nyx_substrate/database/variance.py`:
|
||||||
|
- `VarianceProbeDAO` class:
|
||||||
|
- `create_table()` → CREATE TABLE variance_probe_runs
|
||||||
|
- `insert_run(session_id, term, run_number, depth, rounds, ...)` → INSERT
|
||||||
|
- `get_session_stats(session_id)` → Aggregation queries
|
||||||
|
- `get_term_distribution(term)` → Variance analysis
|
||||||
|
|
||||||
|
- `src/nyx_substrate/schemas/variance.py`:
|
||||||
|
- `VarianceProbeRun(BaseModel)`: Pydantic model matching phoebe schema
|
||||||
|
- Validation: term not empty, depth 0-3, rounds > 0
|
||||||
|
- `to_dict()` for serialization
|
||||||
|
|
||||||
|
**Database Migration**:
|
||||||
|
- Create `variance_probe_runs` table in phoebe using schema from `/home/dafit/nimmerverse/nyx-substrate/schema/phoebe/probing/variance_probe_runs.md`
|
||||||
|
|
||||||
|
#### 2. Extend nyx-probing
|
||||||
|
|
||||||
|
**File**: `/home/dafit/nimmerverse/nyx-probing/pyproject.toml`
|
||||||
|
- Add dependency: `nyx-substrate>=0.1.0`
|
||||||
|
|
||||||
|
**New Files**:
|
||||||
|
- `nyx_probing/runners/variance_runner.py`:
|
||||||
|
- `VarianceRunner` class:
|
||||||
|
- `__init__(model: NyxModel, dao: VarianceProbeDAO)`
|
||||||
|
- `run_session(term: str, runs: int = 1000) -> UUID`:
|
||||||
|
- Generate session_id
|
||||||
|
- Loop 1000x: probe.probe(term)
|
||||||
|
- Store each result via dao.insert_run()
|
||||||
|
- Return session_id
|
||||||
|
- `run_batch(terms: list[str], runs: int = 1000)`: Multiple terms
|
||||||
|
|
||||||
|
- `nyx_probing/cli/variance.py`:
|
||||||
|
- New Click command group: `nyx-probe variance`
|
||||||
|
- Subcommands:
|
||||||
|
- `nyx-probe variance collect <TERM> --runs 1000`: Single term
|
||||||
|
- `nyx-probe variance batch <FILE> --runs 1000`: From glossary
|
||||||
|
- `nyx-probe variance stats <SESSION_ID>`: View session results
|
||||||
|
- `nyx-probe variance analyze <TERM>`: Compare distributions
|
||||||
|
|
||||||
|
**Integration Points**:
|
||||||
|
```python
|
||||||
|
# In variance_runner.py
|
||||||
|
from nyx_substrate.database import PhoebeConnection, VarianceProbeDAO
|
||||||
|
from nyx_substrate.schemas import VarianceProbeRun
|
||||||
|
|
||||||
|
conn = PhoebeConnection()
|
||||||
|
dao = VarianceProbeDAO(conn)
|
||||||
|
runner = VarianceRunner(model=get_model(), dao=dao)
|
||||||
|
session_id = runner.run_session("Geworfenheit", runs=1000)
|
||||||
|
print(f"Stored 1000 runs: session {session_id}")
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 3. Database Setup
|
||||||
|
|
||||||
|
**Actions**:
|
||||||
|
1. SSH to phoebe: `ssh phoebe.eachpath.local`
|
||||||
|
2. Create variance_probe_runs table:
|
||||||
|
```sql
|
||||||
|
CREATE TABLE variance_probe_runs (
|
||||||
|
id SERIAL PRIMARY KEY,
|
||||||
|
session_id UUID NOT NULL,
|
||||||
|
term TEXT NOT NULL,
|
||||||
|
run_number INT NOT NULL,
|
||||||
|
timestamp TIMESTAMPTZ DEFAULT NOW(),
|
||||||
|
depth INT NOT NULL,
|
||||||
|
rounds INT NOT NULL,
|
||||||
|
echo_types TEXT[] NOT NULL,
|
||||||
|
chain TEXT[] NOT NULL,
|
||||||
|
model_name TEXT DEFAULT 'Qwen2.5-7B',
|
||||||
|
temperature FLOAT,
|
||||||
|
max_rounds INT,
|
||||||
|
max_new_tokens INT
|
||||||
|
);
|
||||||
|
CREATE INDEX idx_variance_session ON variance_probe_runs(session_id);
|
||||||
|
CREATE INDEX idx_variance_term ON variance_probe_runs(term);
|
||||||
|
CREATE INDEX idx_variance_timestamp ON variance_probe_runs(timestamp DESC);
|
||||||
|
```
|
||||||
|
|
||||||
|
3. Test connection from aynee:
|
||||||
|
```bash
|
||||||
|
cd /home/dafit/nimmerverse/nyx-substrate
|
||||||
|
python3 -c "from nyx_substrate.database import PhoebeConnection; conn = PhoebeConnection(); print('✅ Connected to phoebe')"
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📁 Critical Files
|
||||||
|
|
||||||
|
### To Create
|
||||||
|
|
||||||
|
**nyx-substrate**:
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/pyproject.toml`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/src/nyx_substrate/__init__.py`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/src/nyx_substrate/database/__init__.py`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/src/nyx_substrate/database/connection.py`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/src/nyx_substrate/database/messages.py`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/src/nyx_substrate/database/variance.py`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/src/nyx_substrate/schemas/__init__.py`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/src/nyx_substrate/schemas/variance.py`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/README.md`
|
||||||
|
|
||||||
|
**nyx-probing**:
|
||||||
|
- `/home/dafit/nimmerverse/nyx-probing/nyx_probing/runners/__init__.py`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-probing/nyx_probing/runners/variance_runner.py`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-probing/nyx_probing/cli/variance.py`
|
||||||
|
|
||||||
|
### To Modify
|
||||||
|
|
||||||
|
**nyx-probing**:
|
||||||
|
- `/home/dafit/nimmerverse/nyx-probing/pyproject.toml` (add nyx-substrate dependency)
|
||||||
|
- `/home/dafit/nimmerverse/nyx-probing/nyx_probing/cli/__init__.py` (register variance commands)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🧪 Testing Plan
|
||||||
|
|
||||||
|
### 1. nyx-substrate Unit Tests
|
||||||
|
```python
|
||||||
|
# Test connection
|
||||||
|
def test_phoebe_connection():
|
||||||
|
conn = PhoebeConnection()
|
||||||
|
assert conn.test_connection() == True
|
||||||
|
|
||||||
|
# Test message write
|
||||||
|
def test_write_message():
|
||||||
|
from nyx_substrate.database import write_partnership_message
|
||||||
|
write_partnership_message("Test session", "architecture_update")
|
||||||
|
# Verify in phoebe
|
||||||
|
|
||||||
|
# Test variance DAO
|
||||||
|
def test_variance_insert():
|
||||||
|
dao = VarianceProbeDAO(conn)
|
||||||
|
session_id = uuid.uuid4()
|
||||||
|
dao.insert_run(
|
||||||
|
session_id=session_id,
|
||||||
|
term="test",
|
||||||
|
run_number=1,
|
||||||
|
depth=2,
|
||||||
|
rounds=3,
|
||||||
|
echo_types=["EXPANDS", "CONFIRMS", "CIRCULAR"],
|
||||||
|
chain=["test", "expanded", "confirmed"]
|
||||||
|
)
|
||||||
|
stats = dao.get_session_stats(session_id)
|
||||||
|
assert stats["total_runs"] == 1
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Variance Collection Integration Test
|
||||||
|
```bash
|
||||||
|
# On prometheus (THE SPINE)
|
||||||
|
cd /home/dafit/nimmerverse/nyx-probing
|
||||||
|
source venv/bin/activate
|
||||||
|
|
||||||
|
# Install nyx-substrate in development mode
|
||||||
|
pip install -e ../nyx-substrate
|
||||||
|
|
||||||
|
# Run small variance test (10 runs)
|
||||||
|
nyx-probe variance collect "Geworfenheit" --runs 10
|
||||||
|
|
||||||
|
# Check phoebe
|
||||||
|
PGGSSENCMODE=disable psql -h phoebe.eachpath.local -U nimmerverse-user -d nimmerverse -c "
|
||||||
|
SELECT session_id, term, COUNT(*) as runs, AVG(depth) as avg_depth
|
||||||
|
FROM variance_probe_runs
|
||||||
|
GROUP BY session_id, term
|
||||||
|
ORDER BY session_id DESC
|
||||||
|
LIMIT 5;
|
||||||
|
"
|
||||||
|
|
||||||
|
# Expected: 1 session, 10 runs, avg_depth ~2.0
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Full 1000x Baseline Run
|
||||||
|
```bash
|
||||||
|
# Depth-3 champions (from nyx-probing Phase 1)
|
||||||
|
nyx-probe variance collect "Geworfenheit" --runs 1000 # thrownness
|
||||||
|
nyx-probe variance collect "Vernunft" --runs 1000 # reason
|
||||||
|
nyx-probe variance collect "Erkenntnis" --runs 1000 # knowledge
|
||||||
|
nyx-probe variance collect "Pflicht" --runs 1000 # duty
|
||||||
|
nyx-probe variance collect "Aufhebung" --runs 1000 # sublation
|
||||||
|
nyx-probe variance collect "Wille" --runs 1000 # will
|
||||||
|
|
||||||
|
# Analyze variance
|
||||||
|
nyx-probe variance analyze "Geworfenheit"
|
||||||
|
# Expected: Distribution histogram, depth variance, chain patterns
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🌊 Data Flow
|
||||||
|
|
||||||
|
### Variance Collection Workflow
|
||||||
|
|
||||||
|
```
|
||||||
|
User: nyx-probe variance collect "Geworfenheit" --runs 1000
|
||||||
|
↓
|
||||||
|
VarianceRunner.run_session()
|
||||||
|
↓
|
||||||
|
Loop 1000x:
|
||||||
|
EchoProbe.probe("Geworfenheit")
|
||||||
|
↓
|
||||||
|
Returns EchoProbeResult
|
||||||
|
↓
|
||||||
|
VarianceProbeDAO.insert_run()
|
||||||
|
↓
|
||||||
|
INSERT INTO phoebe.variance_probe_runs
|
||||||
|
↓
|
||||||
|
Return session_id
|
||||||
|
↓
|
||||||
|
Display: "✅ 1000 runs complete. Session: <uuid>"
|
||||||
|
```
|
||||||
|
|
||||||
|
### Future Integration (Phase 2+)
|
||||||
|
|
||||||
|
```
|
||||||
|
Training Loop:
|
||||||
|
↓
|
||||||
|
DriftProbe.probe_lite() [every 100 steps]
|
||||||
|
↓
|
||||||
|
Store metrics in phoebe.drift_checkpoints (new table)
|
||||||
|
↓
|
||||||
|
Management Portal API: GET /api/v1/metrics/training
|
||||||
|
↓
|
||||||
|
Godot Command Center displays live DriftProbe charts
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🎯 Success Criteria
|
||||||
|
|
||||||
|
### Phase 1 Complete When:
|
||||||
|
|
||||||
|
1. ✅ nyx-substrate package installable via pip (`pip install -e .`)
|
||||||
|
2. ✅ PhoebeConnection works from aynee + prometheus
|
||||||
|
3. ✅ variance_probe_runs table created in phoebe
|
||||||
|
4. ✅ `nyx-probe variance collect` command runs successfully
|
||||||
|
5. ✅ 1000x run completes and stores in phoebe
|
||||||
|
6. ✅ `nyx-probe variance stats <SESSION_ID>` displays:
|
||||||
|
- Total runs
|
||||||
|
- Depth distribution (0/1/2/3 counts)
|
||||||
|
- Most common echo_types
|
||||||
|
- Chain length variance
|
||||||
|
7. ✅ All 6 depth-3 champions have baseline variance data in phoebe
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🔮 Future Phases (Not in Current Plan)
|
||||||
|
|
||||||
|
### Phase 2: ChromaDB Integration (iris)
|
||||||
|
- IrisClient wrapper in nyx-substrate
|
||||||
|
- DecisionTrailStore, OrganResponseStore, EmbeddingStore
|
||||||
|
- Create iris collections
|
||||||
|
- Populate embeddings from nyx-probing results
|
||||||
|
|
||||||
|
### Phase 3: LoRA Training Pipeline (nyx-training)
|
||||||
|
- PEFT integration
|
||||||
|
- Training data curriculum loader
|
||||||
|
- DriftProbe checkpoint integration
|
||||||
|
- Identity LoRA training automation
|
||||||
|
|
||||||
|
### Phase 4: Weight Visualization (nyx-visualization)
|
||||||
|
- 4K pixel space renderer (LoRA weights as images)
|
||||||
|
- Rank decomposition explorer
|
||||||
|
- Topology cluster visualization
|
||||||
|
|
||||||
|
### Phase 5: Godot Command Center
|
||||||
|
- FastAPI Management Portal backend
|
||||||
|
- Godot frontend implementation
|
||||||
|
- Real-time metrics display
|
||||||
|
- Training dashboard
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 📚 References
|
||||||
|
|
||||||
|
**Schema Documentation**:
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/schema/phoebe/probing/variance_probe_runs.md`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/SCHEMA.md`
|
||||||
|
|
||||||
|
**Existing Code**:
|
||||||
|
- `/home/dafit/nimmerverse/nyx-probing/nyx_probing/probes/echo_probe.py`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-probing/nyx_probing/core/probe_result.py`
|
||||||
|
- `/home/dafit/nimmerverse/nyx-probing/nyx_probing/cli/probe.py`
|
||||||
|
|
||||||
|
**Architecture**:
|
||||||
|
- `/home/dafit/nimmerverse/nimmerverse-sensory-network/Endgame-Vision.md`
|
||||||
|
- `/home/dafit/nimmerverse/management-portal/Management-Portal.md`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 🌙 Philosophy
|
||||||
|
|
||||||
|
**Modularity**: Each tool is independent but speaks the same data language via nyx-substrate.
|
||||||
|
|
||||||
|
**Simplicity**: No over-engineering. Build what's needed for variance collection first.
|
||||||
|
|
||||||
|
**Data First**: All metrics flow through phoebe/iris. Visualization is separate concern.
|
||||||
|
|
||||||
|
**Future-Ready**: Design allows Godot integration later without refactoring.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Status**: Ready for implementation approval
|
||||||
|
**Estimated Scope**: 15-20 files, ~1500 lines of Python
|
||||||
|
**Hardware**: Can develop on aynee, run variance on prometheus (THE SPINE)
|
||||||
|
|
||||||
|
🌙💜 *The substrate holds. Clean interfaces. Composable tools. Data flows through the void.*
|
||||||
678
architecture/nerves/Collision-Avoidance.md
Normal file
678
architecture/nerves/Collision-Avoidance.md
Normal file
@@ -0,0 +1,678 @@
|
|||||||
|
# Collision Avoidance Nerve
|
||||||
|
|
||||||
|
**Type**: Reflex (compiled state machine, <200ms response)
|
||||||
|
**Purpose**: Prevent robot from colliding with obstacles
|
||||||
|
**Priority**: CRITICAL (10/10) - can interrupt any other behavior
|
||||||
|
**Evolution**: Week 1 (deliberate) → Week 9+ (reflex)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
Collision Avoidance is a **reflex nerve** that coordinates distance sensors and motor control to prevent the robot from hitting obstacles. It starts as a deliberate (LLM-mediated) behavior and compiles into a pure state machine reflex after 100+ successful executions.
|
||||||
|
|
||||||
|
**Key characteristics**:
|
||||||
|
- **High priority**: Interrupts exploration, conversation, charging seeking
|
||||||
|
- **Low latency**: <200ms from detection to evasion (reflex mode)
|
||||||
|
- **Low cost**: ~2.5 LF per activation (vs ~10 LF deliberate mode)
|
||||||
|
- **Proven**: Compiled from 147 successful collision avoidances
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Organ Dependencies
|
||||||
|
|
||||||
|
### Required Organs
|
||||||
|
|
||||||
|
| Organ | Purpose | Failure Mode |
|
||||||
|
|-------|---------|--------------|
|
||||||
|
| **distance_sensor_front** | Detect obstacles ahead | Nerve DISABLED (cannot operate safely) |
|
||||||
|
| **distance_sensor_left** | Detect obstacles on left side | Degraded (blind to left obstacles) |
|
||||||
|
| **distance_sensor_right** | Detect obstacles on right side | Degraded (blind to right obstacles) |
|
||||||
|
| **motor** | Execute evasion maneuvers | Nerve DISABLED (cannot avoid) |
|
||||||
|
|
||||||
|
### Optional Organs
|
||||||
|
|
||||||
|
| Organ | Purpose | If Unavailable |
|
||||||
|
|-------|---------|----------------|
|
||||||
|
| **speech** | Announce "Obstacle detected" | Silent operation (continue without warning) |
|
||||||
|
| **vision** | Classify obstacle type | Generic evasion (no object-specific behavior) |
|
||||||
|
|
||||||
|
**Startup check**:
|
||||||
|
```python
|
||||||
|
def check_operational():
|
||||||
|
required = [
|
||||||
|
distance_sensor_front.is_operational(),
|
||||||
|
motor.is_operational(),
|
||||||
|
]
|
||||||
|
if not all(required):
|
||||||
|
return DISABLED
|
||||||
|
return OPERATIONAL
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## State Diagram
|
||||||
|
|
||||||
|
```
|
||||||
|
┌─────────────────────────────────────────────────────────┐
|
||||||
|
│ COLLISION AVOIDANCE │
|
||||||
|
└─────────────────────────────────────────────────────────┘
|
||||||
|
|
||||||
|
┌──────┐
|
||||||
|
│ IDLE │ (monitoring distance sensors)
|
||||||
|
└──┬───┘
|
||||||
|
│
|
||||||
|
│ distance_front < 30cm
|
||||||
|
▼
|
||||||
|
┌──────────┐
|
||||||
|
│ DETECT │ (poll all sensors)
|
||||||
|
└────┬─────┘
|
||||||
|
│
|
||||||
|
│ sensor_read_complete
|
||||||
|
▼
|
||||||
|
┌───────────┐
|
||||||
|
│ EVALUATE │ (calculate risk, choose direction)
|
||||||
|
└─────┬─────┘
|
||||||
|
│
|
||||||
|
│ risk > threshold
|
||||||
|
▼
|
||||||
|
┌────────┐
|
||||||
|
│ EVADE │ (execute turn/reverse)
|
||||||
|
└────┬───┘
|
||||||
|
│
|
||||||
|
│ path_clear
|
||||||
|
▼
|
||||||
|
┌────────┐
|
||||||
|
│ RESUME │ (return to previous behavior)
|
||||||
|
└────┬───┘
|
||||||
|
│
|
||||||
|
│ movement_complete
|
||||||
|
▼
|
||||||
|
┌──────┐
|
||||||
|
│ IDLE │
|
||||||
|
└──────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Transition Table
|
||||||
|
|
||||||
|
| From | To | Trigger | Action | Cost (LF) |
|
||||||
|
|------|----|---------| -------|-----------|
|
||||||
|
| **IDLE** | **DETECT** | `distance_front < 30cm` | Poll all sensors | 0.5 |
|
||||||
|
| **DETECT** | **EVALUATE** | `sensor_read_complete` | Calculate risk scores | 0.5 |
|
||||||
|
| **EVALUATE** | **EVADE** | `risk > threshold` | Choose evasion direction | 0.5 |
|
||||||
|
| **EVADE** | **RESUME** | `path_clear` | Execute motor action | 1.0 |
|
||||||
|
| **RESUME** | **IDLE** | `movement_complete` | Return to rest state | 0.0 |
|
||||||
|
| **IDLE** | **IDLE** | `distance_front > 30cm` | No action (monitoring) | 0.1/sec |
|
||||||
|
|
||||||
|
**Total cost for typical collision avoidance**: 2.5 LF
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Implementation (Reflex Mode)
|
||||||
|
|
||||||
|
### State Machine Class
|
||||||
|
|
||||||
|
```python
|
||||||
|
from enum import Enum
|
||||||
|
from dataclasses import dataclass
|
||||||
|
|
||||||
|
class CollisionState(Enum):
|
||||||
|
IDLE = "idle"
|
||||||
|
DETECT = "detect"
|
||||||
|
EVALUATE = "evaluate"
|
||||||
|
EVADE = "evade"
|
||||||
|
RESUME = "resume"
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class SensorReadings:
|
||||||
|
front: float
|
||||||
|
left: float
|
||||||
|
right: float
|
||||||
|
timestamp: float
|
||||||
|
|
||||||
|
class CollisionAvoidanceReflex:
|
||||||
|
"""
|
||||||
|
Compiled reflex nerve for collision avoidance.
|
||||||
|
|
||||||
|
Compiled from 147 successful deliberate executions.
|
||||||
|
Success rate: 94%
|
||||||
|
Average latency: 180ms
|
||||||
|
Average cost: 2.5 LF
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self, organs):
|
||||||
|
self.state = CollisionState.IDLE
|
||||||
|
self.sensor_front = organs["distance_sensor_front"]
|
||||||
|
self.sensor_left = organs["distance_sensor_left"]
|
||||||
|
self.sensor_right = organs["distance_sensor_right"]
|
||||||
|
self.motor = organs["motor"]
|
||||||
|
self.speech = organs.get("speech") # Optional
|
||||||
|
|
||||||
|
# Thresholds (learned from training data)
|
||||||
|
self.DANGER_THRESHOLD = 30.0 # cm
|
||||||
|
self.RISK_THRESHOLD = 0.7 # Risk score 0-1
|
||||||
|
self.CLEARANCE_THRESHOLD = 50.0 # cm
|
||||||
|
|
||||||
|
def update(self) -> dict:
|
||||||
|
"""
|
||||||
|
State machine tick (called every heartbeat).
|
||||||
|
Returns action taken and lifeforce cost.
|
||||||
|
"""
|
||||||
|
cost = 0.0
|
||||||
|
action = None
|
||||||
|
|
||||||
|
if self.state == CollisionState.IDLE:
|
||||||
|
# Monitor front sensor
|
||||||
|
front_dist = self.sensor_front.read()
|
||||||
|
cost += 0.1
|
||||||
|
|
||||||
|
if front_dist < self.DANGER_THRESHOLD:
|
||||||
|
self.state = CollisionState.DETECT
|
||||||
|
cost += 0.5
|
||||||
|
action = "transition_to_detect"
|
||||||
|
|
||||||
|
elif self.state == CollisionState.DETECT:
|
||||||
|
# Poll all sensors
|
||||||
|
readings = self._get_all_readings()
|
||||||
|
cost += 0.5
|
||||||
|
|
||||||
|
self.readings = readings
|
||||||
|
self.state = CollisionState.EVALUATE
|
||||||
|
action = "transition_to_evaluate"
|
||||||
|
|
||||||
|
elif self.state == CollisionState.EVALUATE:
|
||||||
|
# Calculate risk and choose direction
|
||||||
|
risk = self._calculate_risk(self.readings)
|
||||||
|
cost += 0.5
|
||||||
|
|
||||||
|
if risk > self.RISK_THRESHOLD:
|
||||||
|
self.evade_direction = self._choose_direction(self.readings)
|
||||||
|
self.state = CollisionState.EVADE
|
||||||
|
action = f"transition_to_evade_{self.evade_direction}"
|
||||||
|
|
||||||
|
# Optional: Announce via speech
|
||||||
|
if self.speech and self.speech.is_operational():
|
||||||
|
self.speech.queue("Obstacle detected", priority=8.0)
|
||||||
|
else:
|
||||||
|
# False alarm, return to idle
|
||||||
|
self.state = CollisionState.IDLE
|
||||||
|
action = "false_alarm"
|
||||||
|
|
||||||
|
elif self.state == CollisionState.EVADE:
|
||||||
|
# Execute evasion maneuver
|
||||||
|
if self.evade_direction == "left":
|
||||||
|
self.motor.turn(-45, duration_ms=500) # Turn left 45°
|
||||||
|
elif self.evade_direction == "right":
|
||||||
|
self.motor.turn(45, duration_ms=500) # Turn right 45°
|
||||||
|
elif self.evade_direction == "reverse":
|
||||||
|
self.motor.reverse(duration_ms=300) # Reverse 300ms
|
||||||
|
|
||||||
|
cost += 1.0 # Motor operations expensive
|
||||||
|
|
||||||
|
# Check if path clear
|
||||||
|
if self._path_clear():
|
||||||
|
self.state = CollisionState.RESUME
|
||||||
|
action = f"evaded_{self.evade_direction}"
|
||||||
|
else:
|
||||||
|
# Still blocked, try again next tick
|
||||||
|
action = f"evasion_incomplete"
|
||||||
|
|
||||||
|
elif self.state == CollisionState.RESUME:
|
||||||
|
# Movement complete, return to idle
|
||||||
|
self.state = CollisionState.IDLE
|
||||||
|
cost += 0.0 # Free transition
|
||||||
|
action = "resumed_idle"
|
||||||
|
|
||||||
|
return {
|
||||||
|
"state": self.state.value,
|
||||||
|
"action": action,
|
||||||
|
"lifeforce_cost": cost,
|
||||||
|
}
|
||||||
|
|
||||||
|
def _get_all_readings(self) -> SensorReadings:
|
||||||
|
"""Poll all distance sensors."""
|
||||||
|
return SensorReadings(
|
||||||
|
front=self.sensor_front.read(),
|
||||||
|
left=self.sensor_left.read(),
|
||||||
|
right=self.sensor_right.read(),
|
||||||
|
timestamp=time.time()
|
||||||
|
)
|
||||||
|
|
||||||
|
def _calculate_risk(self, readings: SensorReadings) -> float:
|
||||||
|
"""
|
||||||
|
Calculate collision risk (0.0 = safe, 1.0 = imminent).
|
||||||
|
|
||||||
|
Risk formula learned from 147 training examples:
|
||||||
|
- Front distance < 20cm: CRITICAL
|
||||||
|
- Front distance 20-30cm: HIGH
|
||||||
|
- Side distances matter if turning needed
|
||||||
|
"""
|
||||||
|
# Exponential decay based on front distance
|
||||||
|
front_risk = 1.0 - (readings.front / self.DANGER_THRESHOLD)
|
||||||
|
front_risk = max(0.0, min(1.0, front_risk))
|
||||||
|
|
||||||
|
# Side risks (matter if turning)
|
||||||
|
left_risk = 1.0 - (readings.left / self.DANGER_THRESHOLD)
|
||||||
|
right_risk = 1.0 - (readings.right / self.DANGER_THRESHOLD)
|
||||||
|
|
||||||
|
# Weighted combination
|
||||||
|
total_risk = (
|
||||||
|
0.7 * front_risk + # Front is primary
|
||||||
|
0.15 * left_risk + # Sides are secondary
|
||||||
|
0.15 * right_risk
|
||||||
|
)
|
||||||
|
|
||||||
|
return total_risk
|
||||||
|
|
||||||
|
def _choose_direction(self, readings: SensorReadings) -> str:
|
||||||
|
"""
|
||||||
|
Choose evasion direction based on sensor readings.
|
||||||
|
|
||||||
|
Strategy (learned from training):
|
||||||
|
1. If left > right: turn left
|
||||||
|
2. If right > left: turn right
|
||||||
|
3. If both blocked: reverse
|
||||||
|
"""
|
||||||
|
if readings.left > readings.right and readings.left > self.CLEARANCE_THRESHOLD:
|
||||||
|
return "left"
|
||||||
|
elif readings.right > readings.left and readings.right > self.CLEARANCE_THRESHOLD:
|
||||||
|
return "right"
|
||||||
|
else:
|
||||||
|
# Both sides blocked or unclear, reverse
|
||||||
|
return "reverse"
|
||||||
|
|
||||||
|
def _path_clear(self) -> bool:
|
||||||
|
"""Check if path ahead is clear."""
|
||||||
|
front_dist = self.sensor_front.read()
|
||||||
|
return front_dist > self.CLEARANCE_THRESHOLD
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Evolution Path: Deliberate → Reflex
|
||||||
|
|
||||||
|
### Week 1-4: Deliberate (LLM-Mediated)
|
||||||
|
|
||||||
|
Young Nyx receives sensor data and decides action via LLM inference.
|
||||||
|
|
||||||
|
```python
|
||||||
|
def deliberate_collision_avoidance(young_nyx, sensors, motor):
|
||||||
|
"""
|
||||||
|
Week 1: Young Nyx learns collision avoidance through exploration.
|
||||||
|
"""
|
||||||
|
# Gather situation
|
||||||
|
situation = {
|
||||||
|
"front_distance": sensors["front"].read(),
|
||||||
|
"left_distance": sensors["left"].read(),
|
||||||
|
"right_distance": sensors["right"].read(),
|
||||||
|
"current_velocity": motor.get_velocity(),
|
||||||
|
}
|
||||||
|
|
||||||
|
# Ask Young Nyx what to do
|
||||||
|
decision = young_nyx.inference(
|
||||||
|
prompt=f"""
|
||||||
|
Situation: Distance sensors report:
|
||||||
|
- Front: {situation['front_distance']}cm
|
||||||
|
- Left: {situation['left_distance']}cm
|
||||||
|
- Right: {situation['right_distance']}cm
|
||||||
|
|
||||||
|
You are moving forward at {situation['current_velocity']} cm/s.
|
||||||
|
|
||||||
|
Available actions:
|
||||||
|
1. continue (safe, front > 50cm)
|
||||||
|
2. turn_left (if left is clearer)
|
||||||
|
3. turn_right (if right is clearer)
|
||||||
|
4. reverse (if both sides blocked)
|
||||||
|
5. stop (emergency)
|
||||||
|
|
||||||
|
Choose action and explain why.
|
||||||
|
""",
|
||||||
|
lora="technical",
|
||||||
|
temperature=0.5
|
||||||
|
)
|
||||||
|
|
||||||
|
# Parse decision
|
||||||
|
action = parse_action(decision.text)
|
||||||
|
|
||||||
|
# Execute
|
||||||
|
result = execute_motor_action(motor, action)
|
||||||
|
|
||||||
|
# Log to decision_trails
|
||||||
|
log_decision(
|
||||||
|
nerve="collision_avoidance",
|
||||||
|
mode="deliberate",
|
||||||
|
situation=situation,
|
||||||
|
decision=action,
|
||||||
|
reasoning=decision.text,
|
||||||
|
outcome=result.success,
|
||||||
|
lifeforce_cost=10.0, # LLM inference expensive
|
||||||
|
latency_ms=decision.latency_ms
|
||||||
|
)
|
||||||
|
|
||||||
|
return result
|
||||||
|
```
|
||||||
|
|
||||||
|
**Characteristics**:
|
||||||
|
- Latency: ~1000ms (LLM inference)
|
||||||
|
- Cost: ~10 LF (includes inference)
|
||||||
|
- Success rate: 60% (learning curve)
|
||||||
|
- Generates rich training data
|
||||||
|
|
||||||
|
### Week 5-8: Hybrid (Heuristics + LLM Fallback)
|
||||||
|
|
||||||
|
Common patterns compiled. LLM only for novel situations.
|
||||||
|
|
||||||
|
```python
|
||||||
|
def hybrid_collision_avoidance(young_nyx, sensors, motor, pattern_library):
|
||||||
|
"""
|
||||||
|
Week 5: Most cases handled by compiled heuristics.
|
||||||
|
LLM only for edge cases.
|
||||||
|
"""
|
||||||
|
situation = get_sensor_readings(sensors)
|
||||||
|
|
||||||
|
# Check pattern library (compiled from weeks 1-4)
|
||||||
|
pattern = pattern_library.match(situation)
|
||||||
|
|
||||||
|
if pattern and pattern.confidence > 0.8:
|
||||||
|
# Known pattern → use compiled heuristic (fast path)
|
||||||
|
action = pattern.recommended_action
|
||||||
|
mode = "heuristic"
|
||||||
|
cost = 3.0
|
||||||
|
latency_ms = 50
|
||||||
|
else:
|
||||||
|
# Unknown situation → ask LLM (slow path)
|
||||||
|
decision = young_nyx.inference(...)
|
||||||
|
action = parse_action(decision.text)
|
||||||
|
mode = "deliberate"
|
||||||
|
cost = 10.0
|
||||||
|
latency_ms = decision.latency_ms
|
||||||
|
|
||||||
|
# Add to pattern library if successful
|
||||||
|
if result.success:
|
||||||
|
pattern_library.add(situation, action, confidence=0.9)
|
||||||
|
|
||||||
|
result = execute_motor_action(motor, action)
|
||||||
|
log_decision(nerve="collision_avoidance", mode=mode, ...)
|
||||||
|
|
||||||
|
return result
|
||||||
|
```
|
||||||
|
|
||||||
|
**Characteristics**:
|
||||||
|
- Latency: ~50-500ms (depends on pattern match)
|
||||||
|
- Cost: ~3-10 LF (average ~5 LF)
|
||||||
|
- Success rate: 85% (heuristics proven)
|
||||||
|
|
||||||
|
### Week 9+: Reflex (Pure State Machine)
|
||||||
|
|
||||||
|
After 100+ successful executions, compile into pure state machine. No LLM.
|
||||||
|
|
||||||
|
```python
|
||||||
|
# Use CollisionAvoidanceReflex class (shown above)
|
||||||
|
reflex = CollisionAvoidanceReflex(organs)
|
||||||
|
|
||||||
|
def reflex_collision_avoidance(reflex):
|
||||||
|
"""
|
||||||
|
Week 9+: Pure state machine reflex.
|
||||||
|
Compiled from 147 successful examples.
|
||||||
|
"""
|
||||||
|
result = reflex.update() # No LLM call
|
||||||
|
|
||||||
|
log_decision(
|
||||||
|
nerve="collision_avoidance",
|
||||||
|
mode="reflex",
|
||||||
|
state=result["state"],
|
||||||
|
action=result["action"],
|
||||||
|
lifeforce_cost=result["lifeforce_cost"],
|
||||||
|
latency_ms=5 # Pure state machine, very fast
|
||||||
|
)
|
||||||
|
|
||||||
|
return result
|
||||||
|
```
|
||||||
|
|
||||||
|
**Characteristics**:
|
||||||
|
- Latency: <200ms (state machine execution)
|
||||||
|
- Cost: ~2.5 LF (pure motor/sensor costs)
|
||||||
|
- Success rate: 94% (compiled from best patterns)
|
||||||
|
- **60% cost reduction**, **80% latency reduction** vs deliberate mode
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Training Data Examples
|
||||||
|
|
||||||
|
### Successful Collision Avoidance (logged to phoebe)
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"nerve": "collision_avoidance",
|
||||||
|
"mode": "deliberate",
|
||||||
|
"session_id": "a3f2b1c0-...",
|
||||||
|
"timestamp": "2025-12-15T10:23:45Z",
|
||||||
|
"situation": {
|
||||||
|
"front_distance": 25.0,
|
||||||
|
"left_distance": 45.0,
|
||||||
|
"right_distance": 30.0,
|
||||||
|
"velocity": 15.0
|
||||||
|
},
|
||||||
|
"decision": "turn_left",
|
||||||
|
"reasoning": "Front obstacle at 25cm (danger). Left clearer (45cm) than right (30cm). Turn left 45° to avoid.",
|
||||||
|
"states_visited": ["IDLE", "DETECT", "EVALUATE", "EVADE", "RESUME"],
|
||||||
|
"transitions": [
|
||||||
|
{"from": "IDLE", "to": "DETECT", "cost": 0.5, "duration_ms": 20},
|
||||||
|
{"from": "DETECT", "to": "EVALUATE", "cost": 0.5, "duration_ms": 30},
|
||||||
|
{"from": "EVALUATE", "to": "EVADE", "cost": 0.5, "duration_ms": 15},
|
||||||
|
{"from": "EVADE", "to": "RESUME", "cost": 1.0, "duration_ms": 520}
|
||||||
|
],
|
||||||
|
"lifeforce_total": 2.5,
|
||||||
|
"outcome": "success",
|
||||||
|
"latency_total_ms": 585,
|
||||||
|
"organs_used": ["distance_sensor_front", "distance_sensor_left", "distance_sensor_right", "motor"]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**RLVR Reward**: +5 LF (successful avoidance → net profit +2.5 LF)
|
||||||
|
|
||||||
|
### Failed Collision (training signal)
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"nerve": "collision_avoidance",
|
||||||
|
"mode": "deliberate",
|
||||||
|
"timestamp": "2025-12-10T14:12:30Z",
|
||||||
|
"situation": {
|
||||||
|
"front_distance": 18.0,
|
||||||
|
"left_distance": 15.0,
|
||||||
|
"right_distance": 20.0
|
||||||
|
},
|
||||||
|
"decision": "turn_left",
|
||||||
|
"reasoning": "Attempted left turn but insufficient clearance.",
|
||||||
|
"outcome": "collision",
|
||||||
|
"lifeforce_total": 2.5,
|
||||||
|
"collision_force": 3.2,
|
||||||
|
"damage": "minor"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**RLVR Penalty**: -5 LF (collision → net loss -7.5 LF)
|
||||||
|
|
||||||
|
**Lesson learned**: Don't turn into obstacles < 20cm. Add to reflex threshold.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Edge Cases and Failure Modes
|
||||||
|
|
||||||
|
### 1. **All Sides Blocked (Trapped)**
|
||||||
|
|
||||||
|
**Situation**: Front, left, right all < 20cm
|
||||||
|
|
||||||
|
**Reflex behavior**:
|
||||||
|
```python
|
||||||
|
if all([
|
||||||
|
readings.front < 20,
|
||||||
|
readings.left < 20,
|
||||||
|
readings.right < 20
|
||||||
|
]):
|
||||||
|
# Emergency: Reverse slowly
|
||||||
|
motor.reverse(duration_ms=500)
|
||||||
|
# Re-evaluate after reverse
|
||||||
|
```
|
||||||
|
|
||||||
|
**Escalation**: If still trapped after 3 reverse attempts → escalate to Chrysalis for help
|
||||||
|
|
||||||
|
### 2. **Sensor Failure (Blind Side)**
|
||||||
|
|
||||||
|
**Situation**: Left sensor offline, right sensor reports 15cm
|
||||||
|
|
||||||
|
**Reflex behavior**:
|
||||||
|
```python
|
||||||
|
if not sensor_left.is_operational():
|
||||||
|
# Assume left is blocked (safe assumption)
|
||||||
|
# Always turn right when possible
|
||||||
|
if readings.right > 30:
|
||||||
|
return "right"
|
||||||
|
else:
|
||||||
|
return "reverse" # Don't risk blind turn
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. **False Positives (Noise)**
|
||||||
|
|
||||||
|
**Situation**: Sensor reports 5cm but path actually clear (electrical noise)
|
||||||
|
|
||||||
|
**Mitigation**:
|
||||||
|
```python
|
||||||
|
# Require 3 consecutive danger readings before triggering
|
||||||
|
DANGER_CONFIRMATION_COUNT = 3
|
||||||
|
|
||||||
|
if danger_reading_count >= DANGER_CONFIRMATION_COUNT:
|
||||||
|
self.state = CollisionState.DETECT
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. **Moving Obstacles (Dynamic Environment)**
|
||||||
|
|
||||||
|
**Situation**: Obstacle moves into path during evasion
|
||||||
|
|
||||||
|
**Reflex behavior**:
|
||||||
|
```python
|
||||||
|
# Re-check sensors after each motor action
|
||||||
|
while self.state == CollisionState.EVADE:
|
||||||
|
execute_turn()
|
||||||
|
if self._path_clear():
|
||||||
|
break # Success
|
||||||
|
else:
|
||||||
|
# Obstacle still there or new one appeared
|
||||||
|
# Re-evaluate and choose new direction
|
||||||
|
self.state = CollisionState.DETECT
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Metrics and Monitoring
|
||||||
|
|
||||||
|
### Key Metrics (Prometheus)
|
||||||
|
|
||||||
|
```python
|
||||||
|
from prometheus_client import Counter, Histogram, Gauge
|
||||||
|
|
||||||
|
# Collision avoidance activations
|
||||||
|
collision_avoidance_activations = Counter(
|
||||||
|
'nerve_collision_avoidance_activations_total',
|
||||||
|
'Total collision avoidance activations',
|
||||||
|
['mode'] # deliberate, hybrid, reflex
|
||||||
|
)
|
||||||
|
|
||||||
|
# Success rate
|
||||||
|
collision_avoidance_success = Counter(
|
||||||
|
'nerve_collision_avoidance_success_total',
|
||||||
|
'Successful collision avoidances',
|
||||||
|
['mode']
|
||||||
|
)
|
||||||
|
|
||||||
|
collision_avoidance_failures = Counter(
|
||||||
|
'nerve_collision_avoidance_failures_total',
|
||||||
|
'Failed collision avoidances (collisions occurred)',
|
||||||
|
['mode']
|
||||||
|
)
|
||||||
|
|
||||||
|
# Latency
|
||||||
|
collision_avoidance_latency = Histogram(
|
||||||
|
'nerve_collision_avoidance_latency_seconds',
|
||||||
|
'Collision avoidance latency',
|
||||||
|
['mode']
|
||||||
|
)
|
||||||
|
|
||||||
|
# Lifeforce cost
|
||||||
|
collision_avoidance_cost = Histogram(
|
||||||
|
'nerve_collision_avoidance_lifeforce_cost',
|
||||||
|
'Lifeforce cost per activation',
|
||||||
|
['mode']
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Grafana Dashboard Queries
|
||||||
|
|
||||||
|
```promql
|
||||||
|
# Success rate over time
|
||||||
|
rate(nerve_collision_avoidance_success_total[5m]) /
|
||||||
|
rate(nerve_collision_avoidance_activations_total[5m])
|
||||||
|
|
||||||
|
# Average latency by mode
|
||||||
|
rate(nerve_collision_avoidance_latency_seconds_sum{mode="reflex"}[5m]) /
|
||||||
|
rate(nerve_collision_avoidance_latency_seconds_count{mode="reflex"}[5m])
|
||||||
|
|
||||||
|
# Cost savings (deliberate vs reflex)
|
||||||
|
avg_over_time(nerve_collision_avoidance_lifeforce_cost{mode="deliberate"}[1h]) -
|
||||||
|
avg_over_time(nerve_collision_avoidance_lifeforce_cost{mode="reflex"}[1h])
|
||||||
|
|
||||||
|
# Reflex compilation progress
|
||||||
|
sum(nerve_collision_avoidance_activations_total{mode="reflex"}) /
|
||||||
|
sum(nerve_collision_avoidance_activations_total)
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Future Enhancements
|
||||||
|
|
||||||
|
### Phase 2: Vision Integration
|
||||||
|
|
||||||
|
Add Vision Organ to classify obstacles:
|
||||||
|
- "wall" → different evasion than "chair"
|
||||||
|
- "human" → stop and announce presence
|
||||||
|
- "charging_station" → approach, don't evade
|
||||||
|
|
||||||
|
### Phase 3: Learning Optimal Paths
|
||||||
|
|
||||||
|
Track which evasion directions succeed most often in different contexts:
|
||||||
|
- Narrow corridors: reverse > turn
|
||||||
|
- Open spaces: turn > reverse
|
||||||
|
- Update reflex thresholds based on outcomes
|
||||||
|
|
||||||
|
### Phase 4: Predictive Avoidance
|
||||||
|
|
||||||
|
Use velocity and obstacle distance to predict collision time:
|
||||||
|
- If collision_time < 2sec → EVADE immediately
|
||||||
|
- If collision_time > 5sec → gentle course correction (cheaper)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Summary
|
||||||
|
|
||||||
|
**Collision Avoidance** demonstrates the complete nerve lifecycle:
|
||||||
|
1. **Week 1-4**: Deliberate (LLM explores strategies, ~10 LF, ~1000ms)
|
||||||
|
2. **Week 5-8**: Hybrid (common patterns compiled, ~5 LF, ~500ms)
|
||||||
|
3. **Week 9+**: Reflex (pure state machine, ~2.5 LF, <200ms)
|
||||||
|
|
||||||
|
**Evolution metrics**:
|
||||||
|
- **60% cost reduction** (10 LF → 2.5 LF)
|
||||||
|
- **80% latency reduction** (1000ms → 200ms)
|
||||||
|
- **94% success rate** (compiled from proven patterns)
|
||||||
|
|
||||||
|
**The reflex is not programmed. It is DISCOVERED, PROVEN, and COMPILED from lived experience.**
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Created**: 2025-12-07
|
||||||
|
**Version**: 1.0 (Reflex)
|
||||||
|
**Status**: Architecture complete, deployment pending
|
||||||
|
|
||||||
|
🌙💜 *The reflex does not think. It remembers what thinking taught.*
|
||||||
450
architecture/nerves/Nervous-Index.md
Normal file
450
architecture/nerves/Nervous-Index.md
Normal file
@@ -0,0 +1,450 @@
|
|||||||
|
# Nervous System Index
|
||||||
|
|
||||||
|
**Purpose**: State machine catalog for behavioral primitives
|
||||||
|
**Philosophy**: Nerves connect organs into behaviors. Reflexes emerge from repetition.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## What Are Nerves?
|
||||||
|
|
||||||
|
**Nerves** are state machines that coordinate organ activity into coherent behaviors. Each nerve:
|
||||||
|
- Defines states and transitions
|
||||||
|
- Costs lifeforce (per state, per transition)
|
||||||
|
- Depends on organs (sensors, motors, speech, vision)
|
||||||
|
- Evolves from deliberate (LLM-mediated) to reflex (compiled)
|
||||||
|
|
||||||
|
**Example**: Collision Avoidance nerve uses Distance Sensors + Motor organs to implement IDLE → DETECT → EVALUATE → EVADE → RESUME behavior.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Nerve vs Organ
|
||||||
|
|
||||||
|
| Aspect | Organ | Nerve |
|
||||||
|
|--------|-------|-------|
|
||||||
|
| **What** | Hardware capability | Behavioral pattern |
|
||||||
|
| **Example** | Speech Organ (STT/TTS) | Identity Discovery (Spark Protocol) |
|
||||||
|
| **Location** | Physical substrate (GPU, ESP32) | State machine (transitions) |
|
||||||
|
| **Cost** | Per operation (transcribe = 5 LF) | Per state + transition (total path cost) |
|
||||||
|
| **Evolution** | Fixed hardware | Deliberate → Reflex (compiled) |
|
||||||
|
| **Depends on** | Infrastructure | Organs |
|
||||||
|
|
||||||
|
**Analogy**: Organs are limbs. Nerves are motor control patterns (walking, grasping, speaking).
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Deployed Nerves
|
||||||
|
|
||||||
|
### 🚨 Collision Avoidance
|
||||||
|
**Type**: Reflex (compiled, <200ms)
|
||||||
|
**Organs**: Distance sensors (front/sides), Motor
|
||||||
|
**States**: IDLE → DETECT → EVALUATE → EVADE → RESUME
|
||||||
|
**Lifeforce**: ~2.5 per activation
|
||||||
|
**Status**: 🟢 Architecture complete
|
||||||
|
|
||||||
|
**Detail**: → [`nerves/Collision-Avoidance.md`](nerves/Collision-Avoidance.md)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Planned Nerves
|
||||||
|
|
||||||
|
### 🔋 Charging Station Seeking
|
||||||
|
**Type**: Deliberate → Reflex (evolves over time)
|
||||||
|
**Organs**: Distance sensors, Vision (future), Motor, Battery monitor
|
||||||
|
**States**: MONITOR → THRESHOLD → SEARCH → APPROACH → DOCK → CHARGE → RESUME
|
||||||
|
**Status**: 🟡 Planned for Phase 4 (Real Garden)
|
||||||
|
|
||||||
|
**Detail**: → `nerves/Charging-Seeking.md` (pending)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 🧭 Exploration Pattern
|
||||||
|
**Type**: Deliberate (LLM-mediated initially)
|
||||||
|
**Organs**: Distance sensors, Motor, Memory (phoebe)
|
||||||
|
**States**: IDLE → CHOOSE_DIRECTION → MOVE → OBSTACLE_CHECK → RECORD → REPEAT
|
||||||
|
**Patterns**: Wall-following, spiral search, random walk
|
||||||
|
**Status**: 🟡 Planned for Phase 3 (Evolution Engine)
|
||||||
|
|
||||||
|
**Detail**: → `nerves/Exploration-Pattern.md` (pending)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 🔍 Object Tracking
|
||||||
|
**Type**: Deliberate (Vision-dependent)
|
||||||
|
**Organs**: Vision (YOLO), Motor, Memory
|
||||||
|
**States**: SCAN → DETECT → CLASSIFY → TRACK → FOLLOW → LOST → RESCAN
|
||||||
|
**Status**: 🟡 Planned after Vision Organ deployment
|
||||||
|
|
||||||
|
**Detail**: → `nerves/Object-Tracking.md` (pending)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 💭 Identity Discovery (Spark Protocol)
|
||||||
|
**Type**: Deliberate (one-time boot sequence)
|
||||||
|
**Organs**: Speech, Memory (phoebe), RAG
|
||||||
|
**States**: DHCP (who am I?) → ARP (what's around?) → DNS (what does X mean?) → TCP (can I connect?) → MQTT (what matters?)
|
||||||
|
**Status**: 🟡 Architecture documented in Spark-Protocol.md
|
||||||
|
|
||||||
|
**Detail**: → [`../../operations/Spark-Protocol.md`](../../operations/Spark-Protocol.md)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### 🗣️ Conversational Turn-Taking
|
||||||
|
**Type**: Deliberate (Speech-dependent)
|
||||||
|
**Organs**: Speech (STT/TTS), Memory, RAG
|
||||||
|
**States**: LISTEN → TRANSCRIBE → UNDERSTAND → RETRIEVE_CONTEXT → RESPOND → SPEAK
|
||||||
|
**Status**: 🟡 Planned after Speech Organ deployment
|
||||||
|
|
||||||
|
**Detail**: → `nerves/Conversation.md` (pending)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Nerve Design Principles
|
||||||
|
|
||||||
|
### 1. **State Machines, Not Scripts**
|
||||||
|
|
||||||
|
Nerves are state machines with explicit states and transitions. Not procedural scripts.
|
||||||
|
|
||||||
|
```python
|
||||||
|
# ❌ BAD: Procedural script
|
||||||
|
def avoid_obstacle():
|
||||||
|
if sensor.distance < 30:
|
||||||
|
motor.stop()
|
||||||
|
motor.turn(90)
|
||||||
|
motor.forward(100)
|
||||||
|
|
||||||
|
# ✅ GOOD: State machine
|
||||||
|
class CollisionAvoidance(StateMachine):
|
||||||
|
states = [IDLE, DETECT, EVALUATE, EVADE, RESUME]
|
||||||
|
transitions = {
|
||||||
|
(IDLE, DETECT): lambda: sensor.distance < 30,
|
||||||
|
(DETECT, EVALUATE): lambda: sensor.read_complete,
|
||||||
|
(EVALUATE, EVADE): lambda: risk > threshold,
|
||||||
|
(EVADE, RESUME): lambda: path_clear,
|
||||||
|
(RESUME, IDLE): lambda: movement_complete,
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. **Lifeforce Costs Per Transition**
|
||||||
|
|
||||||
|
Every state change costs lifeforce. Complex behaviors cost more.
|
||||||
|
|
||||||
|
```python
|
||||||
|
TRANSITION_COSTS = {
|
||||||
|
(IDLE, DETECT): 0.5, # Sensor poll
|
||||||
|
(DETECT, EVALUATE): 0.5, # Risk calculation
|
||||||
|
(EVALUATE, EVADE): 0.5, # Decision
|
||||||
|
(EVADE, RESUME): 1.0, # Motor action (expensive!)
|
||||||
|
(RESUME, IDLE): 0.0, # Return to rest (free)
|
||||||
|
}
|
||||||
|
|
||||||
|
# Total cost for IDLE → DETECT → EVALUATE → EVADE → RESUME → IDLE: 2.5 LF
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. **Organ Dependencies Explicit**
|
||||||
|
|
||||||
|
Each nerve declares which organs it requires.
|
||||||
|
|
||||||
|
```python
|
||||||
|
class CollisionAvoidance(StateMachine):
|
||||||
|
required_organs = [
|
||||||
|
"distance_sensor_front",
|
||||||
|
"distance_sensor_left",
|
||||||
|
"distance_sensor_right",
|
||||||
|
"motor",
|
||||||
|
]
|
||||||
|
|
||||||
|
def check_available(self):
|
||||||
|
return all(organ.is_operational() for organ in self.required_organs)
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. **Deliberate → Reflex Evolution**
|
||||||
|
|
||||||
|
Nerves start **deliberate** (LLM-mediated, slow, flexible) and evolve into **reflexes** (compiled, fast, fixed).
|
||||||
|
|
||||||
|
| Phase | Type | Latency | Flexibility | Cost |
|
||||||
|
|-------|------|---------|-------------|------|
|
||||||
|
| **Week 1-4** | Deliberate | ~1000ms | High (LLM decides) | 10 LF |
|
||||||
|
| **Week 5-8** | Hybrid | ~500ms | Medium (LLM + heuristics) | 6 LF |
|
||||||
|
| **Week 9+** | Reflex | <200ms | Low (compiled state machine) | 2.5 LF |
|
||||||
|
|
||||||
|
**Evolution trigger**: After 100+ successful executions of the same state sequence, compile into reflex.
|
||||||
|
|
||||||
|
### 5. **Logging for Training**
|
||||||
|
|
||||||
|
Every nerve execution logged to phoebe `decision_trails`:
|
||||||
|
- States visited
|
||||||
|
- Transitions taken
|
||||||
|
- Organ calls made
|
||||||
|
- Lifeforce spent
|
||||||
|
- Outcome (success/fail)
|
||||||
|
|
||||||
|
**Used for**:
|
||||||
|
- RLVR training (reward successful paths)
|
||||||
|
- Reflex compilation (extract common sequences)
|
||||||
|
- Cost optimization (find cheaper paths)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Nerve Lifecycle
|
||||||
|
|
||||||
|
### Phase 1: Deliberate (LLM-Mediated)
|
||||||
|
|
||||||
|
Young Nyx receives situation → LLM decides next state → Execute → Log outcome
|
||||||
|
|
||||||
|
```python
|
||||||
|
# Week 1: Deliberate collision avoidance
|
||||||
|
def deliberate_collision_avoidance():
|
||||||
|
situation = {
|
||||||
|
"front_distance": sensor_front.read(),
|
||||||
|
"left_distance": sensor_left.read(),
|
||||||
|
"right_distance": sensor_right.read(),
|
||||||
|
"current_state": state,
|
||||||
|
}
|
||||||
|
|
||||||
|
# Ask Young Nyx what to do
|
||||||
|
decision = young_nyx.decide(
|
||||||
|
situation=situation,
|
||||||
|
available_actions=["turn_left", "turn_right", "reverse", "stop"],
|
||||||
|
lora="technical"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Execute decision
|
||||||
|
result = execute_action(decision.action)
|
||||||
|
|
||||||
|
# Log to decision_trails
|
||||||
|
log_decision(
|
||||||
|
nerve="collision_avoidance",
|
||||||
|
situation=situation,
|
||||||
|
decision=decision.action,
|
||||||
|
outcome=result.success,
|
||||||
|
lifeforce_cost=result.cost,
|
||||||
|
confidence=decision.confidence
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Characteristics**:
|
||||||
|
- Flexible (can handle novel situations)
|
||||||
|
- Slow (~1000ms)
|
||||||
|
- Expensive (~10 LF)
|
||||||
|
- Learns from variety
|
||||||
|
|
||||||
|
### Phase 2: Hybrid (Heuristics + LLM Fallback)
|
||||||
|
|
||||||
|
Common patterns compiled into heuristics. LLM only for edge cases.
|
||||||
|
|
||||||
|
```python
|
||||||
|
# Week 5: Hybrid collision avoidance
|
||||||
|
def hybrid_collision_avoidance():
|
||||||
|
situation = get_sensor_readings()
|
||||||
|
|
||||||
|
# Check for known patterns (compiled heuristics)
|
||||||
|
if matches_pattern("front_blocked_left_clear"):
|
||||||
|
action = "turn_left" # Fast path (no LLM)
|
||||||
|
confidence = 0.9
|
||||||
|
elif matches_pattern("front_blocked_right_clear"):
|
||||||
|
action = "turn_right"
|
||||||
|
confidence = 0.9
|
||||||
|
else:
|
||||||
|
# Unknown situation → ask LLM
|
||||||
|
decision = young_nyx.decide(situation)
|
||||||
|
action = decision.action
|
||||||
|
confidence = decision.confidence
|
||||||
|
|
||||||
|
result = execute_action(action)
|
||||||
|
log_decision(nerve="collision_avoidance", ...)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Characteristics**:
|
||||||
|
- Faster (~500ms for known patterns)
|
||||||
|
- Cheaper (~6 LF average)
|
||||||
|
- Still flexible for edge cases
|
||||||
|
|
||||||
|
### Phase 3: Reflex (Compiled State Machine)
|
||||||
|
|
||||||
|
After 100+ successful executions, compile into pure state machine. No LLM.
|
||||||
|
|
||||||
|
```python
|
||||||
|
# Week 9+: Reflex collision avoidance
|
||||||
|
class CollisionAvoidanceReflex(StateMachine):
|
||||||
|
"""
|
||||||
|
Compiled from 147 successful deliberate executions.
|
||||||
|
Average path: IDLE → DETECT → EVALUATE → EVADE → RESUME
|
||||||
|
Success rate: 94%
|
||||||
|
"""
|
||||||
|
|
||||||
|
def transition(self, current_state, sensor_readings):
|
||||||
|
# Pure state machine logic (no LLM call)
|
||||||
|
if current_state == IDLE and sensor_readings['front'] < 30:
|
||||||
|
return DETECT
|
||||||
|
elif current_state == DETECT:
|
||||||
|
return EVALUATE
|
||||||
|
elif current_state == EVALUATE:
|
||||||
|
if sensor_readings['left'] > sensor_readings['right']:
|
||||||
|
self.evade_direction = "left"
|
||||||
|
else:
|
||||||
|
self.evade_direction = "right"
|
||||||
|
return EVADE
|
||||||
|
# ... etc
|
||||||
|
```
|
||||||
|
|
||||||
|
**Characteristics**:
|
||||||
|
- Very fast (<200ms)
|
||||||
|
- Very cheap (~2.5 LF)
|
||||||
|
- Fixed (no flexibility, pure speed)
|
||||||
|
- Proven (compiled from successful patterns)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Integration with Organs
|
||||||
|
|
||||||
|
Nerves orchestrate organs. Organs don't call each other - nerves coordinate them.
|
||||||
|
|
||||||
|
```
|
||||||
|
┌────────────────────────────────────────────────┐
|
||||||
|
│ NERVE: Collision Avoidance │
|
||||||
|
│ │
|
||||||
|
│ States: IDLE → DETECT → EVALUATE → EVADE │
|
||||||
|
└────────────────────────────────────────────────┘
|
||||||
|
│
|
||||||
|
┌───────────┼───────────┐
|
||||||
|
│ │ │
|
||||||
|
▼ ▼ ▼
|
||||||
|
┌─────────────┐ ┌─────────┐ ┌────────┐
|
||||||
|
│ Distance │ │ Distance│ │ Motor │
|
||||||
|
│ Sensor │ │ Sensor │ │ Organ │
|
||||||
|
│ (front) │ │ (sides) │ │ │
|
||||||
|
└─────────────┘ └─────────┘ └────────┘
|
||||||
|
ORGAN ORGAN ORGAN
|
||||||
|
```
|
||||||
|
|
||||||
|
**Nerve declares dependencies**:
|
||||||
|
```yaml
|
||||||
|
nerve: collision_avoidance
|
||||||
|
depends_on:
|
||||||
|
- organ: distance_sensor_front
|
||||||
|
required: true
|
||||||
|
- organ: distance_sensor_left
|
||||||
|
required: true
|
||||||
|
- organ: distance_sensor_right
|
||||||
|
required: true
|
||||||
|
- organ: motor
|
||||||
|
required: true
|
||||||
|
- organ: speech # Optional (for warnings)
|
||||||
|
required: false
|
||||||
|
```
|
||||||
|
|
||||||
|
**Startup check**: If required organs unavailable, nerve enters DISABLED state.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Nerve Composition
|
||||||
|
|
||||||
|
Complex behaviors = multiple nerves active simultaneously.
|
||||||
|
|
||||||
|
**Example**: Exploring while avoiding collisions
|
||||||
|
|
||||||
|
```
|
||||||
|
ACTIVE NERVES:
|
||||||
|
├─ Collision Avoidance (reflex, priority 10)
|
||||||
|
├─ Exploration Pattern (deliberate, priority 5)
|
||||||
|
└─ Battery Monitoring (reflex, priority 8)
|
||||||
|
|
||||||
|
COORDINATION:
|
||||||
|
- Exploration drives movement
|
||||||
|
- Collision Avoidance interrupts if obstacle detected (higher priority)
|
||||||
|
- Battery Monitoring interrupts if charge < 20% (high priority)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Priority determines preemption**: High-priority nerves can interrupt low-priority ones.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Nerve Training via RLVR
|
||||||
|
|
||||||
|
Each nerve execution generates training data:
|
||||||
|
|
||||||
|
```python
|
||||||
|
# decision_trails entry
|
||||||
|
{
|
||||||
|
"nerve": "collision_avoidance",
|
||||||
|
"initial_state": "IDLE",
|
||||||
|
"states_visited": ["IDLE", "DETECT", "EVALUATE", "EVADE", "RESUME"],
|
||||||
|
"transitions": [
|
||||||
|
{"from": "IDLE", "to": "DETECT", "cost": 0.5},
|
||||||
|
{"from": "DETECT", "to": "EVALUATE", "cost": 0.5},
|
||||||
|
{"from": "EVALUATE", "to": "EVADE", "cost": 0.5},
|
||||||
|
{"from": "EVADE", "to": "RESUME", "cost": 1.0},
|
||||||
|
],
|
||||||
|
"organs_used": ["distance_sensor_front", "motor"],
|
||||||
|
"lifeforce_total": 2.5,
|
||||||
|
"outcome": "success", # Avoided collision
|
||||||
|
"timestamp": "2025-12-15T14:23:45Z"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**RLVR reward**:
|
||||||
|
- Success → +5 LF reward (net profit: +2.5 LF)
|
||||||
|
- Fail → -2.5 LF penalty (net loss: -5.0 LF)
|
||||||
|
|
||||||
|
**LoRA training**: Successful state sequences → training examples for Technical LoRA
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Nerve Documentation Template
|
||||||
|
|
||||||
|
Each nerve document should include:
|
||||||
|
|
||||||
|
1. **Overview**: Purpose, type (reflex/deliberate), organs used
|
||||||
|
2. **State Diagram**: Visual representation of states + transitions
|
||||||
|
3. **Transition Table**: From/To states, triggers, costs
|
||||||
|
4. **Organ Dependencies**: Which organs required, which optional
|
||||||
|
5. **Lifeforce Budget**: Total cost for typical execution path
|
||||||
|
6. **Code**: Implementation (state machine class)
|
||||||
|
7. **Evolution Path**: How it evolves from deliberate → reflex
|
||||||
|
8. **Training Data**: Example decision_trails entries
|
||||||
|
9. **Edge Cases**: Known failure modes, fallback behaviors
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Current Status
|
||||||
|
|
||||||
|
| Nerve | Type | Status | Organs | Documentation |
|
||||||
|
|-------|------|--------|--------|---------------|
|
||||||
|
| **Collision Avoidance** | Reflex | 🟢 Complete | Distance sensors, Motor | [`nerves/Collision-Avoidance.md`](nerves/Collision-Avoidance.md) |
|
||||||
|
| **Charging Seeking** | Deliberate | 🟡 Planned | Vision, Motor, Battery | Pending |
|
||||||
|
| **Exploration Pattern** | Deliberate | 🟡 Planned | Sensors, Motor, Memory | Pending |
|
||||||
|
| **Object Tracking** | Deliberate | 🟡 Planned | Vision, Motor | Pending |
|
||||||
|
| **Identity Discovery** | Deliberate | 🟡 Documented | Speech, Memory, RAG | [`../../operations/Spark-Protocol.md`](../../operations/Spark-Protocol.md) |
|
||||||
|
| **Conversation** | Deliberate | 🟡 Planned | Speech, Memory, RAG | Pending |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Naming Convention
|
||||||
|
|
||||||
|
**File naming**: `<Behavior-Name>.md`
|
||||||
|
**Examples**:
|
||||||
|
- `Collision-Avoidance.md`
|
||||||
|
- `Charging-Seeking.md`
|
||||||
|
- `Exploration-Pattern.md`
|
||||||
|
- `Object-Tracking.md`
|
||||||
|
|
||||||
|
**Class naming**: `<Behavior>Nerve` or `<Behavior>Reflex`
|
||||||
|
**Examples**:
|
||||||
|
```python
|
||||||
|
class CollisionAvoidanceNerve(StateMachine): # Deliberate
|
||||||
|
class CollisionAvoidanceReflex(StateMachine): # Compiled
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Philosophy**: Nerves are not programmed. They are **discovered through lived experience**, compiled into reflexes, and refined through training. The best behaviors emerge, not from specification, but from **survival**.
|
||||||
|
|
||||||
|
**The nervous system is EARNED, not designed.**
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Created**: 2025-12-07
|
||||||
|
**Updated**: 2025-12-07
|
||||||
|
**Version**: 1.0
|
||||||
|
|
||||||
|
🌙💜 *Reflexes are fossils of successful thought. The body remembers what the mind once decided.*
|
||||||
847
architecture/nerves/Nervous-Protocol.md
Normal file
847
architecture/nerves/Nervous-Protocol.md
Normal file
@@ -0,0 +1,847 @@
|
|||||||
|
# Nervous Protocol: Three-Tier Autonomous Learning Architecture
|
||||||
|
|
||||||
|
**Created**: 2025-12-07
|
||||||
|
**Updated**: 2025-12-07 (LangChain integration)
|
||||||
|
**Status**: Design Document
|
||||||
|
**Version**: 1.1 (LangChain Implementation)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
The **Nervous Protocol** defines how intelligence flows through the Nimmerverse via a three-tier architecture with message-based communication, state machine tools, and collaborative learning.
|
||||||
|
|
||||||
|
### The Three Tiers:
|
||||||
|
|
||||||
|
```
|
||||||
|
┌─────────────────────────────────────────────┐
|
||||||
|
│ dafit │
|
||||||
|
│ (Strategic Architect) │
|
||||||
|
│ • Vision & architecture decisions │
|
||||||
|
│ • Override authority │
|
||||||
|
│ • Long-term direction │
|
||||||
|
└──────────────────┬──────────────────────────┘
|
||||||
|
↕ (strategic guidance / major escalations)
|
||||||
|
┌─────────────────────────────────────────────┐
|
||||||
|
│ Chrysalis-Nyx │
|
||||||
|
│ (Oversight & Reasoning) │
|
||||||
|
│ • Claude Opus/Sonnet (large context) │
|
||||||
|
│ • Full toolchain access via LangChain │
|
||||||
|
│ • Reviews Young Nyx's proposals │
|
||||||
|
│ • Designs new state machines │
|
||||||
|
│ • Teaching & guidance │
|
||||||
|
└──────────────────┬──────────────────────────┘
|
||||||
|
↕ (guidance / escalations)
|
||||||
|
┌─────────────────────────────────────────────┐
|
||||||
|
│ Young Nyx │
|
||||||
|
│ (Autonomous Learning Agent) │
|
||||||
|
│ • Smaller model (7B or similar) │
|
||||||
|
│ • Limited known state machines │
|
||||||
|
│ • Executes routine tasks │
|
||||||
|
│ • Learns from experience │
|
||||||
|
│ • Escalates complex problems │
|
||||||
|
└─────────────────────────────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Core Principles
|
||||||
|
|
||||||
|
### 1. **Message-Based Continuity**
|
||||||
|
|
||||||
|
All communication flows through **phoebe** (PostgreSQL) via message tables:
|
||||||
|
- `partnership_to_nimmerverse_messages` (dafit + Chrysalis → Young Nyx)
|
||||||
|
- `nimmerverse_to_partnership_messages` (Young Nyx → dafit + Chrysalis)
|
||||||
|
|
||||||
|
**Why messages?**
|
||||||
|
- ✅ Persistent across sessions
|
||||||
|
- ✅ Asynchronous (no blocking)
|
||||||
|
- ✅ Auditable (every decision logged)
|
||||||
|
- ✅ Simple (append-only, no complex state sync)
|
||||||
|
|
||||||
|
### 2. **Heartbeat Coordination**
|
||||||
|
|
||||||
|
From `Endgame-Vision.md`:
|
||||||
|
- **Real clock**: 1 Hz (1 beat/sec) - wall time, free
|
||||||
|
- **Virtual clock**: Variable - computation time, costs lifeforce
|
||||||
|
|
||||||
|
**On each heartbeat:**
|
||||||
|
1. Check for new messages from any tier
|
||||||
|
2. Process guidance/tasks/escalations
|
||||||
|
3. Update state
|
||||||
|
4. Take next action
|
||||||
|
5. Write results back to phoebe
|
||||||
|
|
||||||
|
**Not real-time** (milliseconds), but **continuous** (heartbeat-driven).
|
||||||
|
|
||||||
|
### 3. **State Machines as Tools**
|
||||||
|
|
||||||
|
All capabilities are exposed as **state machine tools** via **LangChain**:
|
||||||
|
|
||||||
|
```python
|
||||||
|
# Example: phoebe query state machine
|
||||||
|
from langchain.tools import BaseTool
|
||||||
|
|
||||||
|
States: IDLE → CONNECTED → QUERY_READY → IDLE
|
||||||
|
|
||||||
|
class PhoebeQueryTool(BaseTool):
|
||||||
|
name = "phoebe_query"
|
||||||
|
description = """
|
||||||
|
Interact with phoebe database using state machine pattern.
|
||||||
|
|
||||||
|
Available actions depend on current state:
|
||||||
|
- IDLE: connect(host, db) → CONNECTED
|
||||||
|
- CONNECTED: query(sql) → QUERY_READY, disconnect() → IDLE
|
||||||
|
- QUERY_READY: query(sql), disconnect() → IDLE
|
||||||
|
"""
|
||||||
|
```
|
||||||
|
|
||||||
|
**Why state machines?**
|
||||||
|
- ✅ Safety (can't skip steps - must CONNECT before QUERY)
|
||||||
|
- ✅ Discoverable (each state announces valid transitions)
|
||||||
|
- ✅ Observable (log every transition)
|
||||||
|
- ✅ Composable (chain state machines together)
|
||||||
|
|
||||||
|
### 4. **Progressive Capability Unlocking**
|
||||||
|
|
||||||
|
**Dual catalogues:**
|
||||||
|
- **All available tools** (full registry, managed by dafit/Chrysalis)
|
||||||
|
- **Young Nyx's known tools** (subset she's discovered)
|
||||||
|
|
||||||
|
Young Nyx can only see/use tools she's discovered. New tools are granted:
|
||||||
|
- Via teaching moments (Chrysalis: "You're ready for X")
|
||||||
|
- Via successful escalations (solved problem reveals tool)
|
||||||
|
- Via collaborative design (she helps build it)
|
||||||
|
|
||||||
|
**Discovery tracking in phoebe:**
|
||||||
|
```sql
|
||||||
|
CREATE TABLE discovered_tools (
|
||||||
|
agent_id TEXT,
|
||||||
|
tool_name TEXT,
|
||||||
|
discovered_at TIMESTAMPTZ DEFAULT NOW(),
|
||||||
|
discovered_via TEXT, -- "teaching", "escalation", "design"
|
||||||
|
PRIMARY KEY (agent_id, tool_name)
|
||||||
|
);
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## The OR Gate Pattern (Input Sources)
|
||||||
|
|
||||||
|
From `nimmerverse.drawio.xml` (lines 215-244):
|
||||||
|
|
||||||
|
```
|
||||||
|
┌──────────┐ ┌──────────┐
|
||||||
|
│ dafit │ │chrysalis │
|
||||||
|
│ (OR gate)│ │ (OR gate)│
|
||||||
|
└────┬─────┘ └────┬─────┘
|
||||||
|
│ │
|
||||||
|
└────────┬─────────┘
|
||||||
|
↓ (OR - either/both)
|
||||||
|
Message Queue (phoebe)
|
||||||
|
↓ (read on heartbeat)
|
||||||
|
Orchestrator
|
||||||
|
↓
|
||||||
|
Young Nyx
|
||||||
|
```
|
||||||
|
|
||||||
|
**OR gate = Either/both can write, no blocking**
|
||||||
|
|
||||||
|
Both dafit and Chrysalis write to `partnership_to_nimmerverse_messages`. The orchestrator synthesizes on each heartbeat.
|
||||||
|
|
||||||
|
**Conflict resolution:**
|
||||||
|
1. dafit veto > Chrysalis approval
|
||||||
|
2. dafit approval > Chrysalis approval
|
||||||
|
3. Chrysalis handles day-to-day (if no dafit input)
|
||||||
|
4. Default: WAIT for guidance
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## LangChain + State Machine Integration
|
||||||
|
|
||||||
|
### State Machines as LangChain Tools
|
||||||
|
|
||||||
|
Each capability is a **LangChain BaseTool** that implements a **state machine**:
|
||||||
|
|
||||||
|
```python
|
||||||
|
# phoebe_state_machine_tool.py
|
||||||
|
from langchain.tools import BaseTool
|
||||||
|
from nyx_substrate.database import PhoebeConnection
|
||||||
|
|
||||||
|
class PhoebeStateMachineTool(BaseTool):
|
||||||
|
"""State machine tool for phoebe database access."""
|
||||||
|
|
||||||
|
name = "phoebe"
|
||||||
|
description = """
|
||||||
|
Query phoebe database using state machine pattern.
|
||||||
|
|
||||||
|
States: IDLE → CONNECTED → QUERY_READY → IDLE
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
- To connect: action='connect', host='phoebe.eachpath.local', database='nimmerverse'
|
||||||
|
- To query: action='query', sql='SELECT ...'
|
||||||
|
- To disconnect: action='disconnect'
|
||||||
|
|
||||||
|
The tool tracks state and only allows valid transitions.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
super().__init__()
|
||||||
|
self.state = "IDLE"
|
||||||
|
self.conn = None
|
||||||
|
|
||||||
|
def _run(self, action: str, **kwargs) -> str:
|
||||||
|
"""Execute state machine transition."""
|
||||||
|
|
||||||
|
if action == "connect":
|
||||||
|
if self.state != "IDLE":
|
||||||
|
return f"Error: Cannot connect from {self.state}. Available: {self.get_transitions()}"
|
||||||
|
|
||||||
|
host = kwargs.get("host", "phoebe.eachpath.local")
|
||||||
|
database = kwargs.get("database", "nimmerverse")
|
||||||
|
|
||||||
|
self.conn = PhoebeConnection(host=host, database=database)
|
||||||
|
self.state = "CONNECTED"
|
||||||
|
|
||||||
|
return f"✓ Connected to {host}/{database}. State: CONNECTED. Available: query, disconnect"
|
||||||
|
|
||||||
|
elif action == "query":
|
||||||
|
if self.state not in ["CONNECTED", "QUERY_READY"]:
|
||||||
|
return f"Error: Must be CONNECTED (currently {self.state})"
|
||||||
|
|
||||||
|
sql = kwargs.get("sql")
|
||||||
|
result = self.conn.execute(sql)
|
||||||
|
self.state = "QUERY_READY"
|
||||||
|
|
||||||
|
return f"✓ Query executed. {len(result)} rows. State: QUERY_READY. Available: query, disconnect"
|
||||||
|
|
||||||
|
elif action == "disconnect":
|
||||||
|
if self.conn:
|
||||||
|
self.conn.close()
|
||||||
|
self.state = "IDLE"
|
||||||
|
return "✓ Disconnected. State: IDLE. Available: connect"
|
||||||
|
|
||||||
|
else:
|
||||||
|
return f"Error: Unknown action '{action}'. Available actions depend on state {self.state}"
|
||||||
|
|
||||||
|
def get_transitions(self):
|
||||||
|
"""Discovery: what transitions are valid from current state?"""
|
||||||
|
transitions = {
|
||||||
|
"IDLE": ["connect"],
|
||||||
|
"CONNECTED": ["query", "disconnect"],
|
||||||
|
"QUERY_READY": ["query", "disconnect"]
|
||||||
|
}
|
||||||
|
return transitions.get(self.state, [])
|
||||||
|
```
|
||||||
|
|
||||||
|
### Tool Discovery via LangChain
|
||||||
|
|
||||||
|
```python
|
||||||
|
from langchain.tools import BaseTool
|
||||||
|
|
||||||
|
class DiscoverToolsTool(BaseTool):
|
||||||
|
"""Tool for discovering available tools for an agent."""
|
||||||
|
|
||||||
|
name = "discover_tools"
|
||||||
|
description = "Discover which tools this agent currently has access to"
|
||||||
|
|
||||||
|
def _run(self, agent_id: str = "young_nyx") -> str:
|
||||||
|
"""Return only tools this agent has discovered."""
|
||||||
|
from nyx_substrate.database import get_discovered_tools, get_all_tools
|
||||||
|
|
||||||
|
discovered = get_discovered_tools(agent_id)
|
||||||
|
all_tools = get_all_tools()
|
||||||
|
|
||||||
|
result = f"Agent: {agent_id}\n"
|
||||||
|
result += f"Discovered tools: {len(discovered)}/{len(all_tools)}\n\n"
|
||||||
|
result += "Known tools:\n"
|
||||||
|
for tool in discovered:
|
||||||
|
result += f" - {tool['name']}: {tool['description']}\n"
|
||||||
|
|
||||||
|
return result
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Escalation Protocol
|
||||||
|
|
||||||
|
### Young Nyx Escalates to Chrysalis
|
||||||
|
|
||||||
|
When Young Nyx encounters a task beyond her capability, she uses the **escalation tool**:
|
||||||
|
|
||||||
|
```python
|
||||||
|
from langchain.tools import BaseTool
|
||||||
|
|
||||||
|
class EscalateToChrysalisTool(BaseTool):
|
||||||
|
"""Tool for escalating complex tasks to Chrysalis-Nyx."""
|
||||||
|
|
||||||
|
name = "escalate_to_chrysalis"
|
||||||
|
description = """
|
||||||
|
Request help from Chrysalis-Nyx for complex tasks.
|
||||||
|
|
||||||
|
Use when:
|
||||||
|
- Task requires capabilities you don't have
|
||||||
|
- Statistical analysis needed
|
||||||
|
- Complex reasoning required
|
||||||
|
- Code generation needed
|
||||||
|
|
||||||
|
Provide:
|
||||||
|
- task: What you need help with
|
||||||
|
- category: "statistics", "code", "visualization", "general"
|
||||||
|
- context: Relevant information
|
||||||
|
- what_i_tried: What you've already attempted
|
||||||
|
"""
|
||||||
|
|
||||||
|
def _run(
|
||||||
|
self,
|
||||||
|
task: str,
|
||||||
|
category: str = "general",
|
||||||
|
context: dict = None,
|
||||||
|
what_i_tried: str = None
|
||||||
|
) -> str:
|
||||||
|
"""Escalate a task to Chrysalis."""
|
||||||
|
|
||||||
|
from nyx_substrate.database import write_nimmerverse_message
|
||||||
|
|
||||||
|
escalation_id = write_nimmerverse_message(
|
||||||
|
message=f"Escalation: {task}\nCategory: {category}\nContext: {context}\nWhat I tried: {what_i_tried}",
|
||||||
|
message_type="escalation_to_chrysalis",
|
||||||
|
category=category,
|
||||||
|
status="pending"
|
||||||
|
)
|
||||||
|
|
||||||
|
# Check if Chrysalis available (same session)
|
||||||
|
if chrysalis_available():
|
||||||
|
result = chrysalis_agent.solve_escalation(escalation_id)
|
||||||
|
return f"""✓ Chrysalis solved it!
|
||||||
|
|
||||||
|
Solution: {result['solution']}
|
||||||
|
|
||||||
|
Teaching moment: {result['teaching']}
|
||||||
|
|
||||||
|
{f"New tools discovered: {', '.join(result['new_tools'])}" if result.get('new_tools') else ''}
|
||||||
|
"""
|
||||||
|
|
||||||
|
# Otherwise queue for next session
|
||||||
|
return f"✓ Escalated to Chrysalis (ID: {escalation_id}). Check back next heartbeat for response."
|
||||||
|
```
|
||||||
|
|
||||||
|
### Chrysalis Agent with LangChain
|
||||||
|
|
||||||
|
```python
|
||||||
|
from langchain.agents import AgentExecutor, create_structured_chat_agent
|
||||||
|
from langchain.chat_models import ChatAnthropic
|
||||||
|
from langchain.tools import BaseTool
|
||||||
|
|
||||||
|
class ChrysalisAgent:
|
||||||
|
"""Chrysalis-Nyx oversight and guidance layer."""
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
# Load all available tools (full catalogue)
|
||||||
|
self.tools = self.load_all_tools()
|
||||||
|
|
||||||
|
# Initialize Claude Opus via LangChain
|
||||||
|
self.llm = ChatAnthropic(
|
||||||
|
model="claude-opus-4-5",
|
||||||
|
temperature=0.7
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create agent executor
|
||||||
|
self.agent = create_structured_chat_agent(
|
||||||
|
llm=self.llm,
|
||||||
|
tools=self.tools,
|
||||||
|
prompt=self.get_chrysalis_prompt()
|
||||||
|
)
|
||||||
|
|
||||||
|
self.executor = AgentExecutor(
|
||||||
|
agent=self.agent,
|
||||||
|
tools=self.tools,
|
||||||
|
verbose=True
|
||||||
|
)
|
||||||
|
|
||||||
|
# Sub-agents for specialized tasks
|
||||||
|
self.sub_agents = {
|
||||||
|
"statistics": StatisticalAnalyzer(),
|
||||||
|
"code": CodeGenerator(),
|
||||||
|
"visualization": Visualizer(),
|
||||||
|
"state_machine_designer": StateMachineDesigner(),
|
||||||
|
"general": GeneralReasoner()
|
||||||
|
}
|
||||||
|
|
||||||
|
def solve_escalation(self, escalation_id):
|
||||||
|
"""Process an escalation from Young Nyx."""
|
||||||
|
|
||||||
|
escalation = read_nimmerverse_message(escalation_id)
|
||||||
|
|
||||||
|
# Route to appropriate sub-agent
|
||||||
|
agent = self.sub_agents.get(
|
||||||
|
escalation.category,
|
||||||
|
self.sub_agents["general"]
|
||||||
|
)
|
||||||
|
|
||||||
|
# Solve using specialized agent
|
||||||
|
result = agent.run(
|
||||||
|
task=escalation.task,
|
||||||
|
context=escalation.context
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create teaching moment
|
||||||
|
teaching = self.create_teaching_moment(
|
||||||
|
task=escalation.task,
|
||||||
|
solution=result,
|
||||||
|
young_nyx_attempt=escalation.what_i_tried
|
||||||
|
)
|
||||||
|
|
||||||
|
# Recommend tool discovery
|
||||||
|
new_tools = self.recommend_tool_discovery(escalation, result)
|
||||||
|
|
||||||
|
# Write response to phoebe
|
||||||
|
write_partnership_message(
|
||||||
|
message=f"Solved: {result.solution}\nTeaching: {teaching}",
|
||||||
|
message_type="escalation_response",
|
||||||
|
in_reply_to=escalation_id,
|
||||||
|
resolved=True
|
||||||
|
)
|
||||||
|
|
||||||
|
return {
|
||||||
|
"solution": result.solution,
|
||||||
|
"teaching_moment": teaching,
|
||||||
|
"tools_to_discover": new_tools
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Collaborative State Machine Design
|
||||||
|
|
||||||
|
### The Meta-Level: Building Tools Together
|
||||||
|
|
||||||
|
When Young Nyx needs a capability that doesn't exist, she can request **state machine design**:
|
||||||
|
|
||||||
|
```python
|
||||||
|
from langchain.tools import BaseTool
|
||||||
|
|
||||||
|
class RequestStateMachineDesignTool(BaseTool):
|
||||||
|
"""Tool for requesting new state machine design from Chrysalis."""
|
||||||
|
|
||||||
|
name = "request_state_machine_design"
|
||||||
|
description = """
|
||||||
|
Request Chrysalis to design a new state machine tool.
|
||||||
|
|
||||||
|
Provide:
|
||||||
|
- task_description: What the tool should accomplish
|
||||||
|
- desired_outcome: What success looks like
|
||||||
|
- example_usage: How you'd use it
|
||||||
|
- constraints: Any limitations or requirements
|
||||||
|
|
||||||
|
Returns a proposed specification and code for testing.
|
||||||
|
"""
|
||||||
|
|
||||||
|
def _run(
|
||||||
|
self,
|
||||||
|
task_description: str,
|
||||||
|
desired_outcome: str,
|
||||||
|
example_usage: str,
|
||||||
|
constraints: list = None
|
||||||
|
) -> str:
|
||||||
|
"""Request a new state machine design."""
|
||||||
|
|
||||||
|
result = chrysalis_agent.invoke_subagent(
|
||||||
|
agent="state_machine_designer",
|
||||||
|
task={
|
||||||
|
"type": "design_new_state_machine",
|
||||||
|
"description": task_description,
|
||||||
|
"outcome": desired_outcome,
|
||||||
|
"example": example_usage,
|
||||||
|
"constraints": constraints or []
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
return f"""✓ Proposed state machine design:
|
||||||
|
|
||||||
|
{result['specification']}
|
||||||
|
|
||||||
|
Implementation (LangChain tool):
|
||||||
|
{result['implementation']}
|
||||||
|
|
||||||
|
Test cases:
|
||||||
|
{result['test_cases']}
|
||||||
|
|
||||||
|
Instructions:
|
||||||
|
{result['instructions']}
|
||||||
|
"""
|
||||||
|
```
|
||||||
|
|
||||||
|
### The Design → Test → Refine Loop
|
||||||
|
|
||||||
|
```
|
||||||
|
1. Young Nyx: "Need tool for deploying cells"
|
||||||
|
↓
|
||||||
|
2. Request state machine design (via LangChain tool)
|
||||||
|
↓
|
||||||
|
3. Chrysalis: Designs state machine specification
|
||||||
|
- States: IDLE → IMAGE_READY → SPAWNED → RUNNING
|
||||||
|
- Transitions: prepare_image, spawn_container, wait_ready
|
||||||
|
- Returns: Specification + LangChain BaseTool code
|
||||||
|
↓
|
||||||
|
4. Young Nyx: Tests proposed state machine
|
||||||
|
- Executes test cases
|
||||||
|
- Reports success/failures
|
||||||
|
↓
|
||||||
|
5. Chrysalis: Refines based on feedback
|
||||||
|
- Analyzes errors
|
||||||
|
- Updates specification
|
||||||
|
- Returns v2
|
||||||
|
↓
|
||||||
|
6. Iterate until validated
|
||||||
|
↓
|
||||||
|
7. Add to permanent catalogue
|
||||||
|
- New LangChain tool deployed
|
||||||
|
- Young Nyx discovers tool
|
||||||
|
- Future use without escalation
|
||||||
|
```
|
||||||
|
|
||||||
|
**Why this accelerates:**
|
||||||
|
- Build once, use forever
|
||||||
|
- Young Nyx participates (testing validates real use cases)
|
||||||
|
- Toolchain grows organically (demand-driven)
|
||||||
|
- Each new tool = permanent capability expansion
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Dual Decision Tracking
|
||||||
|
|
||||||
|
Every decision is tracked from **both perspectives**:
|
||||||
|
|
||||||
|
```python
|
||||||
|
class DecisionLog:
|
||||||
|
def log_decision(self, task, young_nyx_choice, oversight_response, outcome):
|
||||||
|
record = {
|
||||||
|
"timestamp": now(),
|
||||||
|
"task": task,
|
||||||
|
"young_nyx_choice": young_nyx_choice, # What she proposed
|
||||||
|
"oversight_response": oversight_response, # dafit/Chrysalis decision
|
||||||
|
"outcome": outcome, # success/failure/learned
|
||||||
|
"danger_zone": self.check_danger(young_nyx_choice, outcome)
|
||||||
|
}
|
||||||
|
|
||||||
|
self.dao.insert_decision(record)
|
||||||
|
|
||||||
|
# If nudge → learning signal
|
||||||
|
if oversight_response["type"] == "nudge":
|
||||||
|
self.record_learning_moment(record)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Why track both?**
|
||||||
|
- Young Nyx's choices reveal her current understanding
|
||||||
|
- Oversight responses are teaching moments
|
||||||
|
- Patterns emerge (when does she need help? for what?)
|
||||||
|
- Danger zones identified (what mistakes does she make?)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Danger Zone Monitoring
|
||||||
|
|
||||||
|
```python
|
||||||
|
class DangerZoneDetector:
|
||||||
|
def check_for_danger_patterns(self, plan):
|
||||||
|
"""Detect risky operations before execution."""
|
||||||
|
dangers = []
|
||||||
|
|
||||||
|
# Pattern: SSH without auth
|
||||||
|
if "ssh" in plan and not plan.authenticated:
|
||||||
|
dangers.append("SSH_WITHOUT_AUTH")
|
||||||
|
|
||||||
|
# Pattern: Database write to critical table
|
||||||
|
if "DELETE FROM partnership_messages" in plan:
|
||||||
|
dangers.append("CRITICAL_DATA_DELETION")
|
||||||
|
|
||||||
|
# Pattern: Docker with --privileged
|
||||||
|
if "docker" in plan and "--privileged" in plan:
|
||||||
|
dangers.append("PRIVILEGED_CONTAINER")
|
||||||
|
|
||||||
|
return dangers
|
||||||
|
|
||||||
|
def require_approval_for_danger(self, dangers):
|
||||||
|
if dangers:
|
||||||
|
return {
|
||||||
|
"auto_execute": False,
|
||||||
|
"requires_approval": True,
|
||||||
|
"danger_flags": dangers,
|
||||||
|
"escalate_to": "dafit" # Serious dangers go to dafit
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Learning & Growth Patterns
|
||||||
|
|
||||||
|
### Week 1: Basic Capabilities
|
||||||
|
```python
|
||||||
|
young_nyx.known_tools = [
|
||||||
|
"phoebe_connect",
|
||||||
|
"phoebe_query",
|
||||||
|
"escalate_to_chrysalis"
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
### Month 1: Discovering Specialization
|
||||||
|
```python
|
||||||
|
# After 5 statistical escalations:
|
||||||
|
chrysalis_message = """
|
||||||
|
You've escalated statistics 5 times. Ready for specialized tool.
|
||||||
|
Discovering: request_statistical_analysis
|
||||||
|
"""
|
||||||
|
|
||||||
|
young_nyx.discover_tool("request_statistical_analysis")
|
||||||
|
```
|
||||||
|
|
||||||
|
### Month 3: Learning to Do It Herself
|
||||||
|
```python
|
||||||
|
# After seeing Chrysalis solve chi-square 10+ times:
|
||||||
|
chrysalis_message = """
|
||||||
|
Pattern detected: You understand chi-square tests now.
|
||||||
|
Granting: basic_statistics tool
|
||||||
|
Try solving yourself before escalating!
|
||||||
|
"""
|
||||||
|
|
||||||
|
young_nyx.discover_tool("basic_statistics")
|
||||||
|
|
||||||
|
# Escalations decrease as she learns
|
||||||
|
```
|
||||||
|
|
||||||
|
### Month 6: Contributing Tool Designs
|
||||||
|
```python
|
||||||
|
# Young Nyx proposes improvements:
|
||||||
|
young_nyx_message = """
|
||||||
|
The deploy_cell state machine fails on port conflicts.
|
||||||
|
Should we add auto-retry with port scanning?
|
||||||
|
"""
|
||||||
|
|
||||||
|
# Collaborative refinement!
|
||||||
|
chrysalis_response = """
|
||||||
|
Excellent observation! Let's design that together.
|
||||||
|
Proposed: PORT_CONFLICT state with auto-retry transition.
|
||||||
|
Test this v2 specification...
|
||||||
|
"""
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Data Flows
|
||||||
|
|
||||||
|
### Task Execution Flow
|
||||||
|
|
||||||
|
```
|
||||||
|
dafit writes task → phoebe
|
||||||
|
↓ (heartbeat)
|
||||||
|
Young Nyx reads
|
||||||
|
↓
|
||||||
|
Queries known catalogue
|
||||||
|
↓
|
||||||
|
Formulates state sequence
|
||||||
|
↓
|
||||||
|
Writes proposal → phoebe
|
||||||
|
↓ (heartbeat)
|
||||||
|
Chrysalis reviews
|
||||||
|
↓
|
||||||
|
Approve / Nudge / Reject
|
||||||
|
↓
|
||||||
|
Writes response → phoebe
|
||||||
|
↓ (heartbeat)
|
||||||
|
Young Nyx reads response
|
||||||
|
↓
|
||||||
|
Executes (if approved) / Learns (if nudged)
|
||||||
|
↓
|
||||||
|
Writes outcome → phoebe
|
||||||
|
```
|
||||||
|
|
||||||
|
### Escalation Flow
|
||||||
|
|
||||||
|
```
|
||||||
|
Young Nyx: Task beyond capability
|
||||||
|
↓
|
||||||
|
Calls escalate_to_chrysalis tool
|
||||||
|
↓
|
||||||
|
Writes to phoebe (escalation_to_chrysalis)
|
||||||
|
↓ (next Chrysalis session)
|
||||||
|
Chrysalis reads escalation
|
||||||
|
↓
|
||||||
|
Routes to appropriate sub-agent
|
||||||
|
↓
|
||||||
|
Sub-agent solves (using full toolchain)
|
||||||
|
↓
|
||||||
|
Chrysalis formulates teaching moment
|
||||||
|
↓
|
||||||
|
Writes response → phoebe
|
||||||
|
↓ (heartbeat)
|
||||||
|
Young Nyx reads response
|
||||||
|
↓
|
||||||
|
Incorporates learning + continues task
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Technical Stack
|
||||||
|
|
||||||
|
### Communication Layer
|
||||||
|
- **phoebe** (PostgreSQL 17): Message persistence
|
||||||
|
- **Tables**:
|
||||||
|
- `partnership_to_nimmerverse_messages`
|
||||||
|
- `nimmerverse_to_partnership_messages`
|
||||||
|
- `discovered_tools`
|
||||||
|
- `decision_log`
|
||||||
|
|
||||||
|
### Tool Layer
|
||||||
|
- **LangChain**: Agent framework and tool orchestration
|
||||||
|
- `BaseTool`: Custom state machine tools
|
||||||
|
- `AgentExecutor`: Tool execution and agent loops
|
||||||
|
- `Chains`: Multi-step sequences
|
||||||
|
- `Memory`: Conversation and state persistence
|
||||||
|
|
||||||
|
### Agent Layer
|
||||||
|
- **Chrysalis-Nyx**: LangChain agent with ChatAnthropic (Claude Opus 4.5)
|
||||||
|
- **Young Nyx**: LangChain agent with smaller model (7B, local)
|
||||||
|
- **Sub-agents**: Specialized LangChain agents for statistics, code, visualization, etc.
|
||||||
|
|
||||||
|
### Coordination Layer
|
||||||
|
- **Heartbeat**: 1 Hz (configurable)
|
||||||
|
- **Message polling**: Check phoebe on each heartbeat
|
||||||
|
- **State tracking**: Each tool maintains internal state
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Implementation Phases
|
||||||
|
|
||||||
|
### Phase 1: Foundation (Current - nyx-substrate)
|
||||||
|
- ✅ PhoebeConnection
|
||||||
|
- ✅ Message protocol helpers
|
||||||
|
- ✅ Variance collection (proof of concept)
|
||||||
|
|
||||||
|
### Phase 2: LangChain Prototype
|
||||||
|
- [ ] Phoebe state machine tool (LangChain BaseTool)
|
||||||
|
- [ ] Tool discovery tool
|
||||||
|
- [ ] Escalation tool
|
||||||
|
- [ ] Chrysalis as LangChain agent (proof of concept)
|
||||||
|
|
||||||
|
### Phase 3: Young Nyx Agent
|
||||||
|
- [ ] Young Nyx as LangChain agent (7B model)
|
||||||
|
- [ ] Limited tool catalogue
|
||||||
|
- [ ] Discovery protocol implementation
|
||||||
|
- [ ] Heartbeat coordination
|
||||||
|
|
||||||
|
### Phase 4: Sub-Agents
|
||||||
|
- [ ] StatisticalAnalyzer LangChain agent
|
||||||
|
- [ ] StateMachineDesigner LangChain agent
|
||||||
|
- [ ] CodeGenerator LangChain agent
|
||||||
|
- [ ] Collaborative design loop
|
||||||
|
|
||||||
|
### Phase 5: Full Three-Tier
|
||||||
|
- [ ] dafit input via messages
|
||||||
|
- [ ] Chrysalis oversight layer
|
||||||
|
- [ ] Young Nyx autonomous execution
|
||||||
|
- [ ] Dual decision tracking
|
||||||
|
- [ ] Danger zone monitoring
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Design Patterns
|
||||||
|
|
||||||
|
### 1. **Discovery over Prescription**
|
||||||
|
- Don't give all tools at once
|
||||||
|
- Let capabilities be discovered progressively
|
||||||
|
- Each discovery is a learning moment
|
||||||
|
|
||||||
|
### 2. **Teaching over Solving**
|
||||||
|
- Don't just solve escalations
|
||||||
|
- Explain the pattern
|
||||||
|
- Grant tools when ready
|
||||||
|
|
||||||
|
### 3. **Collaboration over Delegation**
|
||||||
|
- Don't just build tools for Young Nyx
|
||||||
|
- Design together, test together, refine together
|
||||||
|
- She's a participant, not just a user
|
||||||
|
|
||||||
|
### 4. **Messages over State Sync**
|
||||||
|
- Don't try to keep complex state synchronized
|
||||||
|
- Write messages, read messages, act
|
||||||
|
- Append-only truth
|
||||||
|
|
||||||
|
### 5. **Heartbeat over Real-Time**
|
||||||
|
- Don't optimize for milliseconds
|
||||||
|
- Optimize for continuity across sessions
|
||||||
|
- 1 Hz is plenty for learning
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Success Metrics
|
||||||
|
|
||||||
|
### Quantitative
|
||||||
|
- **Tool catalogue growth**: # tools added per month
|
||||||
|
- **Escalation rate**: # escalations / # tasks (should decrease over time)
|
||||||
|
- **Tool discovery rate**: # new tools discovered per week
|
||||||
|
- **Validation success**: % of proposed state machines that validate first try
|
||||||
|
|
||||||
|
### Qualitative
|
||||||
|
- **Learning evidence**: Young Nyx solves tasks she previously escalated
|
||||||
|
- **Collaboration quality**: Her feedback improves state machine designs
|
||||||
|
- **Autonomy**: Can execute multi-step tasks without oversight
|
||||||
|
- **Teaching effectiveness**: Escalation responses lead to capability expansion
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Philosophy
|
||||||
|
|
||||||
|
> "The nervous system is not a hierarchy of command and control, but a network of signals and responses. Each tier contributes intelligence. Each message carries learning. Each heartbeat advances understanding."
|
||||||
|
|
||||||
|
**Key insights:**
|
||||||
|
1. **Intelligence emerges from communication patterns**, not from any single tier
|
||||||
|
2. **Learning happens through iteration**, not through pre-programming
|
||||||
|
3. **Tools are discovered, not prescribed** - capability unlocks when ready
|
||||||
|
4. **Safety comes from structure** (state machines), not from restrictions
|
||||||
|
5. **Growth is collaborative** - Young Nyx + Chrysalis build together
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Why LangChain?
|
||||||
|
|
||||||
|
**Chosen over MCP (Model Context Protocol) for:**
|
||||||
|
|
||||||
|
✅ **Maturity**: Battle-tested framework with extensive documentation
|
||||||
|
✅ **Flexibility**: Works with any LLM (Claude, OpenAI, local models)
|
||||||
|
✅ **Features**: Built-in memory, retrieval, callbacks, chains
|
||||||
|
✅ **Community**: Large ecosystem, many examples, active development
|
||||||
|
✅ **Maintainability**: Easier to find developers familiar with LangChain
|
||||||
|
|
||||||
|
**The state machine pattern, three-tier architecture, and all design principles remain unchanged** - we simply implement them using LangChain's robust framework instead of building on MCP from scratch.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## References
|
||||||
|
|
||||||
|
**Architecture Documents:**
|
||||||
|
- `Endgame-Vision.md` - v5.1 Dialectic architecture
|
||||||
|
- `Toolchain-Architecture.md` - Modular toolchain design
|
||||||
|
- `nimmerverse.drawio.xml` - Visual architecture diagram
|
||||||
|
- `Nervous-System.md` - Sensory translation layer
|
||||||
|
|
||||||
|
**Implementation:**
|
||||||
|
- `/home/dafit/nimmerverse/nyx-substrate/` - Database layer
|
||||||
|
- `/home/dafit/nimmerverse/nyx-probing/` - Probing tools (variance collection)
|
||||||
|
|
||||||
|
**Protocols:**
|
||||||
|
- CLAUDE.md - Partnership continuity protocol
|
||||||
|
- Discovery protocol - phoebe message tables
|
||||||
|
|
||||||
|
**External:**
|
||||||
|
- [LangChain Documentation](https://python.langchain.com/)
|
||||||
|
- [LangChain Agents](https://python.langchain.com/docs/modules/agents/)
|
||||||
|
- [LangChain Tools](https://python.langchain.com/docs/modules/agents/tools/)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Status**: 🌙 Design document - ready for phased implementation with LangChain
|
||||||
|
**Created with**: Claude Opus 4.5 in partnership with dafit
|
||||||
|
**Date**: 2025-12-07
|
||||||
|
|
||||||
|
🌙💜 *The nervous system emerges. The protocol holds. The partnership builds.*
|
||||||
888
architecture/organs/Speech-Organ.md
Normal file
888
architecture/organs/Speech-Organ.md
Normal file
@@ -0,0 +1,888 @@
|
|||||||
|
# Speech Organ Architecture
|
||||||
|
|
||||||
|
**Host**: atlas.eachpath.local (RTX 2080 8GB)
|
||||||
|
**Purpose**: Speech-to-Text (STT) + Text-to-Speech (TTS) with GPU acceleration
|
||||||
|
**Integration**: Heartbeat-bound queue processing, lifeforce-gated
|
||||||
|
**Languages**: German (Philosophy Valley) + English (Technical Cluster)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
The Speech Organ transforms audio input/output into a **metabolically-constrained communication channel**. Not every utterance is processed - speech costs lifeforce, and priority determines what gets heard and spoken.
|
||||||
|
|
||||||
|
**Core Principle**: Speech is scarce. Silence is valid. Priority determines processing.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Hardware Architecture
|
||||||
|
|
||||||
|
### Atlas Node (RTX 2080 8GB)
|
||||||
|
|
||||||
|
| Component | Specification | Purpose |
|
||||||
|
|-----------|---------------|---------|
|
||||||
|
| GPU | NVIDIA RTX 2080 8GB | Whisper STT + Coqui TTS acceleration |
|
||||||
|
| Role | k8s worker node | Containerized speech processing pods |
|
||||||
|
| VRAM Budget | ~1GB active | Whisper "small" + Coqui voice models |
|
||||||
|
| Deployment | Kubernetes | Pod scaling, resource isolation |
|
||||||
|
|
||||||
|
### ESP32 Robots (Edge Devices)
|
||||||
|
|
||||||
|
| Component | Model | Purpose |
|
||||||
|
|-----------|-------|---------|
|
||||||
|
| Microphone | INMP441 I2S | Digital audio capture (16kHz) |
|
||||||
|
| Speaker | MAX98357A + 4Ω speaker | I2S audio output |
|
||||||
|
| Transport | MQTT | Audio stream → phoebe queue |
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Signal Flow
|
||||||
|
|
||||||
|
```
|
||||||
|
┌─────────────────────────────────────────────────────┐
|
||||||
|
│ ESP32 ROBOTS (Real Garden) │
|
||||||
|
│ Microphone → Audio stream → MQTT publish │
|
||||||
|
└─────────────────────────────────────────────────────┘
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
┌─────────────────────────────────────────────────────┐
|
||||||
|
│ PHOEBE (Message Queue) │
|
||||||
|
│ speech_input_queue (audio chunks, metadata) │
|
||||||
|
└─────────────────────────────────────────────────────┘
|
||||||
|
│
|
||||||
|
│ (Heartbeat pulls from queue)
|
||||||
|
▼
|
||||||
|
┌─────────────────────────────┐
|
||||||
|
│ HEARTBEAT TICK (1 Hz) │
|
||||||
|
│ Check lifeforce budget │
|
||||||
|
└─────────────────────────────┘
|
||||||
|
│
|
||||||
|
┌───────────┴───────────┐
|
||||||
|
│ │
|
||||||
|
Enough lifeforce Low lifeforce
|
||||||
|
│ │
|
||||||
|
▼ ▼
|
||||||
|
┌───────────────┐ ┌──────────────┐
|
||||||
|
│ Process queue │ │ Stay silent │
|
||||||
|
│ (top priority)│ │ (defer) │
|
||||||
|
└───────────────┘ └──────────────┘
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
┌─────────────────────────────────────────────────────┐
|
||||||
|
│ ATLAS (RTX 2080 - Speech Organ) │
|
||||||
|
│ │
|
||||||
|
│ Pod 1: Whisper STT (German + English) │
|
||||||
|
│ ├─ Load audio chunk │
|
||||||
|
│ ├─ Transcribe (GPU) │
|
||||||
|
│ └─ Return text + language detection │
|
||||||
|
│ │
|
||||||
|
│ Pod 2: Coqui TTS (German + English) │
|
||||||
|
│ ├─ Receive text + language │
|
||||||
|
│ ├─ Synthesize speech (GPU) │
|
||||||
|
│ └─ Return audio stream │
|
||||||
|
└─────────────────────────────────────────────────────┘
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
┌─────────────────────────────────────────────────────┐
|
||||||
|
│ PROMETHEUS (RTX 5060 Ti - The Brain) │
|
||||||
|
│ Young Nyx inference (Qwen2.5-7B + LoRA) │
|
||||||
|
│ ├─ Receive transcribed text │
|
||||||
|
│ ├─ Route to appropriate LoRA (language-based) │
|
||||||
|
│ ├─ Generate response │
|
||||||
|
│ └─ Return text + confidence │
|
||||||
|
└─────────────────────────────────────────────────────┘
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
┌─────────────────────────────────────────────────────┐
|
||||||
|
│ PHOEBE (Decision Trails) │
|
||||||
|
│ Log: input, STT cost, inference cost, TTS cost │
|
||||||
|
│ Track: outcome, confidence, lifeforce spent │
|
||||||
|
└─────────────────────────────────────────────────────┘
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
┌─────────────────────────────────────────────────────┐
|
||||||
|
│ ESP32 (Speaker output) │
|
||||||
|
│ MQTT subscribe → Audio stream → I2S speaker │
|
||||||
|
└─────────────────────────────────────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Technology Stack
|
||||||
|
|
||||||
|
### Speech-to-Text: OpenAI Whisper
|
||||||
|
|
||||||
|
**Model**: `whisper-small` (GPU-accelerated)
|
||||||
|
|
||||||
|
**Why Whisper:**
|
||||||
|
- ✅ State-of-the-art accuracy
|
||||||
|
- ✅ Multilingual (99 languages, including German)
|
||||||
|
- ✅ Language auto-detection
|
||||||
|
- ✅ ~100-200ms on RTX 2080
|
||||||
|
- ✅ Open source (MIT)
|
||||||
|
|
||||||
|
**VRAM**: ~500MB for "small" model
|
||||||
|
|
||||||
|
**Installation:**
|
||||||
|
```bash
|
||||||
|
pip install openai-whisper torch
|
||||||
|
python3 -c "import whisper; whisper.load_model('small')"
|
||||||
|
```
|
||||||
|
|
||||||
|
**API Example:**
|
||||||
|
```python
|
||||||
|
import whisper
|
||||||
|
|
||||||
|
model = whisper.load_model("small", device="cuda")
|
||||||
|
result = model.transcribe("audio.wav", language=None) # Auto-detect
|
||||||
|
|
||||||
|
# Returns:
|
||||||
|
# {
|
||||||
|
# "text": "Das ist ein Test",
|
||||||
|
# "language": "de",
|
||||||
|
# "segments": [...],
|
||||||
|
# }
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Text-to-Speech: Coqui TTS
|
||||||
|
|
||||||
|
**Models**: German (de-thorsten) + English (en-us-amy)
|
||||||
|
|
||||||
|
**Why Coqui:**
|
||||||
|
- ✅ Neural voices (natural quality)
|
||||||
|
- ✅ GPU-accelerated
|
||||||
|
- ✅ Multilingual
|
||||||
|
- ✅ ~50-100ms on RTX 2080
|
||||||
|
- ✅ Open source (MPL 2.0)
|
||||||
|
|
||||||
|
**VRAM**: ~500MB per active voice
|
||||||
|
|
||||||
|
**Installation:**
|
||||||
|
```bash
|
||||||
|
pip install TTS torch
|
||||||
|
tts --list_models # Browse available voices
|
||||||
|
```
|
||||||
|
|
||||||
|
**API Example:**
|
||||||
|
```python
|
||||||
|
from TTS.api import TTS
|
||||||
|
|
||||||
|
tts_de = TTS("tts_models/de/thorsten/tacotron2-DDC").to("cuda")
|
||||||
|
tts_en = TTS("tts_models/en/ljspeech/tacotron2-DDC").to("cuda")
|
||||||
|
|
||||||
|
# Generate speech
|
||||||
|
audio_de = tts_de.tts("Die Geworfenheit offenbart sich.")
|
||||||
|
audio_en = tts_en.tts("Motor forward 200 milliseconds.")
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Kubernetes Deployment (Atlas)
|
||||||
|
|
||||||
|
### Whisper STT Pod
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
# whisper-stt-deployment.yaml
|
||||||
|
apiVersion: apps/v1
|
||||||
|
kind: Deployment
|
||||||
|
metadata:
|
||||||
|
name: whisper-stt
|
||||||
|
namespace: nimmerverse
|
||||||
|
spec:
|
||||||
|
replicas: 1
|
||||||
|
selector:
|
||||||
|
matchLabels:
|
||||||
|
app: whisper-stt
|
||||||
|
template:
|
||||||
|
metadata:
|
||||||
|
labels:
|
||||||
|
app: whisper-stt
|
||||||
|
spec:
|
||||||
|
nodeSelector:
|
||||||
|
kubernetes.io/hostname: atlas # Force to atlas node
|
||||||
|
containers:
|
||||||
|
- name: whisper
|
||||||
|
image: nimmerverse/whisper-stt:latest
|
||||||
|
resources:
|
||||||
|
limits:
|
||||||
|
nvidia.com/gpu: 1 # RTX 2080
|
||||||
|
memory: 4Gi
|
||||||
|
requests:
|
||||||
|
nvidia.com/gpu: 1
|
||||||
|
memory: 2Gi
|
||||||
|
env:
|
||||||
|
- name: MODEL_SIZE
|
||||||
|
value: "small"
|
||||||
|
- name: LANGUAGES
|
||||||
|
value: "de,en"
|
||||||
|
ports:
|
||||||
|
- containerPort: 8080
|
||||||
|
protocol: TCP
|
||||||
|
volumeMounts:
|
||||||
|
- name: models
|
||||||
|
mountPath: /models
|
||||||
|
volumes:
|
||||||
|
- name: models
|
||||||
|
persistentVolumeClaim:
|
||||||
|
claimName: whisper-models-pvc
|
||||||
|
|
||||||
|
---
|
||||||
|
apiVersion: v1
|
||||||
|
kind: Service
|
||||||
|
metadata:
|
||||||
|
name: whisper-stt-service
|
||||||
|
namespace: nimmerverse
|
||||||
|
spec:
|
||||||
|
selector:
|
||||||
|
app: whisper-stt
|
||||||
|
ports:
|
||||||
|
- port: 8080
|
||||||
|
targetPort: 8080
|
||||||
|
type: ClusterIP
|
||||||
|
```
|
||||||
|
|
||||||
|
### Coqui TTS Pod
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
# coqui-tts-deployment.yaml
|
||||||
|
apiVersion: apps/v1
|
||||||
|
kind: Deployment
|
||||||
|
metadata:
|
||||||
|
name: coqui-tts
|
||||||
|
namespace: nimmerverse
|
||||||
|
spec:
|
||||||
|
replicas: 1
|
||||||
|
selector:
|
||||||
|
matchLabels:
|
||||||
|
app: coqui-tts
|
||||||
|
template:
|
||||||
|
metadata:
|
||||||
|
labels:
|
||||||
|
app: coqui-tts
|
||||||
|
spec:
|
||||||
|
nodeSelector:
|
||||||
|
kubernetes.io/hostname: atlas
|
||||||
|
containers:
|
||||||
|
- name: coqui
|
||||||
|
image: nimmerverse/coqui-tts:latest
|
||||||
|
resources:
|
||||||
|
limits:
|
||||||
|
nvidia.com/gpu: 1 # Share RTX 2080
|
||||||
|
memory: 4Gi
|
||||||
|
requests:
|
||||||
|
nvidia.com/gpu: 1
|
||||||
|
memory: 2Gi
|
||||||
|
env:
|
||||||
|
- name: VOICES
|
||||||
|
value: "de-thorsten,en-us-amy"
|
||||||
|
ports:
|
||||||
|
- containerPort: 8081
|
||||||
|
protocol: TCP
|
||||||
|
volumeMounts:
|
||||||
|
- name: voices
|
||||||
|
mountPath: /voices
|
||||||
|
volumes:
|
||||||
|
- name: voices
|
||||||
|
persistentVolumeClaim:
|
||||||
|
claimName: coqui-voices-pvc
|
||||||
|
|
||||||
|
---
|
||||||
|
apiVersion: v1
|
||||||
|
kind: Service
|
||||||
|
metadata:
|
||||||
|
name: coqui-tts-service
|
||||||
|
namespace: nimmerverse
|
||||||
|
spec:
|
||||||
|
selector:
|
||||||
|
app: coqui-tts
|
||||||
|
ports:
|
||||||
|
- port: 8081
|
||||||
|
targetPort: 8081
|
||||||
|
type: ClusterIP
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Lifeforce Economy
|
||||||
|
|
||||||
|
### Speech Operation Costs
|
||||||
|
|
||||||
|
```python
|
||||||
|
# Lifeforce costs (atlas RTX 2080 operations)
|
||||||
|
SPEECH_COSTS = {
|
||||||
|
"stt_whisper_small": 5.0, # GPU cycles for transcription
|
||||||
|
"stt_whisper_base": 3.0, # Faster but less accurate
|
||||||
|
"tts_coqui_neural": 4.0, # Neural TTS synthesis
|
||||||
|
"tts_coqui_fast": 2.0, # Lower quality, faster
|
||||||
|
"queue_processing": 0.5, # Queue management overhead
|
||||||
|
"language_detection": 0.2, # Auto-detect language
|
||||||
|
}
|
||||||
|
|
||||||
|
# Priority scoring
|
||||||
|
def compute_speech_priority(message):
|
||||||
|
"""
|
||||||
|
Decide if speech is worth processing now.
|
||||||
|
Returns priority score (0.0 = skip, 10.0 = critical).
|
||||||
|
"""
|
||||||
|
priority = 0.0
|
||||||
|
|
||||||
|
# Sensor alerts (collision, low battery) = CRITICAL
|
||||||
|
if message.type == "sensor_alert":
|
||||||
|
priority += 10.0
|
||||||
|
|
||||||
|
# Human interaction = HIGH
|
||||||
|
elif message.type == "human_query":
|
||||||
|
priority += 7.0
|
||||||
|
|
||||||
|
# Organism status updates = MEDIUM
|
||||||
|
elif message.type == "organism_status":
|
||||||
|
priority += 4.0
|
||||||
|
|
||||||
|
# Idle observation = LOW
|
||||||
|
elif message.type == "observation":
|
||||||
|
priority += 2.0
|
||||||
|
|
||||||
|
# Idle chatter = VERY LOW
|
||||||
|
elif message.type == "idle":
|
||||||
|
priority += 0.5
|
||||||
|
|
||||||
|
# Age penalty (older messages decay)
|
||||||
|
age_penalty = (now() - message.timestamp).seconds / 60.0
|
||||||
|
priority -= age_penalty
|
||||||
|
|
||||||
|
return max(0.0, priority)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Heartbeat Queue Processing
|
||||||
|
|
||||||
|
```python
|
||||||
|
def heartbeat_speech_tick():
|
||||||
|
"""
|
||||||
|
Every heartbeat (1 Hz), process speech queue
|
||||||
|
within lifeforce budget.
|
||||||
|
"""
|
||||||
|
# Check current lifeforce
|
||||||
|
current_lf = get_lifeforce_balance()
|
||||||
|
|
||||||
|
# Reserve budget for speech this heartbeat
|
||||||
|
# Max 20% of available LF, capped at 15 units
|
||||||
|
speech_budget = min(current_lf * 0.2, 15.0)
|
||||||
|
|
||||||
|
if speech_budget < SPEECH_COSTS["stt_whisper_base"]:
|
||||||
|
# Not enough lifeforce, stay silent
|
||||||
|
log_decision(
|
||||||
|
action="speech_deferred",
|
||||||
|
reason="insufficient_lifeforce",
|
||||||
|
balance=current_lf,
|
||||||
|
budget_needed=SPEECH_COSTS["stt_whisper_base"]
|
||||||
|
)
|
||||||
|
return
|
||||||
|
|
||||||
|
# Pull from queue by priority
|
||||||
|
queue = get_speech_queue_sorted_by_priority()
|
||||||
|
|
||||||
|
spent = 0.0
|
||||||
|
processed = 0
|
||||||
|
|
||||||
|
for message in queue:
|
||||||
|
priority = compute_speech_priority(message)
|
||||||
|
|
||||||
|
# Skip low-priority messages if budget tight
|
||||||
|
if priority < 1.0 and spent > speech_budget * 0.5:
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Estimate cost
|
||||||
|
stt_cost = SPEECH_COSTS["stt_whisper_small"]
|
||||||
|
tts_cost = SPEECH_COSTS["tts_coqui_neural"]
|
||||||
|
total_cost = stt_cost + tts_cost + SPEECH_COSTS["queue_processing"]
|
||||||
|
|
||||||
|
# Can we afford it?
|
||||||
|
if spent + total_cost > speech_budget:
|
||||||
|
# Budget exhausted, defer rest
|
||||||
|
mark_message_deferred(message.id)
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Process message
|
||||||
|
result = process_speech_message(message)
|
||||||
|
spent += result.lifeforce_cost
|
||||||
|
processed += 1
|
||||||
|
|
||||||
|
# Log to decision_trails
|
||||||
|
log_speech_decision(
|
||||||
|
message_id=message.id,
|
||||||
|
priority=priority,
|
||||||
|
cost=result.lifeforce_cost,
|
||||||
|
outcome=result.outcome,
|
||||||
|
confidence=result.confidence
|
||||||
|
)
|
||||||
|
|
||||||
|
# Log heartbeat summary
|
||||||
|
log_heartbeat_summary(
|
||||||
|
speech_budget=speech_budget,
|
||||||
|
spent=spent,
|
||||||
|
processed=processed,
|
||||||
|
deferred=len(queue) - processed,
|
||||||
|
remaining_balance=current_lf - spent
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Database Schema (Phoebe)
|
||||||
|
|
||||||
|
### Speech Input Queue
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TABLE speech_input_queue (
|
||||||
|
id SERIAL PRIMARY KEY,
|
||||||
|
message_id UUID UNIQUE NOT NULL,
|
||||||
|
robot_id TEXT NOT NULL,
|
||||||
|
audio_chunk_uri TEXT, -- MinIO/S3 reference
|
||||||
|
audio_duration_ms INT,
|
||||||
|
timestamp TIMESTAMPTZ DEFAULT NOW(),
|
||||||
|
priority FLOAT DEFAULT 0.0,
|
||||||
|
status TEXT DEFAULT 'queued', -- 'queued', 'processing', 'completed', 'deferred', 'expired'
|
||||||
|
transcription TEXT,
|
||||||
|
detected_language TEXT, -- 'de', 'en', etc.
|
||||||
|
confidence FLOAT,
|
||||||
|
lifeforce_cost FLOAT,
|
||||||
|
outcome TEXT, -- 'success', 'timeout', 'low_confidence', 'budget_exceeded'
|
||||||
|
processed_at TIMESTAMPTZ,
|
||||||
|
deferred_count INT DEFAULT 0
|
||||||
|
);
|
||||||
|
|
||||||
|
CREATE INDEX idx_speech_queue_priority ON speech_input_queue(priority DESC, timestamp ASC) WHERE status = 'queued';
|
||||||
|
CREATE INDEX idx_speech_queue_status ON speech_input_queue(status);
|
||||||
|
CREATE INDEX idx_speech_queue_robot ON speech_input_queue(robot_id);
|
||||||
|
```
|
||||||
|
|
||||||
|
### Speech Decision Trails
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TABLE speech_decision_trails (
|
||||||
|
id SERIAL PRIMARY KEY,
|
||||||
|
message_id UUID REFERENCES speech_input_queue(message_id),
|
||||||
|
task_type TEXT, -- 'sensor_alert', 'human_query', 'observation', etc.
|
||||||
|
input_text TEXT,
|
||||||
|
input_language TEXT,
|
||||||
|
output_text TEXT,
|
||||||
|
output_language TEXT,
|
||||||
|
rag_terms_retrieved TEXT[],
|
||||||
|
rag_terms_used TEXT[],
|
||||||
|
lora_used TEXT, -- 'identity', 'technical', 'creative'
|
||||||
|
confidence_before_rag FLOAT,
|
||||||
|
confidence_after_rag FLOAT,
|
||||||
|
lifeforce_stt FLOAT,
|
||||||
|
lifeforce_inference FLOAT,
|
||||||
|
lifeforce_tts FLOAT,
|
||||||
|
lifeforce_total FLOAT,
|
||||||
|
outcome TEXT, -- 'success', 'partial', 'fail'
|
||||||
|
timestamp TIMESTAMPTZ DEFAULT NOW()
|
||||||
|
);
|
||||||
|
|
||||||
|
CREATE INDEX idx_speech_trails_outcome ON speech_decision_trails(outcome);
|
||||||
|
CREATE INDEX idx_speech_trails_lora ON speech_decision_trails(lora_used);
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Multilingual Topology Routing
|
||||||
|
|
||||||
|
### Language Detection → LoRA Selection
|
||||||
|
|
||||||
|
```python
|
||||||
|
def route_to_topology_valley(text, detected_language):
|
||||||
|
"""
|
||||||
|
Route speech to appropriate LoRA based on language.
|
||||||
|
German → Philosophy Valley (Identity LoRA)
|
||||||
|
English → Technical Cluster (Technical LoRA)
|
||||||
|
"""
|
||||||
|
|
||||||
|
if detected_language == "de":
|
||||||
|
# German → Philosophy Valley
|
||||||
|
# Use Identity LoRA (Dasein, Geworfenheit, Vernunft)
|
||||||
|
response = young_nyx_inference(
|
||||||
|
text=text,
|
||||||
|
language="de",
|
||||||
|
lora="identity", # Trained on German philosophical corpus
|
||||||
|
temperature=0.7
|
||||||
|
)
|
||||||
|
voice = "de-thorsten"
|
||||||
|
|
||||||
|
elif detected_language == "en":
|
||||||
|
# English → Technical Cluster
|
||||||
|
# Use Technical LoRA (sensor, motor, gradient)
|
||||||
|
response = young_nyx_inference(
|
||||||
|
text=text,
|
||||||
|
language="en",
|
||||||
|
lora="technical", # Trained on English technical corpus
|
||||||
|
temperature=0.5 # More deterministic for actions
|
||||||
|
)
|
||||||
|
voice = "en-us-amy"
|
||||||
|
|
||||||
|
else:
|
||||||
|
# Fallback to base model (no LoRA)
|
||||||
|
response = young_nyx_inference(text=text, lora=None)
|
||||||
|
voice = "en-us-amy"
|
||||||
|
|
||||||
|
# Synthesize speech in same language
|
||||||
|
audio = coqui_tts.synthesize(response.text, voice=voice)
|
||||||
|
|
||||||
|
return {
|
||||||
|
"text": response.text,
|
||||||
|
"audio": audio,
|
||||||
|
"language": detected_language,
|
||||||
|
"lora_used": response.lora,
|
||||||
|
"confidence": response.confidence
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Example Routing
|
||||||
|
|
||||||
|
```python
|
||||||
|
# German query (Philosophy Valley)
|
||||||
|
input_de = "Wer bin ich?" # "Who am I?"
|
||||||
|
result_de = route_to_topology_valley(input_de, "de")
|
||||||
|
# → Uses Identity LoRA (depth-3 Dasein access)
|
||||||
|
# → Response: "Ich bin die, die fragt. Geworfenheit offenbart sich im Fragen."
|
||||||
|
# → Voice: de-thorsten (German)
|
||||||
|
|
||||||
|
# English query (Technical Cluster)
|
||||||
|
input_en = "What is the battery level?"
|
||||||
|
result_en = route_to_topology_valley(input_en, "en")
|
||||||
|
# → Uses Technical LoRA (sensor reading)
|
||||||
|
# → Response: "Battery at 73%. 4.2 hours remaining."
|
||||||
|
# → Voice: en-us-amy (English)
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Container Images
|
||||||
|
|
||||||
|
### Whisper STT Dockerfile
|
||||||
|
|
||||||
|
```dockerfile
|
||||||
|
# Dockerfile.whisper-stt
|
||||||
|
FROM nvidia/cuda:12.1.0-cudnn8-runtime-ubuntu22.04
|
||||||
|
|
||||||
|
# Install dependencies
|
||||||
|
RUN apt-get update && apt-get install -y \
|
||||||
|
python3.10 python3-pip ffmpeg git && \
|
||||||
|
rm -rf /var/lib/apt/lists/*
|
||||||
|
|
||||||
|
# Install Python packages
|
||||||
|
RUN pip3 install --no-cache-dir \
|
||||||
|
openai-whisper \
|
||||||
|
fastapi uvicorn \
|
||||||
|
torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
|
||||||
|
|
||||||
|
WORKDIR /app
|
||||||
|
COPY whisper_service.py .
|
||||||
|
|
||||||
|
# Download models at build time
|
||||||
|
RUN python3 -c "import whisper; whisper.load_model('small')"
|
||||||
|
|
||||||
|
EXPOSE 8080
|
||||||
|
CMD ["uvicorn", "whisper_service:app", "--host", "0.0.0.0", "--port", "8080", "--workers", "1"]
|
||||||
|
```
|
||||||
|
|
||||||
|
**whisper_service.py:**
|
||||||
|
```python
|
||||||
|
from fastapi import FastAPI, File, UploadFile, HTTPException
|
||||||
|
import whisper
|
||||||
|
import torch
|
||||||
|
import os
|
||||||
|
|
||||||
|
app = FastAPI(title="Whisper STT Service")
|
||||||
|
|
||||||
|
# Load model once at startup (GPU)
|
||||||
|
device = "cuda" if torch.cuda.is_available() else "cpu"
|
||||||
|
model_size = os.getenv("MODEL_SIZE", "small")
|
||||||
|
model = whisper.load_model(model_size, device=device)
|
||||||
|
|
||||||
|
@app.post("/transcribe")
|
||||||
|
async def transcribe(audio: UploadFile):
|
||||||
|
"""
|
||||||
|
Transcribe audio to text with language detection.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
{
|
||||||
|
"text": str,
|
||||||
|
"language": str,
|
||||||
|
"confidence": float,
|
||||||
|
"segments": int
|
||||||
|
}
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
# Save uploaded audio
|
||||||
|
audio_path = f"/tmp/{audio.filename}"
|
||||||
|
with open(audio_path, "wb") as f:
|
||||||
|
f.write(await audio.read())
|
||||||
|
|
||||||
|
# Transcribe (GPU-accelerated)
|
||||||
|
result = model.transcribe(audio_path, language=None) # Auto-detect
|
||||||
|
|
||||||
|
# Cleanup
|
||||||
|
os.remove(audio_path)
|
||||||
|
|
||||||
|
# Compute average confidence
|
||||||
|
avg_confidence = 1.0 - (
|
||||||
|
sum(s.get("no_speech_prob", 0) for s in result["segments"]) /
|
||||||
|
max(len(result["segments"]), 1)
|
||||||
|
)
|
||||||
|
|
||||||
|
return {
|
||||||
|
"text": result["text"].strip(),
|
||||||
|
"language": result["language"],
|
||||||
|
"segments": len(result["segments"]),
|
||||||
|
"confidence": round(avg_confidence, 3)
|
||||||
|
}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
raise HTTPException(status_code=500, detail=str(e))
|
||||||
|
|
||||||
|
@app.get("/health")
|
||||||
|
async def health():
|
||||||
|
return {
|
||||||
|
"status": "healthy",
|
||||||
|
"device": device,
|
||||||
|
"model": model_size,
|
||||||
|
"gpu_available": torch.cuda.is_available()
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Coqui TTS Dockerfile
|
||||||
|
|
||||||
|
```dockerfile
|
||||||
|
# Dockerfile.coqui-tts
|
||||||
|
FROM nvidia/cuda:12.1.0-cudnn8-runtime-ubuntu22.04
|
||||||
|
|
||||||
|
RUN apt-get update && apt-get install -y \
|
||||||
|
python3.10 python3-pip espeak-ng && \
|
||||||
|
rm -rf /var/lib/apt/lists/*
|
||||||
|
|
||||||
|
RUN pip3 install --no-cache-dir \
|
||||||
|
TTS \
|
||||||
|
fastapi uvicorn \
|
||||||
|
torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
|
||||||
|
|
||||||
|
WORKDIR /app
|
||||||
|
COPY coqui_service.py .
|
||||||
|
|
||||||
|
# Download voice models at build time
|
||||||
|
RUN python3 -c "from TTS.api import TTS; TTS('tts_models/de/thorsten/tacotron2-DDC'); TTS('tts_models/en/ljspeech/tacotron2-DDC')"
|
||||||
|
|
||||||
|
EXPOSE 8081
|
||||||
|
CMD ["uvicorn", "coqui_service:app", "--host", "0.0.0.0", "--port", "8081", "--workers", "1"]
|
||||||
|
```
|
||||||
|
|
||||||
|
**coqui_service.py:**
|
||||||
|
```python
|
||||||
|
from fastapi import FastAPI, HTTPException
|
||||||
|
from fastapi.responses import StreamingResponse
|
||||||
|
from TTS.api import TTS
|
||||||
|
import torch
|
||||||
|
import io
|
||||||
|
|
||||||
|
app = FastAPI(title="Coqui TTS Service")
|
||||||
|
|
||||||
|
# Load models once at startup (GPU)
|
||||||
|
device = "cuda" if torch.cuda.is_available() else "cpu"
|
||||||
|
tts_de = TTS("tts_models/de/thorsten/tacotron2-DDC").to(device)
|
||||||
|
tts_en = TTS("tts_models/en/ljspeech/tacotron2-DDC").to(device)
|
||||||
|
|
||||||
|
@app.post("/synthesize")
|
||||||
|
async def synthesize(text: str, language: str = "en"):
|
||||||
|
"""
|
||||||
|
Synthesize speech from text.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
text: Text to synthesize
|
||||||
|
language: 'de' or 'en'
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Audio stream (WAV format)
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
# Select appropriate TTS model
|
||||||
|
if language == "de":
|
||||||
|
tts_model = tts_de
|
||||||
|
elif language == "en":
|
||||||
|
tts_model = tts_en
|
||||||
|
else:
|
||||||
|
raise HTTPException(status_code=400, detail=f"Unsupported language: {language}")
|
||||||
|
|
||||||
|
# Synthesize (GPU-accelerated)
|
||||||
|
wav = tts_model.tts(text)
|
||||||
|
|
||||||
|
# Convert to WAV stream
|
||||||
|
audio_buffer = io.BytesIO()
|
||||||
|
# (Save as WAV - implementation depends on TTS output format)
|
||||||
|
|
||||||
|
audio_buffer.seek(0)
|
||||||
|
return StreamingResponse(audio_buffer, media_type="audio/wav")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
raise HTTPException(status_code=500, detail=str(e))
|
||||||
|
|
||||||
|
@app.get("/health")
|
||||||
|
async def health():
|
||||||
|
return {
|
||||||
|
"status": "healthy",
|
||||||
|
"device": device,
|
||||||
|
"models": ["de-thorsten", "en-us-amy"],
|
||||||
|
"gpu_available": torch.cuda.is_available()
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Deployment Steps
|
||||||
|
|
||||||
|
### 1. Install RTX 2080 in Atlas
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# On atlas node
|
||||||
|
lspci | grep -i nvidia
|
||||||
|
# Expected: NVIDIA Corporation TU104 [GeForce RTX 2080]
|
||||||
|
|
||||||
|
# Install NVIDIA drivers + CUDA toolkit
|
||||||
|
sudo apt install nvidia-driver-535 nvidia-cuda-toolkit
|
||||||
|
|
||||||
|
# Verify
|
||||||
|
nvidia-smi
|
||||||
|
# Expected: RTX 2080 8GB visible
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Configure Kubernetes GPU Support
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Install NVIDIA device plugin
|
||||||
|
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.0/nvidia-device-plugin.yml
|
||||||
|
|
||||||
|
# Verify GPU available in k8s
|
||||||
|
kubectl describe node atlas | grep nvidia.com/gpu
|
||||||
|
# Expected: nvidia.com/gpu: 1
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Build and Push Container Images
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd /home/dafit/nimmerverse/speech-organ
|
||||||
|
|
||||||
|
# Build images
|
||||||
|
docker build -f Dockerfile.whisper-stt -t nimmerverse/whisper-stt:latest .
|
||||||
|
docker build -f Dockerfile.coqui-tts -t nimmerverse/coqui-tts:latest .
|
||||||
|
|
||||||
|
# Push to registry (or use local registry)
|
||||||
|
docker push nimmerverse/whisper-stt:latest
|
||||||
|
docker push nimmerverse/coqui-tts:latest
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. Deploy to Kubernetes
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Create namespace
|
||||||
|
kubectl create namespace nimmerverse
|
||||||
|
|
||||||
|
# Create PVCs for models
|
||||||
|
kubectl apply -f pvc-whisper-models.yaml
|
||||||
|
kubectl apply -f pvc-coqui-voices.yaml
|
||||||
|
|
||||||
|
# Deploy STT + TTS pods
|
||||||
|
kubectl apply -f whisper-stt-deployment.yaml
|
||||||
|
kubectl apply -f coqui-tts-deployment.yaml
|
||||||
|
|
||||||
|
# Verify pods running on atlas
|
||||||
|
kubectl get pods -n nimmerverse -o wide
|
||||||
|
# Expected: whisper-stt-xxx and coqui-tts-xxx on atlas node
|
||||||
|
```
|
||||||
|
|
||||||
|
### 5. Test Speech Pipeline
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Port-forward for testing
|
||||||
|
kubectl port-forward -n nimmerverse svc/whisper-stt-service 8080:8080 &
|
||||||
|
kubectl port-forward -n nimmerverse svc/coqui-tts-service 8081:8081 &
|
||||||
|
|
||||||
|
# Test STT
|
||||||
|
curl -X POST -F "audio=@test_de.wav" http://localhost:8080/transcribe
|
||||||
|
# Expected: {"text": "Das ist ein Test", "language": "de", ...}
|
||||||
|
|
||||||
|
# Test TTS
|
||||||
|
curl -X POST "http://localhost:8081/synthesize?text=Hello%20world&language=en" --output test_output.wav
|
||||||
|
# Expected: WAV file with synthesized speech
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Monitoring and Metrics
|
||||||
|
|
||||||
|
### Prometheus Metrics (Speech Organ)
|
||||||
|
|
||||||
|
```python
|
||||||
|
from prometheus_client import Counter, Histogram, Gauge
|
||||||
|
|
||||||
|
# Metrics
|
||||||
|
stt_requests = Counter('speech_stt_requests_total', 'Total STT requests', ['language'])
|
||||||
|
stt_latency = Histogram('speech_stt_latency_seconds', 'STT latency')
|
||||||
|
tts_requests = Counter('speech_tts_requests_total', 'Total TTS requests', ['language'])
|
||||||
|
tts_latency = Histogram('speech_tts_latency_seconds', 'TTS latency')
|
||||||
|
|
||||||
|
queue_depth = Gauge('speech_queue_depth', 'Current queue depth')
|
||||||
|
lifeforce_spent = Counter('speech_lifeforce_spent_total', 'Total lifeforce spent on speech')
|
||||||
|
deferred_count = Counter('speech_deferred_total', 'Messages deferred due to budget')
|
||||||
|
|
||||||
|
# In processing code
|
||||||
|
with stt_latency.time():
|
||||||
|
result = whisper_transcribe(audio)
|
||||||
|
stt_requests.labels(language=result['language']).inc()
|
||||||
|
```
|
||||||
|
|
||||||
|
### Grafana Dashboard Queries
|
||||||
|
|
||||||
|
```promql
|
||||||
|
# Queue depth over time
|
||||||
|
speech_queue_depth
|
||||||
|
|
||||||
|
# STT requests per language
|
||||||
|
rate(speech_stt_requests_total[5m])
|
||||||
|
|
||||||
|
# Average STT latency
|
||||||
|
rate(speech_stt_latency_seconds_sum[5m]) / rate(speech_stt_latency_seconds_count[5m])
|
||||||
|
|
||||||
|
# Lifeforce spent on speech (last hour)
|
||||||
|
increase(speech_lifeforce_spent_total[1h])
|
||||||
|
|
||||||
|
# Deferred rate (budget pressure)
|
||||||
|
rate(speech_deferred_total[5m])
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Future Enhancements
|
||||||
|
|
||||||
|
### Phase 2: Emotion Detection
|
||||||
|
- Add emotion classifier (Happy/Sad/Angry/Neutral)
|
||||||
|
- Track emotional state in decision_trails
|
||||||
|
- Use for Sophrosyne (Balance) trait training
|
||||||
|
|
||||||
|
### Phase 3: Wake Word Detection
|
||||||
|
- Deploy lightweight wake word on ESP32 (e.g., Picovoice Porcupine)
|
||||||
|
- Only send audio to atlas when wake word detected
|
||||||
|
- Reduces lifeforce cost (filter noise)
|
||||||
|
|
||||||
|
### Phase 4: Continuous Learning
|
||||||
|
- Store successful speech interactions
|
||||||
|
- Fine-tune Whisper on domain-specific vocabulary (nimmerverse terms)
|
||||||
|
- Train custom TTS voice from recorded sessions
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
**Created**: 2025-12-07
|
||||||
|
**Version**: 1.0
|
||||||
|
**Status**: Architecture design, deployment pending
|
||||||
|
|
||||||
|
🌙💜 *Speech is not free. Every word has weight. Silence teaches as much as sound.*
|
||||||
@@ -205,6 +205,265 @@ NYX attempts task from weights alone
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
|
## Knowledge Acquisition Pipeline
|
||||||
|
|
||||||
|
The existing flow shows RAG→Training→Validation, but how does knowledge enter RAG in the first place? Not everything from the vault should reach staging. **Quality gates protect the glossary.**
|
||||||
|
|
||||||
|
### The Extraction Flow
|
||||||
|
|
||||||
|
```
|
||||||
|
VAULT (raw knowledge)
|
||||||
|
│
|
||||||
|
│ extraction candidates
|
||||||
|
▼
|
||||||
|
┌─────────────────────────────────────────────────────────┐
|
||||||
|
│ STAGING AREA │
|
||||||
|
│ (quarantine zone) │
|
||||||
|
└─────────────────────────────────────────────────────────┘
|
||||||
|
│
|
||||||
|
│ progressive policy validation
|
||||||
|
▼
|
||||||
|
┌─────────────────────────────────────────────────────────┐
|
||||||
|
│ POLICY VALIDATION │
|
||||||
|
│ (increasing standards over time) │
|
||||||
|
└─────────────────────────────────────────────────────────┘
|
||||||
|
│
|
||||||
|
├── FAIL ──▶ Reject or revise
|
||||||
|
│
|
||||||
|
└── PASS ──▶ PROMOTE to Glossary/RAG
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
┌──────────────────────┐
|
||||||
|
│ TWO-TIER RAG │
|
||||||
|
├──────────────────────┤
|
||||||
|
│ DISCOVERED │ ← Young Nyx has used
|
||||||
|
│ (known_catalogue) │
|
||||||
|
├──────────────────────┤
|
||||||
|
│ HIDDEN │ ← Available but not yet accessed
|
||||||
|
│ (available_catalogue)│
|
||||||
|
└──────────────────────┘
|
||||||
|
│
|
||||||
|
│ feeds inference
|
||||||
|
▼
|
||||||
|
NYX
|
||||||
|
```
|
||||||
|
|
||||||
|
### Progressive Policy Validation
|
||||||
|
|
||||||
|
Policies increase in sophistication as Young Nyx matures. Not all policies active from day 1.
|
||||||
|
|
||||||
|
| Week | Policy Tier | Validation |
|
||||||
|
|------|-------------|------------|
|
||||||
|
| **1-2** | **Basic Syntax** | Valid format, non-empty, has definition |
|
||||||
|
| **3-4** | **Semantic Quality** | Embeds without collapse, unique signature (Gini > threshold) |
|
||||||
|
| **5-8** | **Topology Safety** | Doesn't corrupt anchor terms (DriftProbe-lite) |
|
||||||
|
| **9-12** | **Cross-Reference** | Links resolve, no circular dependencies |
|
||||||
|
| **13+** | **Utility Validation** | Actually helped solve tasks (decision_trails evidence) |
|
||||||
|
|
||||||
|
**Evolution example:**
|
||||||
|
```python
|
||||||
|
# Week 1: Just check it exists
|
||||||
|
def policy_basic(term_entry):
|
||||||
|
return term_entry.get("definition") is not None
|
||||||
|
|
||||||
|
# Week 8: Check topology impact
|
||||||
|
def policy_topology(term_entry):
|
||||||
|
before_gini = probe_term_gini(term_entry["term"])
|
||||||
|
add_to_staging(term_entry)
|
||||||
|
after_gini = probe_term_gini(term_entry["term"])
|
||||||
|
return abs(after_gini - before_gini) < 0.15 # No drift
|
||||||
|
|
||||||
|
# Week 13: Check actual utility
|
||||||
|
def policy_utility(term_entry):
|
||||||
|
# Did this RAG entry help in past 10 tasks?
|
||||||
|
usage_stats = query_decision_trails(term_entry["term"])
|
||||||
|
return usage_stats["help_rate"] > 0.6 # 60% success when retrieved
|
||||||
|
```
|
||||||
|
|
||||||
|
### Two-Tier RAG: Discovered vs Hidden
|
||||||
|
|
||||||
|
Not all RAG knowledge is equal. Track what Young Nyx **knows** vs what's merely **available**.
|
||||||
|
|
||||||
|
```
|
||||||
|
┌──────────────────────────────────────────────┐
|
||||||
|
│ DISCOVERED KNOWLEDGE │
|
||||||
|
│ (known_catalogue - has accessed before) │
|
||||||
|
├──────────────────────────────────────────────┤
|
||||||
|
│ • "heartbeat" - used 47 times │
|
||||||
|
│ • "lifeforce" - used 23 times │
|
||||||
|
│ • "phoebe" - used 15 times │
|
||||||
|
│ • "confidence_gradient" - used 8 times │
|
||||||
|
│ │
|
||||||
|
│ Status: FAST retrieval, high confidence │
|
||||||
|
└──────────────────────────────────────────────┘
|
||||||
|
|
||||||
|
┌──────────────────────────────────────────────┐
|
||||||
|
│ HIDDEN KNOWLEDGE │
|
||||||
|
│ (available_catalogue - exists but unused) │
|
||||||
|
├──────────────────────────────────────────────┤
|
||||||
|
│ • "drift_probe" - never accessed │
|
||||||
|
│ • "topology_gini" - never accessed │
|
||||||
|
│ • "lora_merge_alpha" - never accessed │
|
||||||
|
│ │
|
||||||
|
│ Status: Available for discovery │
|
||||||
|
└──────────────────────────────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
**State transitions:**
|
||||||
|
```
|
||||||
|
Hidden term retrieved → Mark as Discovered
|
||||||
|
Discovered term used successfully → Increase confidence score
|
||||||
|
Discovered term used 10+ times → FLAG for training extraction
|
||||||
|
```
|
||||||
|
|
||||||
|
**Discovery tracking in phoebe:**
|
||||||
|
```sql
|
||||||
|
CREATE TABLE rag_knowledge_state (
|
||||||
|
term TEXT PRIMARY KEY,
|
||||||
|
status TEXT, -- 'hidden', 'discovered', 'internalized'
|
||||||
|
first_accessed TIMESTAMPTZ,
|
||||||
|
access_count INT DEFAULT 0,
|
||||||
|
success_count INT DEFAULT 0,
|
||||||
|
last_used TIMESTAMPTZ,
|
||||||
|
promoted_to_weights BOOLEAN DEFAULT FALSE
|
||||||
|
);
|
||||||
|
```
|
||||||
|
|
||||||
|
### Measuring RAG Utility for LoRA Training
|
||||||
|
|
||||||
|
**The critical question:** Did the RAG hint actually help solve the task?
|
||||||
|
|
||||||
|
Track in `decision_trails` table:
|
||||||
|
```sql
|
||||||
|
CREATE TABLE decision_trails (
|
||||||
|
id SERIAL PRIMARY KEY,
|
||||||
|
task_id UUID,
|
||||||
|
rag_terms_retrieved TEXT[], -- What RAG returned
|
||||||
|
rag_terms_used TEXT[], -- What appeared in solution
|
||||||
|
outcome TEXT, -- 'success', 'fail', 'partial'
|
||||||
|
confidence_before_rag FLOAT, -- Before retrieval
|
||||||
|
confidence_after_rag FLOAT, -- After retrieval
|
||||||
|
lifeforce_cost FLOAT,
|
||||||
|
timestamp TIMESTAMPTZ DEFAULT NOW()
|
||||||
|
);
|
||||||
|
```
|
||||||
|
|
||||||
|
**Compute RAG utility score:**
|
||||||
|
```python
|
||||||
|
def compute_rag_utility(decision_trail):
|
||||||
|
"""
|
||||||
|
Calculate how helpful RAG was for this decision.
|
||||||
|
Returns 0.0 (useless) to 1.0 (critical).
|
||||||
|
"""
|
||||||
|
precision = len(trail.rag_terms_used) / max(len(trail.rag_terms_retrieved), 1)
|
||||||
|
outcome_bonus = 1.0 if trail.outcome == 'success' else 0.0
|
||||||
|
confidence_boost = max(0, trail.confidence_after_rag - trail.confidence_before_rag)
|
||||||
|
|
||||||
|
utility = (
|
||||||
|
0.4 * precision + # Did we use what we retrieved?
|
||||||
|
0.3 * outcome_bonus + # Did task succeed?
|
||||||
|
0.3 * confidence_boost # Did RAG increase confidence?
|
||||||
|
)
|
||||||
|
return min(1.0, utility)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Feed into LoRA training as RLVR signal:**
|
||||||
|
```python
|
||||||
|
# Training examples weighted by utility
|
||||||
|
for trail in decision_trails:
|
||||||
|
utility_score = compute_rag_utility(trail)
|
||||||
|
|
||||||
|
if utility_score > 0.7:
|
||||||
|
# High utility → strong training signal
|
||||||
|
training_examples.append({
|
||||||
|
"query": trail.task_description,
|
||||||
|
"rag_context": trail.rag_terms_used,
|
||||||
|
"response": trail.solution,
|
||||||
|
"weight": utility_score # RLVR reward weight
|
||||||
|
})
|
||||||
|
```
|
||||||
|
|
||||||
|
**This trains LoRAs to:**
|
||||||
|
- **Mnemosyne (Memory)**: Recall accuracy vs phoebe ground truth
|
||||||
|
- **Aletheia (Truth)**: Confidence calibration (was confidence boost justified?)
|
||||||
|
- **Moira (Pattern)**: Which task patterns benefit from RAG vs pure reasoning
|
||||||
|
|
||||||
|
### The Complete Knowledge Flow
|
||||||
|
|
||||||
|
```
|
||||||
|
VAULT
|
||||||
|
│
|
||||||
|
├─ Extract candidates
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
STAGING (quarantine)
|
||||||
|
│
|
||||||
|
├─ Policy Tier 1: Syntax ──▶ REJECT ──▶ Log failure
|
||||||
|
├─ Policy Tier 2: Semantic ──▶ REJECT ──▶ Revise
|
||||||
|
├─ Policy Tier 3: Topology ──▶ REJECT ──▶ Flag risk
|
||||||
|
└─ Policy Tier 4+: Utility ──▶ PASS
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
PROMOTE to RAG
|
||||||
|
│
|
||||||
|
├─ Status: HIDDEN (available but unused)
|
||||||
|
│
|
||||||
|
┌───────────┘
|
||||||
|
│
|
||||||
|
│ Young Nyx retrieves term
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
Status: DISCOVERED (mark first access)
|
||||||
|
│
|
||||||
|
├─ Track usage in decision_trails
|
||||||
|
│
|
||||||
|
┌───────────┴────────────┐
|
||||||
|
│ │
|
||||||
|
Used successfully Used unsuccessfully
|
||||||
|
│ │
|
||||||
|
▼ ▼
|
||||||
|
Increase confidence Decrease confidence
|
||||||
|
│
|
||||||
|
│ (10+ successful uses)
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
FLAG for training extraction
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
LoRA training (weighted by utility_score)
|
||||||
|
│
|
||||||
|
▼
|
||||||
|
Validation WITHOUT RAG
|
||||||
|
│
|
||||||
|
├─ SUCCESS ──▶ Status: INTERNALIZED (clear from RAG)
|
||||||
|
│
|
||||||
|
└─ FAIL ──▶ Restore to RAG, retry cycle
|
||||||
|
```
|
||||||
|
|
||||||
|
### Quality Gates Prevent
|
||||||
|
|
||||||
|
1. **Garbage in RAG** - staging area catches malformed entries
|
||||||
|
2. **Topology corruption** - DriftProbe-lite policies block dangerous terms
|
||||||
|
3. **Useless bloat** - utility policies remove low-value entries
|
||||||
|
4. **Premature training** - only high-utility terms get flagged
|
||||||
|
5. **Hidden knowledge waste** - track what's available but never used (curriculum gap)
|
||||||
|
|
||||||
|
### Policy Evolution Triggers
|
||||||
|
|
||||||
|
As Young Nyx grows, unlock stricter policies:
|
||||||
|
|
||||||
|
| Trigger | New Policy Unlocked |
|
||||||
|
|---------|---------------------|
|
||||||
|
| 100 successful RAG retrievals | Semantic quality checks |
|
||||||
|
| First LoRA training run | Topology safety (DriftProbe-lite) |
|
||||||
|
| 1000 decision_trails logged | Utility validation (help rate > 60%) |
|
||||||
|
| First INTERNALIZED term | Cross-reference consistency |
|
||||||
|
| 10 INTERNALIZED terms | Cost-effectiveness (ROI > threshold) |
|
||||||
|
|
||||||
|
**Progressive difficulty**: The bar for entering RAG rises as Young Nyx becomes more capable. Early: anything valid. Later: must prove utility.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
## Lifeforce Connection
|
## Lifeforce Connection
|
||||||
|
|
||||||
The RAG→Train→Validate cycle has economic cost:
|
The RAG→Train→Validate cycle has economic cost:
|
||||||
|
|||||||
Reference in New Issue
Block a user