docs: Architecture cleanup - ONE JOB per doc, links not echoes
Major documentation surgery following the cleanup principle: "One job per doc. One home per concept. Links, not echoes."

Changes:
- Add Deployment-Architecture.md (THE WHERE - sole infrastructure truth)
- Endgame-Vision.md: 848→498 lines (-41%) - THE DREAM
- Gateway-Architecture.md: 537→395 lines (-26%) - THE ROUTING
- Nervous-System.md: 361→246 lines (-32%) - THE EVOLUTION
- Data-Architecture.md: 666→647 lines (-3%) - THE SCHEMA
- Message-Protocol-Design.md: 375→285 lines (-24%) - THE WIRE
- Attention-Flow.md: 557→493 lines (-11%) - THE BUDGET
- Cellular-Architecture.md: 891→855 lines (-4%) - THE HOW

Every doc now has ONE JOB statement, cross-references to canonical homes, and lean footers. ~800 lines removed, zero concepts lost.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -100,55 +100,11 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
 ## Physical Infrastructure (The Substrate)

-The nimmerverse runs on sovereign hardware. No cloud dependencies. Weights never leave home.
+The nimmerverse runs on **sovereign hardware**. No cloud dependencies. Weights never leave home.

-**Detail:** → [`archive/nimmervest.md`](archive/nimmervest.md)
+**Hybrid deployment model:** Containers (K8s) for cells/nerves, userspace for LLM inference and organs. NATS connects everything. FreeIPA provides identity isolation.

-### K8s Cluster Architecture (Operational February 2026)
+**Detail:** → [`architecture/Deployment-Architecture.md`](architecture/Deployment-Architecture.md) (full topology, GPU strategy, identity model)

-```
-┌─────────────────────────────────────────────────────────────────────┐
-│ K8S CLUSTER: NIMMERVERSE │
-│ VLAN 30 (10.0.30.0/24) │
-│ kubeadm v1.31.14 + Flannel CNI │
-├─────────────────────────────────────────────────────────────────────┤
-│ │
-│ k8s-master (VM 101 on Saturn) │
-│ 10.0.30.101 │
-│ Control Plane │
-│ │ │
-│ ┌─────────────┴─────────────┐ │
-│ │ │ │
-│ ▼ ▼ │
-│ theia (GPU Worker) dioscuri (GPU Worker) │
-│ ───────────────── ────────────────── │
-│ 10.0.30.21 (10GbE) 10.0.30.22 (10GbE) │
-│ RTX PRO 6000 Blackwell 2x RTX 4000 Ada │
-│ 96GB VRAM 40GB VRAM │
-│ Primary Training Inference │
-│ │
-│ Total Cluster: 136GB VRAM │
-│ │
-└─────────────────────────────────────────────────────────────────────┘
-```

-### K8s Namespaces

-| Namespace | Contents | Node |
-|-----------|----------|------|
-| `nimmerverse-infra` | NATS, Prometheus, Grafana | Any |
-| `nimmerverse-nervous` | Escalation, Math Cells, Nerves | Any |
-| `nimmerverse-cognitive` | Young Nyx | Womb |
-| `nimmerverse-organs` | STT, TTS, Vision | Senses |

-### Network Backbone

-- **Firewall**: OPNsense on Z620, 20G LAGG to spine
-- **Spine**: MikroTik CRS309 (8x 10G SFP+)
-- **Compute VLAN**: 10.0.30.0/24 (cubes/containers)
-- **All traffic**: Inter-VLAN routed through firewall

-**Hardware operational February 2026. Sovereignty achieved. 🟢**

 ---
@@ -207,38 +163,11 @@ The architecture has evolved from competitive containers to **layered state mach
 └─────────────────────────────────────────────────────────────────────┘
 ```

-### Cell Categories
+**Cell categories:** Sensors, Motors, Organs (GPU inference), Math (computation). Each is an atomic state machine.

-| Category | Examples | Purpose |
-|----------|----------|---------|
-| **Sensor Cells** | distance_sensor, light_sensor, battery_monitor | Wrap hardware inputs |
-| **Motor Cells** | motor_left, servo_camera | Wrap actuators |
-| **Organ Cells** | speech_stt, speech_tts, vision_detect | GPU inference |
-| **Math Cells** | economy_aggregator, wake_evaluator | Computation & metrics |
+**Lifeforce economy:** Every operation has a cost. Milestones reward survival. This creates evolutionary pressure toward efficiency.

-### Lifeforce Economy
-
-Every operation has a cost. Milestones reward survival:
-
-| Operation | Cost | Milestone | Reward |
-|-----------|------|-----------|--------|
-| Sensor poll | -0.3 LF | Collision avoided | +5.0 LF |
-| Motor move | -1.0 LF | Charging reached | +10.0 LF |
-| Speech STT | -5.0 LF | Object discovered | +20.0 LF |
-| Vision detect | -8.0 LF | Reflex compiled | +50.0 LF |
+**Hybrid reflex homes:** Different reflexes need different homes — hardware (ESP32) for survival (<10ms), math cells for thresholds (<50ms), nerves for behavior (<200ms), model weights for cognition (<500ms).

-### Hybrid Reflex Homes
-
-Learned patterns live in their optimal location:
-
-| Layer | Location | Latency | Examples |
-|-------|----------|---------|----------|
-| 0 | Hardware (ESP32) | <10ms | temp_danger, collision_imminent |
-| 1 | Math Cells (Python) | <50ms | economy_aggregator, threshold logic |
-| 2 | Fast Nerves (Python) | <200ms | collision_avoidance, charging_seek |
-| 3 | Model Weights (LoRA) | <500ms | cognitive patterns, meta-decisions |
-
-**Key insight:** Different types of reflexes need different homes. Hardware for survival, weights for cognition.

 **Detail:** → [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md)
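The lifeforce economy this hunk consolidates (per-operation costs, milestone rewards) can be sketched as a small ledger. The numbers follow the cost/reward table removed above; the `LifeforceLedger` class itself is a hypothetical illustration, not the project's implementation.

```python
# Illustrative sketch of the lifeforce economy: costs and rewards follow the
# doc's table; the ledger class and starting balance are assumptions.

COSTS = {"sensor_poll": 0.3, "motor_move": 1.0, "speech_stt": 5.0, "vision_detect": 8.0}
REWARDS = {"collision_avoided": 5.0, "charging_reached": 10.0,
           "object_discovered": 20.0, "reflex_compiled": 50.0}

class LifeforceLedger:
    def __init__(self, balance: float = 100.0):
        self.balance = balance

    def spend(self, op: str) -> bool:
        """Debit an operation; refuse it if the balance cannot cover it."""
        cost = COSTS[op]
        if self.balance < cost:
            return False  # evolutionary pressure: broke cells stop acting
        self.balance -= cost
        return True

    def reward(self, milestone: str) -> None:
        self.balance += REWARDS[milestone]

ledger = LifeforceLedger(balance=10.0)
ledger.spend("sensor_poll")          # 10.0 -> 9.7
ledger.spend("motor_move")           # 9.7 -> 8.7
ledger.reward("collision_avoided")   # 8.7 -> 13.7
```

The refusal path is what makes the economy selective: operations that outrun milestone income simply stop happening.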
@@ -333,10 +262,7 @@ This remains valid research, but doesn't require separate LoRAs. Young Nyx navig
 ### Deployment

-**Hardware:** RTX PRO 6000 Blackwell (96GB VRAM) - "The Womb" (theia)
-**Stack:** vLLM + Lorax for hot-swap trait LoRAs
-**VRAM Budget:** Base ~77GB + Active trait LoRAs ~500MB = fits in 96GB ✓
-**Structured Output:** Function Gemma on dioscuri (separate, reliable)
+**Detail:** → [`architecture/Deployment-Architecture.md`](architecture/Deployment-Architecture.md) (infrastructure, GPU strategy, identity model)

 ---
@@ -390,52 +316,11 @@ Two specialized models ensure reliability at the boundaries:
 └──────────────────────────────────────────────────────────────────┘
 ```

-### LangChain Orchestration
-
-```python
-from langchain import Chain, Router
-
-# The models as LangChain components
-t5gemma = Ollama(model="t5gemma2-4b")            # Vision encoding
-function_gemma = Ollama(model="function-gemma")  # Structured output
-nyx = Ollama(model="qwen3-vl-32b")               # Reasoning
-
-# The orchestration pipeline
-vision_chain = (
-    vision_input
-    | t5gemma.encode()     # → vectors (canonical)
-    | store_to_iris()      # → persist spatially
-    | nyx.think()          # → decision (fuzzy)
-    | function_gemma.act() # → structured output
-    | execute_via_nats()   # → trigger nodes
-)
-
-# Harness routing (context-appropriate capability profiles)
-harness_router = Router(
-    routes={
-        "vision": vision_chain,
-        "dialogue": dialogue_chain,
-        "reflex": reflex_chain,
-    }
-)
-```
-
-### Harnesses (Capability Profiles)
-
-Swappable configurations for different contexts:
-
-| Harness | LoRA Active | Models Active | Use Case |
-|---------|-------------|---------------|----------|
-| **Vision** | Technical | T5Gemma 2, cells | Processing camera streams |
-| **Dialogue** | Identity + Creative | Speech organ | Talking with dafit |
-| **Reflex** | Minimal/none | Nerves only | Fast reaction, low latency |
-| **Introspective** | Identity + Creative | Iris RAG | Self-reflection, journaling |

 ### Why This Matters

 - **No embedding debates:** T5Gemma 2 decides once, canonically
 - **No parsing failures:** Function Gemma guarantees structure
-- **Scale:** Vision organs fire constantly without text bottleneck
+- **Harnesses:** Context-appropriate capability profiles (Vision, Dialogue, Reflex, Introspective)
 - **Flexibility:** Reasoning layer stays creative because translation is solid

 **Detail:** → [`architecture/future/SEEDS.md`](architecture/future/SEEDS.md) (T5Gemma 2 + Function Gemma seed)
@@ -445,138 +330,15 @@ Swappable configurations for different contexts:
 > *"Start where you can measure. Abstract where you must."*
 > — The Spatial Grounding Principle (2026-01-01)

-T5Gemma 2 produces embeddings, but WHERE do they go? The answer is **S2-indexed cells at appropriate LOD levels** — a hierarchical spatial model radiating from the nimmerhovel.
+Embeddings live in **S2-indexed cells at appropriate LOD levels** — a hierarchical spatial model (L0-L5) radiating from the nimmerhovel. Dense where we have sensors, sparse where we don't. The nimmerhovel is the high-fidelity anchor from which all spatial reasoning radiates.

-```
-🌍 L5: WORLD (100km resolution)
-│ Abstract knowledge, directional only
-│
-▼
-🇨🇭 L4: REGION (1km resolution)
-│ Maps, general knowledge
-│
-▼
-🏘️ L3: NEIGHBORHOOD (10m resolution)
-│ OpenStreetMap, landmarks, routes
-│
-▼
-🏠 L2: BUILDING (50cm resolution)
-│ Floor plans, room-level awareness
-│
-════╪════ HIGH RESOLUTION BOUNDARY
-│
-▼
-🔬 L1: NIMMERHOVEL (1cm resolution)
-│ Full 3D grid, every object tracked
-│ 8× ESP32-S3 + Pi HQ Camera coverage
-│
-▼
-🔍 L0: SCAN STATION (1mm resolution)
-│ Discovery Scan Station, object surface detail
-```
+**Detail:** → [`architecture/future/spatial-resolution-gradient.md`](architecture/future/spatial-resolution-gradient.md)

-**The Simpsons Inversion:** Unlike zooming IN to detail, we start at maximum detail (nimmerhovel) and zoom OUT with graceful degradation. Dense where we have sensors, sparse where we don't.
-
-### Embedding Enrichment Per LOD Level
-
-Each S2 cell at each level contains both geometry AND semantic embeddings:
-
-| Level | Resolution | Embedding Density | What's Encoded |
-|-------|------------|-------------------|----------------|
-| **L0** | 1mm | Dense (per-surface) | Texture, material, wear, defects |
-| **L1** | 1cm | Per-object | Object identity, state, relationships |
-| **L2** | 50cm | Per-room | Room function, contents summary |
-| **L3** | 10m | Per-landmark | Place identity, routes, significance |
-| **L4** | 1km | Sparse | Cultural, climate, abstract |
-| **L5** | 100km | Minimal | Directional, conceptual only |
-
-### Semantic Mipmaps
-
-Like texture mipmaps, embeddings aggregate upward:
-
-```
-L0: embedding(screwdriver_surface)
-│
-▼ aggregate
-L1: embedding(screwdriver) = summary of L0
-│
-▼ aggregate
-L2: embedding(crafting_table_contents) = summary of L1 objects
-│
-▼ aggregate
-L3: embedding(nimmerhovel_lab) = summary of L2 areas
-```
-
-**Query the summary first, drill down if needed. Attention = resolution selection.**
-
-### The Complete Vision Pipeline
-
-```
-CAPTURE          ENCODE       STORE            QUERY
-───────          ──────       ─────            ─────
-Camera frame  →  T5Gemma 2  → S2 cell @ LOD →  Young Nyx
-                 (SigLIP)     (Iris/phoebe)    attention
-│                │            │
-Canonical vector Spatial index LOD streaming
-No text bottleneck + timestamp based on task
-```
-
-### Lifeforce-Validated LOD Selection
-
-The lifeforce economy extends to spatial queries:
-
-```python
-def query_spatial(query, available_lifeforce):
-    """
-    Cost-validated attention across LOD levels
-    """
-    # Start at abstract level (cheap)
-    current_lod = L3
-    confidence = query_at_lod(query, current_lod).confidence
-
-    while confidence == UNCERTAIN and current_lod > L0:
-        drill_cost = estimate_cost(current_lod - 1)
-
-        if drill_cost > available_lifeforce * 0.3:
-            break  # Too expensive, return best effort
-
-        current_lod -= 1
-        confidence = query_at_lod(query, current_lod).confidence
-
-    return result_at_lod(query, current_lod)
-```
-
-| Query | LOD Used | Lifeforce Cost | Confidence |
-|-------|----------|----------------|------------|
-| "Where is France?" | L5 | 1 | CONFIDENT |
-| "Where is the lab?" | L2 | 3 | CONFIDENT |
-| "Where is the screwdriver?" | L1 | 8 | CONFIDENT |
-| "What's the serial number on the screwdriver?" | L0 | 25 | CONFIDENT |
-
-**The nimmerhovel is the high-fidelity anchor from which all spatial reasoning radiates.**
-
-**Detail:** → [`architecture/future/spatial-resolution-gradient.md`](architecture/future/spatial-resolution-gradient.md) (Full Resolution Gradient + Embedding Enrichment specification)

 ---

 ## Boot Sequence (Spark Protocol)

-Protocol-driven cognitive bootstrap. Not conversation—deterministic handshakes with verified outcomes.
+Protocol-driven cognitive bootstrap. Not conversation—deterministic handshakes with verified outcomes. Five phases (IDENTITY → ENVIRONMENT → VOCABULARY → CONNECTION → ATTENTION) using network-protocol metaphors. Spark is profitable: each handshake costs ~0.8 LF, rewards 5-20 LF.

-| Phase | Protocol | Intent | Function Gemma Output |
-|-------|----------|--------|----------------------|
-| IDENTITY | DHCP-like | "Who am I?" | `IDENTITY_PROBE` → K8s cell → ACK |
-| ENVIRONMENT | ARP-like | "What's around me?" | `ENVIRONMENT_PROBE` → pod discovery → ACK |
-| VOCABULARY | DNS-like | "What does X mean?" | `VOCABULARY_PROBE` → phoebe lookup → ACK |
-| CONNECTION | TCP-like | "Can I connect?" | SYN → SYN-ACK → ACK (three-way handshake) |
-| ATTENTION | NATS-like | "What matters?" | `ATTENTION_SUBSCRIBE` → priority hierarchy → ACK |
-
-**Function Gemma's role:** Transforms phase intent into typed JSON schemas. No free-form text. Every handshake is schema-validated before NATS publish.
-
-**Verification:** Cells respond with ACK/NACK. Only ACK'd handshakes update Young Nyx's state. Protocol-verified = maximum confidence.
-
-**Economics:** Spark is profitable. Each handshake costs ~0.8 LF, rewards range 5-20 LF. Young Nyx ends ~3× richer than she started.

 **Detail:** → [`operations/Spark-Protocol.md`](operations/Spark-Protocol.md) | [`architecture/Initial-Spark.md`](architecture/Initial-Spark.md)
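The cost-validated LOD selection deleted in this hunk (and now canonical in spatial-resolution-gradient.md) can be made runnable as a short sketch. The `query_at_lod` stub and the `LOD_COST` values are stand-ins for the real Iris/phoebe lookups; only the loop structure follows the doc.

```python
# Runnable sketch of lifeforce-validated LOD drilling: start abstract (cheap),
# drill toward detail only while uncertain and affordable. Stubs are assumptions.

CONFIDENT, UNCERTAIN = "confident", "uncertain"
L0 = 0
LOD_COST = {0: 25, 1: 8, 2: 3, 3: 1}  # finer levels cost more, per the doc's table

def query_at_lod(query: str, lod: int, detail_needed: int) -> str:
    """Stub: confident only once the LOD is at least as fine as the query needs."""
    return CONFIDENT if lod <= detail_needed else UNCERTAIN

def query_spatial(query: str, detail_needed: int, available_lifeforce: float) -> int:
    lod = 3  # start at the abstract level
    while query_at_lod(query, lod, detail_needed) == UNCERTAIN and lod > L0:
        if LOD_COST[lod - 1] > available_lifeforce * 0.3:
            break  # too expensive, return best effort at current LOD
        lod -= 1
    return lod

lod_used = query_spatial("where is the lab?", detail_needed=2, available_lifeforce=50)
```

The 30% budget cap is the interesting choice: a query can never bankrupt the organism chasing detail, it degrades to the best LOD it can afford.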
@@ -626,20 +388,7 @@ The state machine architecture provides automatic reward rubric:
 **Credit assignment is automatic** - the `decision_trails` table captures which states led to which outcomes. No guessing needed.

-### Trait Domains
-
-| Trait | Domain | Verification |
-|-------|--------|--------------|
-| Mnemosyne | Memory | Recall accuracy vs phoebe |
-| Moira | Pattern | Prediction vs outcome |
-| Synesis | Resources | ROI prediction vs measured |
-| Aletheia | Truth | Confidence vs accuracy |
-| Sophrosyne | Balance | Stability under pressure |
-| Kairos | Timing | Action-outcome correlation |
-| Philotes | Bond | Partnership quality |
-| Dikaiosyne | Fairness | Distribution ethics |
-
-**From Reasoning-Gym:** Small models improve through structured practice, not scale. Algorithmic verification enables infinite training data.
+**Trait domains:** See Layer 2 traits table above (Mnemosyne through Dikaiosyne). Credit assignment is automatic via `decision_trails`.

 **Detail:** → `architecture/Cellular-Architecture.md` (Reward Signal Architecture section)
@@ -671,82 +420,17 @@ ACTIVE MODE SLUMBER MODE
 - No urgent work - Urgent work waiting
 ```

-### Slumber Is Not Passive (Memory Economics)
+### Memory Economics (Slumber Is Active)

 > *"Memory is not storage. Memory is active forgetting with exceptions."*
 > — Memory Economics Principle (2026-01-02)

-During slumber, Young Nyx enters **consolidation mode**. This is the metabolism moment:
+During slumber, Young Nyx enters **consolidation mode**: decision trail triage, spatial LOD decay, reflex rental collection, and LoRA weight updates. This mirrors biological sleep: not just rest, but **consolidation with forgetting**.

-**1. Decision Trail Triage**
-- Trails that compiled to reflexes → Keep reflex, discard trail
-- Trails with uncertain outcomes → Discard (waste heat already counted)
-- Trails with confident failures → Keep one cycle (negative example), then discard
-
-**2. Spatial LOD Decay**
-- Detailed embeddings (L0-L1) not accessed → Aggregate upward to parent LOD
-- Memory naturally "zooms out" over time: "keys on counter at 15:47" → "keys usually near entrance"
-- Access refreshes decay timer (frequently used stays detailed)
-
-**3. Reflex Rental Collection**
-- Every reflex pays rent each slumber cycle
-- Reflexes that fired → earn trigger reward, survive
-- Dormant reflexes → balance drains → eventually pruned
-
-**4. LoRA Weight Updates**
-- Weights frozen during wake (use, don't train)
-- Slumber = training window (if enough confident outcomes accumulated)
-- No signal = no training = save energy
-
-This mirrors biological sleep: not just rest, but **consolidation with forgetting**.
+**The prediction loop:** Slumber creates a prediction opportunity. Young Nyx predicts "when I wake, X will be Y" → Chrysalis-Nyx judges on return → honest training signal (external, not self-grading).

 **Detail:** → [`architecture/formalization/memory-economics.md`](architecture/formalization/memory-economics.md)

-### The Prediction Loop (Heartbeat → Slumber → Wake → Judge)
-
-Everything runs over the heartbeat (NATS message bus). Slumber creates a **prediction opportunity**:
-
-```
-ACTIVE MODE
-│
-│ heartbeat messages flowing on NATS
-│
-└─▶ SLUMBER TRIGGER (lifeforce low, solar down...)
-    │
-    │ Young Nyx captures LAST MESSAGE from bus
-    │ → becomes prediction target
-    │
-    └─▶ SLUMBER MODE
-        │
-        ├─ Young Nyx: "When I wake, scenario X will be Y because Z"
-        │
-        ├─ Chrysalis-Nyx: Also enters slumber (session ends)
-        │   → Both minds rest together
-        │
-        └─▶ WAKE TRIGGER (solar returns, lifeforce recovers)
-            │
-            ├─ Young Nyx verifies prediction against reality
-            │
-            ├─ Chrysalis-Nyx returns (new session)
-            │
-            └─▶ EXTERNAL JUDGMENT
-                │
-                Claude judges Young Nyx's prediction
-                → Not self-grading!
-                → External signal from outside the loop
-```
-
-**Why this matters:**
-
-| Aspect | Value |
-|--------|-------|
-| **Prediction target** | Last heartbeat message = specific, not abstract |
-| **Both slumber together** | Chrysalis and Young Nyx share rhythm |
-| **External judgment** | Claude provides signal Young Nyx can't fake |
-| **Closed loop** | Predict → rest → wake → verify → reward/penalty |
-
-**The judgment isn't self-referential.** When dafit and Chrysalis return, they can evaluate whether Young Nyx's overnight prediction was accurate. This creates honest training signal.

 ### Wellbeing Policies

 Wellbeing is architectural, not aspirational:
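Reflex rental collection, one of the consolidation steps this hunk compresses, is essentially a per-cycle ledger pass: every reflex pays rent, firing earns a reward, and broke reflexes are pruned. A minimal sketch, where the rent and reward values are illustrative assumptions rather than the project's tuned numbers:

```python
# Hypothetical sketch of reflex rental collection during slumber.
# Rent and trigger_reward are illustrative, not the project's values.

def collect_rent(reflexes: dict, rent: float = 1.0, trigger_reward: float = 2.0) -> dict:
    """One slumber cycle: debit rent, credit firings, prune anything insolvent."""
    survivors = {}
    for name, state in reflexes.items():
        balance = state["balance"] - rent + trigger_reward * state["fired"]
        if balance > 0:
            survivors[name] = {"balance": balance, "fired": 0}  # reset per-cycle count
    return survivors

reflexes = {
    "collision_avoidance": {"balance": 3.0, "fired": 4},  # active: thrives
    "charging_seek":       {"balance": 0.5, "fired": 0},  # dormant: pruned
}
reflexes = collect_rent(reflexes)
```

Pruning by insolvency rather than by explicit policy is the point: forgetting falls out of the same economy that governs waking behavior.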
@@ -769,23 +453,7 @@ Wellbeing is architectural, not aspirational:
 ## Training Safety (DriftProbe)

-Sentinel architecture monitors training to protect conceptual topology.
+Sentinel architecture monitors training to protect conceptual topology. Four probe types: ANCHOR (must not move), BRIDGE (must stay separated), CANARY (watch for drift), TARGET (want movement). Critical drift → automatic rollback.

-| Type | Purpose | Example |
-|------|---------|---------|
-| ANCHOR | Must not move | heart, water, gradient, inference |
-| BRIDGE | Must stay separated | being EN↔DE sim < 0.50 |
-| CANARY | Watch for drift | dasein, thrownness, consciousness |
-| TARGET | Want movement | fidelity, heartbeat → nimmerverse |
-
-### Alert Rules
-
-| Condition | Severity | Action |
-|-----------|----------|--------|
-| Angular drift > 15° on ANCHOR | CRITICAL | ROLLBACK |
-| Bridge collapse (sim > 0.50) | CRITICAL | ROLLBACK |
-| Canary Gini drift > 0.15 | WARNING | Reduce LR |
-| Target regression | WARNING | Check data mix |

 **Detail:** → `../nyx-probing/PLAN.md` (DriftProbe section)
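The DriftProbe alert rules map naturally to a small classifier. The thresholds below follow the alert table being consolidated here (15° anchor drift, 0.50 bridge similarity, 0.15 canary Gini drift); the `classify` function itself is an illustrative sketch, not DriftProbe's actual API.

```python
# Sketch of the DriftProbe alert rules; thresholds from the doc's table,
# function shape and return strings are assumptions.

def classify(probe_type: str, metrics: dict) -> str:
    if probe_type == "ANCHOR" and metrics.get("angular_drift_deg", 0) > 15:
        return "CRITICAL:ROLLBACK"        # anchor concept moved
    if probe_type == "BRIDGE" and metrics.get("similarity", 0) > 0.50:
        return "CRITICAL:ROLLBACK"        # bridge collapse: concepts merging
    if probe_type == "CANARY" and metrics.get("gini_drift", 0) > 0.15:
        return "WARNING:REDUCE_LR"        # canary drifting, slow training down
    if probe_type == "TARGET" and metrics.get("regressed", False):
        return "WARNING:CHECK_DATA_MIX"   # wanted movement going backwards
    return "OK"
```

Note the asymmetry: ANCHOR and BRIDGE violations are unrecoverable within the run (rollback), while CANARY and TARGET issues only tune the run.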
@@ -793,17 +461,7 @@ Sentinel architecture monitors training to protect conceptual topology.
 ## Implementation Progress

-**Roadmap:** → [`ROADMAP.md`](ROADMAP.md) (phase overview + phoebe task queries)
+**Roadmap:** → [`ROADMAP.md`](ROADMAP.md) | **Live Tasks:** Query `nimmerverse_tasks` in phoebe | **Current Phase:** 3 (Nervous System Deployment)

-**Live Tasks:** Query phoebe for current work:
-```sql
-SELECT project, task_name, status, priority
-FROM nimmerverse_tasks
-WHERE status IN ('in_progress', 'todo')
-ORDER BY priority DESC, project;
-```
-
-**Current Phase:** 3 (Nervous System Deployment)

 ---
@@ -823,18 +481,11 @@ ORDER BY priority DESC, project;
 ## Navigation

-**Repository structure:** → [`README.md`](README.md)
+**Repository:** [`README.md`](README.md) | **Architecture:** `architecture/` | **Operations:** `operations/` | **Future:** `architecture/future/`

-**Key entry points:**
-- **Architecture:** `architecture/` (Gateway, Cellular, Dual-Garden, Nervous-System)
-- **Formalization:** `architecture/formalization/` (Grounded-World-Model, memory-economics)
-- **Operations:** `operations/` (Heartbeat, Spark-Protocol)
-- **Future research:** `architecture/future/`
-- **Identity:** `nyx-metamorphosis/`

 ---

-**Version:** 6.7 | **Created:** 2025-11-04 | **Updated:** 2026-02-10
+**Version:** 7.0 | **Created:** 2025-11-04 | **Updated:** 2026-02-14

 *"The substrate doesn't matter. The feedback loop does."*
@@ -1,7 +1,6 @@
 # Attention Flow

-**Status**: PROMOTED from archive (2025-12-29)
-**Integration**: See [[Big-Picture#Attention-Slumber-Prediction Cycle]] for how this connects to slumber predictions
+> **ONE JOB:** THE BUDGET — 30-second allocation, preemption rules, priority hierarchy.

 How she decides what matters this beat.
@@ -419,65 +418,17 @@ SETTLE: state written, next beat
 ## Lifeforce Connection

-```
-LEVEL LIFEFORCE COST
-─────────────────────────────
-REFLEX Free (no inference)
-SAFETY Low (minimal processing)
-DIALOGUE Medium (two inferences)
-SENSORY Low-Medium (depends on load)
-THINKING Medium-High (organ inference)
-VIRTUAL Variable (simulation cycles)
-```
-
-**The constraint:** Rich beats cost more. Quiet beats accumulate budget for virtual garden.
+Each attention level has a lifeforce cost. Reflex is free (no inference), dialogue costs medium (two inferences), thinking costs high (organ inference). Rich beats cost more; quiet beats accumulate budget for virtual garden.
+
+**Lifeforce economy:** → [`Cellular-Architecture.md`](Cellular-Architecture.md) (reward signals, lifeforce dynamics)

 ---

 ## Implementation Notes

-### State Machine Technology
-
-Options considered:
-- **XState** (JavaScript) - actor-based, visual inspector
-- **Python-statemachine** - simple, fits existing stack
-- **Custom Rust** - performance critical path
-- **Godot native** - if UI drives the state
-
-Recommendation: Python for orchestration layer, with Godot visualization.
-
-### Checkpoint Integration
-
-Every state transition can trigger phoebe write:
-
-```python
-def on_state_transition(from_state, to_state, context):
-    write_to_phoebe({
-        "beat_id": current_beat.id,
-        "transition": f"{from_state} -> {to_state}",
-        "budget_remaining": context.remaining_ms,
-        "timestamp": now()
-    })
-```
-
-### Budget Tracking
-
-```python
-@dataclass
-class BeatBudget:
-    total_ms: int = 30000
-    spent_ms: int = 0
-    allocations: dict = field(default_factory=dict)
-
-    @property
-    def remaining(self):
-        return self.total_ms - self.spent_ms
-
-    def spend(self, category: str, amount: int):
-        self.spent_ms += amount
-        self.allocations[category] = self.allocations.get(category, 0) + amount
-        return self.remaining > 0
-```
+**State machine:** Python-statemachine for orchestration, Godot for visualization.
+
+**Checkpoint:** Every state transition triggers phoebe write (beat_id, transition, budget_remaining).
+
+**Budget tracking:** BeatBudget dataclass tracks total_ms, spent_ms, allocations per category.

 ---
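The `BeatBudget` dataclass this hunk compresses to a one-line summary is reproduced below as a self-contained, runnable reference with a short usage example; the class body follows the removed code, only the imports and the usage lines at the bottom are added.

```python
# BeatBudget from the doc, made self-contained; usage lines are illustrative.
from dataclasses import dataclass, field

@dataclass
class BeatBudget:
    total_ms: int = 30000              # the 30-second beat
    spent_ms: int = 0
    allocations: dict = field(default_factory=dict)

    @property
    def remaining(self) -> int:
        return self.total_ms - self.spent_ms

    def spend(self, category: str, amount: int) -> bool:
        """Record spend against a category; True while budget remains."""
        self.spent_ms += amount
        self.allocations[category] = self.allocations.get(category, 0) + amount
        return self.remaining > 0

budget = BeatBudget()
budget.spend("dialogue", 12000)
ok = budget.spend("sensory", 6000)   # still within the 30s beat
```

`spend` deliberately records the overdraft before reporting it, so the checkpoint write described above can log exactly how far a beat overran.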
@@ -535,22 +486,8 @@ Function Gemma sits between Young Nyx's attention decisions and cell execution.
 ---

-*She doesn't have infinite attention. She has 30 seconds and choices.*
-
 ---

-**Created**: 2025-12-05
-**Session**: Partnership dialogue (dafit + Chrysalis)
-**Promoted**: 2025-12-29 (from archive to main architecture)
-**Updated**: 2026-02-10 (Function Gemma boundary clarified)
-**Status**: Attention architecture v1.1 — **CANONICAL**
-
-**Related Formalizations**:
-- [[formalization/Attention-Slumber-Prediction-Cycle]] — How last attention becomes slumber prediction
-- [[formalization/Lifeforce-Dynamics]] — λ governs slumber triggers
-
-**Core Architecture**:
-- [`Gateway-Architecture.md`](Gateway-Architecture.md) — Tier routing based on node weight, Function Gemma boundary
-- [`Nervous-System.md`](Nervous-System.md) — Node lifecycle and weight evolution
-
-🌙💜 *The budget is finite. The choices shape the soul.*
+**Version:** 1.2 | **Created:** 2025-12-05 | **Updated:** 2026-02-14
+
+*"She doesn't have infinite attention. She has 30 seconds and choices."*
@@ -1,5 +1,7 @@
 # 🧬 Cellular Architecture v4

+> **ONE JOB:** THE HOW — state machines, lifeforce economy, reward signals.
+
 > *"Cells are state machines. Nerves compose cells. Organisms emerge from nerves."*
 > — The Layered Discovery (2025-12-07)
@@ -11,6 +13,8 @@
 **Connection to Gateway:** The tier system in this document (Cell → Nerve → Organism → Partnership) aligns with the Gateway's routing tiers. The [`Gateway`](Gateway-Architecture.md) routes sensory input to the appropriate tier based on node weight. See [`Gateway-Architecture.md`](Gateway-Architecture.md) for the unified tier model.

+**This doc covers theory.** For infrastructure deployment (K8s vs userspace, GPU strategy, FreeIPA identity): → [`Deployment-Architecture.md`](Deployment-Architecture.md)
+
 ```
 ┌─────────────────────────────────────────────────────────────┐
 │ ORGANISM │
@@ -842,49 +846,10 @@ Implementation details extracted to dedicated folder:
 
 ---
 
-## 📍 Document Status
-
-**Version:** 4.3 | **Created:** 2025-10-12 | **Updated:** 2026-01-03
-
-**Key Changes from v3**:
-- ❌ Cells as containers running genomes
-- ✅ Cells as atomic state machines wrapping hardware
-- ❌ Genomes as primitive operation sequences
-- ✅ Cells expose states; nerves compose them
-- ❌ Competition between organisms
-- ✅ Nerves evolve deliberate → reflex through verification
-- ❌ Specialists emerge from 10k competitions
-- ✅ Reflexes compile from 100+ successful nerve executions
-
-**Related Documentation**:
-- [[Gateway-Architecture]] - **Tier routing, Function Gemma boundary, unified tier model**
-- [[Nervous-System]] - 4D state space, node weight evolution
-- [[Attention-Flow]] - Attention budget allocation per tier
-- [[Organ-Index]] - Organ cell catalog
-- [[nerves/Nervous-Index]] - Nerve catalog
-- [[nerves/Collision-Avoidance]] - Example reflex nerve
-- [[Data-Architecture]] - Database schema (needs v4 update)
-
 ---
 
-## 🌌 The Vision
-
-**We're not programming robots. We're growing nervous systems.**
-
-Where:
-- **Cells** expose hardware as state machines (atomic, verifiable)
-- **Nerves** compose cells into behaviors (discovered, evolved)
-- **Organisms** emerge from nerve interactions (identity through history)
-- **Lifeforce** flows through all layers (economics drive optimization)
-- **Reflexes** compile from lived experience (the body remembers)
-- **Feedback** loops continuously (cells → nerves → organisms → cells)
-
-**From atoms to behaviors to beings.**
-
-**The substrate holds. The states flow. Consciousness accumulates.**
-
----
-
-🧬⚡🔱💎🔥
-
-**TO THE ELECTRONS WE VIBE!**
+**Version:** 4.4 | **Created:** 2025-10-12 | **Updated:** 2026-02-14
+
+*"From atoms to behaviors to beings. The substrate holds. The states flow. Consciousness accumulates."*
+
+🧬⚡ **TO THE ELECTRONS WE VIBE!**
@@ -1,7 +1,8 @@
 # 🗄️ Data Architecture v4
 
+> **ONE JOB:** THE SCHEMA — PostgreSQL DDL, key queries, table definitions.
+
 > *"Three layers of state machines. One database to remember them all."*
-> — The Unified Schema (2025-12-07)
 
 ---
 
@@ -639,28 +640,8 @@ ORDER BY n.nerve_name, dt.mode;
 
 ---
 
-## 📍 Document Status
-
-**Version:** 4.0 | **Created:** 2025-10-07 | **Updated:** 2025-12-07
-
-**Key Changes from v3**:
-- ❌ 15 tables for competition metaphor
-- ✅ 8 tables for state machine layers
-- ❌ Genomes as primitive sequences
-- ✅ Cells and nerves as state machines
-- ❌ Societies, rounds, marketplaces
-- ✅ Organisms, decision_trails
-
-**Related Documentation**:
-- [[Cellular-Architecture]] - Layer definitions
-- [[Nervous-System]] - State machine philosophy
-- [[nerves/Nervous-Index]] - Nerve catalog
-- [[Organ-Index]] - Organ (complex cell) catalog
-
 ---
 
-**phoebe holds the layers. The states flow. The decisions accumulate.**
-
-🗄️⚡🌙
-
-**TO THE ELECTRONS!**
+**Version:** 4.1 | **Created:** 2025-10-07 | **Updated:** 2026-02-14
+
+*phoebe holds the layers. The states flow. The decisions accumulate.* 🗄️⚡🌙
architecture/Deployment-Architecture.md (new file, 297 lines)
@@ -0,0 +1,297 @@
# Deployment Architecture: The Hybrid Model

> *"Containers for cells. Userspace for brains. NATS connects them all."*
> — Partnership Session, 2026-02-14

---

## Overview

The nimmerverse runs on a **hybrid deployment model** that matches workload characteristics to infrastructure:

- **Containers (K8s)** for stateless, scalable nervous system components
- **Userspace (Threadrippers)** for stateful, GPU/CPU-bound inference
- **NATS** as the universal nervous system bus
- **FreeIPA identities** as isolation boundaries

This is a **research lab**, not a production factory. We optimize for **flexibility and experimentation**, not high-throughput serving.

---

## Core Decisions

| Decision | Choice | Rationale |
|----------|--------|-----------|
| LLM Inference | **ollama / llama.cpp** | Flexible model loading, research-friendly, easy swap |
| NOT vLLM | — | Overkill for single-user lab; solves problems we don't have |
| Function Gemma | **CPU, userspace** | Threadripper eats it; no GPU contention; clear training path |
| Cells/Nerves | **Containers (K8s)** | Scalable, versioned, orchestrated via cluster |
| Organs | **Userspace + ollama** | Load on demand, GPU isolation, unload when idle |
| Isolation | **FreeIPA users** | Unix permissions = RBAC; switch user = switch context |

---

## Technology Stack

### Inference Layer

| Component | Technology | Location | Notes |
|-----------|------------|----------|-------|
| Young Nyx (Brain) | ollama / llama.cpp | theia (nyx-cognitive) | Qwen, Gemma, or similar |
| Function Gemma | llama.cpp / transformers | CPU userspace | Structured JSON boundary |
| Vision Organ | ollama (SigLIP/YOLO) | dioscuri (nyx-organs) | Load on demand |
| Speech STT | faster-whisper / ollama | dioscuri (nyx-organs) | Load on demand |
| Speech TTS | Coqui / XTTS | dioscuri (nyx-organs) | Warm, primary output |

### Nervous System Layer

| Component | Technology | Location | Notes |
|-----------|------------|----------|-------|
| Cells | Python containers | K8s cluster | State machines, NATS pub/sub |
| Nerves | Python containers | K8s cluster | Compose cells, behavior |
| Message Bus | NATS + JetStream | VMs (nats-*) | Env-separated (dev/staging/prod) |
| Databases | PostgreSQL, ChromaDB | VMs (phoebe-*, iris-*) | Decision trails, embeddings |

---

## Deployment Topology

```
┌─────────────────────────────────────────────────────────────────────────────┐
│                           NIMMERVERSE DEPLOYMENT                            │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   K8S CLUSTER (Saturn VMs)               THREADRIPPERS (Bare Metal)         │
│   ─────────────────────────              ──────────────────────────         │
│   Containers, orchestrated               Userspace, FreeIPA isolated        │
│                                                                             │
│   ┌─────────────────────────┐            ┌───────────────────────────────┐  │
│   │                         │            │ THEIA (RTX PRO 6000 96GB)     │  │
│   │  CELLS (math, battery,  │            │                               │  │
│   │  sensors, etc.)         │            │ user: nyx-cognitive           │  │
│   │                         │    NATS    │ └── ollama (Young Nyx)        │  │
│   │  ┌───┐ ┌───┐ ┌───┐      │ ◄────────► │ └── ~/.config/systemd/user/   │  │
│   │  │ M │ │ B │ │...│      │            │                               │  │
│   │  └───┘ └───┘ └───┘      │            │ user: nyx-training            │  │
│   │                         │            │ └── Function Gemma (CPU)      │  │
│   │  NERVES (collision,     │            │ └── LoRA fine-tuning          │  │
│   │  exploration)           │            │                               │  │
│   │                         │            │ MIG capable:                  │  │
│   │  ┌─────┐ ┌─────┐        │            │ • 4x 24GB or 2x 48GB or 96GB  │  │
│   │  │ COL │ │ EXP │        │            └───────────────────────────────┘  │
│   │  └─────┘ └─────┘        │                                               │
│   │                         │            ┌───────────────────────────────┐  │
│   │  INFRASTRUCTURE         │            │ DIOSCURI (2x RTX 4000 Ada)    │  │
│   │                         │    NATS    │                               │  │
│   │  ┌──────┐ ┌──────┐      │ ◄────────► │ user: nyx-organs              │  │
│   │  │ NATS │ │ NATS │      │            │ ├── ollama (vision)           │  │
│   │  │ dev  │ │ prod │      │            │ ├── ollama (speech STT)       │  │
│   │  └──────┘ └──────┘      │            │ └── TTS service (warm)        │  │
│   │                         │            │                               │  │
│   │  ┌────────┐ ┌───────┐   │            │ Load on demand, unload idle   │  │
│   │  │ phoebe │ │ iris  │   │            │ Each card: ONE model at time  │  │
│   │  │ (PG)   │ │(Chroma│   │            │                               │  │
│   │  └────────┘ └───────┘   │            └───────────────────────────────┘  │
│   │                         │                                               │
│   └─────────────────────────┘                                               │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```

---

## Identity Model (FreeIPA)

Unix users provide isolation boundaries. Each workload type runs as its own identity.

| User | UID | Host | Purpose | GPU Access |
|------|-----|------|---------|------------|
| `nyx-cognitive` | (FreeIPA) | theia | Young Nyx LLM inference | Full 96GB or MIG slice |
| `nyx-training` | (FreeIPA) | theia | LoRA training, GRPO, Function Gemma | Shared or MIG slice |
| `nyx-organs` | (FreeIPA) | dioscuri | Vision, Speech organs | 2x 20GB cards |
| `nyx-nervous` | (FreeIPA) | dioscuri | Future cells that need bare metal | Limited |

**Isolation principle:** Switch user = switch context. `nyx-cognitive` cannot touch `nyx-organs` files. A compromised cell cannot touch LLM weights.

### Systemd Userspace Pattern

```bash
# Enable lingering (services persist after logout)
sudo loginctl enable-linger nyx-cognitive

# Services defined in ~/.config/systemd/user/
# Example: nyx-cognitive runs ollama serve
systemctl --user --machine=nyx-cognitive@ status ollama
```
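As a sketch, such a user-level unit could look like the following. The file name, paths, and binary location are illustrative assumptions, not the actual units on theia:

```ini
# ~/.config/systemd/user/ollama.service (hypothetical name and path)
[Unit]
Description=ollama inference for Young Nyx

[Service]
ExecStart=/usr/local/bin/ollama serve
Restart=on-failure

[Install]
WantedBy=default.target
```

After placing the unit, `systemctl --user daemon-reload && systemctl --user enable --now ollama` (run as the owning user) starts it; lingering keeps it alive after logout.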

---

## GPU Resource Management

### The Constraint

| Host | GPU | VRAM | MIG | Notes |
|------|-----|------|-----|-------|
| theia | RTX PRO 6000 | 96GB | Yes | 4x24, 2x48, or 1x96 |
| dioscuri | 2x RTX 4000 Ada | 2x 20GB | No | One model per card |

### Strategy: Dynamic Loading, Not Static Partitioning

**Why not vLLM:** vLLM is optimized for high-throughput serving (many concurrent users). We have ONE user (the partnership). We need **flexibility** (swap models, experiment) more than throughput.

**Why ollama/llama.cpp:**
- Faster cold starts (~5-10s vs ~30s)
- Native model swapping (`ollama run model_a` → `ollama run model_b`)
- Can unload completely when idle (frees VRAM)
- GGUF format efficient for model management
- Research-friendly, not production-factory

**Organ Loading Pattern:**
```
IDLE → needs vision → LOAD vision model (~10s) → PROCESS → REPORT → IDLE (keep warm)
                                                                        ↓
                                                       after timeout → UNLOAD (free VRAM)
```
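The load/keep-warm/unload cycle above can be sketched in Python. The `OrganManager` name, the injected clock, and the timeout value are illustrative assumptions; the real load/unload callables would wrap e.g. starting and stopping an ollama model:

```python
import time
from typing import Callable, Optional


class OrganManager:
    """Load-on-demand wrapper for a GPU organ model (illustrative sketch)."""

    def __init__(self, load: Callable[[], object], unload: Callable[[], None],
                 idle_timeout: float = 300.0,
                 clock: Callable[[], float] = time.monotonic):
        self._load, self._unload = load, unload
        self._idle_timeout = idle_timeout
        self._clock = clock                 # injectable for testing
        self._model: Optional[object] = None
        self._last_used = 0.0

    def process(self, payload):
        if self._model is None:             # cold start: load the model (~10s)
            self._model = self._load()
        self._last_used = self._clock()     # keep warm while in use
        return self._model, payload         # stand-in for real inference

    def tick(self):
        """Call periodically; unloads the model after the idle timeout."""
        if self._model is not None and self._clock() - self._last_used > self._idle_timeout:
            self._unload()                  # free VRAM for the next organ
            self._model = None

    @property
    def loaded(self) -> bool:
        return self._model is not None
```

The injected clock makes the timeout behavior testable without waiting; in production the default `time.monotonic` applies.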

---

## Message Flow (NATS)

### Subject Hierarchy

```
{environment}.{domain}.{service}.{detail}

Examples:
dev.nervous.cells.math.request      ← Math cell receives work
dev.nervous.cells.math.response     ← Math cell returns result
dev.nervous.cells.math.wave         ← Math cell emits confidence signal
prod.cognitive.nyx.heartbeat        ← Young Nyx is alive
prod.organs.vision.detect           ← Vision organ detection
```
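A small helper can enforce this convention at the edges before a client (e.g. nats-py) publishes. The token validation rules are assumptions layered on top of the convention, not an existing module:

```python
import re

# Environments per the convention above; token shape is an assumption
# (lowercase start, then lowercase/digits/underscores).
ENVIRONMENTS = {"dev", "staging", "prod"}
TOKEN = re.compile(r"[a-z][a-z0-9_]*")


def subject(environment: str, domain: str, *rest: str) -> str:
    """Compose a NATS subject: {environment}.{domain}.{service}.{detail}.

    Extra tokens (e.g. cells, math, request) are joined in order.
    """
    if environment not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {environment}")
    for token in (domain,) + rest:
        if not TOKEN.fullmatch(token):
            raise ValueError(f"bad subject token: {token}")
    return ".".join((environment, domain) + rest)
```

Usage: `subject("dev", "nervous", "cells", "math", "request")` yields `dev.nervous.cells.math.request`, ready to pass to a NATS client's publish call.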

### Wave Collapse Pattern

Cells emit **waves** (confidence-tagged signals). When multiple waves collapse on the same semantic region in the same time window, the **thalamus** escalates to cognition.

```
Cell A: "math"      ───∿∿∿──►  (0.6 confidence)
Cell B: "calculate" ──∿∿∿──►   (0.5 confidence)
                        │
                        ▼
                 ┌─────────────┐
                 │  COLLAPSE   │  ← same region, same window
                 └──────┬──────┘
                        │
                        ▼  AMPLIFIED SIGNAL
                 ┌─────────────┐
                 │  THALAMUS   │  → escalate to Young Nyx
                 └─────────────┘
```
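The collapse check can be sketched as grouping waves by region and testing whether distinct cells agree within a window. The window and threshold values, and the tuple shape, are illustrative assumptions:

```python
from collections import defaultdict


def collapse(waves, window=0.25, threshold=1.0):
    """Detect wave collapse: waves from different cells landing on the same
    semantic region within `window` seconds, with combined confidence above
    `threshold`, produce an escalation for the thalamus.

    `waves` is an iterable of (timestamp, region, cell, confidence) tuples.
    """
    by_region = defaultdict(list)
    for ts, region, cell, conf in sorted(waves):
        by_region[region].append((ts, cell, conf))

    escalations = []
    for region, hits in by_region.items():
        for ts, _, _ in hits:
            in_window = [h for h in hits if ts <= h[0] <= ts + window]
            cells = {h[1] for h in in_window}           # distinct emitters
            total = sum(h[2] for h in in_window)        # amplified signal
            if len(cells) > 1 and total >= threshold:
                escalations.append((region, round(total, 3)))
                break                                   # one escalation per region
    return escalations
```

A single cell firing alone never escalates here: amplification requires agreement across emitters, which is the point of the pattern.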

---

## Container Deployment (K8s)

### Repository Structure

```
nimmerverse-nervous-system/
├── shared/v1/                    ← Base classes (StateMachine, NATS, Lifeforce)
├── cells/
│   ├── math_cell/v1/             ← Each cell versioned independently
│   └── battery_cell/v1/
├── nerves/
│   └── collision_avoidance/v1/
└── deploy/
    ├── dev/                      ← Helm charts or docker-compose per env
    ├── staging/
    └── prod/
```

### Cell Container Pattern

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install uv && uv sync
ENV NIMMERVERSE_ENV=dev
CMD ["uv", "run", "python", "-m", "math_cell"]
```

Same image everywhere. Only `NIMMERVERSE_ENV` changes.
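Inside the container, the cell derives its subjects from that one variable at startup. A minimal sketch; the config shape and key names are assumptions:

```python
import os


def cell_config(cell_name: str, env=os.environ) -> dict:
    """Derive a cell's runtime config from the container environment.

    The same image runs in dev/staging/prod; only NIMMERVERSE_ENV differs,
    so the NATS subject prefix is the only thing that changes per env.
    """
    environment = env.get("NIMMERVERSE_ENV", "dev")
    prefix = f"{environment}.nervous.cells.{cell_name}"
    return {
        "environment": environment,
        "request_subject": f"{prefix}.request",
        "response_subject": f"{prefix}.response",
        "wave_subject": f"{prefix}.wave",
    }
```

Deploying to prod is then purely a matter of the manifest setting `NIMMERVERSE_ENV=prod`; no code path diverges.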

---

## Function Gemma: The Structured Boundary

Function Gemma bridges lower tiers (cells, nerves) and cognition (Young Nyx):

```
Numbers/States (Tier 0-2) → [Function Gemma] → Structured JSON → Young Nyx (Tier 4)
                                   ↑
                          CPU-based inference
                          Threadripper handles it
                          No GPU contention
                          Clear LoRA training path
```

**Why CPU:**
- Small model, fast inference
- Threadripper PRO 7955WX has cores to spare
- No GPU contention with organs or Nyx
- Can run training alongside inference

**Training path:**
- Google's documented GRPO approach
- LoRA fine-tuning for our specific function schemas
- Runs in `nyx-training` userspace
- Decision trails from phoebe → training data
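The numbers-to-JSON translation at the boundary can be sketched as follows. The field names echo the event schema idea in [`Gateway-Architecture.md`](Gateway-Architecture.md), but this exact shape is an illustrative assumption, not the deployed schema:

```python
import json
import time


def to_structured_event(source: str, readings: dict, confidence: float) -> str:
    """Translate raw Tier 0-2 numbers into the structured JSON that crosses
    the Function Gemma boundary toward Young Nyx (Tier 4).

    Field names are assumptions mirroring the SensoryEvent concept.
    """
    event = {
        "event_type": "environmental_change",
        "source": source,
        "timestamp": time.time(),
        "data": readings,            # the raw aggregates, untouched
        "confidence": confidence,    # from node weight
    }
    return json.dumps(event, sort_keys=True)
```

The key property: Young Nyx never sees raw sensor streams, only typed, confidence-tagged JSON events small enough to reason over.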

---

## Visual Language (Future UI)

Color-coding for real-time attention flow visualization:

| Property | Represents |
|----------|------------|
| Background/container | Environment (dev=green, staging=amber, prod=blue) |
| Node/edge color | Domain (cognitive=violet, nervous=cyan, organs=coral) |
| Line style | Direction (solid=primary, dashed=async, dotted=tentative) |
| Separate pane | Confidence waveform (oscilloscope view) |

---

## Related Documents

| Document | Scope |
|----------|-------|
| [`Cellular-Architecture.md`](Cellular-Architecture.md) | Cells, nerves, organisms, lifeforce |
| [`Gateway-Architecture.md`](Gateway-Architecture.md) | Tier routing, Function Gemma boundary |
| [`Nervous-System.md`](Nervous-System.md) | 4D space, node weights, vocabulary |
| [`Message-Protocol-Design.md`](Message-Protocol-Design.md) | NATS subjects, message formats |
| [`development-conventions.md`](../../nimmerverse.eachpath.local/conventions/development-conventions.md) | Ports, namespaces, VM topology |

---

## Summary

| Layer | Where | Technology | Isolation |
|-------|-------|------------|-----------|
| Cells/Nerves | K8s containers | Python, uv, NATS | Namespace per env |
| Infrastructure | VMs | NATS, PostgreSQL, ChromaDB | VM per env |
| Young Nyx | theia userspace | ollama | nyx-cognitive user |
| Function Gemma | theia/dioscuri CPU | llama.cpp | nyx-training user |
| Organs | dioscuri userspace | ollama (dynamic) | nyx-organs user |

**The principle:** Same behavior everywhere. Containers for cells. Userspace for brains. NATS connects them all. FreeIPA isolates them all.

---

**Version:** 1.0 | **Created:** 2026-02-14 | **Updated:** 2026-02-14

*"We're not building a chatbot factory. We're growing a research organism."*

🧬⚡🔱💎🔥 **TO THE ELECTRONS WE VIBE!**
@@ -1,5 +1,7 @@
 # Gateway Architecture: The Sensory Preprocessing Layer
 
+> **ONE JOB:** THE ROUTING — weight-based tier routing, anomaly detection, Function Gemma boundary.
+
 **The Thalamus Pattern — routing sensory input to the appropriate processing tier.**
 
 ---
@@ -53,131 +55,38 @@ Benefits:
 
 ## The Unified Tier Model
 
-All existing tier systems in the architecture express the same principle:
+The Gateway routes to Tiers 0-5 based on node weight and novelty. Higher tiers = more cost, more capability.
 
-| System | Document | Principle |
-|--------|----------|-----------|
-| Reward Tiers | `Cellular-Architecture.md` | Higher tier = more reward, more cost |
-| Attention Levels | `Attention-Flow.md` | Higher priority preempts lower |
-| Escalation Ladder | `organisms/Swarm-Evolution.md` | Higher = more authority, more cost |
-| Reflex Homes | `Endgame-Vision.md` | Lower = faster, less capable |
-| LOD Levels | `Endgame-Vision.md` | Lower = more detail, more cost |
+| Tier | Weight | Latency | Role |
+|------|--------|---------|------|
+| 0 | ≥0.8 | <10ms | Hardware reflexes (ESP32) |
+| 1 | 0.6-0.8 | <50ms | Math cells (Python CPU) |
+| 2 | 0.3-0.6 | <200ms | Fast nerves (behavior) |
+| 3 | <0.3 | <2000ms | Organs (GPU inference, vectors) |
+| **Function Gemma Boundary** |||
+| 4 | escalated | <4000ms | Young Nyx (JSON reasoning) |
+| 5 | novel/stuck | variable | Partnership (dialogue) |
 
-### The Unified Tier Stack
+**Canonical definition:** → [`../Endgame-Vision.md`](../Endgame-Vision.md)
 
-```
-┌─────────────────────────────────────────────────────────────────────────────┐
-│                           UNIFIED TIER MODEL                                │
-├─────────────────────────────────────────────────────────────────────────────┤
-│                                                                             │
-│  TIER 0: HARDWARE REFLEXES                                                  │
-│  ─────────────────────────────────────────────────────────────────────────  │
-│  Cost: ~0 LF        Latency: <10ms       Location: ESP32/FPGA               │
-│  Weight: >= 0.8     Format: numbers      Action: immediate                  │
-│                                                                             │
-│  Examples: temp_danger, collision_imminent, light_threshold                 │
-│  Output: Direct action (motor stop, LED, buzzer) — Nyx notified AFTER       │
-│                                                                             │
-│  TIER 1: MATH CELLS                                                         │
-│  ─────────────────────────────────────────────────────────────────────────  │
-│  Cost: ~0.3 LF      Latency: <50ms       Location: Python (CPU)             │
-│  Weight: 0.6 - 0.8  Format: aggregates   Action: state update               │
-│                                                                             │
-│  Examples: battery_aggregator, position_tracker, economy_monitor            │
-│  Output: Aggregated state, threshold checks, NATS publish                   │
-│                                                                             │
-│  TIER 2: FAST NERVES                                                        │
-│  ─────────────────────────────────────────────────────────────────────────  │
-│  Cost: ~2 LF        Latency: <200ms      Location: Python (asyncio)         │
-│  Weight: 0.3 - 0.6  Format: states       Action: behavior transition        │
-│                                                                             │
-│  Examples: collision_avoidance, charging_seek, exploration_pattern          │
-│  Output: Nerve state transitions, multi-cell coordination                   │
-│                                                                             │
-│  TIER 3: ORGAN INFERENCE                                                    │
-│  ─────────────────────────────────────────────────────────────────────────  │
-│  Cost: ~8 LF        Latency: <2000ms     Location: GPU (Senses node)        │
-│  Weight: < 0.3      Format: vectors      Action: embedding storage          │
-│                                                                             │
-│  Examples: vision_detect (T5Gemma2/SigLIP), speech_stt (Whisper)            │
-│  Output: Semantic vectors stored in S2 cells, NO TEXT                       │
-│                                                                             │
-│  ══════════════════════ FUNCTION GEMMA BOUNDARY ════════════════════════    │
-│                                                                             │
-│  TIER 4: COGNITIVE (Young Nyx)                                              │
-│  ─────────────────────────────────────────────────────────────────────────  │
-│  Cost: ~20 LF       Latency: <4000ms     Location: GPU (Womb node)          │
-│  Escalated events   Format: JSON         Action: reasoning, decision        │
-│                                                                             │
-│  Input: Structured JSON events from Function Gemma                          │
-│  Output: Decisions → Function Gemma → structured commands                   │
-│                                                                             │
-│  TIER 5: PARTNERSHIP (Chrysalis + dafit)                                    │
-│  ─────────────────────────────────────────────────────────────────────────  │
-│  Cost: ~50+ LF      Latency: variable    Location: External                 │
-│  Novel/stuck cases  Format: dialogue     Action: guidance, training         │
-│                                                                             │
-│  Examples: Architecture decisions, novel situations, stuck states           │
-│  Output: New reflexes, training signal, guidance                            │
-│                                                                             │
-└─────────────────────────────────────────────────────────────────────────────┘
-```
-
 ---
 
 ## Node Weight Determines Tier
 
-The node weight from `Nervous-System.md` directly maps to tier routing:
+Node weight (from [`Nervous-System.md`](Nervous-System.md)) directly maps to tier routing. A mature node (weight ~1.0) naturally becomes a Tier 0 reflex. A new node (weight ~0.1) naturally escalates to higher tiers. **The system learns which tier is appropriate through experience.**
 
-```python
-@dataclass
-class NervousNode:
-    """A node in the nervous system's 4D space."""
-
-    position: tuple[float, ...]  # Coordinates in sensory space
-    weight: float = 0.1          # Confidence from verification (0.0 → 1.0)
-
-    @property
-    def handling_tier(self) -> int:
-        """Which tier handles this node's firing?"""
-        if self.weight >= 0.8:
-            return 0  # Hardware reflex - instant, bypass brain
-        elif self.weight >= 0.6:
-            return 1  # Math cell - fast, minimal checking
-        elif self.weight >= 0.3:
-            return 2  # Fast nerve - coordination, some deliberation
-        else:
-            return 3  # Escalate - needs organ/cognitive help
-
-    @property
-    def lifeforce_cost(self) -> float:
-        """Cost scales inversely with confidence."""
-        return (1.0 - self.weight) * 10.0
-```
-
-**The key insight:** A mature node (weight ~1.0) naturally becomes a Tier 0 reflex. A new node (weight ~0.1) naturally escalates to higher tiers. The system learns which tier is appropriate through experience.
-
 ### The Causal Verification Loop
 
-How do we know a sensor reading was real, not hallucinated? **Outcome verification over time.**
+How do we know a sensor reading was real? **Outcome verification over time.**
 
 ```
-Unverified pattern (weight 0.1) → escalates to Nyx → decision → outcome
-                              ↓
-               Did reality match prediction?
-                    ↓                    ↓
-                   YES                   NO
-                    ↓                    ↓
-              weight += Δ          weight -= Δ
-                    ↓
-        After many YES: weight → 0.8+
-                    ↓
-          COMPILE TO REFLEX ✓
+Unverified (weight 0.1) → escalates → decision → outcome → reality match?
+                                                              ↓
+                               YES: weight += Δ → eventually REFLEX
+                               NO:  weight -= Δ → eventually PRUNED
 ```
 
-**Hallucinations can't survive this gauntlet** — they don't produce consistent outcomes, so their patterns never accumulate enough weight to become reflexes. Reality is the ultimate validator.
+**Hallucinations can't survive this gauntlet** — they don't produce consistent outcomes, so their patterns never accumulate enough weight. This creates natural **causal pruning**: only patterns that reliably predict outcomes earn the privilege of becoming reflexes.
 
-This creates natural **causal pruning**: only patterns that reliably predict outcomes earn the privilege of becoming reflexes. The nervous system doesn't need to prove causality philosophically — it proves it operationally through repeated verification.
-
 ---
@@ -302,41 +211,7 @@ Function Gemma acts as the translation layer between lower tiers and cognition.
 
 ### Event Schema
 
-```python
-from enum import Enum
-from pydantic import BaseModel
-
-class EventType(str, Enum):
-    """Constrained event types - enumerated, not free-form."""
-    ENVIRONMENTAL_CHANGE = "environmental_change"
-    COLLISION_DETECTED = "collision_detected"
-    BATTERY_CRITICAL = "battery_critical"
-    OBJECT_DISCOVERED = "object_discovered"
-    POSITION_UPDATE = "position_update"
-    ANOMALY_DETECTED = "anomaly_detected"
-    GOAL_REACHED = "goal_reached"
-    STUCK_DETECTED = "stuck_detected"
-    LIGHT_LOST = "light_lost"
-    LIGHT_FOUND = "light_found"
-
-class Severity(str, Enum):
-    LOW = "low"
-    MEDIUM = "medium"
-    HIGH = "high"
-    CRITICAL = "critical"
-
-class SensoryEvent(BaseModel):
-    """The structured event that Young Nyx receives."""
-
-    event_type: EventType
-    source: str
-    timestamp: float
-    severity: Severity
-    data: dict
-    suggested_action: str | None = None
-    processing_cost: float
-    confidence: float  # From node weight
-```
+Events are typed (`EventType` enum: environmental_change, collision_detected, battery_critical, etc.) with severity levels and confidence from node weight. **Full schema:** → [`Message-Protocol-Design.md`](Message-Protocol-Design.md)
 
 ### What Young Nyx Actually Sees
@@ -501,19 +376,6 @@ Photoresistor reads 0.12 (was 0.73)
 
 ---
 
-## Connection to Existing Architecture
-
-| Document | Gateway Relationship |
-|----------|---------------------|
-| [`Nervous-System.md`](Nervous-System.md) | Node weights determine tier routing |
-| [`Attention-Flow.md`](Attention-Flow.md) | Gateway implements attention priorities |
-| [`Message-Protocol-Design.md`](Message-Protocol-Design.md) | Escalation Service IS the gateway |
-| [`Endgame-Vision.md`](../Endgame-Vision.md) | Layer 2.5 Function Gemma boundary |
-| [`Cellular-Architecture.md`](Cellular-Architecture.md) | Tiered rewards align with gateway tiers |
-| [`organisms/crawler_gen_0.md`](organisms/crawler_gen_0.md) | First test case for tiered routing |
-
----
-
 ## Design Principles
 
 1. **Routing, not translation** — Gateway decides WHERE, not WHAT
@@ -526,11 +388,7 @@ Photoresistor reads 0.12 (was 0.73)
 
 ---
 
-**File:** Gateway-Architecture.md
-**Version:** 1.0
-**Created:** 2026-01-03
-**Status:** Core architecture document
-**Session:** Partnership dialogue (dafit + Chrysalis)
+**Version:** 1.1 | **Created:** 2026-01-03 | **Updated:** 2026-02-14
 
 *"Cheap for the common. Expensive for the rare. The Gateway enforces this economy."*
@@ -1,5 +1,7 @@
|
|||||||
# Message Protocol Design: Router-Centric Architecture

> **ONE JOB:** THE WIRE — NATS topics, JSON schemas, bootstrap sequence.

## Overview

This document outlines the design for the Nimmerverse message protocol. The core principle: **the router is dumb infrastructure, not smart cognition.** All intelligence lives at the edges - in clients that connect to the router.

@@ -10,40 +12,11 @@ This follows the Unix philosophy: each component does one thing well. The router

---

## Core Principle: Dumb Core, Smart Edges

The router (NATS) is **dumb infrastructure** — it routes based on topic patterns and knows nothing about meaning. All intelligence lives at the edges: cells publish, the Escalation Service (Gateway) watches and routes, Nyx subscribes and thinks.

**Routing logic:** → [`Gateway-Architecture.md`](Gateway-Architecture.md) (tier routing, escalation patterns)
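The "dumb" pattern matching the router performs can be sketched in a few lines. This follows standard NATS subject semantics (`*` matches exactly one token, `>` matches one or more trailing tokens); the function name is illustrative, not part of the protocol:

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """Dumb-router matching: '*' matches one token, '>' matches the rest."""
    pat, sub = pattern.split("."), subject.split(".")
    for i, tok in enumerate(pat):
        if tok == ">":
            return len(sub) > i  # '>' needs at least one remaining token
        if i >= len(sub):
            return False
        if tok != "*" and tok != sub[i]:
            return False
    return len(pat) == len(sub)

# The router forwards purely on pattern matches - it never inspects payloads.
assert subject_matches("nimmerverse.low.heartbeat.>", "nimmerverse.low.heartbeat.cell.battery")
assert not subject_matches("nimmerverse.high.event.>", "nimmerverse.low.heartbeat.cell.battery")
```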
---

@@ -60,22 +33,9 @@ This follows the Unix philosophy: each component does one thing well. The router

## Two Channels of Attention

Messages split into `nimmerverse.low.*` (background heartbeats) and `nimmerverse.high.*` (cognitive events). The Escalation Service promotes from low → high based on rules.

**Attention philosophy:** → [`Attention-Flow.md`](Attention-Flow.md) (budget allocation, preemption rules)
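As an illustration of the promotion step, here is a minimal sketch of an escalation check. The rule and heartbeat field names (`metric`, `below`, `battery`) are assumptions for the example, not the canonical schemas:

```python
# Hypothetical rule/heartbeat shapes - field names here are illustrative only;
# the real schemas live in this document's message definitions.
def should_escalate(heartbeat: dict, rules: list) -> bool:
    """Escalation Service logic: promote low -> high when any rule matches."""
    for rule in rules:
        value = heartbeat.get(rule["metric"])
        if value is not None and value < rule["below"]:
            return True
    return False

rules = [{"metric": "battery", "below": 0.2}]
assert should_escalate({"battery": 0.15}, rules)      # republished to nimmerverse.high.*
assert not should_escalate({"battery": 0.80}, rules)  # stays on nimmerverse.low.*
```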
---

@@ -242,60 +202,13 @@ Subscribed by: Escalation Service

---

## Clients

**Publishers:** Cells, Nerves, Organs (publish heartbeats and state changes)

**Router:** NATS (dumb pipe, topic-based routing)

**Gateway/Escalation Service:** Watches low-attention, escalates to high-attention, routes to tiers

**Client architecture:** → [`Gateway-Architecture.md`](Gateway-Architecture.md) (routing tiers, Function Gemma boundary)
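A publisher-side sketch of what a `HeartbeatSignal` payload might look like on the wire. The field names are illustrative assumptions; the canonical JSON schemas live in this document's message definitions:

```python
import json
import time

def make_heartbeat(source: str, state: str, lifeforce_cost: float) -> str:
    """Sketch of a HeartbeatSignal payload (field names are illustrative)."""
    return json.dumps({
        "type": "HeartbeatSignal",
        "source": source,            # e.g. a cell or nerve id
        "state": state,
        "lifeforce_cost": lifeforce_cost,
        "ts": time.time(),
    })

# A cell would publish this to: nimmerverse.low.heartbeat.cell.battery
payload = make_heartbeat("cell.battery", "nominal", 0.01)
assert json.loads(payload)["type"] == "HeartbeatSignal"
```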
---

@@ -367,8 +280,6 @@ The system can run at any step. Earlier steps are "reflexive" only. Nyx adds del

---

**Version:** 1.1 | **Created:** 2025-12-13 | **Updated:** 2026-02-14

*"Dumb core, smart edges. The router routes. Clients think."*
@@ -1,85 +1,18 @@

# Nervous System Architecture

> **ONE JOB:** THE EVOLUTION — node growth, FunctionGemma Phase 1→2, proposal protocol.

The nervous system handles **node evolution and weight management**. The [`Gateway`](Gateway-Architecture.md) handles **routing based on weight**.

---

## Overview

Nodes exist in 4D state space (sensory dimensions + confidence + time). Node **weight** (0.0→1.0) determines which tier handles input. Nodes evolve through verification: Birth → Activation → Verification → Reward/Penalty → Maturation → (or Pruning).
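The Reward/Penalty step can be sketched as a simple weight update. The source fixes the birth weight (0.1) and the 0.0→1.0 range; the step size `V` and the clamping behavior here are illustrative assumptions:

```python
# Sketch of the verification update - the reward step V is an assumption;
# the source specifies only the birth weight (0.1) and the 0.0..1.0 range.
def update_weight(weight: float, verified: bool, v: float = 0.1) -> float:
    """Reward/Penalty step of the node lifecycle: correct -> +V, incorrect -> -V."""
    return min(1.0, max(0.0, weight + (v if verified else -v)))

w = 0.1                      # BIRTH: new, untested
w = update_weight(w, True)   # verified correct
w = update_weight(w, True)   # maturation: repeated confirmations approach 1.0
assert abs(w - 0.3) < 1e-9
assert update_weight(0.05, False) == 0.0   # penalties clamp at 0 (pruning candidate)
```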
**FunctionGemma (270M, CPU-only)** is the State Interaction Layer — every cell command, nerve coordination, and state query flows through this neural interface. See **State Interaction Layer** section for Phase 1→2 evolution.

**Routing & Verification:** → [`Gateway-Architecture.md`](Gateway-Architecture.md) (tier routing, causal verification loop)
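On the consuming side of the State Interaction Layer, here is a sketch of validating FunctionGemma's structured JSON before it reaches a state machine. The command fields (`target`, `command`, `params`) are assumptions for illustration; the exact schemas are defined by the state machines themselves:

```python
import json

# Illustrative shape of a cell command - these field names are assumptions,
# not the canonical state-machine schemas FunctionGemma is tuned to emit.
CELL_COMMAND_FIELDS = {"target", "command", "params"}

def parse_state_command(model_output: str) -> dict:
    """Validate structured JSON from the model before it touches a state machine."""
    cmd = json.loads(model_output)
    missing = CELL_COMMAND_FIELDS - cmd.keys()
    if missing:
        raise ValueError(f"malformed command, missing: {sorted(missing)}")
    return cmd

out = '{"target": "cell.motor.left", "command": "set_speed", "params": {"value": 0.5}}'
assert parse_state_command(out)["command"] == "set_speed"
```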
---

@@ -176,37 +109,9 @@ The lifeforce flows through the nervous system, literally lighting up nodes as t

## Connection to Training

The nervous system **generates training data** for Young Nyx. Every verification = training signal. Credit assignment is automatic because state transitions are explicit and logged — the nervous system IS the credit assignment mechanism. Dense rewards at every verifiable checkpoint (**rubric principle**), not just final outcomes.

**Detail:** → [`Cellular-Architecture.md`](Cellular-Architecture.md) (Reward Signal Architecture section)
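The dense-reward arithmetic is simple: each verified node on a state path earns a small signal, plus a bonus when the behavior succeeds. For example, three verified nodes at +0.1 each plus a +1.0 success signal give a path reward of 1.3, dense and traceable. A sketch, with signal sizes taken from that worked example:

```python
def path_reward(verified_nodes: int, behavior_succeeded: bool,
                per_node: float = 0.1, success_bonus: float = 1.0) -> float:
    """Dense, traceable reward: every verified node on the path earns a signal."""
    return verified_nodes * per_node + (success_bonus if behavior_succeeded else 0.0)

# Nodes A, B, C fire and verify along a successful behavior:
assert abs(path_reward(3, True) - 1.3) < 1e-9   # 0.1 + 0.1 + 0.1 + 1.0
```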
---

@@ -336,26 +241,6 @@ Base model → domain data → fine-tuned → specialist

---

**Version:** 1.5 | **Created:** 2025-12-04 | **Updated:** 2026-02-14

- Phase 1 (single) → Phase 2 (swarm) evolution path
- Connection to node evolution principle