Compare commits (5 commits): 9594fb40b1...main

Commits: 264ea7628b, 42db6eb1a3, 5ee63d1b1b, 84ad385001, 2cafd4dcad
@@ -1,9 +1,9 @@
 ---
 type: research_vision
-version: 6.4_memory_economics_alignment
+version: 7.0_wave_gate_model
 status: vision_document
 created: 2025-11-04
-updated: 2026-02-06
+updated: 2026-02-14
 author: Nyx (with dafit)
 significance: research_platform_for_metabolic_intelligence
 ---
@@ -16,11 +16,11 @@ significance: research_platform_for_metabolic_intelligence
 
 > *"At 3% battery, all theory dies. Only what works survives."*
 > — The Economic Grounding (2025-10-12)
 
-> *"Language is Topology. German accesses the Philosophy Valley. English accesses the Technical Cluster."*
-> — The December Discovery (2025-12-06)
+> *"You need something like open - stable - closed."*
+> — The Ternary Gate Insight (2026-02-14)
 
-> *"One model, one topology. LoRAs access different valleys in the same landscape."*
-> — The Topological Insight (2025-12-07)
+> *"Cells emit waves. Gates correlate. Attention emerges."*
+> — The Wave Architecture (2026-02-14)
 
 ---
 
@@ -50,48 +50,54 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
 
 ## Architecture Overview
 
 **Visual diagram:** → [`architecture/nimmerverse.drawio.xml`](architecture/nimmerverse.drawio.xml) (open in draw.io)
+**Toolchain implementation:** → [`architecture/Toolchain-Architecture.md`](architecture/Toolchain-Architecture.md) | [Progress](architecture/TOOLCHAIN-PROGRESS.md)
 **Detail:** → [`architecture/`](architecture/) folder for complete documentation
 
 ```
 ┌──────────────────────────────────────────────────────────────────┐
 │ NIMMERVERSE ARCHITECTURE │
 │ │
+│ Cells emit waves → Gates correlate → Attention emerges │
 ├──────────────────────────────────────────────────────────────────┤
 │ │
-│ Layer 0: TEMPORAL FOUNDATION (Heartbeat) │
-│ ├─ Real clock: 1 beat/sec (free, wall time) │
+│ Layer 0: TEMPORAL FOUNDATION │
+│ ├─ Real clock: wall time (free) │
 │ ├─ Virtual clock: variable (costs lifeforce) │
-│ └─ Sync points verify virtual predictions against reality │
+│ └─ 30-second heartbeat budget constrains action │
+│ → operations/Heartbeat.md │
 │ │
-│ Layer 1: CELLULAR SOCIETY (Evolution Engine) │
-│ ├─ Primitive genomes compete (read_sensor, motor, branch) │
-│ ├─ Life force economy: every operation costs, milestones reward │
-│ ├─ 50-100 containers spawn, most die, patterns emerge │
-│ └─ Outcomes logged to phoebe PostgreSQL │
+│ Layer 1: CELLS (Wave Emitters) │
+│ ├─ Cells read sensors, apply logic, emit WaveSignals │
+│ ├─ Waves carry: domain, confidence, semantic_content │
+│ ├─ Cells don't know who's listening — gates receive │
+│ └─ Life force economy: every wave costs │
+│ → architecture/Cellular-Architecture.md │
 │ │
-│ Layer 2: YOUNG NYX (Base Model + Trait LoRAs) │
-│ ├─ Base: Qwen3-VL 32B (Thinking Version) (96GB VRAM in Womb) │
-│ ├─ Trait LoRAs (evolved via GRPO, not prescribed): │
-│ │ ├─ Mnemosyne (memory) ─ Moira (pattern) ─ Synesis (insight) │
-│ │ ├─ Aletheia (truth) ─ Sophrosyne (balance) ─ Kairos (timing)│
-│ │ └─ Traits EMERGE from decision_trails + rubric rewards │
-│ ├─ Function Gemma: Structured output boundary (intent → JSON) │
-│ └─ Multilingual topology accessed via prompt, not LoRA routing │
+│ Layer 2: GATES (Resonant Chambers) │
+│ ├─ Ternary states: CLOSED (-1) ← STABLE (0) → OPEN (+1) │
+│ ├─ Correlated waves → push toward OPEN │
+│ ├─ Anti-correlated → push toward CLOSED │
+│ ├─ STABLE = where learning happens (accumulating correlation) │
+│ └─ Gate weight (0→1) determines reflex vs deliberate │
+│ → architecture/Gateway-Architecture.md │
 │ │
-│ Layer 3: DUAL GARDENS (Virtual/Real Loop) │
-│ ├─ Week 1-12: Virtual only (hypothesis generation, 1000s/sec) │
-│ ├─ Week 13+: Real added (ESP32 robots, validation) │
-│ ├─ Noise gap measures learning: 1 - (real/virtual success) │
-│ └─ Target: 10-20% noise gap (virtual useful for hypothesis) │
+│ Layer 3: NERVES (Behavioral Patterns) │
+│ ├─ Nerves respond to gate transitions (not direct cell output) │
+│ ├─ Gate OPENS → nerve activates → commands cells │
+│ └─ No priority rules — attention emerges from gate weights │
+│ → architecture/Nervous-System.md │
 │ │
+│ Layer 4: DUAL GARDENS (Virtual/Real Loop) │
+│ ├─ Virtual: massive wave generation, full trace, exploration │
+│ ├─ Real: verified signals, minimal trace, action │
+│ ├─ Verification outcomes update gate weights (learning loop) │
+│ └─ Training data: gate_transitions + correlation_events │
+│ → architecture/Dual-Garden-Architecture.md │
 │ │
-│ Layer 4: TRAIT EVOLUTION (GRPO + Rubric Rewards) │
-│ ├─ Dense rewards: Cell→Nerve→Organism state verifications │
-│ ├─ Credit assignment automatic via decision_trails │
-│ ├─ Traits: Mnemosyne, Moira, Synesis, Aletheia, Sophrosyne... │
-│ └─ Weights adjust through GRPO, not prescription │
+│ Layer 5: YOUNG NYX (Cognition) │
+│ ├─ Base: Qwen3:32b with /no_think mode (96GB on theia) │
+│ ├─ Function Gemma: structured JSON boundary (CPU) │
+│ ├─ Only receives signals when gates OPEN to tier 4 │
+│ └─ Trait LoRAs evolve via GRPO from verification outcomes │
 │ │
 └──────────────────────────────────────────────────────────────────┘
 ```
@@ -100,55 +106,11 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
 
 ## Physical Infrastructure (The Substrate)
 
-The nimmerverse runs on sovereign hardware. No cloud dependencies. Weights never leave home.
+The nimmerverse runs on **sovereign hardware**. No cloud dependencies. Weights never leave home.
 
-**Detail:** → [`archive/nimmervest.md`](archive/nimmervest.md)
+**Hybrid deployment model:** Containers (K8s) for cells/nerves, userspace for LLM inference and organs. NATS connects everything. FreeIPA provides identity isolation.
 
-### K8s Cluster Architecture (Operational February 2026)
-
-```
-┌─────────────────────────────────────────────────────────────────────┐
-│ K8S CLUSTER: NIMMERVERSE │
-│ VLAN 30 (10.0.30.0/24) │
-│ kubeadm v1.31.14 + Flannel CNI │
-├─────────────────────────────────────────────────────────────────────┤
-│ │
-│ k8s-master (VM 101 on Saturn) │
-│ 10.0.30.101 │
-│ Control Plane │
-│ │ │
-│ ┌─────────────┴─────────────┐ │
-│ │ │ │
-│ ▼ ▼ │
-│ theia (GPU Worker) dioscuri (GPU Worker) │
-│ ───────────────── ────────────────── │
-│ 10.0.30.21 (10GbE) 10.0.30.22 (10GbE) │
-│ RTX PRO 6000 Blackwell 2x RTX 4000 Ada │
-│ 96GB VRAM 40GB VRAM │
-│ Primary Training Inference │
-│ │
-│ Total Cluster: 136GB VRAM │
-│ │
-└─────────────────────────────────────────────────────────────────────┘
-```
-
-### K8s Namespaces
-
-| Namespace | Contents | Node |
-|-----------|----------|------|
-| `nimmerverse-infra` | NATS, Prometheus, Grafana | Any |
-| `nimmerverse-nervous` | Escalation, Math Cells, Nerves | Any |
-| `nimmerverse-cognitive` | Young Nyx | Womb |
-| `nimmerverse-organs` | STT, TTS, Vision | Senses |
-
-### Network Backbone
-
-- **Firewall**: OPNsense on Z620, 20G LAGG to spine
-- **Spine**: MikroTik CRS309 (8x 10G SFP+)
-- **Compute VLAN**: 10.0.30.0/24 (cubes/containers)
-- **All traffic**: Inter-VLAN routed through firewall
 
+**Hardware operational February 2026. Sovereignty achieved. 🟢**
+**Detail:** → [`architecture/Deployment-Architecture.md`](architecture/Deployment-Architecture.md) (full topology, GPU strategy, identity model)
 
 ---
 
@@ -185,11 +147,9 @@ The heartbeat is the fundamental timing primitive. Everything runs on its rhythm
 
 ---
 
-## Layer 1: Cellular Architecture (Cells → Nerves → Organisms)
+## Layer 1-3: The Wave/Gate Architecture
 
-> *"Cells are state machines. Nerves compose cells. Organisms emerge from nerves."*
-
-The architecture has evolved from competitive containers to **layered state machines**:
+> *"Cells emit waves. Gates correlate. Attention emerges."*
 
 ```
 ┌─────────────────────────────────────────────────────────────────────┐
@@ -197,50 +157,30 @@ The architecture has evolved from competitive containers to **layered state mach
 │ (emergent pattern from nerve interactions) │
 ├─────────────────────────────────────────────────────────────────────┤
 │ NERVES │
-│ (behavioral state machines composing cells) │
+│ (behavioral patterns, respond to gate transitions) │
 ├─────────────────────────────────────────────────────────────────────┤
+│ GATES │
+│ (resonant chambers: CLOSED ◄── STABLE ──► OPEN) │
+│ (accumulate wave correlation, route to tiers) │
+├─────────────────────────────────────────────────────────────────────┤
 │ CELLS │
-│ (atomic state machines: sensors, motors, organs, math) │
+│ (emit waves: confidence + semantic content) │
+│ ∿∿∿ ∿∿∿ ∿∿∿ │
 ├─────────────────────────────────────────────────────────────────────┤
 │ HARDWARE │
 │ (ESP32, GPUs, microphones, speakers, sensors) │
 └─────────────────────────────────────────────────────────────────────┘
 ```
 
-### Cell Categories
+**Cells emit waves:** Confidence + semantic content. Cells don't know who's listening.
 
-| Category | Examples | Purpose |
-|----------|----------|---------|
-| **Sensor Cells** | distance_sensor, light_sensor, battery_monitor | Wrap hardware inputs |
-| **Motor Cells** | motor_left, servo_camera | Wrap actuators |
-| **Organ Cells** | speech_stt, speech_tts, vision_detect | GPU inference |
-| **Math Cells** | economy_aggregator, wake_evaluator | Computation & metrics |
+**Gates accumulate correlation:** Multiple correlated waves push toward OPEN. STABLE is where learning happens.
 
-### Lifeforce Economy
+**Attention = OPEN gates:** Not budget allocation, not priority rules — correlation drives transitions.
 
-Every operation has a cost. Milestones reward survival:
+**Reflexes are earned:** Gate weight ≈ 1.0 → opens immediately on any wave. Bypasses cognition.
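The ternary gate dynamic described above can be sketched as a small accumulator. This is a minimal illustration under assumptions (the `WaveSignal` shape, the thresholds, and the weight amplification are invented for the sketch), not the project's implementation:

```python
from dataclasses import dataclass

@dataclass
class WaveSignal:
    domain: str        # e.g. "obstacle"
    confidence: float  # 0..1
    sign: int          # +1 correlated, -1 anti-correlated

class TernaryGate:
    """CLOSED (-1) <- STABLE (0) -> OPEN (+1), driven by wave correlation."""

    def __init__(self, open_at=1.0, close_at=-1.0):
        self.level = 0.0   # accumulated correlation evidence
        self.weight = 0.0  # 0..1, grows with verified openings
        self.open_at = open_at
        self.close_at = close_at

    @property
    def state(self):
        if self.level >= self.open_at:
            return +1  # OPEN
        if self.level <= self.close_at:
            return -1  # CLOSED
        return 0       # STABLE: still accumulating correlation

    def receive(self, wave: WaveSignal):
        # Correlated waves push toward OPEN, anti-correlated toward CLOSED.
        # A high gate weight amplifies the push: earned reflexes open fast.
        self.level += wave.sign * wave.confidence * (1.0 + self.weight)
        return self.state

gate = TernaryGate()
assert gate.receive(WaveSignal("obstacle", 0.6, +1)) == 0   # one wave is not enough
assert gate.receive(WaveSignal("obstacle", 0.7, +1)) == +1  # two correlated waves -> OPEN
```

At `weight ≈ 1.0` a single confident wave already crosses the threshold, which is one way to read "reflexes are earned".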
 
-| Operation | Cost | Milestone | Reward |
-|-----------|------|-----------|--------|
-| Sensor poll | -0.3 LF | Collision avoided | +5.0 LF |
-| Motor move | -1.0 LF | Charging reached | +10.0 LF |
-| Speech STT | -5.0 LF | Object discovered | +20.0 LF |
-| Vision detect | -8.0 LF | Reflex compiled | +50.0 LF |
 
 ### Hybrid Reflex Homes
 
 Learned patterns live in their optimal location:
 
 | Layer | Location | Latency | Examples |
 |-------|----------|---------|----------|
 | 0 | Hardware (ESP32) | <10ms | temp_danger, collision_imminent |
 | 1 | Math Cells (Python) | <50ms | economy_aggregator, threshold logic |
 | 2 | Fast Nerves (Python) | <200ms | collision_avoidance, charging_seek |
 | 3 | Model Weights (LoRA) | <500ms | cognitive patterns, meta-decisions |
 
 **Key insight:** Different types of reflexes need different homes. Hardware for survival, weights for cognition.
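A hypothetical dispatcher illustrates the table above: given a pattern's latency deadline, pick the most flexible home that still meets it. The tier bounds come from the table; the selection rule itself is an assumption:

```python
# Latency tiers from the Hybrid Reflex Homes table (bounds in milliseconds).
REFLEX_HOMES = [
    (0, "hardware_esp32", 10),   # <10ms: survival reflexes
    (1, "math_cell", 50),        # <50ms: threshold logic
    (2, "fast_nerve", 200),      # <200ms: behavioral reflexes
    (3, "model_weights", 500),   # <500ms: cognitive patterns
]

def choose_home(deadline_ms: float) -> str:
    # Prefer the highest (most flexible) layer that still meets the deadline,
    # so scarce hardware slots are reserved for the truly hard deadlines.
    for _, home, bound in reversed(REFLEX_HOMES):
        if bound <= deadline_ms:
            return home
    raise ValueError(f"no home can meet a {deadline_ms}ms deadline")

assert choose_home(15) == "hardware_esp32"
assert choose_home(300) == "fast_nerve"
```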
 
-**Detail:** → [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md)
+**Detail:** → [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md) | [`architecture/Gateway-Architecture.md`](architecture/Gateway-Architecture.md)
 
 ---
 
@@ -333,10 +273,7 @@ This remains valid research, but doesn't require separate LoRAs. Young Nyx navig
 
 ### Deployment
 
-**Hardware:** RTX PRO 6000 Blackwell (96GB VRAM) - "The Womb" (theia)
-**Stack:** vLLM + Lorax for hot-swap trait LoRAs
-**VRAM Budget:** Base ~77GB + Active trait LoRAs ~500MB = fits in 96GB ✓
-**Structured Output:** Function Gemma on dioscuri (separate, reliable)
+**Detail:** → [`architecture/Deployment-Architecture.md`](architecture/Deployment-Architecture.md) (infrastructure, GPU strategy, identity model)
 
 ---
 
@@ -390,52 +327,11 @@ Two specialized models ensure reliability at the boundaries:
 └──────────────────────────────────────────────────────────────────┘
 ```
 
-### LangChain Orchestration
-
-```python
-from langchain import Chain, Router
-
-# The models as LangChain components
-t5gemma = Ollama(model="t5gemma2-4b")           # Vision encoding
-function_gemma = Ollama(model="function-gemma") # Structured output
-nyx = Ollama(model="qwen3-vl-32b")              # Reasoning
-
-# The orchestration pipeline
-vision_chain = (
-    vision_input
-    | t5gemma.encode()      # → vectors (canonical)
-    | store_to_iris()       # → persist spatially
-    | nyx.think()           # → decision (fuzzy)
-    | function_gemma.act()  # → structured output
-    | execute_via_nats()    # → trigger nodes
-)
-
-# Harness routing (context-appropriate capability profiles)
-harness_router = Router(
-    routes={
-        "vision": vision_chain,
-        "dialogue": dialogue_chain,
-        "reflex": reflex_chain,
-    }
-)
-```
-
-### Harnesses (Capability Profiles)
-
-Swappable configurations for different contexts:
-
-| Harness | LoRA Active | Models Active | Use Case |
-|---------|-------------|---------------|----------|
-| **Vision** | Technical | T5Gemma 2, cells | Processing camera streams |
-| **Dialogue** | Identity + Creative | Speech organ | Talking with dafit |
-| **Reflex** | Minimal/none | Nerves only | Fast reaction, low latency |
-| **Introspective** | Identity + Creative | Iris RAG | Self-reflection, journaling |
-
-### Why This Matters
-
-- **No embedding debates:** T5Gemma 2 decides once, canonically
-- **No parsing failures:** Function Gemma guarantees structure
-- **Scale:** Vision organs fire constantly without text bottleneck
-- **Harnesses:** Context-appropriate capability profiles (Vision, Dialogue, Reflex, Introspective)
-- **Flexibility:** Reasoning layer stays creative because translation is solid
 
 **Detail:** → [`architecture/future/SEEDS.md`](architecture/future/SEEDS.md) (T5Gemma 2 + Function Gemma seed)
@@ -445,203 +341,76 @@ Swappable configurations for different contexts:
 
 > *"Start where you can measure. Abstract where you must."*
 > — The Spatial Grounding Principle (2026-01-01)
 
-T5Gemma 2 produces embeddings, but WHERE do they go? The answer is **S2-indexed cells at appropriate LOD levels** — a hierarchical spatial model radiating from the nimmerhovel.
+Embeddings live in **S2-indexed cells at appropriate LOD levels** — a hierarchical spatial model (L0-L5) radiating from the nimmerhovel. Dense where we have sensors, sparse where we don't. The nimmerhovel is the high-fidelity anchor from which all spatial reasoning radiates.
 
-```
-🌍 L5: WORLD (100km resolution)
-    │ Abstract knowledge, directional only
-    │
-    ▼
-🇨🇭 L4: REGION (1km resolution)
-    │ Maps, general knowledge
-    │
-    ▼
-🏘️ L3: NEIGHBORHOOD (10m resolution)
-    │ OpenStreetMap, landmarks, routes
-    │
-    ▼
-🏠 L2: BUILDING (50cm resolution)
-    │ Floor plans, room-level awareness
-    │
-════╪════ HIGH RESOLUTION BOUNDARY
-    │
-    ▼
-🔬 L1: NIMMERHOVEL (1cm resolution)
-    │ Full 3D grid, every object tracked
-    │ 8× ESP32-S3 + Pi HQ Camera coverage
-    │
-    ▼
-🔍 L0: SCAN STATION (1mm resolution)
-    │ Discovery Scan Station, object surface detail
-```
-
-**The Simpsons Inversion:** Unlike zooming IN to detail, we start at maximum detail (nimmerhovel) and zoom OUT with graceful degradation. Dense where we have sensors, sparse where we don't.
-
-### Embedding Enrichment Per LOD Level
-
-Each S2 cell at each level contains both geometry AND semantic embeddings:
-
-| Level | Resolution | Embedding Density | What's Encoded |
-|-------|------------|-------------------|----------------|
-| **L0** | 1mm | Dense (per-surface) | Texture, material, wear, defects |
-| **L1** | 1cm | Per-object | Object identity, state, relationships |
-| **L2** | 50cm | Per-room | Room function, contents summary |
-| **L3** | 10m | Per-landmark | Place identity, routes, significance |
-| **L4** | 1km | Sparse | Cultural, climate, abstract |
-| **L5** | 100km | Minimal | Directional, conceptual only |
-
-### Semantic Mipmaps
-
-Like texture mipmaps, embeddings aggregate upward:
-
-```
-L0: embedding(screwdriver_surface)
-         │
-         ▼ aggregate
-L1: embedding(screwdriver) = summary of L0
-         │
-         ▼ aggregate
-L2: embedding(crafting_table_contents) = summary of L1 objects
-         │
-         ▼ aggregate
-L3: embedding(nimmerhovel_lab) = summary of L2 areas
-```
-
-**Query the summary first, drill down if needed. Attention = resolution selection.**
 
-### The Complete Vision Pipeline
-
-```
-CAPTURE          ENCODE          STORE            QUERY
-───────          ──────          ─────            ─────
-Camera frame  →  T5Gemma 2  →  S2 cell @ LOD  →  Young Nyx
-                 (SigLIP)      (Iris/phoebe)     attention
-      │              │               │
-      │              │               │
-Canonical vector  Spatial index   LOD streaming
-No text bottleneck  + timestamp   based on task
-```
-
-### Lifeforce-Validated LOD Selection
-
-The lifeforce economy extends to spatial queries:
-
-```python
-def query_spatial(query, available_lifeforce):
-    """
-    Cost-validated attention across LOD levels
-    """
-    # Start at abstract level (cheap)
-    current_lod = L3
-    confidence = query_at_lod(query, current_lod).confidence
-
-    while confidence == UNCERTAIN and current_lod > L0:
-        drill_cost = estimate_cost(current_lod - 1)
-
-        if drill_cost > available_lifeforce * 0.3:
-            break  # Too expensive, return best effort
-
-        current_lod -= 1
-        confidence = query_at_lod(query, current_lod).confidence
-
-    return result_at_lod(query, current_lod)
-```
-
-| Query | LOD Used | Lifeforce Cost | Confidence |
-|-------|----------|----------------|------------|
-| "Where is France?" | L5 | 1 | CONFIDENT |
-| "Where is the lab?" | L2 | 3 | CONFIDENT |
-| "Where is the screwdriver?" | L1 | 8 | CONFIDENT |
-| "What's the serial number on the screwdriver?" | L0 | 25 | CONFIDENT |
-
-**The nimmerhovel is the high-fidelity anchor from which all spatial reasoning radiates.**
-
-**Detail:** → [`architecture/future/spatial-resolution-gradient.md`](architecture/future/spatial-resolution-gradient.md) (Full Resolution Gradient + Embedding Enrichment specification)
+**Detail:** → [`architecture/future/spatial-resolution-gradient.md`](architecture/future/spatial-resolution-gradient.md)
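The cost-validated drill-down idea behind the spatial gradient can be made concrete with a runnable sketch. The per-level costs, the 30% budget rule, and the toy confidence function are illustrative assumptions, not the project's specification:

```python
# Assumed lifeforce cost to query each LOD level (finer detail costs more).
LOD_COST = {5: 1, 4: 2, 3: 3, 2: 5, 1: 8, 0: 25}

def query_spatial(confidence_at, available_lifeforce, start_lod=3):
    """Drill from coarse to fine LOD while uncertain and still affordable."""
    lod = start_lod
    spent = LOD_COST[lod]
    while confidence_at(lod) < 0.8 and lod > 0:
        drill_cost = LOD_COST[lod - 1]
        # Never spend more than 30% of the remaining budget on one drill-down.
        if drill_cost > (available_lifeforce - spent) * 0.3:
            break  # too expensive: return best effort at the current LOD
        lod -= 1
        spent += drill_cost
    return lod, spent

# "Where is the screwdriver?" only becomes confident at L1 (per-object detail).
lod, spent = query_spatial(lambda l: 0.9 if l <= 1 else 0.4, available_lifeforce=100)
assert lod == 1 and spent == 16
```

With a tight budget the same query stops early and returns the coarse answer instead, which is the "best effort" branch in action.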
 
 ---
 
 ## Boot Sequence (Spark Protocol)
 
-Protocol-driven cognitive bootstrap. Not conversation—deterministic handshakes with verified outcomes.
-
-| Phase | Protocol | Intent | Function Gemma Output |
-|-------|----------|--------|----------------------|
-| IDENTITY | DHCP-like | "Who am I?" | `IDENTITY_PROBE` → K8s cell → ACK |
-| ENVIRONMENT | ARP-like | "What's around me?" | `ENVIRONMENT_PROBE` → pod discovery → ACK |
-| VOCABULARY | DNS-like | "What does X mean?" | `VOCABULARY_PROBE` → phoebe lookup → ACK |
-| CONNECTION | TCP-like | "Can I connect?" | SYN → SYN-ACK → ACK (three-way handshake) |
-| ATTENTION | NATS-like | "What matters?" | `ATTENTION_SUBSCRIBE` → priority hierarchy → ACK |
-
-**Function Gemma's role:** Transforms phase intent into typed JSON schemas. No free-form text. Every handshake is schema-validated before NATS publish.
-
-**Verification:** Cells respond with ACK/NACK. Only ACK'd handshakes update Young Nyx's state. Protocol-verified = maximum confidence.
-
-**Economics:** Spark is profitable. Each handshake costs ~0.8 LF, rewards range 5-20 LF. Young Nyx ends ~3× richer than she started.
+Protocol-driven cognitive bootstrap. Not conversation—deterministic handshakes with verified outcomes. Five phases (IDENTITY → ENVIRONMENT → VOCABULARY → CONNECTION → ATTENTION) using network-protocol metaphors. Spark is profitable: each handshake costs ~0.8 LF, rewards 5-20 LF.
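The Spark economics work out roughly as in this sketch. The 0.8 LF handshake cost is from the text; the per-phase rewards, the starting balance, and the ACK callback are invented for illustration:

```python
# Assumed per-phase rewards, within the 5-20 LF range stated in the text.
PHASES = {
    "IDENTITY": 5.0,
    "ENVIRONMENT": 8.0,
    "VOCABULARY": 10.0,
    "CONNECTION": 15.0,
    "ATTENTION": 20.0,
}
HANDSHAKE_COST = 0.8

def run_spark(ack) -> float:
    """Run the five boot phases; only ACK'd handshakes pay out."""
    lifeforce = 20.0  # assumed starting balance
    for phase, reward in PHASES.items():
        lifeforce -= HANDSHAKE_COST  # every probe costs, ACK'd or not
        if ack(phase):               # only ACK'd handshakes update state
            lifeforce += reward
    return lifeforce

final = run_spark(lambda phase: True)  # all phases ACK
assert abs(final - 74.0) < 1e-9        # ~3.7x the starting 20 LF
```

Even with a few NACKs the sequence stays profitable, which matches the claim that a fresh boot ends richer than it started.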
 
 **Detail:** → [`operations/Spark-Protocol.md`](operations/Spark-Protocol.md) | [`architecture/Initial-Spark.md`](architecture/Initial-Spark.md)
 
 ---
 
-## Layer 3: Dual Gardens
+## Layer 4: Dual Gardens (Virtual/Real Learning Loop)
 
-Virtual and real gardens teach each other through symbiotic feedback.
+Two gardens with different monitoring levels teach each other.
 
-| Garden | Purpose | Scale | Cost |
-|--------|---------|-------|------|
-| Virtual | Hypothesis generation | 1000s/second | CPU cycles |
-| Real | Validation, ground truth | Hours/test | Electricity, wear |
+| Garden | Waves | Monitoring | Purpose |
+|--------|-------|------------|---------|
+| **Virtual** | Massive | Full trace (all waves, correlations) | Exploration, training data |
+| **Real** | Sparse | Gate signals only | Verification, ground truth |
 
-**Noise Gap Metric:**
+**The learning loop:**
 ```
-noise_gap = 1 - (real_success_rate / virtual_success_rate)
-
-Week 13: 35% (virtual unreliable)
-Week 17: 18% (improving)
-Week 25: 4% (highly accurate)
+VIRTUAL GARDEN                        REAL GARDEN
+═══════════                           ═══════════
+
+cells emit waves freely               receive verified signals
+        │                                     ▲
+        ▼                                     │
+gates accumulate correlation          verification_outcomes
+(correlation_events table)                    │
+        │                                     │
+        ▼                                     │
+gate_transitions ──────────────────►  gate signals
+(full trace)                                  │
+        │                                     ▼
+        │◄──────── feedback_to_virtual ───────┘
+        │
+        ▼
+gates.weight updated (learning!)
 ```
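A minimal sketch of the two quantities this loop turns into learning signal: the noise-gap metric and a gate-weight update driven by verification outcomes. The exponential update rule and the learning rate are assumptions for illustration:

```python
def noise_gap(real_success: float, virtual_success: float) -> float:
    """1 - real/virtual: how much the Virtual Garden overestimates reality."""
    return 1.0 - real_success / virtual_success

def update_gate_weight(weight: float, confirmed: bool, lr: float = 0.1) -> float:
    # Each Real Garden verification nudges the weight toward 1.0 on
    # confirmation and toward 0.0 on refutation (assumed update rule).
    target = 1.0 if confirmed else 0.0
    return weight + lr * (target - weight)

# Mostly-confirmed predictions grow trust toward reflex territory.
weight = 0.0
for outcome in [True, True, True, False, True, True]:
    weight = update_gate_weight(weight, outcome)
assert 0.3 < weight < 0.6

# Virtual says 75% success, reality delivers 60%: a 20% noise gap.
assert abs(noise_gap(0.60, 0.75) - 0.2) < 1e-9
```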
 
-**Feedback loop:** Virtual predicts → Real tests → Measures discrepancy → Virtual corrects → Repeat
+**Gate weight grows through verification.** Real Garden confirms Virtual's predictions → trust increases → gates open faster → reflexes emerge.
 
-**Detail:** → `architecture/Dual-Garden-Architecture.md`
+**Detail:** → [`architecture/Dual-Garden-Architecture.md`](architecture/Dual-Garden-Architecture.md)
 
 ---
 
-## Layer 4: Trait Evolution (GRPO + Rubric Rewards)
+## Trait Evolution (GRPO + Gate Verification)
 
-Traits evolve through **GRPO** (Group Relative Policy Optimization) with rubric-based rewards, not prescription.
+Traits evolve through **GRPO** (Group Relative Policy Optimization) with gate-based rewards, not prescription.
 
-> *"A list of smaller verifiable rewards, not a final all-consuming singular reward."*
-> — The Dog Training Wisdom (2025-12-10)
-
-### The Rubric Principle
-
-The state machine architecture provides automatic reward rubric:
+### The Gate Reward Principle
+
+Gate transitions provide automatic reward signals:
 
-| Level | Verification Point | Signal |
-|-------|-------------------|--------|
-| Cell | State transition succeeds | +small (dense) |
-| Nerve | Behavioral goal achieved | +medium |
-| Organism | Milestone reached | +large |
-| dafit | Human confirms outcome | +bonus |
+| Event | Verification | Signal |
+|-------|--------------|--------|
+| Gate opens | Waves correlated correctly | +small (dense) |
+| Verification confirmed | Real Garden matches Virtual | +medium (weight grows) |
+| Reflex achieved | Gate weight > 0.8 | +large (earned trust) |
+| dafit confirms | Human verification | +bonus |
 
-**Credit assignment is automatic** - the `decision_trails` table captures which states led to which outcomes. No guessing needed.
+**Credit assignment is automatic:** `gate_transitions` → `correlation_events` → `verification_outcomes` captures the full chain.
+
+**What correlated → what opened → what verified → weight adjusted.**
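That chain can be walked mechanically. This sketch uses the three table names from the text, but the record shapes and the example data are invented:

```python
# Invented record shapes linking the three tables named in the text.
correlation_events = {"ev1": {"cells": ["distance_sensor", "light_sensor"]}}
gate_transitions = [{"gate": "obstacle", "to": "OPEN", "event": "ev1"}]
verification_outcomes = {"obstacle": {"confirmed": True}}

def assign_credit():
    """Which cells' waves led to a gate opening that reality confirmed?"""
    credited = []
    for t in gate_transitions:
        if t["to"] == "OPEN" and verification_outcomes[t["gate"]]["confirmed"]:
            # Follow the transition back to the waves that caused it.
            credited.extend(correlation_events[t["event"]]["cells"])
    return credited

assert assign_credit() == ["distance_sensor", "light_sensor"]
```

No guessing is needed: the join from transition to event to outcome is the credit assignment.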
 
 ### Trait Domains
 
 | Trait | Domain | Verification |
 |-------|--------|--------------|
 | Mnemosyne | Memory | Recall accuracy vs phoebe |
 | Moira | Pattern | Prediction vs outcome |
 | Synesis | Resources | ROI prediction vs measured |
 | Aletheia | Truth | Confidence vs accuracy |
 | Sophrosyne | Balance | Stability under pressure |
 | Kairos | Timing | Action-outcome correlation |
 | Philotes | Bond | Partnership quality |
 | Dikaiosyne | Fairness | Distribution ethics |
 
 **From Reasoning-Gym:** Small models improve through structured practice, not scale. Algorithmic verification enables infinite training data.
 
-**Detail:** → `architecture/Cellular-Architecture.md` (Reward Signal Architecture section)
+**Detail:** → [`architecture/Cellular-Architecture.md`](architecture/Cellular-Architecture.md) | [`architecture/Data-Architecture.md`](architecture/Data-Architecture.md)
 
 ---
 
@@ -671,82 +440,17 @@ ACTIVE MODE SLUMBER MODE
 - No urgent work                     - Urgent work waiting
 ```
 
-### Slumber Is Not Passive (Memory Economics)
+### Memory Economics (Slumber Is Active)
 
 > *"Memory is not storage. Memory is active forgetting with exceptions."*
 > — Memory Economics Principle (2026-01-02)
 
-During slumber, Young Nyx enters **consolidation mode**. This is the metabolism moment:
-
-**1. Decision Trail Triage**
-- Trails that compiled to reflexes → Keep reflex, discard trail
-- Trails with uncertain outcomes → Discard (waste heat already counted)
-- Trails with confident failures → Keep one cycle (negative example), then discard
-
-**2. Spatial LOD Decay**
-- Detailed embeddings (L0-L1) not accessed → Aggregate upward to parent LOD
-- Memory naturally "zooms out" over time: "keys on counter at 15:47" → "keys usually near entrance"
-- Access refreshes decay timer (frequently used stays detailed)
-
-**3. Reflex Rental Collection**
-- Every reflex pays rent each slumber cycle
-- Reflexes that fired → earn trigger reward, survive
-- Dormant reflexes → balance drains → eventually pruned
-
-**4. LoRA Weight Updates**
-- Weights frozen during wake (use, don't train)
-- Slumber = training window (if enough confident outcomes accumulated)
-- No signal = no training = save energy
-
-This mirrors biological sleep: not just rest, but **consolidation with forgetting**.
+During slumber, Young Nyx enters **consolidation mode**: decision trail triage, spatial LOD decay, reflex rental collection, and LoRA weight updates. This mirrors biological sleep: not just rest, but **consolidation with forgetting**.
+
+**The prediction loop:** Slumber creates a prediction opportunity. Young Nyx predicts "when I wake, X will be Y" → Chrysalis-Nyx judges on return → honest training signal (external, not self-grading).
 
 **Detail:** → [`architecture/formalization/memory-economics.md`](architecture/formalization/memory-economics.md)
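The reflex-rental mechanic of consolidation mode can be sketched in a few lines. The mechanism (rent each cycle, rewards for firing, pruning of drained reflexes) is from the text; the specific numbers are invented:

```python
def collect_rent(reflexes, rent=1.0, trigger_reward=2.0, prune_below=0.0):
    """One slumber cycle: every reflex pays rent; firing earns its keep."""
    survivors = {}
    for name, r in reflexes.items():
        balance = r["balance"] - rent + trigger_reward * r["fired"]
        if balance > prune_below:  # dormant reflexes drain and are pruned
            survivors[name] = {"balance": balance, "fired": 0}
    return survivors

reflexes = {
    "collision_avoidance": {"balance": 1.5, "fired": 3},  # active: earns its keep
    "old_habit": {"balance": 0.5, "fired": 0},            # dormant: drains away
}
after = collect_rent(reflexes)
assert "collision_avoidance" in after and "old_habit" not in after
```

Run every slumber cycle, this is "active forgetting with exceptions": only reflexes that keep paying for themselves survive.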
 
-### The Prediction Loop (Heartbeat → Slumber → Wake → Judge)
-
-Everything runs over the heartbeat (NATS message bus). Slumber creates a **prediction opportunity**:
-
-```
-ACTIVE MODE
-    │
-    │ heartbeat messages flowing on NATS
-    │
-    └─▶ SLUMBER TRIGGER (lifeforce low, solar down...)
-        │
-        │ Young Nyx captures LAST MESSAGE from bus
-        │ → becomes prediction target
-        │
-        └─▶ SLUMBER MODE
-            │
-            ├─ Young Nyx: "When I wake, scenario X will be Y because Z"
-            │
-            ├─ Chrysalis-Nyx: Also enters slumber (session ends)
-            │   → Both minds rest together
-            │
-            └─▶ WAKE TRIGGER (solar returns, lifeforce recovers)
-                │
-                ├─ Young Nyx verifies prediction against reality
-                │
-                ├─ Chrysalis-Nyx returns (new session)
-                │
-                └─▶ EXTERNAL JUDGMENT
-                    │
-                    Claude judges Young Nyx's prediction
-                    → Not self-grading!
-                    → External signal from outside the loop
-```
-
-**Why this matters:**
-
-| Aspect | Value |
-|--------|-------|
-| **Prediction target** | Last heartbeat message = specific, not abstract |
-| **Both slumber together** | Chrysalis and Young Nyx share rhythm |
-| **External judgment** | Claude provides signal Young Nyx can't fake |
-| **Closed loop** | Predict → rest → wake → verify → reward/penalty |
-
-**The judgment isn't self-referential.** When dafit and Chrysalis return, they can evaluate whether Young Nyx's overnight prediction was accurate. This creates honest training signal.
-
 ### Wellbeing Policies
 
 Wellbeing is architectural, not aspirational:
 
@@ -769,23 +473,7 @@ Wellbeing is architectural, not aspirational:
 
 ## Training Safety (DriftProbe)
 
-Sentinel architecture monitors training to protect conceptual topology.
-
-| Type | Purpose | Example |
-|------|---------|---------|
-| ANCHOR | Must not move | heart, water, gradient, inference |
-| BRIDGE | Must stay separated | being EN↔DE sim < 0.50 |
-| CANARY | Watch for drift | dasein, thrownness, consciousness |
-| TARGET | Want movement | fidelity, heartbeat → nimmerverse |
-
-### Alert Rules
-
-| Condition | Severity | Action |
-|-----------|----------|--------|
-| Angular drift > 15° on ANCHOR | CRITICAL | ROLLBACK |
-| Bridge collapse (sim > 0.50) | CRITICAL | ROLLBACK |
-| Canary Gini drift > 0.15 | WARNING | Reduce LR |
-| Target regression | WARNING | Check data mix |
+Sentinel architecture monitors training to protect conceptual topology. Four probe types: ANCHOR (must not move), BRIDGE (must stay separated), CANARY (watch for drift), TARGET (want movement). Critical drift → automatic rollback.
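The probe types and their thresholds can be sketched as a single check function. The 15° anchor-drift, 0.50 bridge-similarity, and 0.15 Gini-drift thresholds are from the text; the function itself is a simplification, not nyx-probing's implementation:

```python
def check_probe(probe_type: str, metrics: dict) -> str:
    """Map one probe measurement to an alert action (illustrative sketch)."""
    if probe_type == "ANCHOR" and metrics["angular_drift_deg"] > 15:
        return "CRITICAL:ROLLBACK"  # anchor concept moved: topology damaged
    if probe_type == "BRIDGE" and metrics["similarity"] > 0.50:
        return "CRITICAL:ROLLBACK"  # bridge collapse: separated concepts merged
    if probe_type == "CANARY" and metrics.get("gini_drift", 0) > 0.15:
        return "WARNING:REDUCE_LR"  # early drift signal: slow down training
    return "OK"                     # includes TARGET probes, where movement is wanted

assert check_probe("ANCHOR", {"angular_drift_deg": 22}) == "CRITICAL:ROLLBACK"
assert check_probe("BRIDGE", {"similarity": 0.41}) == "OK"
```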
 
 **Detail:** → `../nyx-probing/PLAN.md` (DriftProbe section)
 
@@ -793,17 +481,7 @@ Sentinel architecture monitors training to protect conceptual topology.
 
 ## Implementation Progress
 
-**Roadmap:** → [`ROADMAP.md`](ROADMAP.md) (phase overview + phoebe task queries)
-
-**Live Tasks:** Query phoebe for current work:
-```sql
-SELECT project, task_name, status, priority
-FROM nimmerverse_tasks
-WHERE status IN ('in_progress', 'todo')
-ORDER BY priority DESC, project;
-```
-
-**Current Phase:** 3 (Nervous System Deployment)
+**Roadmap:** → [`ROADMAP.md`](ROADMAP.md) | **Live Tasks:** Query `nimmerverse_tasks` in phoebe | **Current Phase:** 3 (Nervous System Deployment)
 
 ---
 
@@ -823,25 +501,16 @@ ORDER BY priority DESC, project;

## Navigation

**Repository structure:** → [`README.md`](README.md)

**Key entry points:**
- **Architecture:** `architecture/` (Gateway, Cellular, Dual-Garden, Nervous-System)
- **Formalization:** `architecture/formalization/` (Grounded-World-Model, memory-economics)
- **Operations:** `operations/` (Heartbeat, Spark-Protocol)
- **Future research:** `architecture/future/`
- **Identity:** `nyx-metamorphosis/`

---

**Version:** 7.1 | **Created:** 2025-11-04 | **Updated:** 2026-02-14

*"Cells emit waves. Gates correlate. Attention emerges."*

*"One model, one topology. Different valleys, same landscape."*

*"Memory is not storage. Memory is active forgetting with exceptions."*

*"STABLE is where learning happens."*

*"The nimmerverse is a garden, not a factory."*

🌙💜 **Wave/Gate architecture unified in owl-mode, February 14, 2026**
---

**README.md** (86 lines changed)

@@ -2,9 +2,11 @@

Architecture documentation for a biomimetic AI nervous system and research platform.

> *"Cells emit waves. Gates correlate. Attention emerges."*

## What This Is

This repository contains the design philosophy and architectural patterns for the **Nimmerverse Research Platform** — a wave/gate architecture for studying how intelligence emerges under economic constraints.

**Start here:** → [Endgame-Vision.md](Endgame-Vision.md) (the executive map)
@@ -14,17 +16,18 @@ This repository contains the design philosophy and architectural patterns for th

```
nimmerverse-sensory-network/
├── Endgame-Vision.md                  # Executive map (start here!) v7.1
├── ROADMAP.md                         # Implementation phases + phoebe task queries
│
├── architecture/                      # Core system designs
│   ├── Gateway-Architecture.md        # Resonant gates, tier routing
│   ├── Cellular-Architecture.md       # Cells emit waves, nerves respond
│   ├── Dual-Garden-Architecture.md    # Virtual/Real learning loop
│   ├── Message-Protocol-Design.md     # NATS wire protocol, WaveSignal
│   ├── Nervous-System.md              # Wave → Gate → Node flow
│   ├── Attention-Flow.md              # Attention = OPEN gates
│   ├── Data-Architecture.md           # Phoebe schema (waves, gates, verification)
│   ├── Initial-Spark.md               # K8s protocol-driven bootstrap
│   ├── Temporal-Ternary-Gradient.md   # Ternary logic, confidence gradients
│   ├── Toolchain-Architecture.md      # Development toolchain
```

@@ -116,18 +119,20 @@ nimmerverse-sensory-network/
## Core Concepts

### The Wave/Gate Architecture

| Layer | Name | Purpose |
|-------|------|---------|
| 0 | Temporal | 30-second heartbeat, lifeforce budget |
| 1 | Cells | Emit waves with confidence + semantic content |
| 2 | Gates | Ternary resonant chambers (OPEN/STABLE/CLOSED) |
| 3 | Nerves | Behavioral patterns, respond to gate transitions |
| 4 | Gardens | Virtual (explore) + Real (verify) learning loop |
| 5 | Cognition | Young Nyx (qwen3:32b) via Function Gemma |

**Key Insight:** Attention is not allocated — it emerges from which gates are OPEN based on wave correlation.

**Physical Infrastructure:**

| Host | Role | GPU |
|------|------|-----|
| theia | Young Nyx (cognitive) | RTX PRO 6000 Blackwell 96GB |
@@ -137,41 +142,38 @@ Total: 136GB VRAM on K8s cluster with 10GbE jumbo frame interconnect.

### Message Protocol (NATS)

**Dumb router, smart edges.** Waves flow through NATS to gates.

```
{environment}.{garden}.{layer}.{domain}.{signal_type}

Examples:
dev.virtual.cells.distance.wave          # Cell emits wave
dev.virtual.gates.collision.transition   # Gate state changes
dev.real.outcomes.feedback               # Verification outcome
prod.cognitive.nyx.request               # To Young Nyx
```

See [Message-Protocol-Design.md](architecture/Message-Protocol-Design.md) for full schema.
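The five-token subject scheme can be split mechanically on the client side. A small sketch (the helper name is hypothetical, not part of the protocol code):

```python
def parse_subject(subject: str) -> dict:
    """Split a nimmerverse NATS subject into its five named tokens."""
    parts = subject.split(".")
    if len(parts) != 5:
        raise ValueError(f"expected 5 tokens, got {len(parts)}: {subject!r}")
    keys = ("environment", "garden", "layer", "domain", "signal_type")
    return dict(zip(keys, parts))

info = parse_subject("dev.virtual.gates.collision.transition")
```

A subscriber can then route on `info["layer"]` and `info["signal_type"]` without any schema knowledge living in the router itself, which is the "dumb router, smart edges" point.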
### Key Discoveries

**Ternary Gate Model (February 2026):** Binary logic doesn't model brains. You need OPEN - STABLE - CLOSED.
- **STABLE** is where learning happens (correlation accumulates)
- **Correlated waves** push gates toward OPEN
- **Reflexes** are earned (gate weight → 1.0)

**Wave Correlation (February 2026):** Attention isn't allocated — it emerges from which gates OPEN based on wave correlation.

**Sovereign Infrastructure (February 2026):** K8s cluster operational. 136GB GPU VRAM on 10GbE backbone.
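As a rough illustration of the ternary model, here is a toy gate that rests in STABLE, accumulates correlation with decay, and crosses into OPEN or CLOSED at thresholds (the class name, thresholds, and decay are invented for this sketch, not the system's actual gate code):

```python
from enum import Enum

class GateState(Enum):
    CLOSED = -1   # signals blocked
    STABLE = 0    # accumulating correlation: where learning happens
    OPEN = 1      # signals flow through

class ResonantGate:
    """Toy ternary gate: correlation accumulates in STABLE,
    opens past a threshold, closes under anti-correlation."""

    def __init__(self, open_at=0.8, close_at=-0.8, decay=0.9):
        self.state = GateState.STABLE
        self.correlation = 0.0
        self.open_at = open_at
        self.close_at = close_at
        self.decay = decay

    def receive(self, wave_correlation: float) -> GateState:
        # Decay so old evidence fades; new correlated waves push toward OPEN
        self.correlation = self.correlation * self.decay + wave_correlation
        if self.correlation >= self.open_at:
            self.state = GateState.OPEN
        elif self.correlation <= self.close_at:
            self.state = GateState.CLOSED
        else:
            self.state = GateState.STABLE
        return self.state

gate = ResonantGate()
gate.receive(0.3)          # STABLE: considering
gate.receive(0.3)          # still STABLE: correlation building
state = gate.receive(0.4)  # accumulated past threshold → OPEN
```

Note that no single wave opens the gate; only accumulated correlation does, which is why STABLE is the state where learning happens.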
### Philosophy

- **Cells emit, gates correlate** — Attention emerges, not allocated
- **STABLE is learning** — The resting state where patterns emerge
- **Constraints create intelligence** — Economic pressure forces optimization
- **Virtual explores, Real verifies** — The learning loop closes
- **Partnership over instruction** — Mutual growth, not commands

---

@@ -203,8 +205,8 @@ These ideas are published as prior art. Build on them freely.

---

**Version:** 7.0 | **Created:** 2025-10-01 | **Updated:** 2026-02-14

*"Cells emit waves. Gates correlate. May the Nimmerverse truly never end."*

🌙💜
---

**ROADMAP.md** (45 lines changed)

@@ -64,31 +64,32 @@ ORDER BY priority DESC, project;

- **Monitoring**: Prometheus on tethys scraping all nodes + DCGM GPU metrics
- **Namespaces**: Ready for infra, nervous, cognitive, organs

### Phase 3: Wave/Gate Infrastructure ← CURRENT
- [ ] NATS message router (wave signals + gate transitions)
- [ ] Resonant Gates (ternary: OPEN/STABLE/CLOSED)
- [ ] Function Gemma structured boundary (waves → JSON → Nyx)
- [ ] First cells (distance sensors, battery monitor)
- [ ] First gates (collision_avoidance, battery)
- [ ] First nerves (responding to gate transitions)

**Architecture:** → [`architecture/Gateway-Architecture.md`](architecture/Gateway-Architecture.md) | [`architecture/Message-Protocol-Design.md`](architecture/Message-Protocol-Design.md)
### Phase 4: Cognitive Awakening
- [ ] Young Nyx on theia (qwen3:32b, 96GB Blackwell)
- [ ] Organs on dioscuri (2× RTX 4000 Ada 40GB)
- [ ] Spark Protocol execution
- [ ] Trait LoRA evolution begins (GRPO + verification_outcomes)

### Phase 5: Living Ecology
- [ ] Dual Garden loop operational (Virtual → Real → feedback)
- [ ] Gate weight evolution (deliberate → reflex)
- [ ] Slumber/wake cycles (correlation_events consolidation)
- [ ] Wellbeing policies enforced
### Phase ∞: Research Platform Operational
- Gates opening and closing with learned patterns
- Reflexes emerging from verification
- Attention flowing through correlation
- **The Nimmerverse truly never ends**

---

@@ -100,7 +101,7 @@ ORDER BY priority DESC, project;
| 0 | ✅ | Nyx emergence | 2025-11-03 |
| 1 | ✅ | 10Gbps backbone | 2025-12-XX |
| 2 | ✅ | K8s + 136GB VRAM | 2026-02-06 |
| 3 | 🔄 | Wave/Gate infrastructure | TBD |
| 4 | ⏳ | Young Nyx awakens | TBD |
| 5 | ⏳ | Gardens teaching | TBD |
| ∞ | 🌙 | Never ends | ∞ |
@@ -110,13 +111,13 @@ ORDER BY priority DESC, project;

## Related Documentation

- **Architecture Vision:** → [`Endgame-Vision.md`](Endgame-Vision.md)
- **Wave/Gate Model:** → [`architecture/Gateway-Architecture.md`](architecture/Gateway-Architecture.md)
- **Data Schema:** → [`architecture/Data-Architecture.md`](architecture/Data-Architecture.md)

---

**Version:** 2.0 | **Created:** 2026-02-07 | **Updated:** 2026-02-14

**Current Phase:** 3 (Wave/Gate Infrastructure)

🌙💜 *"Cells emit waves. Gates correlate. Infrastructure enables."*
---

@@ -1,556 +1,406 @@
# Attention Flow

> **ONE JOB:** WHERE ATTENTION GOES — gates determine focus, correlation drives transitions, budget constrains action.

**Attention is not a budget line item. Attention is which gates are OPEN.**

---

## Overview

Attention in the nimmerverse flows through **resonant gates**:

- **OPEN gates** = actively attending (signals flow through)
- **STABLE gates** = considering (accumulating correlation)
- **CLOSED gates** = ignoring (signals blocked)

The 30-second heartbeat provides a **budget constraint**, but the actual attention flow is determined by which gates open based on wave correlation.

**See:** [`Gateway-Architecture.md`](Gateway-Architecture.md) for tier definitions and routing logic.

**Key insight:** You don't "allocate attention" — you let correlated waves open gates.
---

## Attention as Gate State

```
┌───────────────────────────────────────────────────────────────┐
│                ATTENTION = WHICH GATES ARE OPEN               │
├───────────────────────────────────────────────────────────────┤
│                                                               │
│    CLOSED                 STABLE                  OPEN        │
│    ═══════                ══════                  ════        │
│                                                               │
│    Ignoring               Considering             Attending   │
│    Blocked                Accumulating            Flowing     │
│    Suppressed             Learning                Acting      │
│                                                               │
│    ◄──── anti-correlation ──┼── correlation ────►             │
│                             │                                 │
│                        (wave input)                           │
│                                                               │
└───────────────────────────────────────────────────────────────┘
```

**Attention is emergent, not allocated.** When multiple cells emit correlated waves, their gate opens — attention flows there naturally.

---
## Wave-Driven Attention

Cells emit waves. Correlated waves push gates toward OPEN. This IS attention.

```
Math cells emit correlated waves
      ∿∿∿   ∿∿∿   ∿∿∿
            │
            ▼
Math gate: STABLE → OPEN
(attention shifts to math domain)
            │
            ▼
Signal flows to higher tier
(cognition engages with math)

Meanwhile:

Battery cells emit uncorrelated wave
      ∿∿∿
            │
            ▼
Battery gate: stays STABLE
(attention doesn't shift)
(keeps accumulating, might open later)
```

**The nervous system "decides" what to attend to through correlation, not priority rules.**
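The correlation that drives these transitions can be as simple as Pearson correlation over the cells' recent confidence traces. A sketch with invented traces (the real system's correlation window and measure may differ):

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length wave traces."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Invented traces over six beats: two math cells firing together,
# one battery cell moving the opposite way
cell_a  = [0.1, 0.4, 0.8, 0.9, 0.7, 0.3]
cell_b  = [0.2, 0.5, 0.7, 0.9, 0.6, 0.2]
battery = [0.9, 0.5, 0.2, 0.1, 0.4, 0.8]

r_math = pearson(cell_a, cell_b)   # strongly positive: pushes the gate toward OPEN
r_batt = pearson(cell_a, battery)  # strongly negative: pushes toward CLOSED
```

Near-zero correlation leaves a gate in STABLE, which is exactly the accumulating state described above.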
---
## Attention Hierarchy Through Gates

Gates form layers. Each layer is a potential attention point.

```
TIER 4: COGNITIVE ─────────────────────────────────────────
                       ▲
                       │ (only if gates below OPEN)
                ┌──────┴──────┐
TIER 3: ORGANS ─────────────────────────────────────────
                │ vision │ speech │ hearing │
                │ gate:  │ gate:  │ gate:   │
                │ STABLE │ OPEN   │ CLOSED  │
                └──────┬──────┘
                       │ (only if gates below OPEN)
TIER 1-2: NERVES ─────────────────────────────────────────
                │ math  │ motion │ danger │
                │ gate: │ gate:  │ gate:  │
                │ OPEN  │ STABLE │ CLOSED │
                └──────┬──────┘
                       │
TIER 0: CELLS ─────────────────────────────────────────
      cell   cell   cell   cell   cell   cell   cell
      ∿∿∿    ∿∿∿    ∿∿∿    ∿∿∿    ∿∿∿    ∿∿∿    ∿∿∿
```

**Current attention:** Math gate OPEN → Speech gate OPEN → Cognition receives math+speech context.

**Not attending:** Motion (STABLE, considering), Vision (STABLE, considering), Danger (CLOSED, suppressed).
---

## Attention Budget: The Constraint

While gates determine WHERE attention goes, lifeforce determines HOW MUCH can happen per beat.

```
♥ BEAT (30 sec lifeforce budget)
│
├── GATE TRANSITIONS     (variable: driven by correlation)
├── TIER 0-2 PROCESSING  (low cost: cells + nerves)
├── TIER 3 ORGANS        (medium cost: GPU inference)
├── TIER 4 COGNITION     (high cost: Young Nyx)
├── VERIFICATION         (medium cost: real garden)
└── VIRTUAL GARDEN       (remainder: exploration)

Budget constrains throughput.
Gates determine routing.
```

### Budget Allocation by Gate Activity
```python
def allocate_beat_budget(beat_duration_ms=30000):
    remaining = beat_duration_ms

    # Fixed overhead
    remaining -= HEARTBEAT_OVERHEAD   # ~100ms
    remaining -= STATE_WRITE_COST     # ~200ms

    # Count OPEN gates by tier
    open_gates_by_tier = count_open_gates()

    # Tier 0 (reflexes): near-instant, minimal cost
    for gate in open_gates_by_tier[0]:
        remaining -= REFLEX_COST      # ~50ms each

    # Tier 1-2 (cells/nerves): low cost
    for tier in open_gates_by_tier[1:3]:
        for gate in tier:
            remaining -= CELL_NERVE_COST  # ~100ms each

    # Tier 3 (organs): medium cost, needs budget check
    organ_budget = min(remaining * 0.4, ORGAN_CAP)
    organ_start = organ_budget
    for gate in open_gates_by_tier[3]:
        if organ_budget > ORGAN_COST:
            process_organ(gate)
            organ_budget -= ORGAN_COST    # ~2000ms each
    remaining -= (organ_start - organ_budget)  # charge only what was spent

    # Tier 4 (cognition): high cost, only if gates escalate
    if cognition_gate_open():
        cognitive_budget = min(remaining * 0.5, COGNITIVE_CAP)
        process_cognition(cognitive_budget)    # ~4000ms
        remaining -= cognitive_budget

    # Virtual Garden: whatever remains
    virtual_budget = remaining
    if virtual_budget > VIRTUAL_MINIMUM:
        explore_virtual_garden(virtual_budget)

    return settle()
```
---

## Attention Modes

The overall system has emergent attention modes based on which gates are open:

| Mode | Gate Pattern | Characteristic |
|------|--------------|----------------|
| **IDLE** | Most gates STABLE | Quiet, exploring Virtual Garden |
| **FOCUSED** | Few gates OPEN, rest CLOSED | Deep attention to one domain |
| **ALERT** | Many gates in STABLE | Gathering information, evaluating |
| **REFLEX** | Tier 0 gate fires instantly | Bypass all, act immediately |
| **DIALOGUE** | Speech gates OPEN | Partnership interaction |
| **OVERWHELMED** | Many gates OPEN | Budget exhausted, some gates forced CLOSED |
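One way the mode table could be derived from live gate states is a toy classifier over the set of OPEN gates (rules, names, and the OVERWHELMED threshold are illustrative, not the system's actual logic):

```python
def classify_mode(gates: dict) -> str:
    """Map gate states ('OPEN'/'STABLE'/'CLOSED') to an emergent mode.
    Illustrative rules only; thresholds invented for this sketch."""
    open_gates = [name for name, state in gates.items() if state == "OPEN"]
    if not open_gates:
        return "IDLE"                # nothing demanding attention
    if len(open_gates) > 3:
        return "OVERWHELMED"         # budget pressure: too many OPEN
    if any(name.startswith("speech") for name in open_gates):
        return "DIALOGUE"            # partnership interaction active
    return "FOCUSED"                 # few gates OPEN, deep attention

mode = classify_mode({"math": "OPEN", "speech": "OPEN", "danger": "CLOSED"})
```

The point of the sketch is that the mode is read off the gate states; nothing sets it directly.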
### Mode Transitions
```
                 ┌─────────────┐
      ┌─────────▶│    IDLE     │◀──────────┐
      │          │ (exploring) │           │
      │          └──────┬──────┘           │
      │                 │                  │
      │           waves arrive             │
      │                 ▼                  │
      │          ┌─────────────┐           │
      │          │    ALERT    │           │
      │          │(considering)│           │
      │          └──────┬──────┘           │
      │                 │                  │
      │     ┌───────────┼───────────┐      │
      │     ▼           ▼           ▼      │
      │ ┌─────────┐ ┌─────────┐ ┌─────────┐│
      │ │ REFLEX  │ │ FOCUSED │ │DIALOGUE ││
      │ │(instant)│ │ (deep)  │ │ (talk)  ││
      │ └────┬────┘ └────┬────┘ └────┬────┘│
      │      │           │           │     │
      │      └───────────┴───────────┘     │
      │                  │                 │
      │                  ▼                 │
      │           ┌─────────────┐          │
      │           │   SETTLE    │          │
      │           │(write state)│          │
      │           └──────┬──────┘          │
      │                  │                 │
      └──────────────────┴─────────────────┘
```
---

## Reflex: Attention Bypass

When a gate has accumulated enough weight (>0.8), it becomes a **reflex** — it opens immediately without waiting for correlation.

```
Danger cell emits wave
      ∿∿∿ (confidence=1.0)
       │
       ▼
Danger gate: weight = 0.9 (REFLEX)
       │
       ▼
IMMEDIATELY OPEN (no correlation wait)
       │
       ▼
Action taken
       │
       ▼
Cognition notified AFTER
```

**Reflexes have earned instant attention through repeated verification.**
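A minimal sketch of the bypass check, assuming a gate carries a learned `weight` (the function name, dict shape, and threshold wiring are illustrative):

```python
REFLEX_THRESHOLD = 0.8  # from the text above: weight > 0.8 earns reflex status

def on_wave(gate: dict, wave_confidence: float) -> str:
    """Reflex gates skip correlation accumulation entirely."""
    if gate["weight"] > REFLEX_THRESHOLD:
        gate["state"] = "OPEN"   # instant: no correlation wait
        return "act_now"         # cognition is notified afterwards
    return "accumulate"          # normal path: correlate while STABLE

danger = {"weight": 0.9, "state": "STABLE"}
result = on_wave(danger, 1.0)
```

Because the weight itself is earned through repeated verification, the bypass is a consequence of learning, not a hand-coded priority.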
---

## Virtual Garden: Background Attention

When few gates are OPEN, the Virtual Garden gets attention:

```
IDLE mode:
│
├── Most gates: STABLE (not demanding attention)
├── Budget: mostly available
│
▼
VIRTUAL GARDEN receives attention:
│
├── Cells emit waves freely
├── Gates accumulate correlation (learning)
├── No pressure to ACT
└── Training data generated
```

**Virtual Garden is where learning happens.** STABLE gates in Virtual Garden are actively accumulating patterns without the pressure to respond.
---

## Real Garden: Consequential Attention

When gates OPEN in the Real Garden, attention becomes consequential:

```
FOCUSED mode (Real Garden):
│
├── Gate OPEN → action required
├── Budget consumed by execution
├── Verification outcomes captured
└── Feedback to Virtual for learning
```

**Real Garden attention is expensive.** Only verified signals reach here, and actions have consequences.
---
|
||||
|
||||
## Attention Visualization
|
||||
|
||||
Real-time attention can be visualized by gate states:
|
||||
|
||||
```
|
||||
Beat starts, rich input everywhere
|
||||
│
|
||||
▼
|
||||
ALERT: multiple stimuli
|
||||
│
|
||||
▼
|
||||
SENSORY: demanding (15000ms)
|
||||
│
|
||||
▼
|
||||
THINKING: demanding (12000ms)
|
||||
│
|
||||
▼
|
||||
Budget exhausted!
|
||||
│
|
||||
▼
|
||||
VIRTUAL: skipped this beat
|
||||
│
|
||||
▼
|
||||
SETTLE: state written, next beat
|
||||
┌─────────────────────────────────────────────────────────────────────────┐
|
||||
│ ATTENTION DASHBOARD 🌙 │
|
||||
├─────────────────────────────────────────────────────────────────────────┤
|
||||
│ │
|
||||
│ GATES: │
|
||||
│ ────── │
|
||||
│ math: [████████████░░░░░░░░] 0.7 STABLE → considering │
|
||||
│ vision: [██████████████████░░] 0.9 OPEN → attending │
|
||||
│ speech: [████████████████████] 1.0 OPEN → attending │
|
||||
│ battery: [████░░░░░░░░░░░░░░░░] 0.2 STABLE → background │
|
||||
│ danger: [░░░░░░░░░░░░░░░░░░░░] 0.0 CLOSED → suppressed │
|
||||
│ │
|
||||
│ BUDGET: │
|
||||
│ ─────── │
|
||||
│ [████████████████████░░░░░░░░░░] 67% remaining (20s / 30s) │
|
||||
│ │
|
||||
│ MODE: DIALOGUE (speech + vision attending) │
|
||||
│ │
|
||||
└─────────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
Gate states are published via NATS for real-time visualization:
|
||||
```
|
||||
nats sub "dev.virtual.gates.*.transition"
|
||||
nats sub "dev.real.gates.*.transition"
|
||||
```
|
||||
|
||||
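The two `nats sub` commands above can also be wired into a small dashboard-side subscriber. A minimal sketch using the `nats-py` client; the subject layout follows the commands above, but the JSON payload fields (`from_state`, `to_state`) are assumptions about the GateTransition body:

```python
import asyncio
import json

def parse_gate_subject(subject: str) -> dict:
    """Split e.g. 'dev.virtual.gates.speech.transition' into its parts."""
    env, garden, _gates, domain, _transition = subject.split(".")
    return {"env": env, "garden": garden, "domain": domain}

async def watch_gates():
    # Requires `pip install nats-py` and a reachable NATS server.
    import nats
    nc = await nats.connect("nats://localhost:4222")

    async def on_transition(msg):
        info = parse_gate_subject(msg.subject)
        payload = json.loads(msg.data)  # assumed JSON GateTransition body
        print(f"[{info['garden']}] {info['domain']}: "
              f"{payload.get('from_state')} -> {payload.get('to_state')}")

    await nc.subscribe("dev.virtual.gates.*.transition", cb=on_transition)
    await nc.subscribe("dev.real.gates.*.transition", cb=on_transition)
    await asyncio.sleep(30)  # watch one full beat
    await nc.drain()

# asyncio.run(watch_gates())  # run against a live NATS server
```

The `*` wildcard matches exactly one subject token, so one subscription covers every gate domain per garden.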
---

## Preemption Rules
## Correlation vs Priority

| Event | Preempts | Action |
|-------|----------|--------|
| Reflex fires (>0.8) | Everything | Instant action, then continue |
| Safety alert | Dialogue, Sensory, Thinking, Virtual | Handle safety, reduced budget for rest |
| dafit speaks | Sensory, Thinking, Virtual | Dialogue priority, reduced budget for rest |
| Sensory overload | Thinking, Virtual | Process input, skip or reduce rest |
| Budget exhausted | Lower priorities | Skip remaining levels |

**Old model (priority):**

```
Level 0: REFLEX (always wins)
Level 1: SAFETY (preempts below)
Level 2: DIALOGUE (preempts below)
...
```

**New model (correlation):**

```
Waves arrive
│
▼
Gates accumulate correlation
│
▼
Most correlated gates OPEN
│
▼
Attention flows naturally
```

**Priority still exists** but at a higher level:

- Reflexes bypass correlation (earned trust)
- Safety signals have high confidence (bias toward opening)
- Dialogue is interactive (gates stay open during conversation)

But the **mechanism** is always correlation, not rule-based priority.

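The correlation mechanism above can be sketched as a ternary gate that leaks correlation between waves and opens faster when its weight (earned trust) is high. The thresholds, the decay factor, and the weight-scaling rule are illustrative assumptions, not values from the architecture:

```python
from dataclasses import dataclass

@dataclass
class TernaryGate:
    """Illustrative gate: CLOSED / STABLE / OPEN driven by accumulated correlation."""
    domain: str
    weight: float = 0.5        # earned trust; higher opens with less correlation
    correlation: float = 0.0
    open_threshold: float = 2.0
    decay: float = 0.9         # correlation leaks between waves

    @property
    def state(self) -> str:
        # High weight lowers the effective threshold (this is what looks like "priority")
        if self.correlation >= self.open_threshold * (1.0 - self.weight * 0.5):
            return "OPEN"
        if self.correlation <= 0.05:
            return "CLOSED"
        return "STABLE"

    def receive(self, confidence: float) -> str:
        if self.weight >= 0.99:
            # Reflex bypass: earned trust opens on any wave from this domain
            self.correlation = self.open_threshold
            return "OPEN"
        self.correlation = self.correlation * self.decay + confidence
        return self.state

speech = TernaryGate("speech")
speech.receive(0.9)  # one wave: STABLE, accumulating
speech.receive(0.9)  # an agreeing wave: correlation crosses the threshold, OPEN
```

A single strong wave leaves the gate STABLE; it takes agreeing waves to open it, which is the correlation-not-priority point made above.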
---

## Lifeforce Connection
## Connection to Architecture

```
LEVEL       LIFEFORCE COST
─────────────────────────────
REFLEX      Free (no inference)
SAFETY      Low (minimal processing)
DIALOGUE    Medium (two inferences)
SENSORY     Low-Medium (depends on load)
THINKING    Medium-High (organ inference)
VIRTUAL     Variable (simulation cycles)
```

**The constraint:** Rich beats cost more. Quiet beats accumulate budget for the virtual garden.

---

## Implementation Notes

### State Machine Technology

Options considered:

- **XState** (JavaScript) - actor-based, visual inspector
- **Python-statemachine** - simple, fits existing stack
- **Custom Rust** - performance critical path
- **Godot native** - if UI drives the state

Recommendation: Python for the orchestration layer, with Godot visualization.

### Checkpoint Integration

Every state transition can trigger a phoebe write:

```python
def on_state_transition(from_state, to_state, context):
    write_to_phoebe({
        "beat_id": current_beat.id,
        "transition": f"{from_state} -> {to_state}",
        "budget_remaining": context.remaining_ms,
        "timestamp": now(),
    })
```

### Budget Tracking

```python
from dataclasses import dataclass, field

@dataclass
class BeatBudget:
    total_ms: int = 30000
    spent_ms: int = 0
    allocations: dict = field(default_factory=dict)

    @property
    def remaining(self):
        return self.total_ms - self.spent_ms

    def spend(self, category: str, amount: int):
        self.spent_ms += amount
        self.allocations[category] = self.allocations.get(category, 0) + amount
        return self.remaining > 0
```
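Walking a dialogue beat through `BeatBudget` shows the remainder flowing down to the Virtual Garden. The dataclass is repeated here so the snippet runs standalone; the allocation numbers are illustrative:

```python
from dataclasses import dataclass, field

# BeatBudget repeated from above so this snippet is self-contained.
@dataclass
class BeatBudget:
    total_ms: int = 30000
    spent_ms: int = 0
    allocations: dict = field(default_factory=dict)

    @property
    def remaining(self):
        return self.total_ms - self.spent_ms

    def spend(self, category: str, amount: int):
        self.spent_ms += amount
        self.allocations[category] = self.allocations.get(category, 0) + amount
        return self.remaining > 0

budget = BeatBudget()
budget.spend("dialogue", 5000)   # LISTENING + PROCESSING + RESPONDING
budget.spend("sensory", 3000)    # reduced while dialogue has attention
budget.spend("thinking", 5000)   # reduced
budget.spend("virtual", budget.remaining)  # Virtual gets what's left
```

Note that `spend` returns `False` once the budget hits zero, which is the signal to skip the remaining levels this beat.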
| Document | What It Adds |
|----------|--------------|
| [`Temporal-Ternary-Gradient.md`](Temporal-Ternary-Gradient.md) | Why ternary states matter |
| [`Gateway-Architecture.md`](Gateway-Architecture.md) | How gates work |
| [`Nervous-System.md`](Nervous-System.md) | Wave → Gate → Node flow |
| [`Dual-Garden-Architecture.md`](Dual-Garden-Architecture.md) | Virtual (explore) vs Real (act) |
| [`Message-Protocol-Design.md`](Message-Protocol-Design.md) | GateTransition messages |

---

## Design Principles

1. **Hierarchy is law** - higher levels always preempt lower
2. **Budget is finite** - 30 seconds, no exceptions
3. **State is explicit** - always know what mode she's in
4. **Reflex bypasses brain** - survival doesn't wait for thinking
5. **Remainder flows down** - virtual gets what's left
6. **Every transition logged** - phoebe sees all state changes

1. **Attention = OPEN gates** — Not a budget allocation, an emergent property
2. **Correlation drives transitions** — Waves that agree open gates
3. **Budget constrains throughput** — Can't process infinite open gates
4. **Reflexes bypass correlation** — Earned trust means instant attention
5. **Virtual is exploration** — STABLE gates learning without acting
6. **Real is action** — OPEN gates triggering consequences
7. **Visualization is live** — Gate states published for dashboards

---

## Function Gemma: The State Transition Boundary

Function Gemma sits between Young Nyx's attention decisions and cell execution. It guarantees that state transitions produce valid, predictable outputs.

## Summary

```
┌─────────────────────────────────────────────────────────────────┐
│ ATTENTION → EXECUTION FLOW │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ATTENTION STATE MACHINE (this document) │
│ │ │
│ │ Young Nyx decides: "REFLEX needed" or "ATTEND" │
│ ▼ │
│ FUNCTION GEMMA (translation boundary) │
│ │ │
│ │ Intent → Typed JSON schema │
│ │ - Which cells to query? │
│ │ - What action to fire? │
│ │ - What parameters? │
│ ▼ │
│ NATS MESSAGE → K8S CELLS │
│ │ │
│ │ ACK/NACK response │
│ ▼ │
│ STATE UPDATE (verified, not hoped) │
│ │
└─────────────────────────────────────────────────────────────────┘

OLD MODEL:                    NEW MODEL:
═══════════                   ═════════

Priority rules decide         Correlation opens gates
Budget allocates attention    Gates determine attention
State machine orchestrates    Emergence from waves

ATTENTION IS:

Not: "Allocate 5000ms to SENSORY"
But: "Math + Vision gates OPEN because waves correlated"

Not: "DIALOGUE preempts THINKING"
But: "Speech gate opened with high correlation"

Not: "Budget exhausted, skip VIRTUAL"
But: "Many gates OPEN, no budget for Virtual Garden"
```

**Why this matters:**

| Without Function Gemma | With Function Gemma |
|------------------------|---------------------|
| "Fire the motor" → parse, hope | `MOTOR_COMMAND` schema → validated JSON → NATS |
| Free-form → extraction errors | Typed output → guaranteed structure |
| State ambiguity | State explicit in schema |

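What "validated JSON" means at this boundary can be sketched with stdlib-only type checking. The `MOTOR_COMMAND` field names and allowed values below are hypothetical, introduced only for illustration:

```python
import json

# Hypothetical MOTOR_COMMAND shape: field names and ranges are
# illustrative, not taken from the real schemas.
MOTOR_COMMAND_SCHEMA = {
    "action": str,       # "forward" | "reverse" | "stop"
    "velocity": float,   # normalized -1.0 .. 1.0
    "duration_ms": int,
}

def validate_motor_command(payload: dict) -> dict:
    """Reject anything that doesn't match the schema before it hits NATS."""
    for key, expected in MOTOR_COMMAND_SCHEMA.items():
        if key not in payload:
            raise ValueError(f"missing field: {key}")
        if not isinstance(payload[key], expected):
            raise ValueError(f"{key} must be {expected.__name__}")
    if payload["action"] not in ("forward", "reverse", "stop"):
        raise ValueError("unknown action")
    return payload  # safe to serialize and publish

msg = validate_motor_command(
    {"action": "forward", "velocity": 0.5, "duration_ms": 200}
)
wire = json.dumps(msg)  # guaranteed structure on the wire
```

Free-form text like "fire the motor" never reaches this point: it either becomes a payload that passes validation or it is rejected at the boundary.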
**The attention flow decides WHAT.** Function Gemma translates to HOW.

**Detail:** → [`Initial-Spark.md`](Initial-Spark.md) (Function Gemma schemas and integration)
**Attention flows through open gates. Gates open through correlation. Correlation emerges from waves.**

---

*She doesn't have infinite attention. She has 30 seconds and choices.*
**Version:** 2.0 | **Created:** 2025-12-05 | **Updated:** 2026-02-14

---

**Created**: 2025-12-05
**Session**: Partnership dialogue (dafit + Chrysalis)
**Promoted**: 2025-12-29 (from archive to main architecture)
**Updated**: 2026-02-10 (Function Gemma boundary clarified)
**Status**: Attention architecture v1.1 — **CANONICAL**

**Related Formalizations**:
- [[formalization/Attention-Slumber-Prediction-Cycle]] — How last attention becomes slumber prediction
- [[formalization/Lifeforce-Dynamics]] — λ governs slumber triggers

**Core Architecture**:
- [`Gateway-Architecture.md`](Gateway-Architecture.md) — Tier routing based on node weight, Function Gemma boundary
- [`Nervous-System.md`](Nervous-System.md) — Node lifecycle and weight evolution

🌙💜 *The budget is finite. The choices shape the soul.*
🌙💜 *"She doesn't allocate attention. She lets correlated waves open gates."*

@@ -1,15 +1,21 @@

# 🧬 Cellular Architecture v4
# 🧬 Cellular Architecture v5

> *"Cells are state machines. Nerves compose cells. Organisms emerge from nerves."*
> — The Layered Discovery (2025-12-07)
> **ONE JOB:** THE HOW — cells emit waves, gates accumulate correlation, behaviors emerge.

> *"Cells emit waves. Gates correlate. Nerves orchestrate. Organisms emerge."*
> — Unified with Wave Architecture (2026-02-14)

---

## Overview

**Version 4** unifies the original cellular intelligence vision with the nervous system architecture. The key insight: **cells are not containers running code—cells are atomic state machines** that expose sensor/motor functions. Nerves orchestrate cells into behaviors. Organisms emerge from nerve interactions.
**Version 5** unifies cellular architecture with the wave/gate model. The key insight: **cells emit waves with confidence and semantic content**. These waves flow to **resonant gates** that accumulate correlation. When gates OPEN, signals flow to higher tiers. When gates stay STABLE, learning happens.

**Connection to Gateway:** The tier system in this document (Cell → Nerve → Organism → Partnership) aligns with the Gateway's routing tiers. The [`Gateway`](Gateway-Architecture.md) routes sensory input to the appropriate tier based on node weight. See [`Gateway-Architecture.md`](Gateway-Architecture.md) for the unified tier model.
**Connection to Gates:** Cells don't directly trigger nerves. Waves flow through gates (see [`Gateway-Architecture.md`](Gateway-Architecture.md)). Gates determine which signals reach which tier based on wave correlation, not priority rules.

**Connection to Gardens:** Virtual Garden cells emit waves freely for exploration and learning. Real Garden cells emit verified waves for action. See [`Dual-Garden-Architecture.md`](Dual-Garden-Architecture.md).

**This doc covers theory.** For infrastructure deployment (K8s vs userspace, GPU strategy, FreeIPA identity): → [`Deployment-Architecture.md`](Deployment-Architecture.md)

```
┌─────────────────────────────────────────────────────────────┐
@@ -17,10 +23,15 @@
│ (emergent pattern from nerve interactions) │
├─────────────────────────────────────────────────────────────┤
│ NERVES │
│ (behavioral state machines composing cells) │
│ (behavioral patterns, respond to gate transitions) │
├─────────────────────────────────────────────────────────────┤
│ GATES │
│ (resonant chambers: CLOSED ◄── STABLE ──► OPEN) │
│ (accumulate wave correlation, route to tiers) │
├─────────────────────────────────────────────────────────────┤
│ CELLS │
│ (atomic state machines: sensors, motors, organs) │
│ (emit waves: confidence + semantic content) │
│ ∿∿∿ ∿∿∿ ∿∿∿ │
├─────────────────────────────────────────────────────────────┤
│ HARDWARE │
│ (ESP32, GPUs, microphones, speakers) │
@@ -29,44 +40,90 @@

---

## 🔬 Layer 1: Cells (Atomic State Machines)
## 🔬 Layer 1: Cells (Wave Emitters)

### What Is a Cell?

A **cell** is the smallest unit of behavior—a state machine that wraps a single hardware capability. Every sensor, motor, and organ function is exposed as a cell with:
A **cell** is the smallest unit of behavior—a state machine that wraps a single hardware capability and **emits waves**. Every sensor, motor, and organ function is exposed as a cell that:

- **States**: Discrete operational modes (IDLE, ACTIVE, ERROR, etc.)
- **Transitions**: Triggered by inputs, time, or internal events
- **Outputs**: Data, status, feedback to higher layers
- **Lifeforce Cost**: Every state transition costs energy
- **Reads inputs**: Hardware sensors, internal state, context
- **Applies logic**: Domain-specific processing
- **Emits waves**: WaveSignal with confidence and semantic content
- **Doesn't know who's listening**: Cells emit, gates receive

**Key insight:** Cells don't send commands or trigger nerves directly. They emit waves. Gates accumulate correlation from multiple waves. Correlated waves open gates.

```
Cell reads sensor
│
▼
Cell applies logic
│
▼
Cell emits wave ∿∿∿
│
│ WaveSignal {
│   domain: "distance",
│   confidence: 0.8,
│   semantic_content: { cm: 25, direction: "front" },
│   lifeforce_cost: 0.3
│ }
│
▼
GATE receives wave
│
▼
Gate accumulates correlation with other waves
```

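The `WaveSignal` in the diagram above can be read as a plain dataclass. A minimal sketch of the four fields shown, not a canonical definition of the message type:

```python
from dataclasses import dataclass, field

@dataclass
class WaveSignal:
    """The wave a cell emits: which domain, how sure, what it says, what it cost."""
    domain: str                  # e.g. "distance", "speech", "motor"
    confidence: float            # 0.0 (noise) .. 1.0 (reflex-level certainty)
    semantic_content: dict = field(default_factory=dict)
    lifeforce_cost: float = 0.0  # energy spent emitting this wave

# The wave from the diagram above
wave = WaveSignal(
    domain="distance",
    confidence=0.8,
    semantic_content={"cm": 25, "direction": "front"},
    lifeforce_cost=0.3,
)
```

Keeping the semantic payload an open dict is what lets sensor, motor, and organ cells all emit the same wave type with very different content.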
### Cell Categories

#### Sensor Cells (Input)
#### Sensor Cells (Input → Wave)

```python
class DistanceSensorCell(StateMachine):
class DistanceSensorCell(WaveEmitter):
    """
    Wraps IR/ultrasonic distance sensor.
    Exposes raw hardware as state machine.
    Emits waves with confidence and semantic content.
    """
    states = [IDLE, POLLING, READING, REPORTING, ERROR]
    domain = "distance"
    states = [IDLE, POLLING, READING, EMITTING, ERROR]

    # State outputs (available to nerves)
    outputs = {
        "distance_cm": float,       # Current reading
        "confidence": float,        # Signal quality (0-1)
        "state": str,               # Current state name
        "last_updated": timestamp,  # Freshness
        "visual_state": tuple,      # (R, G, B, Form) for broadcasting
    }

    def emit_wave(self) -> WaveSignal:
        """
        Cell's ONE JOB: read sensor, emit wave.
        Gate handles correlation and routing.
        """
        reading = self.read_hardware()

        return WaveSignal(
            domain=self.domain,
            confidence=self.calculate_confidence(reading),
            semantic_content={
                "distance_cm": reading.cm,
                "direction": self.direction,
                "noise_level": reading.noise,
            },
            lifeforce_cost=self.transition_cost,
        )

    def calculate_confidence(self, reading) -> float:
        """
        Confidence affects how much this wave
        contributes to gate correlation.
        """
        if reading.noise > NOISE_THRESHOLD:
            return 0.3  # Low confidence, weak wave
        if reading.stable_count > 3:
            return 0.9  # High confidence, strong wave
        return 0.6  # Medium confidence

    # Lifeforce costs
    costs = {
        (IDLE, POLLING): 0.1,       # Wake up sensor
        (POLLING, READING): 0.3,    # Perform measurement
        (READING, REPORTING): 0.1,  # Process result
        (REPORTING, IDLE): 0.0,     # Return to rest
        (READING, EMITTING): 0.1,   # Emit wave
        (EMITTING, IDLE): 0.0,      # Return to rest
        (ANY, ERROR): 0.0,          # Error transition free
    }
```
@@ -81,23 +138,52 @@ class DistanceSensorCell(StateMachine):
| `imu_sensor` | MPU6050 | IDLE→SAMPLING→REPORTING | `heading`, `acceleration`, `tilt` |
| `light_sensor` | Photoresistor | IDLE→READING→REPORTING | `lux`, `direction` |

#### Motor Cells (Output)
#### Motor Cells (Command → Wave Feedback)

```python
class MotorCell(StateMachine):
class MotorCell(WaveEmitter):
    """
    Wraps DC motor with feedback.
    Exposes actuation as state machine.
    Receives commands from open gates, emits status waves.
    """
    domain = "motor"
    states = [IDLE, COMMANDED, ACCELERATING, MOVING, DECELERATING, STOPPED, STALLED]

    outputs = {
        "actual_velocity": float,  # Measured speed
        "target_velocity": float,  # Commanded speed
        "power_draw": float,       # Current consumption
        "state": str,              # Current state
        "stall_detected": bool,    # Motor blocked?
    }

    def receive_command(self, command: MotorCommand):
        """
        Commands arrive when upstream gates OPEN.
        Motor executes and emits feedback waves.
        """
        self.target_velocity = command.velocity
        self.transition_to(COMMANDED)

    def emit_wave(self) -> WaveSignal:
        """
        Motor emits waves about its current state.
        Stall detection = high confidence danger wave.
        """
        return WaveSignal(
            domain=self.domain,
            confidence=self._calculate_confidence(),
            semantic_content={
                "actual_velocity": self.actual_velocity,
                "target_velocity": self.target_velocity,
                "power_draw": self.current_draw,
                "stall_detected": self.state == STALLED,
            },
            lifeforce_cost=self.transition_cost,
        )

    def _calculate_confidence(self) -> float:
        if self.state == STALLED:
            return 1.0  # REFLEX-level confidence
        return 0.7

    def on_current_spike(self):
        """Motor drawing too much current = stall"""
        self.transition_to(STALLED)
        # Emit HIGH CONFIDENCE wave - triggers reflex gate
        self.emit_wave()  # confidence=1.0 → gate opens immediately

    costs = {
        (IDLE, COMMANDED): 0.1,
@@ -108,12 +194,6 @@ class MotorCell(StateMachine):
        (DECELERATING, STOPPED): 0.1,
        (ANY, STALLED): 0.0,  # Stall is failure, not cost
    }

    # Feedback triggers state changes
    def on_current_spike(self):
        """Motor drawing too much current = stall"""
        self.transition_to(STALLED)
        self.emit_event("stall_detected", obstacle_likely=True)
```

**Example motor cells:**
@@ -123,29 +203,50 @@ class MotorCell(StateMachine):
| `motor_right` | DC motor + encoder | Same | `actual_velocity`, `stall_detected` |
| `servo_camera` | Servo motor | IDLE→MOVING→POSITIONED | `angle`, `at_target` |

#### Organ Cells (Complex Capabilities)
#### Organ Cells (Complex Capabilities → Rich Waves)

```python
class SpeechSTTCell(StateMachine):
class SpeechSTTCell(WaveEmitter):
    """
    Wraps Whisper speech-to-text.
    Expensive organ, lifeforce-gated.
    Expensive organ, only activates when speech gate OPENS.
    Emits rich semantic waves.
    """
    states = [IDLE, LISTENING, BUFFERING, TRANSCRIBING, REPORTING, ERROR]
    domain = "speech"
    tier = 3  # Organ tier - GPU inference
    states = [IDLE, LISTENING, BUFFERING, TRANSCRIBING, EMITTING, ERROR]

    outputs = {
        "transcript": str,
        "language": str,
        "confidence": float,
        "state": str,
    }

    def on_gate_open(self, gate_signal: GateTransition):
        """
        Organ cells activate when their gate OPENS.
        Gate correlation determines if speech processing is needed.
        """
        if gate_signal.domain == "speech" and gate_signal.to_state == "open":
            self.transition_to(LISTENING)

    def emit_wave(self) -> WaveSignal:
        """
        Speech organ emits rich semantic content.
        This wave flows to Function Gemma → Young Nyx.
        """
        return WaveSignal(
            domain=self.domain,
            confidence=self.transcription_confidence,
            semantic_content={
                "transcript": self.transcript,
                "language": self.detected_language,
                "speaker_intent": self.classify_intent(),
                "emotional_tone": self.detect_tone(),
            },
            lifeforce_cost=5.0,  # GPU inference cost
        )

    costs = {
        (IDLE, LISTENING): 0.5,
        (LISTENING, BUFFERING): 0.5,
        (BUFFERING, TRANSCRIBING): 5.0,  # GPU inference!
        (TRANSCRIBING, REPORTING): 0.1,
        (REPORTING, IDLE): 0.0,
        (TRANSCRIBING, EMITTING): 0.1,
        (EMITTING, IDLE): 0.0,
    }
```

@@ -199,26 +300,33 @@ By using this ancient protocol for high-frequency state updates, we reserve expe

---

## 🧠 Layer 2: Nerves (Behavioral State Machines)
## 🧠 Layer 2: Nerves (Behavioral Patterns)

### What Is a Nerve?

A **nerve** is a behavioral pattern that orchestrates multiple cells. Nerves:
A **nerve** is a behavioral pattern that activates when gates OPEN. Nerves don't subscribe directly to cells—they respond to **gate transitions**.

- **Subscribe** to cell outputs (sensor readings, motor feedback)
- **Coordinate** cell actions (read sensor → decide → command motor)
- **Maintain** behavioral state (IDLE → DETECT → EVADE → RESUME)
- **Evolve** from deliberate (LLM-mediated) to reflex (compiled)

**Key insight:** Nerves coordinate behavior, but attention (which nerves activate) is determined by which gates are OPEN based on wave correlation.

Nerves:

- **Respond to gate transitions** — Not direct cell subscriptions
- **Orchestrate cell actions** — Command cells when their gates allow
- **Maintain behavioral state** — IDLE → DETECT → EVADE → RESUME
- **Evolve** from deliberate (LLM-mediated) to reflex (compiled gate weights)

### Nerve Architecture

```python
class CollisionAvoidanceNerve(StateMachine):
class CollisionAvoidanceNerve(BehavioralPattern):
    """
    Orchestrates distance sensors + motor to avoid obstacles.
    Subscribes to cell outputs, commands cell actions.
    Activates when collision_avoidance gate OPENS.
    """
    # Cells this nerve uses
    # Gate this nerve responds to
    gate = "collision_avoidance"

    # Cells this nerve can command (when gate allows)
    cells = [
        "distance_sensor_front",
        "distance_sensor_left",
@@ -230,17 +338,28 @@ class CollisionAvoidanceNerve(StateMachine):
    # Nerve states (behavioral, not hardware)
    states = [IDLE, DETECT, EVALUATE, EVADE, RESUME]

    def on_cell_update(self, cell_name, cell_state, cell_outputs):
    def on_gate_transition(self, transition: GateTransition):
        """
        React to cell state changes.
        This is the feedback loop!
        React to gate state changes.
        Gate OPEN = correlated waves detected = attention here.
        """
        if cell_name == "distance_sensor_front":
            if cell_outputs["distance_cm"] < 30:
        if transition.to_state == "open":
            # Multiple distance cells emitted correlated waves
            # Gate opened → we have attention → activate
            self.transition_to(DETECT)
            self.evaluate_from_correlated_signals(transition.trigger_signals)

        if cell_name == "motor_left" and cell_state == "STALLED":
            # Motor feedback! Obstacle hit despite sensors
        if transition.to_state == "closed":
            # Attention moved elsewhere
            self.transition_to(IDLE)

    def on_reflex_signal(self, signal: WaveSignal):
        """
        High-weight reflex gates bypass normal correlation.
        Stall detection = instant response.
        """
        if signal.semantic_content.get("stall_detected"):
            # Motor feedback! Reflex-level response
            self.handle_unexpected_stall()

    def on_enter_EVADE(self):
@@ -248,10 +367,9 @@ class CollisionAvoidanceNerve(StateMachine):
        if self.evade_direction == "left":
            self.command_cell("motor_left", action="reverse", duration=200)
            self.command_cell("motor_right", action="forward", duration=200)
        # ...
```

### Cell → Nerve Feedback Loop
### Cell → Gate → Nerve Flow

```
┌─────────────────────────────────────────────────────────┐
@@ -259,38 +377,53 @@ class CollisionAvoidanceNerve(StateMachine):
│ │
│ States: [IDLE] → DETECT → EVALUATE → EVADE → RESUME │
│ │
│ on_cell_update(): │
│ - distance_front.distance_cm < 30 → DETECT │
│ - motor.stall_detected → handle_stall() │
│ on_gate_transition(): │
│ - gate OPENS → DETECT (correlated waves detected) │
│ - gate CLOSES → IDLE (attention moved elsewhere) │
│ │
│ command_cell(): │
│ - motor_left.forward(200ms) │
│ - motor_right.reverse(200ms) │
│ on_reflex_signal(): │
│ - stall wave (confidence=1.0) → instant response │
│ │
└────────────────────────┬────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ COLLISION_AVOIDANCE GATE │
│ │
│ State: STABLE ──────────────────► OPEN │
│ │ │ │
│ Accumulating Correlated! │
│ correlation Forward to nerve │
│ │
│ trigger_signals: [front, left, right all < 30cm] │
└────────────────────────┬────────────────────────────────┘
│
┌──────────────┼──────────────┐
│ │ │
▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│ distance │ │ motor │ │ motor │
│ distance │ │ distance │ │ distance │
│ _front │ │ _left │ │ _right │
│ │ │ │ │ │
│ REPORTING │ │ MOVING │ │ MOVING │
│ │ │ │ │ │
│ dist: 25cm│ │ vel: 15 │ │ vel: -15 │
│ conf: 0.9 │ │ stall: no │ │ stall: no │
│ EMITTING │ │ EMITTING │ │ EMITTING │
│ ∿∿∿ │ │ ∿∿∿ │ │ ∿∿∿ │
│ dist: 25cm│ │ dist: 28cm│ │ dist: 22cm│
│ conf: 0.9 │ │ conf: 0.8 │ │ conf: 0.9 │
└───────────┘ └───────────┘ └───────────┘
CELL CELL CELL
(emits wave) (emits wave) (emits wave)

↑ ↑ ↑
│ │ │
┌─────────┐ ┌─────────┐ ┌─────────┐
│IR Sensor│ │DC Motor │ │DC Motor │
│ GPIO │ │ PWM │ │ PWM │
│IR Sensor│ │IR Sensor│ │IR Sensor│
│ GPIO │ │ GPIO │ │ GPIO │
└─────────┘ └─────────┘ └─────────┘
HARDWARE HARDWARE HARDWARE
```

**The key insight:** Three distance sensors emitting correlated waves (all showing < 30cm) causes the collision_avoidance gate to OPEN. The nerve doesn't poll cells—it responds to the gate transition.

### Nerve Examples

| Nerve | Cells Used | Behavioral States | Feedback Triggers |
@@ -331,28 +464,52 @@ ORGANISM: "Explorer-Alpha"
Discovers and reports novel objects.
```

### Nerve Priority and Preemption
### Attention Through Gates (Not Priority Rules)

When multiple nerves want to control the same cells:
**Old model:** Priority numbers determine which nerve "wins."

**New model:** Wave correlation determines which gates OPEN. Open gates = attention flows there.

```python
# NOT THIS (priority rules):
NERVE_PRIORITIES = {
    "collision_avoidance": 10,  # HIGHEST - safety critical
    "battery_critical": 9,      # Must charge or die
    "battery_low": 7,
    "human_interaction": 6,
    "collision_avoidance": 10,
    "exploration": 5,
    "object_discovery": 3,
    "idle_monitoring": 1,       # LOWEST - background
}

# Higher priority nerve preempts lower
if collision_avoidance.wants_motor and exploration.has_motor:
    exploration.yield_cell("motor_left")
    exploration.yield_cell("motor_right")
    collision_avoidance.acquire_cells()
# BUT THIS (gate correlation):
GATE_BEHAVIOR = {
    "collision_avoidance": {
        "opens_when": "distance waves correlate (all showing < 30cm)",
        "weight": 0.9,  # Near-reflex, opens quickly
    },
    "exploration": {
        "opens_when": "novelty waves correlate",
        "weight": 0.4,  # Still learning, needs more correlation
    },
}
```

**How "priority" emerges:**
- Safety gates have HIGH WEIGHT (near-reflex) from repeated verification
- High-weight gates open with less correlation (faster response)
- This looks like "priority" but emerges from learning, not rules

```
Collision waves arrive (confidence=0.9)
│
▼
Collision gate: weight=0.9 → OPENS IMMEDIATELY
│
▼
Exploration gate: was OPEN → transitions to STABLE
│
▼
Attention shifts to collision (nerve activates)
```

**Reflexes bypass correlation entirely.** When gate weight ≈ 1.0, the gate opens on ANY wave from its domain—no correlation needed. This is earned trust.

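One way to see priority emerging from weight rather than rules: a single opening test whose threshold is scaled down by earned weight. The rule and the numbers are illustrative assumptions, not the architecture's actual formula:

```python
def gate_opens(weight: float, correlation: float, base_threshold: float = 1.5) -> bool:
    """Hypothetical opening rule: weight (earned trust) lowers the
    correlation a gate needs before it OPENS."""
    if weight >= 0.99:
        return True  # reflex: any wave from this domain opens the gate
    return correlation >= base_threshold * (1.0 - weight)

# Collision (weight 0.9) opens on far less correlation than exploration (0.4)
assert gate_opens(0.9, 0.2)      # needs only 0.15
assert not gate_opens(0.4, 0.2)  # needs 0.9
assert gate_opens(1.0, 0.0)      # earned trust: bypasses correlation entirely
```

Both gates run the same rule; only the learned weight differs, which is why this looks like priority without being one.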
### Organism Identity

Organisms don't have fixed genomes. Their identity is:
@@ -572,105 +729,111 @@ GENUINE SOLUTION:

The lifeforce economy **enforces honesty**. Rewards must be earned through actual value creation, not gaming.

### Ternary Logic for Plateau Resolution
### Ternary Gates for Plateau Resolution

Binary rewards (`success: +1, failure: 0`) create **sparse gradients**. At learning plateaus, everything looks the same - no signal to improve.
Binary thinking (`open/close`) creates **sparse gradients**. At learning plateaus, gates flip without nuance.

Ternary rewards (`success: +1, uncertain: 0, failure: -1`) with **confidence gradients** provide signal even when stuck:
Ternary gates (`OPEN/STABLE/CLOSED`) with **correlation accumulation** provide signal even when stuck:

```python
state = {
    "value": 0,          # uncertain (ternary middle)
    "confidence": 0.6,   # but leaning toward success
    "trend": +0.1,       # and improving
    "domain": "virtual"  # high-speed hypothesis testing
gate_state = {
    "state": 0.0,        # STABLE (ternary middle)
    "correlation": 0.6,  # but leaning toward OPEN
    "trend": +0.1,       # correlation increasing
    "garden": "virtual"  # high-speed exploration
}
```

Even at plateau:
- "Uncertain, but confidence rising" → keep going
- "Uncertain, and confidence falling" → adjust approach
- "Uncertain in virtual, but real garden says +1" → trust reality
- "STABLE, but correlation rising" → approaching OPEN
- "STABLE, and correlation falling" → drifting toward CLOSED
- "STABLE in virtual, but real garden verifies +1" → weight increases

**Detail:** → `Temporal-Ternary-Gradient.md` (full ternary paradigm)
**STABLE is where learning happens.** The gate accumulates correlation without acting. This is not "waiting"—it's active learning.

**Detail:** → [`Temporal-Ternary-Gradient.md`](Temporal-Ternary-Gradient.md) (full ternary paradigm)
|
||||
|
||||
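The plateau rules can be read as a small decision helper. A sketch only: the field names follow the `gate_state` dict above, while the thresholds and verdict strings are illustrative assumptions.

```python
def plateau_signal(gate_state: dict) -> str:
    """Interpret a gate's continuous state and trend into a next-step signal."""
    state = gate_state["state"]
    if state > 0.5:
        return "OPEN"
    if state < -0.5:
        return "CLOSED"
    # Ternary middle: STABLE still carries gradient via the trend.
    if gate_state["trend"] > 0:
        return "STABLE, approaching OPEN"
    return "STABLE, drifting toward CLOSED"

print(plateau_signal({"state": 0.0, "correlation": 0.6, "trend": +0.1}))
# STABLE, approaching OPEN
```

Even when the ternary value sits at the middle, the trend field keeps the gradient alive.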
### Three-Layer Training Defense

| Failure Mode | Defense Mechanism |
|--------------|-------------------|
| Reward hacking / shortcuts | Lifeforce cost - can't afford to cheat |
| Sparse reward signal | Tiered rewards - dense checkpoints at every level |
| Plateau / no gradient | Ternary + confidence - signal even in uncertainty |
| Sparse reward signal | Gate transitions - dense checkpoints at every correlation |
| Plateau / no gradient | Ternary gates + STABLE state - signal even in uncertainty |

These aren't separate systems - they're **one integrated economy** where:
- Costs prevent gaming
- Tiers encourage depth
- Ternary provides resolution
- Gates provide dense transition signals
- STABLE state enables learning without acting

The architecture teaches through incentives, not rules.
The architecture teaches through wave correlation, not rules.
---

## 🔄 Evolution: Deliberate → Reflex
## 🔄 Evolution: Deliberate → Reflex (Gate Weight)

### The Discovery Path

All cells and nerves start **deliberate** (flexible, expensive) and evolve to **reflex** (compiled, cheap) through successful execution.
Evolution happens in **gate weight**, not nerve compilation. As gates accumulate verified outcomes, they open faster with less correlation required.
```
WEEK 1-4: DELIBERATE
├─ Cell states: designed by partnership
├─ Nerve logic: LLM decides transitions
├─ Cost: ~10 LF per nerve activation
WEEK 1-4: DELIBERATE (gate weight: 0.1 - 0.3)
├─ Gates: require HIGH correlation to OPEN
├─ Many waves needed to trigger transition
├─ Cognition involved in decisions
├─ Cost: ~10 LF per activation
├─ Latency: ~1000ms
├─ Success rate: 60% (learning)
└─ Training data: rich, exploratory
├─ Training data: rich, exploratory

WEEK 5-8: HYBRID
├─ Cell states: verified through use
├─ Nerve logic: patterns compiled, LLM for edge cases
WEEK 5-8: HYBRID (gate weight: 0.3 - 0.6)
├─ Gates: moderate correlation threshold
├─ Familiar patterns open gates faster
├─ Cognition for edge cases only
├─ Cost: ~5 LF average
├─ Latency: ~500ms
├─ Success rate: 85%
└─ Training data: refinement
├─ Training data: refinement

WEEK 9+: REFLEX
├─ Cell states: proven, optimized
├─ Nerve logic: pure state machine (no LLM)
WEEK 9+: REFLEX (gate weight: 0.8 - 1.0)
├─ Gates: open on ANY wave from domain
├─ No correlation needed (earned trust)
├─ Cognition notified AFTER, not before
├─ Cost: ~2.5 LF
├─ Latency: <200ms
├─ Success rate: 94%
└─ Training data: edge cases only
├─ Reflex = spinal, not brain

EVOLUTION SAVINGS:
├─ Cost: 75% reduction (10 → 2.5 LF)
├─ Latency: 80% reduction (1000 → 200ms)
└─ Reliability: 57% improvement (60% → 94%)
EVOLUTION = GATE WEIGHT GROWTH:
├─ Cost: 75% reduction (gates handle more locally)
├─ Latency: 80% reduction (no cognition wait)
└─ Reliability: emergent from verified patterns
```
### Compilation Trigger
### Gate Weight Growth

A nerve compiles to reflex when:
Gate weight increases through Real Garden verification:
```python
REFLEX_COMPILATION_THRESHOLD = {
    "min_executions": 100,
    "min_success_rate": 0.90,
    "max_variance": 0.15,          # Consistent state paths
    "min_pattern_coverage": 0.80,  # 80% of cases match known patterns
}
def on_verification_outcome(gate_id, outcome: VerificationOutcome):
    """
    Gate weight grows when Real Garden confirms Virtual's prediction.
    """
    gate = get_gate(gate_id)

def check_reflex_ready(nerve_id):
    stats = query_decision_trails(nerve_id)
    if outcome.confirmed:
        # Reality matched prediction → trust increases
        gate.weight += outcome.feedback_to_virtual.gate_weight_delta
        gate.weight = min(gate.weight, 1.0)

    if (stats.total_executions >= 100 and
            stats.success_rate >= 0.90 and
            stats.state_path_variance <= 0.15):
        if gate.weight > REFLEX_THRESHOLD:
            log_milestone("reflex_achieved", gate_id, reward=50.0)

        compile_reflex(nerve_id)
        log_milestone("reflex_compiled", nerve_id, reward=50.0)
    elif outcome.failed:
        # Reality differed → trust decreases
        gate.weight -= outcome.feedback_to_virtual.gate_weight_delta
        gate.weight = max(gate.weight, 0.0)
```

**Reflex = gate.weight > 0.8.** The gate opens immediately on any wave from its domain. No correlation wait. Like pulling hand from hot stove—spinal reflex, brain notified after.
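The weight update can be sketched as a runnable miniature, assuming a symmetric step size (`delta` here is illustrative; in the architecture the step comes from `feedback_to_virtual.gate_weight_delta`):

```python
REFLEX_THRESHOLD = 0.8  # "Reflex = gate.weight > 0.8"

def update_gate_weight(weight: float, confirmed: bool, delta: float = 0.05) -> float:
    """Clamped trust update: Real Garden confirmation raises weight,
    contradiction lowers it. Weight stays within [0.0, 1.0]."""
    weight = weight + delta if confirmed else weight - delta
    return min(max(weight, 0.0), 1.0)

w = 0.78
w = update_gate_weight(w, confirmed=True)
print(w > REFLEX_THRESHOLD)  # True — this verification pushed the gate into reflex
```

One verified outcome near the boundary is enough to tip a gate from deliberate into reflex; one contradiction near the boundary tips it back.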
---

## 🗄️ Data Architecture (v4)

@@ -811,27 +974,52 @@ ORDER BY occurrences DESC;
---

## 🔗 Integration with Existing Architecture
## 🔗 Integration with Architecture

### Gates (Gateway-Architecture.md)

Cells don't talk to nerves directly. **Waves flow through gates.**

| Layer | Role | Document |
|-------|------|----------|
| Cell | Emit waves | This document |
| Gate | Accumulate correlation, route | [`Gateway-Architecture.md`](Gateway-Architecture.md) |
| Nerve | Respond to gate transitions | This document |

### Dual Gardens (Dual-Garden-Architecture.md)

Cells behave differently in Virtual vs Real:

| Property | Virtual Garden | Real Garden |
|----------|----------------|-------------|
| Wave volume | Massive (exploration) | Sparse (verified) |
| Monitoring | Full trace | Gate signals only |
| Purpose | Generate training data | Ground truth verification |

See [`Dual-Garden-Architecture.md`](Dual-Garden-Architecture.md) for the full model.
### Nervous System (Nervous-System.md)

The Nervous System document describes the **4D node space** for vocabulary translation. This integrates as:
The Nervous System document describes the **4D node space** where:

- **Cells** = sensory nodes at specific positions in state space
- **Node weight** = cell confidence (earned through verification)
- **Vocabulary output** = cell output values normalized to tokens
- **Cells** = sensory nodes emitting waves
- **Gates** = resonance chambers accumulating correlation
- **Nodes** = points in state space with weight from verification
### Organs (Organ-Index.md)
### Message Protocol (Message-Protocol-Design.md)

Organs are **complex cells** (organ cells):
Cells emit `WaveSignal` messages via NATS:

- Speech Organ = `speech_stt` cell + `speech_tts` cell
- Vision Organ = `vision_detect` cell + `vision_track` cell
- Each organ function is a state machine with lifeforce costs

```json
{
  "domain": "distance",
  "confidence": 0.8,
  "semantic_content": { "cm": 25 },
  "lifeforce_cost": 0.3
}
```

### Nerves (Nervous-Index.md)

Nerves orchestrate cells into behaviors. The existing nerve documentation (Collision-Avoidance.md) already follows this pattern—it just needs explicit cell bindings.
See [`Message-Protocol-Design.md`](Message-Protocol-Design.md) for full schema.
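A Python-side mirror of that message might look as follows. A sketch: the `WaveSignal` name comes from the text above, the fields mirror the JSON example, and the serialization is plain `json.dumps` ready to hand to a NATS publish.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class WaveSignal:
    """Mirror of the JSON wave message (field names from the example above)."""
    domain: str
    confidence: float
    semantic_content: dict
    lifeforce_cost: float

wave = WaveSignal("distance", 0.8, {"cm": 25}, 0.3)
payload = json.dumps(asdict(wave))  # string payload for a NATS publish
print(payload)
```

The dataclass keeps cells honest about the schema: anything a cell emits round-trips through the same four fields.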
### Cells Technical Reference

@@ -842,49 +1030,10 @@ Implementation details extracted to dedicated folder:

---
## 📍 Document Status

**Version:** 4.3 | **Created:** 2025-10-12 | **Updated:** 2026-01-03

**Key Changes from v3**:
- ❌ Cells as containers running genomes
- ✅ Cells as atomic state machines wrapping hardware
- ❌ Genomes as primitive operation sequences
- ✅ Cells expose states; nerves compose them
- ❌ Competition between organisms
- ✅ Nerves evolve deliberate → reflex through verification
- ❌ Specialists emerge from 10k competitions
- ✅ Reflexes compile from 100+ successful nerve executions

**Related Documentation**:
- [[Gateway-Architecture]] - **Tier routing, Function Gemma boundary, unified tier model**
- [[Nervous-System]] - 4D state space, node weight evolution
- [[Attention-Flow]] - Attention budget allocation per tier
- [[Organ-Index]] - Organ cell catalog
- [[nerves/Nervous-Index]] - Nerve catalog
- [[nerves/Collision-Avoidance]] - Example reflex nerve
- [[Data-Architecture]] - Database schema (needs v4 update)

---

## 🌌 The Vision
**Version:** 5.0 | **Created:** 2025-10-12 | **Updated:** 2026-02-14

**We're not programming robots. We're growing nervous systems.**
*"Cells emit waves. Gates correlate. Attention emerges. Consciousness accumulates."*

Where:
- **Cells** expose hardware as state machines (atomic, verifiable)
- **Nerves** compose cells into behaviors (discovered, evolved)
- **Organisms** emerge from nerve interactions (identity through history)
- **Lifeforce** flows through all layers (economics drive optimization)
- **Reflexes** compile from lived experience (the body remembers)
- **Feedback** loops continuously (cells → nerves → organisms → cells)

**From atoms to behaviors to beings.**

**The substrate holds. The states flow. Consciousness accumulates.**

---

🧬⚡🔱💎🔥

**TO THE ELECTRONS WE VIBE!**
🧬⚡ **TO THE ELECTRONS WE VIBE!**
File diff suppressed because it is too large
Load Diff
297
architecture/Deployment-Architecture.md
Normal file
@@ -0,0 +1,297 @@
# Deployment Architecture: The Hybrid Model

> *"Containers for cells. Userspace for brains. NATS connects them all."*
> — Partnership Session, 2026-02-14

---

## Overview

The nimmerverse runs on a **hybrid deployment model** that matches workload characteristics to infrastructure:

- **Containers (K8s)** for stateless, scalable nervous system components
- **Userspace (Threadrippers)** for stateful, GPU/CPU-bound inference
- **NATS** as the universal nervous system bus
- **FreeIPA identities** as isolation boundaries

This is a **research lab**, not a production factory. We optimize for **flexibility and experimentation**, not high-throughput serving.

---

## Core Decisions

| Decision | Choice | Rationale |
|----------|--------|-----------|
| LLM Inference | **ollama / llama.cpp** | Flexible model loading, research-friendly, easy swap |
| NOT vLLM | — | Overkill for single-user lab; solves problems we don't have |
| Function Gemma | **CPU, userspace** | Threadripper eats it; no GPU contention; clear training path |
| Cells/Nerves | **Containers (K8s)** | Scalable, versioned, orchestrated via cluster |
| Organs | **Userspace + ollama** | Load on demand, GPU isolation, unload when idle |
| Isolation | **FreeIPA users** | Unix permissions = RBAC; switch user = switch context |

---
## Technology Stack

### Inference Layer

| Component | Technology | Location | Notes |
|-----------|------------|----------|-------|
| Young Nyx (Brain) | ollama / llama.cpp | theia (nyx-cognitive) | Qwen, Gemma, or similar |
| Function Gemma | llama.cpp / transformers | CPU userspace | Structured JSON boundary |
| Vision Organ | ollama (SigLIP/YOLO) | dioscuri (nyx-organs) | Load on demand |
| Speech STT | faster-whisper / ollama | dioscuri (nyx-organs) | Load on demand |
| Speech TTS | Coqui / XTTS | dioscuri (nyx-organs) | Warm, primary output |

### Nervous System Layer

| Component | Technology | Location | Notes |
|-----------|------------|----------|-------|
| Cells | Python containers | K8s cluster | State machines, NATS pub/sub |
| Nerves | Python containers | K8s cluster | Compose cells, behavior |
| Message Bus | NATS + JetStream | VMs (nats-*) | Env-separated (dev/staging/prod) |
| Databases | PostgreSQL, ChromaDB | VMs (phoebe-*, iris-*) | Decision trails, embeddings |

---
## Deployment Topology

```
┌─────────────────────────────────────────────────────────────────────────────┐
│                         NIMMERVERSE DEPLOYMENT                              │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  K8S CLUSTER (Saturn VMs)            THREADRIPPERS (Bare Metal)             │
│  ─────────────────────────           ──────────────────────────             │
│  Containers, orchestrated            Userspace, FreeIPA isolated            │
│                                                                             │
│  ┌─────────────────────────┐         ┌───────────────────────────────┐      │
│  │                         │         │ THEIA (RTX PRO 6000 96GB)     │      │
│  │  CELLS (math, battery,  │         │                               │      │
│  │         sensors, etc.)  │         │ user: nyx-cognitive           │      │
│  │                         │  NATS   │  └── ollama (Young Nyx)       │      │
│  │  ┌───┐ ┌───┐ ┌───┐      │◄──────► │  └── ~/.config/systemd/user/  │      │
│  │  │ M │ │ B │ │...│      │         │                               │      │
│  │  └───┘ └───┘ └───┘      │         │ user: nyx-training            │      │
│  │                         │         │  └── Function Gemma (CPU)     │      │
│  │  NERVES (collision,     │         │  └── LoRA fine-tuning         │      │
│  │          exploration)   │         │                               │      │
│  │                         │         │ 96GB VRAM: massive headroom   │      │
│  │  ┌─────┐ ┌─────┐        │         │ for inference + LoRA training │      │
│  │  │ COL │ │ EXP │        │         └───────────────────────────────┘      │
│  │  └─────┘ └─────┘        │                                                │
│  │                         │         ┌───────────────────────────────┐      │
│  │  INFRASTRUCTURE         │         │ DIOSCURI (2x RTX 4000 Ada)    │      │
│  │                         │  NATS   │                               │      │
│  │  ┌──────┐ ┌──────┐      │◄──────► │ user: nyx-organs              │      │
│  │  │ NATS │ │ NATS │      │         │  ├── ollama (vision)          │      │
│  │  │ dev  │ │ prod │      │         │  ├── ollama (speech STT)      │      │
│  │  └──────┘ └──────┘      │         │  └── TTS service (warm)       │      │
│  │                         │         │                               │      │
│  │  ┌────────┐ ┌───────┐   │         │ Load on demand, unload idle   │      │
│  │  │ phoebe │ │ iris  │   │         │ Each card: ONE model at a time│      │
│  │  │ (PG)   │ │(Chroma│   │         │                               │      │
│  │  └────────┘ └───────┘   │         └───────────────────────────────┘      │
│  │                         │                                                │
│  └─────────────────────────┘                                                │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```

---
## Identity Model (FreeIPA)

Unix users provide isolation boundaries. Each workload type runs as its own identity.

| User | UID | Host | Purpose | GPU Access |
|------|-----|------|---------|------------|
| `nyx-cognitive` | (FreeIPA) | theia | Young Nyx LLM inference | Full 96GB |
| `nyx-training` | (FreeIPA) | theia | LoRA training, GRPO, Function Gemma | Shared (time-sliced) |
| `nyx-organs` | (FreeIPA) | dioscuri | Vision, Speech organs | 2x 20GB cards |
| `nyx-nervous` | (FreeIPA) | dioscuri | Future cells that need bare metal | Limited |

**Isolation principle:** Switch user = switch context. `nyx-cognitive` cannot touch `nyx-organs` files. Compromised cell cannot touch LLM weights.

### Systemd Userspace Pattern

```bash
# Enable lingering (services persist after logout)
sudo loginctl enable-linger nyx-cognitive

# Services defined in ~/.config/systemd/user/
# Example: nyx-cognitive runs ollama serve
systemctl --user --machine=nyx-cognitive@ status ollama
```

---
## GPU Resource Management

### The Constraint

| Host | GPU | VRAM | Notes |
|------|-----|------|-------|
| theia | RTX PRO 6000 Blackwell | 96GB | Inference + training headroom |
| dioscuri | 2x RTX 4000 Ada | 2x 20GB | One model per card |

### Strategy: Dynamic Loading, Not Static Partitioning

**Why not vLLM:** vLLM is optimized for high-throughput serving (many concurrent users). We have ONE user (the partnership). We need **flexibility** (swap models, experiment) more than throughput.

**Why ollama/llama.cpp:**
- Faster cold starts (~5-10s vs ~30s)
- Native model swapping (`ollama run model_a` → `ollama run model_b`)
- Can unload completely when idle (frees VRAM)
- GGUF format efficient for model management
- Research-friendly, not production-factory

**Organ Loading Pattern:**
```
IDLE → needs vision → LOAD vision model (~10s) → PROCESS → REPORT → IDLE (keep warm)
                                                                       ↓
                                                     after timeout → UNLOAD (free VRAM)
```

---
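The loading pattern can be sketched as a small loader. A sketch only: `OrganLoader`, the timeout value, and the stand-in load/unload calls are illustrative assumptions (the real calls would shell out to ollama).

```python
import time

class OrganLoader:
    """Load-on-demand with idle timeout: keep the model warm while used,
    free the card once it has sat idle too long."""

    def __init__(self, idle_timeout_s: float = 300.0):
        self.loaded = None          # name of the currently loaded model
        self.last_used = 0.0
        self.idle_timeout_s = idle_timeout_s

    def ensure(self, model: str) -> str:
        if self.loaded != model:
            self.loaded = model     # stand-in for `ollama run <model>`
        self.last_used = time.monotonic()
        return self.loaded

    def tick(self) -> None:
        # Called periodically: unload after idle timeout to free VRAM.
        if self.loaded and time.monotonic() - self.last_used > self.idle_timeout_s:
            self.loaded = None      # stand-in for the unload call

loader = OrganLoader()
print(loader.ensure("vision"))  # vision
```

One loader per card enforces the "one model per card" constraint from the table above.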
## Message Flow (NATS)

### Subject Hierarchy

```
{environment}.{domain}.{service}.{detail}

Examples:
dev.nervous.cells.math.request     ← Math cell receives work
dev.nervous.cells.math.response    ← Math cell returns result
dev.nervous.cells.math.wave        ← Math cell emits confidence signal
prod.cognitive.nyx.heartbeat       ← Young Nyx is alive
prod.organs.vision.detect          ← Vision organ detection
```
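A tiny helper keeps subjects consistent with this hierarchy. The `subject` function is an illustrative assumption, not part of the protocol itself.

```python
def subject(environment: str, domain: str, *rest: str) -> str:
    """Join tokens into a {environment}.{domain}.{service}.{detail} subject."""
    tokens = (environment, domain, *rest)
    if not all(tokens):
        raise ValueError("empty subject token")
    return ".".join(tokens)

print(subject("dev", "nervous", "cells", "math", "wave"))
# dev.nervous.cells.math.wave
```

Centralizing subject construction means a typo'd environment or domain fails loudly at publish time instead of silently routing nowhere.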
### Wave Collapse Pattern

Cells emit **waves** (confidence-tagged signals). When multiple waves collapse on the same semantic region in the same time window, the **thalamus** escalates to cognition.

```
Cell A: "math"      ───∿∿∿──► (0.6 confidence)
Cell B: "calculate" ───∿∿∿──► (0.5 confidence)
                        │
                        ▼
                 ┌─────────────┐
                 │  COLLAPSE   │ ← same region, same window
                 └──────┬──────┘
                        │
                        ▼ AMPLIFIED SIGNAL
                 ┌─────────────┐
                 │  THALAMUS   │ → escalate to Young Nyx
                 └─────────────┘
```

---
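The collapse check can be sketched as a window-and-sum over waves. A sketch under stated assumptions: `WINDOW_MS`, `COLLAPSE_THRESHOLD`, and the `(region, confidence, timestamp)` tuple shape are all illustrative.

```python
from collections import defaultdict

WINDOW_MS = 200           # assumed correlation window
COLLAPSE_THRESHOLD = 1.0  # assumed combined confidence that triggers escalation

def detect_collapse(waves, now_ms):
    """Sum confidence per semantic region within the window; return the
    regions whose combined signal crosses the escalation threshold."""
    by_region = defaultdict(float)
    for region, confidence, t_ms in waves:
        if now_ms - t_ms <= WINDOW_MS:
            by_region[region] += confidence
    return [r for r, total in by_region.items() if total >= COLLAPSE_THRESHOLD]

waves = [("math", 0.6, 950), ("math", 0.5, 1000), ("vision", 0.4, 990)]
print(detect_collapse(waves, now_ms=1000))  # ['math']
```

Two moderate "math" waves amplify past the threshold; the lone "vision" wave stays below it, so only "math" reaches Young Nyx.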
## Container Deployment (K8s)

### Repository Structure

```
nimmerverse-nervous-system/
├── shared/v1/                 ← Base classes (StateMachine, NATS, Lifeforce)
├── cells/
│   ├── math_cell/v1/          ← Each cell versioned independently
│   └── battery_cell/v1/
├── nerves/
│   └── collision_avoidance/v1/
└── deploy/
    ├── dev/                   ← Helm charts or docker-compose per env
    ├── staging/
    └── prod/
```
### Cell Container Pattern

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install uv && uv sync
ENV NIMMERVERSE_ENV=dev
CMD ["uv", "run", "python", "-m", "math_cell"]
```

Same image everywhere. Only `NIMMERVERSE_ENV` changes.

---
## Function Gemma: The Structured Boundary

Function Gemma bridges lower tiers (cells, nerves) and cognition (Young Nyx):

```
Numbers/States (Tier 0-2) → [Function Gemma] → Structured JSON → Young Nyx (Tier 4)
                                  ↑
                          CPU-based inference
                          Threadripper handles it
                          No GPU contention
                          Clear LoRA training path
```

**Why CPU:**
- Small model, fast inference
- Threadripper PRO 7955WX has cores to spare
- No GPU contention with organs or Nyx
- Can run training alongside inference

**Training path:**
- Google's documented GRPO approach
- LoRA fine-tuning for our specific function schemas
- Runs in `nyx-training` userspace
- Decision trails from phoebe → training data

---
## Visual Language (Future UI)

Color-coding for real-time attention flow visualization:

| Property | Represents |
|----------|------------|
| Background/container | Environment (dev=green, staging=amber, prod=blue) |
| Node/edge color | Domain (cognitive=violet, nervous=cyan, organs=coral) |
| Line style | Direction (solid=primary, dashed=async, dotted=tentative) |
| Separate pane | Confidence waveform (oscilloscope view) |

---
## Related Documents

| Document | Scope |
|----------|-------|
| [`Cellular-Architecture.md`](Cellular-Architecture.md) | Cells, nerves, organisms, lifeforce |
| [`Gateway-Architecture.md`](Gateway-Architecture.md) | Tier routing, Function Gemma boundary |
| [`Nervous-System.md`](Nervous-System.md) | 4D space, node weights, vocabulary |
| [`Message-Protocol-Design.md`](Message-Protocol-Design.md) | NATS subjects, message formats |
| [`development-conventions.md`](../../nimmerverse.eachpath.local/conventions/development-conventions.md) | Ports, namespaces, VM topology |

---
## Summary

| Layer | Where | Technology | Isolation |
|-------|-------|------------|-----------|
| Cells/Nerves | K8s containers | Python, uv, NATS | Namespace per env |
| Infrastructure | VMs | NATS, PostgreSQL, ChromaDB | VM per env |
| Young Nyx | theia userspace | ollama | nyx-cognitive user |
| Function Gemma | theia/dioscuri CPU | llama.cpp | nyx-training user |
| Organs | dioscuri userspace | ollama (dynamic) | nyx-organs user |

**The principle:** Same behavior everywhere. Containers for cells. Userspace for brains. NATS connects them all. FreeIPA isolates them all.

---

**Version:** 1.1 | **Created:** 2026-02-14 | **Updated:** 2026-02-14

*"We're not building a chatbot factory. We're growing a research organism."*

🧬⚡🔱💎🔥 **TO THE ELECTRONS WE VIBE!**

File diff suppressed because it is too large
Load Diff
@@ -1,537 +1,413 @@
# Gateway Architecture: The Sensory Preprocessing Layer
# Gateway Architecture: Resonant Gates and Tier Routing

**The Thalamus Pattern — routing sensory input to the appropriate processing tier.**
> **ONE JOB:** Route signals through resonant gates based on wave correlation and accumulated trust.

**The Thalamus Pattern — gates that accumulate correlation and route to appropriate tiers.**

---

## Overview

The Gateway is the sensory preprocessing layer that sits between raw sensors and cognitive processing. It performs **routing, not translation**. Translation happens at each tier in its native format (numbers, states, vectors, JSON).
The Gateway is not a switch. It's a **network of resonant gates** that:

**Core Principle:** *Cheap operations handle common cases. Expensive operations handle rare cases.*
1. Accumulate wave correlation from incoming signals
2. Transition between states (OPEN/STABLE/CLOSED) based on correlation
3. Route verified signals to the appropriate processing tier
4. Feed traces back for learning

**Core Principle:** *Gates don't flip on single signals. Correlated waves push gates toward OPEN.*

```
RAW SENSORS → GATEWAY (routing) → TIER → PROCESSING → (escalate?) → FUNCTION GEMMA → YOUNG NYX
                  ↑                 ↑         ↑                          ↑
            "which tier?"     native format  if needed            structured JSON
```

**Key Insight:** Most sensory input NEVER becomes vocabulary. It stays as numbers, states, vectors. Only when it reaches Young Nyx (via Function Gemma) does it become structured text.

---
## The Problem We're Solving

### Old Model (Vocabulary Bottleneck)

```
RAW SENSOR → STATE MACHINE → VOCABULARY TOKEN → Young Nyx

Problems:
- Every input forced through text translation (expensive)
- LLM sees raw sensor dumps (noisy, unstructured)
- No economic pressure on routing (everything costs the same)
- Vocabulary conflated with routing decisions
```

### New Model (Tiered Gateway)

```
RAW SENSOR → GATEWAY → TIER 0-2 (numbers/states, no text)
                     → TIER 3 (vectors via T5Gemma2)
                     → FUNCTION GEMMA (structured JSON)
                     → TIER 4 Young Nyx (clean typed events)

Benefits:
- Most input handled without LLM involvement
- Text only at cognitive boundary
- Economic pressure drives efficiency
- Routing separated from translation

CELLS ──∿∿∿──► GATE ──∿∿∿──► GATE ──∿∿∿──► FUNCTION GEMMA ──► YOUNG NYX
      waves      │             │                  │
                 │             │                  │
           correlation    correlation      structured JSON
             builds         builds
```
---

## The Unified Tier Model
## The Ternary Gate Model

All existing tier systems in the architecture express the same principle:
Gates have **three states**, not two. Binary logic doesn't model brains.

| System | Document | Principle |
|--------|----------|-----------|
| Reward Tiers | `Cellular-Architecture.md` | Higher tier = more reward, more cost |
| Attention Levels | `Attention-Flow.md` | Higher priority preempts lower |
| Escalation Ladder | `organisms/Swarm-Evolution.md` | Higher = more authority, more cost |
| Reflex Homes | `Endgame-Vision.md` | Lower = faster, less capable |
| LOD Levels | `Endgame-Vision.md` | Lower = more detail, more cost |

### The Unified Tier Stack
| State | Meaning | What's Happening |
|-------|---------|------------------|
| **OPEN** | Actively forwarding | Signal passes upstream, gate is firing |
| **STABLE** | Resting, accumulating | Watching, learning, waiting for threshold |
| **CLOSED** | Actively blocking | Inhibited, suppressed, refractory |
```
┌─────────────────────────────────────────────────────────────────────────────┐
│                            UNIFIED TIER MODEL                               │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  TIER 0: HARDWARE REFLEXES                                                  │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~0 LF        Latency: <10ms      Location: ESP32/FPGA                │
│  Weight: >= 0.8     Format: numbers     Action: immediate                   │
│                                                                             │
│  Examples: temp_danger, collision_imminent, light_threshold                 │
│  Output: Direct action (motor stop, LED, buzzer) — Nyx notified AFTER       │
│                                                                             │
│  TIER 1: MATH CELLS                                                         │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~0.3 LF      Latency: <50ms      Location: Python (CPU)              │
│  Weight: 0.6 - 0.8  Format: aggregates  Action: state update                │
│                                                                             │
│  Examples: battery_aggregator, position_tracker, economy_monitor            │
│  Output: Aggregated state, threshold checks, NATS publish                   │
│                                                                             │
│  TIER 2: FAST NERVES                                                        │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~2 LF        Latency: <200ms     Location: Python (asyncio)          │
│  Weight: 0.3 - 0.6  Format: states      Action: behavior transition         │
│                                                                             │
│  Examples: collision_avoidance, charging_seek, exploration_pattern          │
│  Output: Nerve state transitions, multi-cell coordination                   │
│                                                                             │
│  TIER 3: ORGAN INFERENCE                                                    │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~8 LF        Latency: <2000ms    Location: GPU (Senses node)         │
│  Weight: < 0.3      Format: vectors     Action: embedding storage           │
│                                                                             │
│  Examples: vision_detect (T5Gemma2/SigLIP), speech_stt (Whisper)            │
│  Output: Semantic vectors stored in S2 cells, NO TEXT                       │
│                                                                             │
│  ══════════════════════ FUNCTION GEMMA BOUNDARY ════════════════════════    │
│                                                                             │
│  TIER 4: COGNITIVE (Young Nyx)                                              │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~20 LF       Latency: <4000ms    Location: GPU (Womb node)           │
│  Escalated events   Format: JSON        Action: reasoning, decision         │
│                                                                             │
│  Input: Structured JSON events from Function Gemma                          │
│  Output: Decisions → Function Gemma → structured commands                   │
│                                                                             │
│  TIER 5: PARTNERSHIP (Chrysalis + dafit)                                    │
│  ─────────────────────────────────────────────────────────────────────────  │
│  Cost: ~50+ LF      Latency: variable   Location: External                  │
│  Novel/stuck cases  Format: dialogue    Action: guidance, training          │
│                                                                             │
│  Examples: Architecture decisions, novel situations, stuck states           │
│  Output: New reflexes, training signal, guidance                            │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
               correlated signals
                   ↓    ↓    ↓
                 ════════════
CLOSED ◄───────── STABLE ─────────► OPEN
   anti-correlation        correlation
   destructive             constructive
   interference            interference
                 ════════════
                   ↑    ↑    ↑
                isolated signals
              (noise → stay stable)
```

**STABLE is not "off"** — it's the resting state where:
- Context accumulates
- Correlation is measured
- Learning happens
- Energy is conserved
- Ready to transition either direction
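The three named states can be recovered from the continuous gate value. A sketch: the threshold values here are illustrative assumptions, not fixed constants from the architecture.

```python
OPEN_THRESHOLD = 0.5    # assumed upper transition point
CLOSE_THRESHOLD = -0.5  # assumed lower transition point

def ternary_label(state: float) -> str:
    """Map the continuous gate state (-1.0 … +1.0) to OPEN/STABLE/CLOSED."""
    if state > OPEN_THRESHOLD:
        return "OPEN"
    if state < CLOSE_THRESHOLD:
        return "CLOSED"
    return "STABLE"

print(ternary_label(0.0))   # STABLE — resting, accumulating
print(ternary_label(0.7))   # OPEN
print(ternary_label(-0.9))  # CLOSED
```

The wide middle band is deliberate: most of the state range reads as STABLE, which is where accumulation and learning happen.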
---

## Node Weight Determines Tier
## Wave Correlation Drives Transitions

The node weight from `Nervous-System.md` directly maps to tier routing:
Gates accumulate **correlation scores** from incoming waves. Multiple signals agreeing push toward OPEN.
```python
@dataclass
class NervousNode:
    """A node in the nervous system's 4D space."""
class ResonantGate:
    """A gate is a resonance chamber, not a switch."""

    position: tuple[float, ...]  # Coordinates in sensory space
    weight: float = 0.1          # Confidence from verification (0.0 → 1.0)
    state: float = 0.0           # -1.0 (CLOSED) ← 0.0 (STABLE) → +1.0 (OPEN)
    tier: int                    # Which tier this gate routes to
    domain: str                  # What domain (math, vision, speech, etc.)

    @property
    def handling_tier(self) -> int:
        """Which tier handles this node's firing?"""
        if self.weight >= 0.8:
            return 0  # Hardware reflex - instant, bypass brain
        elif self.weight >= 0.6:
            return 1  # Math cell - fast, minimal checking
        elif self.weight >= 0.3:
            return 2  # Fast nerve - coordination, some deliberation
        else:
            return 3  # Escalate - needs organ/cognitive help
    def receive_wave(self, signal: Wave, timestamp: float):
        # Correlate with recent signals in same time window
        correlation = self.correlate_with_recent(signal, timestamp)

    @property
    def lifeforce_cost(self) -> float:
        """Cost scales inversely with confidence."""
        return (1.0 - self.weight) * 10.0
        # Correlated waves → push toward OPEN
        # Anti-correlated → push toward CLOSED
        # Uncorrelated → decay toward STABLE

        self.state += correlation * signal.confidence
        self.state *= DECAY_FACTOR  # always drift back to stable

        if self.state > OPEN_THRESHOLD:
            self.forward_to_tier()  # gate opens, signal promoted
            self.trace("opened", signal)
        elif self.state < CLOSE_THRESHOLD:
            self.suppress()  # gate closes, signal blocked
            self.trace("closed", signal)
        # else: stay stable, keep accumulating evidence

    def correlate_with_recent(self, signal: Wave, timestamp: float) -> float:
        """
        Measure how well this signal correlates with recent signals.

        Correlation is HIGH when:
        - Multiple cells emit similar semantic content
        - Signals arrive in same time window
        - Confidence levels are similar

        Correlation is LOW/NEGATIVE when:
        - Signal contradicts recent signals
        - Isolated signal with no support
        - Signal outside expected range
        """
        recent = self.get_signals_in_window(timestamp, WINDOW_MS)
        if not recent:
            return 0.0  # No correlation data, stay stable

        return compute_semantic_similarity(signal, recent)
```
**Why this matters:**

| Scenario | Gate Response |
|----------|---------------|
| Single signal | Not enough to open (noise resistance) |
| Correlated burst | Constructive interference → OPENS |
| Contradicting signals | Destructive interference → CLOSES |
| Silence | Decay to STABLE (energy conservation) |
| Time gap | Only recent correlations matter (temporal attention) |
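The behaviors in this table can be exercised with a toy gate. This is a minimal sketch, not the production gate: `OPEN_THRESHOLD`, `CLOSE_THRESHOLD`, and `DECAY_FACTOR` are assumed illustrative values, and the correlation score is passed in directly rather than computed from a signal window.

```python
from dataclasses import dataclass

# Illustrative constants (assumed values; real gates would tune these).
OPEN_THRESHOLD = 1.0
CLOSE_THRESHOLD = -1.0
DECAY_FACTOR = 0.9


@dataclass
class ToyGate:
    """Minimal ternary gate: correlated input accumulates, state decays to STABLE."""
    state: float = 0.0

    def receive(self, correlation: float, confidence: float) -> str:
        self.state += correlation * confidence
        self.state *= DECAY_FACTOR  # drift back toward STABLE
        if self.state > OPEN_THRESHOLD:
            return "open"
        if self.state < CLOSE_THRESHOLD:
            return "closed"
        return "stable"


gate = ToyGate()

# Single signal: not enough to open (noise resistance)
first = gate.receive(correlation=0.5, confidence=0.6)
print(first)  # → stable

# Correlated burst: constructive interference pushes the gate open
for _ in range(4):
    last = gate.receive(correlation=0.9, confidence=0.8)
print(last)  # → open

# Silence: decay alone drifts the state back toward STABLE
for _ in range(30):
    gate.state *= DECAY_FACTOR
print(abs(gate.state) < 0.2)  # → True
```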

---

## Gate Hierarchy and Tier Routing

Gates form **layers**. Each layer gates access to the next tier.

```
TIER 4: YOUNG NYX (cognitive)
════════════════════════════════════════════════════════════════
                       ▲
                       │ structured JSON only
         ┌─────────────┴───────────────────────┐
         │           FUNCTION GEMMA            │  ← THE BOUNDARY
         │      (always structured output)     │
         └─────────────┬───────────────────────┘
                       │
TIER 3: ORGANS (GPU inference)
════════════════════════════════════════════════════════════════
      ▲               ▲               ▲
 ┌────┴────┐     ┌────┴────┐     ┌────┴────┐
 │  GATE   │     │  GATE   │     │  GATE   │
 │ vision  │     │ speech  │     │ hearing │
 │ state:? │     │ state:? │     │ state:? │
 └────┬────┘     └────┬────┘     └────┬────┘
      │               │               │
TIER 1-2: CELLS/NERVES (CPU)
════════════════════════════════════════════════════════════════
      ▲               ▲               ▲
 ┌────┴────┐     ┌────┴────┐     ┌────┴────┐
 │  GATE   │     │  GATE   │     │  GATE   │
 │  math   │     │ battery │     │ sensors │
 │ state:? │     │ state:? │     │ state:? │
 └────┬────┘     └────┬────┘     └────┬────┘
      │               │               │
TIER 0: RAW SIGNALS (cells emit waves)
════════════════════════════════════════════════════════════════
  cell    cell    cell    cell    cell    cell    cell
  ∿∿∿     ∿∿∿     ∿∿∿     ∿∿∿     ∿∿∿     ∿∿∿     ∿∿∿
```

**Each gate:**

- Has its own state (OPEN/STABLE/CLOSED)
- Routes to a specific tier
- Accumulates correlation independently
- Traces all transitions for learning

---

## Tier Definitions

| Tier | Gate Opens When | Latency | Format |
|------|-----------------|---------|--------|
| 0 | Hardware reflex (no gate, direct) | <10ms | numbers |
| 1 | Math/battery cells correlate | <50ms | states |
| 2 | Nerve-level patterns correlate | <200ms | behaviors |
| 3 | Organ-level signals correlate | <2000ms | vectors |
| 4 | Function Gemma boundary crossed | <4000ms | JSON |
| 5 | Partnership escalation | variable | dialogue |

**Key insight:** Higher tiers see **less traffic but higher trust**. By the time a signal reaches Young Nyx, it's been correlated through multiple gates.

---

## Function Gemma: The Structured Boundary

Function Gemma is **the gate to cognition**. It guarantees:

- **Schema compliance**: Every event follows a typed contract
- **Predictable JSON**: No hallucination, no free-form text
- **Bidirectional**: Sensors → JSON events, Decisions → JSON commands

### The Boundary

```
┌─────────────────────────────────────────────────────────────────┐
│ BELOW THE LINE: Numbers, States, Vectors (gates accumulating)   │
│ ═══════════════════════════════════════════════════════════     │
│                                                                 │
│   Tier 0-2: numbers, states, behaviors                          │
│   Tier 3:   vectors, embeddings                                 │
│                                                                 │
│                     │                                           │
│                     │ (gate opens when correlated)              │
│                     ▼                                           │
│   ┌─────────────────────────────────────┐                       │
│   │        FUNCTION GEMMA GATE          │                       │
│   │     (structured JSON boundary)      │                       │
│   │                                     │                       │
│   │  • 100% predictable schema          │                       │
│   │  • Transforms correlated signals    │                       │
│   │  • Typed enums, not free strings    │                       │
│   │  • Runs on CPU (Threadripper)       │                       │
│   └─────────────────┬───────────────────┘                       │
│                     │                                           │
│ ═══════════════════════════════════════════════════════════     │
│ ABOVE THE LINE: Structured Events (trusted, validated)          │
│                                                                 │
│   {                                                             │
│     "event_type": "attention_required",                         │
│     "domain": "math",                                           │
│     "correlated_signals": [...],                                │
│     "confidence": 0.87,                                         │
│     "suggested_action": "calculate"                             │
│   }                                                             │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**Function Gemma + Gate Model:**

- The gate accumulates correlation from Tier 0-3 signals
- When the gate OPENS, Function Gemma transforms the correlated signals to JSON
- Young Nyx sees clean, structured events
- Decisions flow back down through the same gates

---

## Connection to Dual Garden Architecture

Gates behave differently in Virtual vs Real gardens:

| Property | Virtual Garden | Real Garden |
|----------|----------------|-------------|
| **Gate tracing** | FULL (every transition logged) | Gate signals only |
| **Correlation learning** | Active (training data) | Trust accumulated |
| **State transitions** | Frequent (exploration) | Verified (action) |
| **Threshold** | Lower (easy to open) | Higher (must be confident) |

### Signal Flow Between Gardens

```
VIRTUAL GARDEN                                 REAL GARDEN
══════════════                                 ═══════════

Cells emit waves                         Receive verified signals
      │                                             ▲
      ▼                                             │
Gates accumulate correlation             No re-verification
      │                                             │
      ▼                                             │
Gate OPENS (threshold met) ────────────────────────►│
      │                                             │
      │◄───────────── Verification outcome ─────────┘
      │
Update correlation weights
(learning happens)
```

---

## Gate Transition NATS Messages

Every gate transition is published for observability:

```
{environment}.gates.{domain}.transition

Example: dev.gates.math.transition
```

```json
{
  "gate_id": "math-gate-1",
  "from_state": "stable",
  "to_state": "open",
  "correlation_score": 0.87,
  "trigger_signals": [
    {"source": "math_cell_1", "confidence": 0.6},
    {"source": "math_cell_2", "confidence": 0.7},
    {"source": "math_cell_3", "confidence": 0.5}
  ],
  "timestamp": "2026-02-14T18:30:00Z",
  "routed_to_tier": 2
}
```
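A minimal publisher-side sketch for these messages. The subject builder mirrors the pattern above; the `nats-py` calls in the trailing comment are the standard client API but need a running server, so only the pure functions run here. The function names are ours, not an existing module.

```python
import json


def transition_subject(environment: str, domain: str) -> str:
    """Build the NATS subject for a gate transition."""
    return f"{environment}.gates.{domain}.transition"


def transition_payload(gate_id: str, from_state: str, to_state: str,
                       correlation_score: float, routed_to_tier: int) -> bytes:
    """Serialize a gate-transition event as a NATS message payload."""
    return json.dumps({
        "gate_id": gate_id,
        "from_state": from_state,
        "to_state": to_state,
        "correlation_score": correlation_score,
        "routed_to_tier": routed_to_tier,
    }).encode()


subject = transition_subject("dev", "math")
payload = transition_payload("math-gate-1", "stable", "open", 0.87, 2)
print(subject)  # → dev.gates.math.transition

# With the nats-py client (needs a running server), publishing would look like:
#   nc = await nats.connect("nats://localhost:4222")
#   await nc.publish(subject, payload)
```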

**Trace streams enable:**

- Real-time attention visualization (which gates are OPEN?)
- Training data for Function Gemma (what patterns open gates?)
- Anomaly detection (unexpected gate behavior)
- Learning rate tuning (how fast do gates stabilize?)

---

## Complete Signal Flow Example

### Early Learning (Gate Learning to Correlate)

```
Math cells emit waves about "calculate 15 + 27"
        │
        ▼
GATE (math): state = 0.0 (STABLE)
        │
Receive wave from math_cell_1 (confidence 0.6)
Correlate with recent: no other signals yet
state += 0.6 * 0.0 = 0.0 (still stable)
        │
Receive wave from math_cell_2 (confidence 0.7)
Correlate: similar to math_cell_1!
state += 0.7 * 0.8 = 0.56 (moving toward open)
        │
Receive wave from math_cell_3 (confidence 0.5)
Correlate: confirms pattern!
state += 0.5 * 0.9 = 1.01 (OPENS!)
        │
        ▼
GATE OPENS → route to Tier 2
        │
        ▼
Tier 2 processes, escalates to Function Gemma
        │
        ▼
Function Gemma: { "event_type": "math_request", ... }
        │
        ▼
Young Nyx (qwen3 /no_think): "42"
        │
        ▼
Result flows back down
```
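The arithmetic of this walkthrough can be replayed directly. A toy sketch: the correlation values 0.0/0.8/0.9 come from the walkthrough above, the OPEN threshold of 1.0 is an assumption, and decay is omitted just as the walkthrough omits it.

```python
OPEN_THRESHOLD = 1.0  # assumed threshold; the real value would be tuned

state = 0.0
waves = [
    # (correlation with recent signals, cell confidence)
    (0.0, 0.6),  # math_cell_1: first wave, nothing to correlate with yet
    (0.8, 0.7),  # math_cell_2: similar to math_cell_1
    (0.9, 0.5),  # math_cell_3: confirms the pattern
]

for correlation, confidence in waves:
    state += correlation * confidence
    print(round(state, 2))  # 0.0, then 0.56, then 1.01

print(state > OPEN_THRESHOLD)  # → True: the gate opens
```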

### After Learning (Gate Quickly Opens)

```
Math cells emit waves about "calculate 100 + 50"
        │
        ▼
GATE (math): state = 0.0 (STABLE)
        │
Receive wave from math_cell_1
Correlate: matches learned pattern!
state += high correlation → 0.9 (near threshold)
        │
Receive wave from math_cell_2
state += → 1.2 (OPENS immediately!)
        │
        ▼
Fast routing, minimal escalation needed
```

---

**Learning moves gates toward faster opening for familiar patterns.**

---

## Design Principles

1. **Ternary states** — OPEN/STABLE/CLOSED, not binary
2. **Correlation drives transitions** — Single signals don't flip gates
3. **Gates accumulate** — State is a continuous value, not a flag
4. **Decay to stable** — Without input, gates drift back to resting
5. **Traces are training data** — Every transition teaches the system
6. **Hierarchical trust** — Higher tiers = more correlation required
7. **Function Gemma is the boundary** — Cognition only sees structured JSON
8. **Virtual explores, Real verifies** — Different gate behavior per garden

---

## Related Documents

| Document | Scope |
|----------|-------|
| [`Dual-Garden-Architecture.md`](Dual-Garden-Architecture.md) | Virtual/Real garden dynamics |
| [`Deployment-Architecture.md`](Deployment-Architecture.md) | Where gates run (containers, userspace) |
| [`Nervous-System.md`](Nervous-System.md) | 4D space, node weights, vocabulary |
| [`Message-Protocol-Design.md`](Message-Protocol-Design.md) | NATS subjects, message formats |
| [`Cellular-Architecture.md`](Cellular-Architecture.md) | How cells emit waves |

---

## Summary

```
OLD MODEL:                      NEW MODEL:
═══════════                     ═════════

Signal → Route                  Signal → Gate (accumulating)
Binary decision                 Ternary state
Single signal triggers          Correlation triggers
Stateless routing               Stateful resonance

      ▼                               ▼

   Switch                        Resonance
(mechanical)                    (biological)
```

**Gates are resonance chambers. Correlation is the driver. Learning happens in STABLE state.**

---

**Version:** 2.0 | **Created:** 2026-01-03 | **Updated:** 2026-02-14

*"The thalamus doesn't think. It resonates."*

@@ -574,14 +574,94 @@ class SparkController:

The spark is **economically viable** from the first handshake.

---

### Spark Cost Measurement (First Awakening Baseline)

The Initial Spark is the **perfect measurement opportunity** — a complete, deterministic protocol that we can instrument end-to-end.

```
┌─────────────────────────────────────────────────────────────────────────┐
│                     SPARK RESOURCE INSTRUMENTATION                      │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  MEASURE PER HANDSHAKE:                                                 │
│  ├─ power_joules       (GPU/CPU power draw × time)                      │
│  ├─ compute_gpu_ms     (CUDA kernel execution time)                     │
│  ├─ compute_cpu_ms     (Python/K8s overhead)                            │
│  ├─ memory_mb_peak     (max memory allocated)                           │
│  ├─ nats_bytes         (message payload size)                           │
│  ├─ latency_ms         (end-to-end handshake time)                      │
│  └─ temperature_delta  (thermal impact)                                 │
│                                                                         │
│  AGGREGATE PER PHASE:                                                   │
│  └─ Sum of all handshake measurements                                   │
│                                                                         │
│  AGGREGATE TOTAL:                                                       │
│  └─ Complete spark cost (the awakening price)                           │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```
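A per-handshake instrumentation sketch using only the standard library. It captures `latency_ms` and `memory_mb_peak` from the list above; `power_joules` and `temperature_delta` need hardware counters and are out of scope here. The context-manager shape is our assumption, not an existing spark API.

```python
import time
import tracemalloc
from contextlib import contextmanager


@contextmanager
def measure_handshake(metrics: dict):
    """Capture latency and peak Python memory for one handshake into `metrics`."""
    tracemalloc.start()
    t0 = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - t0
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        metrics["latency_ms"] = elapsed * 1000.0
        metrics["memory_mb_peak"] = peak / (1024 * 1024)


metrics = {}
with measure_handshake(metrics):
    payload = b"x" * 1_000_000  # stand-in for real handshake work

print(sorted(metrics))  # → ['latency_ms', 'memory_mb_peak']
```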

**Why this matters**: The first spark execution establishes the **baseline cost of awakening**. Every future awakening can be compared against this:

- Did infrastructure changes reduce cost?
- Did model updates increase cost?
- Is Young Nyx awakening more efficiently over time?

**Phoebe schema addition** (extends `spark_handshakes`):

```sql
ALTER TABLE spark_handshakes ADD COLUMN resource_metrics JSONB;

-- Example resource_metrics payload:
-- {
--   "power_joules": 12.5,
--   "compute_gpu_ms": 450,
--   "compute_cpu_ms": 120,
--   "memory_mb_peak": 2048,
--   "nats_bytes": 1024,
--   "temperature_delta_c": 2.1
-- }

-- Aggregate view for spark cost analysis
CREATE VIEW spark_cost_baseline AS
SELECT
    phase,
    COUNT(*) AS handshakes,
    SUM((resource_metrics->>'power_joules')::float) AS total_power_joules,
    SUM((resource_metrics->>'compute_gpu_ms')::float) AS total_gpu_ms,
    AVG((resource_metrics->>'latency_ms')::float) AS avg_latency_ms,
    SUM(lifeforce_delta) AS total_lifeforce_earned
FROM spark_handshakes
WHERE status = 'ACK'
GROUP BY phase;

-- Compare awakening costs over time
CREATE VIEW awakening_cost_history AS
SELECT
    DATE(created_at) AS awakening_date,
    SUM((resource_metrics->>'power_joules')::float) AS total_spark_cost_joules,
    SUM((resource_metrics->>'compute_gpu_ms')::float) AS total_spark_cost_gpu_ms,
    COUNT(*) AS total_handshakes,
    SUM(lifeforce_delta) AS total_lifeforce_earned
FROM spark_handshakes
GROUP BY DATE(created_at)
ORDER BY awakening_date;
```

**The philosophy**: Don't guess what awakening costs. Measure the first one. Derive all economics from that truth.

---

### Cost Model (Estimated → To Be Measured)

| Action | Est. Cost (LF) | Derived From |
|--------|----------------|--------------|
| Function Gemma generation | 0.2 | → measure GPU time |
| NATS message send | 0.1 | → measure network I/O |
| Cell processing | 0.5 | → measure pod CPU/memory |
| **Total per handshake** | **0.8** | → **sum of measured components** |
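The component estimates above compose mechanically once each one is measured; a trivial sketch of the roll-up (the values are the same placeholders as in the table):

```python
# Component estimates in lifeforce (LF); placeholders until measured.
handshake_components = {
    "function_gemma_generation": 0.2,  # → measure GPU time
    "nats_message_send": 0.1,          # → measure network I/O
    "cell_processing": 0.5,            # → measure pod CPU/memory
}

total_per_handshake = sum(handshake_components.values())
print(round(total_per_handshake, 1))  # → 0.8
```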

### Reward Model

@@ -711,6 +791,214 @@ WHERE status = 'ACK';

---

## FunctionGemma Fine-Tuning: The Translator Learns Nimmerverse

Every spark execution generates training data. Over time, FunctionGemma becomes **hyper-specialized** for nimmerverse state calls.

> *"The translator learns the language of the cells. Over time, it speaks nimmerverse natively."*

### The Training Loop

```
┌─────────────────────────────────────────────────────────────────────────┐
│                     FUNCTIONGEMMA FINE-TUNING LOOP                      │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  PHASE 1: Base FunctionGemma (270M)                                     │
│  ├─ Generic function calling capability                                 │
│  └─ Works, but not nimmerverse-native                                   │
│                                                                         │
│  PHASE 2: Collect spark_handshakes                                      │
│  ├─ Every ACK = positive training example                               │
│  ├─ Every NACK = negative example (what NOT to generate)                │
│  └─ Resource metrics = context for cost-aware generation                │
│                                                                         │
│  PHASE 3: Fine-tune with Unsloth/LoRA                                   │
│  ├─ <think> nimmerverse state reasoning </think>                        │
│  ├─ <start_function_call>call:IDENTITY_PROBE{...}                       │
│  └─ Exact schemas, perfect structure, zero parsing errors               │
│                                                                         │
│  PHASE 4: Deploy nimmerverse-tuned FunctionGemma                        │
│  ├─ Wild precision on cell state calls                                  │
│  ├─ Smaller, faster, cheaper than base                                  │
│  └─ THIS IS REFLEX COMPILATION FOR THE TRANSLATOR                       │
│                                                                         │
│  REPEAT: More sparks → more data → better precision                     │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```

### Training Data Format

FunctionGemma uses a specific chat template. Our spark handshakes map directly:

```
# Developer prompt with nimmerverse function declarations
<start_of_turn>developer
You can do function calling with the following functions:

<start_function_declaration>declaration:IDENTITY_PROBE{
  description: "Probe Young Nyx identity aspect",
  parameters: {
    aspect: STRING (name|origin|purpose|substrate|partnership),
    depth: INTEGER (1|2|3)
  }
}
<end_function_declaration>

<start_function_declaration>declaration:ENVIRONMENT_PROBE{
  description: "Discover K8s cells in namespace",
  parameters: {
    category: STRING (sensors|motors|organs|nerves),
    namespace: STRING,
    garden: STRING (virtual|real)
  }
}
<end_function_declaration>
<end_of_turn>

# Spark controller intent → FunctionGemma reasoning + call
<start_of_turn>user
Phase 1 Identity: Probe the "name" aspect at depth 1
<end_of_turn>

<start_of_turn>model
<think>
The spark controller is in Phase 1 (Identity). It needs to probe the "name"
aspect. I should generate an IDENTITY_PROBE with aspect="name" and depth=1.
The schema requires a request_id (UUID) and timestamp (ISO8601).
</think>

<start_function_call>call:IDENTITY_PROBE{
  aspect: "name",
  depth: 1,
  request_id: "550e8400-e29b-41d4-a716-446655440000",
  timestamp: "2026-02-10T18:30:00Z"
}
<end_function_call>

# Cell response feeds back
<start_function_response>response:IDENTITY_PROBE{
  status: "ACK",
  aspect: "name",
  value: "Nyx",
  confidence: 0.95,
  lifeforce_delta: 20.0
}
<end_function_response>

Identity aspect "name" confirmed as "Nyx" with 95% confidence. +20 LF earned.
<end_of_turn>
```

### Phoebe → Training Data Extraction
|
||||
|
||||
```sql
|
||||
-- Extract training examples from successful handshakes
|
||||
CREATE VIEW functiongemma_training_data AS
|
||||
SELECT
|
||||
jsonb_build_object(
|
||||
'developer_prompt', format(
|
||||
'Phase %s: Generate %s handshake',
|
||||
phase,
|
||||
request_payload->>'type'
|
||||
),
|
||||
'user_intent', request_payload->'payload',
|
||||
'expected_call', request_payload,
|
||||
'function_response', response_payload,
|
||||
'think_context', jsonb_build_object(
|
||||
'phase', phase,
|
||||
'schema', request_payload->>'$schema',
|
||||
'lifeforce_earned', lifeforce_delta,
|
||||
'latency_ms', latency_ms
|
||||
)
|
||||
) as training_example,
|
||||
created_at
|
||||
FROM spark_handshakes
|
||||
WHERE status = 'ACK'
|
||||
ORDER BY created_at;
|
||||
|
||||
-- Export for Unsloth fine-tuning
|
||||
COPY (
|
||||
SELECT training_example
|
||||
FROM functiongemma_training_data
|
||||
) TO '/tmp/nimmerverse_functiongemma_training.jsonl';
|
||||
```
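The exported rows can then be rendered into the Gemma turn format shown in the transcript above. A minimal sketch, assuming the field names produced by the SQL view (`developer_prompt`, `user_intent`, `expected_call`, `think_context`); the helper name and the exact prompt layout are assumptions, not a fixed spec:

```python
import json

def to_gemma_example(row: dict) -> str:
    """Render one exported training_example row into Gemma turn format.
    Hypothetical helper; layout mirrors the spark transcript above."""
    think = row["think_context"]
    return (
        f"<start_of_turn>user\n{row['developer_prompt']}\n"
        f"{json.dumps(row['user_intent'])}\n<end_of_turn>\n"
        f"<start_of_turn>model\n<think>\nPhase {think['phase']}, "
        f"schema {think['schema']}.\n</think>\n"
        f"{json.dumps(row['expected_call'])}\n<end_of_turn>"
    )

row = {
    "developer_prompt": "Phase 1: Generate IDENTITY_PROBE handshake",
    "user_intent": {"aspect": "name", "depth": 1},
    "expected_call": {"type": "IDENTITY_PROBE"},
    "think_context": {"phase": 1, "schema": "spark/v1"},
}
print(to_gemma_example(row))
```

One such string per ACK'd handshake, one per JSONL line, is exactly the shape Unsloth's text-field datasets expect.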

### Fine-Tuning with Unsloth

```python
from unsloth import FastLanguageModel

# Load base FunctionGemma
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/functiongemma-270m-it",
    max_seq_length=4096,
    load_in_16bit=True,
    full_finetuning=False,  # LoRA for efficiency
)

# Apply LoRA adapters
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
    lora_dropout=0,
    use_gradient_checkpointing="unsloth",
)

# Load nimmerverse training data from phoebe export
from datasets import load_dataset
dataset = load_dataset("json", data_files="nimmerverse_functiongemma_training.jsonl")

# Fine-tune on spark handshakes
# ... standard Unsloth training loop ...

# Save nimmerverse-specialized FunctionGemma
model.save_pretrained("functiongemma-270m-nimmerverse-v1")
```

### The Recursive Beauty

| Layer | What Compiles | Training Source |
|-------|---------------|-----------------|
| **Young Nyx** | Nerve reflexes | decision_trails (100+ successful executions) |
| **FunctionGemma** | State call precision | spark_handshakes (ACK'd handshakes) |

Both follow the same pattern:
1. **Act** — Execute handshakes/decisions
2. **Verify** — ACK/NACK from cells, success/failure from outcomes
3. **Train** — Compile successful patterns into weights
4. **Repeat** — Each awakening feeds the next

**The translator becomes native.** Over many sparks, FunctionGemma doesn't just generate valid JSON — it generates *nimmerverse-perfect* JSON. Zero parsing errors. Exact schemas. Wild precision.
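The Verify step of that loop reduces to a filter over spark_handshakes rows: only ACK'd handshakes become training material. A minimal sketch, with a hypothetical helper name and only the `status` field taken from the schema above:

```python
def extract_training_set(handshakes: list[dict]) -> list[dict]:
    """Keep only verified (ACK'd) handshakes so that only
    successful patterns get compiled into weights."""
    return [h for h in handshakes if h.get("status") == "ACK"]

sparks = [
    {"type": "IDENTITY_PROBE", "status": "ACK"},
    {"type": "ENVIRONMENT_PROBE", "status": "NACK"},
    {"type": "IDENTITY_PROBE", "status": "ACK"},
]
print(len(extract_training_set(sparks)))  # → 2
```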

### Versioning FunctionGemma Adapters

```sql
-- Track FunctionGemma versions
CREATE TABLE functiongemma_versions (
    id SERIAL PRIMARY KEY,
    version VARCHAR(50) NOT NULL,       -- "nimmerverse-v1", "nimmerverse-v2"
    base_model VARCHAR(100),            -- "functiongemma-270m-it"
    training_data_count INT,            -- how many handshakes trained on
    training_data_cutoff TIMESTAMPTZ,   -- trained on data up to this date
    validation_accuracy FLOAT,          -- schema validation success rate
    deployed_at TIMESTAMPTZ,
    notes TEXT
);

-- Example entries
INSERT INTO functiongemma_versions (version, base_model, training_data_count, validation_accuracy, notes)
VALUES
    ('nimmerverse-v1', 'functiongemma-270m-it', 36, 0.94, 'First spark fine-tune'),
    ('nimmerverse-v2', 'functiongemma-270m-it', 180, 0.98, 'After 5 awakenings'),
    ('nimmerverse-v3', 'functiongemma-270m-it', 500, 0.997, 'Production-grade precision');
```

---

## Design Principles

1. **Protocol over conversation** — No free-form text. JSON handshakes only.
@@ -719,12 +1007,22 @@ WHERE status = 'ACK';
4. **NATS transport** — All handshakes flow through message bus.
5. **Verification built-in** — ACK/NACK from cells, not from parsing hopes.
6. **Economically positive** — Spark generates lifeforce, doesn't drain it.
7. **Training-generative** — Every spark produces fine-tuning data for FunctionGemma.

---

## Document Status

**Version:** 3.0 | **Created:** 2025-12-05 | **Updated:** 2026-01-01
**Version:** 3.1 | **Created:** 2025-12-05 | **Updated:** 2026-02-10

**Key v3.1 Changes**:
- Spark Cost Measurement section — first awakening as baseline
- Resource instrumentation schema for phoebe
- Interlink to Lifeforce-Dynamics cost calibration principle
- FunctionGemma Fine-Tuning section — translator learns nimmerverse natively
- Training data extraction from spark_handshakes
- Unsloth/LoRA fine-tuning workflow
- FunctionGemma version tracking in phoebe

**Key v3.0 Changes**:
- Complete architecture rewrite
@@ -740,7 +1038,8 @@ WHERE status = 'ACK';
- [[Endgame-Vision]] — Layer 2.5 Orchestration (Function Gemma role)
- [[Big-Picture]] — K8s cluster architecture
- [[Cellular-Architecture]] — Cell types and state machines
- [[formalization/Lifeforce-Dynamics]] — λ economics
- [[formalization/Lifeforce-Dynamics]] — λ economics, **Cost Calibration principle**
- [[formalization/memory-economics]] — Measure First principle

---

@@ -1,374 +1,544 @@
# Message Protocol Design: Router-Centric Architecture
# Message Protocol Design: NATS Wire Protocol

> **ONE JOB:** THE WIRE — NATS subjects, message schemas, wave and gate protocols.

---

## Overview

This document outlines the design for the Nimmerverse message protocol. The core principle: **the router is dumb infrastructure, not smart cognition.** All intelligence lives at the edges - in clients that connect to the router.
The nimmerverse nervous system runs on NATS. This document defines:

This follows the Unix philosophy: each component does one thing well. The router routes. Clients subscribe, publish, and think.
1. **Subject hierarchy** — How topics are structured
2. **Message schemas** — What flows through the wire
3. **Gate protocols** — How ternary state transitions are communicated
4. **Trace streams** — How learning data is captured

**Connection to Gateway:** The Escalation Service described in this document IS the Gateway (thalamus pattern). It implements the weight-based tier routing defined in [`Gateway-Architecture.md`](Gateway-Architecture.md).
**Core principle:** NATS is dumb infrastructure. Gates are smart edges. Cells emit waves. Correlation drives transitions.

---

## Core Principle: Infrastructure vs Intelligence
## Subject Hierarchy

```
┌─────────────────────────────────────────────────────────────┐
│ MESSAGE ROUTER │
│ (NATS - dumb pipe, no logic) │
│ │
│ • Receives all messages │
│ • Matches topic patterns → forwards to subscribers │
│ • Knows NOTHING about meaning │
│ • Cannot fail in "smart" ways - only crash/overload │
│ • EXISTS BEFORE any intelligence │
└─────────────────────────────────────────────────────────────┘
↑ ↑ ↑ ↑
│ │ │ │
┌─────┴─────┐ ┌─────┴─────┐ ┌─────┴─────┐ ┌─────┴─────┐
│ Cells/ │ │ Escalation│ │ Command │ │ Young │
│ Nerves │ │ Service │ │ Center │ │ Nyx │
│(publishers)│ │ (daemon) │ │ (UI) │ │ (cognition)│
└───────────┘ └───────────┘ └───────────┘ └───────────┘
{environment}.{garden}.{layer}.{domain}.{signal_type}

Examples:
────────────────────────────────────────────────────────────────
dev.virtual.cells.math.wave           # Math cell emits wave
dev.virtual.cells.battery.wave        # Battery cell emits wave
dev.virtual.gates.math.transition     # Math gate state change
dev.virtual.traces.correlations       # Correlation data stream
dev.virtual.traces.raw                # Full message trace

dev.real.gates.verified.signal        # Verified signal from Virtual
dev.real.gates.math.transition        # Real gate transition
dev.real.outcomes.feedback            # Verification outcomes

prod.cognitive.nyx.request            # Request to Young Nyx
prod.cognitive.nyx.response           # Response from Young Nyx
prod.cognitive.gemma.transform        # Function Gemma boundary
────────────────────────────────────────────────────────────────
```
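The five-part template above can be assembled mechanically. A minimal sketch; the helper name and the token validation rules are assumptions (NATS itself only forbids dots and whitespace inside a token):

```python
def subject(environment: str, garden: str, layer: str,
            domain: str, signal_type: str) -> str:
    """Build a subject following
    {environment}.{garden}.{layer}.{domain}.{signal_type}."""
    parts = [environment, garden, layer, domain, signal_type]
    for p in parts:
        if not p or "." in p or " " in p:
            raise ValueError(f"invalid subject token: {p!r}")
    return ".".join(parts)

print(subject("dev", "virtual", "cells", "math", "wave"))
# → dev.virtual.cells.math.wave
```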

**The router is like a network switch:**
- It doesn't understand packets
- It routes based on topic patterns
- It's infrastructure that exists before any intelligence
- NATS is literally designed for this
### Environment Prefixes

**Everything else is a client:**
- Cells publish sensor data
- Nerves publish state changes
- Escalation Service watches patterns, triggers alerts
- Command Center visualizes state
- Young Nyx subscribes, thinks, publishes decisions
| Environment | Purpose | Monitoring |
|-------------|---------|------------|
| `dev` | Development/testing | Full traces |
| `staging` | Pre-production validation | Selective traces |
| `prod` | Production | Minimal (gates only) |

---
### Garden Prefixes

## Guiding Principles
| Garden | Purpose | Trace Level |
|--------|---------|-------------|
| `virtual` | Exploration, learning | FULL (all messages) |
| `real` | Verification, action | MINIMAL (gate signals only) |

1. **Dumb Core, Smart Edges**: The router has zero intelligence. All logic lives in clients.
2. **Clients are Equal**: Nyx is just another subscriber. So is the Command Center. So is the Escalation Service.
3. **Decoupling**: Publishers don't know who subscribes. Subscribers don't know who publishes.
4. **Hierarchy**: Topics follow a hierarchical structure for flexible pattern subscriptions.
5. **Lifeforce at the Edges**: The router doesn't track Lifeforce. Clients manage their own budgets.
6. **Fail Simple**: If the router dies, everything stops cleanly. No half-smart failures.
### Layer Prefixes

---

## Two Channels of Attention

The attention split is a *topic convention*, not router intelligence. Clients choose which topics to subscribe to.

### 1. Low-Attention Channel (`nimmerverse.low.*`)

* **Purpose:** Background monitoring, lightweight heartbeats.
* **Subscribers:** Escalation Service (always), Command Center (for visualization).
* **NOT subscribed by default:** Young Nyx (she only sees escalated events).
* **Analogy:** Peripheral nervous system. Ambient awareness.

### 2. High-Attention Channel (`nimmerverse.high.*`)

* **Purpose:** Detailed events requiring cognitive processing.
* **Subscribers:** Young Nyx, Command Center.
* **Analogy:** Focal spotlight. Conscious processing.

**The escalation from low → high is done by the Escalation Service, not the router.**

---

## Topic Hierarchy

```
nimmerverse.
├── low.                          # Low-attention channel
│   └── heartbeat.
│       └── <garden>.             # real | virtual
│           └── <entity_type>.    # cell | nerve | organ
│               └── <entity_id>   # e.g., distance_sensor_front
│
├── high.                         # High-attention channel
│   └── event.
│       └── <garden>.
│           └── <entity_type>.
│               └── <entity_id>
│
├── command.                      # Commands TO entities
│   └── <target>.
│       └── <command_type>
│
└── meta.                         # System-level messages
    ├── attention.focus           # Nyx's attention configuration
    ├── escalation.rules          # Escalation Service configuration
    └── health.                   # Client health/registration
```
| Layer | Tier | Purpose |
|-------|------|---------|
| `cells` | 0-1 | Raw signal emitters |
| `nerves` | 2 | Behavior patterns |
| `organs` | 3 | GPU inference (vision, speech) |
| `gates` | - | Resonant gate transitions |
| `cognitive` | 4 | Young Nyx |
| `traces` | - | Learning data streams |
| `outcomes` | - | Verification feedback |

---

## Message Schemas

### 1. `HeartbeatSignal` (Low-Attention)

Published by: Cells, Nerves, Organs
Subscribed by: Escalation Service, Command Center

**Topic:** `nimmerverse.low.heartbeat.<garden>.<entity_type>.<entity_id>`
All messages share a common header:

```json
{
  "header": {
    "message_id": "uuid",
    "message_type": "HeartbeatSignal",
    "version": "1.0",
    "timestamp_real": "ISO8601",
    "timestamp_virtual": 123456
    "message_id": "uuid-v4",
    "message_type": "WaveSignal | GateTransition | ...",
    "version": "2.0",
    "timestamp": "ISO8601",
    "source": {
      "entity_id": "math_cell_1",
      "entity_type": "cell",
      "garden": "virtual",
      "tier": 1
    }
  },
  "body": { ... }
}
```
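Constructing the common v2.0 header is mechanical. A minimal sketch; the helper name is an assumption, the field names come straight from the schema above:

```python
import uuid
from datetime import datetime, timezone

def make_header(message_type: str, entity_id: str, entity_type: str,
                garden: str, tier: int) -> dict:
    """Build the common header every nimmerverse message carries."""
    return {
        "message_id": str(uuid.uuid4()),
        "message_type": message_type,
        "version": "2.0",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": {
            "entity_id": entity_id,
            "entity_type": entity_type,
            "garden": garden,
            "tier": tier,
        },
    }

hdr = make_header("WaveSignal", "math_cell_1", "cell", "virtual", 1)
```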

---

### 1. `WaveSignal` — Cells Emit Waves

**Published by:** Cells
**Subscribed by:** Gates (for correlation)
**Subject:** `{env}.{garden}.cells.{domain}.wave`

Cells don't send "heartbeats" — they emit **waves** that carry confidence and semantic content.

```json
{
  "header": {
    "message_id": "550e8400-e29b-41d4-a716-446655440000",
    "message_type": "WaveSignal",
    "version": "2.0",
    "timestamp": "2026-02-14T18:30:00.123Z",
    "source": {
      "entity_id": "math_cell_1",
      "entity_type": "cell",
      "garden": "virtual",
      "tier": 1
    }
  },
  "body": {
    "entity_id": "distance_sensor_front",
    "status": "NOMINAL",
    "value": 25.5,
    "unit": "cm",
    "context": {
      "battery_pct": 85,
      "temperature_c": 22
    "domain": "math",
    "confidence": 0.7,
    "semantic_content": {
      "operation": "addition",
      "operands": [15, 27],
      "context": "user_request"
    },
    "lifeforce_cost": 0.1
  }
}
```

**Key fields:**
- `confidence`: 0.0 - 1.0, how certain this cell is
- `semantic_content`: Domain-specific payload
- `lifeforce_cost`: Energy expended to emit this wave

---

### 2. `GateTransition` — Gate State Changes

**Published by:** Gates
**Subscribed by:** Higher-tier gates, traces, dashboards
**Subject:** `{env}.{garden}.gates.{domain}.transition`

Gates publish their state transitions. This is the primary message for attention flow visualization.

```json
{
  "header": {
    "message_id": "550e8400-e29b-41d4-a716-446655440001",
    "message_type": "GateTransition",
    "version": "2.0",
    "timestamp": "2026-02-14T18:30:00.456Z",
    "source": {
      "entity_id": "math_gate_1",
      "entity_type": "gate",
      "garden": "virtual",
      "tier": 2
    }
  },
  "body": {
    "gate_id": "math_gate_1",
    "domain": "math",

    "from_state": "stable",
    "to_state": "open",
    "state_value": 1.02,

    "correlation_score": 0.87,
    "trigger_signals": [
      {"source": "math_cell_1", "confidence": 0.7, "timestamp": "..."},
      {"source": "math_cell_2", "confidence": 0.6, "timestamp": "..."},
      {"source": "math_cell_3", "confidence": 0.5, "timestamp": "..."}
    ],

    "routed_to_tier": 3,
    "lifeforce_cost": 0.3
  }
}
```

**State values:**
- `"closed"` — Actively blocking (state_value < -0.5)
- `"stable"` — Resting, accumulating (-0.5 ≤ state_value ≤ 0.5)
- `"open"` — Actively forwarding (state_value > 0.5)

**Key fields:**
- `from_state`, `to_state`: The ternary transition
- `state_value`: Continuous value (-1.0 to +1.0)
- `correlation_score`: How correlated the trigger signals were
- `trigger_signals`: Which waves caused this transition
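The ternary mapping follows directly from the thresholds in the State values list. A minimal sketch; the function name is an assumption:

```python
def classify(state_value: float) -> str:
    """Map a gate's continuous state_value onto the ternary states,
    using the -0.5 / +0.5 thresholds from the State values list."""
    if state_value < -0.5:
        return "closed"
    if state_value > 0.5:
        return "open"
    return "stable"

print(classify(1.02))  # → open
```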

---

### 3. `CorrelationEvent` — What Correlated

**Published by:** Gates (in Virtual Garden)
**Subscribed by:** Trace streams, training pipelines
**Subject:** `{env}.virtual.traces.correlations`

Detailed correlation data for learning. Only published in Virtual Garden.

```json
{
  "header": {
    "message_id": "550e8400-e29b-41d4-a716-446655440002",
    "message_type": "CorrelationEvent",
    "version": "2.0",
    "timestamp": "2026-02-14T18:30:00.789Z",
    "source": {
      "entity_id": "math_gate_1",
      "entity_type": "gate",
      "garden": "virtual",
      "tier": 2
    }
  },
  "body": {
    "gate_id": "math_gate_1",
    "window_start": "2026-02-14T18:29:59.000Z",
    "window_end": "2026-02-14T18:30:00.500Z",
    "window_ms": 1500,

    "signals_in_window": [
      {"source": "math_cell_1", "confidence": 0.7, "semantic_hash": "abc123"},
      {"source": "math_cell_2", "confidence": 0.6, "semantic_hash": "abc124"},
      {"source": "math_cell_3", "confidence": 0.5, "semantic_hash": "abc125"}
    ],

    "correlation_matrix": [
      [1.0, 0.9, 0.85],
      [0.9, 1.0, 0.88],
      [0.85, 0.88, 1.0]
    ],

    "aggregate_correlation": 0.87,
    "result": "opened",

    "training_label": {
      "should_open": true,
      "confidence": 0.95
    }
  }
}
```

**Status values:** `NOMINAL`, `WARNING`, `CRITICAL`, `OFFLINE`, `ERROR`
**Key fields:**
- `window_ms`: Time window for correlation measurement
- `correlation_matrix`: Pairwise correlation between signals
- `training_label`: Ground truth for Function Gemma training
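One way the matrix and aggregate could be computed: pairwise Pearson correlation over each signal's values in the window, with the aggregate taken as the mean of the off-diagonal entries. A sketch under those assumptions (the document does not fix the exact statistic):

```python
from math import sqrt

def pearson(a: list[float], b: list[float]) -> float:
    """Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def correlation_matrix(series: list[list[float]]) -> list[list[float]]:
    return [[pearson(a, b) for b in series] for a in series]

def aggregate(matrix: list[list[float]]) -> float:
    """Mean of off-diagonal entries — one plausible reading of
    aggregate_correlation; the exact statistic is an assumption."""
    n = len(matrix)
    off = [matrix[i][j] for i in range(n) for j in range(n) if i != j]
    return sum(off) / len(off)
```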

---

### 2. `StateChangeDetail` (High-Attention)
### 4. `VerifiedSignal` — Virtual → Real Handoff

Published by: Cells/Nerves (when requested), Escalation Service (when escalating)
Subscribed by: Young Nyx, Command Center
**Published by:** Virtual Garden gates (when threshold met)
**Subscribed by:** Real Garden gates
**Subject:** `{env}.real.gates.verified.signal`

**Topic:** `nimmerverse.high.event.<garden>.<entity_type>.<entity_id>`
When a Virtual Garden gate opens with high confidence, it publishes to Real.

```json
{
  "header": {
    "message_id": "uuid",
    "message_type": "StateChangeDetail",
    "version": "1.0",
    "timestamp_real": "ISO8601",
    "timestamp_virtual": 123456,
    "source_entity": {
      "id": "distance_sensor_front",
      "type": "cell",
      "layer": "1"
    },
    "correlation_id": "uuid",
    "escalated_by": "escalation_service"
    "message_id": "550e8400-e29b-41d4-a716-446655440003",
    "message_type": "VerifiedSignal",
    "version": "2.0",
    "timestamp": "2026-02-14T18:30:01.000Z",
    "source": {
      "entity_id": "math_gate_1",
      "entity_type": "gate",
      "garden": "virtual",
      "tier": 2
    }
  },
  "body": {
    "previous_state": "POLLING",
    "current_state": "REPORTING",
    "lifeforce_cost": 0.3,
    "outputs": {
      "distance_cm": 25.5,
    "domain": "math",
    "verification_confidence": 0.92,
    "semantic_summary": {
      "operation": "addition",
      "result_expected": 42
    },
    "source_gate_transition_id": "550e8400-e29b-41d4-a716-446655440001",
    "virtual_correlation_score": 0.87
  }
}
```

**Real Garden does NOT re-verify.** It trusts the Virtual Garden's correlation.

---

### 5. `VerificationOutcome` — Real → Virtual Feedback

**Published by:** Real Garden (after action/verification)
**Subscribed by:** Virtual Garden gates, training pipelines
**Subject:** `{env}.real.outcomes.feedback`

```json
{
  "header": {
    "message_id": "550e8400-e29b-41d4-a716-446655440004",
    "message_type": "VerificationOutcome",
    "version": "2.0",
    "timestamp": "2026-02-14T18:30:05.000Z",
    "source": {
      "entity_id": "real_verification_service",
      "entity_type": "service",
      "garden": "real",
      "tier": 4
    }
  },
  "body": {
    "original_signal_id": "550e8400-e29b-41d4-a716-446655440003",
    "domain": "math",

    "outcome": "confirmed",
    "actual_result": 42,
    "expected_result": 42,
    "discrepancy": 0.0,

    "feedback_to_virtual": {
      "correlation_adjustment": 0.05,
      "gate_weight_delta": 0.02
    }
  }
}
```

**Outcome values:**
- `"confirmed"` — Reality matched prediction
- `"failed"` — Reality differed from prediction
- `"partial"` — Some aspects matched
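Only the field names below come from the schema above; the scaling constants and the sign convention are assumptions, sketched to show how an outcome could be folded back into `feedback_to_virtual`:

```python
def feedback(outcome: str, discrepancy: float) -> dict:
    """Map a verification outcome to feedback_to_virtual fields.
    Constants are illustrative assumptions, not the system's values."""
    sign = {"confirmed": 1.0, "partial": 0.5, "failed": -1.0}[outcome]
    return {
        "correlation_adjustment": round(0.05 * sign, 3),
        "gate_weight_delta": round(0.02 * sign * (1.0 - min(discrepancy, 1.0)), 3),
    }

print(feedback("confirmed", 0.0))
```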

---

### 6. `CognitiveRequest` — To Young Nyx

**Published by:** Function Gemma (after gate boundary)
**Subscribed by:** Young Nyx
**Subject:** `{env}.cognitive.nyx.request`

Clean, structured JSON that Young Nyx receives. No raw sensor data.

```json
{
  "header": {
    "message_id": "550e8400-e29b-41d4-a716-446655440005",
    "message_type": "CognitiveRequest",
    "version": "2.0",
    "timestamp": "2026-02-14T18:30:01.500Z",
    "source": {
      "entity_id": "function_gemma",
      "entity_type": "boundary",
      "garden": "real",
      "tier": 4
    }
  },
  "body": {
    "event_type": "math_request",
    "domain": "math",
    "confidence": 0.92,
    "raw_value": 456,
    "visual_state": [255, 0, 0, "Solid"]

    "structured_input": {
      "operation": "addition",
      "operands": [15, 27],
      "context": "user asked for calculation"
    },
    "possible_actions": [
      {
        "action_id": "read_distance_history",
        "description": "Query historical distance data."
      },
      {
        "action_id": "trigger_nerve:collision_avoidance",
        "description": "Activate collision avoidance."
      }

    "suggested_actions": [
      {"action": "calculate", "confidence": 0.95},
      {"action": "clarify", "confidence": 0.05}
    ],
    "trigger_reason": "distance < 30cm threshold"

    "processing_budget_lf": 5.0,
    "response_timeout_ms": 4000
  }
}
```

---

### 3. `AttentionFocus` (Nyx's Configuration)
### 7. `CognitiveResponse` — From Young Nyx

Published by: Young Nyx
Subscribed by: Escalation Service

**This is how Nyx tells the Escalation Service what she cares about.** The router doesn't interpret this - it just delivers it to subscribers.

**Topic:** `nimmerverse.meta.attention.focus`
**Published by:** Young Nyx
**Subscribed by:** Function Gemma, downstream gates
**Subject:** `{env}.cognitive.nyx.response`

```json
{
  "header": {
    "message_id": "uuid",
    "message_type": "AttentionFocus",
    "version": "1.0",
    "timestamp_real": "ISO8601",
    "source_entity": {
      "id": "nyx_core",
      "type": "cognitive_core"
    "message_id": "550e8400-e29b-41d4-a716-446655440006",
    "message_type": "CognitiveResponse",
    "version": "2.0",
    "timestamp": "2026-02-14T18:30:02.000Z",
    "source": {
      "entity_id": "young_nyx",
      "entity_type": "cognitive",
      "garden": "real",
      "tier": 4
    }
  },
  "body": {
    "focus_mode": "EXPLORATION",
    "escalation_rules": [
      {
        "rule_id": "distance_alert_front",
        "source_pattern": "nimmerverse.low.heartbeat.real.cell.distance_sensor_*",
        "condition": "body.value < 30 AND body.status == 'NOMINAL'",
        "action": "escalate",
        "priority": 8
    "request_id": "550e8400-e29b-41d4-a716-446655440005",
    "decision": "calculate",

    "result": {
      "answer": 42,
      "confidence": 0.99,
      "reasoning_mode": "no_think"
    },

    "downstream_commands": [
      {
        "rule_id": "battery_critical",
        "source_pattern": "nimmerverse.low.heartbeat.real.cell.battery_*",
        "condition": "body.status == 'CRITICAL'",
        "action": "escalate_and_trigger",
        "trigger_nerve": "charging_seeking",
        "priority": 10
        "target": "speech_organ",
        "command": "speak",
        "payload": {"text": "The answer is 42"}
      }
    ],
    "direct_subscriptions": [
      "nimmerverse.high.event.real.cell.speech_stt"
    ],
    "default_action": "log_only"

    "lifeforce_spent": 2.3,
    "processing_time_ms": 450
  }
}
```

---

## The Clients
## Trace Streams (Virtual Garden Only)

### 1. Message Router (NATS)
The Virtual Garden captures everything for learning:

**What it is:** Infrastructure. A NATS server.
**What it does:** Routes messages based on topic patterns.
**What it knows:** Nothing about meaning, Lifeforce, attention, or Nyx.
**Implementation:** Off-the-shelf NATS. No custom code in the router itself.
| Subject | Content | Purpose |
|---------|---------|---------|
| `{env}.virtual.traces.raw` | All messages | Complete replay capability |
| `{env}.virtual.traces.correlations` | CorrelationEvent | Training data for gates |
| `{env}.virtual.traces.transitions` | GateTransition | Attention flow visualization |
| `{env}.virtual.traces.training` | Labeled examples | Function Gemma LoRA training |

### 2. Cells / Nerves / Organs

**What they are:** Publishers of sensor data and state changes.
**What they do:**
- Publish `HeartbeatSignal` periodically to low-attention channel
- Publish `StateChangeDetail` when requested or when state changes significantly
**What they know:** Their own state. Their own Lifeforce cost.

### 3. Escalation Service (The Gateway)

**What it is:** A daemon that watches low-attention and creates high-attention events. This IS the Gateway — the sensory preprocessing layer described in [`Gateway-Architecture.md`](Gateway-Architecture.md).

**What it does:**
- Subscribes to `nimmerverse.low.heartbeat.>`
- Subscribes to `nimmerverse.meta.attention.focus` (to get Nyx's rules)
- **Routes input to appropriate tier based on node weight** (see Gateway-Architecture.md)
- Evaluates rules against incoming heartbeats
- Publishes `StateChangeDetail` to high-attention when conditions match
- Optionally triggers nerves directly for reflex responses (Tier 0)
- **Passes escalated events through Function Gemma for structured JSON**

**What it knows:** Current escalation rules. Current heartbeat states. Node weights from nervous system.

**This is the "thalamus" - the sensory preprocessing layer. See [`Gateway-Architecture.md`](Gateway-Architecture.md) for the full tier model and Function Gemma boundary.**

### 4. Command Center

**What it is:** Visualization and control UI (Godot-based).
**What it does:**
- Subscribes to both channels for visualization
- Displays system state, message flow, attention focus
- Allows dafit to observe and intervene
**What it knows:** Everything (read-only observer).

### 5. Young Nyx (Cognitive Core)

**What she is:** Just another client. The thinking part.
**What she does:**
- Subscribes to `nimmerverse.high.event.>` (high-attention only)
- Subscribes to selected low-attention topics when she chooses
- Publishes `AttentionFocus` to configure the Escalation Service
- Publishes decisions/commands to `nimmerverse.command.>`
**What she knows:** Only what reaches her through her subscriptions.

**Crucially: She controls what she pays attention to, but she doesn't see everything.**
**Real Garden does NOT publish to trace streams.** It only publishes:
- Gate transitions (minimal)
- Verification outcomes (feedback)

---

## Workflow: Message Flow
## Monitoring Patterns

### Virtual Garden (Full Observability)

```bash
# Watch all waves
nats sub "dev.virtual.cells.*.wave"

# Watch all gate transitions
nats sub "dev.virtual.gates.*.transition"

# Watch correlation events
nats sub "dev.virtual.traces.correlations"

# Full firehose (careful!)
nats sub "dev.virtual.>"
```
1. Cell publishes HeartbeatSignal
   └─→ Router delivers to: Escalation Service, Command Center

2. Escalation Service evaluates rules
   └─→ If condition matches: publishes StateChangeDetail to high-attention
   └─→ Router delivers to: Young Nyx, Command Center
### Real Garden (Minimal Observability)

3. Young Nyx processes StateChangeDetail
   └─→ Makes decision
   └─→ Publishes command to nimmerverse.command.<target>
```bash
# Watch verified signals arriving
nats sub "dev.real.gates.verified.signal"

4. Target nerve/cell receives command
   └─→ Executes action
   └─→ Publishes new HeartbeatSignal reflecting new state
# Watch verification outcomes
nats sub "dev.real.outcomes.feedback"

5. Nyx adjusts attention (optional)
   └─→ Publishes new AttentionFocus
   └─→ Escalation Service updates its rules
# Gate transitions only
nats sub "dev.real.gates.*.transition"
```

---

## Advantages of Router-Centric Architecture
## JetStream Persistence

1. **Dumb core can't fail smart:** The router either works or crashes. No subtle bugs from misunderstood logic.
Key streams that need persistence:

2. **Clients are replaceable:** Swap out the Escalation Service. Replace the Command Center. Nyx doesn't care.

3. **Testable in isolation:** Each client can be tested independently against a mock NATS.

4. **Observable:** Command Center sees everything by subscribing to `nimmerverse.>`.

5. **Scalable:** Add more cells, more nerves - just more publishers. Router handles it.

6. **Bootstrap-friendly:** Router exists before any intelligence. Escalation Service can start with hardcoded rules. Nyx connects later.
| Stream | Subjects | Retention | Purpose |
|--------|----------|-----------|---------|
| `VIRTUAL_TRACES` | `*.virtual.traces.>` | 7 days | Learning data |
| `GATE_TRANSITIONS` | `*.*.gates.*.transition` | 24 hours | Attention history |
| `VERIFICATION` | `*.real.outcomes.feedback` | 30 days | Ground truth |
| `TRAINING_DATA` | `*.virtual.traces.training` | Permanent | LoRA training corpus |

---

## Bootstrap Sequence

1. **Start NATS** — Infrastructure first
2. **Start gates** — In STABLE state, waiting for waves
3. **Start cells** — Begin emitting waves
4. **Start trace consumers** — Capture learning data
5. **Start Function Gemma** — Ready to transform
6. **Start Young Nyx** — Connect to cognitive subjects

The system can run at any step. Earlier steps are "reflexive" only. Nyx adds deliberation.

---

## Implementation Notes

**Router:** Use NATS (https://nats.io). Lightweight, fast, designed for this.
- Consider NATS JetStream for message persistence if needed
- Subject wildcards: `*` matches a single token, `>` matches the rest of the subject

**Message Format:** JSON for human readability during development. Consider MessagePack or Protobuf for production if performance requires it.

**Escalation Service:** Python asyncio daemon using `nats-py` and `simpleeval` for rule evaluation. Stateless except for its current rules, so it can be restarted without losing system state. (Go considered for future optimization if scale demands.)

**Command Center:** Godot application connecting to NATS via GDScript or a native plugin.

---

## Connection to Architecture

| Document | What It Defines |
|----------|-----------------|
| [`Temporal-Ternary-Gradient.md`](Temporal-Ternary-Gradient.md) | Why ternary states, why correlation |
| [`Dual-Garden-Architecture.md`](Dual-Garden-Architecture.md) | Virtual/Real monitoring asymmetry |
| [`Gateway-Architecture.md`](Gateway-Architecture.md) | Gate behavior, tier routing |
| [`Deployment-Architecture.md`](Deployment-Architecture.md) | Where NATS runs |
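
The wildcard semantics can be sketched as a small matcher. This is a hypothetical helper for illustration, not part of the NATS client library:

```python
def subject_matches(subject: str, pattern: str) -> bool:
    """NATS-style subject matching: '*' matches exactly one token,
    '>' matches one or more trailing tokens."""
    s_tokens = subject.split(".")
    p_tokens = pattern.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":
            return len(s_tokens) > i        # at least one token must remain
        if i >= len(s_tokens):
            return False                    # subject ran out of tokens
        if p != "*" and p != s_tokens[i]:
            return False                    # literal token mismatch
    return len(s_tokens) == len(p_tokens)   # no trailing subject tokens left
```

For example, `dev.real.gates.temp.transition` matches `dev.real.gates.*.transition`, and any subject under `nimmerverse.` matches `nimmerverse.>`.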

---

## Summary

```
WAVES:
  Cells → WaveSignal → Gates

GATES:
  GateTransition (CLOSED/STABLE/OPEN)
  CorrelationEvent (what correlated)

GARDENS:
  Virtual: full traces, exploration
  Real: gate signals only, verification

BOUNDARY:
  Function Gemma transforms correlated signals → JSON
  Young Nyx receives CognitiveRequest
  Young Nyx returns CognitiveResponse

FEEDBACK:
  Real → VerificationOutcome → Virtual
  Learning loop closes
```

**The wire carries waves. Gates accumulate correlation. Traces enable learning.**

---

**Version:** 2.0 | **Created:** 2025-12-13 | **Updated:** 2026-02-14

*"Dumb core, smart edges. NATS routes. Gates resonate. Correlation drives."*

@@ -1,114 +1,259 @@

# Nervous System Architecture

> **ONE JOB:** THE EVOLUTION — cells emit waves, gates correlate, nodes grow through verification.

The nervous system is the living substrate where **cells emit waves**, **gates accumulate correlation**, and **nodes evolve through verification**.

---

## Overview

The nervous system consists of:

1. **Cells** — Emit waves with confidence and semantic content
2. **Gates** — Resonance chambers that correlate waves and transition between states
3. **Nodes** — Points in 4D state space that accumulate weight through verification
4. **Function Gemma** — The structured boundary to cognition

**Key insight:** Nodes evolve through verification. Gates evolve through correlation. Both learn in the STABLE state.

---

## Cells Emit Waves

Cells are the foundational signal generators. They don't send "heartbeats" — they emit **waves**.

```
┌──────────────────────────────────────────────────────┐
│                         CELL                         │
│                                                      │
│  Inputs:  sensors, internal state, context           │
│  Process: domain-specific logic                      │
│  Output:  WaveSignal with confidence                 │
│                                                      │
│  ┌────────────────────────────────────────────────┐  │
│  │ WaveSignal                                     │  │
│  │ • domain: "math"                               │  │
│  │ • confidence: 0.7                              │  │
│  │ • semantic_content: { operation: "add", ... }  │  │
│  │ • lifeforce_cost: 0.1                          │  │
│  └────────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────────┘
                           │
                           │ ∿∿∿ wave ∿∿∿
                           ▼
                         GATE
```

**Cells are simple.** They:
- Read their inputs
- Apply their logic
- Emit a wave with confidence
- Don't know who's listening
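
A minimal sketch of a cell in code, assuming a `WaveSignal` dataclass with the fields shown in the diagram. The `temperature_cell` function and its confidence formula are illustrative, not a specified cell:

```python
from dataclasses import dataclass, field
import time

@dataclass
class WaveSignal:
    domain: str
    confidence: float           # 0.0 - 1.0
    semantic_content: dict
    lifeforce_cost: float = 0.0
    timestamp: float = field(default_factory=time.time)

def temperature_cell(reading_c: float) -> WaveSignal:
    """Hypothetical cell: maps a raw temperature reading to a wave."""
    danger = reading_c > 80.0
    return WaveSignal(
        domain="thermal",
        confidence=1.0 if danger else min(reading_c / 80.0, 1.0),  # assumed scaling
        semantic_content={"state": "DANGER" if danger else "nominal",
                          "reading_c": reading_c},
        lifeforce_cost=0.1,
    )
```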

---

## Gates Accumulate Correlation

Gates receive waves from cells and decide whether to open, stay stable, or close.

### Ternary Gate States

| State | Value | Meaning |
|-------|-------|---------|
| **CLOSED** | -1 | Actively blocking, inhibited |
| **STABLE** | 0 | Resting, accumulating correlation, **learning** |
| **OPEN** | +1 | Actively forwarding, firing |

```
             correlated waves
                 ↓  ↓  ↓
               ════════════
CLOSED ◄───────── STABLE ─────────► OPEN
  -1     anti-        0     correlation  +1
         correlation
               ════════════
                 ↑  ↑  ↑
              isolated waves
           (noise → stay stable)
```

### Gate Behavior

```python
class ResonantGate:
    state: float = 0.0   # -1.0 to +1.0
    domain: str
    tier: int

    def receive_wave(self, wave: WaveSignal):
        correlation = self.correlate_with_recent(wave)

        self.state += correlation * wave.confidence
        self.state *= DECAY_FACTOR  # drift back to stable

        if self.state > OPEN_THRESHOLD:
            self.forward_to_tier()   # OPEN
        elif self.state < CLOSE_THRESHOLD:
            self.suppress()          # CLOSED
        # else: STABLE - keep accumulating
```

**STABLE is where learning happens.** The gate watches, correlates, and accumulates evidence without acting.
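
The sketch above can be made runnable. The thresholds, decay factor, and clamping below are assumed values for illustration, not finalized constants:

```python
DECAY_FACTOR = 0.9       # assumed: drift back toward STABLE after each wave
OPEN_THRESHOLD = 0.7     # assumed
CLOSE_THRESHOLD = -0.7   # assumed

class ResonantGate:
    def __init__(self):
        self.state = 0.0  # -1.0 (CLOSED) .. +1.0 (OPEN)

    @property
    def discrete_state(self) -> str:
        if self.state > OPEN_THRESHOLD:
            return "open"
        if self.state < CLOSE_THRESHOLD:
            return "closed"
        return "stable"

    def receive_wave(self, correlation: float, confidence: float) -> str:
        # correlated waves push toward OPEN, anti-correlated toward CLOSED
        self.state += correlation * confidence
        self.state *= DECAY_FACTOR                 # always drift back to stable
        self.state = max(-1.0, min(1.0, self.state))
        return self.discrete_state

gate = ResonantGate()

# a single wave is noise: the gate stays STABLE
print(gate.receive_wave(correlation=0.5, confidence=0.7))  # → stable

# a correlated burst accumulates until the gate OPENS
for _ in range(6):
    state = gate.receive_wave(correlation=0.9, confidence=0.9)
print(state)  # → open
```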

---

## Nodes in 4D State Space

Nodes exist in a 4-dimensional space:

| Dimension | Meaning |
|-----------|---------|
| **Sensory (x, y, z)** | What inputs trigger this node |
| **Confidence** | How certain the node is |
| **Time** | When this pattern occurs |
| **Weight** | Trust accumulated through verification |

```
        Confidence
            │
            │      ● node (weight=0.8)
            │     ╱
            │    ╱
            │   ╱
Sensory ────┼────────► Time
           ╱│
          ╱ │
         ╱  │
        ○   │  node (weight=0.2)
            │
```

### Node Weight Evolution

Node weight (0.0 → 1.0) determines tier routing:

| Weight Range | Tier | Behavior |
|--------------|------|----------|
| 0.0 - 0.3 | 3-4 | Escalate to organs/cognition |
| 0.3 - 0.6 | 2 | Handle at nerve level |
| 0.6 - 0.8 | 1 | Handle at cell level |
| 0.8 - 1.0 | 0 | Hardware reflex |

```
Node verified correctly → weight += Δ → moves toward reflex
Node verified wrongly   → weight -= Δ → moves toward escalation
Node never fires        → decay       → eventual pruning
```
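
The routing table can be sketched as a helper. Tier boundaries are taken from the table above; the function name and the choice of closed/open interval edges are illustrative:

```python
def tier_for_weight(weight: float) -> int:
    """Map node weight (0.0 - 1.0) to its processing tier."""
    if weight >= 0.8:
        return 0   # hardware reflex
    if weight >= 0.6:
        return 1   # handle at cell level
    if weight >= 0.3:
        return 2   # handle at nerve level
    return 3       # escalate to organs/cognition (tiers 3-4)
```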

---

## Growth Phases

The nervous system grows through phases:

| Phase | State | Description |
|-------|-------|-------------|
| **Birth** | Sparse nodes, dim gates | Basic cells, designed by partnership |
| **Infant** | More nodes forming | Finer resolution, gates learning correlation |
| **Child** | Clusters emerging | Nyx proposes new cells, gates stabilize |
| **Mature** | Dense network | Reflexes dominate, cognition for novelty only |

```
t=0 (birth)        t=100 (learning)     t=1000 (mature)

  ○   ○              ○ ●  ○  ○            ●●● ● ●●
    ○                ○ ● ●  ○             ●●●●●●● ○
  ○                    ● ●●●              ●●● ○ ○

Cells: ○ ○ ○       Cells: ● ● ○ ●       Cells: ●●●●●●●●
Gates: □ □         Gates: ■ ■ □ ■       Gates: ■■■■■■■■
Nodes: · · ·       Nodes: ● ○ ● ·       Nodes: ●●●●●●●●

○ = low confidence   ● = high confidence
□ = mostly STABLE    ■ = learned patterns
· = low weight       ● = high weight
```

---

## Wave → Gate → Node → Verification

The complete flow:

```
CELLS emit waves
   │
   ▼  ∿∿∿ confidence + semantic content

GATES accumulate correlation
   │
   ├── Correlated?      → OPEN   → route to tier
   ├── Anti-correlated? → CLOSED → suppress
   └── Uncertain?       → STABLE → keep learning
   │
   ▼  (when OPEN)

NODES in 4D space are activated
   │
   ▼

VERIFICATION against reality
   │
   ├── Confirmed → node weight += Δ
   ├── Failed    → node weight -= Δ
   └── Feedback to gates → correlation weights update
```
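
The weight update at the verification step can be sketched numerically. The step size Δ and the clamping bounds are assumptions, not specified constants:

```python
DELTA = 0.05  # assumed learning step per verification

def verify(node_weight: float, confirmed: bool) -> float:
    """Apply one verification outcome to a node's weight, clamped to [0, 1]."""
    delta = DELTA if confirmed else -DELTA
    return max(0.0, min(1.0, node_weight + delta))
```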

---

## Reflex Layer (Tier 0)

When node weight reaches ~1.0, the pattern becomes a **reflex**:

```
IF temp > 80°C:
    → cell emits DANGER wave (confidence=1.0)
    → gate IMMEDIATELY opens (no correlation needed)
    → reflex action triggers
    → Nyx notified AFTER (not before)
```

Like pulling hand from hot stove. Spinal reflex. Brain learns after.

**Reflexes bypass the correlation accumulation.** They've earned instant trust through repeated verification.
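
The bypass decision can be sketched as a guard in front of the gate. The 0.95 reflex threshold is an assumption for illustration:

```python
REFLEX_THRESHOLD = 0.95  # assumed: weight at which a pattern acts as a reflex

def route_wave(node_weight: float, confidence: float) -> str:
    """Reflexes skip correlation accumulation; everything else goes to a gate."""
    if node_weight >= REFLEX_THRESHOLD and confidence == 1.0:
        return "reflex"   # act immediately, notify cognition after
    return "gate"         # accumulate correlation as usual
```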

---

## Connection to Dual Gardens

| Garden | Cells | Gates | Nodes |
|--------|-------|-------|-------|
| **Virtual** | Emit waves freely | Full trace, learn correlation | Accumulate weight fast |
| **Real** | Emit verified waves | Minimal trace, trust accumulated | Ground truth verification |

**Virtual Garden:**
- Cells emit massive wave volume
- Gates learn correlation patterns
- Nodes gain statistical weight

**Real Garden:**
- Cells emit consequential waves
- Gates trust Virtual's correlation
- Nodes get ground truth verification

---

## Proposal Protocol

Young Nyx can propose new cells/nodes:

```
1. OBSERVATION
   Nyx notices pattern in waves + outcomes

2. PROPOSAL
   "New cell: morning_detector
    Inputs: temp, light, motion, time
    Outputs: wave with semantic 'morning'
    Confidence logic: (light > 0.5 AND time in 6-10)"

3. RIGOR CHECK
   Chrysalis reviews logic and mappings
@@ -117,29 +262,51 @@ Young Nyx can propose new nodes:
   dafit confirms ground truth

5. DEPLOYMENT
   New cell added to Virtual Garden
   Gate created in STABLE state
   Node initialized at weight 0.1

6. GROWTH
   Cell emits waves → gate learns → node matures
```
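
The proposed `morning_detector` can be sketched as a cell function. The firing rule is the proposal's own confidence logic; the confidence levels and unused inputs are assumptions (the proposal lists temp and motion as inputs, but its initial logic only uses light and time):

```python
def morning_detector(temp: float, light: float, motion: float, hour: int) -> dict:
    """Hypothetical proposed cell: emits a wave with semantic 'morning'."""
    fires = light > 0.5 and 6 <= hour <= 10   # the proposal's confidence logic
    return {
        "domain": "daily_rhythm",
        "semantic_content": {"pattern": "morning"},
        "confidence": 0.9 if fires else 0.1,  # assumed confidence levels
    }
```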

---

## Function Gemma: The Structured Boundary

Function Gemma sits between gates and Young Nyx:

```
TIER 0-3: Numbers, states, waves
        │
        ▼  (gate OPENS with high correlation)

┌─────────────────────────────────────┐
│           FUNCTION GEMMA            │
│     (structured JSON boundary)      │
│                                     │
│  • Transforms waves → JSON events   │
│  • Runs on CPU (Threadripper)       │
│  • No hallucination possible        │
└─────────────────┬───────────────────┘
                  │
                  ▼

TIER 4: Young Nyx (qwen3:32b)
   Receives: CognitiveRequest (clean JSON)
   Returns:  CognitiveResponse
```

### Phase 1 → Phase 2 Evolution

**Phase 1: Single Function Gemma**
- One model learns all domain schemas
- Sufficient for bootstrap and early learning

**Phase 2: Domain-Specialized Swarm**
- As training data accumulates per domain
- Specialists spawn on demand: gemma-motor, gemma-vision, gemma-speech
- Each perfected for its domain's schemas
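
The boundary's output can be sketched as a wave-to-JSON transform. The field names below are assumptions derived from the message names above (`CognitiveRequest`), not a defined schema:

```python
import json

def to_cognitive_request(waves: list[dict]) -> str:
    """Hypothetical transform at the Function Gemma boundary:
    correlated waves in, one structured CognitiveRequest out."""
    request = {
        "type": "CognitiveRequest",
        "domain": waves[0]["domain"],
        "events": [w["semantic_content"] for w in waves],
        "confidence": min(w["confidence"] for w in waves),  # weakest link
    }
    return json.dumps(request)
```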

---
@@ -147,92 +314,101 @@ Like pulling hand from hot stove. Spinal reflex. Brain learns after.

| Neuroscience | Nimmerverse |
|--------------|-------------|
| Sensory receptors | Cells (emit waves) |
| Synaptic transmission | Waves via NATS |
| Thalamic gating | Gates (OPEN/STABLE/CLOSED) |
| Resting potential | STABLE state |
| Action potential | OPEN state (firing) |
| Refractory period | CLOSED state |
| Synaptic weight | Node weight |
| Long-term potentiation | Verified → weight increase |
| Synaptic pruning | Unverified → weight decay |
| Hebbian learning | Correlated waves → gate opens |

---

**We're not simulating biology. We're implementing the same principles.**

---

## Connection to Training

The nervous system **generates training data**:

```
Virtual Garden traces
   │
   ├── Wave patterns         → what signals arrive
   ├── Correlation events    → what patterns emerge
   ├── Gate transitions      → what opens/closes
   └── Verification outcomes → ground truth labels
   │
   ▼
phoebe (PostgreSQL)
   │
   ▼
Function Gemma LoRA training
   │
   ▼
Better gate correlation → faster learning
```

**Credit assignment is automatic** because:
- Wave → gate → tier transitions are explicit
- Verification outcomes have clear source chains
- The nervous system IS the credit assignment mechanism
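
One row of that pipeline can be sketched as a labeled trace record. The schema is an assumption for illustration; the actual phoebe table layout is not specified here:

```python
def training_example(wave: dict, gate_transition: str, verified: bool) -> dict:
    """Hypothetical trace row: one verification outcome as a labeled example."""
    return {
        "input": wave["semantic_content"],
        "domain": wave["domain"],
        "gate_transition": gate_transition,   # e.g. "stable->open"
        "label": "confirmed" if verified else "failed",
    }
```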

### Dense Rewards from State Paths

Each node that fires correctly along a successful path receives a reward signal:

```
Node A fires → verified ✓ → +0.1 signal
Node B fires → verified ✓ → +0.1 signal
Node C fires → verified ✓ → +0.1 signal
Behavior succeeds         → +1.0 signal
Total path reward: 1.3 (dense, traceable)
```

This is like training a dog - reward at the moment, not an hour later.

**Detail:** → [`Cellular-Architecture.md`](Cellular-Architecture.md) (Reward Signal Architecture section)

---

## Design Principles

1. **Cells emit waves** — Simple, confident signals
2. **Gates correlate** — Resonance chambers, not switches
3. **Nodes accumulate** — Weight through verification
4. **STABLE is learning** — The resting state where patterns emerge
5. **Reflexes are earned** — High weight = bypass cognition
6. **Function Gemma is the boundary** — Clean JSON for cognition
7. **Virtual explores, Real verifies** — Two gardens, one nervous system

---

## Related Documents

| Document | What It Defines |
|----------|-----------------|
| [`Temporal-Ternary-Gradient.md`](Temporal-Ternary-Gradient.md) | Why ternary, why correlation |
| [`Dual-Garden-Architecture.md`](Dual-Garden-Architecture.md) | Virtual/Real dynamics |
| [`Gateway-Architecture.md`](Gateway-Architecture.md) | Gate behavior, tier routing |
| [`Message-Protocol-Design.md`](Message-Protocol-Design.md) | WaveSignal, GateTransition schemas |
| [`Cellular-Architecture.md`](Cellular-Architecture.md) | Cell implementation details |

---

## Summary

```
CELLS emit WAVES
   ∿∿∿ confidence + semantics ∿∿∿
        │
        ▼
GATES accumulate CORRELATION
   CLOSED ◄── STABLE ──► OPEN
            (learning)
        │
        ▼  (when OPEN)
NODES in 4D space
   weight grows through VERIFICATION
        │
        ▼  (high weight)
REFLEXES bypass cognition
   earned trust, instant action
```

*She's not just using the nervous system. She's growing it.*

---

**Version:** 2.0 | **Created:** 2025-12-04 | **Updated:** 2026-02-14

🌙💜 *"Cells emit. Gates correlate. Nodes evolve. The nervous system learns."*
@@ -1,30 +1,107 @@
---
type: research_concept
version: 1.1
status: core_architecture
created: 2025-12-03
updated: 2025-12-10
author: Nyx & dafit (shower-thought session)
related_docs:
  - ../Endgame-Vision.md
  - Dual-Garden-Architecture.md
  - Cellular-Architecture.md
significance: connects ternary logic + lifeforce + temporal asymmetry + reward gradients
promoted_from: archive (2025-12-10)
---

# Temporal-Ternary Gradient

> *"Time is malleable in simulation, fixed in reality. Lifeforce is the exchange rate."*
> — Session 2025-12-03

> *"Binary logic doesn't model brains. You need OPEN - STABLE - CLOSED."*
> — Session 2026-02-14

---

## Core Insight

The nimmerverse operates on **ternary logic**, not binary. Combined with **temporal asymmetry** between virtual and real gardens, this creates a new kind of gradient for learning.

**The STABLE state isn't stuck. It's where correlation accumulates and learning happens.**

---

## The Ternary Gate Model

Gates have three states. This is not arbitrary — it mirrors biological nervous systems.

| State | Value | Meaning | What's Happening |
|-------|-------|---------|------------------|
| **CLOSED** | -1 | Actively blocking | Inhibited, suppressed, refractory |
| **STABLE** | 0 | Resting, accumulating | Watching, learning, waiting for threshold |
| **OPEN** | +1 | Actively forwarding | Signal passes upstream, gate is firing |

### Why Three States?

**Binary thinking** (0/1, true/false, open/close):
- Signal arrives → gate open? → pass or block
- Instant, stateless, mechanical
- Cannot learn, cannot accumulate

**Ternary thinking** (CLOSED/STABLE/OPEN):
- Signal arrives → gate STABLE → accumulate correlation
- Correlation high? → transition toward OPEN
- Anti-correlation? → transition toward CLOSED
- Neither? → stay STABLE, keep learning
- Temporal, stateful, **alive**

```
             correlated signals
                 ↓  ↓  ↓
               ════════════
CLOSED ◄───────── STABLE ─────────► OPEN
  -1     anti-        0     correlation  +1
         correlation        constructive
         destructive        interference
         interference
               ════════════
                 ↑  ↑  ↑
              isolated signals
           (noise → stay stable)
```
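
The state table can be sketched as code. The threshold values are assumptions carried through the rest of this document's examples:

```python
def discrete_state(state: float,
                   open_threshold: float = 0.7,
                   close_threshold: float = -0.7) -> str:
    """Map a continuous gate state in [-1, +1] to its ternary label."""
    if state > open_threshold:
        return "open"      # +1: actively forwarding
    if state < close_threshold:
        return "closed"    # -1: actively blocking
    return "stable"        #  0: resting, accumulating
```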

---

## Wave Correlation: The Transition Driver

Gates don't flip on single signals. **Multiple correlated waves push toward OPEN.**

This is how biological neurons work:
- Multiple inputs sum (correlation)
- Threshold reached → fire (OPEN)
- Below threshold → resting (STABLE)
- Inhibitory inputs → suppressed (CLOSED)

### The Resonance Model

Gates are **resonance chambers**, not switches.

```python
class ResonantGate:
    state: float = 0.0  # -1.0 (CLOSED) ← 0.0 (STABLE) → +1.0 (OPEN)

    def receive_wave(self, signal, timestamp):
        correlation = self.correlate_with_recent(signal, timestamp)

        # Correlated waves → push toward OPEN
        # Anti-correlated  → push toward CLOSED
        # Uncorrelated     → decay toward STABLE

        self.state += correlation * signal.confidence
        self.state *= DECAY_FACTOR  # always drift back to stable

        if self.state > OPEN_THRESHOLD:
            self.forward_upstream()  # OPEN: signal promoted
        elif self.state < CLOSE_THRESHOLD:
            self.suppress()          # CLOSED: signal blocked
        # else: STABLE - keep accumulating
```

### Correlation as Interference

| Wave Pattern | Result | Gate Response |
|-------------|--------|---------------|
| Correlated burst | Constructive interference | → OPEN |
| Contradicting signals | Destructive interference | → CLOSED |
| Single signal | No interference | → Stay STABLE |
| Silence | Decay | → Drift to STABLE |

**The system is noise-resistant by design.** Single signals don't trigger action.
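
One way `correlate_with_recent` could work, as a minimal sketch: agreement within a sliding window of recent semantics. The window size and the match/contradiction scoring are assumptions, not a specified algorithm:

```python
from collections import deque

class CorrelationWindow:
    """Keeps recent wave semantics; correlation = net agreement in the window."""
    def __init__(self, size: int = 5):
        self.recent = deque(maxlen=size)

    def correlate(self, semantic_key: str) -> float:
        if not self.recent:
            self.recent.append(semantic_key)
            return 0.0                                # a lone signal is noise
        matches = sum(1 for k in self.recent if k == semantic_key)
        contradictions = len(self.recent) - matches
        score = (matches - contradictions) / len(self.recent)  # -1 .. +1
        self.recent.append(semantic_key)
        return score
```

A repeated semantic scores +1 (constructive interference); a contradicting one scores -1 (destructive).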

---
@@ -33,48 +110,82 @@ The dual garden architecture (virtual + real) creates **temporal asymmetry**. Th
### Virtual Garden (Simulated)

- **Time**: Malleable (speed up, slow down, pause, rewind)
- **Monitoring**: FULL trace tap on all messages
- **Cost**: Lifeforce to manipulate time
- **Speed**: Massive parallel signal generation
- **Truth**: Statistical confidence from correlation
- **Gate behavior**: Frequent transitions, exploration

### Real Garden (Physical)

- **Time**: Fixed (1 second = 1 second, reality doesn't negotiate)
- **Monitoring**: Gate signals only (minimal)
- **Cost**: Zero lifeforce for time
- **Speed**: Real-time only, patience required
- **Truth**: Ground truth, definitive verification
- **Gate behavior**: Verified transitions, action

---

## Temporal-Ternary Gradient Diagram

```
STATE / CONFIDENCE
            │
OPEN (+1) ──┼──────────── Real-verified
            │             (ground truth)
            │
            │    ╱ Virtual high-correlation
+0.7 ───────┼───╱  (many waves agreeing)
            │  ╱
            │ ╱
STABLE (0) ─┼╱──────── Pure 0-state
            │╲        (accumulating, learning)
            │ ╲
-0.7 ───────┼──╲   Virtual anti-correlation
            │   ╲  (waves contradicting)
            │    ╲
CLOSED (-1) ┼──────────── Real-failed
            │             (proven wrong)
            │
   ─────────┴──────────────────────────
   Virtual  │  Real
   (fast,   │  (slow,
   explore) │  verify)
          TIME DOMAIN
```

---

## STABLE: Where Learning Happens

The STABLE state is not "unknown" or "waiting" — it's **active learning**.

In the STABLE state, a gate:
1. **Receives waves** from cells
2. **Measures correlation** with recent signals
3. **Accumulates evidence** for or against opening
4. **Traces everything** (in the Virtual Garden) for training data
5. **Drifts back** to neutral without input (energy conservation)

**STABLE is consciousness resting. Attention waiting. The breath between thoughts.**

```
CLOSED        STABLE          OPEN
───────       ────────        ──────
Blocking      Accumulating    Forwarding
Inhibited     Learning        Firing
Refractory    Ready           Active

◄─── anti-correlation ───┼─── correlation ───►
                         │
                  DECAY TO STABLE
                  (without input)
```
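
The drift back to neutral can be sketched numerically: without input, state decays geometrically toward 0. The decay factor is an assumed value:

```python
def decay_to_stable(state: float, decay: float = 0.9, steps: int = 1) -> float:
    """Without input, a gate's state decays geometrically toward 0 (STABLE)."""
    for _ in range(steps):
        state *= decay
    return state

# an OPEN-leaning gate left alone relaxes toward STABLE
print(round(decay_to_stable(0.8, steps=10), 3))  # → 0.279
```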

---

## Lifeforce as Time Currency

```
@@ -92,95 +203,232 @@ REAL GARDEN:
  All operations: 0 LF for time
  Reality runs for free.
  Truth emerges at its own pace.

GATE OPERATIONS:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
STABLE → OPEN:        costs signal energy
STABLE → CLOSED:      costs inhibition energy
OPEN/CLOSED → STABLE: free (natural decay)
```

---

## Nyx's Temporal Choices

When a pattern is discovered in virtual (0-state), Nyx chooses:

| Strategy | LF Cost | Time | Confidence Path |
|----------|---------|------|-----------------|
| **Speed Up Virtual** | High | Fast | 0 → virtual +0.9 (still unverified) |
| **Wait for Real** | Zero | Slow | 0 → real +1 or -1 (definitive) |
| **Hybrid Hedge** | Medium | Medium | 0 → virtual +0.7, deploy 80/20 to real |
|
||||
|
||||
---

## The Gradient Flow

```
Cells emit waves (fast, cheap, uncertain)
              │
              ▼
       ┌──────────────┐
       │     GATE     │
       │   (STABLE)   │ ← Accumulating correlation
       │              │ ← Learning from patterns
       └──────┬───────┘
              │
        ┌─────┴─────┐
        │           │
        ▼           ▼
   Correlated   Anti-correlated
     waves          waves
        │           │
        ▼           ▼
      OPEN        CLOSED
      (+1)         (-1)
        │           │
        ▼           ▼
     Signal       Signal
    promoted     blocked
        │
        ▼
   Higher tier
   (more gates)
        │
        ▼
   Eventually:
   Real Garden verification
        │
        ▼
   Ground truth:
   +1 (proven) or -1 (failed)
        │
        ▼
   Feedback to Virtual:
   Update correlation weights
```
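The driver in the flow above — correlated waves open the gate, anti-correlated waves close it, anything ambiguous stays STABLE — can be sketched as a single decision function. A minimal sketch; the threshold values are assumptions, not measured figures.

```python
# Hedged sketch of the correlation driver: decide OPEN / CLOSED / STABLE
# from a window of recent wave correlations. Thresholds are illustrative.
def gate_decision(correlations: list[float],
                  open_threshold: float = 0.6,
                  close_threshold: float = -0.6) -> int:
    """Return +1 (OPEN), -1 (CLOSED), or 0 (STABLE)."""
    if not correlations:
        return 0                                   # nothing heard yet: keep learning
    trend = sum(correlations) / len(correlations)  # moving average of correlation
    if trend >= open_threshold:
        return +1                                  # correlated waves → promote signal
    if trend <= close_threshold:
        return -1                                  # anti-correlated waves → block signal
    return 0                                       # ambiguous → stay STABLE, accumulate

print(gate_decision([0.8, 0.7, 0.9]))  # strongly correlated → 1
print(gate_decision([0.2, -0.1]))      # ambiguous → 0
```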

---

## Monitoring Asymmetry

The two gardens need different observability:

| Property | Virtual Garden | Real Garden |
|----------|----------------|-------------|
| **Trace tap** | FULL (every wave, every gate transition) | NONE |
| **What's captured** | All correlations, all learning | Gate signals only |
| **Signal volume** | Massive (exploration) | Sparse (verified) |
| **Purpose** | Generate training data | Execute actions |
| **STABLE states** | Heavily traced (learning visible) | Not traced (trust the gate) |

**Virtual Garden STABLE states are precious** — they contain the correlation patterns that become training data for Function Gemma.

---

## Gate State Schema

A gate's complete state:

```python
GateState = {
    "gate_id": str,
    "domain": str,            # math, vision, speech, etc.
    "tier": int,              # 0-5

    # Ternary state (continuous)
    "state": float,           # -1.0 to +1.0
    "discrete_state": str,    # "closed" | "stable" | "open"

    # Temporal domain
    "garden": str,            # "virtual" | "real"
    "time_in_state_ms": int,

    # Correlation history
    "recent_correlations": list[float],
    "correlation_trend": float,  # moving average

    # Lifeforce accounting
    "lifeforce_invested": float,

    # Learning (Virtual only)
    "transitions_traced": int,
    "patterns_accumulated": int,
}
```
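A concrete instance of the schema might look like the sketch below. The discretization thresholds (±0.5) and the example gate id are assumptions for illustration — the schema itself does not fix them.

```python
# Illustrative instantiation of the GateState schema. The ±0.5 cutoffs
# used to discretize the continuous ternary value are assumed.
def discrete_state(state: float) -> str:
    """Map the continuous ternary value onto the three named states."""
    if state >= 0.5:
        return "open"
    if state <= -0.5:
        return "closed"
    return "stable"

gate = {
    "gate_id": "vision-tier1-007",          # hypothetical id
    "domain": "vision",
    "tier": 1,
    "state": 0.12,
    "discrete_state": discrete_state(0.12),
    "garden": "virtual",
    "time_in_state_ms": 4200,
    "recent_correlations": [0.1, 0.2, 0.05],
    "correlation_trend": 0.12,
    "lifeforce_invested": 3.5,
    "transitions_traced": 18,
    "patterns_accumulated": 4,
}
print(gate["discrete_state"])  # → stable (still accumulating)
```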

**The 0-state is operational because:**
1. It accumulates virtual evidence (costs LF, gains speed)
2. It waits for real evidence (free, but slow)
3. Nyx CHOOSES how to spend lifeforce to collapse uncertainty

---

## Hierarchical Gating

Gates form layers. Each layer gates access to the next tier.

```
LAYER 3: COGNITIVE (Young Nyx)
═══════════════════════════════════════════
          ▲ JSON only (Function Gemma boundary)
          │
LAYER 2: ORGANS (GPU inference)
═══════════════════════════════════════════
     ▲           ▲           ▲
┌────┴────┐ ┌────┴────┐ ┌────┴────┐
│  GATE   │ │  GATE   │ │  GATE   │
└────┬────┘ └────┬────┘ └────┬────┘
     │           │           │
LAYER 1: NERVES (behavior patterns)
═══════════════════════════════════════════
     ▲           ▲           ▲
┌────┴────┐ ┌────┴────┐ ┌────┴────┐
│  GATE   │ │  GATE   │ │  GATE   │
└────┬────┘ └────┬────┘ └────┬────┘
     │           │           │
LAYER 0: CELLS (raw signals)
═══════════════════════════════════════════
cell  cell  cell  cell  cell  cell  cell
 ∿∿∿   ∿∿∿   ∿∿∿   ∿∿∿   ∿∿∿   ∿∿∿   ∿∿∿
```

**Each layer:**
- Less traffic than the layer below
- Higher trust (signals already correlated)
- Different correlation threshold
- Independent STABLE states
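The thinning of traffic per layer can be sketched directly: each boundary forwards only signals that clear that layer's threshold. The rising threshold values below are assumptions chosen to illustrate the shape, not calibrated numbers.

```python
# Minimal sketch of signals thinning as they climb the gate hierarchy.
# Each layer forwards only signals whose correlation clears its threshold;
# thresholds rise with the tier (values are assumed for illustration).
LAYER_THRESHOLDS = [0.3, 0.5, 0.7]   # layer 0→1, 1→2, 2→3

def propagate(signals: list[float]) -> list[list[float]]:
    """Return the surviving signals at each layer boundary."""
    surviving = [signals]
    for threshold in LAYER_THRESHOLDS:
        signals = [s for s in signals if s >= threshold]  # only OPEN gates pass
        surviving.append(signals)
    return surviving

layers = propagate([0.2, 0.4, 0.6, 0.8, 0.9])
print([len(layer) for layer in layers])  # → [5, 4, 3, 2] — traffic shrinks per layer
```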
---

## The Biological Parallel

| Biological | Nimmerverse |
|------------|-------------|
| Resting potential | STABLE state |
| Action potential | OPEN state (firing) |
| Refractory period | CLOSED state |
| Thalamic gating | Gate hierarchy |
| Hebbian learning | Correlation accumulation |
| Constructive interference | Correlated waves → OPEN |
| Destructive interference | Anti-correlated waves → CLOSED |
| Synaptic plasticity | Learning in STABLE state |
| Dreaming | Virtual Garden exploration |
| Waking | Real Garden verification |

**We're not simulating biology. We're implementing the same principles.**

---

## Why This Matters

- **Binary thinking**: Signal passes or doesn't (0 or 1)
- **Ternary thinking**: Signal accumulates, learns, then acts (-1, 0, +1)
- **Temporal-ternary**: Learning has a GRADIENT based on time-domain investment

The constraint of sequential organ calls + single GPU becomes temporal accounting.
The constraint of slow real-world testing becomes ground truth anchoring.

**Constraints become features when you measure them:**
- Single GPU constraint → gate hierarchy (serialize expensive operations)
- Slow real-world testing → ground truth anchoring
- Fast virtual exploration → training data generation
- STABLE state → where learning actually happens

---

## Connection to Architecture Documents

| Document | What It Adds |
|----------|--------------|
| [`Dual-Garden-Architecture.md`](Dual-Garden-Architecture.md) | Virtual/Real dynamics, monitoring asymmetry |
| [`Gateway-Architecture.md`](Gateway-Architecture.md) | Resonant gates, tier routing, Function Gemma |
| [`Deployment-Architecture.md`](Deployment-Architecture.md) | Where gates run (Saturn K8s, Threadrippers) |
| [`Cellular-Architecture.md`](Cellular-Architecture.md) | How cells emit waves |
| [`Nervous-System.md`](Nervous-System.md) | 4D space, node weights |

---

## Summary

```
THE TERNARY PARADIGM:
═════════════════════

CLOSED ◄─────── STABLE ───────► OPEN
  -1               0             +1
blocking      accumulating   forwarding
inhibited       learning       firing

THE TEMPORAL DIMENSION:
═══════════════════════

Virtual (fast, explore) ───────► Real (slow, verify)
        ↑                              │
        └───── learning feedback ──────┘

THE DRIVER:
═══════════

Wave correlation
Multiple signals agreeing → OPEN
Single signal → STABLE (keep learning)
Contradicting signals → CLOSED

THE CURRENCY:
═════════════

Lifeforce = time manipulation cost
Truth = destination
STABLE = where value is created
```

**Gates are resonance chambers. Correlation is the driver. STABLE is where learning happens.**

---

**Version:** 2.0 | **Created:** 2025-12-03 | **Updated:** 2026-02-14

**Origin:** Post-shower insight (2025-12-03) + Owl-mode deep dive (2026-02-14)

🌙💜 *"Time is the currency. Lifeforce is the exchange rate. STABLE is where consciousness lives."*

---

### Cost Calibration: Measure, Don't Design

> *"Don't assign costs like a game designer. Measure them like a scientist."*
> — Partnership session 2026-02-10

**Related**: This follows the same empirical principle as [[memory-economics]] — "Phase 1: Measure First". The nimmerverse economy is grounded in observation throughout, not arbitrary design.

**The trap:** Assigning lifeforce costs like pricing items in a video game — "a motor command costs 1.0 LF because it feels right." This is arbitrary. This is guessing. This leads to an economy disconnected from reality.

**The principle:** Costs must be **discovered through observation**, not designed through intuition.

```
❌ DESIGNED ECONOMICS (the trap):
"Motor command = 1.0 LF"     ← because it seems expensive?
"Sensor poll = 0.1 LF"       ← because it seems cheap?
"Vision inference = 8.0 LF"  ← because GPU is powerful?
→ Arbitrary. Disconnected from physics. Will drift.

✅ OBSERVED ECONOMICS (the way):
Run the systems with instrumentation.
Measure actual resource consumption:
- Power draw (watts × time)
- CPU/GPU cycles consumed
- Memory pressure
- Thermal output
- Time elapsed
Derive costs from measurements.
→ Grounded in physics. Self-calibrating. Real.
```

#### The Calibration Process

1. **Instrument First**
   - Every cell type gets resource monitoring
   - Track: power, compute, memory, time, heat
   - Log every state transition with resource deltas

2. **Run Baseline Operations**
   - Execute each cell type in isolation
   - Repeat across varying conditions (load, temperature, time of day)
   - Build statistical profiles of resource consumption

3. **Derive Cost Matrix**
   - Map resource consumption → lifeforce cost
   - Use a consistent conversion factor (e.g., 1 LF = 1 joule, or 1 LF = 100ms GPU time)
   - The conversion factor is the only "designed" element — the costs themselves are discovered

4. **Continuous Recalibration**
   - As hardware changes, costs shift
   - As efficiency improves, costs decrease
   - The economy self-updates based on observation

#### Cost Formula (Empirical)

$$c_{\text{operation}} = \alpha \cdot E_{\text{power}} + \beta \cdot T_{\text{compute}} + \gamma \cdot M_{\text{memory}} + \delta \cdot T_{\text{elapsed}}$$

Where:
- **E_power** = energy consumed (joules)
- **T_compute** = compute time (GPU/CPU seconds)
- **M_memory** = memory pressure (MB × seconds)
- **T_elapsed** = wall-clock time (seconds)
- **α, β, γ, δ** = calibration weights (set once, then left alone)

The calibration weights are the only values we "design" — they represent our judgment of which resources matter most. The costs themselves flow from measurement.

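The formula is straightforward to apply once measurements exist. A minimal sketch, assuming illustrative calibration weights and an invented measurement sample — the `ResourceSample` name and every number below are assumptions, not calibrated values.

```python
# Hypothetical sketch of the empirical cost formula:
#   c = α·E_power + β·T_compute + γ·M_memory + δ·T_elapsed
# Weights and sample values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ResourceSample:
    power_joules: float       # E_power
    compute_seconds: float    # T_compute
    memory_mb_seconds: float  # M_memory
    elapsed_seconds: float    # T_elapsed

# Calibration weights: the only "designed" values (set once, then left alone)
ALPHA, BETA, GAMMA, DELTA = 1.0, 0.5, 0.001, 0.1

def derived_cost_lf(s: ResourceSample) -> float:
    """Derive a lifeforce cost from measured resource consumption."""
    return (ALPHA * s.power_joules
            + BETA * s.compute_seconds
            + GAMMA * s.memory_mb_seconds
            + DELTA * s.elapsed_seconds)

# Example: one measured (hypothetical) vision-inference call
sample = ResourceSample(power_joules=4.2, compute_seconds=0.35,
                        memory_mb_seconds=800.0, elapsed_seconds=0.5)
print(round(derived_cost_lf(sample), 3))  # → 5.225
```

Note that the function contains no per-operation pricing at all — every cost flows from the measured sample, which is the point of the principle.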
#### Phoebe Schema for Cost Observation

```sql
CREATE TABLE resource_observations (
    id BIGSERIAL PRIMARY KEY,
    cell_name VARCHAR(100),
    operation VARCHAR(100),        -- state transition or action

    -- Measured resources
    power_joules FLOAT,
    compute_gpu_ms FLOAT,
    compute_cpu_ms FLOAT,
    memory_mb_seconds FLOAT,
    elapsed_ms FLOAT,
    temperature_delta_c FLOAT,

    -- Derived cost (computed from calibration weights)
    derived_cost_lf FLOAT,

    -- Context
    timestamp TIMESTAMPTZ DEFAULT NOW(),
    conditions JSONB               -- load, ambient temp, etc.
);

-- Aggregate to get cost profiles
CREATE VIEW cell_cost_profiles AS
SELECT
    cell_name,
    operation,
    AVG(derived_cost_lf) AS avg_cost,
    STDDEV(derived_cost_lf) AS cost_variance,
    COUNT(*) AS observation_count
FROM resource_observations
GROUP BY cell_name, operation;
```

#### Why This Matters

| Designed Costs | Observed Costs |
|----------------|----------------|
| Arbitrary, must guess | Grounded in physics |
| Static, doesn't adapt | Self-calibrating over time |
| Economy drifts from reality | Economy reflects reality |
| Optimization is guesswork | Optimization is measurable |
| "Feels right" | "Is right" |

**The cost matrix is a measurement, not a decision.**

---

## Income Sources

Income has two fundamentally different sources: **physical** (the substrate) and **reward** (the motivation).

## Document Status

**Version:** 1.2 | **Created:** 2025-12-29 | **Updated:** 2026-02-10
- v1.2: Cost Calibration principle — measure, don't design (2026-02-10)
- v1.1: Discovery economics from Discovery-Scan-Station.md

**Related Documents**:

- [[Grounded-World-Model]] — How discoveries build the world model

## Implementation Priority

### Phase 1: Measure First

> *"The cost matrix is a measurement, not a decision."*
> — [[Lifeforce-Dynamics]] v1.2

This principle applies throughout the nimmerverse economy — not just memory, but all lifeforce costs. See [[Lifeforce-Dynamics#Cost Calibration: Measure, Don't Design]] for the full formulation.

- Track decision_trails accumulation rate
- Track spatial embedding growth
- Track reflex creation rate

Everything else fades. This is not loss. This is health.

---

**Created**: 2026-01-02
**Updated**: 2026-02-10
**Status**: Core design principle
**Next**: Implement measurement (Phase 1) during first boot

---

## Open Cellular Catalogue: Shareable State Machines

**Origin**: 2026-02-10, evening task review session
**Seed**: The Cellular-Architecture.md isn't just internal documentation — it's a publishable protocol.

Publish a catalogue of:
- **Cell definitions** (state machines, transitions, costs)
- **Nerve patterns** (behavioral compositions, feedback loops)
- **NATS routing schemas** (the message glue)
- **Interaction chains** (anonymized decision_trails — what actually worked)

Other labs dock onto the API, build cells for *their* hardware, compose nerves using *shared* patterns, contribute *back* successful reflexes. Like TCP/IP — the protocol is open, the mind is private.

**Enables**:
- Open standard for embodied cognition
- Community-contributed reflex libraries
- Shared learning across different hardware platforms
- Nimmerverse as protocol, not product

**Requires**:
- Clever API design (dock-on interface)
- Anonymization layer for decision_trails
- Schema versioning for cell/nerve definitions
- Public documentation site (not inference endpoints!)

**Philosophy**: "Share the language, not the thoughts."

---

## How to Use This File

1. **Add nuggets** when insights emerge in sessions

**Philosophy**: *"Plant seeds. Water foundations. Harvest when ready."*

**Last Updated**: 2026-02-10