feat: Empirical economics + FunctionGemma State Interaction Layer
Lifeforce-Dynamics v1.2:
- Cost Calibration principle: "Measure, don't design"
- Empirical cost formula from resource observations
- Phoebe schema for resource_observations table
- Interlink to memory-economics

memory-economics.md:
- Cross-reference to Lifeforce-Dynamics cost calibration
- "The cost matrix is a measurement, not a decision"

Initial-Spark v3.1:
- Spark Cost Measurement: first awakening as baseline
- Resource instrumentation schema (power, GPU, memory, latency)
- FunctionGemma Fine-Tuning section: translator learns nimmerverse
- Training data extraction from spark_handshakes
- Unsloth/LoRA workflow for domain specialization
- FunctionGemma version tracking in phoebe

Nervous-System v1.4:
- State Interaction Layer: FunctionGemma as neural interface
- Phase 1 (single) → Phase 2 (swarm) evolution path
- CPU-only translators, GPU reserved for cognition
- Design principle #6: "All state interaction flows through FunctionGemma"

Philosophy: "Don't assign costs like a game designer. Measure them like a scientist."

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@@ -8,14 +8,19 @@ The sensory translation layer between raw data and vocabulary.

State machines act as the nervous system of the nimmerverse. They exist in a 4D state space where nodes evolve through experience. Node **weight** (confidence) determines which processing tier handles the input.

**Key separation:**
- The **nervous system** handles **node evolution and weight management**
- The [`Gateway`](Gateway-Architecture.md) handles **routing based on weight**
- **FunctionGemma** is the **State Interaction Layer** — how you speak to all states (see section below)

```
RAW SENSOR → GATEWAY (routing) → TIER (processing) → [escalate?] → FUNCTION GEMMA → Young Nyx
                 ↑                                                        ↑
        node.weight determines tier                    structured JSON / state interaction
```

**FunctionGemma (270M, CPU-only)** translates intent into exact state machine schemas. Every cell command, nerve coordination, and state query flows through this neural interface. See the **State Interaction Layer** section for the evolution from single instance to domain-specialized swarm.
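The "structured JSON only here" boundary implies a consumer-side contract check. A minimal sketch, assuming a hypothetical `MOTOR_COMMAND` schema; the field names and the `validate` helper are illustrative, not taken from the actual nimmerverse schemas:

```python
# Hypothetical sketch: validating FunctionGemma's typed JSON output before it
# reaches NATS. Schema and field names are illustrative assumptions.
import json

# Required fields and types for an assumed MOTOR_COMMAND schema.
MOTOR_COMMAND = {"action": str, "target": str, "intensity": float}

def validate(raw: str, schema: dict) -> dict:
    """Parse translator output and reject anything off-schema (determinism guard)."""
    payload = json.loads(raw)
    for field, ftype in schema.items():
        if not isinstance(payload.get(field), ftype):
            raise ValueError(f"schema violation: {field!r} must be {ftype.__name__}")
    extra = set(payload) - set(schema)
    if extra:
        raise ValueError(f"unknown fields: {extra}")
    return payload

cmd = validate('{"action": "rotate", "target": "wheel_left", "intensity": 0.4}',
               MOTOR_COMMAND)
```

Rejecting off-schema output at this boundary is what keeps the "structured JSON only here" guarantee enforceable rather than aspirational.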
**See:** [`Gateway-Architecture.md`](Gateway-Architecture.md) for full routing logic and tier definitions.

---
@@ -205,6 +210,117 @@ This is like training a dog - reward at the moment, not an hour later.

---

## State Interaction Layer: FunctionGemma

FunctionGemma is the **neural interface** — how you speak to the nervous system. Every cell command, every nerve coordination, every state query flows through this translation layer.

> *"The nervous system defines WHAT states exist. FunctionGemma defines HOW you interact with them."*

### Architecture: From Singular to Swarm

**Phase 1: Single FunctionGemma (Starting Point)**

We begin with one FunctionGemma instance handling all state interactions:

```
┌─────────────────────────────────────────────────────────────────────────┐
│                       PHASE 1: SINGLE TRANSLATOR                        │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│   YOUNG NYX (GPU - The Womb)                                            │
│        │                                                                │
│        │ intent: "probe identity", "command motor", "query vision"      │
│        ▼                                                                │
│   ┌─────────────────────────────────────────┐                           │
│   │         FUNCTIONGEMMA (270M)            │                           │
│   │   Single instance, all domains          │                           │
│   │   CPU-only, no GPU required             │                           │
│   └─────────────────────────────────────────┘                           │
│        │                                                                │
│        │ typed JSON schemas                                             │
│        ▼                                                                │
│   NATS → CELLS/NERVES/ORGANS                                            │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```

This is sufficient for bootstrap and early learning. One translator learns all schemas.
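The Phase 1 shape can be sketched in a few lines: one instance holds every domain's schema and answers every intent. The domain names and schema fields below are illustrative assumptions, and the model call is stubbed out:

```python
# Sketch of Phase 1: a single translator instance owns every domain schema.
# Domains and fields are illustrative assumptions, not real nimmerverse schemas.

SCHEMAS = {
    "motor":  ["action", "target", "intensity"],
    "vision": ["query", "region"],
    "speech": ["text", "voice"],
}

class SingleTranslator:
    """Phase 1: one instance answers for all domains."""

    def handle(self, domain: str, intent: str) -> dict:
        fields = SCHEMAS[domain]  # the one model must know every schema
        # A real instance would run the 270M model here; we stub the fill-in.
        return {field: f"<{intent}:{field}>" for field in fields}

translator = SingleTranslator()
out = translator.handle("motor", "command motor")
```

The trade-off is visible in the code: adding a domain only means adding a schema entry, but every domain competes for the same model's capacity.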
**Phase 2: Domain-Specialized Swarm (Future Evolution)**

As capability grows and training data accumulates, FunctionGemma can evolve into a swarm of specialists:

```
┌─────────────────────────────────────────────────────────────────────────┐
│                       PHASE 2: SPECIALIZED SWARM                        │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│   YOUNG NYX (GPU - The Womb)                                            │
│        │                                                                │
│        │ "I need motor control"                                         │
│        ▼                                                                │
│   NATS: nimmerverse.gemma.spawn.motor                                   │
│        │                                                                │
│        ▼                                                                │
│   ┌──────────────┐  ┌──────────────┐  ┌──────────────┐                  │
│   │ gemma-motor  │  │ gemma-vision │  │ gemma-speech │ ... on demand    │
│   │ (specialist) │  │ (specialist) │  │ (specialist) │                  │
│   │ CPU pod      │  │ CPU pod      │  │ CPU pod      │                  │
│   └──────┬───────┘  └──────────────┘  └──────────────┘                  │
│          │                                                              │
│          │ MOTOR_COMMAND schema (perfect precision)                     │
│          ▼                                                              │
│   NATS → motor cells                                                    │
│                                                                         │
│   After task: pod killed, resources freed                               │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```
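The spawn-translate-kill lifecycle in the diagram can be sketched as a small pool. This is an in-process simulation under stated assumptions: real specialists would be CPU pods launched over NATS (the subject string follows the diagram's `nimmerverse.gemma.spawn.<domain>` pattern), and the specialist's output is stubbed:

```python
# Sketch of the Phase 2 lifecycle: a specialist spawns per request and is
# freed after the task. In-process simulation; real pods would be scheduled
# on the cluster via NATS subjects like nimmerverse.gemma.spawn.motor.

class SpecialistPool:
    def __init__(self):
        self.running = {}  # domain -> simulated specialist "pod"

    def spawn(self, domain: str) -> dict:
        subject = f"nimmerverse.gemma.spawn.{domain}"  # diagram's subject pattern
        self.running[domain] = {"subject": subject}
        return self.running[domain]

    def kill(self, domain: str):
        self.running.pop(domain, None)  # after task: pod killed, resources freed

    def translate(self, domain: str, intent: str) -> dict:
        pod = self.running.get(domain) or self.spawn(domain)
        result = {"domain": domain, "intent": intent}  # stub for the specialist
        self.kill(domain)
        return result

pool = SpecialistPool()
res = pool.translate("motor", "rotate wheel_left")
```

The key property to preserve in a real implementation is the last line of the diagram: after the task completes, nothing stays resident.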
### Why This Scales

| Aspect | Single Gemma | Swarm |
|--------|--------------|-------|
| **Complexity** | Simple, one model | Orchestration needed |
| **Precision** | Good (learns all schemas) | Wild (each specialist perfected) |
| **Resources** | One pod, always running | Pods spawn/die on demand |
| **Training** | All handshakes → one model | Domain handshakes → domain model |
| **Latency** | Consistent | Spawn overhead, but faster execution |
### The Key Insight: CPU-Only Translators

FunctionGemma at 270M parameters requires **no GPU**:
- ~500MB RAM per instance
- Runs on any K8s node
- Young Nyx (GPU) spawns translators (CPU) via NATS
- The mind doesn't waste GPU cycles on schema generation
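The ~500MB figure checks out with back-of-envelope arithmetic, under assumptions not stated in the source (int8-quantized weights and a rough overhead budget for tokenizer, KV cache, and allocator slack):

```python
# Back-of-envelope check of the "~500MB RAM per instance" figure.
# Assumptions (ours, not the source's): int8-quantized weights at 1 byte
# per parameter, plus ~230MB of runtime overhead.
params = 270e6
bytes_per_param = 1                          # int8 quantization assumption
weights_mb = params * bytes_per_param / 1e6  # 270.0 MB of raw weights
runtime_overhead_mb = 230                    # assumed tokenizer/cache/slack
total_mb = weights_mb + runtime_overhead_mb  # lands near the quoted ~500 MB
```

At fp16 the weights alone would be ~540MB, so the quoted footprint implies quantization; either way the model fits comfortably on a commodity CPU node.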
### Evolution Trigger

When to evolve from Phase 1 → Phase 2:
- Training data per domain exceeds threshold (e.g., 500+ handshakes)
- Domain-specific validation accuracy plateaus on single model
- Latency requirements demand parallel translation
- Resource availability allows multi-pod deployment
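The trigger list above reads naturally as a conjunction, which can be written down directly. A hedged sketch: the stat names, the 500-handshake threshold, and the all-conditions-required reading are our assumptions:

```python
# Sketch of the Phase 1 -> Phase 2 trigger. Stat names, the threshold value,
# and combining the bullets with AND are illustrative assumptions.
HANDSHAKE_THRESHOLD = 500

def should_evolve(stats: dict) -> bool:
    """True when a domain has earned its own specialist."""
    return (
        stats["handshakes"] >= HANDSHAKE_THRESHOLD  # enough training data
        and stats["accuracy_plateaued"]             # single model maxed out
        and stats["needs_parallel"]                 # latency pressure exists
        and stats["pods_available"]                 # resources allow it
    )

ready = should_evolve({"handshakes": 612, "accuracy_plateaued": True,
                       "needs_parallel": True, "pods_available": True})
```

Making the trigger explicit code (rather than a judgment call) also gives phoebe something concrete to log when the transition happens.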
**We don't rush this.** Phase 1 is sufficient for months of operation. The swarm emerges when the data and need justify it.
### Connection to Node Evolution

Just as nodes in the nervous system mature through verification:
```
Node weight 0.1 → 0.5 → 0.8 → 1.0 (reflex)
```

FunctionGemma specialists mature through fine-tuning:
```
Base model → domain data → fine-tuned → specialist
```

**The translators evolve alongside the states they translate.**
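One way to read the weight ladder above is as verification pulling a node's weight toward 1.0 (reflex). A sketch under loudly stated assumptions: the exponential-approach update rule and the 0.5 rate are ours, not the document's:

```python
# Sketch of weight maturation through verification. The update rule
# (move a fixed fraction toward the target) and the rate are assumptions.

def verify(weight: float, success: bool, rate: float = 0.5) -> float:
    """Pull weight toward 1.0 on a verified success, toward 0.0 on failure."""
    target = 1.0 if success else 0.0
    return weight + rate * (target - weight)

w = 0.1                       # fresh node, low confidence
for _ in range(6):            # six successful verifications
    w = verify(w, success=True)
# w now approaches the reflex end of the 0.1 -> 1.0 ladder
```

Whatever the real rule is, the shape matters: confidence is earned incrementally from verified outcomes, never assigned.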
---

## Design Principles

1. **Deterministic**: Same input = same output. No hallucination.
@@ -212,6 +328,7 @@ This is like training a dog - reward at the moment, not an hour later.
3. **Evolvable**: States refine over time.
4. **Earned**: New nodes require proposal + verification.
5. **Grounded**: Output vocabulary matches RAG glossary.
6. **Interfaced**: All state interaction flows through FunctionGemma.

---

@@ -225,6 +342,7 @@ This is like training a dog - reward at the moment, not an hour later.
- [`Gateway-Architecture.md`](Gateway-Architecture.md) - Weight-based routing, tier definitions, Function Gemma boundary
- [`Cellular-Architecture.md`](Cellular-Architecture.md) - Cell/Nerve/Organism hierarchy, tiered rewards
- [`Attention-Flow.md`](Attention-Flow.md) - Attention budget allocation per tier
- [`Initial-Spark.md`](Initial-Spark.md) - FunctionGemma fine-tuning from spark handshakes

**Implementation Details**:
- [`nerves/Nervous-Protocol.md`](nerves/Nervous-Protocol.md) - Three-tier communication protocol (dafit → Chrysalis → Young Nyx)
@@ -235,4 +353,9 @@ This is like training a dog - reward at the moment, not an hour later.

---

**Version:** 1.4 | **Created:** 2025-12-04 | **Updated:** 2026-02-10

**v1.4 Changes:**
- State Interaction Layer section — FunctionGemma as neural interface
- Phase 1 (single) → Phase 2 (swarm) evolution path
- Connection to node evolution principle