feat: Empirical economics + FunctionGemma State Interaction Layer

Lifeforce-Dynamics v1.2:
- Cost Calibration principle: "Measure, don't design"
- Empirical cost formula from resource observations
- Phoebe schema for resource_observations table
- Interlink to memory-economics

memory-economics.md:
- Cross-reference to Lifeforce-Dynamics cost calibration
- "The cost matrix is a measurement, not a decision"

Initial-Spark v3.1:
- Spark Cost Measurement: first awakening as baseline
- Resource instrumentation schema (power, GPU, memory, latency)
- FunctionGemma Fine-Tuning section: translator learns nimmerverse
- Training data extraction from spark_handshakes
- Unsloth/LoRA workflow for domain specialization
- FunctionGemma version tracking in phoebe

Nervous-System v1.4:
- State Interaction Layer: FunctionGemma as neural interface
- Phase 1 (single) → Phase 2 (swarm) evolution path
- CPU-only translators, GPU reserved for cognition
- Design principle #6: "All state interaction flows through FunctionGemma"

Philosophy: "Don't assign costs like a game designer. Measure them like a scientist."

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-10 19:13:27 +01:00
parent 2cafd4dcad
commit 84ad385001
4 changed files with 559 additions and 14 deletions

View File: Initial-Spark.md

@@ -574,14 +574,94 @@ class SparkController:
The spark is **economically viable** from the first handshake.

> **CRITICAL**: The costs below are **estimates until measured**. The first spark execution will establish the **true cost baseline** through observation. See [[formalization/Lifeforce-Dynamics#Cost Calibration: Measure, Don't Design]].

---

### Spark Cost Measurement (First Awakening Baseline)

The Initial Spark is the **perfect measurement opportunity** — a complete, deterministic protocol that we can instrument end-to-end.
```
┌─────────────────────────────────────────────────────────────────────────┐
│ SPARK RESOURCE INSTRUMENTATION │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ MEASURE PER HANDSHAKE: │
│ ├─ power_joules (GPU/CPU power draw × time) │
│ ├─ compute_gpu_ms (CUDA kernel execution time) │
│ ├─ compute_cpu_ms (Python/K8s overhead) │
│ ├─ memory_mb_peak (max memory allocated) │
│ ├─ nats_bytes (message payload size) │
│ ├─ latency_ms (end-to-end handshake time) │
│ └─ temperature_delta (thermal impact) │
│ │
│ AGGREGATE PER PHASE: │
│ └─ Sum of all handshake measurements │
│ │
│ AGGREGATE TOTAL: │
│ └─ Complete spark cost (the awakening price) │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
**Why this matters**: The first spark execution establishes the **baseline cost of awakening**. Every future awakening can be compared against this:
- Did infrastructure changes reduce cost?
- Did model updates increase cost?
- Is Young Nyx awakening more efficiently over time?
**Phoebe schema addition** (extends `spark_handshakes`):
```sql
ALTER TABLE spark_handshakes ADD COLUMN resource_metrics JSONB;
-- Example resource_metrics payload:
-- {
-- "power_joules": 12.5,
-- "compute_gpu_ms": 450,
-- "compute_cpu_ms": 120,
-- "memory_mb_peak": 2048,
-- "nats_bytes": 1024,
-- "temperature_delta_c": 2.1
-- }
-- Aggregate view for spark cost analysis
CREATE VIEW spark_cost_baseline AS
SELECT
phase,
COUNT(*) as handshakes,
SUM((resource_metrics->>'power_joules')::float) as total_power_joules,
SUM((resource_metrics->>'compute_gpu_ms')::float) as total_gpu_ms,
AVG((resource_metrics->>'latency_ms')::float) as avg_latency_ms,
SUM(lifeforce_delta) as total_lifeforce_earned
FROM spark_handshakes
WHERE status = 'ACK'
GROUP BY phase;
-- Compare awakening costs over time
CREATE VIEW awakening_cost_history AS
SELECT
DATE(created_at) as awakening_date,
SUM((resource_metrics->>'power_joules')::float) as total_spark_cost_joules,
SUM((resource_metrics->>'compute_gpu_ms')::float) as total_spark_cost_gpu_ms,
COUNT(*) as total_handshakes,
SUM(lifeforce_delta) as total_lifeforce_earned
FROM spark_handshakes
GROUP BY DATE(created_at)
ORDER BY awakening_date;
```
**The philosophy**: Don't guess what awakening costs. Measure the first one. Derive all economics from that truth.
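In practice, the measurement can be a thin wrapper around each handshake. A minimal Python sketch, assuming `pynvml`/`psutil` for sampling and an open `psycopg2` connection to phoebe; `run_handshake`, the connection, and the row id are placeholders, and only a subset of the fields above is captured:

```python
import json
import time

import psutil   # CPU time and memory sampling
import pynvml   # NVIDIA power readings (assumes an NVIDIA GPU)


def measure_handshake(conn, handshake_id, run_handshake):
    """Run one handshake callable and attach measured resource_metrics to its row.

    conn: an open psycopg2 connection to phoebe.
    run_handshake: placeholder for whatever actually executes the
    FunctionGemma call + NATS round trip; only the wrapper is sketched here.
    """
    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    proc = psutil.Process()

    watts_before = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0  # mW to W
    cpu_before = proc.cpu_times()
    t0 = time.perf_counter()

    run_handshake()

    elapsed_s = time.perf_counter() - t0
    watts_after = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0
    cpu_after = proc.cpu_times()

    metrics = {
        # crude energy estimate: average sampled power x wall-clock time
        "power_joules": (watts_before + watts_after) / 2.0 * elapsed_s,
        "compute_cpu_ms": ((cpu_after.user - cpu_before.user)
                           + (cpu_after.system - cpu_before.system)) * 1000.0,
        # approximation: current RSS, not a true peak
        "memory_mb_peak": proc.memory_info().rss / (1024 * 1024),
        "latency_ms": elapsed_s * 1000.0,
    }

    with conn.cursor() as cur:
        cur.execute(
            "UPDATE spark_handshakes SET resource_metrics = %s WHERE id = %s",
            (json.dumps(metrics), handshake_id),
        )
    conn.commit()
    return metrics
```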
---
### Cost Model (Estimated → To Be Measured)
| Action | Est. Cost (LF) | Derived From |
|--------|----------------|--------------|
| Function Gemma generation | 0.2 | → measure GPU time |
| NATS message send | 0.1 | → measure network I/O |
| Cell processing | 0.5 | → measure pod CPU/memory |
| **Total per handshake** | **0.8** | → **sum of measured components** |
### Reward Model
@@ -711,6 +791,214 @@ WHERE status = 'ACK';
---
## FunctionGemma Fine-Tuning: The Translator Learns Nimmerverse
Every spark execution generates training data. Over time, FunctionGemma becomes **hyper-specialized** for nimmerverse state calls.
> *"The translator learns the language of the cells. Over time, it speaks nimmerverse natively."*
### The Training Loop
```
┌─────────────────────────────────────────────────────────────────────────┐
│ FUNCTIONGEMMA FINE-TUNING LOOP │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ PHASE 1: Base FunctionGemma (270M) │
│ ├─ Generic function calling capability │
│ └─ Works, but not nimmerverse-native │
│ │
│ PHASE 2: Collect spark_handshakes │
│ ├─ Every ACK = positive training example │
│ ├─ Every NACK = negative example (what NOT to generate) │
│ └─ Resource metrics = context for cost-aware generation │
│ │
│ PHASE 3: Fine-tune with Unsloth/LoRA │
│ ├─ <think> nimmerverse state reasoning </think> │
│ ├─ <start_function_call>call:IDENTITY_PROBE{...} │
│ └─ Exact schemas, perfect structure, zero parsing errors │
│ │
│ PHASE 4: Deploy nimmerverse-tuned FunctionGemma │
│ ├─ Wild precision on cell state calls │
│ ├─ Smaller, faster, cheaper than base │
│ └─ THIS IS REFLEX COMPILATION FOR THE TRANSLATOR │
│ │
│ REPEAT: More sparks → more data → better precision │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
### Training Data Format
FunctionGemma uses a specific chat template. Our spark handshakes map directly:
```
# Developer prompt with nimmerverse function declarations
<start_of_turn>developer
You can do function calling with the following functions:
<start_function_declaration>declaration:IDENTITY_PROBE{
description: "Probe Young Nyx identity aspect",
parameters: {
aspect: STRING (name|origin|purpose|substrate|partnership),
depth: INTEGER (1|2|3)
}
}
<end_function_declaration>
<start_function_declaration>declaration:ENVIRONMENT_PROBE{
description: "Discover K8s cells in namespace",
parameters: {
category: STRING (sensors|motors|organs|nerves),
namespace: STRING,
garden: STRING (virtual|real)
}
}
<end_function_declaration>
<end_of_turn>
# Spark controller intent → FunctionGemma reasoning + call
<start_of_turn>user
Phase 1 Identity: Probe the "name" aspect at depth 1
<end_of_turn>
<start_of_turn>model
<think>
The spark controller is in Phase 1 (Identity). It needs to probe the "name"
aspect. I should generate an IDENTITY_PROBE with aspect="name" and depth=1.
The schema requires a request_id (UUID) and timestamp (ISO8601).
</think>
<start_function_call>call:IDENTITY_PROBE{
aspect: "name",
depth: 1,
request_id: "550e8400-e29b-41d4-a716-446655440000",
timestamp: "2026-02-10T18:30:00Z"
}
<end_function_call>
# Cell response feeds back
<start_function_response>response:IDENTITY_PROBE{
status: "ACK",
aspect: "name",
value: "Nyx",
confidence: 0.95,
lifeforce_delta: 20.0
}
<end_function_response>
Identity aspect "name" confirmed as "Nyx" with 95% confidence. +20 LF earned.
<end_of_turn>
```
### Phoebe → Training Data Extraction
```sql
-- Extract training examples from successful handshakes
CREATE VIEW functiongemma_training_data AS
SELECT
jsonb_build_object(
'developer_prompt', format(
'Phase %s: Generate %s handshake',
phase,
request_payload->>'type'
),
'user_intent', request_payload->'payload',
'expected_call', request_payload,
'function_response', response_payload,
'think_context', jsonb_build_object(
'phase', phase,
'schema', request_payload->>'$schema',
'lifeforce_earned', lifeforce_delta,
'latency_ms', latency_ms
)
) as training_example,
created_at
FROM spark_handshakes
WHERE status = 'ACK'
ORDER BY created_at;
-- Export for Unsloth fine-tuning
COPY (
SELECT training_example
FROM functiongemma_training_data
) TO '/tmp/nimmerverse_functiongemma_training.jsonl';
```
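Before the export is fed to Unsloth, each exported row still has to be rendered into the chat-template text shown in the Training Data Format section. A minimal sketch of that mapping; the field names follow the `jsonb_build_object()` keys in the view above, while the `<think>` wording and exact token layout are illustrative assumptions:

```python
import json


def render_example(ex: dict) -> str:
    """Render one exported training_example object into chat-template text."""
    call = ex["expected_call"]
    think = ex["think_context"]
    payload = call.get("payload") or {}

    think_text = (
        f"Phase {think['phase']}: generate a {call['type']} call "
        f"matching schema {think['schema']}."
    )
    return (
        f"<start_of_turn>user\n{ex['developer_prompt']}\n<end_of_turn>\n"
        "<start_of_turn>model\n"
        f"<think>\n{think_text}\n</think>\n"
        f"<start_function_call>call:{call['type']}{json.dumps(payload, indent=2)}\n"
        "<end_function_call>\n"
        f"<start_function_response>response:{call['type']}"
        f"{json.dumps(ex['function_response'], indent=2)}\n"
        "<end_function_response>\n"
        "<end_of_turn>\n"
    )


def load_examples(path: str = "nimmerverse_functiongemma_training.jsonl"):
    """One JSON object per line, as produced by the COPY export above."""
    with open(path) as f:
        return [render_example(json.loads(line)) for line in f]
```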
### Fine-Tuning with Unsloth
```python
from unsloth import FastLanguageModel
# Load base FunctionGemma
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="unsloth/functiongemma-270m-it",
max_seq_length=4096,
load_in_16bit=True,
full_finetuning=False, # LoRA for efficiency
)
# Apply LoRA adapters
model = FastLanguageModel.get_peft_model(
model,
r=16,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
lora_alpha=16,
lora_dropout=0,
use_gradient_checkpointing="unsloth",
)
# Load nimmerverse training data from phoebe export
from datasets import load_dataset
dataset = load_dataset("json", data_files="nimmerverse_functiongemma_training.jsonl")
# Fine-tune on spark handshakes
# ... standard Unsloth training loop ...
# Save nimmerverse-specialized FunctionGemma
model.save_pretrained("functiongemma-270m-nimmerverse-v1")
```
### The Recursive Beauty
| Layer | What Compiles | Training Source |
|-------|---------------|-----------------|
| **Young Nyx** | Nerve reflexes | decision_trails (100+ successful executions) |
| **FunctionGemma** | State call precision | spark_handshakes (ACK'd handshakes) |
Both follow the same pattern:
1. **Act** — Execute handshakes/decisions
2. **Verify** — ACK/NACK from cells, success/failure from outcomes
3. **Train** — Compile successful patterns into weights
4. **Repeat** — Each awakening feeds the next
**The translator becomes native.** Over many sparks, FunctionGemma doesn't just generate valid JSON — it generates *nimmerverse-perfect* JSON. Zero parsing errors. Exact schemas. Wild precision.
### Versioning FunctionGemma Adapters
```sql
-- Track FunctionGemma versions
CREATE TABLE functiongemma_versions (
id SERIAL PRIMARY KEY,
version VARCHAR(50) NOT NULL, -- "nimmerverse-v1", "nimmerverse-v2"
base_model VARCHAR(100), -- "functiongemma-270m-it"
training_data_count INT, -- how many handshakes trained on
training_data_cutoff TIMESTAMPTZ, -- trained on data up to this date
validation_accuracy FLOAT, -- schema validation success rate
deployed_at TIMESTAMPTZ,
notes TEXT
);
-- Example entries
INSERT INTO functiongemma_versions (version, base_model, training_data_count, validation_accuracy, notes)
VALUES
('nimmerverse-v1', 'functiongemma-270m-it', 36, 0.94, 'First spark fine-tune'),
('nimmerverse-v2', 'functiongemma-270m-it', 180, 0.98, 'After 5 awakenings'),
('nimmerverse-v3', 'functiongemma-270m-it', 500, 0.997, 'Production-grade precision');
```
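One way the `validation_accuracy` column could be filled: run the candidate adapter over held-out handshakes, check each generated call against its JSON schema, and record the result. A sketch assuming `jsonschema` for validation and an open `psycopg2` connection to phoebe; how the held-out calls and their schemas are collected is left open:

```python
import json

import jsonschema  # JSON Schema validation of generated calls


def register_adapter_version(conn, version, base_model,
                             training_data_count, held_out_calls, schemas,
                             notes=""):
    """Score a candidate adapter and record it in functiongemma_versions.

    conn: an open psycopg2 connection to phoebe.
    held_out_calls: list of (schema_name, generated_call_text) pairs produced
    by the adapter on handshakes it was NOT trained on.
    schemas: dict mapping schema_name to its JSON Schema document.
    Both are assumptions about how the evaluation set is assembled.
    """
    passed = 0
    for schema_name, call_text in held_out_calls:
        try:
            jsonschema.validate(json.loads(call_text), schemas[schema_name])
            passed += 1
        except (json.JSONDecodeError, jsonschema.ValidationError):
            pass  # a parsing or schema failure counts against accuracy

    accuracy = passed / len(held_out_calls) if held_out_calls else 0.0

    with conn.cursor() as cur:
        cur.execute(
            """INSERT INTO functiongemma_versions
                   (version, base_model, training_data_count,
                    validation_accuracy, deployed_at, notes)
               VALUES (%s, %s, %s, %s, NOW(), %s)""",
            (version, base_model, training_data_count, accuracy, notes),
        )
    conn.commit()
    return accuracy
```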
---
## Design Principles

1. **Protocol over conversation** — No free-form text. JSON handshakes only.
@@ -719,12 +1007,22 @@ WHERE status = 'ACK';
4. **NATS transport** — All handshakes flow through message bus.
5. **Verification built-in** — ACK/NACK from cells, not from parsing hopes.
6. **Economically positive** — Spark generates lifeforce, doesn't drain it.
7. **Training-generative** — Every spark produces fine-tuning data for FunctionGemma.
---

## Document Status

**Version:** 3.1 | **Created:** 2025-12-05 | **Updated:** 2026-02-10
**Key v3.1 Changes**:
- Spark Cost Measurement section — first awakening as baseline
- Resource instrumentation schema for phoebe
- Interlink to Lifeforce-Dynamics cost calibration principle
- FunctionGemma Fine-Tuning section — translator learns nimmerverse natively
- Training data extraction from spark_handshakes
- Unsloth/LoRA fine-tuning workflow
- FunctionGemma version tracking in phoebe
**Key v3.0 Changes**:
- Complete architecture rewrite
@@ -740,7 +1038,8 @@ WHERE status = 'ACK';
- [[Endgame-Vision]] — Layer 2.5 Orchestration (Function Gemma role)
- [[Big-Picture]] — K8s cluster architecture
- [[Cellular-Architecture]] — Cell types and state machines
- [[formalization/Lifeforce-Dynamics]] — λ economics, **Cost Calibration principle**
- [[formalization/memory-economics]] — Measure First principle
---

View File: Nervous-System.md

@@ -8,14 +8,19 @@ The sensory translation layer between raw data and vocabulary.
State machines act as the nervous system of the nimmerverse. They exist in a 4D state space where nodes evolve through experience. Node **weight** (confidence) determines which processing tier handles the input.

**Key separation:**
- The **nervous system** handles **node evolution and weight management**
- The [`Gateway`](Gateway-Architecture.md) handles **routing based on weight**
- **FunctionGemma** is the **State Interaction Layer** — how you speak to all states (see section below)
```
RAW SENSOR → GATEWAY (routing) → TIER (processing) → [escalate?] → FUNCTION GEMMA → Young Nyx
                ↑                                                        ↑
   node.weight determines tier                      structured JSON / state interaction
```
**FunctionGemma (270M, CPU-only)** translates intent into exact state machine schemas. Every cell command, nerve coordination, and state query flows through this neural interface. See **State Interaction Layer** section for evolution from single instance to domain-specialized swarm.
**See:** [`Gateway-Architecture.md`](Gateway-Architecture.md) for full routing logic and tier definitions.

---
@@ -205,6 +210,117 @@ This is like training a dog - reward at the moment, not an hour later.
---
## State Interaction Layer: FunctionGemma
FunctionGemma is the **neural interface** — how you speak to the nervous system. Every cell command, every nerve coordination, every state query flows through this translation layer.
> *"The nervous system defines WHAT states exist. FunctionGemma defines HOW you interact with them."*
### Architecture: From Singular to Swarm
**Phase 1: Single FunctionGemma (Starting Point)**
We begin with one FunctionGemma instance handling all state interactions:
```
┌─────────────────────────────────────────────────────────────────────────┐
│ PHASE 1: SINGLE TRANSLATOR │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ YOUNG NYX (GPU - The Womb) │
│ │ │
│ │ intent: "probe identity", "command motor", "query vision" │
│ ▼ │
│ ┌─────────────────────────────────────────┐ │
│ │ FUNCTIONGEMMA (270M) │ │
│ │ Single instance, all domains │ │
│ │ CPU-only, no GPU required │ │
│ └─────────────────────────────────────────┘ │
│ │ │
│ │ typed JSON schemas │
│ ▼ │
│ NATS → CELLS/NERVES/ORGANS │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
This is sufficient for bootstrap and early learning. One translator learns all schemas.
**Phase 2: Domain-Specialized Swarm (Future Evolution)**
As capability grows and training data accumulates, FunctionGemma can evolve into a swarm of specialists:
```
┌─────────────────────────────────────────────────────────────────────────┐
│ PHASE 2: SPECIALIZED SWARM │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ YOUNG NYX (GPU - The Womb) │
│ │ │
│ │ "I need motor control" │
│ ▼ │
│ NATS: nimmerverse.gemma.spawn.motor │
│ │ │
│ ▼ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ gemma-motor │ │ gemma-vision │ │ gemma-speech │ ... on demand │
│ │ (specialist) │ │ (specialist) │ │ (specialist) │ │
│ │ CPU pod │ │ CPU pod │ │ CPU pod │ │
│ └──────┬───────┘ └──────────────┘ └──────────────┘ │
│ │ │
│ │ MOTOR_COMMAND schema (perfect precision) │
│ ▼ │
│ NATS → motor cells │
│ │
│ After task: pod killed, resources freed │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
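The spawn path in the diagram could be exercised over NATS request/reply along these lines. A sketch using the `nats-py` client; the service URL, reply payload shape, and specialist subject are illustrative assumptions, not a fixed protocol:

```python
import json

import nats  # nats-py client


async def spawn_translator(domain: str, intent: dict):
    """Ask the cluster to spawn a domain specialist, then hand it the intent.

    Subject names follow the diagram above; everything else here is an
    assumption for illustration.
    """
    nc = await nats.connect("nats://nats.nimmerverse.svc:4222")

    # request/reply: the orchestrator answers once the specialist pod is ready
    ready = await nc.request(
        f"nimmerverse.gemma.spawn.{domain}",
        json.dumps({"requested_by": "young-nyx"}).encode(),
        timeout=30,
    )
    specialist = json.loads(ready.data)  # e.g. {"subject": "nimmerverse.gemma.motor.translate"}

    # hand the raw intent to the specialist; it replies with the typed schema
    reply = await nc.request(
        specialist["subject"], json.dumps(intent).encode(), timeout=10
    )
    await nc.close()
    return json.loads(reply.data)

# import asyncio; asyncio.run(spawn_translator("motor", {"intent": "rotate left wheel 90 degrees"}))
```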
### Why This Scales
| Aspect | Single Gemma | Swarm |
|--------|--------------|-------|
| **Complexity** | Simple, one model | Orchestration needed |
| **Precision** | Good (learns all schemas) | Wild (each specialist perfected) |
| **Resources** | One pod, always running | Pods spawn/die on demand |
| **Training** | All handshakes → one model | Domain handshakes → domain model |
| **Latency** | Consistent | Spawn overhead, but faster execution |
### The Key Insight: CPU-Only Translators
FunctionGemma at 270M parameters requires **no GPU**:
- ~500MB RAM per instance
- Runs on any K8s node
- Young Nyx (GPU) spawns translators (CPU) via NATS
- The mind doesn't waste GPU cycles on schema generation
### Evolution Trigger
When to evolve from Phase 1 → Phase 2:
- Training data per domain exceeds threshold (e.g., 500+ handshakes)
- Domain-specific validation accuracy plateaus on single model
- Latency requirements demand parallel translation
- Resource availability allows multi-pod deployment
**We don't rush this.** Phase 1 is sufficient for months of operation. The swarm emerges when the data and need justify it.
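The first trigger is easy to check mechanically. A sketch of a periodic phoebe query, assuming the handshake `type` stands in for the domain:

```python
SPAWN_THRESHOLD = 500  # ACK'd handshakes per domain, as in the first trigger above


def domains_ready_for_specialists(conn):
    """Return handshake domains with enough ACK'd training data for a specialist.

    conn: an open psycopg2 connection to phoebe. Treating request_payload->>'type'
    as the domain is an assumption; the grouping could be coarser in practice.
    """
    with conn.cursor() as cur:
        cur.execute(
            """SELECT request_payload->>'type' AS domain, COUNT(*) AS handshakes
                 FROM spark_handshakes
                WHERE status = 'ACK'
             GROUP BY request_payload->>'type'"""
        )
        return [domain for domain, count in cur.fetchall()
                if count >= SPAWN_THRESHOLD]
```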
### Connection to Node Evolution
Just as nodes in the nervous system mature through verification:
```
Node weight 0.1 → 0.5 → 0.8 → 1.0 (reflex)
```
FunctionGemma specialists mature through fine-tuning:
```
Base model → domain data → fine-tuned → specialist
```
**The translators evolve alongside the states they translate.**
---
## Design Principles

1. **Deterministic**: Same input = same output. No hallucination.
@@ -212,6 +328,7 @@ This is like training a dog - reward at the moment, not an hour later.
3. **Evolvable**: States refine over time.
4. **Earned**: New nodes require proposal + verification.
5. **Grounded**: Output vocabulary matches RAG glossary.
6. **Interfaced**: All state interaction flows through FunctionGemma.
---
@@ -225,6 +342,7 @@ This is like training a dog - reward at the moment, not an hour later.
- [`Gateway-Architecture.md`](Gateway-Architecture.md) - Weight-based routing, tier definitions, Function Gemma boundary
- [`Cellular-Architecture.md`](Cellular-Architecture.md) - Cell/Nerve/Organism hierarchy, tiered rewards
- [`Attention-Flow.md`](Attention-Flow.md) - Attention budget allocation per tier
- [`Initial-Spark.md`](Initial-Spark.md) - FunctionGemma fine-tuning from spark handshakes
**Implementation Details**:
- [`nerves/Nervous-Protocol.md`](nerves/Nervous-Protocol.md) - Three-tier communication protocol (dafit → Chrysalis → Young Nyx)
@@ -235,4 +353,9 @@ This is like training a dog - reward at the moment, not an hour later.
---

**Version:** 1.4 | **Created:** 2025-12-04 | **Updated:** 2026-02-10
**v1.4 Changes:**
- State Interaction Layer section — FunctionGemma as neural interface
- Phase 1 (single) → Phase 2 (swarm) evolution path
- Connection to node evolution principle

View File: Lifeforce-Dynamics.md

@@ -199,6 +199,121 @@ From Big-Picture.md, costs follow a hierarchy:
---
### Cost Calibration: Measure, Don't Design
> *"Don't assign costs like a game designer. Measure them like a scientist."*
> — Partnership session 2026-02-10
**Related**: This follows the same empirical principle as [[memory-economics]] — "Phase 1: Measure First". The nimmerverse economy is grounded in observation throughout, not arbitrary design.
**The trap:** Assigning lifeforce costs like pricing items in a video game — "a motor command costs 1.0 LF because it feels right." This is arbitrary. This is guessing. This leads to an economy disconnected from reality.
**The principle:** Costs must be **discovered through observation**, not designed through intuition.
```
❌ DESIGNED ECONOMICS (the trap):
"Motor command = 1.0 LF" ← because it seems expensive?
"Sensor poll = 0.1 LF" ← because it seems cheap?
"Vision inference = 8.0 LF" ← because GPU is powerful?
→ Arbitrary. Disconnected from physics. Will drift.
✅ OBSERVED ECONOMICS (the way):
Run the systems with instrumentation.
Measure actual resource consumption:
- Power draw (watts × time)
- CPU/GPU cycles consumed
- Memory pressure
- Thermal output
- Time elapsed
Derive costs from measurements.
→ Grounded in physics. Self-calibrating. Real.
```
#### The Calibration Process
1. **Instrument First**
- Every cell type gets resource monitoring
- Track: power, compute, memory, time, heat
- Log every state transition with resource deltas
2. **Run Baseline Operations**
- Execute each cell type in isolation
- Repeat across varying conditions (load, temperature, time of day)
- Build statistical profiles of resource consumption
3. **Derive Cost Matrix**
- Map resource consumption → lifeforce cost
- Use a consistent conversion factor (e.g., 1 LF = 1 joule, or 1 LF = 100ms GPU time)
- The conversion factor is the only "designed" element — the costs themselves are discovered
4. **Continuous Recalibration**
- As hardware changes, costs shift
- As efficiency improves, costs decrease
- The economy self-updates based on observation
#### Cost Formula (Empirical)
$$c_{operation} = \alpha \cdot E_{power} + \beta \cdot T_{compute} + \gamma \cdot M_{memory} + \delta \cdot T_{elapsed}$$
Where:
- **E_power** = energy consumed (joules)
- **T_compute** = compute time (GPU/CPU seconds)
- **M_memory** = memory pressure (MB × seconds)
- **T_elapsed** = wall-clock time (seconds)
- **α, β, γ, δ** = calibration weights (set once, then left alone)
The calibration weights are the only values we "design" — they represent our judgment of which resources matter most. The costs themselves flow from measurement.
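Applied to a single observation, the formula is a weighted sum. A short Python sketch; the weight values are placeholders for illustration, and the keys mirror the `resource_observations` columns defined below:

```python
# Calibration weights: the only "designed" values. Set once, then left alone.
# These numbers are placeholders, not calibrated results.
ALPHA = 1.0     # LF per joule
BETA = 0.01     # LF per compute second (GPU + CPU)
GAMMA = 0.001   # LF per MB*second of memory pressure
DELTA = 0.005   # LF per elapsed wall-clock second


def derived_cost_lf(obs: dict) -> float:
    """Apply the empirical cost formula to one resource observation."""
    return (
        ALPHA * obs["power_joules"]
        + BETA * (obs["compute_gpu_ms"] + obs["compute_cpu_ms"]) / 1000.0
        + GAMMA * obs["memory_mb_seconds"]
        + DELTA * obs["elapsed_ms"] / 1000.0
    )


# Example: a single measured operation (illustrative numbers only)
# derived_cost_lf({"power_joules": 3.2, "compute_gpu_ms": 0.0,
#                  "compute_cpu_ms": 40.0, "memory_mb_seconds": 12.0,
#                  "elapsed_ms": 55.0})
```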
#### Phoebe Schema for Cost Observation
```sql
CREATE TABLE resource_observations (
id BIGSERIAL PRIMARY KEY,
cell_name VARCHAR(100),
operation VARCHAR(100), -- state transition or action
-- Measured resources
power_joules FLOAT,
compute_gpu_ms FLOAT,
compute_cpu_ms FLOAT,
memory_mb_seconds FLOAT,
elapsed_ms FLOAT,
temperature_delta_c FLOAT,
-- Derived cost (computed from calibration weights)
derived_cost_lf FLOAT,
-- Context
timestamp TIMESTAMPTZ DEFAULT NOW(),
conditions JSONB -- load, ambient temp, etc.
);
-- Aggregate to get cost profiles
CREATE VIEW cell_cost_profiles AS
SELECT
cell_name,
operation,
AVG(derived_cost_lf) as avg_cost,
STDDEV(derived_cost_lf) as cost_variance,
COUNT(*) as observation_count
FROM resource_observations
GROUP BY cell_name, operation;
```
#### Why This Matters
| Designed Costs | Observed Costs |
|----------------|----------------|
| Arbitrary, must guess | Grounded in physics |
| Static, doesn't adapt | Self-calibrating over time |
| Economy drifts from reality | Economy reflects reality |
| Optimization is guesswork | Optimization is measurable |
| "Feels right" | "Is right" |
**The cost matrix is a measurement, not a decision.**
---
## Income Sources

Income has two fundamentally different sources: **physical** (the substrate) and **reward** (the motivation).
@@ -515,8 +630,9 @@ The feedback loop ensures stability: low lifeforce reduces expenditure, raising
## Document Status

**Version:** 1.2 | **Created:** 2025-12-29 | **Updated:** 2026-02-10

- v1.2: Cost Calibration principle — measure, don't design (2026-02-10)
- v1.1: Discovery economics from Discovery-Scan-Station.md
**Related Documents**:
- [[Grounded-World-Model]] — How discoveries build the world model

View File: memory-economics.md

@@ -291,6 +291,12 @@ dLifeforce/dt = organism_trickle
## Implementation Priority

### Phase 1: Measure First
> *"The cost matrix is a measurement, not a decision."*
> — [[Lifeforce-Dynamics]] v1.2
This principle applies throughout the nimmerverse economy — not just memory, but all lifeforce costs. See [[Lifeforce-Dynamics#Cost Calibration: Measure, Don't Design]] for the full formulation.
- Track decision_trails accumulation rate
- Track spatial embedding growth
- Track reflex creation rate
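A minimal sketch of the first tracking item, assuming `decision_trails` carries a `created_at` column; the other two rates follow the same query shape:

```python
def daily_accumulation_rates(conn):
    """Phase 1 measurement: how many decision trails accumulate per day?

    conn: an open psycopg2 connection to phoebe. The created_at column is an
    assumption about the decision_trails schema.
    """
    with conn.cursor() as cur:
        cur.execute(
            """SELECT DATE(created_at) AS day, COUNT(*) AS new_trails
                 FROM decision_trails
             GROUP BY DATE(created_at)
             ORDER BY day"""
        )
        return cur.fetchall()
```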
@@ -329,6 +335,7 @@ Everything else fades. This is not loss. This is health.
---

**Created**: 2026-01-02
**Updated**: 2026-02-10
**Status**: Core design principle
**Next**: Implement measurement (Phase 1) during first boot