fix: Remove deprecated mode LoRAs from Layer 2 ASCII diagram

Missed this section during earlier trait LoRA cleanup.
Now correctly shows: Mnemosyne, Moira, Synesis, Aletheia, etc.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
commit 0ebb3e3645
parent c24681d13e
Date: 2026-02-07 02:53:42 +01:00


@@ -71,14 +71,14 @@ This is a **RESEARCH VISION** - a platform for studying how intelligence emerges
 │    └─ Outcomes logged to phoebe PostgreSQL                        │
 │       → architecture/Cellular-Architecture.md                     │
 │                                                                   │
-│  Layer 2: YOUNG NYX (Single Model + LoRA Stack)                   │
+│  Layer 2: YOUNG NYX (Base Model + Trait LoRAs)                    │
 │  ├─ Base: Qwen3-VL 32B (Thinking Version) (96GB VRAM in Womb)     │
-│  ├─ LoRA Stack (topology-informed):                               │
-│  │   ├─ Identity (German) → Philosophy Valley (diffuse, deep)     │
-│  │   ├─ Technical (English) → Technical Cluster (sparse)          │
-│  │   └─ Creative (Mixed) → bridges topologies                     │
-│  ├─ Harnesses select active LoRA (routing implicit in context)    │
-│  └─ Consolidation: Merge successful LoRAs → fine-tune over time   │
+│  ├─ Trait LoRAs (evolved via GRPO, not prescribed):               │
+│  │   ├─ Mnemosyne (memory) ─ Moira (pattern) ─ Synesis (insight)  │
+│  │   ├─ Aletheia (truth) ─ Sophrosyne (balance) ─ Kairos (timing) │
+│  │   └─ Traits EMERGE from decision_trails + rubric rewards       │
+│  ├─ Function Gemma: Structured output boundary (intent → JSON)    │
+│  └─ Multilingual topology accessed via prompt, not LoRA routing   │
 │                                                                   │
 │  Layer 3: DUAL GARDENS (Virtual/Real Loop)                        │
 │  ├─ Week 1-12: Virtual only (hypothesis generation, 1000s/sec)    │