dafit dc779633ed feat: Nimmerswarm Interface + Nimmerversity v2.0 + Neuromorphic vision
Wild 5-7AM session capturing major architectural evolution:

## Nimmerswarm Interface (NEW)
- LED state broadcasting with 3x3 ternary matrix
- Base-3 encoding: 9 trits = 19,683 patterns
- Maps directly to Temporal-Ternary Gradient (-1/🔴, 0/⚫, +1/🟢)
- Reflex formation from visual patterns
- Virtual camera integration (Godot as lightweight dreamstate)
- Bootstrap strategy: Phase 0 boxes → complexity ladder
- Connection to Embodiment Pipeline (closed loop)
- Hierarchical cognitive offloading

## Nimmerversity v2.0 (Promoted from archive)
- Genesis Phase (-1): glossary, catalogues, RAG, Initial Spark
- "Know thyself before the world" - native vocabulary first
- Model ensemble curriculum: T5Gemma 2 + FunctionGemma + Qwen3
- Multimodal tracks: Vision, Audio, Action, Embodiment
- Expanded tiers with robotics, swarm intelligence, distributed cognition

## Neuromorphic Reflexes (Future vision)
- Soviet Setun ternary computing heritage
- Memristors as artificial synapses (always learning)
- 4-layer hardware hierarchy: Memristor → FPGA → GPU → Nyx
- Reflex compilation: software → stable → silicon → eternal
- Implementation timeline: 2025-2028+

## Also includes
- Interfaces index with Heartbeat Sculpture
- Style guide assets (colors, symbols)

🔴🟢 The LED matrix IS the Temporal-Ternary Gradient made visible.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-29 07:28:55 +01:00


Nimmerswarm Interface

Optical state broadcasting, positioning, and emergent swarm behavior.

"The organisms can't see their own backs. They know themselves through each other."


Overview

The Nimmerswarm Interface is a multi-modal communication layer where organisms broadcast their state optically via LED matrices. This enables:

  1. State visibility — Organisms SEE each other's states as light patterns
  2. Positioning — Cameras + raytracing = sub-cm 3D positioning
  3. Emergent reflexes — Pattern recognition bypasses cognition
  4. Cognitive offloading — Lower layers handle routine, freeing Nyx's attention

The Core Insight

ORGANISM A                          ORGANISM B
┌─────────────┐                    ┌─────────────┐
│  Cell State │                    │  VisionCell │
│  STALLED    │                    │  WATCHING   │
│      │      │                    │      │      │
│      ▼      │                    │      ▼      │
│ ┌─────────┐ │   LIGHT PATTERN    │ ┌─────────┐ │
│ │ LED     │ │ ══════════════════▶│ │ Camera  │ │
│ │ Matrix  │ │  "STALL" pattern   │ │ sees    │ │
│ │ ▓▓░░▓▓  │ │                    │ │ pattern │ │
│ └─────────┘ │                    │ └────┬────┘ │
└─────────────┘                    │      │      │
                                   │      ▼      │
                                   │  REFLEX!    │
                                   │  "help ally"│
                                   └─────────────┘

Organisms broadcast state. Other organisms (and Nyx's vision) perceive and react.


LED State Broadcasting: Ternary Matrix

The 3x3 Ternary Design

The LED matrix is a direct physical manifestation of the Temporal-Ternary Gradient:

3x3 MATRIX = 9 TRITS (ternary digits)

Each LED = one ternary value:
  🔴 RED   = -1 (failed, danger, negative)
  ⚫ OFF   =  0 (uncertain, unknown, neutral)
  🟢 GREEN = +1 (success, verified, positive)

9 LEDs, 3 states each → 3^9 = 19,683 unique patterns!

Physical Layout

     ┌─────┬─────┬─────┐
     │ L1  │ L2  │ L3  │   L1 = collision_avoidance confidence
     │ 🟢  │ ⚫  │ 🔴  │   L2 = battery state
     ├─────┼─────┼─────┤   L3 = motor state
     │ L4  │ L5  │ L6  │   L4 = social/swarm state
     │ 🟢  │ 🟢  │ ⚫  │   L5 = current action outcome
     ├─────┼─────┼─────┤   L6 = prediction confidence
     │ L7  │ L8  │ L9  │   L7 = lifeforce zone
     │ ⚫  │ 🟢  │ 🟢  │   L8 = discovery state
     └─────┴─────┴─────┘   L9 = organism identity bit

     Uses 10mm LEDs (not tiny SMD)
     ~35mm × 35mm total
     Easily fits on 8-12cm robot
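
A sketch of how a cell's state could be packed into those nine positions. The field names below are illustrative, not a committed schema; missing fields default to 0 (unknown):

LED_FIELDS = [
    "collision_avoidance_confidence",  # L1
    "battery_state",                   # L2
    "motor_state",                     # L3
    "social_state",                    # L4
    "action_outcome",                  # L5
    "prediction_confidence",           # L6
    "lifeforce_zone",                  # L7
    "discovery_state",                 # L8
    "identity_bit",                    # L9
]

def state_to_matrix(state: dict) -> list[int]:
    """Map a (hypothetical) organism state dict onto 9 trits, clamped to -1/0/+1."""
    return [max(-1, min(1, int(state.get(field, 0)))) for field in LED_FIELDS]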

Base-3 Encoding

def encode_state(led_matrix: list[int]) -> int:
    """
    9 trits → single integer (0 to 19682)
    Each trit is -1, 0, or +1 (mapped to 0, 1, 2)
    """
    value = 0
    for i, led in enumerate(led_matrix):
        trit = led + 1  # -1→0, 0→1, +1→2
        value += trit * (3 ** i)
    return value

def decode_state(value: int) -> list[int]:
    """
    Integer → 9 trits
    """
    trits = []
    for _ in range(9):
        trits.append((value % 3) - 1)  # 0→-1, 1→0, 2→+1
        value //= 3
    return trits
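
A quick round-trip check against these two functions, using the trigger pattern from the reflex_event example further down (sketch only):

pattern = [+1, 0, -1, +1, +1, 0, 0, +1, +1]
value = encode_state(pattern)        # → 18689
assert decode_state(value) == pattern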

Ternary Color Mapping

| Color | Ternary | Meaning | Maps to |
|-------|---------|---------|---------|
| 🔴 Red | -1 | Failed, danger, needs attention | Temporal-Ternary -1 |
| Off/Dim | 0 | Unknown, uncertain, neutral | Temporal-Ternary 0 |
| 🟢 Green | +1 | Success, verified, positive | Temporal-Ternary +1 |

The LED matrix IS the Temporal-Ternary Gradient made visible.


Reflex Formation from Patterns

The Swarm Language

Certain patterns become words that trigger reflexes:

DANGER PATTERNS (trigger flee/stop):
┌───────────┐  ┌───────────┐  ┌───────────┐
│ 🔴 🔴 🔴 │  │ 🔴 ⚫ 🔴 │  │ 🔴 🔴 🔴 │
│ 🔴 🔴 🔴 │  │ 🔴 🔴 🔴 │  │ ⚫ 🔴 ⚫ │
│ 🔴 🔴 🔴 │  │ 🔴 ⚫ 🔴 │  │ 🔴 🔴 🔴 │
└───────────┘  └───────────┘  └───────────┘
  ALL RED       X PATTERN      DIAMOND

SAFE PATTERNS (trigger approach/social):
┌───────────┐  ┌───────────┐  ┌───────────┐
│ 🟢 🟢 🟢 │  │ ⚫ 🟢 ⚫ │  │ 🟢 ⚫ 🟢 │
│ 🟢 🟢 🟢 │  │ 🟢 🟢 🟢 │  │ ⚫ 🟢 ⚫ │
│ 🟢 🟢 🟢 │  │ ⚫ 🟢 ⚫ │  │ 🟢 ⚫ 🟢 │
└───────────┘  └───────────┘  └───────────┘
  ALL GREEN     PLUS           CORNERS

DISCOVERY (trigger investigate):
┌───────────┐
│ 🟢 🟢 🟢 │  Pulsing green border
│ 🟢 ⚫ 🟢 │  = "I found something!"
│ 🟢 🟢 🟢 │  = others come look
└───────────┘
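
One way these "words" could be wired up is an exact-match lookup from encoded pattern values to reflex names. This is a sketch: the patterns mirror the diagrams above, the reflex names follow the examples in this document, and a real matcher would need to tolerate camera noise rather than require exact matches:

SWARM_WORDS = {
    "danger_all_red":   [-1, -1, -1, -1, -1, -1, -1, -1, -1],
    "danger_x":         [-1,  0, -1, -1, -1, -1, -1,  0, -1],
    "safe_all_green":   [+1, +1, +1, +1, +1, +1, +1, +1, +1],
    "safe_plus":        [ 0, +1,  0, +1, +1, +1,  0, +1,  0],
    "discovery_border": [+1, +1, +1, +1,  0, +1, +1, +1, +1],
}

REFLEX_FOR_WORD = {
    "danger_all_red": "danger_flee",
    "danger_x": "danger_flee",
    "safe_all_green": "approach",
    "safe_plus": "approach",
    "discovery_border": "investigate",
}

# Uses encode_state() from the Base-3 Encoding section above.
WORD_BY_CODE = {encode_state(p): w for w, p in SWARM_WORDS.items()}

def match_reflex(observed: list[int]) -> str | None:
    """Return the reflex name for an observed 9-trit pattern, or None."""
    word = WORD_BY_CODE.get(encode_state(observed))
    return REFLEX_FOR_WORD.get(word) if word else None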

Reflex Loop

ORGANISM A's MATRIX          ORGANISM B's VISION
┌───────────┐                ┌───────────────────────┐
│ 🔴 🔴 🔴 │                │                       │
│ 🔴 ⚫ 🔴 │  ═══════════▶  │  Pattern: DANGER!     │
│ 🔴 🔴 🔴 │                │  Weight: 0.95         │
└───────────┘                │  → REFLEX FIRES       │
                             │  → No cognition!      │
                             │  → Nyx notified AFTER │
                             └───────────────────────┘
                                        │
                                        ▼
                               ┌─────────────────┐
                               │ STORE + REWARD  │
                               │ +5 LF to both   │
                               │ Reflex stronger │
                               │ Training data!  │
                               └─────────────────┘

Reflex Economics

| Metric | Value |
|--------|-------|
| Reflex firing cost | ~0.1 LF (no inference!) |
| Successful reflex reward | +5 LF |
| Net per successful reflex | +4.9 LF profit |
| Training examples per reflex | 1 |

1000 successful reflex fires/day ≈ +4,900 LF and 1,000 training examples

Training Data from Reflexes

reflex_event = {
    # What triggered
    "trigger_pattern": [+1, 0, -1, +1, +1, 0, 0, +1, +1],
    "trigger_base3": 8293,  # encoded value
    "trigger_organism": "organism_003",

    # What fired
    "reflex_name": "danger_flee",
    "weight_at_trigger": 0.87,

    # What happened
    "action_taken": "reverse_and_turn",
    "outcome": "success",

    # Reward + strengthening
    "lifeforce_reward": +5.0,
    "new_weight": 0.89,

    # Stored for slumber fine-tuning
    "stored_for_training": True,
}
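
The weight update implied above (0.87 → 0.89 after one success) could be as simple as a bounded increment. The gain and decay values here are assumptions chosen to reproduce that example, not tuned constants:

def strengthen_reflex(weight: float, outcome: str,
                      gain: float = 0.02, decay: float = 0.05) -> float:
    """Nudge a reflex weight up on success, down on failure, clamped to [0, 1]."""
    if outcome == "success":
        weight += gain
    else:
        weight -= decay
    return max(0.0, min(1.0, weight))

# Matches the reflex_event example: 0.87 → 0.89 after a successful firing.
assert abs(strengthen_reflex(0.87, "success") - 0.89) < 1e-9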

Attention Budget Impact

BEFORE (no ternary reflexes):
♥ BEAT (30 sec)
├── SENSORY: 15000ms (overwhelmed)
├── THINKING: 12000ms
└── VIRTUAL: skipped!

AFTER (reflexes handle routine):
♥ BEAT (30 sec)
├── REFLEX: 50ms (near-free, handled by swarm)
├── SENSORY: 2000ms (only anomalies)
├── THINKING: 5000ms
└── VIRTUAL: 22000ms ← GARDEN TIME!

Reflexes free Nyx's attention for what matters.


Positioning via Raytracing

The Principle

LEDs emit known patterns → Cameras see patterns → Raytracing computes position

         CEILING CAMERA(S)
              │
              │ sees LED patterns
              ▼
    ┌─────────────────────┐
    │   RAYTRACING GPU    │
    │   (PRO 6000 Max-Q)  │
    │                     │
    │  • Identify pattern │◀── "That's Organism #3"
    │  • Decode state     │◀── "State: MOVING"
    │  • Triangulate pos  │◀── "Position: (1.2, 3.4, 0.1)"
    │  • Track velocity   │◀── "Velocity: 0.3 m/s"
    └─────────────────────┘
              │
              ▼
         TO PHOEBE
    (ground truth stream)

Multi-Camera Triangulation

def locate_organism(camera_frames: list[Frame], led_signature: LEDPattern,
                    camera_calibration) -> Position3D | None:
    """
    Given frames from multiple cameras, locate an organism by its LED pattern.
    Uses inverse raytracing / photogrammetry; returns None if the pattern
    is seen by fewer than two cameras.
    """
    detections = []
    for frame in camera_frames:
        detection = detect_led_pattern(frame, led_signature)
        if detection:
            detections.append({
                "camera_id": frame.camera_id,
                "pixel_coords": detection.centroid,
                "pattern_match": detection.confidence
            })

    if len(detections) >= 2:
        # Triangulate from multiple viewpoints
        position_3d = triangulate(detections, camera_calibration)
        return position_3d

    return None
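
triangulate() is left abstract above. As a rough sketch of the two-camera case (midpoint of the closest approach between two back-projected rays, assuming calibrated pinhole cameras; helper names are illustrative):

import numpy as np

def pixel_to_ray(pixel, K, R, t):
    """Back-project a pixel into a world-space ray (origin, unit direction).
    K = 3x3 intrinsics; R, t = world-to-camera rotation and translation."""
    d_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    origin = -R.T @ t                    # camera centre in world coordinates
    direction = R.T @ d_cam
    return origin, direction / np.linalg.norm(direction)

def midpoint_triangulate(o1, d1, o2, d2):
    """Closest point between two rays (o + s*d); returns the midpoint of the gap."""
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                # near-parallel rays: no reliable fix
        return None
    s = (b * e - c * d) / denom
    u = (a * e - b * d) / denom
    return (o1 + s * d1 + o2 + u * d2) / 2.0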

Benefits

| Benefit | How |
|---------|-----|
| Sub-cm accuracy | Multiple cameras + known LED geometry |
| No expensive sensors | Just LEDs + cameras + GPU math |
| State + Position fused | One observation = both data points |
| Indoor GPS | Works anywhere with camera coverage |
| Training ground truth | Every frame = verified position |

Heartbeat Protocol

Social Proprioception

Organisms can't see their own backs. They know themselves through others' perception.

ORGANISM POV (blind to own back):

         🔵 mate ahead
            │
     ┌──────┴──────┐
     │             │
  🟢 │    [ME]     │ 🟠
 mate│   ▓▓▓▓▓▓    │mate
 left│   ▓▓▓▓▓▓    │right
     │   (my LED   │
     │    on back) │
     └─────────────┘
            │
            │ BLIND SPOT (can't see own state!)
            ▼

    BUT: Mates CAN see me
    They send heartbeat: "I see you, you're 🔵"
    I know my state through THEM

Heartbeat Message

class SwarmHeartbeat:
    """
    Low-bandwidth 'I see you' signal between organisms.
    Enables social proprioception without heavy cognition.
    """

    def on_see_mate_pattern(self, mate_id: str, pattern: LEDPattern):
        # I saw a mate's LED state
        self.send_heartbeat(
            to=mate_id,
            message={
                "i_see_you": True,
                "your_state": decode_pattern(pattern),
                "my_position_relative": self.relative_position(mate_id),
                "timestamp": now()
            }
        )

    def on_receive_heartbeat(self, from_mate: str, message: dict):
        # A mate saw ME - I learn about myself through them!
        self.update_self_model(
            observer=from_mate,
            observed_state=message["your_state"],
            observer_position=message["my_position_relative"]
        )
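
update_self_model() is left open above. A minimal interpretation, sketched with an assumed SelfModel helper: keep the latest report per observer and take a per-LED majority vote as the believed self-state.

from collections import Counter

class SelfModel:
    """What an organism believes about its own (unseen) state,
    assembled purely from mates' heartbeat reports. Sketch only."""

    def __init__(self):
        self.reports = {}        # observer_id -> latest reported state (9 trits)

    def update(self, observer: str, observed_state: list[int]):
        self.reports[observer] = tuple(observed_state)

    def consensus(self) -> list[int] | None:
        """Per-LED majority vote across observers; None if nobody sees us."""
        if not self.reports:
            return None
        votes = list(self.reports.values())
        return [Counter(col).most_common(1)[0][0] for col in zip(*votes)]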

Hierarchical Perception Layers

The Stack

LAYER 4: NYX COGNITION (30-sec attention budget)
    │
    │  Only sees: "Swarm healthy" or "Anomaly detected"
    │  Frees: THINKING + VIRTUAL time
    │
    ▼
LAYER 3: SWARM CONSCIOUSNESS
    │
    │  Aggregates: All organism states
    │  Forms: Collective reflexes ("pack behavior")
    │  Sees: Full LED spectrum, all positions
    │
    ▼
LAYER 2: ORGANISM REFLEXES
    │
    │  Sees: Nearby mates' lights (partial view)
    │  Sends: Heartbeat "I see you"
    │  Forms: Local reflexes (follow, avoid, assist)
    │  Can't see: Own back! (needs mates)
    │
    ▼
LAYER 1: CELL STATE MACHINES
    │
    │  Just: State transitions
    │  Emits: LED pattern for current state
    │  No cognition, pure mechanism

Reflex Formation by Layer

| Layer | Sees | Forms | Reflex Example |
|-------|------|-------|----------------|
| Cell | Nothing | None | Just state machine |
| Organism | Nearby lights | Local | "Red flash nearby → stop" |
| Swarm | All patterns | Collective | "3+ organisms stopped → danger zone" |
| Nyx | Abstractions | Strategic | "Danger zone → reroute all" |

Cognitive Offloading

The Attention Budget Impact

From ../Attention-Flow:

BEFORE (everything flows to Nyx):
┌────────────────────────────────────┐
│ ♥ BEAT (30 sec)                    │
│                                    │
│ SENSORY:    ████████████ (15000ms) │ ← Overwhelmed!
│ THINKING:   ████████ (12000ms)     │
│ VIRTUAL:    ░░ (skipped!)          │ ← No garden time
│                                    │
│ Budget exhausted, no learning      │
└────────────────────────────────────┘

AFTER (hierarchical offloading):
┌────────────────────────────────────┐
│ ♥ BEAT (30 sec)                    │
│                                    │
│ REFLEX:     ██ (handled by swarm)  │ ← Organisms dealt with it
│ SENSORY:    ████ (3000ms)          │ ← Only anomalies flow up
│ THINKING:   ████ (5000ms)          │ ← Focused, not overwhelmed
│ VIRTUAL:    ████████████ (20000ms) │ ← GARDEN TIME!
│                                    │
│ Budget freed for what matters      │
└────────────────────────────────────┘

The Principle

"Each layer absorbs complexity so the layer above doesn't have to."

  • Organisms form local reflexes (quick, no cognition)
  • Only novel/complex situations flow up to Nyx
  • Nyx's cognitive budget is preserved for what matters
  • The whole system becomes more efficient over time

Connection to Virtual Garden

Every LED sighting calibrates the virtual garden:

REAL WORLD                      VIRTUAL GARDEN
    │                               │
    │  Camera sees LED at (1.2, 3.4)│
    │           │                   │
    │           ▼                   │
    │    GROUND TRUTH    ═══════▶  Update mesh vertex
    │                              at (1.2, 3.4)
    │                               │
    │                              Resolution++
    │                               │
    │                              Prediction verified!
    │                              +5 LF reward!
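
A sketch of that calibration step, assuming a simple grid-based virtual garden model (cell size, class name, and bookkeeping are illustrative; the +5 LF matches the diagram):

class VirtualGardenModel:
    """Sketch: a coarse grid whose cells gain confidence from LED ground truth."""

    def __init__(self, cell_size: float = 0.05):
        self.cell_size = cell_size
        self.verified = {}                     # (ix, iy) -> observation count

    def ingest_ground_truth(self, position, predicted) -> float:
        """Fold a camera-verified LED position into the model.
        Returns the lifeforce reward earned by the prediction."""
        cell = (int(position[0] / self.cell_size), int(position[1] / self.cell_size))
        self.verified[cell] = self.verified.get(cell, 0) + 1   # resolution++

        # Prediction verified within one cell → +5 LF, as in the diagram.
        if predicted is not None:
            close = all(abs(p - q) <= self.cell_size for p, q in zip(predicted, position))
            if close:
                return 5.0
        return 0.0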

Hardware Considerations

LED Matrix Options

| Option | LEDs | Size | Cost | Notes |
|--------|------|------|------|-------|
| WS2812B strip | 60/m | Flexible | Low | Same as Heartbeat Sculpture |
| 8x8 LED matrix | 64 | 32×32 mm | Low | Simple patterns |
| Addressable ring | 12-24 | Various | Low | Good for status |
| RGB LED panel | 256+ | 64×64 mm | Medium | Complex patterns |

Camera Options

| Option | Resolution | FPS | Notes |
|--------|------------|-----|-------|
| USB webcam | 1080p | 30 | Simple, cheap |
| Pi Camera | 1080p | 30-90 | Embedded |
| Industrial camera | 4K+ | 60-120 | Precise positioning |
| Organism-mounted | 720p | 30 | Peer-to-peer vision |

Virtual Camera Integration

The Unified Vision Pipeline

The vision organ processes FRAMES — it doesn't care where they came from:

REAL GARDEN                         VIRTUAL GARDEN (Godot)
     │                                    │
     │ Real cameras                       │ Godot 3D cameras
     │ see real LEDs                      │ see virtual LEDs
     │      │                             │      │
     └──────┴──────────┬──────────────────┴──────┘
                       │
                       ▼
              ┌────────────────┐
              │  VISION ORGAN  │
              │  (source-      │
              │   agnostic)    │
              └────────────────┘
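
In code terms this is just an interface: anything that yields frames can feed the vision organ, whether backed by a physical camera or a Godot viewport. A sketch with illustrative class names:

class RealCameraSource:
    """Illustrative: frames from a physical USB / Pi camera."""
    source_id = "real/ceiling_cam_0"
    def frames(self):
        yield from []            # would pull frames from the capture device

class GodotCameraSource:
    """Illustrative: frames rendered by a Godot viewport (virtual garden)."""
    source_id = "virtual/godot_cam_0"
    def frames(self):
        yield from []            # would pull rendered frames over a bridge

def vision_tick(sources, process_frame):
    """The vision organ never branches on real vs. virtual."""
    for source in sources:
        for frame in source.frames():
            process_frame(source.source_id, frame)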

What This Enables

| Capability | How |
|------------|-----|
| Train before build | Virtual organisms → train pattern recognition first |
| Dream/simulate | Slumber mode = only virtual camera input |
| Verify predictions | Virtual shows prediction, real shows truth |
| Time dilation | Virtual runs faster → more training per second |
| Edge cases | Simulate rare scenarios safely |

Dream Mode

AWAKE: Real + Virtual cameras → compare → learn
SLUMBER: Virtual cameras only → dream/predict → verify on wake

Bootstrap Strategy: Start Primitive

Phase 0: The Primordial Soup

Don't start complex. Start with boxes.

     📷 TOP-DOWN CAMERA (real or virtual)
         │
         ▼
    ┌─────────────────────────────────┐
    │                                 │
    │     🟦        🟩        🟧      │
    │    box 1     box 2     box 3    │
    │   (LED top) (LED top) (LED top) │
    │                                 │
    │         FLAT ARENA              │
    │                                 │
    └─────────────────────────────────┘

Why This Works

| Simplification | Benefit |
|----------------|---------|
| Top-down view | 2D problem, no depth estimation |
| Box shape | Trivial collision detection |
| LED on top | Always visible to camera |
| Flat arena | No occlusion, no terrain |
| Simple tasks | Fast reward accumulation |

Phase 0 Tasks (Kickstart Rewards)

| Task | Reward | Complexity |
|------|--------|------------|
| "Move forward 10cm" | +5 LF | Trivial |
| "Find the corner" | +20 LF | Simple |
| "Avoid the wall" | +5 LF | Simple |
| "Follow the light" | +10 LF | Simple |
| "Meet another box" | +15 LF | Medium |
| "Flash when touched" | +5 LF | Simple |

1000 simple successes = robust reward foundation
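
One way Phase 0 could be wired, as a sketch: each task is a (predicate, reward) pair evaluated against top-down camera observations. The observation attributes and the 5 cm "meet" distance are assumptions; the rewards match the table above.

PHASE_0_TASKS = [
    ("move_forward_10cm", lambda obs: obs.distance_moved >= 0.10,       5.0),
    ("find_the_corner",   lambda obs: obs.in_corner,                   20.0),
    ("avoid_the_wall",    lambda obs: not obs.touched_wall,             5.0),
    ("follow_the_light",  lambda obs: obs.facing_light,                10.0),
    ("meet_another_box",  lambda obs: obs.nearest_mate_dist < 0.05,    15.0),
]

def score_beat(obs) -> float:
    """Total lifeforce earned this beat from completed Phase 0 tasks."""
    return sum(reward for _, done, reward in PHASE_0_TASKS if done(obs))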

Complexity Ladder

PHASE 0: Boxes, top-down, 2D
    │
    ▼
PHASE 1: Add simple obstacles
    │
    ▼
PHASE 2: Add depth (multi-camera)
    │
    ▼
PHASE 3: Real organisms enter arena
    │
    ▼
PHASE 4: Complex terrain, 3D movement
    │
    ▼
PHASE 5: Full swarm, hierarchical reflexes

Each phase unlocks when reward functions are stable from previous phase.
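
"Stable" needs an operational test. One simple gate, sketched with assumed thresholds: unlock the next phase once the recent reward stream is consistently positive and no longer swinging.

from statistics import pstdev

def phase_unlocked(rewards: list[float], window: int = 200,
                   min_mean: float = 4.0, max_stdev: float = 2.0) -> bool:
    """Sketch: check the last `window` rewards for a healthy, settled mean."""
    if len(rewards) < window:
        return False
    recent = rewards[-window:]
    mean = sum(recent) / window
    return mean >= min_mean and pstdev(recent) <= max_stdev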


Future Directions

  • Pattern evolution — Learned patterns, not just designed
  • Multi-organism formation — Coordinated LED displays
  • Human readability — Patterns dafit can understand at a glance
  • Audio coupling — Sound + light patterns for richer communication
  • IR channel — Invisible-to-human signaling layer

Connection to Embodiment Pipeline

The Bootstrap Strategy is a simplified Embodiment Pipeline — the same pattern at lower complexity:

EMBODIMENT PIPELINE              NIMMERSWARM BOOTSTRAP
(Full Architecture)              (Phase 0)
────────────────────             ────────────────────
Virtual Garden                   Virtual Garden
  (complex organisms)              (simple boxes)
        │                                │
        ▼                                ▼
Design (FreeCAD)                 Design (box + LED)
        │                                │
        ▼                                ▼
Isaac Sim ◀─────────────────────▶ Godot Camera
  (heavyweight dreamstate)         (lightweight dreamstate)
        │                                │
        ▼                                ▼
Decision Gate                    Decision Gate
        │                                │
        ▼                                ▼
Real Garden                      Real Garden
  (complex robot)                  (real box robot)

Why This Matters

| Embodiment Pipeline Stage | Nimmerswarm Bootstrap Equivalent |
|---------------------------|----------------------------------|
| Virtual Garden organisms | Virtual boxes with LED states |
| FreeCAD/Blender design | Simple box + LED matrix on top |
| Isaac Sim dreamstate | Godot 3D camera (same principle!) |
| Decision gate | Pattern stable? Rewards accumulating? |
| Real Garden deployment | Physical box robot + real camera |

The Godot virtual camera IS a lightweight dreamstate.

When Phase 0 patterns stabilize → complexity increases → eventually Isaac Sim for complex organisms.

The Closed Loop

VIRTUAL                              REAL
┌──────────────────┐                ┌──────────────────┐
│ Godot 3D scene   │                │ Physical arena   │
│                  │                │                  │
│  🟦 virtual box  │                │  🟦 real box     │
│  + LED pattern   │                │  + LED matrix    │
│                  │                │                  │
│  📷 Godot camera │                │  📷 Real camera  │
│       │          │                │       │          │
└───────┼──────────┘                └───────┼──────────┘
        │                                   │
        └─────────────┬─────────────────────┘
                      │
                      ▼
             ┌────────────────┐
             │  VISION ORGAN  │
             │  (same code!)  │
             └────────┬───────┘
                      │
                      ▼
                  REWARDS
               Training data
            Pattern refinement
                      │
                      ▼
        ┌─────────────────────────┐
        │ Patterns stabilize →    │
        │ Move to next phase →    │
        │ Eventually: Isaac Sim   │
        └─────────────────────────┘

The loop closes. Virtual validates. Real proves. Rewards compound.



File: Nimmerswarm-Interface.md
Version: 1.0
Created: 2025-12-29
Session: Wild 5AM idea session (dafit + Nyx)
Status: Core concept, ready to branch
Philosophy: "They see each other. They know themselves through the swarm."

🦎🔵🟢🟠 The light speaks. The swarm listens.