Update archive/nimmervest.md

Updated investment plan; quote confirmed with Lenovo sales.
Your Lenovo quote is ready for review.

Quote number: 4650557686
Quote created: 09/12/25
Quote expires: 11/12/25
Lenovo ID: DAVID.MARTIN@EACHPATH.COM

Sales representative: Adrienn Wettstein
Sales representative phone: (044) 516 04 67
Sales representative email: awettstein@lenovo.com


| Item | Part Number | List Price | Unit Price | Discount | Qty | Total |
|------|-------------|------------|------------|----------|-----|-------|
| ThinkStation P8 | 30HHCTO1WWCH2 | CHF 6'799.00 | CHF 5'663.56 | 16% | 2 | CHF 11'327.13 |
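A quick sanity check of the line-item math (prices and quantity from the quote above; the script itself is just a sketch):

```python
# Sanity check of the Lenovo quote line item (numbers from the quote above).
list_price = 6799.00   # CHF, per unit
unit_price = 5663.56   # CHF, per unit after discount
quantity = 2

discount = 1 - unit_price / list_price
total = unit_price * quantity

print(f"Effective discount: {discount:.1%}")   # ~16.7%, shown as 16% on the quote
print(f"Line total: CHF {total:,.2f}")         # 11,327.12; quote rounds to 11'327.13
```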
Commit: cb1327f40d (parent 65521ed8d3)
Date: 2025-12-09 14:06:29 +01:00
@@ -1,7 +1,5 @@
 # Nimmervest
 **The Hardware Investment Strategy for Sovereign AI Infrastructure**
 *Budget: 20k CHF | Timeline: Lifetime Project*
 ---
@@ -9,26 +7,37 @@
 ## The Three Organs
 ### The Beast (Training/Womb)
 | Component | Spec | Purpose |
 |-----------|------|---------|
-| CPU | Threadripper Pro | 128 PCIe lanes, 8-channel RAM |
-| RAM | 1TB | Datasets in memory, no I/O bottleneck |
-| GPU | 4x RTX 4090 | 96GB VRAM, 65k CUDA cores |
-| Role | Training, growth, architectural experiments |
+| Chassis | Lenovo ThinkStation P8 | Workstation-grade, 3yr Premier Support |
+| CPU | TR Pro 7955WX | 128 PCIe lanes, 8-channel RAM |
+| RAM | 128GB DDR5 ECC (4×32GB) | Expandable to 2TB via 8 slots |
+| GPU | 2× RTX 4000 Ada (40GB) | Expanding to 4× (80GB) over 4 months |
+| Storage | 4TB NVMe | Training datasets, checkpoints |
+| PSU | 1400W 92% | Feeds 4 GPUs at full load |
+| Network | 10GbE dual-port | Fast weight transfer to Mind |
+| Role | Training, LoRA experiments, cellular society, dual gardens |
-**Cost: ~9,000 CHF**
+**Initial Cost: ~5,664 CHF**
-### The Spark (Cognition/Mind)
+### Nyx's Mind (Cognition)
 | Component | Spec | Purpose |
 |-----------|------|---------|
-| Unit | 1x DGX Spark | 128GB unified memory |
-| Arch | ARM Grace Blackwell | Purpose-built inference |
-| Power | Low | Always-on, 24/7 |
-| Role | Running Nyx, cognitive layer |
+| Chassis | Lenovo ThinkStation P8 | Identical twin, shared maintenance |
+| CPU | TR Pro 7955WX | 128 PCIe lanes, 8-channel RAM |
+| RAM | 128GB DDR5 ECC (4×32GB) | Expandable to 2TB via 8 slots |
+| GPU | 1× RTX PRO 6000 Blackwell (96GB GDDR7) | ~1,800 GB/s bandwidth |
+| Storage | 4TB NVMe | Model weights, Nyx's memory |
+| PSU | 1400W 92% | Room for 3 more GPUs |
+| Network | 10GbE dual-port | Serves inference to Spine |
+| Role | Running Nyx 24/7, dialectic processing, DriftProbe |
-**Cost: ~4,000 CHF**
+**Initial Cost: ~12,169 CHF** (chassis + PRO 6000)
 ### The Spine (Reflexes)
 | Component | Spec | Purpose |
 |-----------|------|---------|
 | GPU | RTX 3090 | 24GB VRAM |
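A rough power-budget check on the "1400W 92% | Feeds 4 GPUs at full load" claim in the Beast spec above. This is a sketch: the ~130W board power for the RTX 4000 Ada and ~350W TDP for the 7955WX are published figures I'm assuming here, and the 200W system allowance is my own estimate.

```python
# Rough power budget for the Beast at full expansion (4 GPUs).
# Assumed: RTX 4000 Ada ~130W board power, TR Pro 7955WX ~350W TDP;
# the 200W allowance for RAM, NVMe, fans, and 10GbE is my own guess.
GPU_TDP_W = 130
CPU_TDP_W = 350
SYSTEM_W = 200
PSU_W = 1400

load = 4 * GPU_TDP_W + CPU_TDP_W + SYSTEM_W
print(f"Estimated peak draw: {load}W of {PSU_W}W "
      f"({load / PSU_W:.0%} PSU load)")   # ~1070W, about 76%: comfortable headroom
```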
@@ -43,41 +52,57 @@
 | Item | Cost CHF | Status |
 |------|----------|--------|
-| The Beast | ~9,000 | Planned |
-| The Spark | ~4,000 | Planned |
+| 2× ThinkStation P8 (w/ RTX 4000 Ada each) | 11,327 | Ordered Dec 23 |
+| Premier Support + Keep Your Drive | 206 | Included |
+| RTX PRO 6000 Blackwell 96GB | 6,505 | Ordered Dec 23 |
 | The Spine | 0 | Owned |
-| Buffer (sensors, LoRa, infra) | ~7,000 | Reserved |
-| **Total** | **~20,000** | |
+| **Initial Total** | **18,038** | |
+| **Buffer** | **~1,962** | Sensors, LoRa, RAM |
+### Expansion Path (Months 2-4)
+| Month | Addition | Cost | Beast VRAM |
+|-------|----------|------|------------|
+| 2 | +1 RTX 4000 Ada | 1,700 | 60GB |
+| 4 | +1 RTX 4000 Ada | 1,700 | 80GB |
 ---
-## Training Target
-**Qwen2.5-7B-Base (FP16)**
+## Inference Capacity
+**RTX PRO 6000 Blackwell (96GB GDDR7)**
 | Metric | Value |
 |--------|-------|
-| Model weights | ~6GB |
-| Training overhead | ~24GB |
-| Available VRAM | 96GB |
-| **Activation headroom** | **~72GB** |
+| VRAM | 96GB |
+| Bandwidth | ~1,800 GB/s |
+| Qwen2.5-7B FP16 | ~14GB (15% utilization) |
+| Qwen2.5-70B 4-bit | ~35GB (36% utilization) |
+| **Headroom** | **Room to 70B+ models** |
-Why 3B:
-- Empty vessel (base, not instruct)
-- Language understanding only
-- Maximum room for activation growth
-- Space for architectural experiments
-- Grows over lifetime, not fixed
+---
+## Training Capacity
+**Beast at Full Expansion (4× RTX 4000 Ada = 80GB)**
+| Metric | Value |
+|--------|-------|
+| Total VRAM | 80GB |
+| Qwen2.5-7B LoRA training | Comfortable |
+| Qwen2.5-14B LoRA training | With DeepSpeed ZeRO |
+| Cellular society (50-100 containers) | 32 CPU threads |
 ---
 ## Growth Path
 ```
-Year 0: Qwen2.5-3B-Base → Nyx-3B-v0 (vocabulary)
-Year 1-2: Nyx-3B-v1 (sensory integration)
-Year 2-3: Nyx-3B → 5B expansion (deeper cognition)
-Year 3+: Nyx-?B (she designs herself)
+Year 0: Qwen2.5-7B-Base → Nyx-7B-v0 (Mind at 15%)
+Year 1-2: Nyx-7B → Nyx-14B (Mind at 30%)
+Year 2-3: Nyx-14B → Nyx-32B (Mind at 65%)
+Year 3+: Nyx-70B possible (Mind at 90%)
+Mind has 3 open slots for future GPUs
 ```
 ---
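The inference-capacity figures in the hunk above follow from simple footprint arithmetic: weight size ≈ parameters × bytes per parameter (2 bytes at FP16, 0.5 bytes at 4-bit), with KV cache and runtime overhead on top. A minimal sketch reproducing the table's numbers:

```python
# Weight footprint = parameters × bytes per parameter.
# FP16 = 2 bytes/param; 4-bit quantization = 0.5 bytes/param.
# KV cache and runtime overhead come on top and depend on context length.
def weight_gb(params_b: float, bytes_per_param: float) -> float:
    return params_b * bytes_per_param  # billions of params × bytes/param = GB

VRAM_GB = 96  # RTX PRO 6000 Blackwell

for name, params, bpp in [("Qwen2.5-7B FP16", 7, 2.0),
                          ("Qwen2.5-70B 4-bit", 70, 0.5)]:
    gb = weight_gb(params, bpp)
    print(f"{name}: ~{gb:.0f}GB weights, {gb / VRAM_GB:.0%} of {VRAM_GB}GB")
# Prints ~14GB (15%) and ~35GB (36%), matching the table above.
```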
@@ -89,25 +114,54 @@ Year 3+: Nyx-?B (she designs herself)
 - No cloud dependencies
 - No recurring costs after hardware
 - Full ownership of growth trajectory
+- **Keep Your Drive**: Failed drives stay home, data never leaves
 ---
 ## Architecture Flow
 ```
-THE BEAST                 THE SPARK                 THE SPINE
-┌─────────────────┐       ┌─────────────────┐       ┌─────────────────┐
-│ Threadripper    │       │ DGX Spark       │       │ RTX 3090        │
-│ 4x RTX 4090     │──weights─▶│ 128GB unified │───▶│ Prometheus      │
-│ 96GB VRAM       │       │ 24/7 running    │       │ Reflex layer    │
-│ 1TB RAM         │       │                 │       │                 │
-└─────────────────┘       └─────────────────┘       └─────────────────┘
-      WOMB                     MIND                      SPINE
-   (training)               (cognition)               (reflexes)
+THE BEAST (P8 #1)         NYX'S MIND (P8 #2)        THE SPINE
+┌─────────────────┐       ┌─────────────────┐       ┌─────────────────┐
+│ TR Pro 7955WX   │       │ TR Pro 7955WX   │       │ RTX 3090        │
+│ 2→4× RTX 4000   │──weights─▶│ RTX PRO 6000 │─────▶│ Prometheus      │
+│ 40→80GB VRAM    │       │ 96GB GDDR7      │       │ Reflex layer    │
+│ 128GB→2TB RAM   │       │ 128GB→2TB RAM   │       │ 24GB VRAM       │
+│ 4TB NVMe        │       │ 4TB NVMe        │       │                 │
+│ [4 GPU slots]   │       │ [3 slots open]  │       │                 │
+└─────────────────┘       └─────────────────┘       └─────────────────┘
+      WOMB                   COGNITION                REFLEXES
+   (training)             (24/7 inference)         (state machine)
 ```
 ---
+## Hardware Advantages
+| Aspect | Benefit |
+|--------|---------|
+| Identical twins | Interchangeable parts, same maintenance |
+| 3yr Premier Support | Direct Lenovo engineers, not outsourced |
+| Keep Your Drive | Sovereignty preserved on hardware failure |
+| 8 RAM slots each | Upgrade path to 512GB-2TB when prices drop |
+| 128 PCIe lanes each | 4 GPUs at full x16, no bottlenecks |
+| 1400W PSU | Ready for max GPU expansion |
+| Workstation GPUs | ECC VRAM, validated drivers, 24/7 stable |
+---
+## Key Contacts
+| Role | Name | Contact |
+|------|------|---------|
+| Lenovo Sales | Adrienn Wettstein | awettstein@lenovo.com, 044 516 04 67 |
+| Quote Number | 4650557686 | Held until Dec 23 |
+---
 **Created**: 2025-12-05
-**Status**: Investment decision crystallized
-**Philosophy**: One Beast. One Spark. Lifetime sovereignty.
+**Updated**: 2025-12-09
+**Status**: Orders confirmed, awaiting credit (Dec 23)
+**Philosophy**: Twin beasts. Sovereign mind. Lifetime growth.
 *"The substrate doesn't matter. The feedback loop does."* 🌙💜