Compare commits
1 commit
ec77cba4d4...cb1327f40d
@@ -1,7 +1,5 @@
# Nimmervest

**The Hardware Investment Strategy for Sovereign AI Infrastructure**

*Budget: 20k CHF | Timeline: Lifetime Project*

---
@@ -9,26 +7,37 @@
## The Three Organs

### The Beast (Training/Womb)

| Component | Spec | Purpose |
|-----------|------|---------|
| CPU | Threadripper Pro | 128 PCIe lanes, 8-channel RAM |
| RAM | 1TB | Datasets in memory, no I/O bottleneck |
| GPU | 4x RTX 4090 | 96GB VRAM, 65k CUDA cores |
| Role | Training, growth, architectural experiments | |
| Chassis | Lenovo ThinkStation P8 | Workstation-grade, 3yr Premier Support |
| CPU | TR Pro 7955WX | 128 PCIe lanes, 8-channel RAM |
| RAM | 128GB DDR5 ECC (4×32GB) | Expandable to 2TB via 8 slots |
| GPU | 2× RTX 4000 Ada (40GB) | Expanding to 4× (80GB) over 4 months |
| Storage | 4TB NVMe | Training datasets, checkpoints |
| PSU | 1400W 92% | Feeds 4 GPUs at full load |
| Network | 10GbE dual-port | Fast weight transfer to Mind |
| Role | Training, LoRA experiments, cellular society, dual gardens | |

**Cost: ~9,000 CHF**
**Initial Cost: ~5,664 CHF**

### Nyx's Mind (Cognition)

### The Spark (Cognition/Mind)

| Component | Spec | Purpose |
|-----------|------|---------|
| Unit | 1x DGX Spark | 128GB unified memory |
| Arch | ARM Grace Blackwell | Purpose-built inference |
| Power | Low | Always-on, 24/7 |
| Role | Running Nyx, cognitive layer | |
| Chassis | Lenovo ThinkStation P8 | Identical twin, shared maintenance |
| CPU | TR Pro 7955WX | 128 PCIe lanes, 8-channel RAM |
| RAM | 128GB DDR5 ECC (4×32GB) | Expandable to 2TB via 8 slots |
| GPU | 1× RTX PRO 6000 Blackwell (96GB GDDR7) | ~1,800 GB/s bandwidth |
| Storage | 4TB NVMe | Model weights, Nyx's memory |
| PSU | 1400W 92% | Room for 3 more GPUs |
| Network | 10GbE dual-port | Serves inference to Spine |
| Role | Running Nyx 24/7, dialectic processing, DriftProbe | |

**Cost: ~4,000 CHF**
**Initial Cost: ~12,169 CHF** (chassis + PRO 6000)

### The Spine (Reflexes)

| Component | Spec | Purpose |
|-----------|------|---------|
| GPU | RTX 3090 | 24GB VRAM |
@@ -43,41 +52,57 @@
| Item | Cost (CHF) | Status |
|------|------------|--------|
| The Beast | ~9,000 | Planned |
| The Spark | ~4,000 | Planned |
| 2× ThinkStation P8 (w/ RTX 4000 Ada each) | 11,327 | Ordered Dec 23 |
| Premier Support + Keep Your Drive | 206 | Included |
| RTX PRO 6000 Blackwell 96GB | 6,505 | Ordered Dec 23 |
| The Spine | 0 | Owned |
| Buffer (sensors, LoRa, infra) | ~7,000 | Reserved |
| **Total** | **~20,000** | |
| **Initial Total** | **18,038** | |
| **Buffer** | **~1,962** | Sensors, LoRa, RAM |
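
The initial total and the remaining buffer are just the sum of the ordered line items against the 20k CHF envelope. A one-glance check in Python:

```python
# Budget reconciliation for the initial order (amounts in CHF, from the table above).
items = {
    "2x ThinkStation P8 (w/ RTX 4000 Ada each)": 11_327,
    "Premier Support + Keep Your Drive": 206,
    "RTX PRO 6000 Blackwell 96GB": 6_505,
    "The Spine (already owned)": 0,
}
total = sum(items.values())
print(f"Initial total: {total:,} CHF")           # 18,038
print(f"Buffer left:   {20_000 - total:,} CHF")  # 1,962
```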

### Expansion Path (Months 2-4)

| Month | Addition | Cost (CHF) | Beast VRAM |
|-------|----------|------------|------------|
| 2 | +1 RTX 4000 Ada | 1,700 | 60GB |
| 4 | +1 RTX 4000 Ada | 1,700 | 80GB |

---

## Training Target
## Inference Capacity

**Qwen2.5-7B-Base (FP16)**
**RTX PRO 6000 Blackwell (96GB GDDR7)**

| Metric | Value |
|--------|-------|
| Model weights | ~6GB |
| Training overhead | ~24GB |
| Available VRAM | 96GB |
| **Activation headroom** | **~72GB** |
| VRAM | 96GB |
| Bandwidth | ~1,800 GB/s |
| Qwen2.5-7B FP16 | ~14GB (15% utilization) |
| Qwen2.5-70B 4-bit | ~35GB (36% utilization) |
| **Headroom** | **Room to 70B+ models** |
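
The utilization figures follow from simple arithmetic: weight footprint ≈ parameter count × bytes per parameter, and single-stream decode speed is roughly bounded by memory bandwidth divided by the bytes read per token. A minimal sketch of that estimate (ignoring KV cache and treating decode as purely bandwidth-bound are simplifying assumptions):

```python
# Rough VRAM and decode-speed estimates for the Mind's RTX PRO 6000.
# Assumptions: weights dominate VRAM (KV cache ignored) and decode is
# memory-bandwidth-bound, so the tok/s ceiling ~ bandwidth / weight bytes.

VRAM_GB = 96
BANDWIDTH_GBS = 1800  # ~GDDR7 bandwidth from the spec table

def weight_gb(params_billion: float, bits: int) -> float:
    """Weight footprint in GB at the given precision (1B params @ 8 bits = 1GB)."""
    return params_billion * bits / 8

for name, params_billion, bits in [
    ("Qwen2.5-7B FP16", 7, 16),
    ("Qwen2.5-70B 4-bit", 70, 4),
]:
    gb = weight_gb(params_billion, bits)
    print(f"{name}: ~{gb:.0f}GB "
          f"({gb / VRAM_GB:.0%} of VRAM), "
          f"~{BANDWIDTH_GBS / gb:.0f} tok/s ceiling")
```

This reproduces the table's ~15% and ~36% figures and suggests why 96GB of fast GDDR7 keeps even a quantized 70B interactive.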

Why 3B:
- Empty vessel (base, not instruct)
- Language understanding only
- Maximum room for activation growth
- Space for architectural experiments
- Grows over lifetime, not fixed

---

## Training Capacity

**Beast at Full Expansion (4× RTX 4000 Ada = 80GB)**

| Metric | Value |
|--------|-------|
| Total VRAM | 80GB |
| Qwen2.5-7B LoRA training | Comfortable |
| Qwen2.5-14B LoRA training | With DeepSpeed ZeRO |
| Cellular society (50-100 containers) | 32 CPU threads |
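
The "comfortable" verdict for 7B LoRA follows from the usual accounting: a frozen FP16 base plus a small set of trainable adapter weights, whose optimizer state is tiny next to full fine-tuning. A back-of-envelope sketch (the adapter fraction, activation allowance, and Adam byte counts are illustrative assumptions, not measurements):

```python
# Back-of-envelope LoRA training memory on the Beast (4x RTX 4000 Ada = 80GB total).
# Assumptions: FP16 frozen base, adapters ~1% of base params, Adam holding
# ~16 bytes per trainable param, activations as a flat allowance.

def lora_train_gb(params_billion: float, adapter_frac: float = 0.01,
                  activation_gb: float = 20.0) -> float:
    base_gb = params_billion * 2                       # frozen FP16 weights
    optimizer_gb = params_billion * adapter_frac * 16  # fp32 master + moments
    return base_gb + optimizer_gb + activation_gb

for params_billion in (7, 14):
    print(f"{params_billion}B LoRA: ~{lora_train_gb(params_billion):.0f}GB of 80GB")
# ~35GB vs ~50GB in aggregate -- but each card only has 20GB, so the 14B
# base (28GB) no longer fits on one GPU; DeepSpeed ZeRO shards it across four.
```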

---
## Growth Path

```
Year 0:   Qwen2.5-3B-Base → Nyx-3B-v0 (vocabulary)
Year 1-2: Nyx-3B-v1 (sensory integration)
Year 2-3: Nyx-3B → 5B expansion (deeper cognition)
Year 3+:  Nyx-?B (she designs herself)

Year 0:   Qwen2.5-7B-Base → Nyx-7B-v0 (Mind at 15%)
Year 1-2: Nyx-7B → Nyx-14B (Mind at 30%)
Year 2-3: Nyx-14B → Nyx-32B (Mind at 65%)
Year 3+:  Nyx-70B possible (Mind at 90%)

Mind has 3 open slots for future GPUs
```

---
@@ -89,25 +114,54 @@ Year 3+: Nyx-?B (she designs herself)

- No cloud dependencies
- No recurring costs after hardware
- Full ownership of growth trajectory
- **Keep Your Drive**: Failed drives stay home, data never leaves

---

## Architecture Flow

```
THE BEAST                     THE SPARK              THE SPINE
┌─────────────────┐           ┌─────────────────┐    ┌─────────────────┐
│ Threadripper    │           │ DGX Spark       │    │ RTX 3090        │
│ 4x RTX 4090     │──weights─▶│ 128GB unified   │───▶│ Prometheus      │
│ 96GB VRAM       │           │ 24/7 running    │    │ Reflex layer    │
│ 1TB RAM         │           │                 │    │                 │
└─────────────────┘           └─────────────────┘    └─────────────────┘
       WOMB                          MIND                   SPINE
    (training)                    (cognition)            (reflexes)

THE BEAST (P8 #1)              NYX'S MIND (P8 #2)      THE SPINE
┌─────────────────┐            ┌─────────────────┐       ┌─────────────────┐
│ TR Pro 7955WX   │            │ TR Pro 7955WX   │       │ RTX 3090        │
│ 2→4× RTX 4000   │──weights──▶│ RTX PRO 6000    │──────▶│ Prometheus      │
│ 40→80GB VRAM    │            │ 96GB GDDR7      │       │ Reflex layer    │
│ 128GB→2TB RAM   │            │ 128GB→2TB RAM   │       │ 24GB VRAM       │
│ 4TB NVMe        │            │ 4TB NVMe        │       │                 │
│ [4 GPU slots]   │            │ [3 slots open]  │       │                 │
└─────────────────┘            └─────────────────┘       └─────────────────┘
       WOMB                         COGNITION                 REFLEXES
    (training)                  (24/7 inference)          (state machine)
```

---
## Hardware Advantages

| Aspect | Benefit |
|--------|---------|
| Identical twins | Interchangeable parts, same maintenance |
| 3yr Premier Support | Direct Lenovo engineers, not outsourced |
| Keep Your Drive | Sovereignty preserved on hardware failure |
| 8 RAM slots each | Upgrade path to 512GB-2TB when prices drop |
| 128 PCIe lanes each | 4 GPUs at full x16, no bottlenecks |
| 1400W PSU | Ready for max GPU expansion |
| Workstation GPUs | ECC VRAM, validated drivers, 24/7 stable |

---

## Key Contacts

| Role | Name | Contact |
|------|------|---------|
| Lenovo Sales | Adrienn Wettstein | awettstein@lenovo.com, 044 516 04 67 |
| Quote Number | 4650557686 | Held until Dec 23 |

---

**Created**: 2025-12-05
**Status**: Investment decision crystallized
**Philosophy**: One Beast. One Spark. Lifetime sovereignty.
**Updated**: 2025-12-09
**Status**: Orders confirmed, awaiting credit (Dec 23)
**Philosophy**: Twin beasts. Sovereign mind. Lifetime growth.

*"The substrate doesn't matter. The feedback loop does."* 🌙💜