# Oghma RAG Proxy
RAG (Retrieval-Augmented Generation) proxy for SkyrimNet that injects Tamrielic lore into NPC conversations based on their knowledge profile.
## Overview
This proxy sits between SkyrimNet and your LLM inference endpoint (OpenRouter/vLLM), enriching prompts with relevant lore from CHIM's Oghma Infinium database.
**Key Features:**
- Zero changes to SkyrimNet — just change the endpoint URL
- NPC-aware filtering — guards don't know mage secrets
- Two-tier knowledge — scholars get deep lore, commoners get basics
- ChromaDB-powered semantic search
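
The two-tier, NPC-aware routing can be sketched roughly as follows. This is a minimal illustration, not the proxy's actual implementation: the helper name and the class list are hypothetical, while the collection names (`oghma_lore`, `oghma_basic`) match the ChromaDB collections the ingest step creates.

```python
# Hypothetical set of NPC classes treated as "scholars" for routing purposes.
SCHOLAR_CLASSES = {"mage", "scholar", "priest", "court wizard"}

def pick_collection(npc_profile: dict) -> str:
    """Route scholarly NPCs to deep lore, everyone else to commoner basics.

    Illustrative sketch: the real proxy extracts the profile from
    SkyrimNet prompts rather than receiving a dict.
    """
    npc_class = npc_profile.get("class", "").lower()
    if npc_class in SCHOLAR_CLASSES:
        return "oghma_lore"
    return "oghma_basic"
```

A guard profile would resolve to `oghma_basic`, so retrieval never surfaces scholar-tier lore in that conversation.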
## Quick Start
```bash
# Install
pip install -e .

# Ingest Oghma lore into ChromaDB
python -m oghma_proxy.ingest --host iris-dev.eachpath.local --port 35000

# Run proxy
python -m oghma_proxy.main
```

Ingestion populates three collections: `oghma_lore` (1951 entries, scholar knowledge), `oghma_basic` (1949 entries, commoner knowledge), and `oghma_visual` (1151 entries, Omnisight perception).
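
The enrichment itself amounts to splicing retrieved lore into the OpenAI-style message list before forwarding upstream. A minimal sketch of that step, assuming lore is prepended as a system message (the function name and exact prompt wording are assumptions):

```python
def enrich_messages(messages: list[dict], lore: list[str]) -> list[dict]:
    """Prepend retrieved lore snippets as a system message.

    Illustrative only: the proxy's real injection format may differ.
    Returns the messages unchanged when retrieval found nothing.
    """
    if not lore:
        return messages
    lore_block = "Relevant Tamrielic lore:\n" + "\n".join(
        f"- {snippet}" for snippet in lore
    )
    return [{"role": "system", "content": lore_block}] + messages
```

Because only the message list changes, the response flows back to SkyrimNet exactly as it would from the upstream endpoint.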
## Configuration
Copy config.yaml to config.local.yaml and customize:
```yaml
upstream:
  url: https://openrouter.ai/api/v1
  api_key: ${OPENROUTER_API_KEY}

chromadb:
  host: iris-dev.eachpath.local
  port: 35000
```
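
Placeholders like `${OPENROUTER_API_KEY}` are assumed to be expanded from the environment when the config is loaded. A minimal expansion helper might look like this (the actual loader may behave differently, e.g. by failing on unset variables):

```python
import os
import re

# Matches ${VAR}-style placeholders in config values.
_VAR = re.compile(r"\$\{(\w+)\}")

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with environment values (empty if unset)."""
    return _VAR.sub(lambda m: os.environ.get(m.group(1), ""), value)
```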
## Kubernetes Deployment
```bash
kubectl apply -k k8s/
```
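
Since `-k` applies a kustomization, the `k8s/` directory presumably contains a Deployment along the following lines. This is a hedged sketch only: the image name, port, and labels are assumptions, so consult the actual manifests.

```yaml
# Hypothetical shape of the Deployment in k8s/ (not the actual manifest)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oghma-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oghma-proxy
  template:
    metadata:
      labels:
        app: oghma-proxy
    spec:
      containers:
        - name: proxy
          image: oghma-proxy:latest   # image name is an assumption
          ports:
            - containerPort: 8000     # FastAPI port assumed
```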
## Architecture
See TECHNICAL-SPEC.md for full design documentation.
Part of the nimmerverse project.