Architecture
Overview
MuninnDB is a single-binary Go application embedding all its dependencies. There is no separate database process to manage, no configuration server, no external queue. The binary contains the full storage engine, all cognitive workers, all wire protocols, and the web UI.
Storage Engine
At the lowest layer, MuninnDB uses Pebble, CockroachDB's pure-Go embedded key-value store in the LevelDB/RocksDB lineage. Pebble provides:
- LSM (Log-Structured Merge Tree) storage — excellent write throughput
- Snapshot reads via internal sequence numbers — safe concurrent reads alongside writes (full MVCC is layered above Pebble, not inside it)
- Crash safety with WAL (Write-Ahead Log)
- Range scans with prefix iteration
On top of Pebble, MuninnDB implements the Engram Record Format (ERF v1) — a purpose-built binary encoding with a fixed 152-byte header, variable-length content, and optional embedding storage.
Keys are prefixed by type (0x01 full record, 0x02 metadata-only, 0x03 index entries), so the decay worker can scan only the ~100-byte metadata records rather than full 4 KB engrams — roughly a 40× reduction in scan bandwidth.
Cognitive Workers
Four background goroutines run on configurable schedules:
- Decay Worker — Scans all active engrams, computes Ebbinghaus retention R(t) = max(floor, e^(-t/S)), and updates relevance scores. Uses the metadata-only key scan for efficiency.
- Hebbian Worker — Reads the ring buffer of recent co-activations, applies multiplicative weight updates to associations. No LLM involved — pure math.
- Contradiction Worker — Scans for structural contradictions (same concept, conflicting content), concept-cluster contradictions (high BM25 similarity + low confidence alignment), and explicit supersession chains. Flags for confidence review.
- Confidence Worker — Applies Bayesian updating when engrams are reinforced or contradicted: posterior = (p × s) / (p × s + (1 − p) × (1 − s)), where p is the prior confidence and s the evidence strength, with Laplace smoothing to keep confidence away from 0 and 1.
Scaling Strategy
Vault ID is the natural shard key — each user or agent has their own vault, and vaults don't require cross-shard joins. Horizontal scaling is vault-level sharding across multiple MuninnDB instances behind a simple proxy.
For most use cases (up to 1M engrams), a single node handles the full load. The single-binary model starts as a monolith and scales out when needed.
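Because the vault ID is the shard key, the proxy's routing logic reduces to a stateless hash. A minimal sketch, assuming simple modulo placement (FNV-1a is an illustrative hash choice; a production proxy would likely use consistent hashing to ease resharding):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor maps a vault ID to one of n MuninnDB instances. Deterministic,
// stateless routing is sufficient because vaults never join across shards.
func shardFor(vaultID string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(vaultID))
	return h.Sum32() % n
}

func main() {
	for _, v := range []string{"vault-alice", "vault-bob"} {
		fmt.Printf("%s -> shard %d\n", v, shardFor(v, 4))
	}
}
```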