How MuninnDB works
Three ideas from cognitive neuroscience, implemented as first-class database primitives. No LLM required — the math runs on every read and write.
The fundamental unit: the Engram
In neuroscience, an "engram" is a physical memory trace in the brain. In MuninnDB, it's the unit of storage: think of it as a "row" that knows how relevant it is, how confident we are in it, and which other engrams it's related to.
Three cognitive primitives.
Built into the storage engine.
No LLMs. No configuration. No developer code. Recency, association, and proactive triggers are the engine — not features bolted on top.
Recency
The right memory, right now — automatically.
Nothing is ever deleted. Instead, MuninnDB computes a priority score at query time based on two things: how recently a memory was accessed, and how often. Fresh, frequently-recalled memories surface first. Old, unused ones stay quiet, but are never gone. Scoring is deterministic: the same query returns the same results every time.
Why it matters: Your agent retrieves what matters right now — not what was stored most recently or ranked highest by cosine similarity alone. Time and frequency drive priority automatically, with no manual timestamps or TTL logic.
ACT-R base-level activation:

B(M) = ln(n + 1) − d × ln(ageDays / (n + 1))

where:
n = AccessCount: how many times this engram was retrieved
ageDays = days since last access (minimum 0.1)
d = 0.5: power-law decay exponent (Anderson 1993)

Final score:

ContentMatch × softplus(B(M) + scale × HebbianBoost) × Confidence

Deterministic: identical output across runs, with no stochastic vector jitter.
ACT-R temporal priority — recent and frequently accessed memories score highest
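The scoring formula can be sketched in a few lines of Python. The equations follow the spec as written; the `scale` default and the example access counts are illustrative assumptions, not MuninnDB's actual values.

```python
import math

def base_level_activation(access_count: int, age_days: float, d: float = 0.5) -> float:
    """ACT-R base-level activation: B(M) = ln(n + 1) - d * ln(ageDays / (n + 1))."""
    age_days = max(age_days, 0.1)  # floor from the spec
    n = access_count
    return math.log(n + 1) - d * math.log(age_days / (n + 1))

def softplus(x: float) -> float:
    return math.log1p(math.exp(x))

def final_score(content_match: float, b: float, hebbian_boost: float,
                confidence: float, scale: float = 1.0) -> float:
    """ContentMatch * softplus(B(M) + scale * HebbianBoost) * Confidence.
    scale=1.0 is an illustrative assumption, not the engine's constant."""
    return content_match * softplus(b + scale * hebbian_boost) * confidence

# A memory accessed 10 times, last seen 2 days ago, outranks one
# accessed once 30 days ago, even with identical content match.
fresh = final_score(0.8, base_level_activation(10, 2.0), 0.0, 1.0)
stale = final_score(0.8, base_level_activation(1, 30.0), 0.0, 1.0)
```

Because every input is a plain count or timestamp, the score is fully deterministic, in line with the "identical output across runs" claim.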
Hebbian Learning
Related memories travel together — even across time.
When two memories are retrieved together — because both were relevant to your query — their association automatically strengthens. The more often two engrams co-activate, the stronger their bond. This is Hebbian learning — without any LLM involved. Crucially, a Hebbian link can rescue an old memory: even if temporal priority has lowered its score, a strong association with a recent memory brings it back to the surface.
Why it matters: Your agent learns what concepts belong together without you telling it. Context awareness emerges automatically from usage patterns.
A co-activation log (a 50-entry ring buffer) feeds the Hebbian worker. Weights are updated in log space: logNew = log(w) + signal × log(1 + rate). Updates are bidirectional: unused associations weaken symmetrically over time.
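A minimal sketch of the log-space update, assuming a signal of +1 when two engrams co-activate and −1 when an association goes unused (which produces the symmetric strengthen/weaken behaviour described above); the `rate` value is illustrative.

```python
import math

def hebbian_update(weight: float, signal: float, rate: float = 0.1) -> float:
    """Log-space weight update from the spec:
    logNew = log(w) + signal * log(1 + rate).
    Assumption for illustration: signal = +1 on co-activation,
    -1 on disuse, giving symmetric strengthening and weakening."""
    log_new = math.log(weight) + signal * math.log(1 + rate)
    return math.exp(log_new)

w = 1.0
w = hebbian_update(w, +1)   # co-activation strengthens: w * (1 + rate)
w = hebbian_update(w, -1)   # disuse weakens symmetrically, back toward 1.0
```

Working in log space makes the update multiplicative: one strengthening step and one weakening step at the same rate cancel exactly, which is what "weaken symmetrically" implies.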
Semantic Triggers
The database pushes — you don't have to pull.
Subscribe to a semantic context, and MuninnDB will push a notification to your agent the moment a matching memory becomes highly relevant. No polling. No scanning. The database watches for relevance changes and delivers results to you — like an alert system for knowledge.
Why it matters: Your agent gets critical context at the right moment — not when it happens to query. Proactive intelligence instead of reactive polling.
Triggers are evaluated against active engrams after each decay/Hebbian cycle. Semantic matching uses embedding cosine similarity or BM25 full-text search. Notifications are pushed via WebSocket, SSE, or callback, rate-limited per vault.
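The evaluation step can be sketched as follows, using the cosine-similarity matching path. The subscription shape, the tiny 2-d vectors, and the 0.85 threshold are illustrative assumptions, not MuninnDB's API or defaults.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def evaluate_triggers(subscriptions, engrams, threshold=0.85):
    """After a decay/Hebbian cycle, compare each active engram's embedding
    against every subscribed context and collect matches to push.
    threshold=0.85 is an illustrative assumption."""
    matches = []
    for sub_id, context_vec in subscriptions.items():
        for engram_id, vec in engrams.items():
            if cosine(context_vec, vec) >= threshold:
                matches.append((sub_id, engram_id))
    return matches

# One subscription, two engrams: only the semantically close one fires.
matches = evaluate_triggers({"deploys": [1.0, 0.0]},
                            {"e1": [0.99, 0.1], "e2": [0.0, 1.0]})
```

Each match would then be handed to the delivery layer (WebSocket, SSE, or callback) rather than waiting for the agent to query.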
The ACTIVATE pipeline — 6 phases, one call
When you call ACTIVATE with a context string, MuninnDB runs a parallel 6-phase pipeline to find the most cognitively relevant engrams — and updates Hebbian weights so the next query is smarter than the last.
Embed + Tokenize
Convert your context string to embeddings and BM25 tokens simultaneously.
Parallel Retrieval
3 goroutines: FTS candidates (BM25), vector candidates (HNSW cosine), decay-filtered pool.
RRF Fusion
Reciprocal Rank Fusion merges the three result lists into one coherent ranking.
Hebbian Boost
Co-activation weights increase scores for engrams frequently retrieved together.
Graph Traversal
BFS graph walk (depth 2) surfaces associated engrams with hop penalty scoring.
Score + Why
Final composite score, Why builder (explains each result), streaming response.
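Phase 3's Reciprocal Rank Fusion is small enough to sketch in full. The k = 60 constant comes from the original RRF formulation; MuninnDB's actual constant isn't documented here, and the three candidate lists are made-up examples standing in for the FTS, vector, and decay-filtered pools.

```python
def rrf_fuse(ranked_lists, k=60):
    """Reciprocal Rank Fusion: each list contributes 1 / (k + rank) for
    every item it ranks; scores are summed across lists and the merged
    ranking is sorted by total score, descending."""
    scores = {}
    for ranking in ranked_lists:
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fts    = ["e3", "e1", "e7"]   # BM25 candidates
vector = ["e1", "e3", "e9"]   # HNSW cosine candidates
decay  = ["e1", "e9", "e3"]   # decay-filtered pool
fused = rrf_fuse([fts, vector, decay])
```

An engram ranked highly by two lists (e1 here) beats one ranked highly by only one, which is why RRF produces a coherent merge without score normalization across the three retrieval paths.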
System layers
The agents people will build in 2027 will have memory like this.
You can start today.
One command. Nothing to configure. Your AI tools have persistent, cognitive memory in under 60 seconds.
curl -fsSL https://muninndb.com/install.sh | sh
muninn init
# Your AI tools now have memory.
Free to use · Source available · No account required