MuninnDB
Architecture

How MuninnDB works

Three ideas from cognitive neuroscience, implemented as first-class database primitives. No LLM required — the math runs on every read and write.

The fundamental unit: the Engram

In neuroscience, an engram is the physical trace a memory leaves in the brain. In MuninnDB, it's the unit of storage — think of it as a "row" that knows how relevant it is, how confident we are in it, and which other engrams it's related to.

Each engram carries three cognitive fields:

Confidence: 0–1 Bayesian posterior
Relevance: current decay score
Associations: weighted edges to other engrams
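The fields above can be sketched as a Go struct. This is illustrative only — field names and types here are assumptions, not MuninnDB's actual ERF v1 record layout:

```go
package main

import "fmt"

// Engram is a sketch of the storage unit described above. Field names
// are illustrative, not MuninnDB's on-disk record format.
type Engram struct {
	ID           string
	Content      string
	Confidence   float64            // 0–1 Bayesian posterior
	Relevance    float64            // current decay score
	Associations map[string]float64 // weighted edges to other engram IDs
}

func main() {
	e := Engram{
		ID:         "e42",
		Content:    "user prefers dark mode",
		Confidence: 0.92,
		Relevance:  0.61,
		Associations: map[string]float64{
			"e17": 0.35, // weight strengthened by co-activation
		},
	}
	fmt.Println(e.ID, e.Confidence)
}
```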
Cognitive Primitives

Three cognitive primitives. Built into the storage engine.

No LLMs. No configuration. No developer code. Recency, association, and proactive triggers are the engine — not features bolted on top.

01

Recency

The right memory, right now — automatically.

Nothing is ever deleted. Instead, MuninnDB computes a priority score at query time based on two things: how recently a memory was accessed, and how often. Fresh, frequently-recalled memories surface first. Old, unused ones stay quiet — but are never gone. The same query run twice returns the same result every time.

Why it matters: Your agent retrieves what matters right now — not what was stored most recently or ranked highest by cosine similarity alone. Time and frequency drive priority automatically, with no manual timestamps or TTL logic.

ACT-R Cognitive Model (Anderson, 1993) →
Technical Detail

ACT-R base-level activation:

B(M) = ln(n + 1) − d × ln(ageDays / (n + 1))

n = AccessCount: how many times this engram was retrieved
ageDays: days since last access (minimum 0.1)
d = 0.5: power-law decay exponent (Anderson, 1993)

Final score: ContentMatch × softplus(B(M) + scale × HebbianBoost) × Confidence

Deterministic — identical output across runs. No stochastic vector jitter.

ACT-R temporal priority — recent and frequently accessed memories score highest
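The scoring math above fits in a few lines of Go. This is a sketch of the stated formulas, not MuninnDB's API; the function names and the free `scale` parameter are illustrative:

```go
package main

import (
	"fmt"
	"math"
)

// baseLevelActivation implements the documented ACT-R formula:
// B(M) = ln(n+1) - d * ln(ageDays / (n+1)), with d = 0.5.
func baseLevelActivation(accessCount int, ageDays float64) float64 {
	const d = 0.5
	n := float64(accessCount)
	if ageDays < 0.1 {
		ageDays = 0.1 // floor from the spec: minimum 0.1 days
	}
	return math.Log(n+1) - d*math.Log(ageDays/(n+1))
}

// softplus is a smooth, always-positive ramp: ln(1 + e^x).
func softplus(x float64) float64 {
	return math.Log1p(math.Exp(x))
}

// priority mirrors the final score:
// ContentMatch × softplus(B(M) + scale × HebbianBoost) × Confidence.
func priority(contentMatch, b, scale, hebbianBoost, confidence float64) float64 {
	return contentMatch * softplus(b+scale*hebbianBoost) * confidence
}

func main() {
	fresh := baseLevelActivation(10, 0.5) // 10 accesses, half a day old
	stale := baseLevelActivation(3, 30)   // 3 accesses, a month old
	fmt.Printf("fresh: %.3f  stale: %.3f\n", fresh, stale) // fresh scores higher
}
```

Because everything is a pure function of access count and age, the same query at the same moment always returns the same ranking — the determinism claimed above.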

02

Hebbian Learning

Related memories travel together — even across time.

When two memories are retrieved together — because both were relevant to your query — their association automatically strengthens. The more often two engrams co-activate, the stronger their bond. This is Hebbian learning — without any LLM involved. Crucially, a Hebbian link can rescue an old memory: even if temporal priority has lowered its score, a strong association with a recent memory brings it back to the surface.

Why it matters: Your agent learns what concepts belong together without you telling it. Context awareness emerges automatically from usage patterns.

Hebbian Theory →
Technical Detail

Co-activation log (ring buffer, 50 entries) feeds Hebbian worker. Log-space weight update: logNew = log(w) + signal × log(1 + rate). Bidirectional — unused associations weaken symmetrically over time.
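The log-space rule above is a symmetric multiplicative step: with signal = +1 for co-activation and −1 for an unused cycle, a strengthening is exactly undone by one weakening. A minimal sketch (function name and signal convention are assumptions, not MuninnDB's API):

```go
package main

import (
	"fmt"
	"math"
)

// hebbianUpdate applies the documented log-space rule:
// logNew = log(w) + signal * log(1 + rate).
// Equivalently, w_new = w * (1 + rate)^signal, so a +1 step and a
// -1 step at the same rate cancel — the symmetric weakening above.
func hebbianUpdate(w, signal, rate float64) float64 {
	logNew := math.Log(w) + signal*math.Log(1+rate)
	return math.Exp(logNew)
}

func main() {
	w := 0.10
	w = hebbianUpdate(w, +1, 0.05) // co-activation: grows by factor 1.05
	w = hebbianUpdate(w, -1, 0.05) // unused cycle: shrinks back
	fmt.Printf("w = %.4f\n", w)
}
```

Working in log space keeps weights strictly positive and makes repeated updates numerically stable — compounding becomes addition.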

Neural co-activation network visualization

03

Semantic Triggers

The database pushes — you don't have to pull.

Subscribe to a semantic context, and MuninnDB will push a notification to your agent the moment a matching memory becomes highly relevant. No polling. No scanning. The database watches for relevance changes and delivers results to you — like an alert system for knowledge.

Why it matters: Your agent gets critical context at the right moment — not when it happens to query. Proactive intelligence instead of reactive polling.

Trigger Documentation →
Technical Detail

Triggers evaluated against active engrams after decay/Hebbian cycles. Semantic matching via embedding cosine similarity or BM25 FTS. Push via WebSocket, SSE, or callback. Rate-limited per vault.
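Stripped of transport details, a trigger is a threshold check run against active engrams after each cognitive cycle. The types below are hypothetical, not MuninnDB's API — a sketch of the cosine-similarity matching path:

```go
package main

import (
	"fmt"
	"math"
)

// cosine computes cosine similarity between two embedding vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// Trigger fires its callback when an engram's embedding crosses the
// similarity threshold for the subscribed context. In MuninnDB the
// Notify path would be a WebSocket, SSE stream, or callback.
type Trigger struct {
	Context   []float64 // embedding of the subscribed context
	Threshold float64
	Notify    func(engramID string)
}

// Evaluate is the per-engram check a post-decay/Hebbian cycle would run.
func (t *Trigger) Evaluate(engramID string, embedding []float64) {
	if cosine(t.Context, embedding) >= t.Threshold {
		t.Notify(engramID)
	}
}

func main() {
	fired := []string{}
	trig := &Trigger{
		Context:   []float64{1, 0, 0},
		Threshold: 0.9,
		Notify:    func(id string) { fired = append(fired, id) },
	}
	trig.Evaluate("e1", []float64{0.99, 0.1, 0}) // similar: fires
	trig.Evaluate("e2", []float64{0, 1, 0})      // orthogonal: silent
	fmt.Println(fired)
}
```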

Semantic push trigger visualization

The ACTIVATE pipeline — 6 phases, one call

When you call ACTIVATE with a context string, MuninnDB runs a parallel 6-phase pipeline to find the most cognitively relevant engrams — and updates Hebbian weights so the next query is smarter than the last.

Phase 01

Embed + Tokenize

Convert your context string to embeddings and BM25 tokens simultaneously.

Phase 02

Parallel Retrieval

3 goroutines: FTS candidates (BM25), vector candidates (HNSW cosine), decay-filtered pool.

Phase 03

RRF Fusion

Reciprocal Rank Fusion merges the three result lists into one coherent ranking.

Phase 04

Hebbian Boost

Co-activation weights increase scores for engrams frequently retrieved together.

Phase 05

Graph Traversal

BFS graph walk (depth 2) surfaces associated engrams with hop penalty scoring.

Phase 06

Score + Why

Final composite score, Why builder (explains each result), streaming response.
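Phase 03's Reciprocal Rank Fusion is worth seeing concretely: each candidate earns 1/(k + rank) from every list it appears in, so engrams that rank well across BM25, HNSW, and the decay pool rise to the top. A sketch — k = 60 is the conventional RRF constant, an assumption here, not a documented MuninnDB value:

```go
package main

import (
	"fmt"
	"sort"
)

// rrfFuse merges ranked candidate lists with Reciprocal Rank Fusion:
// score(d) = sum over lists of 1 / (k + rank(d)), ranks starting at 1.
func rrfFuse(lists [][]string, k float64) []string {
	scores := map[string]float64{}
	for _, list := range lists {
		for rank, id := range list {
			scores[id] += 1.0 / (k + float64(rank+1))
		}
	}
	ids := make([]string, 0, len(scores))
	for id := range scores {
		ids = append(ids, id)
	}
	sort.Slice(ids, func(i, j int) bool { return scores[ids[i]] > scores[ids[j]] })
	return ids
}

func main() {
	fts := []string{"a", "b", "c"} // BM25 ranking
	vec := []string{"b", "a", "d"} // HNSW cosine ranking
	pool := []string{"b", "c"}     // decay-filtered pool
	// "b" appears in all three lists, so it ranks first after fusion.
	fmt.Println(rrfFuse([][]string{fts, vec, pool}, 60))
}
```

RRF needs no score normalization across the three retrievers — only ranks — which is why it fuses BM25 scores, cosine distances, and decay scores cleanly.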

System layers

Consumer Layer: Claude, Cursor, custom agents
Interface Layer: MBP (TCP 8474) · REST (8475) · gRPC (8477) · MCP (8750)
Plugin Layer: Embed Plugin (HNSW vectors) · Enrich Plugin (LLM)
Core Engine: Activation engine · Cognitive workers · Semantic triggers
Index Layer: Inverted index (BM25 FTS) · HNSW · Adjacency graph
Storage Layer: ERF v1 (Engram Record Format) · Hot L1 cache · Warm disk · Cold archive
Pebble KV (LSM): Embedded Go key-value store · MVCC · Crash-safe

Full architecture docs →

The agents people will build in 2027 will have memory like this.

You can start today.

One command. Nothing to configure. Your AI tools have persistent, cognitive memory in under 60 seconds.

terminal
curl -fsSL https://muninndb.com/install.sh | sh
muninn init
# Your AI tools now have memory.

Free to use · Source available · No account required