Huginn thinks. Muninn remembers.
Your AI is brilliant. Its memory is broken. MuninnDB gives it total recall — nothing deleted, the right memory always first, associations built automatically from usage patterns alone.
// Store a memory — it scores, associates, and surfaces automatically
client := muninn.NewClient("http://localhost:8475", "your-token")
client.Write(ctx, "default",
    "user prefers dark mode",
    "Always use dark theme in UI responses",
    []string{"preference", "ui"})

// Activate — finds and ranks the most relevant memories
results, _ := client.Activate(ctx, "default",
    []string{"what does the user want?"}, 5)
# Store a memory — it scores, associates, and surfaces automatically
from muninn import MuninnClient

# async with — client handles connection lifecycle
async with MuninnClient("http://localhost:8475", token="your-token") as client:
    await client.write(
        vault="default",
        concept="user prefers dark mode",
        content="Always use dark theme in UI responses",
        tags=["preference", "ui"],
    )

    # Activate — finds and ranks the most relevant memories
    results = await client.activate(
        vault="default",
        context=["what does the user want?"],
    )
The world has built AI that thinks in real-time, then stored its memories in databases designed for invoices. The mismatch is costing you.
Every AI team stitches memory together from Redis, Postgres, and a vector store. None of these were built for memory — they're storage systems wearing a costume.
A memory from six months ago ranks equal to one from six minutes ago. Vector stores return cosine similarity — not priority, not context, not recency. Your agent gets the same results whether something happened yesterday or last year.
Traditional databases are passive. You pull data when you ask. They never push data when something becomes relevant. Your AI is flying blind between queries.
The AI remembered something without being told to. The database made the connection — from usage patterns alone.
Why does the cluster keep crashing during peak hours?
Debugged the issue — identified a misconfigured pod autoscaler. Its maxReplicas setting caused OOM kills during peak load. Fixed by setting limits on memory requests.
Hey the cluster is acting up again.
Based on what we worked through Monday — this looks like it could be the pod autoscaler again. Want me to check the replica limits?
The AI remembered without being asked.
Nobody told it what belongs together.
It figured it out.
curl -fsSL https://muninndb.com/install.sh | sh
muninn init
# Your AI tools have memory in under 60 seconds.

Free to use · No account · No cloud · Full setup guide →
One command auto-configures Claude Desktop, Cursor, Windsurf, and VS Code. No API keys to generate. No config files to edit.
Single binary. No Docker, no cloud accounts, no dependencies. Brew tap coming with first release.
Detects your installed AI tools and auto-configures each one. Takes about 20 seconds.
Open Claude or Cursor and start chatting. Memory works automatically.
Building with Python or LangChain?
The MuninnDB Python SDK is on PyPI. Install it alongside your project — no Go required.
What does muninn init actually do?
It scans for Claude Desktop, Claude Code/CLI, Cursor, OpenClaw, Windsurf, Codex, and VS Code on your machine. For each one it finds, it writes the MCP server config automatically — so those tools can connect to MuninnDB and call its 19 MCP tools. Then it starts the MuninnDB server in the background. Your AI tools immediately gain persistent, searchable memory across all your conversations.
"Memory isn't storage.
It's a living system."
Your brain doesn't store memories like a hard drive. They strengthen when recalled, quiet when unused, connect to related ideas automatically, and surface unbidden when suddenly relevant. MuninnDB brings these same properties to your database — not as features, but as the foundation.
Named after Muninn — Odin's raven of memory in Norse mythology. Learn the mythology →
Memory wrappers add a cognitive layer on top of a storage system. MuninnDB starts from cognitive principles and builds storage to serve them. This is not a surface-level difference.
Memory wrappers: token cost · latency · non-deterministic · no time sense · no learning · no graph · more tokens · still a black box
MuninnDB: zero token cost · <1ms · always deterministic · pure math · <2ms · no LLM · learns on every call · BM25(0.78) + hebbian(0.16) + temporal(0.94)
Ask a memory wrapper why it returned a result.
It can't tell you. At best, an LLM generates a vague explanation. No math. No proof. No audit trail. You have to trust it.
Every MuninnDB activation returns a Why field — the exact scoring math, broken down by component. Not a description. Not a summary. The actual numbers.
BM25(0.78) + hebbian_boost(0.16)
+ temporal_priority(0.94) + assoc_depth1(0.06)
access_count=14 last_access=2h ago confidence=0.95
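The response shape isn't documented on this page, so the following is a hedged sketch: assuming the Why breakdown arrives as a mapping of component names to scores plus access metadata, a few lines can render the audit trail shown above. `format_why`, `components`, and `meta` are hypothetical names for illustration, not the SDK's API.

```python
def format_why(components: dict[str, float], meta: dict) -> str:
    """Render a Why breakdown as an auditable scoring expression.

    `components` and `meta` are assumed shapes, not the documented API:
    component name -> score contribution, plus access metadata.
    """
    expr = " + ".join(f"{name}({score:.2f})" for name, score in components.items())
    extras = " ".join(f"{k}={v}" for k, v in meta.items())
    return f"{expr}\n{extras}"

why = format_why(
    {"BM25": 0.78, "hebbian_boost": 0.16,
     "temporal_priority": 0.94, "assoc_depth1": 0.06},
    {"access_count": 14, "confidence": 0.95},
)
print(why)
# BM25(0.78) + hebbian_boost(0.16) + temporal_priority(0.94) + assoc_depth1(0.06)
# access_count=14 confidence=0.95
```

The point of the exercise: every number in the expression is reproducible arithmetic, so a log line like this is a proof, not a narrative.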
// No LLM. No guessing. Just math.

No LLM in the memory pipeline
Wrappers call an LLM to extract, categorize, and retrieve memories. Every operation costs tokens. MuninnDB's cognitive operations are pure math — zero LLM cost.
Deterministic by design
LLM-based extraction is non-deterministic. Ask the same question twice and you may get different memories. MuninnDB's ACT-R scoring is deterministic — same query, same result, always.
The database learns on every read
Memory wrappers are static once written. In MuninnDB, every ACTIVATE call is a learning event — Hebbian weights update, temporal scores improve. The database gets smarter with use.
Memory wrappers add intelligence above a database.
MuninnDB is a database where the intelligence is the engine.
Full-text search, smart priority scoring, and associative memory — all built in. No Redis. No Pinecone. No embeddings pipeline.
No model files. No LD_LIBRARY_PATH. Just a single static binary.
# Download and run — that's it
curl -fsSL https://muninndb.com/install.sh | sh
muninn start
# MCP :8750 AI tool integration ← your Claude / Cursor gets memory
# REST :8475 JSON API
# UI :8476 http://localhost:8476
Works with your AI tools today
No LLMs. No configuration. No developer code. Recency, association, and proactive triggers are the engine — not features bolted on top.
The right memory, right now — automatically.
Nothing is ever deleted. Instead, MuninnDB computes a priority score at query time based on two things: how recently a memory was accessed, and how often. Fresh, frequently-recalled memories surface first. Old, unused ones stay quiet — but are never gone. The same query run twice returns the same result every time.
Why it matters: Your agent retrieves what matters right now — not what was stored most recently or ranked highest by cosine similarity alone. Time and frequency drive priority automatically, with no manual timestamps or TTL logic.
ACT-R base-level activation:

    B(M) = ln(n + 1) − d × ln(ageDays / (n + 1))

where:
    n = AccessCount, how many times this memory was retrieved
    ageDays = days since last access (clamped to a minimum of 0.1)
    d = 0.5, the power-law decay exponent (Anderson, 1993)

Final score:

    ContentMatch × softplus(B(M) + scale × HebbianBoost) × Confidence

Deterministic — identical output across runs. No stochastic vector jitter.
ACT-R temporal priority — recent and frequently accessed memories score highest
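The scoring above can be sketched in a few lines of Python. This is an illustration of the ACT-R equation as described, not MuninnDB's engine code; the `scale` default and the age clamp are taken from the description, everything else is an assumption.

```python
import math

def base_level_activation(access_count: int, age_days: float, d: float = 0.5) -> float:
    """ACT-R base-level activation: B(M) = ln(n+1) - d * ln(ageDays / (n+1))."""
    age_days = max(age_days, 0.1)  # clamp, as described above
    n = access_count
    return math.log(n + 1) - d * math.log(age_days / (n + 1))

def softplus(x: float) -> float:
    return math.log1p(math.exp(x))

def final_score(content_match: float, access_count: int, age_days: float,
                hebbian_boost: float = 0.0, confidence: float = 1.0,
                scale: float = 1.0) -> float:
    """ContentMatch x softplus(B(M) + scale x HebbianBoost) x Confidence."""
    b = base_level_activation(access_count, age_days)
    return content_match * softplus(b + scale * hebbian_boost) * confidence

# Recency and frequency dominate: a memory touched 14 times, 2 hours ago,
# outranks one touched twice, 180 days ago, at equal content match.
fresh = final_score(content_match=0.8, access_count=14, age_days=2 / 24)
stale = final_score(content_match=0.8, access_count=2, age_days=180)
assert fresh > stale
# Deterministic: identical inputs always yield identical scores.
assert fresh == final_score(content_match=0.8, access_count=14, age_days=2 / 24)
```

Note the stale memory is never zeroed out: its score is small but positive, which is what "quiet, but never gone" means in practice.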
Related memories travel together — even across time.
When two memories are retrieved together — because both were relevant to your query — their association automatically strengthens. The more often two engrams co-activate, the stronger their bond. This is Hebbian learning — without any LLM involved. Crucially, a Hebbian link can rescue an old memory: even if temporal priority has lowered its score, a strong association with a recent memory brings it back to the surface.
Why it matters: Your agent learns what concepts belong together without you telling it. Context awareness emerges automatically from usage patterns.
Co-activation log (ring buffer, 50 entries) feeds Hebbian worker. Log-space weight update: logNew = log(w) + signal × log(1 + rate). Bidirectional — unused associations weaken symmetrically over time.
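The update rule can be sketched as pure math. Only the log-space form logNew = log(w) + signal × log(1 + rate) comes from the description above; the learning rate, starting weight, and the symmetric weakening signal are illustrative assumptions.

```python
import math

def hebbian_update(weight: float, co_activated: bool, rate: float = 0.1) -> float:
    """Log-space Hebbian update: logNew = log(w) + signal * log(1 + rate).

    signal = +1 when two engrams co-activate (the bond strengthens),
    signal = -1 when the association goes unused (it weakens symmetrically).
    Constants here are illustrative, not MuninnDB's tuned values.
    """
    signal = 1.0 if co_activated else -1.0
    log_new = math.log(weight) + signal * math.log1p(rate)
    return math.exp(log_new)

w = 0.5
for _ in range(3):                 # three co-activations strengthen the bond
    w = hebbian_update(w, co_activated=True)
assert w > 0.5
for _ in range(3):                 # three idle cycles weaken it again
    w = hebbian_update(w, co_activated=False)
assert abs(w - 0.5) < 1e-9        # symmetric in log space: back where it started
```

Working in log space keeps repeated multiplicative updates numerically stable and makes strengthening and weakening exact mirror images of each other.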
The database pushes — you don't have to pull.
Subscribe to a semantic context, and MuninnDB will push a notification to your agent the moment a matching memory becomes highly relevant. No polling. No scanning. The database watches for relevance changes and delivers results to you — like an alert system for knowledge.
Why it matters: Your agent gets critical context at the right moment — not when it happens to query. Proactive intelligence instead of reactive polling.
Triggers evaluated against active engrams after decay/Hebbian cycles. Semantic matching via embedding cosine similarity or BM25 FTS. Push via WebSocket, SSE, or callback. Rate-limited per vault.
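A toy version of the push pattern, assuming a subscription holds a context string, a threshold, and a callback. Real matching uses embedding cosine similarity or BM25 as noted above; this sketch substitutes plain token overlap just to show the database-calls-you shape.

```python
from dataclasses import dataclass
from typing import Callable

def similarity(a: str, b: str) -> float:
    """Toy relevance score: token overlap (stand-in for cosine/BM25)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

@dataclass
class Trigger:
    context: str
    threshold: float
    callback: Callable[[str, float], None]

def evaluate_triggers(triggers: list[Trigger], engram: str) -> None:
    """Push model: the store invokes YOUR code when relevance crosses the bar."""
    for t in triggers:
        score = similarity(t.context, engram)
        if score >= t.threshold:
            t.callback(engram, score)

fired: list[str] = []
triggers = [Trigger("cluster crash autoscaler", 0.2,
                    lambda engram, score: fired.append(engram))]
evaluate_triggers(triggers, "pod autoscaler misconfigured cluster crash during peak")
assert fired  # the subscriber was notified without ever polling
```

The inversion of control is the whole idea: the agent registers interest once, then does nothing until relevance arrives.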
Every AI application eventually needs persistent memory. The question is whether you want to manage it manually or let the database handle the cognitive work.
Stop stitching together Redis, Postgres, and a vector store. MuninnDB gives your agent one endpoint that stores, recalls, scores, and associates — automatically. Total recall with temporal priority: nothing is lost, and the right memory surfaces first.
Every activation includes a "Why" score showing exactly which memories surfaced and why. Confidence is tracked. Contradictions are detected. You can edit, correct, or archive any memory at any time. Compliance-ready from day one.
Every cognitive primitive is exposed as plain math. No black boxes. Watch Hebbian weights evolve. Observe decay curves. Subscribe to confidence updates. MuninnDB is a laboratory for studying how artificial cognitive memory behaves at scale.
These aren't missing features. They didn't exist in any database before MuninnDB — because databases were never designed for cognitive memory.
| Capability | PostgreSQL | Redis | Pinecone | Neo4j | Mem0 / MemGPT / Zep / Letta (wrappers) | MuninnDB |
|---|---|---|---|---|---|---|
| Temporal priority scoring (ACT-R) | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Hebbian auto-learning | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Learns from every query (no LLM) | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Evolves on read — mathematically | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Semantic push triggers | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Bayesian confidence | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Explainable Why field (pure math) | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Full-text search (BM25) | ✓ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Vector / semantic search | ✗ | ✗ | ✓ | ✗ | ✓ | ✓ |
| Graph traversal | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ |
| Zero external dependencies | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ |
| Single binary deploy | ✗ | ✓ | ✗ | ✗ | ✗ | ✓ |
Comparisons reflect core native capabilities. Memory wrappers (Mem0, MemGPT, Zep, Letta) use LLMs for extraction and vector stores for retrieval — no engine-level cognitive primitives.
A single Activate() call does what you'd otherwise need five separate systems to do — and the database gets smarter with every query, without any application code.
What a single Activate() call replaces in your stack
| Tier | Engrams | Disk | Deployment |
|---|---|---|---|
| Personal | 10K | 17–40 MB | Single binary |
| Power User | 100K | 170–400 MB | Single binary |
| Team | 1M | 1.7–4 GB | Single node |
| Enterprise | 100M+ | 170–400 GB | Sharded cluster |
No Docker required. No cloud accounts. No dependencies to install. Just download and run.
curl -fsSL https://muninndb.com/install.sh | sh
# brew install scrypster/tap/muninn ← coming with first release

# Guided setup — connects Claude Desktop, Cursor, VS Code, Windsurf
muninn init
# [1/3] Which AI tools would you like to connect?
# ✓ Claude Desktop ✓ Cursor ✓ Windsurf
# [2/3] Secure your MCP endpoint with a bearer token? [Y/n] Y
# [3/3] Start MuninnDB now? [Y/n] Y
#
# muninn started (pid 12345)
# MBP :8474 binary protocol
# REST :8475 JSON API
# gRPC :8477 gRPC API
# MCP :8750 AI tool integration
# UI :8476 http://localhost:8476
#
# Claude Desktop → configured ✓
# Cursor → configured ✓

Where we are
The databases people build AI on in 2027 will all look like this. The people running MuninnDB today are defining what cognitive memory infrastructure looks like before anyone else.
There is no benchmark for cognitive memory databases yet — because until now, there were no cognitive memory databases. The builders here today will write the benchmarks everyone else chases.
github.com/scrypster/muninndb →

ACT-R (Anderson, 1993). Hebbian learning. Bayesian confidence gating. Not a startup's blog-post heuristic — 30 years of peer-reviewed cognitive science turned into a storage engine.
Read the science →

The full engine source is on GitHub. Free for individuals, small teams, and internal use — no cloud account, no usage billing, no vendor lock-in. You own your data and your memory layer. Becomes Apache 2.0 in 2030.
License details →

The agents people build in 2027 will have memory like this.
One command. Nothing to configure. Your AI tools have persistent, cognitive memory in under 60 seconds.
curl -fsSL https://muninndb.com/install.sh | sh
muninn init
# Your AI tools now have memory.

Free to use · Source available · No account required