MuninnDB
Cognitive Memory Database

Every database stores data.
MuninnDB remembers it.

Huginn thinks. Muninn remembers.

Your AI is brilliant. Its memory is broken. MuninnDB gives it total recall — nothing deleted, the right memory always first, associations built automatically from usage patterns alone.

Claude Desktop · Cursor · Windsurf · OpenClaw · VS Code · Any MCP tool
SDK preview · not a developer? skip ↓
quickstart.go
// Store a memory — it scores, associates, and surfaces automatically
client := muninn.NewClient("http://localhost:8475", "your-token")

client.Write(ctx, "default",
  "user prefers dark mode",
  "Always use dark theme in UI responses",
  []string{"preference", "ui"})

// Activate — finds and ranks the most relevant memories
results, _ := client.Activate(ctx, "default",
  []string{"what does the user want?"}, 5)
The Problem

Your AI deserves better than repurposed storage

The world has built AI that thinks in real time, then stored its memories in databases designed for invoices. The mismatch is costing you.

🧩
3+ systems

Bolted-on memory

Every AI team stitches memory together from Redis, Postgres, and a vector store. None of these were built for memory — they're storage systems wearing a costume.

⏱️
No temporal sense

Memory with no sense of time

A memory from six months ago ranks the same as one from six minutes ago. Vector stores return cosine similarity — not priority, not context, not recency. Your agent gets the same results whether something happened yesterday or last year.

🔇
Polling only

Databases that just sit there

Traditional databases are passive. You pull data when you ask. They never push data when something becomes relevant. Your AI is flying blind between queries.

This is what emergent memory looks like

The AI remembered something without being told to. The database made the connection — from usage patterns alone.

Monday — 9:14 AM

Why does the cluster keep crashing during peak hours?

Debugged the issue — a misconfigured pod autoscaler. maxReplicas was set too high, causing OOM during peak. Fixed by setting memory limits.

muninn.Store(ctx, &StoreRequest{
    Concept: "kubernetes pod autoscaler crash",
    Content: "Misconfigured maxReplicas...",
    Tags:    []string{"kubernetes", "infrastructure"},
})
Friday — 3:47 PM

Hey the cluster is acting up again.

Based on what we worked through Monday — this looks like it could be the pod autoscaler again. Want me to check the replica limits?

The AI remembered without being asked.

// MuninnDB surfaced this automatically
// No query written. No keywords. Just relevance.

Nobody told it what belongs together.

It figured it out.

try it now — one command
curl -fsSL https://muninndb.com/install.sh | sh
muninn init
# Your AI tools have memory in under 60 seconds.

Free to use · No account · No cloud · Full setup guide →

No code required

Connect your AI tools in 20 seconds

One command auto-configures Claude Desktop, Cursor, Windsurf, and VS Code. No API keys to generate. No config files to edit.

🧠 Claude Desktop
Cursor
🌊 Windsurf
🦀 OpenClaw
🔵 VS Code
🔗 LangChain
🔌 Any MCP tool
1
Install
$ curl -fsSL https://muninndb.com/install.sh | sh

Single binary. No Docker, no cloud accounts, no dependencies. Brew tap coming with first release.

2
Run the wizard
$ muninn init

Detects your installed AI tools and auto-configures each one. Takes about 20 seconds.

3
Your AI remembers
Claude Desktop configured
Cursor configured
MuninnDB started

Open Claude or Cursor and start chatting. Memory works automatically.

🐍

Building with Python or LangChain?

The MuninnDB Python SDK is on PyPI. Install it alongside your project — no Go required.

$ pip install muninndb
$ pip install "muninndb[langchain]"   # quoted — zsh treats bare [ ] as globs

What does muninn init actually do?

It scans for Claude Desktop, Claude Code/CLI, Cursor, OpenClaw, Windsurf, Codex, and VS Code on your machine. For each one it finds, it writes the MCP server config automatically — so those tools can connect to MuninnDB and call its 19 MCP tools. Then it starts the MuninnDB server in the background. Your AI tools immediately gain persistent, searchable memory across all your conversations.

"Memory isn't storage.
It's a living system."

Your brain doesn't store memories like a hard drive. They strengthen when recalled, quiet when unused, connect to related ideas automatically, and surface unbidden when suddenly relevant. MuninnDB brings these same properties to your database — not as features, but as the foundation.

Named after Muninn — Odin's raven of memory in Norse mythology. Learn the mythology →

The Architecture Difference

Not a wrapper. A database.

Memory wrappers add a cognitive layer on top of a storage system. MuninnDB starts from cognitive principles and builds storage to serve them. This is not a surface-level difference.

Memory Wrapper Mem0 · MemGPT · Zep · Letta
1
Your code
2
LLM call — "What's worth remembering?"

token cost · latency · non-deterministic

3
Vector store — Cosine similarity only

no time sense · no learning · no graph

4
LLM call — "Why is this relevant?"

more tokens · still a black box

5
Your code — Result (probably)
MuninnDB cognitive database
1
Your code
2
Store engram — No LLM — instant

zero token cost · <1ms · always deterministic

3
ACTIVATE — ACT-R + Hebbian + BM25 + graph

pure math · <2ms · no LLM · learns on every call

4
Result + Why field — Mathematical proof

BM25(0.78) + hebbian(0.16) + temporal(0.94)

5
Your code — Hebbian weights updated silently

Ask a memory wrapper why it returned a result.

It can't tell you. At best, an LLM generates a vague explanation. No math. No proof. No audit trail. You have to trust it.

Every MuninnDB activation returns a Why field — the exact scoring math, broken down by component. Not a description. Not a summary. The actual numbers.

result.Why — on every activation, every time
BM25(0.78) + hebbian_boost(0.16)
    + temporal_priority(0.94) + assoc_depth1(0.06)

access_count=14  last_access=2h ago  confidence=0.95

// No LLM. No guessing. Just math.

No LLM in the memory pipeline

Wrappers call an LLM to extract, categorize, and retrieve memories. Every operation costs tokens. MuninnDB's cognitive operations are pure math — zero LLM cost.

🎯

Deterministic by design

LLM-based extraction is non-deterministic. Ask the same question twice and you may get different memories. MuninnDB's ACT-R scoring is deterministic — same query, same result, always.

🔄

The database learns on every read

Memory wrappers are static once written. In MuninnDB, every ACTIVATE call is a learning event — Hebbian weights update, temporal scores improve. The database gets smarter with use.

Memory wrappers add intelligence above a database.
MuninnDB is a database where the intelligence is the engine.

Zero Barrier

One binary. No dependencies. Running in 60 seconds.

Full-text search, smart priority scoring, and associative memory — all built in. No Redis. No Pinecone. No embeddings pipeline. No model files. No LD_LIBRARY_PATH. Just a single static binary.

Redis
Pinecone
Postgres
ONNX runtime
Model files
Vector index
terminal
# Download and run — that's it
curl -fsSL https://muninndb.com/install.sh | sh
muninn start

# MCP  :8750   AI tool integration  ← your Claude / Cursor gets memory
# REST :8475   JSON API
# UI   :8476   http://localhost:8476

Works with your AI tools today

Claude Desktop · Cursor · Windsurf · VS Code · Any MCP tool
Cognitive Primitives

Three cognitive primitives. Built into the storage engine.

No LLMs. No configuration. No developer code. Recency, association, and proactive triggers are the engine — not features bolted on top.

01

Recency

The right memory, right now — automatically.

Nothing is ever deleted. Instead, MuninnDB computes a priority score at query time based on two things: how recently a memory was accessed, and how often. Fresh, frequently-recalled memories surface first. Old, unused ones stay quiet — but are never gone. The same query run twice returns the same result every time.

Why it matters: Your agent retrieves what matters right now — not what was stored most recently or ranked highest by cosine similarity alone. Time and frequency drive priority automatically, with no manual timestamps or TTL logic.

ACT-R Cognitive Model (Anderson, 1993) →
Technical Detail

ACT-R base-level activation:

    B(M) = ln(n + 1) − d × ln(ageDays / (n + 1))

    n       = AccessCount — how many times this was retrieved
    ageDays = days since last access (min 0.1)
    d       = 0.5 — power-law exponent (Anderson, 1993)

Final score:

    ContentMatch × softplus(B(M) + scale × HebbianBoost) × Confidence

Deterministic — identical output across runs. No stochastic vector jitter.
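The base-level activation formula above can be sketched in a few lines of Go. This is an illustrative implementation of the published ACT-R equation, not MuninnDB's internal code; the function and variable names here are our own.

```go
package main

import (
	"fmt"
	"math"
)

// baseLevelActivation sketches B(M) = ln(n+1) − d × ln(ageDays / (n+1))
// with d = 0.5, as described above. Illustrative only.
func baseLevelActivation(accessCount int, ageDays float64) float64 {
	const d = 0.5
	if ageDays < 0.1 {
		ageDays = 0.1 // clamp, per the "min 0.1" note
	}
	n := float64(accessCount)
	return math.Log(n+1) - d*math.Log(ageDays/(n+1))
}

func main() {
	// A fresh, frequently recalled memory outscores a stale, rarely used one.
	fmt.Printf("fresh (10 accesses, 0.5 days old): %.2f\n", baseLevelActivation(10, 0.5))
	fmt.Printf("stale (3 accesses, 180 days old):  %.2f\n", baseLevelActivation(3, 180))
}
```

Because the formula is pure math over an access count and an age, the same inputs always produce the same score — which is where the determinism claim comes from.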


ACT-R temporal priority — recent and frequently accessed memories score highest

02

Hebbian Learning

Related memories travel together — even across time.

When two memories are retrieved together — because both were relevant to your query — their association automatically strengthens. The more often two engrams co-activate, the stronger their bond. This is Hebbian learning — without any LLM involved. Crucially, a Hebbian link can rescue an old memory: even if temporal priority has lowered its score, a strong association with a recent memory brings it back to the surface.

Why it matters: Your agent learns what concepts belong together without you telling it. Context awareness emerges automatically from usage patterns.

Hebbian Theory →
Technical Detail

Co-activation log (ring buffer, 50 entries) feeds Hebbian worker. Log-space weight update: logNew = log(w) + signal × log(1 + rate). Bidirectional — unused associations weaken symmetrically over time.
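The log-space update rule quoted above can be sketched directly. Note that it is algebraically equivalent to w × (1 + rate)^signal, which is why strengthening and weakening are symmetric — this is a sketch of the stated rule, not MuninnDB's exact worker code.

```go
package main

import (
	"fmt"
	"math"
)

// hebbianUpdate applies logNew = log(w) + signal × log(1 + rate).
// signal = +1 on co-activation (strengthen), −1 when unused (weaken).
func hebbianUpdate(w, rate, signal float64) float64 {
	return math.Exp(math.Log(w) + signal*math.Log(1+rate))
}

func main() {
	w := 0.10
	w = hebbianUpdate(w, 0.05, +1) // two co-activations strengthen the link
	w = hebbianUpdate(w, 0.05, +1)
	fmt.Printf("after strengthening: %.4f\n", w)
	w = hebbianUpdate(w, 0.05, -1) // one decay cycle weakens it symmetrically
	fmt.Printf("after one decay:     %.4f\n", w)
}
```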

Neural co-activation network visualization
03

Semantic Triggers

The database pushes — you don't have to pull.

Subscribe to a semantic context, and MuninnDB will push a notification to your agent the moment a matching memory becomes highly relevant. No polling. No scanning. The database watches for relevance changes and delivers results to you — like an alert system for knowledge.

Why it matters: Your agent gets critical context at the right moment — not when it happens to query. Proactive intelligence instead of reactive polling.

Trigger Documentation →
Technical Detail

Triggers evaluated against active engrams after decay/Hebbian cycles. Semantic matching via embedding cosine similarity or BM25 FTS. Push via WebSocket, SSE, or callback. Rate-limited per vault.
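The semantic-match half of a trigger evaluation reduces to a similarity check against a threshold. The sketch below shows the cosine-similarity path; the function names and the 0.80 threshold are our illustrative assumptions, and MuninnDB's evaluator also supports BM25 matching.

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// shouldFire decides whether a subscribed context matches an engram
// closely enough to push a notification. Illustrative sketch only.
func shouldFire(contextVec, engramVec []float64, threshold float64) bool {
	return cosine(contextVec, engramVec) >= threshold
}

func main() {
	subscribed := []float64{0.9, 0.1, 0.0}
	hit := []float64{0.8, 0.2, 0.1}  // points the same way → fires
	miss := []float64{0.0, 0.1, 0.9} // unrelated → stays quiet
	fmt.Println(shouldFire(subscribed, hit, 0.80), shouldFire(subscribed, miss, 0.80))
}
```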

Semantic push trigger visualization
Use Cases

Built for the age of AI

Every AI application eventually needs persistent memory. The question is whether you want to manage it manually or let the database handle the cognitive work.

🤖
AI Agent Developers

Give your agent a real memory

Stop stitching together Redis, Postgres, and a vector store. MuninnDB gives your agent one endpoint that stores, recalls, scores, and associates — automatically. Total recall with temporal priority: nothing is lost, and the right memory surfaces first.

  • Persistent cross-session context
  • Right memory, right moment — automatic
  • Push triggers for proactive recall
See the MCP integration →
🏢
Enterprise AI Teams

Memory you can audit and correct

Every activation includes a "Why" score showing exactly which memories surfaced and why. Confidence is tracked. Contradictions are detected. You can edit, correct, or archive any memory at any time. Compliance-ready from day one.

  • Explainable activation scores
  • Editable confidence levels
  • Full memory lifecycle audit
Read the architecture →
🔬
Researchers

Study emergent memory behavior

Every cognitive primitive is exposed as plain math. No black boxes. Watch Hebbian weights evolve. Observe decay curves. Subscribe to confidence updates. MuninnDB is a laboratory for studying how artificial cognitive memory behaves at scale.

  • All math exposed and observable
  • Prometheus metrics for every worker
  • Hackable plugin architecture
Explore the internals →
Comparison

Nothing else does this

These aren't missing features. These features didn't exist in a database before MuninnDB — because they were never designed for cognitive memory.

Capability · PostgreSQL · Redis · Pinecone · Neo4j · Mem0 / Wrappers (Mem0, MemGPT, Zep, Letta) · MuninnDB ★ cognitive
Temporal priority scoring (ACT-R)
Hebbian auto-learning
Learns from every query (no LLM)
Evolves on read — mathematically
Semantic push triggers
Bayesian confidence
Explainable Why field (pure math)
Full-text search (BM25)
Vector / semantic search
Graph traversal
Zero external dependencies
Single binary deploy

Comparisons reflect core native capabilities. Memory wrappers (Mem0, MemGPT, Zep, Letta) use LLMs for extraction and vector stores for retrieval — no engine-level cognitive primitives.

Capability

One call. Five systems eliminated.

A single Activate() call does what you'd otherwise need five separate systems to do — and the database gets smarter with every query, without any application code.

<20ms
Full activation pipeline
6-phase cognitive retrieval
🎯
<2ms
Point read by ID
single engram lookup
🧠
1 call
Replaces 5 systems
vector + BM25 + graph + learning + scoring
📦
1 binary
Deploy size
zero external deps
💾
~1.7KB
Per engram
with vector + 5 associations
🤖
19 tools
MCP integration
for Claude, Cursor, agents

What Activate() replaces in your stack

Pinecone, Weaviate
Vector DB
Elasticsearch, Typesense
Full-text search
Neo4j, Memgraph
Graph database
custom application code
Learning layer
custom ranking logic
Scoring pipeline
→ replaced by a single MuninnDB Activate() call — then Hebbian weights update silently in the background

Deployment tiers

Tier · Engrams · Disk · Deployment
Personal · 10K · 17–40 MB · Single binary
Power User · 100K · 170–400 MB · Single binary
Team · 1M · 1.7–4 GB · Single node
Enterprise · 100M+ · 170–400 GB · Sharded cluster
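The tier sizes follow directly from the ~1.7 KB-per-engram figure quoted earlier. The sketch below checks the arithmetic; the 4 KB upper bound is our assumption for larger engrams, and we use decimal megabytes to match the table.

```go
package main

import "fmt"

// diskMB estimates on-disk size in decimal MB for a given engram count
// and per-engram size in KB (~1.7 KB with vector + 5 associations).
func diskMB(engrams int, kbPerEngram float64) float64 {
	return float64(engrams) * kbPerEngram / 1000
}

func main() {
	tiers := []struct {
		name    string
		engrams int
	}{
		{"Personal", 10_000},
		{"Power User", 100_000},
		{"Team", 1_000_000},
	}
	for _, t := range tiers {
		fmt.Printf("%-10s %9d engrams ≈ %.0f–%.0f MB\n",
			t.name, t.engrams, diskMB(t.engrams, 1.7), diskMB(t.engrams, 4.0))
	}
}
```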
Quick Start

Up and running in 5 minutes

No Docker required. No cloud accounts. No dependencies to install. Just download and run.

1
Step 1: Install
bash
curl -fsSL https://muninndb.com/install.sh | sh
# brew install scrypster/tap/muninn  ← coming with first release
2
Step 2: Initialize
bash
# Guided setup — connects Claude Desktop, Cursor, VS Code, Windsurf
muninn init

# [1/3] Which AI tools would you like to connect?
#        ✓ Claude Desktop   ✓ Cursor   ✓ Windsurf
# [2/3] Secure your MCP endpoint with a bearer token? [Y/n] Y
# [3/3] Start MuninnDB now? [Y/n] Y
#
# muninn started (pid 12345)
#   MBP  :8474   binary protocol
#   REST :8475   JSON API
#   gRPC :8477   gRPC API
#   MCP  :8750   AI tool integration
#   UI   :8476   http://localhost:8476
#
# Claude Desktop  → configured ✓
# Cursor          → configured ✓

Where we are

You're early. That's the point.

The databases people build AI on in 2027 will all look like this. The people running MuninnDB today are defining what cognitive memory infrastructure looks like before anyone else.

🛸 First Movers

There is no benchmark for cognitive memory databases yet — because there were no cognitive memory databases. The builders here now will write the benchmarks everyone else chases.

github.com/scrypster/muninndb →
🔬 Cognitive Science

ACT-R (Anderson, 1993). Hebbian learning. Bayesian confidence gating. Not a startup's blog-post heuristic — 30 years of peer-reviewed cognitive science turned into a storage engine.

Read the science →
🔓 Free to use. Source you can read.

The full engine source is on GitHub. Free for individuals, small teams, and internal use — no cloud account, no usage billing, no vendor lock-in. You own your data and your memory layer. Becomes Apache 2.0 in 2030.

License details →

The agents people will build in 2027 will have memory like this.

You can start today.

One command. Nothing to configure. Your AI tools have persistent, cognitive memory in under 60 seconds.

terminal
curl -fsSL https://muninndb.com/install.sh | sh
muninn init
# Your AI tools now have memory.

Free to use · Source available · No account required