MuninnDB
The World's First Cognitive Database

The Database
That Thinks Like
Your AI

Sub-20ms cognitive queries. Single binary. Zero dependencies. Memory that decays, learns, and notifies — automatically.

<20ms activation
1 binary zero ops
0 deps standalone
Apache 2.0 open source
quickstart.go
// Store a memory — it decays, learns, and triggers automatically
mem := muninn.NewMemory("api-key", "muninn://localhost:8747")

engram, _ := mem.Store(ctx, &muninn.StoreRequest{
  Concept:  "user prefers dark mode",
  Content:  "Always use dark theme in UI responses",
  Tags:     []string{"preference", "ui"},
})

// Activate — finds relevant memories in <20ms
results, _ := mem.Activate(ctx, "what does the user want?", 5)
The Problem

AI agents deserve better than repurposed storage

The world has built AI models that think in real-time, then stored their memories in databases designed for invoices. The mismatch is costing you.

🧩
3+ systems

Bolted-on memory

Every AI team stitches memory together from Redis, Postgres, and a vector store. None of these were built for memory — they're storage systems wearing a costume.

📉
0% decay

Memories that never forget

A fact from six months ago sits in your DB with the same weight as something from yesterday. Storage systems have no concept of relevance over time. Your AI agent's context gets cluttered.

🔇
Polling only

Databases that just sit there

Traditional databases are passive. You pull data when you ask. They never push data when something becomes relevant. Your AI agent is flying blind between queries.

"Memory isn't storage.
It's a living system."

Your brain doesn't store memories like a hard drive. They strengthen when recalled, fade when ignored, connect to related ideas automatically, and surface unbidden when suddenly relevant. MuninnDB brings these same properties to your database — not as features, but as the foundation.

Named after Muninn — Odin's raven of memory in Norse mythology. Learn the mythology →

Cognitive Primitives

Three ideas from neuroscience. One database that implements them.

These aren't features bolted on top — they're the storage engine itself. The math runs on every read and write.

01

Decay

Memories fade — just like real ones.

Every engram (memory) has a relevance score that naturally fades over time using the Ebbinghaus forgetting curve — the same math that describes how human memory works. Old, unused memories drift toward a background floor. Fresh, frequently-accessed memories stay strong.

Why it matters: Your agent's context stays fresh. Stale information fades naturally — no manual cleanup required.

Ebbinghaus Forgetting Curve →
Technical Detail

Continuous decay worker runs the Ebbinghaus formula R(t) = max(floor, e^(−t/S)), where t is time since last retrieval and S is a stability term grown through spaced retrieval. Default decay floor is 0.05 — memories never fully vanish.

Ebbinghaus forgetting curve visualization
02

Hebbian Learning

Related memories grow stronger together.

When two memories are retrieved together — because both were relevant to your query — their association automatically strengthens. The more often two engrams fire together, the stronger their bond. This is "neurons that fire together, wire together" — Hebbian learning — without any LLM involved.

Why it matters: Your agent learns what concepts belong together without you telling it. Context awareness emerges automatically from usage patterns.

Hebbian Theory →
Technical Detail

Co-activation log (ring buffer, 200 entries) feeds Hebbian worker. Weight update: new_weight = old_weight × (1 + boost_factor). Bidirectional — unused associations weaken symmetrically.

Neural co-activation network visualization
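The weight update above can be sketched in a few lines. The boost factor and the cap at 1.0 are this sketch's assumptions, not MuninnDB's actual constants; the symmetric weakening mirrors the bidirectional rule described above:

```go
package main

import "fmt"

// hebbianUpdate applies the multiplicative rule: co-activated pairs
// strengthen (new = old * (1 + boost)), idle pairs weaken symmetrically
// (new = old * (1 - boost)). The cap at 1.0 is an assumption of this
// sketch so repeated boosts cannot grow without bound.
func hebbianUpdate(weight, boost float64, coActivated bool) float64 {
	if coActivated {
		weight *= 1 + boost
	} else {
		weight *= 1 - boost
	}
	if weight > 1 {
		weight = 1
	}
	return weight
}

func main() {
	w := 0.5
	for i := 0; i < 3; i++ { // three co-activations observed in the log
		w = hebbianUpdate(w, 0.1, true)
	}
	fmt.Printf("after 3 co-activations: %.3f\n", w)
	fmt.Printf("after 1 idle cycle:     %.3f\n", hebbianUpdate(w, 0.1, false))
}
```

The multiplicative form means strong associations gain more per co-activation than weak ones, so usage patterns compound over time.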
03

Semantic Triggers

The database pushes — you don't have to pull.

Subscribe to a semantic context, and MuninnDB will push a notification to your agent the moment a matching memory becomes highly relevant. No polling. No scanning. The database watches for relevance changes and delivers results to you — like an alert system for knowledge.

Why it matters: Your agent gets critical context at the right moment — not when it happens to query. Proactive intelligence instead of reactive polling.

Trigger Documentation →
Technical Detail

Triggers evaluated against active engrams after decay/Hebbian cycles. Semantic matching via embedding cosine similarity or BM25 FTS. Push via WebSocket, SSE, or callback. Rate-limited per vault.

Semantic push trigger visualization
Open Source

Apache 2.0. Core engine is free and open forever. No cloud lock-in.

github.com/scrypster/muninndb →
🧠 Built for AI Agents

Native MCP integration. Works with Claude, Cursor, and any MCP-compatible agent out of the box.

See MCP integration →
Zero-Ops Deploy

Single binary. No Docker, no Redis, no external services. Download, run, use.

Start in 5 minutes →
Comparison

Nothing else does this

These aren't missing features. They didn't exist in any database before MuninnDB — because no database was designed for cognitive memory.

Capability                   PostgreSQL  Redis    Pinecone  Neo4j  ★ MuninnDB
Memory decay (Ebbinghaus)    —           —        —         —      ✓
Hebbian auto-learning        —           —        —         —      ✓
Semantic push triggers       —           —        —         —      ✓
Bayesian confidence          —           —        —         —      ✓
Full-text search (BM25)      partial     module   —         ✓      ✓
Vector / semantic search     partial     module   ✓         ✓      ✓
Graph traversal              —           —        —         ✓      ✓
Zero external dependencies   —           ✓        —         —      ✓
Single binary deploy         —           ✓        —         —      ✓

Comparisons reflect core native capabilities. PostgreSQL's pgvector extension provides limited vector support, but no cognitive primitives.

Performance

Numbers you can build on

Cognitive doesn't mean slow. MuninnDB runs its full 6-phase activation pipeline — embedding, BM25, vector search, RRF fusion, Hebbian boost, and graph traversal — in under 20ms.
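Of those six phases, RRF (reciprocal rank fusion) is simple enough to show on its own. This sketch uses the conventional k = 60 constant, which may differ from MuninnDB's; the point of RRF is that it merges the BM25 and vector rankings without having to normalize their incompatible score scales:

```go
package main

import (
	"fmt"
	"sort"
)

// rrf fuses ranked result lists: each engram scores the sum of
// 1/(k + rank) over every list it appears in (ranks are 1-based),
// so items ranked well by several retrievers rise to the top.
func rrf(k float64, lists ...[]string) []string {
	scores := map[string]float64{}
	for _, list := range lists {
		for i, id := range list {
			scores[id] += 1 / (k + float64(i+1))
		}
	}
	fused := make([]string, 0, len(scores))
	for id := range scores {
		fused = append(fused, id)
	}
	sort.Slice(fused, func(i, j int) bool { return scores[fused[i]] > scores[fused[j]] })
	return fused
}

func main() {
	bm25 := []string{"e1", "e2", "e3"}   // lexical ranking
	vector := []string{"e3", "e1", "e4"} // semantic ranking
	fmt.Println(rrf(60, bm25, vector))   // e1 and e3 appear in both lists, so they lead
}
```

Because RRF only looks at ranks, a result that both retrievers agree on beats one that a single retriever scores highly.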

  • <20ms activation query (at 1M engrams)
  • 🎯 <2ms point read by ID (single engram lookup)
  • 📦 1 binary deploy (zero external deps)
  • 💾 ~1.7 KB per engram (with vector + 5 associations)
  • 🌐 100M+ engram scale (with sharded cluster)
  • 🤖 11 MCP tools (for Claude, Cursor, and agents)

Deployment tiers

Tier         Engrams   Disk         Deployment
Personal     10K       17–40 MB     Single binary
Power User   100K      170–400 MB   Single binary
Team         1M        1.7–4 GB     Single node
Enterprise   100M+     170–400 GB   Sharded cluster
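The disk column follows directly from the ~1.7 KB-per-engram figure quoted above (the upper bounds presumably reflect larger embeddings or more associations). A quick sanity check, with a helper name and decimal units chosen for this sketch:

```go
package main

import "fmt"

// tierGB estimates on-disk size in GB for a tier, given an engram count
// and a per-engram footprint in KB (decimal units: 1 GB = 1e6 KB).
func tierGB(engrams, perEngramKB float64) float64 {
	return engrams * perEngramKB / 1e6
}

func main() {
	// Lower bound uses the quoted ~1.7 KB per engram
	// (vector + 5 associations).
	tiers := []struct {
		name    string
		engrams float64
	}{
		{"Personal", 10e3},
		{"Power User", 100e3},
		{"Team", 1e6},
		{"Enterprise", 100e6},
	}
	for _, t := range tiers {
		fmt.Printf("%-11s %.3g GB\n", t.name, tierGB(t.engrams, 1.7))
	}
}
```

The results reproduce the table's lower bounds: 17 MB, 170 MB, 1.7 GB, and 170 GB.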
Quick Start

Up and running in 5 minutes

No Docker required. No cloud accounts. No dependencies to install. Just download and run.

1
Step 1: Download
bash
# macOS / Linux
curl -sSL https://get.muninndb.com | sh

# Or build from source
git clone https://github.com/scrypster/muninndb
cd muninndb && go build ./cmd/muninndb
2
Step 2: Run
bash
# Start the server (port 8747 MBP, 8749 REST, 8750 UI)
muninndb serve

# Open the web UI
open http://localhost:8750
3
Step 3: Connect
go
import "github.com/scrypster/muninndb/sdk/go/muninn"

mem := muninn.NewMemory("your-api-key", "muninn://localhost:8747")

// Store a memory
mem.Store(ctx, &muninn.StoreRequest{
    Concept: "user prefers concise replies",
    Content: "Keep responses under 200 words when possible",
    Tags:    []string{"preference", "style"},
})

// Activate relevant memories
results, _ := mem.Activate(ctx, "how should I respond?", 5)
Use Cases

Built for the age of AI

Every AI application eventually needs persistent memory. The question is whether you want to manage it manually or let the database handle the cognitive work.

🤖
AI Agent Developers

Give your agent a real memory

Stop stitching together Redis, Postgres, and a vector store. MuninnDB gives your agent one endpoint that stores, recalls, decays, and learns — automatically. Your agent remembers what it needs, forgets what it doesn't.

  • Persistent cross-session context
  • Automatic relevance decay
  • Push triggers for proactive recall
See the MCP integration →
🏢
Enterprise AI Teams

Memory you can audit and correct

Every activation includes a "Why" score showing exactly which memories surfaced and why. Confidence is tracked. Contradictions are detected. You can edit, correct, or archive any memory at any time. Compliance-ready from day one.

  • Explainable activation scores
  • Editable confidence levels
  • Full memory lifecycle audit
Read the architecture →
🔬
Researchers

Study emergent memory behavior

Every cognitive primitive is exposed as plain math. No black boxes. Watch Hebbian weights evolve. Observe decay curves. Subscribe to confidence updates. MuninnDB is a laboratory for studying how artificial cognitive memory behaves at scale.

  • All math exposed and observable
  • Prometheus metrics for every worker
  • Hackable plugin architecture
Explore the internals →

Start in 5 minutes.
No ops. No dependencies.

Download a single binary, run it, and your AI agent has a cognitive memory system. When you're ready to scale, it scales with you.

Apache 2.0 · Open Source · No account required