# Quick Start

Up and running in 5 minutes. No Docker. No cloud account. No configuration required.
## 1. Install MuninnDB

One command downloads and installs the binary (macOS/Linux: curl; Windows: PowerShell):

```sh
# macOS / Linux
curl -fsSL https://muninndb.com/install.sh | sh

# macOS (Homebrew)
brew install scrypster/tap/muninn

# Windows (PowerShell)
irm https://muninndb.com/install.ps1 | iex
```

## 2. Initialize and connect your AI tools
One command sets everything up — connects Claude Desktop, Claude Code/CLI, Cursor, OpenClaw, Windsurf, Codex, VS Code (and more), generates a bearer token, and starts all services:
```sh
muninn init
# Guided wizard — connects Claude Desktop, Cursor, VS Code, Windsurf
# and starts all services automatically.
#
# muninn started (pid 12345)
#   MBP   :8474  binary protocol
#   REST  :8475  JSON API
#   gRPC  :8477  gRPC API
#   MCP   :8750  AI tool integration
#   UI    :8476  http://localhost:8476  Web dashboard
```
Visit http://localhost:8476 for the visual dashboard — priority charts, relationship graphs, live activation log.
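If the dashboard doesn't load, a quick way to see whether the services are actually listening is a plain TCP check. This sketch assumes the default ports printed by `muninn init` above; adjust if yours differ.

```python
import socket

# TCP-connect check against the default muninn ports from `muninn init`.
# A successful connect only proves something is listening, not that it is muninn.
for name, port in [("REST", 8475), ("UI", 8476), ("MCP", 8750)]:
    with socket.socket() as s:
        s.settimeout(2)
        ok = s.connect_ex(("localhost", port)) == 0
    print(f"{name} :{port} -> {'listening' if ok else 'not listening (is muninn running?)'}")
```

If a port reports as not listening, re-run `muninn init` and check its startup output for errors.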
## 3. Store your first memory

Connect via the Go or Python SDK (REST on port 8475):

```go
client := muninn.NewClient("http://localhost:8475", "your-token")

engID, err := client.Write(ctx, "default",
	"user prefers dark mode",
	"Always render UI in dark theme for this user",
	[]string{"preference", "ui"})
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Stored: %s\n", engID)
```

```python
import asyncio

from muninn import MuninnClient

async def main():
    async with MuninnClient("http://localhost:8475", token="your-token") as client:
        eng_id = await client.write(
            vault="default",
            concept="user prefers dark mode",
            content="Always render UI in dark theme for this user",
            tags=["preference", "ui"],
        )
        print(f"Stored: {eng_id}")

asyncio.run(main())
```

## 4. Activate relevant memories
Activate returns the N most cognitively relevant engrams for a given context — ranked by BM25 score, temporal priority (recency + access frequency), Hebbian associations, and graph depth:
```go
results, err := client.Activate(ctx, "default", []string{"what does the user want?"}, 5)
if err != nil {
	log.Fatal(err)
}
for _, r := range results.Engrams {
	fmt.Printf("%.2f — %s\n", r.Score, r.Concept)
	fmt.Printf("  Why: %s\n", r.Why)
}
// Output:
// 0.94 — user prefers dark mode
//   Why: BM25 match (0.78) + Hebbian boost (0.16)
```

```python
results = await client.activate(
    vault="default", context=["what does the user want?"])
for r in results.engrams:
    print(f"{r.score:.2f} — {r.concept}")
    print(f"  Why: {r.why}")
# Output:
# 0.94 — user prefers dark mode
#   Why: BM25 match (0.78) + Hebbian boost (0.16)
```

Done.
Temporal priority scoring, Hebbian learning, and association building happen automatically from here. Your memories improve the more they're used.
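To give a feel for how the signals from step 4 could combine, here is a purely illustrative sketch of a blended activation score. The weights, the one-week half-life, and the helper names (`temporal_priority`, `activation_score`) are all invented for illustration — this is not MuninnDB's actual scoring code.

```python
import math
import time

# Illustrative constants only — not MuninnDB internals.
HALF_LIFE_S = 7 * 24 * 3600  # recency weight halves every week

def temporal_priority(last_access_ts: float, access_count: int, now: float) -> float:
    """Recency (exponential decay) plus a damped access-frequency bonus."""
    recency = math.exp(-math.log(2) * (now - last_access_ts) / HALF_LIFE_S)
    frequency = math.log1p(access_count) / 10.0
    return recency + frequency

def activation_score(bm25: float, last_access_ts: float, access_count: int,
                     hebbian_boost: float, graph_depth: int, now: float) -> float:
    """Weighted blend of the four signals named in step 4."""
    depth_penalty = 1.0 / (1 + graph_depth)  # direct hits outrank distant associations
    return (0.6 * bm25
            + 0.2 * temporal_priority(last_access_ts, access_count, now)
            + hebbian_boost
            + 0.1 * depth_penalty)

# A fresh, frequently-accessed engram outranks a stale one with the same BM25 match.
now = time.time()
recent = activation_score(0.78, now - 3600, 12, 0.16, 0, now)
stale = activation_score(0.78, now - 90 * 24 * 3600, 1, 0.0, 2, now)
print(f"recent: {recent:.2f}  stale: {stale:.2f}")
```

The point of the sketch is the shape, not the numbers: text relevance dominates, recent and frequently-used memories get a boost, and Hebbian associations lift engrams that fire together.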