
ACTIVATE Query

ACTIVATE is MuninnDB's primary retrieval operation. Given a context string and a limit N, it returns the N most cognitively relevant engrams — not just the ones that match keywords or vectors, but the ones that are relevant right now, accounting for temporal priority, learning, and graph associations.

Key property: Every ACTIVATE call is both a retrieval AND a learning event. Reading memories records the access (boosting future temporal priority scores) and updates Hebbian weights for co-retrieved pairs.

Overview

The ACTIVATE pipeline runs in six phases and parallelizes the independent work inside them: embedding and tokenization run concurrently, and the three candidate retrievals each run in their own goroutine. The result is a ranked list of engrams with composite scores and explanation strings. Throughput and latency depend mostly on your embedder configuration; the core pipeline adds minimal overhead.

The Six Phases

Phase 1: Embed + Tokenize
Your context string is converted to embeddings (if the embed plugin is active) and BM25 tokens simultaneously. These run in parallel — you don't pay for both sequentially.
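
A rough sketch of that fan-out in Go (embedFn and tokenizeFn are placeholders standing in for the embed plugin and BM25 tokenizer, not MuninnDB's internal API):

// Sketch: run embedding and BM25 tokenization concurrently (needs "sync").
func embedAndTokenize(
    query string,
    embedFn func(string) ([]float32, error),
    tokenizeFn func(string) []string,
) ([]float32, []string, error) {
    var (
        vec    []float32
        tokens []string
        embErr error
        wg     sync.WaitGroup
    )
    wg.Add(2)
    go func() { defer wg.Done(); vec, embErr = embedFn(query) }()
    go func() { defer wg.Done(); tokens = tokenizeFn(query) }()
    wg.Wait()
    return vec, tokens, embErr
}
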
Phase 2: Parallel Candidate Retrieval
Three goroutines retrieve candidate sets: FTS candidates (BM25 inverted index), vector candidates (HNSW cosine similarity), and a temporally-scored pool of all active engrams (ACT-R formula applied to AccessCount + LastAccess).
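
The temporal score can be pictured as an ACT-R-style base-level activation approximated from the access count and recency alone. The decay constant and exact form below are illustrative assumptions, not MuninnDB's actual parameters (needs "math" and "time"):

// Sketch: ACT-R-flavoured temporal priority from AccessCount + LastAccess.
// More accesses raise the score; time since the last access lowers it.
func temporalScore(accessCount int, lastAccess, now time.Time) float64 {
    const d = 0.5 // decay exponent; 0.5 is the textbook ACT-R default
    if accessCount < 1 {
        accessCount = 1
    }
    elapsed := math.Max(now.Sub(lastAccess).Seconds(), 1)
    return math.Log(float64(accessCount)) - d*math.Log(elapsed)
}
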
Phase 3: RRF Fusion
Reciprocal Rank Fusion merges the three candidate lists into one coherent ranking without needing to normalize scores across different scales. Formula: RRF(d) = Σ_s 1/(k_s + rank_s(d)), summed over each source s that returned d, with source-specific k values: k_FTS=60, k_HNSW=40, k_Decay=120.
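
In code, the fusion is just a sum of reciprocal ranks with a per-source k. A minimal sketch:

// Sketch: Reciprocal Rank Fusion over per-source rankings.
// ranks maps source name -> engram IDs ordered best-first;
// ks maps source name -> that source's k (e.g. fts: 60, hnsw: 40, decay: 120).
func rrfFuse(ranks map[string][]string, ks map[string]float64) map[string]float64 {
    fused := make(map[string]float64)
    for source, ids := range ranks {
        k := ks[source]
        for i, id := range ids {
            fused[id] += 1.0 / (k + float64(i+1)) // ranks are 1-based
        }
    }
    return fused
}

An engram ranked 1st by FTS and 3rd by HNSW scores 1/61 + 1/43 ≈ 0.040, regardless of the raw BM25 or cosine values behind those ranks.
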
Phase 4: Hebbian Boost
Recent co-activation weights from the ring buffer (last 50 activations) are applied as score multipliers. Engrams frequently retrieved with similar contexts get a lasting boost.
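
As a sketch, the boost amounts to looking up each candidate's recent co-activation weight and folding it into its score. The weight table and the 1 + w multiplier form are illustrative assumptions:

// Sketch: fold Hebbian co-activation weights into candidate scores.
// weights holds per-engram weights accumulated over the last 50 activations;
// an engram with no recent co-activation is simply absent from the map.
func applyHebbianBoost(scores, weights map[string]float64) {
    for id, w := range weights {
        if _, ok := scores[id]; ok {
            scores[id] *= 1.0 + w // e.g. w = 0.16 gives a 1.16x boost
        }
    }
}
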
Phase 5: Association Traversal
BFS graph walk from the top candidates, depth 2. Each hop applies a penalty (default 0.7×). Surfaces related engrams that weren't in the original candidate set.
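
A sketch of the walk with the default depth and penalty; the neighbors callback stands in for the association graph lookup:

// Sketch: breadth-first traversal from the seed candidates,
// decaying the propagated score by hopPenalty at each hop.
func traverseAssociations(
    seeds map[string]float64,
    neighbors func(id string) []string,
    maxDepth int, hopPenalty float64,
) map[string]float64 {
    discovered := make(map[string]float64)
    frontier := seeds
    for depth := 1; depth <= maxDepth; depth++ {
        next := make(map[string]float64)
        for id, score := range frontier {
            for _, n := range neighbors(id) {
                if _, isSeed := seeds[n]; isSeed {
                    continue // already a direct candidate
                }
                s := score * hopPenalty // 0.7 per hop by default, so 0.49 at depth 2
                if s > discovered[n] {
                    discovered[n] = s
                    next[n] = s
                }
            }
        }
        frontier = next
    }
    return discovered
}
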
Phase 6: Score + Filter + Why
Final composite score is computed. Results below the minimum threshold are dropped. Each result gets a Why field explaining the score breakdown (BM25 contribution, vector similarity, Hebbian boost, association path).
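
A sketch of the last step, under the assumption (for illustration only) that the contributions are simply summed into the composite:

// Sketch: filter on MinScore and build the human-readable Why breakdown.
// Treating the composite as a plain sum is an assumption, not a statement
// about MuninnDB's exact scoring. Needs "fmt".
func finalize(bm25, vector, hebbian, assoc, minScore float64) (float64, string, bool) {
    score := bm25 + vector + hebbian + assoc
    if score < minScore {
        return 0, "", false // dropped from the result set
    }
    why := fmt.Sprintf("BM25(%.2f) + vector(%.2f) + hebbian_boost(%.2f) + assoc(%.2f)",
        bm25, vector, hebbian, assoc)
    return score, why, true
}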

Basic Usage

// Connect to a local MuninnDB server.
client := muninn.NewClient("http://localhost:8475", "your-token")
ctx := context.Background()

// Retrieve up to 10 engrams ranked against the query context.
results, err := client.Activate(ctx, "default",
    []string{"what are the user's preferences?"}, 10)
if err != nil {
    log.Fatal(err)
}

// Each result carries its composite score, concept, and Why breakdown.
for _, r := range results.Engrams {
    fmt.Printf("Score: %.3f  Concept: %s\n", r.Score, r.Concept)
    fmt.Printf("Why: %s\n", r.Why)
}

Activation Options

Option       Default     Description
Context      (required)  The query string.
Limit        10          Maximum engrams to return.
MinScore     0.1         Minimum composite score threshold.
Tags         []          Filter candidates by tags (pre-filter, fast).
States       [ACTIVE]    Filter by lifecycle state.
MaxDepth     2           Association graph traversal depth.
HopPenalty   0.7         Score multiplier per association hop.
IncludeWhy   true        Include explanation strings in results.
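
If your client version exposes these as an options struct, a call might look like the sketch below. The ActivateOptions type, field names, and ActivateWithOptions method are hypothetical; the Basic Usage example above only passes a limit, so check your client's actual signature:

// Hypothetical API sketch; the real client may accept these options differently.
opts := muninn.ActivateOptions{
    Limit:      5,
    MinScore:   0.2,
    Tags:       []string{"preferences"},
    MaxDepth:   1,
    IncludeWhy: true,
}
results, err := client.ActivateWithOptions(ctx, "default",
    []string{"what are the user's preferences?"}, opts)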

The Why Field

Every engram in an ACTIVATE result includes a Why string explaining exactly how it scored. This is not a language-model explanation — it's a factual breakdown of the scoring math.

Example Why output:
BM25(0.78) + hebbian_boost(0.16) + assoc_depth1(0.06)
relevance=0.91, confidence=0.95, access_count=14