MuninnDB

ACTIVATE Query

ACTIVATE is MuninnDB's primary retrieval operation. Given a context string and a limit N, it returns the N most cognitively relevant engrams — not just the ones that match keywords or vectors, but the ones that are relevant right now, accounting for decay, learning, and graph associations.

Key property: Every ACTIVATE call is both a retrieval AND a learning event. Reading memories strengthens their associations, resets decay timers for accessed engrams, and updates Hebbian weights for co-retrieved pairs.

Overview

The ACTIVATE pipeline runs in parallel across multiple goroutines, completing all six phases in under 20 ms at one million engrams. The result is a ranked list of engrams with composite scores and explanation strings.

The Six Phases

1. Embed + Tokenize
   The context string is converted to embeddings (if the embed plugin is active) and BM25 tokens simultaneously. These run in parallel, so you don't pay for both sequentially.

2. Parallel Candidate Retrieval
   Three goroutines retrieve candidate sets: FTS candidates (BM25 inverted index), vector candidates (HNSW cosine similarity), and a decay-filtered pool of all active engrams above the floor threshold.

3. RRF Fusion
   Reciprocal Rank Fusion merges the three candidate lists into one coherent ranking without needing to normalize scores across different scales. Formula: RRF(d) = Σ 1/(k + rank(d)), with k = 60.

4. Hebbian Boost
   Recent co-activation weights from the ring buffer (the last 200 activations) are applied as score multipliers. Engrams frequently retrieved with similar contexts get a lasting boost.

5. Association Traversal
   A BFS graph walk from the top candidates, to depth 2. Each hop applies a penalty (default 0.5×). This surfaces related engrams that weren't in the original candidate set.

6. Score + Filter + Why
   The final composite score is computed, results below the minimum threshold are dropped, and each result gets a Why field explaining the score breakdown (BM25 contribution, vector similarity, Hebbian boost, association path).
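Phase 3's fusion is simple enough to sketch in full. A minimal Go sketch, assuming the candidate lists arrive as rank-ordered slices of engram IDs; rrfFuse is an illustrative name for this sketch, not part of MuninnDB's API.

```go
package main

import (
	"fmt"
	"sort"
)

// rrfFuse merges several ranked candidate lists with Reciprocal Rank
// Fusion: RRF(d) = Σ 1/(k + rank(d)), summed over every list that
// contains d. Ranks are 1-based; k = 60 matches the default above.
func rrfFuse(lists [][]string, k float64) []string {
	scores := make(map[string]float64)
	for _, list := range lists {
		for i, id := range list {
			scores[id] += 1.0 / (k + float64(i+1))
		}
	}
	ids := make([]string, 0, len(scores))
	for id := range scores {
		ids = append(ids, id)
	}
	// Highest fused score first; break ties lexicographically for determinism.
	sort.Slice(ids, func(a, b int) bool {
		if scores[ids[a]] != scores[ids[b]] {
			return scores[ids[a]] > scores[ids[b]]
		}
		return ids[a] < ids[b]
	})
	return ids
}

func main() {
	fts := []string{"e1", "e2", "e3"}   // BM25 ranking
	vec := []string{"e2", "e1", "e4"}   // HNSW ranking
	decay := []string{"e2", "e3", "e1"} // decay-filtered pool
	// e2 sits near the top of every list, so it fuses first.
	fmt.Println(rrfFuse([][]string{fts, vec, decay}, 60))
}
```

Because RRF only looks at ranks, BM25 scores and cosine similarities never have to be projected onto a common scale — which is exactly why it is a good fit for merging heterogeneous retrievers.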

Basic Usage

results, err := mem.Activate(ctx, "what are the user's preferences?", 10)
if err != nil {
    log.Fatal(err)
}

for _, r := range results.Engrams {
    fmt.Printf("Score: %.3f  Concept: %s\n", r.Score, r.Concept)
    fmt.Printf("Why: %s\n", r.Why)
}

Activation Options

Option       Default     Description
Context      (required)  The query string.
Limit        10          Maximum engrams to return.
MinScore     0.1         Minimum composite score threshold.
Tags         []          Filter candidates by tags (pre-filter, fast).
States       [ACTIVE]    Filter by lifecycle state.
MaxDepth     2           Association graph traversal depth.
HopPenalty   0.5         Score multiplier per association hop.
IncludeWhy   true        Include explanation strings in results.
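MaxDepth and HopPenalty govern the phase-5 graph walk. A minimal sketch of that walk under assumptions: the adjacency map, the traverse function, and keeping only the best-scoring path to each engram are all illustrative choices for this sketch, not MuninnDB's internals.

```go
package main

import "fmt"

// traverse runs a BFS from seed engrams, multiplying the carried score
// by hopPenalty at each hop and stopping at maxDepth. Seeds keep their
// original scores; discovered engrams keep the best path score seen.
func traverse(adj map[string][]string, seeds map[string]float64, maxDepth int, hopPenalty float64) map[string]float64 {
	type node struct {
		id    string
		score float64
		depth int
	}
	scores := map[string]float64{}
	queue := []node{}
	for id, s := range seeds {
		scores[id] = s
		queue = append(queue, node{id, s, 0})
	}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		if n.depth == maxDepth {
			continue
		}
		for _, next := range adj[n.id] {
			s := n.score * hopPenalty
			if s > scores[next] { // keep the best-scoring path to each engram
				scores[next] = s
				queue = append(queue, node{next, s, n.depth + 1})
			}
		}
	}
	return scores
}

func main() {
	adj := map[string][]string{"a": {"b"}, "b": {"c"}, "c": {"d"}}
	seeds := map[string]float64{"a": 1.0}
	// With the defaults (depth 2, penalty 0.5): b scores 0.5, c scores 0.25,
	// and d is out of reach.
	fmt.Println(traverse(adj, seeds, 2, 0.5))
}
```

The multiplicative penalty means each extra hop halves the contribution by default, so raising MaxDepth widens recall at a steep scoring discount rather than flooding results with distant neighbors.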

The Why Field

Every engram in an ACTIVATE result includes a Why string explaining exactly how it scored. This is not a language-model explanation — it's a factual breakdown of the scoring math.

Example Why output:

BM25(0.78) + hebbian_boost(0.16) + assoc_depth1(0.06)
relevance=0.91, confidence=0.95, access_count=14
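A string in this format is plain formatted output, not free text. A sketch of assembling one, assuming the scoring components are available as plain fields; the scoreBreakdown type and its field names are hypothetical, mirroring the example above rather than MuninnDB's actual API.

```go
package main

import "fmt"

// scoreBreakdown holds the factual components behind one result's score.
// Field names mirror the example Why output; they are illustrative.
type scoreBreakdown struct {
	bm25, hebbianBoost, assocDepth1 float64
	relevance, confidence           float64
	accessCount                     int
}

// why renders the breakdown in the two-line format shown above.
func (s scoreBreakdown) why() string {
	return fmt.Sprintf(
		"BM25(%.2f) + hebbian_boost(%.2f) + assoc_depth1(%.2f)\nrelevance=%.2f, confidence=%.2f, access_count=%d",
		s.bm25, s.hebbianBoost, s.assocDepth1, s.relevance, s.confidence, s.accessCount)
}

func main() {
	fmt.Println(scoreBreakdown{0.78, 0.16, 0.06, 0.91, 0.95, 14}.why())
}
```

Because the Why field is deterministic arithmetic rather than generated prose, it is safe to parse, log, and diff across runs.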