ACTIVATE Query
ACTIVATE is MuninnDB's primary retrieval operation. Given a context string and a limit N, it returns the N most cognitively relevant engrams — not just the ones that match keywords or vectors, but the ones that are relevant right now, accounting for temporal priority, learning, and graph associations.
Key property: Every ACTIVATE call is both a retrieval AND a learning event. Reading memories records the access (boosting future temporal priority scores) and updates Hebbian weights for co-retrieved pairs.
Overview
The ACTIVATE pipeline runs in parallel across multiple goroutines, executing all six phases concurrently, and returns a ranked list of engrams with composite scores and explanation strings. Throughput and latency depend on your embedder configuration — the core pipeline adds minimal overhead.
The Six Phases
Basic Usage
```go
client := muninn.NewClient("http://localhost:8475", "your-token")
results, err := client.Activate(ctx, "default",
    []string{"what are the user's preferences?"}, 10)
if err != nil {
    log.Fatal(err)
}
for _, r := range results.Engrams {
    fmt.Printf("Score: %.3f  Concept: %s\n", r.Score, r.Concept)
    fmt.Printf("Why: %s\n", r.Why)
}
```

```python
async with MuninnClient("http://localhost:8475", token="your-token") as client:
    results = await client.activate(
        vault="default",
        context=["what are the user's preferences?"])
    for r in results.engrams:
        print(f"Score: {r.score:.3f}  Concept: {r.concept}")
        print(f"Why: {r.why}")
```

Activation Options
| Option | Default | Description |
|---|---|---|
| Context | — | The query context strings (the SDKs accept a list). Required. |
| Limit | 10 | Maximum engrams to return. |
| MinScore | 0.1 | Minimum composite score threshold. |
| Tags | [] | Filter candidates by tags (pre-filter, fast). |
| States | [ACTIVE] | Filter by lifecycle state. |
| MaxDepth | 2 | Association graph traversal depth. |
| HopPenalty | 0.7 | Score multiplier per association hop. |
| IncludeWhy | true | Include explanation strings in results. |
The Why Field
Every engram in an ACTIVATE result includes a Why string explaining exactly how it scored. This is not a language-model explanation — it's a factual breakdown of the scoring math.
```
BM25(0.78) + hebbian_boost(0.16) + assoc_depth1(0.06)
relevance=0.91, confidence=0.95, access_count=14
```
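Because the Why string is a deterministic breakdown rather than free text, it can be parsed mechanically. A hedged sketch: the regular expression below assumes the `name(value)` component format shown in the example; MuninnDB's exact grammar may differ, and `parseWhy` is an illustrative helper, not part of any SDK.

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// parseWhy extracts the name(value) scoring components from a Why
// string, e.g. "BM25(0.78) + hebbian_boost(0.16)". The format is
// inferred from the example above, not from a published grammar.
func parseWhy(why string) map[string]float64 {
	re := regexp.MustCompile(`(\w+)\(([\d.]+)\)`)
	components := make(map[string]float64)
	for _, m := range re.FindAllStringSubmatch(why, -1) {
		v, err := strconv.ParseFloat(m[2], 64)
		if err != nil {
			continue
		}
		components[m[1]] = v
	}
	return components
}

func main() {
	why := "BM25(0.78) + hebbian_boost(0.16) + assoc_depth1(0.06)"
	fmt.Println(parseWhy(why))
}
```

Parsing the breakdown like this can be handy for logging or for alerting when one component (say, hebbian_boost) starts dominating composite scores.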