V3.3 Overview
SuperLocalMemory v3.3 (codename "The Living Brain Evolves") gives your memory a lifecycle. Memories strengthen when used, fade when neglected, compress when idle, and consolidate into reusable patterns --- automatically, locally.
V3.3 builds on the associative memory system introduced in V3.2 with six new capabilities that make SLM self-maintaining. No manual cleanup. No storage bloat. No zombie processes.
| Feature | What It Does | User Benefit |
|---|---|---|
| Adaptive Memory Lifecycle | Memories naturally strengthen with use and fade when neglected | No manual cleanup; configurable retention curves |
| Smart Compression | Embedding precision adapts to memory importance | Up to 32x storage savings for low-priority memories |
| Cognitive Consolidation | Extracts patterns from clusters of related memories | One decision referenced 50 times becomes one reusable insight |
| Pattern Learning (Soft Prompts) | Auto-learned preferences injected into agent context at session start | The system teaches itself what matters to you |
| Hopfield Retrieval (6th Channel) | Vague or partial queries complete themselves | Ask half a question, get the whole answer |
| Process Health | Orphaned SLM processes detected and cleaned automatically | No more zombie workers eating RAM |
| Metric | V3.2 | V3.3 | Change |
|---|---|---|---|
| RAM (Mode A/B) | ~4 GB | ~40 MB | 100x reduction |
| Retrieval channels | 5 | 6 | +1 (Hopfield) |
| MCP tools | 29 | 35 | +6 |
| CLI commands | 21 | 26 | +5 |
| Dashboard tabs | 20 | 23 | +3 |
| API endpoints | 9 | 16 | +7 |
| Tests passing | --- | 2,348 | Full coverage |
Memories now have a natural lifecycle. Frequently accessed memories grow stronger over time. Neglected memories gradually fade. This happens automatically based on your actual usage patterns.
- Configurable retention curves --- set how aggressively memories decay per project or globally
- Access-weighted scoring --- memories you rely on are never lost
- Review before removal --- faded memories are flagged, not silently deleted
- Full audit trail --- see why any memory was retained or marked for decay
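The exact scoring formula isn't documented on this page; a minimal sketch of what access-weighted decay could look like, with a hypothetical `retention_score` function and illustrative constants (half-life, log boost) that are not SLM's actual internals:

```python
import math
import time

def retention_score(last_access_ts, access_count, now=None, half_life_days=30.0):
    """Exponential time decay, boosted by how often the memory was used.

    All names and constants here are illustrative, not SLM's real scoring.
    """
    now = now if now is not None else time.time()
    age_days = max(0.0, (now - last_access_ts) / 86400.0)
    time_factor = 0.5 ** (age_days / half_life_days)   # halves every half_life_days
    use_factor = 1.0 + math.log1p(access_count)        # frequent use slows fading
    return time_factor * use_factor

now = time.time()
fresh = retention_score(now - 5 * 86400, access_count=20, now=now)
stale = retention_score(now - 90 * 86400, access_count=0, now=now)
assert fresh > stale  # heavily used memories outlast neglected ones
```

A scheme like this is what makes "access-weighted" retention possible: a low score flags a memory for review rather than deleting it outright.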
Not all memories deserve the same storage budget. V3.3 automatically adapts embedding precision based on memory importance.
- High-value memories stay at full resolution --- zero quality loss
- Low-priority memories compress up to 32x --- significant storage savings at scale
- Automatic tiering --- compression level adjusts as importance changes
- Lossless for active memories --- only idle, low-access memories are compressed
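The "up to 32x" figure is consistent with 1-bit (sign) quantization of float32 embeddings: 32 bits per dimension become 1. A self-contained sketch of that idea, with hypothetical helper names (not SLM's API):

```python
def binarize(embedding):
    """1-bit quantization: keep only the sign of each dimension.

    float32 uses 32 bits/dim; a sign bit uses 1 bit/dim, which is
    where a 32x storage ratio comes from. Illustrative only.
    """
    return [1 if x >= 0 else 0 for x in embedding]

def hamming_similarity(a, b):
    """Fraction of matching sign bits; a cheap stand-in for cosine similarity."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

v1 = [0.8, -0.1, 0.3, -0.9]
v2 = [0.7, -0.2, 0.4, -0.8]   # nearly the same direction as v1
v3 = [-0.8, 0.1, -0.3, 0.9]   # opposite direction
assert hamming_similarity(binarize(v1), binarize(v2)) == 1.0
assert hamming_similarity(binarize(v1), binarize(v3)) == 0.0
```

Sign bits preserve rough direction, which is why low-priority memories can be compressed this hard while still being findable; high-value memories stay at full precision.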
When you reference the same decision, pattern, or preference across many memories, V3.3 detects the cluster and extracts a single reusable insight.
- Automatic pattern detection across related memories
- One insight instead of fifty references --- cleaner retrieval, less noise
- Source tracing --- every consolidated insight links back to its source memories
- Incremental --- new memories merge into existing patterns naturally
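As a rough sketch of the idea, clusters of related memories collapse into one insight that keeps links to its sources. The tag-overlap grouping below is an illustrative proxy (real consolidation would cluster on embeddings), and every name is hypothetical:

```python
from collections import defaultdict

def consolidate(memories, min_cluster=3):
    """Group memories by a shared tag; emit one insight per large cluster.

    Each insight keeps its source IDs, mirroring the "source tracing"
    guarantee. Tag overlap stands in for embedding-based clustering.
    """
    clusters = defaultdict(list)
    for mem in memories:
        for tag in mem["tags"]:
            clusters[tag].append(mem["id"])
    return [
        {"pattern": tag, "sources": ids}
        for tag, ids in clusters.items()
        if len(ids) >= min_cluster
    ]

memories = [
    {"id": 1, "tags": ["use-postgres"]},
    {"id": 2, "tags": ["use-postgres"]},
    {"id": 3, "tags": ["use-postgres", "ci"]},
    {"id": 4, "tags": ["ci"]},
]
insights = consolidate(memories)
assert insights == [{"pattern": "use-postgres", "sources": [1, 2, 3]}]
```

Retrieval then surfaces the single insight instead of every source memory, which is the "one insight instead of fifty references" behavior described above.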
V3.3 learns your preferences over time and injects them into your agent's context at session start. No manual configuration needed.
- Auto-learned from usage --- the system observes what you care about
- Session-start injection --- your agent already knows your style before you type
- Fully transparent --- view and edit all learned patterns via CLI or dashboard
- Per-project scoping --- different projects can have different learned preferences
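The injection format isn't shown on this page; a minimal sketch of how learned patterns might be rendered into a session-start preamble (the template and field names are assumptions, not SLM's real format):

```python
def build_soft_prompt(patterns, max_items=3):
    """Render the most frequent learned patterns into a context preamble.

    Hypothetical template; SLM's actual injection format is not documented here.
    """
    top = sorted(patterns, key=lambda p: p["count"], reverse=True)[:max_items]
    lines = [f"- {p['text']} (seen {p['count']}x)" for p in top]
    return "User preferences (auto-learned):\n" + "\n".join(lines)

patterns = [
    {"text": "prefers pytest over unittest", "count": 12},
    {"text": "uses 4-space indentation", "count": 30},
]
prompt = build_soft_prompt(patterns)
assert prompt.splitlines()[1] == "- uses 4-space indentation (seen 30x)"
```

Because the rendered text is plain, it stays inspectable and editable, which is what the transparency bullet above promises via the CLI and dashboard.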
A new retrieval channel that handles vague, partial, or incomplete queries. When you can only remember fragments, this channel fills in the gaps.
- Partial query completion --- half a question returns the whole answer
- Noise-tolerant --- typos and fuzzy phrasing still find the right memory
- Complements existing channels --- works alongside keyword, semantic, temporal, graph, and active retrieval
- Automatic --- no special syntax needed; the channel activates when other channels return low-confidence results
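"Hopfield retrieval" suggests a modern Hopfield network, where one softmax-weighted update pulls a partial query toward the closest stored pattern. A minimal sketch of that single update step (the `beta` scale and all names are illustrative assumptions):

```python
import math

def hopfield_retrieve(query, stored, beta=4.0):
    """One modern-Hopfield update: softmax-weighted recall of stored patterns.

    A partial or noisy query is pulled toward the nearest stored memory.
    """
    scores = [beta * sum(q * s for q, s in zip(query, vec)) for vec in stored]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(query)
    return [
        sum(weights[i] * stored[i][d] for i in range(len(stored)))
        for d in range(dim)
    ]

stored = [[1.0, 1.0, -1.0], [-1.0, 1.0, 1.0]]
partial = [1.0, 0.0, 0.0]  # only the first component is remembered
completed = hopfield_retrieve(partial, stored)
assert completed[2] < 0    # missing components filled from the first pattern
```

This is why "half a question" can return the whole answer: the known fragment is enough to select the matching memory, and the update reconstructs the rest.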
SLM background workers sometimes outlive their parent sessions. V3.3 detects and cleans orphaned processes automatically.
- Automatic detection of zombie SLM processes
- Safe cleanup --- only orphaned processes are targeted
- RAM recovery --- reclaims memory from dead workers
- On-demand or scheduled --- run manually or let it happen automatically
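One common way to detect orphans on Linux is that a child whose parent exits is reparented to PID 1. A stdlib-only sketch along those lines (Linux-specific, and the `slm` name match is an assumption about how workers are identified, not SLM's actual rule):

```python
import os
import re

def is_orphaned_worker(stat_line, name_pattern=r"slm"):
    """Decide from a /proc/<pid>/stat line whether a worker was orphaned.

    PPID == 1 plus a matching process name flags it. Illustrative heuristic.
    """
    # stat format: pid (comm) state ppid ...
    m = re.match(r"^\d+ \((?P<comm>.*)\) \S (?P<ppid>\d+) ", stat_line)
    if not m:
        return False
    return int(m.group("ppid")) == 1 and re.search(name_pattern, m.group("comm")) is not None

def find_orphans(proc_root="/proc"):
    """Scan /proc for orphaned workers; return their PIDs (Linux only)."""
    orphans = []
    for entry in os.listdir(proc_root):
        if not entry.isdigit():
            continue
        try:
            with open(f"{proc_root}/{entry}/stat") as fh:
                if is_orphaned_worker(fh.read()):
                    orphans.append(int(entry))
        except OSError:
            continue  # process exited mid-scan
    return orphans

assert is_orphaned_worker("4242 (slm-worker) S 1 4242 1 0 -1")
assert not is_orphaned_worker("4243 (slm-worker) S 900 4242 1 0 -1")
```

Targeting only PPID-1 processes with a matching name is what keeps cleanup safe: live workers still attached to a session are never touched.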
| Command | Description |
|---|---|
| `slm decay` | Run memory lifecycle review; show faded and strengthened memories |
| `slm quantize` | Run smart compression cycle on eligible memories |
| `slm consolidate --cognitive` | Extract patterns from clusters of related memories |
| `slm soft-prompts` | View auto-learned patterns and preferences |
| `slm reap` | Clean orphaned SLM processes |
All existing CLI commands continue to work unchanged. See CLI Reference for the full list.
| Tool | Description |
|---|---|
| `forget` | Mark a memory for decay or immediate removal |
| `quantize` | Trigger smart compression on specific memories |
| `consolidate_cognitive` | Run cognitive consolidation on a memory cluster |
| `get_soft_prompts` | Retrieve current auto-learned patterns |
| `reap_processes` | Clean orphaned background processes |
| `get_retention_stats` | View memory lifecycle statistics |
All 29 existing MCP tools retain their signatures. See MCP Tools for the full list.
| Tab | What It Shows |
|---|---|
| Memory Lifecycle | Retention curves, decay candidates, access frequency heatmaps |
| Compression | Storage savings, compression ratios per memory tier |
| Patterns | Auto-learned preferences, consolidated insights, source tracing |
7 new API endpoints power these tabs. All 9 existing endpoints are unchanged.
All V3.3 features default to OFF. Zero breaking changes. Enable them as you evaluate each capability:
```shell
slm config set decay.enabled true
slm config set quantize.enabled true
slm config set cognitive_consolidation.enabled true
slm config set soft_prompts.enabled true
slm config set hopfield_retrieval.enabled true
slm config set process_health.auto_reap true
```

Or enable them one at a time at your own pace.
V3.3 is a strict superset of V3.2. No migration needed.
- All 29 existing MCP tools retain their signatures
- All 21 existing CLI commands work identically
- All existing API endpoints are unchanged
- Existing retrieval behavior is preserved when new features are disabled
- New database tables are created additively (no schema alterations to existing tables)
- Rollback SQL is provided for every new table
Every V3.3 feature respects the existing three-mode architecture. Mode A remains fully local.
| Feature | Mode A (Local Guardian) | Mode B (Smart Local) | Mode C (Full Power) |
|---|---|---|---|
| Memory lifecycle | Full (pure scoring math) | Full | Full |
| Smart compression | Full (local computation) | Full | Full |
| Cognitive consolidation | Rules-based extraction | Ollama-assisted | Cloud LLM-assisted |
| Soft prompts | Pattern matching | Ollama refinement | Cloud LLM refinement |
| Hopfield retrieval | Enabled with local embeddings | Full | Full |
| Process health | Full (OS-level detection) | Full | Full |
- V3.2 Overview --- Associative memory, auto-invoke, temporal intelligence
- CLI Reference --- Full command list including V3.3 additions
- MCP Tools --- Full tool list including V3.3 additions
- Modes Explained --- Mode A / B / C architecture
- Architecture Overview --- System design
Part of Qualixar | Created by Varun Pratap Bhardwaj
SuperLocalMemory V3 — Your AI Finally Remembers You. 100% local. 100% private. 100% free.