arxiv:2604.04514

SuperLocalMemory V3.3: The Living Brain -- Biologically-Inspired Forgetting, Cognitive Quantization, and Multi-Channel Retrieval for Zero-LLM Agent Memory Systems

Published on Apr 6 · Submitted by Bhardwaj on Apr 17
Abstract

A new local-first agent memory system implements comprehensive cognitive memory processes with enhanced retrieval and forgetting mechanisms, achieving superior performance in zero-LLM settings.

AI-generated summary

AI coding agents operate in a paradox: they possess vast parametric knowledge yet cannot remember a conversation from an hour ago. Existing memory systems store text in vector databases with single-channel retrieval, require cloud LLMs for core operations, and implement none of the cognitive processes that make human memory effective. We present SuperLocalMemory V3.3 ("The Living Brain"), a local-first agent memory system implementing the full cognitive memory taxonomy with mathematical lifecycle dynamics. Building on the information-geometric foundations of V3.2 (arXiv:2603.14588), we introduce five contributions: (1) Fisher-Rao Quantization-Aware Distance (FRQAD) -- a new metric on the Gaussian statistical manifold achieving 100% precision at preferring high-fidelity embeddings over quantized ones (vs 85.6% for cosine), with zero prior art; (2) Ebbinghaus Adaptive Forgetting with lifecycle-aware quantization -- the first mathematical forgetting curve in local agent memory coupled to progressive embedding compression, achieving 6.7x discriminative power; (3) 7-channel cognitive retrieval spanning semantic, keyword, entity graph, temporal, spreading activation, consolidation, and Hopfield associative channels, achieving 70.4% on LoCoMo in zero-LLM Mode A; (4) memory parameterization implementing Long-Term Implicit memory via soft prompts; (5) zero-friction auto-cognitive pipeline automating the complete memory lifecycle. On LoCoMo, V3.3 achieves 70.4% in Mode A (zero-LLM), with +23.8pp on multi-hop and +12.7pp on adversarial. V3.2 achieved 74.8% Mode A and 87.7% Mode C; the 4.4pp gap reflects a deliberate architectural trade-off. SLM V3.3 is open source under the Elastic License 2.0, runs entirely on CPU, with over 5,000 monthly downloads.
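Contribution (2), the Ebbinghaus adaptive forgetting curve, can be sketched as exponential decay with a per-memory stability term that grows on each recall. This is a minimal illustration of the classic curve, not the paper's exact lifecycle dynamics; the function names and the recall boost factor are assumptions.

```python
import math

def retention(elapsed_hours: float, stability_hours: float) -> float:
    """Ebbinghaus exponential forgetting curve: R = exp(-t / S)."""
    return math.exp(-elapsed_hours / stability_hours)

def reinforce(stability_hours: float, boost: float = 2.0) -> float:
    """Each recall strengthens the trace (spaced-repetition style).

    The doubling factor is illustrative, not taken from the paper.
    """
    return stability_hours * boost

# A memory with 24h stability, checked 72h after last access:
r = retention(72, 24)   # ~0.05, a candidate for quantization or forgetting
s = reinforce(24)       # a recall doubles stability to 48h
```

A system like the one described would then trigger progressive embedding compression (or deletion) once retention falls below a tier-specific threshold.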

Community

SuperLocalMemory V3.3 "The Living Brain" — the first unified memory + learning
system for AI agents. Zero-cloud. EU AI Act compliant. And it KEEPS learning.

This is the sequel to our V3 paper (arXiv:2603.14588) — now with learning,
mesh coordination, and lifecycle management built in.

🧠 RETRIEVAL (the brain's eyes)
• 7-channel fusion: semantic + BM25 + temporal + Hopfield + spreading
activation + entity graph + consolidation
• Fisher-Rao information geometry for distance computation
• Sheaf cohomology for contradiction detection across sessions
• ONNX-quantized reranker (local, <80MB, no API calls)
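The multi-channel fusion above can be sketched as a weighted late-fusion over per-channel score maps. The channel names, weights, and IDs below are illustrative assumptions; the actual system may normalize or rerank differently.

```python
def fuse_channels(scores: dict[str, dict[str, float]],
                  weights: dict[str, float]) -> dict[str, float]:
    """Weighted late fusion of per-channel retrieval scores.

    scores:  channel -> {memory_id: score in [0, 1]}
    weights: channel -> fusion weight (assumed to sum to 1)
    """
    fused: dict[str, float] = {}
    for channel, per_memory in scores.items():
        w = weights.get(channel, 0.0)
        for mem_id, s in per_memory.items():
            fused[mem_id] = fused.get(mem_id, 0.0) + w * s
    # Rank memories by fused score, best first.
    return dict(sorted(fused.items(), key=lambda kv: -kv[1]))

ranked = fuse_channels(
    {"semantic": {"m1": 0.9, "m2": 0.4},
     "bm25":     {"m2": 0.8},
     "temporal": {"m1": 0.5}},
    {"semantic": 0.5, "bm25": 0.3, "temporal": 0.2},
)
# m1: 0.5*0.9 + 0.2*0.5 = 0.55 ; m2: 0.5*0.4 + 0.3*0.8 = 0.44
```

A memory missing from a channel simply contributes nothing from it, which lets sparse channels (e.g. entity graph hits) boost rather than gate results.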

📚 LEARNING (the brain that grows)
• Triple-stream learning: tool events (statistical) + LLM observer
(Haiku-driven) + recall learning (PageRank + community detection)
• Behavioral pattern mining across sessions
• Cross-project assertion promotion: facts with 0.8+ confidence in 2+
projects auto-globalize
• Skill evolution via blind verification (+30pp from OpenSpace paper baseline)
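The cross-project assertion promotion rule (0.8+ confidence in 2+ projects) can be sketched as a simple aggregation pass. The data shapes and thresholds below are assumptions made for illustration.

```python
def promote_global(assertions: list[tuple[str, str, float]],
                   min_conf: float = 0.8,
                   min_projects: int = 2) -> set[str]:
    """Promote facts seen with high confidence in enough distinct projects.

    assertions: (fact, project, confidence) tuples.
    Returns the set of facts eligible for the global store.
    """
    seen: dict[str, set[str]] = {}
    for fact, project, conf in assertions:
        if conf >= min_conf:
            seen.setdefault(fact, set()).add(project)
    return {f for f, projects in seen.items() if len(projects) >= min_projects}

facts = [
    ("prefers pytest", "proj-a", 0.9),
    ("prefers pytest", "proj-b", 0.85),
    ("uses tabs",      "proj-a", 0.95),  # only one project: stays local
    ("likes yaml",     "proj-a", 0.6),   # below confidence threshold
]
promote_global(facts)  # → {"prefers pytest"}
```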

🕸️ MESH (the brain that talks)
• SLM Mesh — P2P coordination across AI agent sessions via MCP
• Broadcast + project-scoped messaging, offline queue with 48h TTL
• Shared state, distributed locks, peer discovery
• Works with Claude Code, Cursor, VS Code, Windsurf, Antigravity

♾️ LIFECYCLE (the brain that manages itself)
• Riemannian Langevin lifecycle: active → warm → archive → forget
• Tiered storage with automatic promotion on access
• Entity graph with visual explorer + Leiden community clustering
• Fact consolidation + graph pruning (396K edges managed)
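The tiered lifecycle with promotion on access can be sketched as idle-time demotion along active → warm → archive → forget, with any access resetting to the hot tier. The idle-day cutoffs are illustrative assumptions, not the paper's Riemannian Langevin dynamics.

```python
TIERS = ["active", "warm", "archive", "forget"]

def next_tier(tier: str, idle_days: float,
              thresholds: tuple[float, ...] = (7, 30, 90)) -> str:
    """Demote along active → warm → archive → forget by idle time.

    thresholds are illustrative cutoffs (days idle) for each demotion step.
    """
    i = TIERS.index(tier)
    for cutoff in thresholds[i:]:
        if idle_days >= cutoff:
            i += 1
    return TIERS[min(i, len(TIERS) - 1)]

def on_access(tier: str) -> str:
    """Any access promotes the memory straight back to the hot tier."""
    return "active"

next_tier("active", 45)   # → "archive" (past the 7d and 30d cutoffs)
on_access("archive")      # → "active"
```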

BENCHMARKS
• LoCoMo: 70.4% zero-LLM Mode A for V3.3 (74.8% Mode A / 87.7% Mode C for
V3.2); beats Mem0's 64.2%, Zep, Letta, Supermemory
• 8,329 npm downloads/month, 5,950 PyPI downloads/month
• 3,750+ atomic facts ingested per active user
• 1,730+ tests, 90%+ coverage

INSTALL:
npx superlocalmemory
pip install superlocalmemory

LINKS:
• Paper: https://arxiv.org/abs/2604.04514
• Repo: https://github.com/qualixar/superlocalmemory
• Docs: https://superlocalmemory.com
• Current version: v3.4.13 (we've shipped 10+ releases since the paper —
Neural Glass dashboard, skill evolution engine, cloud backup, and 3 novel
IPs per release)

This is agent memory that actually REMEMBERS, LEARNS, and COORDINATES. Would
love discussion on: (1) how you're solving cross-session memory today,
(2) whether behavioral pattern mining is on anyone's roadmap, (3) P2P mesh
vs centralized state — what's winning for you?


