SuperLocalMemory V3: Information-Geometric Foundations for Zero-LLM Enterprise Agent Memory
Abstract
Mathematical foundations for AI agent memory are established through information-geometric, sheaf-theoretic, and stochastic-dynamical approaches, enabling improved retrieval, lifecycle management, and contradiction detection.
Persistent memory is a central capability for AI agents, yet the mathematical foundations of memory retrieval, lifecycle management, and consistency remain unexplored. Current systems employ cosine similarity for retrieval and heuristic decay for salience, and provide no formal contradiction detection. We establish such foundations through three contributions. First, a retrieval metric derived from the Fisher information structure of diagonal Gaussian families that satisfies the Riemannian metric axioms, is invariant under sufficient statistics, and is computable in O(d) time. Second, a memory lifecycle formulated as Riemannian Langevin dynamics, with existence and uniqueness of the stationary distribution proven via the Fokker-Planck equation, replacing hand-tuned decay with principled convergence guarantees. Third, a cellular sheaf model in which non-trivial first cohomology classes correspond precisely to irreconcilable contradictions across memory contexts. On the LoCoMo benchmark, the mathematical layers yield +12.7 percentage points over engineering baselines across six conversations, reaching +19.9 pp on the most challenging dialogues. A four-channel retrieval architecture achieves 75% accuracy without cloud dependency; cloud-augmented results reach 87.7%. A zero-LLM configuration satisfies EU AI Act data sovereignty requirements by architectural design. To our knowledge, this is the first work establishing information-geometric, sheaf-theoretic, and stochastic-dynamical foundations for AI agent memory systems.
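As a worked illustration of the first contribution, the sketch below computes a Fisher-Rao geodesic distance between diagonal Gaussian memory representations in O(d) time. It relies on the standard closed form for the univariate Gaussian family (which is isometric to a scaled hyperbolic half-plane), combined across dimensions with the product-manifold metric; the function name, the (mean, standard deviation) parameterization, and the confidence reading of the variances are illustrative assumptions rather than the paper's actual interface.

```python
import numpy as np

def fisher_rao_diag_gaussian(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao geodesic distance between two diagonal Gaussians.

    Per dimension, the univariate Gaussian family with the Fisher metric is
    isometric to a scaled hyperbolic half-plane, giving the closed form
        d_i = sqrt(2) * arccosh(1 + ((mu1-mu2)^2/2 + (sigma1-sigma2)^2)
                                    / (2*sigma1*sigma2)).
    The diagonal (product-manifold) distance combines the d_i in quadrature,
    so the whole computation is O(d).
    """
    mu1, sigma1 = np.asarray(mu1, float), np.asarray(sigma1, float)
    mu2, sigma2 = np.asarray(mu2, float), np.asarray(sigma2, float)
    arg = 1.0 + (0.5 * (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2) / (2.0 * sigma1 * sigma2)
    per_dim = np.sqrt(2.0) * np.arccosh(arg)
    return float(np.linalg.norm(per_dim))

# The same mean offset counts for more between low-variance (confident)
# representations than between diffuse ones.
query_mu = np.zeros(4)
print(fisher_rao_diag_gaussian(query_mu, np.full(4, 0.1), query_mu + 0.2, np.full(4, 0.1)))
print(fisher_rao_diag_gaussian(query_mu, np.full(4, 1.0), query_mu + 0.2, np.full(4, 1.0)))
```

The second call returns a much smaller distance than the first, which is the sense in which such a metric is confidence-weighted: the same disagreement is penalized more heavily between confident (low-variance) memories than between diffuse ones.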
Community
The first AI agent memory system to report LoCoMo results without any cloud dependency: 74.8% (Mode A, data stays local) and 87.7% (Mode C, full power).
Three mathematical contributions, each a first in agent memory:
- Fisher-Rao geodesic distance for retrieval: replaces cosine similarity with confidence-weighted distance on statistical manifolds
- Sheaf cohomology for consistency: detects global contradictions algebraically (H¹ = 0), replacing O(n²) pairwise checking (see the sketch after this list)
- Riemannian Langevin dynamics for lifecycle: self-organizing memory states with convergence guarantees (see the sketch after this list)
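For the consistency layer, the following sketch uses the standard machinery of cellular sheaves on graphs: assemble the coboundary map δ: C⁰ → C¹ from the restriction maps, read off dim H¹ = dim C¹ - rank δ, and test whether an observed edge-wise disagreement lies in im δ (its class in H¹ is zero, so it can be explained away by reinterpreting the vertex data) or not (an irreconcilable contradiction). Stalk dimensions, identity restriction maps, and function names are assumptions for illustration; the paper's exact sheaf construction is not reproduced here.

```python
import numpy as np

def coboundary(edges, restrictions, stalk_dim):
    """Assemble the sheaf coboundary delta: C^0 -> C^1 over a graph.

    edges: oriented (u, v) vertex pairs; restrictions[(u, v)] = (F_u, F_v)
    are the restriction maps from the two vertex stalks into the shared
    edge stalk (all stalks are R^stalk_dim here for simplicity).
    """
    n_vertices = 1 + max(max(u, v) for u, v in edges)
    delta = np.zeros((len(edges) * stalk_dim, n_vertices * stalk_dim))
    for k, (u, v) in enumerate(edges):
        F_u, F_v = restrictions[(u, v)]
        rows = slice(k * stalk_dim, (k + 1) * stalk_dim)
        delta[rows, u * stalk_dim:(u + 1) * stalk_dim] = F_u
        delta[rows, v * stalk_dim:(v + 1) * stalk_dim] = -np.asarray(F_v)
    return delta

def h1_dimension(delta):
    """On a graph H^1 = coker(delta), so dim H^1 = dim C^1 - rank(delta)."""
    return delta.shape[0] - np.linalg.matrix_rank(delta)

def reconcilable(delta, disagreement, tol=1e-8):
    """A disagreement cochain y in C^1 is reconcilable iff y lies in im(delta),
    i.e. its cohomology class [y] in H^1 vanishes."""
    x, *_ = np.linalg.lstsq(delta, disagreement, rcond=None)
    return bool(np.linalg.norm(delta @ x - disagreement) < tol)

# Three memory contexts linked in a cycle, scalar stalks, identity restrictions.
edges = [(0, 1), (1, 2), (2, 0)]
restrictions = {e: (np.eye(1), np.eye(1)) for e in edges}
delta = coboundary(edges, restrictions, stalk_dim=1)

print(h1_dimension(delta))                            # 1: the cycle can carry an obstruction
print(reconcilable(delta, np.array([1., -1., 0.])))   # True: trivial class in H^1
print(reconcilable(delta, np.array([1., 1., 1.])))    # False: irreconcilable contradiction
```

For the lifecycle layer, a minimal sketch of Riemannian Langevin dynamics follows: an Euler-Maruyama discretization preconditioned by a constant metric tensor G, in which case the metric-derivative correction term vanishes and the stationary density is proportional to exp(-U). The double-well potential and the diagonal G standing in for a confidence-based metric are illustrative assumptions; the paper's potential, metric, and convergence analysis are not reproduced here.

```python
import numpy as np

def riemannian_langevin(grad_U, G, x0, step=1e-3, n_steps=20_000, seed=0):
    """Euler-Maruyama discretization of Riemannian Langevin dynamics
        dx = -G^{-1} grad U(x) dt + sqrt(2) G^{-1/2} dW_t
    with a constant metric tensor G, so the usual metric-derivative
    correction term is zero and the stationary density is prop. to exp(-U)."""
    rng = np.random.default_rng(seed)
    G_inv = np.linalg.inv(G)
    w, V = np.linalg.eigh(G_inv)                 # symmetric square root of G^{-1}
    G_inv_sqrt = V @ np.diag(np.sqrt(w)) @ V.T
    x = np.asarray(x0, float)
    samples = np.empty((n_steps, x.size))
    for t in range(n_steps):
        drift = -step * (G_inv @ grad_U(x))
        noise = np.sqrt(2.0 * step) * (G_inv_sqrt @ rng.standard_normal(x.size))
        x = x + drift + noise
        samples[t] = x
    return samples

# Toy "lifecycle" potential with two basins at x1 = +/-1, standing in for
# retained vs. archived memory states (illustrative only).
grad_U = lambda x: np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])
G = np.diag([1.0, 4.0])                          # hypothetical confidence-based metric
traj = riemannian_langevin(grad_U, G, x0=[1.0, 0.0])
print(traj[-3:].round(2))                        # samples concentrate near the two basins
```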
Mode A is EU AI Act compliant by architecture: data never leaves the device. All three techniques are open source under MIT and designed to be adopted by any memory system.
Project page
https://superlocalmemory.com
GitHub repo
https://github.com/qualixar/superlocalmemory
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- BMAM: Brain-inspired Multi-Agent Memory Framework (2026)
- Field-Theoretic Memory for AI Agents: Continuous Dynamics for Context Preservation (2026)
- Choosing How to Remember: Adaptive Memory Structures for LLM Agents (2026)
- FadeMem: Biologically-Inspired Forgetting for Efficient Agent Memory (2026)
- SmartSearch: How Ranking Beats Structure for Conversational Memory Retrieval (2026)
- MemWeaver: Weaving Hybrid Memories for Traceable Long-Horizon Agentic Reasoning (2026)
- Selective Memory for Artificial Intelligence: Write-Time Gating with Hierarchical Archiving (2026)