Memory Transformers: Neural Memory Banks for Textual Memory

Author: — Dec 2025

Abstract

This paper introduces Memory Transformers, a class of compact neural systems that store and retrieve textual memories directly in trainable parameter banks. Memory Transformers combine lightweight sequence encoders with parametric memory slots to realize an end-to-end, locally runnable memory service that does not depend on external embedding providers or vector databases. The architecture supports write operations that sculpt slot vectors via short gradient loops, retrieval via cosine similarity augmented by lexical overlap and synaptic-strength weighting, and dynamic capacity growth through neurogenesis-like slot allocation. We describe the design and implementation of two representative instantiations: a transformer-inspired encoder paired with trainable memory slots, and a convolutional encoder with an explicit memory bank. The paper articulates evaluation protocols suitable for qualitative and quantitative assessment, presents illustrative demonstration experiments, and discusses the trade-offs between interpretability, privacy, and retrieval fidelity. We conclude by situating Memory Transformers within broader research on learned memory systems and proposing avenues for future work. The source code is available at github.com/Pro-GenAI/Memory-Transformer.
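
To make the retrieval mechanism summarized above concrete, the following is a minimal sketch (not the paper's implementation) of scoring a query against parametric memory slots by blending cosine similarity, lexical overlap, and a per-slot synaptic strength. The slot vectors, stored texts, strength values, and mixing weights are illustrative assumptions, not quantities taken from the paper.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between a query embedding and a slot vector.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def lexical_overlap(query: str, memory_text: str) -> float:
    # Jaccard overlap between the token sets of the query and a stored memory.
    q, m = set(query.lower().split()), set(memory_text.lower().split())
    return len(q & m) / max(len(q | m), 1)

def score_slots(query_vec, query_text, slot_vecs, slot_texts, strengths,
                w_cos=0.6, w_lex=0.3, w_syn=0.1):
    """Rank memory slots by a weighted blend of the three retrieval signals.

    The weights w_cos, w_lex, and w_syn are hypothetical; the paper does not
    specify how the signals are combined.
    """
    scores = []
    for vec, text, strength in zip(slot_vecs, slot_texts, strengths):
        s = (w_cos * cosine(query_vec, vec)
             + w_lex * lexical_overlap(query_text, text)
             + w_syn * strength)
        scores.append(s)
    return np.argsort(scores)[::-1]  # indices of best-matching slots first

In such a scheme, the synaptic-strength term lets frequently reinforced slots outrank otherwise similar ones, while the lexical term guards against embedding drift for rare tokens; the exact weighting used by Memory Transformers is described in the full paper.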

Keywords: Artificial Intelligence, AI, Large Language Models, LLM, LLMs, neural memory, transformers, information retrieval, Generative AI
