Understanding Memory-related Threats and Vulnerabilities in Large Language Models
Abstract
Memory capabilities in large language models (LLMs) represent a transformative advance, enabling contextual continuity, personalization, and adaptive learning across interactions. However, these capabilities introduce novel security vulnerabilities that extend beyond traditional concerns. This article examines the security implications of memory-enabled LLMs, categorizing architectural approaches and identifying distinct vulnerability classes, including temporal prompt injection, information persistence, and memory poisoning. Through documented case studies and empirical evidence, the article illustrates how these vulnerabilities manifest in production environments, leading to data leakage, system manipulation, and knowledge corruption. The article proposes comprehensive security frameworks incorporating memory segregation, temporal constraints, bidirectional filtering, differential privacy, and advanced auditing mechanisms. As LLMs evolve from stateless tools into persistent assistants, security paradigms must expand beyond traditional boundaries to address the entire memory lifecycle and ensure that these systems remain both functional and secure in sensitive operational contexts.