Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
ReMe is an open-source memory management framework for AI agents that solves two of the most persistent problems in agentic systems: the limited context window that causes agents to forget early conversation content, and the stateless session model that prevents agents from recalling previous interactions. Developed by the agentscope-ai team and licensed under Apache 2.0, ReMe gives agents persistent, searchable memory that survives across sessions and automatically manages information density through intelligent summarization.

## Why Agents Forget

Standard LLM-based agents operate within a fixed context window. As conversations grow, early messages get dropped, performance degrades, or costs spike from long-context fees. Worse, each new session starts completely blank: the agent has no memory of previous conversations, user preferences, or completed tasks. This forces users to re-explain context every session and prevents agents from learning from experience.

ReMe addresses both failure modes with a dual-architecture approach that gives developers two distinct memory systems to match their use case.

## File-Based ReMe

The file-based system treats agent memory as plain-text Markdown files stored in a `.reme/` directory. Daily conversation summaries are written to `memory/YYYY-MM-DD.md` files, with long-term distilled information persisted in `MEMORY.md`. This design prioritizes transparency: developers and users can read, edit, and version-control their agent's memories using standard tools. There are no opaque database records or proprietary formats.

When a conversation approaches the configured context limit, ReMe automatically compacts older exchanges into a structured summary while preserving the most recent turns verbatim for immediate continuity. A file watcher monitors the `.reme/` directory and asynchronously updates a local search index as memories change.
## Vector-Based ReMe

The vector-based system manages three distinct memory categories: personal memory stores user preferences and biographical details, task (procedural) memory captures patterns from completed workflows, and tool memory records successful usage patterns for registered tools. All three categories are searchable via hybrid retrieval combining vector similarity and BM25 keyword matching, enabling the agent to surface relevant memories even when the query phrasing differs from how the memory was originally recorded.

## ReMeCli Terminal Agent

ReMe ships with ReMeCli, a built-in terminal assistant that demonstrates the file-based memory system in action. Users interact through a natural conversation interface with access to file operations, command execution, code running, and memory search tools. System commands control the session: `/compact` triggers manual memory summarization, `/new` starts a fresh session while preserving history, and `/clear` removes memory entirely. The CLI provides a practical entry point for developers evaluating the framework before integrating it into custom agent pipelines.

## Installation and Integration

Installation requires a single pip command: `pip install -U reme-ai`. The framework supports multiple storage backends for embeddings and persistence, including SQLite, Chroma, and local file stores. Integration into existing agent systems is designed to be lightweight, requiring configuration of the memory backend and token thresholds rather than significant architectural changes.

## Architecture: ReAct with Memory Tools

ReMe implements the memory layer as a set of tools compatible with ReAct-style agents. The agent sees memory operations as regular tool calls: `memory_search` retrieves relevant past information, while write and read tools manage the underlying memory files. This design means ReMe can be layered on top of any ReAct-compatible agent framework without requiring changes to the core reasoning loop.
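Hybrid retrieval of the kind the vector-based system uses can be approximated by fusing a dense similarity score with a BM25 score. The sketch below is a generic illustration of that technique, not ReMe's implementation: the 50/50 weighting, the tiny in-memory index, and the toy letter-frequency `toy_embed` (standing in for a real embedding model) are all assumptions.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Classic BM25 over pre-tokenized documents."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def toy_embed(text):
    # toy "embedding": letter-frequency vector (stand-in for a real model)
    v = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - 97] += 1
    return v

def hybrid_search(query, memories, embed=toy_embed, alpha=0.5):
    """Blend vector similarity with normalized BM25 keyword matching.
    `embed` is any text -> vector function; alpha weights the two signals."""
    docs = [m.lower().split() for m in memories]
    bm25 = bm25_scores(query.lower().split(), docs)
    top = max(bm25) or 1.0
    qv = embed(query)
    fused = [alpha * cosine(qv, embed(m)) + (1 - alpha) * (s / top)
             for m, s in zip(memories, bm25)]
    return sorted(zip(memories, fused), key=lambda x: -x[1])
```

Because the keyword and vector signals are fused, a memory can rank highly either by exact term overlap or by semantic closeness, which is what lets retrieval succeed when the query is phrased differently from the stored memory.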
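The memory-as-tools architecture can be illustrated with a minimal registry in which `memory_search` is just another callable the agent's ReAct loop can invoke. The dispatch shape, the `memory_write` helper, and the naive keyword matcher below are hypothetical; ReMe's real tool schemas may differ.

```python
# Minimal tool registry: memory operations look like any other tool call.
MEMORIES = [
    "User prefers concise answers.",
    "Project builds with `make release`.",
]

def memory_search(query: str) -> list[str]:
    # naive keyword match standing in for an indexed memory search
    terms = query.lower().split()
    return [m for m in MEMORIES if any(t in m.lower() for t in terms)]

def memory_write(note: str) -> str:
    MEMORIES.append(note)
    return "ok"

TOOLS = {"memory_search": memory_search, "memory_write": memory_write}

def dispatch(tool_call: dict):
    """A ReAct loop emits tool calls like
    {"tool": "memory_search", "args": {"query": "..."}};
    the host just routes them, with no special-casing for memory."""
    return TOOLS[tool_call["tool"]](**tool_call["args"])
```

Because memory is reached through the same dispatch path as every other tool, the reasoning loop itself never needs to change, which is what makes the layer portable across ReAct-compatible frameworks.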
## Comparison with Native Context Management

Most agents manage context by simply truncating or summarizing the conversation in the system prompt. ReMe differs by making memory a first-class, externalized concern: memories are stored outside the LLM context, retrieved on demand, and updated incrementally. This allows the agent's effective memory to scale indefinitely without proportionally increasing per-request token costs.

## Community and Status

ReMe has accumulated 1,300 GitHub stars since its launch and is actively maintained by the agentscope-ai team. With Apache 2.0 licensing and straightforward pip installation, it is positioned as a drop-in memory layer for production agent systems. The project is part of a broader ecosystem alongside AgentScope, the team's larger agent orchestration framework.