Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
memU is an open-source memory framework by NevaMind AI designed to give AI agents persistent, structured memory for 24/7 autonomous operation. Rather than treating each conversation as an isolated session, memU continuously monitors agent interactions, extracts structured knowledge, and proactively prepares context for future tasks. The framework uses a three-layer hierarchical architecture (Resource, Item, Category) that treats memory like a file system, enabling both reactive queries and proactive context assembly. With over 10,000 GitHub stars and support for multiple LLM providers, memU addresses one of the most critical gaps in current agent infrastructure: long-term memory that actually works.

## Why Agent Memory Matters Now

The AI agent landscape in 2026 has evolved far beyond simple chatbots. Agents now manage email, monitor markets, schedule meetings, and execute multi-step workflows autonomously. But most agent frameworks treat each interaction as stateless. When an agent helps you draft an email today, it has no memory of your communication style, preferred contacts, or ongoing projects from yesterday. This forces users into repetitive context-setting that undermines the promise of autonomous operation.

memU solves this by providing a persistent memory layer that sits between the agent and its LLM backbone. Every interaction generates structured knowledge that is categorized, indexed, and made available for future reference. The system does not just remember facts; it learns patterns, preferences, and intentions over time.

## Dual-Agent Architecture

The core innovation in memU is its dual-agent model. A Main Agent handles user-facing tasks and query responses as usual. In parallel, a MemU Bot continuously monitors all interactions, extracting insights and updating the memory hierarchy. This separation ensures that memory operations never block or slow down primary agent tasks.
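The article does not show memU's internals, but the observe/extract/categorize/store separation can be sketched with a background worker thread and a queue. This is a minimal illustration of the pattern, not memU's actual API; every name below (`memory_queue`, `extract_insight`, `memu_bot_loop`, `memory_store`) is hypothetical.

```python
import queue
import threading

# Illustrative sketch of the dual-agent pattern: the Main Agent
# enqueues raw interactions and keeps responding, while a background
# "MemU Bot" thread extracts and stores structured knowledge.
# All names here are hypothetical, not memU's actual API.

memory_queue = queue.Queue()
memory_store: dict[str, list[str]] = {}


def extract_insight(interaction: str) -> tuple[str, str]:
    """Toy stand-in for LLM-based knowledge extraction:
    pick a category and distill the interaction into an insight."""
    category = "email" if "email" in interaction else "general"
    return category, interaction.strip()


def memu_bot_loop() -> None:
    """Background loop: observe -> extract -> categorize -> store."""
    while True:
        interaction = memory_queue.get()
        if interaction is None:  # shutdown sentinel
            memory_queue.task_done()
            break
        category, insight = extract_insight(interaction)
        memory_store.setdefault(category, []).append(insight)
        memory_queue.task_done()


bot = threading.Thread(target=memu_bot_loop, daemon=True)
bot.start()

# The Main Agent hands interactions off without waiting on them.
memory_queue.put("Draft an email to Alice about the Q3 report")
memory_queue.put(None)
memory_queue.join()  # only for this demo; the agent would not block here

print(memory_store)
```

Because the main thread only enqueues and returns, memory extraction latency never appears on the user-facing path, which is the property the dual-agent split is meant to guarantee.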
The MemU Bot operates on a background loop: observe interactions, extract structured knowledge, categorize and store insights, and prepare proactive recommendations. When the Main Agent needs context for a new task, the MemU Bot has already assembled relevant memory items based on detected intent patterns.

## Three-Layer Hierarchical Memory

The memory system is organized into three layers, each serving a distinct purpose. The Resource Layer provides access to original data sources and handles background monitoring of external information. The Item Layer stores targeted facts, preferences, and extracted insights for real-time retrieval. The Category Layer maintains high-level summaries and automatically assembles contextual overviews across related memory items.

This hierarchy enables efficient memory access at different granularity levels. Quick lookups retrieve specific facts from the Item Layer. Contextual understanding draws from Category-level summaries. Deep reference traces back to original Resources when precision is required.

## Proactive Intelligence

memU's most distinctive feature is proactive context loading. Traditional memory systems are reactive: they respond to explicit queries. memU anticipates what context will be needed based on patterns in user behavior and current task signals. If you typically review market data before your morning meeting, memU pre-loads relevant financial summaries. If an incoming email matches a pattern from previous urgent communications, memU surfaces related context before you ask.

This proactive approach dramatically reduces the back-and-forth typically required to bring an AI agent up to speed on ongoing work. The agent effectively maintains a running understanding of your priorities, preferences, and active projects.

## Token Cost Reduction

One of memU's practical benefits is reduced LLM token consumption.
Without persistent memory, agents must reconstruct context from scratch for each session, often consuming thousands of tokens on repeated context-setting. memU's structured memory allows agents to load precisely the context relevant to each task, avoiding redundant processing. Insight caching means frequently referenced knowledge is pre-summarized rather than re-extracted from raw data.

## Multi-Provider Support

The framework supports OpenAI (the default), Anthropic Claude via OpenRouter, Alibaba Qwen, and Voyage AI for embeddings. Custom LLM and embedding providers can be configured through extensible profiles. This flexibility lets teams choose their preferred models without being locked into a single provider.

## Practical Use Cases

memU demonstrates its value across three documented scenarios. Information recommendation agents track reading patterns and surface relevant content proactively. Email management agents learn communication styles, draft context-aware replies, and detect scheduling conflicts. Trading agents monitor market context and investment behavior to anticipate portfolio decisions.

## Limitations

The framework requires Python 3.13+, which may limit adoption in environments running older Python versions. The proactive intelligence features depend on sufficient interaction history to detect patterns, meaning performance improves significantly after an initial learning period. Self-hosted deployment requires managing PostgreSQL with pgvector for production use. Documentation for advanced configuration and custom provider integration could be more comprehensive.
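As a closing illustration, the Resource/Item/Category hierarchy described earlier can be sketched as three small data classes, with lookups at each granularity level. The layer names come from this article, but all fields and methods below are assumptions for illustration, not memU's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a three-layer memory hierarchy treated like
# a file system. Class names follow the layers described above;
# fields and methods are hypothetical, not memU's actual schema.

@dataclass
class Resource:
    """Original data source, e.g. a full conversation transcript."""
    uri: str
    raw: str

@dataclass
class Item:
    """A targeted fact or preference extracted from a resource."""
    key: str
    value: str
    source: Resource

@dataclass
class Category:
    """High-level grouping that summarizes related items."""
    name: str
    items: list[Item] = field(default_factory=list)

    def summary(self) -> str:
        # Assemble a contextual overview across member items; a real
        # system would cache an LLM-generated summary here instead of
        # re-deriving it, which is where the token savings come from.
        return "; ".join(f"{i.key}={i.value}" for i in self.items)

# Quick lookups hit the Item layer; contextual understanding draws on
# Category summaries; deep reference traces back to the Resource.
chat = Resource(uri="chat/2024-05-01", raw="User: I prefer morning meetings...")
pref = Item(key="meeting_time", value="morning", source=chat)
scheduling = Category(name="scheduling", items=[pref])

print(scheduling.summary())  # category-level overview
print(pref.value)            # item-level fact
print(pref.source.uri)       # trace back to the original resource
```

The three access paths at the bottom correspond to the three granularity levels the article describes: summary assembly, real-time fact retrieval, and precise traceback to the original data.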