Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
Letta Code is a memory-first coding agent built on the Letta API that maintains persistent context across coding sessions. Unlike stateless AI coding assistants that forget everything when a session ends, Letta Code creates a single persisted agent that learns over time and is portable across different LLM providers. With 1,500 GitHub stars and Apache 2.0 licensing, it represents a fundamentally different approach to AI-assisted development.

## The Problem with Stateless Coding Agents

Current AI coding assistants operate in isolated sessions. Each time a developer opens a new conversation, the AI starts from scratch. It does not remember the codebase architecture, past debugging sessions, coding style preferences, or project-specific conventions. Developers end up repeating context, re-explaining patterns, and losing the accumulated understanding that a human pair programmer would naturally retain.

Letta Code addresses this by making memory a first-class feature rather than an afterthought. The agent persists between sessions, accumulating knowledge about the developer's projects, preferences, and patterns.

## Core Architecture

### Persistent Agent Model

Each Letta Code installation creates a single agent that lives across all coding sessions. This agent is backed by the Letta API, which provides structured memory management: core memory for essential context and archival memory for long-term knowledge storage. When the developer switches between projects or takes a break, the agent retains everything it has learned.

The persistence layer is separate from the LLM provider, which means the same accumulated knowledge works regardless of which model is powering the responses. A developer can start with Claude Sonnet, switch to GPT-5.2-Codex for a specific task, and move to Gemini 3 Pro later, all while maintaining the same agent memory.
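The core/archival split can be sketched conceptually. The class and method names below are hypothetical, illustrating the two-tier idea rather than the actual Letta API:

```typescript
// Illustrative two-tier agent memory, loosely modeled on the core/archival
// split described above. All names here are hypothetical, not Letta's API.

interface MemoryEntry {
  text: string;
  timestamp: number;
}

class AgentMemory {
  // Core memory: small, always included in the prompt context.
  private core: Map<string, string> = new Map();
  // Archival memory: unbounded, searched on demand.
  private archive: MemoryEntry[] = [];

  setCore(key: string, value: string): void {
    this.core.set(key, value);
  }

  archiveFact(text: string): void {
    this.archive.push({ text, timestamp: Date.now() });
  }

  // Naive keyword match standing in for real semantic retrieval.
  recall(query: string): string[] {
    return this.archive
      .filter((e) => e.text.toLowerCase().includes(query.toLowerCase()))
      .map((e) => e.text);
  }

  // Serializing both tiers is what lets the agent survive between sessions.
  toJSON(): string {
    return JSON.stringify({
      core: [...this.core.entries()],
      archive: this.archive,
    });
  }
}

const memory = new AgentMemory();
memory.setCore("style", "prefer strict TypeScript, avoid any");
memory.archiveFact("the 2024-Q3 auth bug was caused by a stale JWT cache");
console.log(memory.recall("jwt"));
```

The design point is that core memory stays compact enough to ride along in every prompt, while archival memory grows without bound and is consulted only when relevant.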
### Model Portability

Letta Code supports multiple LLM providers, including Claude Sonnet 4.5 and Opus 4.5, GPT-5.2-Codex, Gemini 3 Pro, and GLM-4.7. The /connect command allows users to configure custom LLM endpoints, enabling integration with self-hosted models or specialized providers. This model-agnostic approach prevents vendor lock-in and lets developers choose the best model for each task.

### Skill Learning

The /skill command enables the agent to learn new capabilities from task trajectories. When a developer completes a complex workflow, the agent can extract and memorize the pattern for future reuse, transforming repetitive tasks into one-time teaching moments. The /remember command provides a more direct way to inject knowledge, allowing developers to explicitly tell the agent about project conventions, preferred patterns, or domain-specific information.

### Thread Management

The /clear command starts a new conversation thread while preserving all agent memory. This provides a clean conversational context without losing accumulated knowledge, similar to starting a new page in a notebook while keeping access to all previous pages.

## Supported Models

Letta Code is designed to work with the latest generation of LLMs. Officially supported models include Claude Sonnet 4.5 and Opus 4.5 from Anthropic, GPT-5.2-Codex from OpenAI, Gemini 3 Pro from Google, and GLM-4.7 from Zhipu AI. Additional models can be connected through custom endpoints.

## Deployment Options

Letta Code is available as an npm package, installed globally with `npm install -g @letta-ai/letta-code`. For users who prefer containerized deployment, Docker support is available through the LETTA_BASE_URL configuration. An Arch Linux AUR package is also maintained for Linux users.

## Release Cadence

The project maintains an active release schedule with 80 releases to date. The latest release, v0.15.3, was published on February 17, 2026.
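The model portability described above rests on keeping agent memory independent of the model backend. A minimal sketch of that decoupling, using hypothetical interfaces and stub providers rather than Letta's actual code:

```typescript
// Conceptual sketch: the agent's memory outlives any single model backend.
// Provider names echo those mentioned in the article; the interfaces and
// stub implementations are invented for illustration.

interface LLMProvider {
  name: string;
  complete(prompt: string): string;
}

class Agent {
  private history: string[] = []; // persists across provider swaps

  constructor(private provider: LLMProvider) {}

  switchProvider(next: LLMProvider): void {
    this.provider = next; // memory (history) is untouched
  }

  ask(prompt: string): string {
    this.history.push(prompt);
    return this.provider.complete(prompt);
  }

  recallCount(): number {
    return this.history.length;
  }
}

// Stub providers standing in for real endpoints.
const sonnet: LLMProvider = {
  name: "claude-sonnet-4.5",
  complete: (p) => `[sonnet] ${p}`,
};
const codex: LLMProvider = {
  name: "gpt-5.2-codex",
  complete: (p) => `[codex] ${p}`,
};

const agent = new Agent(sonnet);
agent.ask("summarize the repo layout");
agent.switchProvider(codex); // same agent, same memory, new model
agent.ask("refactor the auth module");
console.log(agent.recallCount()); // both prompts retained across the swap
```

In the real system the provider boundary is an HTTP endpoint (configurable via /connect) rather than an in-process interface, but the invariant is the same: swapping the model never touches accumulated memory.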
This rapid iteration pace reflects active development and responsiveness to user feedback.

## Practical Benefits

For individual developers, Letta Code reduces the friction of context-switching between projects. The agent remembers each project's structure, dependencies, and conventions without requiring explicit re-briefing. For teams, the persistent memory means onboarding a new developer can be partially automated by sharing agent knowledge about codebase patterns and architectural decisions.

Debugging sessions benefit particularly from persistent memory. When a bug resurfaces or a related issue appears, the agent can draw on past debugging context to accelerate resolution. Code review suggestions become more consistent over time as the agent learns the team's standards and preferences.

## Limitations

The memory-first approach introduces dependencies on the Letta API infrastructure: while the agent runs locally, memory management requires the Letta backend service. The project is still in active development, with frequent breaking changes between minor versions. Model portability works well for text generation tasks, but specialized capabilities like code completion may vary significantly between providers.

The 1,500-star count reflects a relatively early-stage project compared to more established coding tools, though the 29 contributors and 80 releases indicate healthy development momentum.

## Technical Context

Letta Code is built primarily in TypeScript, with the Letta API providing the memory and agent management layer. The separation between the coding interface and the memory backend allows each component to evolve independently. The Apache 2.0 license permits commercial use and modification.
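The separation between the coding interface and the memory backend can be illustrated with a hypothetical interface boundary. None of these names come from the Letta codebase; the point is only that the agent depends on a contract, so a remote service and a local stub are interchangeable:

```typescript
// Hypothetical sketch of the interface/backend separation: the coding agent
// depends only on a MemoryBackend contract. A real deployment would satisfy
// it with a client talking to the Letta service (e.g., at LETTA_BASE_URL);
// tests can satisfy it with an in-memory stub. All names are illustrative.

interface MemoryBackend {
  save(key: string, value: string): void;
  load(key: string): string | undefined;
}

// Local stub implementation of the same contract.
class InMemoryBackend implements MemoryBackend {
  private store = new Map<string, string>();
  save(key: string, value: string): void {
    this.store.set(key, value);
  }
  load(key: string): string | undefined {
    return this.store.get(key);
  }
}

class CodingAgent {
  constructor(private memory: MemoryBackend) {}

  noteConvention(project: string, convention: string): void {
    this.memory.save(`convention:${project}`, convention);
  }

  conventionFor(project: string): string | undefined {
    return this.memory.load(`convention:${project}`);
  }
}

const codingAgent = new CodingAgent(new InMemoryBackend());
codingAgent.noteConvention("web-app", "2-space indent, strict null checks");
console.log(codingAgent.conventionFor("web-app"));
```

Keeping the boundary this narrow is what lets the interface and the memory layer evolve independently, as the article notes.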