Open Source
Explore the latest AI open-source projects from GitHub and HuggingFace.
## Overview

RLM is a novel inference library from MIT's OASYS Lab that reframes how language models handle long contexts. Instead of stuffing enormous inputs into a single context window, RLM lets a language model programmatically examine, decompose, and recursively call itself over its input, treating context as a variable in a REPL-style execution environment. The result is effective handling of near-infinite-length inputs without the degradation typical of long-context approaches. With 3,489 stars and 630 forks since its late-2025 release, RLM is gaining traction among researchers and practitioners exploring alternatives to brute-force context extension.

## Key Features

- **Recursive Self-Calls**: The model can launch sub-calls over decomposed portions of its input and assemble results bottom-up, analogous to divide-and-conquer for language modeling
- **Multiple Sandbox Backends**: Supports local Python `exec`, Docker containers, and cloud-isolated environments including Modal, Prime Intellect, Daytona, and E2B for safe code execution
- **Broad Model Provider Support**: Works with OpenAI, Anthropic, OpenRouter, and Portkey APIs, plus local models via vLLM, making it provider-agnostic
- **Trajectory Logging & Visualization**: Built-in interactive dashboard for inspecting execution paths, sub-call traces, and intermediate results, which is critical for debugging recursive reasoning chains
- **Simple PyPI Installation**: Available via `pip install rlms` with minimal dependencies

## Use Cases

RLM is particularly well suited to processing large codebases where the model must understand cross-file dependencies, analyzing lengthy research papers or legal documents that exceed any single context window, complex multi-step reasoning tasks that benefit from hierarchical decomposition, and research into advanced LM inference patterns such as chain-of-recursive-thought.
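The divide-and-conquer shape of recursive self-calls can be illustrated in plain Python. This is a minimal sketch, not the rlms API: `answer_chunk` is a hypothetical stand-in for a real model call (here it just extracts the first sentence), and `CHUNK_LIMIT` plays the role of a context-window budget.

```python
CHUNK_LIMIT = 1000  # stand-in for a model's context-window budget (characters)


def answer_chunk(text: str) -> str:
    """Hypothetical stub for a single LM call over input that fits the budget.

    Here it simply returns the first sentence as a crude 'summary'.
    """
    return text.strip().split(".")[0].strip() + "."


def recursive_answer(text: str) -> str:
    """Divide-and-conquer over an input of arbitrary length."""
    # Base case: the input fits in one call.
    if len(text) <= CHUNK_LIMIT:
        return answer_chunk(text)
    # Recursive case: split the input, solve each half as a sub-call...
    mid = len(text) // 2
    left = recursive_answer(text[:mid])
    right = recursive_answer(text[mid:])
    # ...then combine the sub-answers bottom-up with one more bounded call.
    return answer_chunk(left + " " + right)
```

Each level of recursion only ever hands the (stubbed) model an input within budget, which is the essence of the approach: context length is handled by decomposition rather than by a larger window.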
## Technical Details

RLM implements the Recursive Language Model paradigm described in the accompanying arXiv paper (arXiv:2512.24601). The library abstracts away sandbox management, provider routing, and result assembly behind a clean Python API. Recursive depth and termination conditions are configurable, and the REPL environment keeps sub-calls isolated, preventing context pollution between recursive levels.

## Getting Started

```bash
pip install rlms
```

```python
from rlm import RLM

rlm = RLM(
    backend="openai",
    backend_kwargs={"model_name": "gpt-4o"},
)

result = rlm.completion("Analyze this 100k-token codebase...").response
print(result)
```
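The "context as a variable" idea behind the REPL environment can also be sketched in plain Python. The variable name `ctx` and the probing code below are illustrative assumptions, not the rlms REPL API; the point is that a model can inspect a huge input programmatically (size, structure, slices) instead of reading all of it into a prompt.

```python
import re

# Stand-in for a huge document bound to a REPL variable. In RLM's setting,
# the model would write code like the probes below against such a variable
# (the name `ctx` and these helpers are assumptions for illustration).
ctx = "\n".join(f"## Section {i}\n" + ("lorem ipsum " * 200) for i in range(50))

# Probe the input programmatically rather than reading it whole:
size = len(ctx)                                      # how large is the input?
headings = re.findall(r"^## .+$", ctx, flags=re.M)   # skim its structure
preview = ctx[:200]                                  # sample the beginning
```

From probes like these, a recursive model can decide which slices of `ctx` deserve a sub-call, keeping every individual call small while still covering the full input.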