The paper, the code, the pip package, the sandbox options, and related work worth reading. No filler.
arXiv Paper
arxiv.org/abs/2512.24601
"Recursive Language Models" -- Alex L. Zhang, Tim Kraska, Omar Khattab. MIT OASYS Lab. Accepted at ICML 2025. The full paper with all benchmarks, ablations, and training details.
HTML Version
arxiv.org/html/2512.24601v2
Readable in-browser version with figures and tables rendered inline.
GitHub Repository
github.com/alexzhang13/rlm
The official inference engine. Supports OpenAI, Anthropic, OpenRouter, Portkey, LiteLLM, and vLLM backends. Multiple REPL environments (local, Docker, Modal, Prime Intellect). MIT license.
Minimal Implementation
github.com/alexzhang13/rlm-minimal
Stripped-down reference implementation for understanding the core algorithm without the full library overhead.
pip Package
pip install rlms
Install once, then use in a few lines of Python. Supports all major model providers out of the box.
from rlm import RLM
rlm = RLM(
backend="openai",
backend_kwargs={"model_name": "gpt-5-nano"},
verbose=True,
)
response = rlm.completion("Your very long prompt here...")
print(response.response)
That's it. The rlm.completion() call is a drop-in replacement for llm.completion(). Same input (a string prompt), same output (a string response). The recursion happens transparently.
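Conceptually, the drop-in behavior can be sketched in plain Python. The stub below is purely illustrative and is not the rlms library's internals: a stand-in "model" with a toy context budget answers short prompts directly and splits long ones, recursing on each half and then answering over the sub-results.

```python
# Illustrative sketch of the recursive-completion idea (NOT the actual
# rlms implementation). A stub stands in for a real LLM backend.

MAX_TOKENS = 100  # toy context budget, counted in whitespace-split words


def stub_llm(prompt: str) -> str:
    """Stand-in for a real model call: returns a short 'answer'."""
    return " ".join(prompt.split()[:10])


def recursive_completion(prompt: str) -> str:
    """Same string-in, string-out interface as a plain completion call."""
    words = prompt.split()
    if len(words) <= MAX_TOKENS:
        return stub_llm(prompt)  # fits in context: answer directly
    mid = len(words) // 2  # too long: split, recurse on each half
    left = recursive_completion(" ".join(words[:mid]))
    right = recursive_completion(" ".join(words[mid:]))
    return stub_llm(left + " " + right)  # aggregate over sub-answers


print(recursive_completion("long prompt " * 500))
```

The caller never sees the recursion: the function signature is identical to a single model call, which is the drop-in property the library advertises.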
Original Blogpost (October 2025)
alexzhang13.github.io/blog/2025/rlm/
Alex Zhang's original blog post that introduced the RLM idea before the expanded arXiv paper. Good for intuition-building.
Documentation
alexzhang13.github.io/rlm/
Official documentation for the rlms library. API reference, configuration options, and examples.
Google ADK Discussion
discuss.google.dev/t/recursive-language-models-in-adk/323523
Community discussion on integrating RLMs with Google's Agent Development Kit for long-context agent tasks.
ArXivIQ Coverage
arxiviq.substack.com/p/recursive-language-models
Summary and analysis of the RLM paper from the ArXivIQ newsletter.
DSPy RLM Support (v3.1.2+)
dspy.ai
Stanford's programmatic LLM framework has built-in RLM support. Initialize with dspy.RLM('input -> output') and it handles REPL setup, sub-calls, and aggregation. Supports separate sub-call models via sub_lm parameter.
Google ADK Implementation (by Liam Connell)
medium.com/google-cloud/recursive-language-models-in-adk-d9dc736f0478
Enterprise-ready reimplementation using Google's Agent Development Kit. Extends the paper with lazy file loading (GCS, local filesystem) and parallelism for sub-calls. Published January 2026.
VentureBeat -- "MIT's new 'recursive' framework lets LLMs process 10 million tokens without context rot"
Detailed technical breakdown of the paper with cost analysis and model comparison notes.
InfoQ -- "MIT's Recursive Language Models Improve Performance on Long-Context Tasks"
Engineering-focused summary; includes Alex Zhang's "bitter-lesson-pilled" framing and the insight of treating long-context tasks as partially observable problems.
Towards Data Science -- "Going Beyond the Context Window: Recursive Language Models in Action"
Practical walkthrough using DSPy's RLM implementation to process 386K tokens of articles with Claude Sonnet 4.5.
The Neuron -- "Recursive Language Models: The Clever Hack That Gives AI Infinite Memory"
Accessible explainer with the library analogy and key results summary.
WordLift -- "RLM-on-KG: Recursive Language Models and the Future of SEO"
Explores RLM applications for knowledge graphs and search engine optimization.
DEV Community -- "RLM: The Ultimate Evolution of AI?"
Developer-oriented overview of the paradigm shift from passive reading to active problem-solving.
Dextralabs -- "Why Recursive Language Models Beat Long-Context LLMs"
Enterprise-focused analysis with practical implications for legal, engineering, and data teams.
Inference-time scaling and reasoning:
Long-context benchmarks:
Agent scaffolds and context management:
Self-delegation:
@misc{zhang2025recursivelanguagemodels,
  title={Recursive Language Models},
  author={Alex L. Zhang and Tim Kraska and Omar Khattab},
  year={2025},
  eprint={2512.24601},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2512.24601},
}