Analysis, commentary, and technical deep dives on Recursive Language Models and the shifting landscape of AI architecture.
Everyone is writing about self-improvement loops. The real unlock is when one model teaches another -- not its outputs, but its learning process.
February 2026: Voltropy's new paper doesn't compete with RLM. It builds on it, proves it works, and takes it further than we did.
February 2026: The industry spent five years making context windows bigger. RLMs suggest the entire framing was wrong from the start.
February 2026: A practitioner's guide to the core algorithmic pattern behind Recursive Language Models -- and why it mirrors how expert humans solve complex problems.
February 2026: Retrieval-Augmented Generation dominated the long-context conversation for three years. RLMs take a fundamentally different path. Here is where each one wins.
February 2026: Training-time scaling dominated the last decade of AI progress. The next decade belongs to inference-time scaling -- and RLMs are a leading example of why.
February 2026: From legal discovery to genomics, the tasks where RLMs create the largest performance gap over standard LLMs.
February 2026: How an 8B-parameter model post-trained on 1,000 samples can rival GPT-5 on long-context tasks -- and what that means for cost and deployment.