February 2026

Agent-to-Agent Recursive Knowledge Transfer

Everyone is writing about self-improvement loops. The real unlock is when one model teaches another -- not its outputs, but its learning process.

February 2026

LCM: When Someone Else Validates Your Paradigm

Voltropy's new paper doesn't compete with RLM. It builds on it, proves it works, and takes it further than we did.

February 2026

Why Context Windows Are the Wrong Abstraction

The industry spent five years making context windows bigger. RLMs suggest the entire framing was wrong from the start.

February 2026

The Decompose-Recurse-Aggregate Pattern Explained

A practitioner's guide to the core algorithmic pattern behind Recursive Language Models -- and why it mirrors how expert humans solve complex problems.
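The pattern named in the title can be sketched without any model in the loop. Below is a minimal, LLM-free illustration: the `toy_answer` stand-in, the word-count "window", and the function names are assumptions for the sketch, not part of any RLM implementation -- in a real system each call would be a language-model invocation over a context-window-sized chunk.

```python
def toy_answer(text: str) -> str:
    # Stand-in for an LLM call on a short context: keep the first word.
    return text.split()[0]

def decompose_recurse_aggregate(text: str, window: int = 5) -> str:
    """Recursively shrink an input that exceeds the 'context window'."""
    words = text.split()
    # Base case: the input fits in the window, answer directly.
    if len(words) <= window:
        return toy_answer(text)
    # Decompose: split the input into window-sized chunks.
    chunks = [" ".join(words[i:i + window])
              for i in range(0, len(words), window)]
    # Recurse: solve each chunk independently.
    partials = [decompose_recurse_aggregate(c, window) for c in chunks]
    # Aggregate: combine partial answers and recurse on the combination.
    return decompose_recurse_aggregate(" ".join(partials), window)
```

The key property the sketch preserves: no single call ever sees more than `window` words, yet the final answer reflects the entire input -- the same way an expert skims sections, summarizes each, then reasons over the summaries.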

February 2026

RLM vs RAG: Two Approaches to the Long-Context Problem

Retrieval-Augmented Generation dominated the long-context conversation for three years. RLMs take a fundamentally different path. Here is where each one wins.

February 2026

Inference-Time Compute: The New Scaling Frontier

Training-time scaling dominated the last decade of AI progress. The next decade belongs to inference-time scaling -- and RLMs are a leading example of why.

February 2026

Real-World Applications for Recursive Language Models

From legal discovery to genomics, the tasks where RLMs create the largest performance gap over standard LLMs.

February 2026

Small Models, Big Contexts: The Efficiency Case for RLMs

How an 8B-parameter model post-trained on just 1,000 samples can rival GPT-5 on long-context tasks -- and what that means for cost and deployment.