How Recursive Language Models Break the Context Ceiling: Processing 10M+ Tokens Without Expanding the Window

The race for larger context windows has defined LLM development for years. From GPT-4’s 128K tokens to Gemini’s 1M and beyond, the assumption has been simple: more context equals better performance. But a January 2026 paper from MIT CSAIL challenges this assumption entirely. Recursive Language Models (RLMs) don’t expand the context window—they render it irrelevant by treating prompts as external environments that models can programmatically explore, decompose, and recursively process. ...
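The core idea can be sketched in a few lines of Python. This is a toy illustration of recursion over an external context, not the paper's implementation: the prompt lives outside the model as plain data, and each call only ever sees a bounded slice of it, recursing on halves and combining short partial answers. The `answer_chunk` function and the `CHUNK` limit are stand-ins for a real bounded-window LLM call.

```python
CHUNK = 1000  # max characters a single (hypothetical) model call may see

def answer_chunk(question: str, text: str) -> str:
    # Stub for an LLM call over a context that fits in the window.
    # Toy behavior: return the lines that mention the query term.
    return "\n".join(line for line in text.splitlines() if question in line)

def recursive_query(question: str, context: str) -> str:
    # Base case: the context fits in one (simulated) model call.
    if len(context) <= CHUNK:
        return answer_chunk(question, context)
    # Otherwise split the environment and recurse; the combine step
    # stays inside the window because partial answers are short.
    mid = len(context) // 2
    left = recursive_query(question, context[:mid])
    right = recursive_query(question, context[mid:])
    return "\n".join(part for part in (left, right) if part)

# A document far larger than any single call is allowed to see:
doc = "\n".join(f"line {i}: filler" for i in range(5000)) + "\nline 5000: needle"
print(recursive_query("needle", doc))
```

No single call processes the full document, yet the answer is recovered by aggregating over the recursion tree, which is the sense in which the window size stops being the binding constraint.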

7 min · 1468 words

When the Hidden State Becomes the Model: How Test-Time Training Rewrites the Rules of Sequence Modeling

The long-context problem has haunted transformer architectures since their inception. While self-attention’s $O(n^2)$ complexity is well-known, the real tragedy lies deeper: even modern RNNs like Mamba, despite their linear complexity, plateau after 16K tokens. They simply cannot compress enough information into their fixed-size hidden states. What if the hidden state wasn’t a fixed-size bottleneck, but a model that could grow in capacity through learning—even at test time? This is the radical proposition of Test-Time Training (TTT), introduced by Stanford researchers in July 2024 and extended to production-ready systems by NVIDIA and Stanford in December 2025. The results are striking: TTT-Linear matches Transformer performance while maintaining RNN efficiency, and the latest TTT-E2E achieves 2.7x faster inference than full attention at 128K context length. ...
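The TTT idea can be made concrete with a toy numerical sketch. This is not the paper's architecture: here the "hidden state" is just the weight matrix of a tiny linear model, and processing each token performs one SGD step on a self-supervised reconstruction loss, so the state update *is* a learning update. The corruption scheme, dimensions, and learning rate are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8      # toy token-embedding dimension
eta = 0.1  # inner-loop (test-time) learning rate

# The hidden state is the weights of a small model, not a fixed vector.
W = np.zeros((d, d))

def ttt_step(W, x, eta):
    # Self-supervised loss: reconstruct x from a corrupted view x_tilde.
    #   loss = 0.5 * ||W @ x_tilde - x||^2
    # Gradient w.r.t. W is (W @ x_tilde - x) @ x_tilde.T
    x_tilde = 0.5 * x                       # toy corruption: downscaling
    err = W @ x_tilde - x
    W = W - eta * np.outer(err, x_tilde)    # one SGD step = state update
    z = W @ x                               # output uses the updated state
    return W, z

# "Reading" a sequence trains the hidden state on the fly.
tokens = rng.normal(size=(512, d))
for x in tokens:
    W, z = ttt_step(W, x, eta)

# After many tokens, W has learned to undo the corruption (here W ~ 2*I),
# so reconstruction error on a fresh token is small.
x = rng.normal(size=d)
print(np.linalg.norm(W @ (0.5 * x) - x))
```

The point of the sketch is capacity: because the state is a set of learnable weights rather than a fixed-size vector, what it can retain grows with what gradient descent can fit, which is the property the fixed hidden states of Mamba-style RNNs lack.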

9 min · 1743 words