When 10% Attention Beats 100%: The Mathematics Behind Sparse LLM Inference

The quadratic complexity of self-attention has haunted the transformer architecture since its inception. As context windows expanded from 2K to 1M tokens, the O(N²) attention computation transformed from an annoyance into an existential bottleneck. Yet a counterintuitive discovery emerged in 2025-2026: computing only 5-20% of attention weights can match or exceed full attention performance. This isn’t compression with acceptable loss—it’s the revelation that transformers have been computing billions of unnecessary operations. The mathematics behind this phenomenon, and the engineering that exploits it, represent one of the most significant advances in LLM efficiency. ...
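
To make the intuition concrete, here is a minimal sketch of one common sparsification strategy, top-k selection over attention scores, in plain NumPy. The function name, single-query formulation, and 10% budget are illustrative assumptions rather than any specific production kernel; real sparse-attention implementations typically select whole blocks of keys and avoid computing the dense scores at all.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k_keep):
    """Attention for one query that keeps only the k_keep highest-scoring keys.

    q: (d,) query vector; K, V: (n, d) key/value matrices.
    Illustrative only: this version still scores every key before discarding
    most of them, which a real sparse kernel would avoid.
    """
    scores = K @ q / np.sqrt(K.shape[1])                 # (n,) dense scores
    keep = np.argpartition(scores, -k_keep)[-k_keep:]    # indices of the top-k scores
    shifted = scores[keep] - scores[keep].max()           # numerically stable softmax
    weights = np.exp(shifted) / np.exp(shifted).sum()
    return weights @ V[keep]                               # weighted sum over kept values

# A ~10% budget often changes the output very little in trained models,
# because softmax mass concentrates on a small set of dominant keys.
rng = np.random.default_rng(0)
n, d = 4096, 64
q, K, V = rng.normal(size=d), rng.normal(size=(n, d)), rng.normal(size=(n, d))
print(topk_sparse_attention(q, K, V, k_keep=n // 10).shape)   # (64,)
```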

10 min · 2056 words

When AI Learns to Remember: How Google's Titans Architecture Solved the Long-Term Memory Problem

The Transformer architecture revolutionized machine learning with its attention mechanism, enabling models to capture dependencies across entire sequences. Yet despite their dominance, Transformers suffer from a fundamental limitation: they have amnesia. Every token beyond the context window vanishes into oblivion, and even within that window, the quadratic complexity of attention makes scaling prohibitively expensive. In December 2024, Google Research introduced Titans, a new family of architectures that fundamentally rethinks how neural networks handle memory. The breakthrough isn’t just another efficiency trick—it’s a paradigm shift that treats memory itself as a learnable neural network, updated in real time during inference through gradient descent on a surprise-based objective. ...
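
To give a flavor of that mechanism, the toy sketch below uses a plain linear map as the memory and trains it during inference with gradient steps (plus momentum and decay) on an associative loss. The class name, hyperparameters, and hand-derived gradient are assumptions made for illustration, not Google's implementation; the memory in Titans is described as a deeper network with data-dependent gates.

```python
import numpy as np

class ToyNeuralMemory:
    """Memory as a model: a linear map M, trained at inference time by
    gradient descent on the associative loss 0.5 * ||M @ k - v||^2.
    The gradient plays the role of 'surprise': pairs the memory already
    predicts barely change it, while novel pairs update it strongly.
    """
    def __init__(self, d, lr=0.1, momentum=0.9, decay=0.01):
        self.M = np.zeros((d, d))
        self.S = np.zeros((d, d))             # momentum buffer (accumulated surprise)
        self.lr, self.momentum, self.decay = lr, momentum, decay

    def write(self, k, v):
        err = self.M @ k - v                  # prediction error for this (key, value) pair
        grad = np.outer(err, k)               # d/dM of 0.5 * ||M @ k - v||^2
        self.S = self.momentum * self.S - self.lr * grad
        self.M = (1.0 - self.decay) * self.M + self.S   # decay acts as a forgetting gate

    def read(self, q):
        return self.M @ q                     # retrieve the value associated with q

mem = ToyNeuralMemory(d=8)
rng = np.random.default_rng(1)
k, v = rng.normal(size=8), rng.normal(size=8)
for _ in range(300):
    mem.write(k, v)                           # the same pair, repeated, becomes unsurprising
print(np.linalg.norm(mem.read(k) - v))        # ~0: the association has been absorbed
```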

8 min · 1691 words

How Recursive Language Models Break the Context Ceiling: Processing 10M+ Tokens Without Expanding the Window

The race for larger context windows has defined LLM development for years. From GPT-4’s 128K tokens to Gemini’s 1M and beyond, the assumption has been simple: more context equals better performance. But a January 2026 paper from MIT CSAIL challenges this assumption entirely. Recursive Language Models (RLMs) don’t expand the context window—they render it irrelevant by treating prompts as external environments that models can programmatically explore, decompose, and recursively process. ...
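
As a sketch of the recursive flavor only: in the snippet below, `llm()` is a placeholder for any completion API you plug in, and the split-in-half-and-reconcile strategy is an assumption for illustration; the approach in the paper lets the model itself decide how to explore and decompose the prompt programmatically.

```python
# Skeleton of the recursive idea, under stated assumptions: `llm()` stands in
# for any text-completion call, and the fixed split/reconcile strategy is
# purely illustrative rather than the paper's own decomposition procedure.

CHUNK_CHARS = 8_000   # rough per-call budget; illustrative, not from the paper

def llm(prompt: str) -> str:
    raise NotImplementedError("plug any completion API in here")

def recursive_answer(question: str, context: str) -> str:
    # Base case: the context is small enough for a single call.
    if len(context) <= CHUNK_CHARS:
        return llm(f"Context:\n{context}\n\nQuestion: {question}")
    # Recursive case: answer over each half, then reconcile the partial answers.
    mid = len(context) // 2
    left = recursive_answer(question, context[:mid])
    right = recursive_answer(question, context[mid:])
    return llm(
        f"Two partial answers to the question '{question}':\n"
        f"1. {left}\n2. {right}\n"
        "Merge them into a single, consistent answer."
    )
```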

7 min · 1468 words

When the Hidden State Becomes the Model: How Test-Time Training Rewrites the Rules of Sequence Modeling

The long-context problem has haunted transformer architectures since their inception. While self-attention’s $O(n^2)$ complexity is well-known, the real tragedy lies deeper: even modern RNNs like Mamba, despite their linear complexity, plateau after 16K tokens. They simply cannot compress enough information into their fixed-size hidden states. What if the hidden state wasn’t a fixed-size bottleneck, but a model that could grow in capacity through learning—even at test time? This is the radical proposition of Test-Time Training (TTT), introduced by Stanford researchers in July 2024 and extended to production-ready systems by NVIDIA and Stanford in December 2025. The results are striking: TTT-Linear matches Transformer performance while maintaining RNN efficiency, and the latest TTT-E2E achieves 2.7x faster inference than full attention at 128K context length. ...
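
The contrast with a fixed hidden state is easiest to see in a toy version of the update rule. In the sketch below, the hidden state is the weight matrix of a linear model that takes one gradient step per token on a self-supervised reconstruction loss; the bare identity-reconstruction objective, learning rate, and function name are simplifying assumptions, whereas the actual TTT layers reconstruct learned projections of the input and are trained end-to-end.

```python
import numpy as np

def ttt_linear_toy(tokens, lr=0.05):
    """Toy TTT-style recurrence: the hidden state is a weight matrix W that is
    trained online. Each incoming token triggers one gradient step on the
    self-supervised loss 0.5 * ||W @ x - x||^2, and the layer emits the updated
    model's prediction for that token. (Simplified: the published layers
    reconstruct learned, corrupted/projected views of x rather than x itself.)
    """
    d = tokens.shape[1]
    W = np.zeros((d, d))
    outputs = []
    for x in tokens:
        err = W @ x - x                    # self-supervised prediction error
        W = W - lr * np.outer(err, x)      # "learning" replaces the usual RNN state update
        outputs.append(W @ x)              # emit with the freshly updated state
    return np.stack(outputs)

rng = np.random.default_rng(2)
seq = rng.normal(size=(16, 8))             # 16 tokens of dimension 8
print(ttt_linear_toy(seq).shape)           # (16, 8)
```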

9 min · 1743 words

How Ring Attention Breaks the Memory Barrier: Enabling Million-Token Contexts Through Distributed Computation

In April 2025, Meta’s Llama 4 Scout achieved something previously thought impossible: processing 10 million tokens in a single context window. To put this in perspective, that’s roughly 20 novels, 40 hours of video, or an entire mid-sized codebase—all in one prompt. The secret behind this breakthrough isn’t a revolutionary new model architecture or exotic hardware. It’s a clever distributed computing technique called Ring Attention that fundamentally rethinks how we compute attention across multiple GPUs. ...
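
The core idea is easiest to see in a single-process simulation of the arithmetic. In the sketch below, each "device" owns one block of queries and folds in the key/value blocks one at a time using the online-softmax (running max and denominator) trick, so the full score matrix is never materialized in one place; the blocking, the names, and the absence of real inter-GPU communication (and of the compute/communication overlap that makes Ring Attention fast in practice) are simplifications of this sketch.

```python
import numpy as np

def ring_attention_sim(Q, K, V, n_devices):
    """Single-process simulation of the arithmetic behind Ring Attention
    (no real GPUs or communication): each 'device' owns one block of queries
    and streams the key/value blocks past it, folding each one in with the
    online-softmax trick so the full N x N score matrix is never materialized.
    """
    d = Q.shape[1]
    q_blocks = np.array_split(Q, n_devices)
    k_blocks = np.array_split(K, n_devices)
    v_blocks = np.array_split(V, n_devices)
    out_blocks = []
    for q in q_blocks:                           # work owned by one "device"
        row_max = np.full(q.shape[0], -np.inf)   # running max of scores per query
        denom = np.zeros(q.shape[0])             # running softmax denominator
        acc = np.zeros_like(q)                   # running weighted-value numerator
        for k, v in zip(k_blocks, v_blocks):     # KV blocks arriving around the ring
            s = q @ k.T / np.sqrt(d)
            new_max = np.maximum(row_max, s.max(axis=1))
            rescale = np.exp(row_max - new_max)  # rescale earlier partial sums
            p = np.exp(s - new_max[:, None])
            denom = denom * rescale + p.sum(axis=1)
            acc = acc * rescale[:, None] + p @ v
            row_max = new_max
        out_blocks.append(acc / denom[:, None])
    return np.vstack(out_blocks)

# Sanity check against dense attention on a small example.
rng = np.random.default_rng(3)
Q, K, V = (rng.normal(size=(32, 16)) for _ in range(3))
S = Q @ K.T / np.sqrt(16)
P = np.exp(S - S.max(axis=1, keepdims=True))
dense = (P / P.sum(axis=1, keepdims=True)) @ V
print(np.allclose(ring_attention_sim(Q, K, V, n_devices=4), dense))   # True
```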

7 min · 1456 words