Serial vs Parallel: The Engineering Trade-offs Behind Inference-Time Compute Scaling

When OpenAI’s o1 model began spending unprecedented computational resources at inference time, the AI community witnessed a paradigm shift: models could now trade thinking time for intelligence. But the real engineering challenge isn’t whether to scale inference compute; it’s how to scale it optimally. The choice between serial thinking (longer chains) and parallel thinking (more branches) fundamentally changes the cost-performance curve, and getting it wrong can mean burning 4x more compute for identical results. ...

8 min · 1530 words
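
To make that trade concrete, here is a minimal Python sketch of the two shapes of inference-time compute: serial (one long chain) versus parallel (best-of-N over independent branches) under the same token budget. The `generate` function, its signature, and the logprob-style scoring are stand-ins invented for illustration, not the post’s actual harness.

```python
import random

def generate(prompt: str, max_tokens: int) -> tuple[str, float]:
    """Stub standing in for a real LLM call: returns a fake answer and a
    fake sequence log-probability so the sketch runs end to end."""
    score = -random.random() * max_tokens / 100.0
    return f"answer<{max_tokens} tokens of thinking>", score

def serial_scaling(question: str, budget: int) -> str:
    """Serial: spend the entire token budget on one long chain of thought."""
    answer, _ = generate(f"Think step by step: {question}", budget)
    return answer

def parallel_scaling(question: str, budget: int, branches: int = 8) -> str:
    """Parallel: split the budget across independent branches and keep the
    candidate the model scores highest (best-of-N selection)."""
    per_branch = budget // branches
    samples = [generate(f"Think step by step: {question}", per_branch)
               for _ in range(branches)]
    best, _ = max(samples, key=lambda s: s[1])
    return best

# Identical budget, different shape of compute:
print(serial_scaling("Is 3989 prime?", budget=8192))
print(parallel_scaling("Is 3989 prime?", budget=8192, branches=8))
```

Both paths burn the same budget; which one wins depends on whether the task rewards deeper reasoning per chain or broader search across chains, which is exactly the cost-performance question above.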

How Recursive Language Models Break the Context Ceiling: Processing 10M+ Tokens Without Expanding the Window

The race for larger context windows has defined LLM development for years. From GPT-4’s 128K tokens to Gemini’s 1M and beyond, the assumption has been simple: more context equals better performance. But a January 2026 paper from MIT CSAIL challenges this assumption entirely. Recursive Language Models (RLMs) don’t expand the context window—they render it irrelevant by treating prompts as external environments that models can programmatically explore, decompose, and recursively process. ...

7 min · 1468 words
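
As a rough illustration of what “treating the prompt as an external environment” can mean, the sketch below answers a question over a context far larger than any single call by splitting recursively and merging sub-answers. The `llm` stub, the window size, and the fixed halving strategy are assumptions for illustration only; per the summary above, actual RLMs let the model itself decide programmatically how to explore and decompose the prompt.

```python
def llm(prompt: str, window: int = 8_000) -> str:
    """Stub for a bounded-context model call; a real call would hit an
    inference endpoint whose context limit plays the role of `window`."""
    assert len(prompt) <= window, "prompt exceeds the model's window"
    return f"<answer distilled from {len(prompt)} chars>"

def rlm_query(question: str, context: str, window: int = 8_000) -> str:
    """Answer a question over arbitrarily long context by halving it until
    each piece fits the window, then merging sub-answers with one final
    bounded call. No single call ever sees the full context."""
    overhead = len(question) + 200  # headroom for instructions
    if len(context) + overhead <= window:
        return llm(f"{question}\n\nContext:\n{context}", window)
    mid = len(context) // 2
    left = rlm_query(question, context[:mid], window)
    right = rlm_query(question, context[mid:], window)
    return llm(f"{question}\n\nMerge these partial answers:\n{left}\n{right}",
               window)

# Ten million characters of context, processed with an 8K-character window:
print(rlm_query("Which name appears most often?", "x" * 10_000_000))
```

The key property lives in the recursion: every individual model call stays within the fixed window, so the effective context is bounded by compute rather than by architecture.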

When a 1B Model Beats a 405B Giant: How Test-Time Compute Is Rewriting the Rules of LLM Scaling

For years, the path to better LLMs seemed straightforward: more parameters, more training data, more compute. The scaling laws articulated by Kaplan et al. and refined by Chinchilla painted a clear picture—performance improved predictably with model size. Then OpenAI released o1, and suddenly the rules changed. A model that “thinks longer” at inference time was solving problems that eluded models 10x its size. The breakthrough wasn’t just engineering—it was a fundamental shift in how we think about compute allocation. The question flipped from “how big should we train?” to “how long should we let it think?” ...

9 min · 1722 words
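
Some back-of-the-envelope arithmetic shows why that flipped question is plausible at all. Using the standard approximation of roughly 2·params FLOPs per generated token, and deliberately ignoring attention and KV-cache overheads, a 1B model can sample hundreds of candidate solutions for the inference price of one 405B generation. The numbers below are illustrative, not taken from the post.

```python
def inference_flops(params: float, tokens: int, samples: int = 1) -> float:
    """~2 * params FLOPs per generated token, times tokens, times samples."""
    return 2.0 * params * tokens * samples

one_big = inference_flops(params=405e9, tokens=1_000)            # one 405B sample
many_small = inference_flops(params=1e9, tokens=1_000, samples=405)  # 405 1B samples

print(f"405B x   1 sample : {one_big:.2e} FLOPs")
print(f"  1B x 405 samples: {many_small:.2e} FLOPs")  # same budget
```

Whether 405 cheap samples actually beat one expensive one then hinges on picking the right candidate, via voting, verifiers, or search, which is where test-time compute strategies differ.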