Beyond Next-Token: How Multi-Token Prediction Is Rewriting LLM Training for 3x Faster Inference

For years, the next-token prediction (NTP) paradigm has been the unquestioned foundation of large language model training. Given a sequence of tokens $x_{1:t}$, the model learns to maximize $P(x_{t+1} | x_{1:t})$. Simple, elegant, and remarkably effective—until you realize the fundamental inefficiency baked into this approach. The problem is that transformers spend the same computational budget predicting filler words (“the”, “and”, “is”) as they do on information-carrying tokens (“quantum”, “entanglement”, “superposition”). Research from Apple and EPFL reveals that over 50% of English text consists of function words—linguistic glue that carries minimal semantic weight. Yet models trained on NTP treat every token with equal reverence, wasting a large share of the compute budget. ...

7 min · 1425 words
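To make the objective concrete: multi-token prediction extends the NTP loss $-\log P(x_{t+1} \mid x_{1:t})$ to $k$ future positions, summing a cross-entropy term per prediction head. A minimal NumPy sketch of that loss (the head shapes and $k=3$ toy setup are illustrative assumptions, not the article's implementation):

```python
import numpy as np

def multi_token_loss(logits, targets):
    """Cross-entropy summed over k prediction heads.

    logits:  (k, vocab) -- one row of scores per future position t+1..t+k
    targets: (k,)       -- the k ground-truth future token ids
    """
    # Numerically stable log-softmax per head
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # NTP is the special case k = 1; MTP sums the loss across all k heads
    return -log_probs[np.arange(len(targets)), targets].sum()

# Toy example: vocabulary of 5 tokens, predicting 3 future tokens at once
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5))
targets = np.array([2, 0, 4])
loss = multi_token_loss(logits, targets)
```

With $k=1$ this reduces exactly to the standard next-token cross-entropy; the extra heads are what later enable the self-speculative decoding speedups the title refers to.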

When the Answer Lies at the End of a Branch: The Complete Architecture of Inference-Time Search Methods for LLM Reasoning

The emergence of reasoning models like DeepSeek-R1, OpenAI’s o3, and Google’s Gemini thinking mode has fundamentally shifted how we think about LLM inference. These models don’t just generate—they search. The question is no longer “what should the model output?” but “how should the model search for the answer?” This shift from generation to search has spawned an entire taxonomy of inference-time algorithms, each with distinct trade-offs between computational cost and output quality. Understanding these methods—their mathematical foundations, implementation details, and practical performance—is essential for anyone deploying reasoning models in production. ...

5 min · 932 words

How Speculative Decoding Achieves 3x Faster LLM Inference Without Losing Quality: The Mathematics Behind Draft-Verify Acceleration

The sequential nature of autoregressive language models creates a fundamental bottleneck: generating each token requires a full forward pass through billions of parameters. A 70B parameter model processing a single token must load roughly 140GB of weights from memory (FP16), and memory bandwidth—not compute—becomes the limiting factor. This is why a 70B model might generate only 20-30 tokens per second on an H100, despite the GPU being capable of orders of magnitude more computation. ...

4 min · 737 words