How Mixture of Experts Scales to Trillion Parameters: The Sparse Architecture Revolution Behind Modern LLMs

When DeepSeek-V3 was released in December 2024, it achieved something remarkable: a 671-billion-parameter model that activates only 37 billion parameters per token. This isn’t a magic trick; it’s the power of Mixture of Experts (MoE), an architectural paradigm that has quietly become the backbone of nearly every frontier large language model. The math is compelling. Using the standard estimate of 2 FLOPs per active parameter, a dense 671B model would require approximately 1,342 GFLOPs per token during inference. DeepSeek-V3 achieves comparable performance with roughly 74 GFLOPs, an 18x reduction in compute. This isn’t incremental optimization; it’s a fundamental rethinking of how neural networks scale. ...
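The compute figures follow directly from the rule of thumb that a forward pass costs about 2 FLOPs per active parameter. A quick back-of-the-envelope check (the 2x rule is a common rough estimate, not a measured benchmark):

```python
# Back-of-the-envelope FLOPs-per-token estimate for a decoder-only LLM.
# Rule of thumb: one forward pass costs ~2 FLOPs per *active* parameter.

def flops_per_token(active_params: float) -> float:
    """Approximate forward-pass FLOPs for a single token."""
    return 2 * active_params

dense = flops_per_token(671e9)  # dense model: all 671B parameters active
moe = flops_per_token(37e9)     # MoE model: only 37B parameters active per token

print(f"dense:   {dense / 1e9:.0f} GFLOPs/token")   # 1342 GFLOPs/token
print(f"MoE:     {moe / 1e9:.0f} GFLOPs/token")     # 74 GFLOPs/token
print(f"speedup: {dense / moe:.1f}x")               # 18.1x
```

The speedup is simply the ratio of active parameters, 671/37, which is why MoE models quote both a total and a per-token parameter count.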

9 min · 1822 words

When Round Robin Fails: The Hidden Mathematics of Load Balancing Algorithms

Imagine you’re running a service with 10 servers, each capable of handling 1,000 requests per second. You set up a round-robin load balancer: simple, elegant, fair. Every server gets its turn in sequence. Traffic flows smoothly until suddenly, at 2 AM, your monitoring alerts start screaming. Half your servers are overwhelmed, queues are growing, latencies are spiking, and the other half are nearly idle. What went wrong? The servers weren’t identical. Five of them were newer machines with faster CPUs and more memory; the other five were legacy boxes running older hardware. The round-robin algorithm, in its mechanical fairness, sent exactly the same number of requests to a struggling legacy server as it did to a powerful new one. The legacy servers couldn’t keep up, requests piled up in their queues, and eventually they started timing out, cascading into a partial outage that woke up half your engineering team. ...
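This failure mode can be reproduced in a few lines: round-robin hands every server an equal share of traffic, so queue growth depends only on each server’s capacity. A minimal simulation (the server counts, capacities, and offered load below are illustrative, not taken from the article):

```python
# Round-robin gives each server an equal share of the offered load,
# regardless of how fast that server can actually drain requests.
capacities = [1000, 1000, 1000, 400, 400, 400]  # requests/sec each server can serve
queues = [0.0] * len(capacities)

incoming_rps = 4200
per_server = incoming_rps / len(capacities)  # equal share: 700 rps to every server

# Simulate 10 seconds: each second, every server receives its equal share
# and drains up to its capacity. Any excess accumulates in its queue.
for _ in range(10):
    for i, cap in enumerate(capacities):
        queues[i] = max(0.0, queues[i] + per_server - cap)

print(queues)
# Fast servers (capacity 1000 > 700) stay at zero; legacy servers
# (capacity 400 < 700) accumulate 300 requests of backlog every second.
```

Even though total capacity (4,200 rps) exactly matches total load, the slow servers fall further behind each second while the fast ones sit under-utilized, which is exactly the cascade described above.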

12 min · 2443 words

How Consistent Hashing Scales Distributed Systems: The Mathematics Behind Minimal Rebalancing

When Amazon engineers published the Dynamo paper in 2007, they revealed a technique that had been quietly powering some of the world’s largest distributed systems. The core idea—consistent hashing—originated from a 1997 MIT paper by David Karger and colleagues, but it took a decade before the industry fully embraced its elegance. Today, consistent hashing underpins Apache Cassandra, Amazon DynamoDB, Discord’s messaging infrastructure, Netflix’s content delivery network, and virtually every modern distributed database. The algorithm solves a deceptively simple problem: how do you distribute data across servers when those servers keep joining and leaving? ...
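The minimal-rebalancing property can be demonstrated with a bare-bones hash ring. This is a sketch of the general technique, not any particular system’s implementation; MD5 is used only as a stable, well-distributed hash:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable hash mapping keys and node names onto a ring of 2^32 positions.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # vnodes: virtual nodes per server, which smooths the distribution.
        self._ring = sorted(
            (_hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._points = [p for p, _ in self._ring]

    def lookup(self, key: str) -> str:
        # A key belongs to the first node clockwise from its hash position.
        idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["a", "b", "c"])
before = {k: ring.lookup(k) for k in map(str, range(1000))}

ring2 = HashRing(["a", "b", "c", "d"])  # one server joins
moved = sum(before[k] != ring2.lookup(k) for k in before)
print(f"{moved / 10:.1f}% of keys moved")  # roughly 1/4 of keys, not 3/4
```

With naive `hash(key) % N` placement, going from 3 to 4 servers remaps about three-quarters of all keys; on the ring, only the keys that fall into the new node’s arcs move, which is roughly 1/N of the data.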

9 min · 1786 words