When 90% of Your KV Cache Doesn't Matter: The Mathematics Behind Intelligent Token Eviction
A 70B parameter model with a 128K context window needs approximately 40 GB of GPU memory just for the KV cache. That’s before counting model weights, activations, or any other overhead. The KV cache grows linearly with sequence length, creating a fundamental barrier to long-context inference that adding more GPU memory only postpones. The breakthrough came from an unexpected observation: most tokens in your KV cache contribute almost nothing to the final output. Researchers discovered that intelligently evicting 90% of cached tokens often results in negligible accuracy loss. This isn’t compression through quantization; it’s compression through understanding which tokens actually matter. ...
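To see where a figure like 40 GB comes from, here is a minimal back-of-the-envelope sketch. It assumes a Llama-2-70B-style configuration (80 layers, grouped-query attention with 8 KV heads, head dimension 128) and FP16 storage; the exact number shifts with architecture and precision, but the linear growth with sequence length does not.

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """KV cache size: 2 tensors (K and V) per layer, each of shape
    [num_kv_heads, seq_len, head_dim], stored at bytes_per_elem (2 for FP16)."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

# Assumed Llama-2-70B-style config: 80 layers, 8 KV heads (GQA), head_dim 128, FP16.
per_token = kv_cache_bytes(80, 8, 128, seq_len=1)
full_context = kv_cache_bytes(80, 8, 128, seq_len=128 * 1024)

print(f"per token:        {per_token / 1024:.0f} KiB")          # ~320 KiB
print(f"128K context:     {full_context / 1024**3:.0f} GiB")    # ~40 GiB
print(f"after 90% evicted: {0.1 * full_context / 1024**3:.0f} GiB")  # ~4 GiB
```

Keeping only 10% of cached tokens would shrink that footprint from roughly 40 GiB to roughly 4 GiB, which is the gap between "doesn't fit" and "fits comfortably" on a single accelerator.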