From 1% Parameters to Full Capacity: The Mathematics and Engineering Behind LoRA's Evolution
Fine-tuning a 7-billion-parameter model used to demand 100+ GB of VRAM, enough to require several A100 GPUs. Today, the same task runs on a consumer RTX 4090 with 24 GB. This roughly 4× reduction didn't come from better hardware; it came from a mathematical insight about the structure of neural network adaptations. Low-Rank Adaptation (LoRA), introduced by Microsoft researchers in 2021, fundamentally changed how we think about model fine-tuning. The core idea is deceptively simple: instead of updating all parameters, inject small trainable matrices that modify the model's behavior. But behind this simplicity lie deep connections to linear algebra, information theory, and the geometry of neural network weight spaces. ...
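
To make the core idea concrete, here is a minimal PyTorch sketch of a linear layer augmented with a low-rank update. It is an illustration, not the reference implementation: the class name `LoRALinear`, the parameter names `lora_A`/`lora_B`, and the choices `r=8`, `alpha=16` are assumptions made for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer and adds a trainable low-rank update.

    The effective weight becomes W + (alpha / r) * B @ A, where A and B
    are small matrices of rank r. Only A and B receive gradients.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scaling = alpha / r
        # A: (r, in_features) gets small random values; B: (out_features, r)
        # starts at zero so training begins from the unmodified pretrained model.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# A 4096x4096 projection gains only 2 * 4096 * 8 = 65,536 trainable parameters,
# about 0.4% of the 16.8M frozen weights it adapts.
layer = LoRALinear(nn.Linear(4096, 4096, bias=False), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```

The parameter count printed at the end is where the "1% of parameters" framing in the title comes from: the frozen weight stays untouched, and only the two small factors are updated.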