Why Semantic Search Fails: The Hidden Geometry of Vector Embeddings

In 2013, Tomas Mikolov and his team at Google published a paper that would fundamentally change how machines understand language. They showed that by training a simple neural network to predict surrounding words, you could learn vector representations where “king” minus “man” plus “woman” approximately equals “queen.” This was the birth of modern word embeddings—a technique that compresses the meaning of words into dense numerical vectors. A decade later, embeddings have become the backbone of virtually every AI application involving text. They power semantic search, recommendation systems, and the retrieval component of RAG (Retrieval-Augmented Generation) architectures. But as organizations deploy these systems at scale, many discover an uncomfortable truth: semantic search often fails in ways that are hard to predict and even harder to debug. ...

11 min · 2169 words
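The analogy arithmetic described above ("king" minus "man" plus "woman" landing near "queen") can be sketched with a few hand-picked toy vectors. Note these 4-dimensional vectors are illustrative assumptions, not learned embeddings; real word2vec vectors have hundreds of dimensions and are trained on large corpora, but the nearest-neighbor arithmetic is the same:

```python
import numpy as np

# Hand-picked toy "embeddings": dimension 0 ~ royalty, 1 ~ maleness,
# 2 ~ femaleness, 3 ~ unrelated noise. Real vectors are learned, not chosen.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "man":   np.array([0.1, 0.8, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),
    "apple": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: the standard ranking metric for embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# king - man + woman should land closest to queen.
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
best = max(
    (w for w in embeddings if w not in {"king", "man", "woman"}),
    key=lambda w: cosine(target, embeddings[w]),
)
print(best)  # -> queen
```

With these toy vectors the offset lands exactly on "queen"; with learned embeddings the match is only approximate, which is one source of the unpredictable failures the article goes on to examine.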

How Search Engines Find a Needle in a 400-Billion-Document Haystack

When you type a query and hit enter, results appear in under half a second. Behind that instant response lies an engineering marvel: a system that must search through hundreds of billions of documents, score each one for relevance, and return the best matches—all before you can blink. The numbers are staggering. Google’s index contains approximately 400 billion documents according to testimony from their VP of Search during the 2023 antitrust trial. The index itself exceeds 100 million gigabytes. Yet the median response time for a search query remains under 200 milliseconds. ...

9 min · 1860 words