In 1994, mathematician Peter Shor published an algorithm that would factor large integers in polynomial time, far faster than the sub-exponential running time of the best known classical methods. The cryptography community took notice—most of the world’s encrypted communications relied on the assumption that factoring large numbers was computationally intractable. Shor hadn’t built a quantum computer. He had merely proven that if one could be constructed, much of modern security infrastructure would crumble.

Three decades later, quantum computers exist. They factor numbers, simulate molecules, and solve optimization problems. Yet they haven’t broken RSA encryption. The gap between having quantum computers and having useful quantum computers reveals something fundamental about the technology: quantum computing isn’t simply a faster version of classical computing. It’s an entirely different paradigm with its own physics, its own constraints, and its own challenges.

What Makes a Qubit Different from a Bit

A classical bit is binary—it’s either 0 or 1. A qubit, by contrast, can exist in a superposition of both states simultaneously. This is often misrepresented as “being in two states at once,” which suggests some kind of parallel exploration. The reality is more subtle.

A qubit’s state is described by a wave function:

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$$

where $\alpha$ and $\beta$ are complex numbers satisfying $|\alpha|^2 + |\beta|^2 = 1$. When you measure the qubit, you get 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$. The measurement destroys the superposition—the qubit collapses to a definite classical state.
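The Born rule above can be checked numerically. The following is a minimal NumPy sketch (not tied to any quantum SDK): repeatedly sampling measurements of the equal-superposition state $\alpha = \beta = 1/\sqrt{2}$ reproduces the predicted 50/50 statistics.

```python
import numpy as np

# Qubit state |psi> = alpha|0> + beta|1> with |alpha|^2 + |beta|^2 = 1
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
probs = np.array([abs(alpha) ** 2, abs(beta) ** 2])  # Born rule

# Each measurement collapses the qubit to 0 or 1 with these probabilities
rng = np.random.default_rng(seed=42)
outcomes = rng.choice([0, 1], size=10_000, p=probs)

print(f"P(0) ~ {np.mean(outcomes == 0):.3f}, P(1) ~ {np.mean(outcomes == 1):.3f}")
```

Note that the superposition itself never appears in the samples: each individual measurement yields a definite 0 or 1, and only the statistics reveal the amplitudes.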

The Bloch sphere provides a geometric representation of this state. Any point on the surface represents a valid pure qubit state. The north pole is $|0\rangle$, the south pole is $|1\rangle$, and the equator contains states like $(|0\rangle + |1\rangle)/\sqrt{2}$ where both outcomes are equally likely.

```mermaid
graph TB
    subgraph BlochSphere["Bloch Sphere Representation"]
        N["|0⟩ (North Pole)"]
        S["|1⟩ (South Pole)"]
        E["Equator: Superposition States"]
        N --- E
        S --- E
    end
```

This geometric view reveals something important: superposition isn’t about “being in two places at once.” It’s about the qubit’s state vector pointing in a direction that has components along both the 0 and 1 axes. The measurement process projects this vector onto one of the poles.
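The mapping from amplitudes to a point on the sphere can be computed directly. Below is a small sketch (the helper `bloch_angles` is hypothetical, introduced here for illustration) using the standard parametrization $|\psi\rangle = \cos(\theta/2)|0\rangle + e^{i\phi}\sin(\theta/2)|1\rangle$:

```python
import numpy as np

# Bloch angles for a pure state alpha|0> + beta|1>, global phase removed:
# |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>
def bloch_angles(alpha: complex, beta: complex) -> tuple[float, float]:
    phase = np.exp(-1j * np.angle(alpha))    # rotate so alpha is real >= 0
    a, b = alpha * phase, beta * phase
    theta = 2 * np.arccos(np.clip(a.real, -1.0, 1.0))
    phi = np.angle(b) if abs(b) > 1e-12 else 0.0
    return float(theta), float(phi)

# Equal superposition (|0> + |1>)/sqrt(2) sits on the equator: theta = pi/2
theta, phi = bloch_angles(1 / np.sqrt(2), 1 / np.sqrt(2))
print(theta, phi)  # theta ~ pi/2, phi = 0
```

As expected, $|0\rangle$ maps to $\theta = 0$ (north pole), $|1\rangle$ to $\theta = \pi$ (south pole), and equal superpositions to the equator.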

The Real Power: Entanglement and Interference

A single qubit in superposition isn’t particularly useful. The computational advantage emerges when qubits interact through entanglement and when quantum gates manipulate probability amplitudes through interference.

When two qubits become entangled, their states can no longer be described independently. A Bell state like:

$$|\Phi^+\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$

has the property that measuring either qubit immediately determines the measurement outcome of the other. This correlation, which Einstein famously called “spooky action at a distance,” cannot be used to send signals faster than light, but it enables quantum algorithms to create complex interference patterns across many qubits simultaneously.

Quantum gates manipulate these amplitudes. The Hadamard gate, for instance, transforms $|0\rangle$ into $(|0\rangle + |1\rangle)/\sqrt{2}$ and $|1\rangle$ into $(|0\rangle - |1\rangle)/\sqrt{2}$. The CNOT gate entangles two qubits by flipping a target qubit conditional on the control qubit’s state.

```mermaid
graph LR
    subgraph QuantumCircuit["Bell State Creation Circuit"]
        Q0["q₀: |0⟩"] --> H["H"] --> CNOT
        Q1["q₁: |0⟩"] --> CNOT
        CNOT --> Output["(|00⟩ + |11⟩)/√2"]
    end
```
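The Bell-state circuit can be reproduced with plain NumPy matrices (a pedagogical sketch using explicit gate matrices, not a quantum SDK):

```python
import numpy as np

# Single-qubit gates and the two-qubit CNOT as explicit matrices
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                  # control = qubit 0

# Start in |00>, apply H to qubit 0, then CNOT
state = np.kron([1, 0], [1, 0])                  # |00>
state = np.kron(H, I) @ state                    # (|00> + |10>)/sqrt(2)
state = CNOT @ state                             # (|00> + |11>)/sqrt(2)

print(np.round(state, 3))  # amplitudes ~0.707 on |00> and |11>
```

Note that the final state cannot be written as a product of two single-qubit states—that is precisely what makes it entangled.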

The key insight is that quantum algorithms don’t “try all possibilities in parallel.” Instead, they orchestrate interference—constructive interference amplifies correct answers while destructive interference cancels wrong ones. The algorithm designer must carefully choreograph this interference pattern; it doesn’t happen automatically.

Shor’s Algorithm: How It Actually Works

Shor’s algorithm exemplifies this principle. The problem is factoring a composite number N into its prime factors. Classically, the best known algorithms run in sub-exponential time—slow enough that 2048-bit RSA keys remain secure.

Shor’s approach converts factoring into a period-finding problem. Suppose you can find the period r of the function $f(x) = a^x \bmod N$ for some a coprime to N. If r is even and $a^{r/2} \not\equiv -1 \pmod{N}$—which holds with high probability for a randomly chosen a—then $\gcd(a^{r/2} - 1, N)$ and $\gcd(a^{r/2} + 1, N)$ yield non-trivial factors of N.
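The classical post-processing can be verified on a toy example. In this sketch the period is found by brute force, which is exactly the step a quantum computer would accelerate:

```python
from math import gcd

def find_period(a: int, N: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod N) -- brute force, classical."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7                     # toy example: factor 15 with base a = 7
r = find_period(a, N)            # 7^4 = 2401 = 1 (mod 15), so r = 4
p = gcd(pow(a, r // 2) - 1, N)   # gcd(48, 15) = 3
q = gcd(pow(a, r // 2) + 1, N)   # gcd(50, 15) = 5
print(r, p, q)  # 4 3 5
```

The brute-force loop takes time proportional to r, which for cryptographic-size N is astronomically large—hence the need for the quantum period-finding step.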

The quantum part uses the Quantum Fourier Transform (QFT) to find this period. The QFT creates interference that concentrates probability amplitude on values related to the period. When you measure, you get a value from which the period can be extracted classically.

The speedup comes from the QFT’s ability to process an exponential number of function values simultaneously—not by evaluating them all, but by creating the right interference pattern that encodes the period information.
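The interference pattern can be previewed classically with a discrete Fourier transform (here NumPy’s FFT stands in for the QFT on a toy post-measurement state): a state with equal amplitude at positions spaced r apart transforms into sharp peaks at multiples of M/r, from which r can be read off.

```python
import numpy as np

M, r, offset = 64, 8, 3          # 64 basis states, hidden period r = 8

# Toy post-measurement state: equal amplitude on x = offset, offset+r, ...
state = np.zeros(M)
state[offset::r] = 1
state /= np.linalg.norm(state)

# The Fourier transform concentrates amplitude on multiples of M/r
spectrum = np.abs(np.fft.fft(state)) ** 2
peaks = np.sort(np.argsort(spectrum)[-r:])
print(peaks)  # multiples of M/r = 8: 0, 8, 16, ..., 56
```

Measuring such a state returns one of these peak values; a few repetitions plus continued-fraction arithmetic recover the period r classically.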

Why Quantum Computers Are Hard to Build

Understanding why quantum computers remain rare despite three decades of development requires appreciating several physical challenges that have no classical analog.

Decoherence: Quantum states are extraordinarily fragile. Any interaction with the environment—thermal photons, electromagnetic fields, even cosmic rays—causes the qubit to lose its quantum properties. This is why most quantum processors operate at temperatures near absolute zero (around 15 millikelvins for superconducting qubits). Even at these temperatures, coherence times typically range from microseconds to milliseconds.

Gate Errors: Every quantum gate operation introduces errors. Current superconducting qubits achieve single-qubit gate fidelities around 99.9% and two-qubit gate fidelities around 99.5%. This sounds impressive, but a useful quantum algorithm might require millions of gates. At 99.9% fidelity per gate, a circuit with 1000 gates would have roughly a 37% chance of completing without error.
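The arithmetic behind that estimate is worth making explicit (a back-of-the-envelope sketch assuming independent, uncorrelated gate errors):

```python
# Probability that every gate in a circuit succeeds, assuming
# independent errors at a fixed per-gate fidelity
fidelity = 0.999
for gates in (100, 1_000, 10_000):
    p_success = fidelity ** gates
    print(f"{gates:>6} gates -> {p_success:.1%} chance of no error")
# 1,000 gates at 99.9% fidelity gives ~36.8%, close to 1/e
```

At 10,000 gates the success probability collapses to a fraction of a percent, which is why fault tolerance, not raw fidelity alone, is the path to long computations.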

Scalability: Adding more qubits doesn’t linearly increase computational power. Each additional qubit must maintain coherence, interact selectively with other qubits, and be individually controllable. The control electronics, wiring, and cooling infrastructure scale poorly.

Quantum Error Correction: The Path to Fault Tolerance

Classical computers handle errors through redundancy—if one bit flips, error-correcting codes can recover the original data. Quantum error correction follows a similar principle but faces additional constraints imposed by quantum mechanics.

The no-cloning theorem proves that you cannot create perfect copies of an unknown quantum state. Instead, quantum error correction encodes one logical qubit across many physical qubits. The surface code, currently the leading approach, arranges physical qubits in a 2D lattice where neighboring qubits perform parity checks.

```mermaid
graph TB
    subgraph SurfaceCode["Surface Code: 1 Logical Qubit from Many Physical Qubits"]
        D1["Data Qubit"] --- D2["Data Qubit"] --- D3["Data Qubit"]
        D1 --- D4["Data Qubit"]
        D2 --- D5["Data Qubit"] --- D6["Data Qubit"]
        D3 --- D6
        D4 --- D7["Data Qubit"]
        D5 --- D8["Data Qubit"] --- D9["Data Qubit"]
        D6 --- D9
        D7 --- D8
        M1["Measure X"] -.-> D2
        M2["Measure Z"] -.-> D5
    end
```
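The parity-check idea can be illustrated with the simplest quantum code, the three-qubit bit-flip code. The snippet below is a classical simulation of the syndrome logic only (the surface code generalizes this to a 2D lattice with both X- and Z-type checks, and handles phase errors too):

```python
import numpy as np

# Three-qubit bit-flip code: logical |0> -> |000>, logical |1> -> |111>.
# Two parity checks locate a single flipped qubit without reading out
# -- and hence destroying -- the encoded logical state.
def syndrome(bits: np.ndarray) -> tuple:
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits: np.ndarray) -> np.ndarray:
    fix = {(1, 0): 0, (1, 1): 1, (0, 1): 2}   # syndrome -> flipped qubit
    s = syndrome(bits)
    out = bits.copy()
    if s in fix:
        out[fix[s]] ^= 1
    return out

encoded = np.array([0, 0, 0])       # logical |0>
noisy = encoded.copy()
noisy[1] ^= 1                       # a single bit-flip error on qubit 1
print(syndrome(noisy), correct(noisy))  # syndrome (1, 1) -> recover [0 0 0]
```

The key point mirrors the surface code: the parity checks reveal where an error occurred without revealing the encoded data, so correction can proceed without collapsing the logical qubit.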

The overhead is substantial. Creating one fault-tolerant logical qubit might require 1,000 to 10,000 physical qubits, depending on the error rate. This explains why current quantum computers with tens to a few hundred physical qubits can’t yet run algorithms like Shor’s at useful scales.

In December 2024, Google demonstrated with its Willow processor that adding more physical qubits to a logical qubit actually reduces the logical error rate—a critical milestone called “error correction below threshold.” This doesn’t mean practical quantum computing has arrived, but it proves the principle works.

The NISQ Era: What We Have Now

Current quantum computers operate in what researchers call the Noisy Intermediate-Scale Quantum (NISQ) era. These devices have:

  • 50 to a few hundred qubits
  • No full error correction
  • Limited coherence times
  • Gate fidelities insufficient for long circuits

NISQ computers can still provide value for specific problems. Variational Quantum Eigensolvers (VQEs) combine short quantum circuits with classical optimization to find ground states of molecules. Quantum annealers tackle optimization problems through adiabatic evolution. These approaches trade theoretical guarantees for practical utility on imperfect hardware.
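The VQE idea can be sketched for a single qubit. This is a classical simulation with a hypothetical one-parameter ansatz and a grid search standing in for the optimizer; real implementations evaluate the energy on hardware and feed it to a classical optimization loop:

```python
import numpy as np

# Toy Hamiltonian H = Z + 0.5 X; its exact ground energy is -sqrt(1.25)
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
H = Z + 0.5 * X

def energy(theta: float) -> float:
    # Ansatz: |psi(theta)> = Ry(theta)|0> = [cos(t/2), sin(t/2)]
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return float(psi @ H @ psi)

# Classical outer loop: here a simple grid search over the parameter
thetas = np.linspace(0, 2 * np.pi, 1000)
best = min(thetas, key=energy)
print(energy(best), -np.sqrt(1.25))  # variational minimum vs exact value
```

The quantum circuit stays short—just the ansatz—which is what makes the approach viable on noisy hardware; the heavy lifting of the search happens classically.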

IBM, Google, IonQ, and others publish roadmaps projecting thousands of qubits by the late 2020s and fault-tolerant systems by the early 2030s. These projections assume continued progress in qubit quality, not just quantity. Adding more noisy qubits doesn’t help if error rates don’t improve.

Different Qubit Technologies

Not all qubits are created equal. The leading physical implementations each have distinct trade-offs:

Superconducting qubits (used by IBM, Google) are fabricated on silicon chips using Josephson junctions—two superconductors separated by a thin insulator. They offer fast gate operations (nanoseconds) but short coherence times (microseconds) and require millikelvin temperatures.

Trapped ions (used by IonQ, Quantinuum) confine individual charged atoms in electromagnetic fields and manipulate their internal states with lasers. They offer long coherence times (seconds to minutes) and high gate fidelities but slower operations (microseconds) and challenges with scaling.

Neutral atoms (used by QuEra, Pasqal) trap uncharged atoms in optical tweezers created by focused laser beams. They can pack thousands of qubits in a small volume but face challenges with individual control and measurement.

Photonic qubits (pursued by PsiQuantum) encode information in properties of photons. They can operate at room temperature and are naturally suited for quantum communication, but creating deterministic two-qubit gates remains difficult.

No technology has emerged as clearly superior. The “right” choice depends on the application, and hybrid approaches may eventually combine different modalities.

What Quantum Computers Will Actually Be Used For

The most promising applications fall into three categories:

Simulation of quantum systems: Nature is quantum mechanical. Simulating molecular behavior, materials properties, or chemical reactions requires tracking exponentially many quantum states. A quantum computer naturally operates in this regime. Drug discovery, catalyst design, and materials science could see substantial benefits.

Optimization: Many industrial problems—logistics, finance, scheduling—can be framed as optimization. Quantum algorithms like QAOA (Quantum Approximate Optimization Algorithm) might provide speedups, though evidence remains mixed for practical instances.

Cryptography: Shor’s algorithm can break RSA and elliptic curve cryptography. This isn’t imminent—breaking RSA-2048 would require millions of physical qubits with error rates far below current levels—but it’s real enough that NIST finalized post-quantum cryptography standards in 2024.

The much-hyped applications like breaking all encryption instantly or solving every optimization problem faster are, at best, premature. Quantum advantage will likely emerge first in specialized scientific computing before broader applicability.

The Fundamental Lesson

Quantum computing forces us to confront how differently information processing works at the quantum scale. The speedup isn’t from trying more possibilities faster—it’s from encoding problems in quantum amplitudes and manipulating interference patterns to extract answers.

This means quantum computers won’t replace classical computers for most tasks. Your laptop runs spreadsheets perfectly well. Quantum computers excel at specific problems where quantum mechanics provides a computational advantage that classical physics cannot match.

Building these machines requires solving some of the most challenging engineering problems in human history: maintaining quantum coherence at scale, implementing fault-tolerant error correction, and developing algorithms that exploit quantum interference. Progress has been steady but slow. The field has learned that adding more qubits is the easy part—making them reliable enough to run useful computations is the hard part.

When Richard Feynman proposed quantum computing in 1982, he asked whether a quantum system could efficiently simulate other quantum systems. The answer, we now know, is yes—but building such a simulator requires precision engineering at the boundary of what’s physically possible. The promise of quantum computing remains real, but the path to that promise runs through decades of incremental advances in physics, engineering, and computer science.


References

  1. Shor, P. W. (1994). “Algorithms for quantum computation: discrete logarithms and factoring.” Proceedings 35th Annual Symposium on Foundations of Computer Science.

  2. Feynman, R. P. (1982). “Simulating physics with computers.” International Journal of Theoretical Physics, 21(6-7), 467-488.

  3. Preskill, J. (2018). “Quantum Computing in the NISQ era and beyond.” Quantum, 2, 79.

  4. Fowler, A. G., et al. (2012). “Surface codes: Towards practical large-scale quantum computation.” Physical Review A, 86(3), 032324.

  5. Google Quantum AI (2024). “Quantum error correction below the surface code threshold.” Nature.

  6. IBM Quantum (2025). “IBM Quantum Roadmap.” IBM Corporation.

  7. Nielsen, M. A., & Chuang, I. L. (2010). Quantum Computation and Quantum Information. Cambridge University Press.

  8. Arute, F., et al. (2019). “Quantum supremacy using a programmable superconducting processor.” Nature, 574(7779), 505-510.

  9. NIST (2024). “NIST Releases First 3 Finalized Post-Quantum Encryption Standards.” National Institute of Standards and Technology.

  10. Bravyi, S., & Kitaev, A. (2005). “Universal quantum computation with ideal Clifford gates and noisy ancillas.” Physical Review A, 71(2), 022316.