I’m Dutch by birth, but lately, even though I haven’t been there, I’ve developed a strong affinity for Denmark. Probably the fact that KPMG’s Global Quantum Hub is based in Copenhagen has something to do with that—I’ve been able to build some wonderful professional friendships as a result. But there is another connection between our two countries. The 20th-century Danish physicist, mathematician and poet Piet Hein is a direct descendant of the 17th-century Dutch admiral Piet Pieterszoon Hein, a hero of the Eighty Years’ War between the Netherlands and Spain whose name is memorialized in children’s songs to this day.
Piet Hein (the younger) was a contemporary of Niels Bohr and enjoyed “playing mental ping-pong” with that father of quantum physics during the 1920s and 1930s. I wonder if he was anticipating the problems of noise-induced errors in quantum computing when he penned the following lines:
The road to wisdom?—Well, it’s plain
and simple to express:
Err
and err
and err again
but less
and less
and less.1
I’ve written before about the concept of noise in quantum computers—anything from temperature fluctuations to cosmic radiation—that can cause qubits to decohere and therefore produce errors in the computer’s calculations. In the same post, I introduced the notions of quantum advantage and quantum supremacy—the to-date-elusive cases where a quantum computer might be demonstrably superior to a classical computer. Now I’d like to elaborate on both those themes and introduce some new developments in the field that may accelerate the arrival of useful quantum computing, possibly all the way to the present day.
Attenuate the negative
Controversial claims have already been made for both quantum advantage and quantum supremacy, but the consensus is that neither has yet been clearly demonstrated in any useful way. The claims have typically involved extremely abstract, even contrived, problems in mathematics or physics that have turned out to have no scientific or commercial value whatsoever.
This brings us to a new term, recently introduced by a large quantum computer manufacturer: quantum utility. The idea behind quantum utility is simple: we shouldn’t have to wait years for advantage or supremacy to be realized if the current generation of quantum computers can be made to perform useful tasks now. They may solve some problems slightly better than today’s classical computers, and classical methods can then be improved to catch up with the quantum results and check them. This back-and-forth continues until the quantum results reach a sufficiently high degree of confidence. At that point, quantum utility is achieved: a quantum computer outperforms its classical counterparts in speed or efficiency of computation, or both. Quantum utility means working with the good enough now, while we continue our efforts to achieve the much better, or the best, later.
The obstacle to achieving quantum utility (let alone advantage or supremacy) is still the aforementioned nasty problem of noise causing errors in quantum systems. To combat errors, quantum manufacturers have taken two approaches. The first is trying to avoid errors by shielding qubits from noise—using protective barriers to block cosmic radiation, or super-cooling the quantum computer to temperatures near absolute zero. (I’ve even heard of researchers going so far as to place their quantum lab at the bottom of a mine shaft, finding that the earth’s crust provides some natural shielding of qubits.) This has had limited effectiveness so far, so the other technique is building in fault tolerance—for example, encoding a single logical qubit across many physical qubits, so that errors affecting a few of the physical qubits can be detected and corrected and the right result still emerges. The problem with this is that so many physical qubits are required for a single logical qubit that quantum computers will need to scale up to hundreds of thousands or even millions of qubits before they will be fully fault-tolerant and able to accomplish groundbreaking tasks—and this is years, maybe decades, in the future.
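To get an intuition for why so many physical qubits are needed, here is a deliberately over-simplified sketch in Python (my own illustrative analogy, not anything from the manufacturers): a “logical” bit stored as many noisy copies and decoded by majority vote. Real quantum error correction is far subtler, since qubits cannot simply be copied, but the redundancy intuition is the same.

```python
import random

def majority_vote(bits):
    """Decode a repetition-coded 'logical' bit by majority vote."""
    return int(sum(bits) > len(bits) / 2)

def logical_error_rate(logical_bit, n_physical, flip_prob, n_trials=100_000):
    """Estimate how often the decoded bit is wrong when each physical
    copy flips independently with probability flip_prob."""
    errors = 0
    for _ in range(n_trials):
        copies = [logical_bit ^ (random.random() < flip_prob)
                  for _ in range(n_physical)]
        if majority_vote(copies) != logical_bit:
            errors += 1
    return errors / n_trials

# One physical copy is wrong about 10% of the time; nine copies decoded
# by majority vote are wrong far less than 1% of the time.
print(logical_error_rate(logical_bit=1, n_physical=1, flip_prob=0.10))
print(logical_error_rate(logical_bit=1, n_physical=9, flip_prob=0.10))
```

Driving the decoded error rate low enough for long, groundbreaking computations takes a great deal of redundancy, which is why the physical qubit counts needed for full fault tolerance are so daunting.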
In June 2023, the same quantum manufacturer who introduced the concept of quantum utility also published a paper suggesting that it could be achieved using a technique they call error mitigation. This is a fascinating approach to managing quantum errors. It recognizes that noise will always be there—but we can use what we know about the noise to detect patterns and then mathematically undo the effects of noise on qubits to get back to the correct solution. Three notable approaches to error mitigation are Zero-Noise Extrapolation, Probabilistic Error Cancellation and Post-Selection Error Mitigation.
Listen down
Let’s start with Zero-Noise Extrapolation. If you remember your high school algebra, think of a graph with the level of noise on the X axis and the computed result on the Y axis. Then, for a certain level of noise, x, we can plot the observed erroneous result, y. Next, we amplify the noise level in our quantum system, increasing the value of x to x1, x2, and so on, moving right along the graph. This could be done, for example, by removing some of the shielding or temperature controls; in practice it is usually done in software, by stretching or repeating gate operations so that their noise is amplified by a known factor. For each new noise level, we can observe a new erroneous result y1, y2, etc. and plot these values on the graph. Eventually, a function begins to take shape—possibly some kind of polynomial or exponential curve. With enough data points, we’ll have sufficient confidence in the shape of the curve that we can turn around and extrapolate what would happen if the value of x decreased toward zero—in other words, if we were able to reduce or eliminate the noise. The extrapolated value of y when x reaches zero is our estimate of the correct answer to the quantum calculation. Zero-Noise Extrapolation has the advantage of being easy to execute with low overhead. But extrapolation is not always exact, so we end up with a very good, but not necessarily perfect, solution.
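Here is a minimal sketch of that extrapolation step in Python. The measured values are synthetic stand-ins for results observed at each noise level, and the quadratic fit is just one plausible choice of curve:

```python
import numpy as np

# Noise scale factors: 1.0 is the hardware's native noise level; larger
# values mean the noise has been deliberately amplified.
scale_factors = np.array([1.0, 1.5, 2.0, 3.0])

# Expectation values observed at each noise level (synthetic numbers,
# standing in for repeated runs of the same circuit on real hardware).
noisy_results = np.array([0.81, 0.74, 0.68, 0.57])

# Fit a simple curve (a quadratic here) to result vs. noise level...
coeffs = np.polyfit(scale_factors, noisy_results, deg=2)

# ...and extrapolate back to a noise level of zero.
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(f"Estimated noise-free result: {zero_noise_estimate:.3f}")
```

The quality of the estimate depends entirely on how well the chosen curve captures the true relationship between noise and result, which is exactly why the method is very good but not guaranteed to be perfect.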
Probabilistic Error Cancellation is a lot like putting a pair of noise-cancelling headphones over our sensitive qubits. Think back to high school physics, where you learned that sound travels through the air in a wave pattern with regular peaks and troughs. Noise-cancelling headphones contain microphones that detect external sounds, and then cause the speaker to generate the same sound but with the peaks and troughs shifted by half a wavelength so that the two sound waves cancel each other out. Similarly, in a quantum system, we can perform sampling to understand the characteristics and distribution of the noise. Then, we effectively insert extra gate operations, sampled in just the right mixture so that on average they act as the inverse of the observed noise, cancelling it out. The technique works well, but it does come at a cost of significant overhead: the noise must first be observed and analyzed, and the number of runs needed to average away the sampled corrections grows rapidly with the size of the circuit. And, just like noise-cancelling headphones that work well on constant sounds like an airplane engine but don’t handle high frequencies or variable sounds like voices particularly well—probabilistic error cancellation is good on average but is less effective the more the noise in the quantum system varies over time.
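For the mathematically inclined, here is a small self-contained Python sketch of the underlying trick for a single qubit with a simple depolarizing noise model. The noise model and all the numbers are illustrative assumptions on my part, not taken from the paper; the point is only to show how sign-weighted, randomly sampled corrections can cancel noise on average:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: we want <Z> of a single qubit whose ideal value is +1, but
# depolarizing noise shrinks the measurable value by a factor lam < 1.
ideal_z = 1.0
error_prob = 0.15                    # chance of a random Pauli error
lam = 1.0 - 4.0 * error_prob / 3.0   # resulting shrink factor on <Z>

# Quasi-probability weights of the *inverse* noise channel over the
# correction operations {I, X, Y, Z}. They sum to 1, but the X/Y/Z weights
# are negative, which is what lets sampled corrections cancel the noise.
c_identity = (1.0 + 3.0 / lam) / 4.0
c_pauli = (1.0 - 1.0 / lam) / 4.0            # shared (negative) weight for X, Y, Z
coeffs = np.array([c_identity, c_pauli, c_pauli, c_pauli])   # order: I, X, Y, Z
gamma = np.abs(coeffs).sum()                  # sampling overhead factor
probs = np.abs(coeffs) / gamma
signs = np.sign(coeffs)

# Applying I or Z before the final Z measurement leaves <Z> alone;
# applying X or Y flips its sign.
z_flip = np.array([+1.0, -1.0, -1.0, +1.0])

def pec_estimate(n_shots=200_000):
    """Monte Carlo estimate of the noise-free <Z> from sampled corrections."""
    ops = rng.choice(4, size=n_shots, p=probs)
    # <Z> seen by the measurement on each shot, after noise plus correction.
    z_after = lam * ideal_z * z_flip[ops]
    outcomes = np.where(rng.random(n_shots) < (1.0 + z_after) / 2.0, 1.0, -1.0)
    # Reweight each outcome by the sign and overhead of its sampled correction.
    return np.mean(gamma * signs[ops] * outcomes)

print(f"Noisy <Z> without mitigation: {lam * ideal_z:.3f}")
print(f"PEC estimate of ideal <Z>:    {pec_estimate():.3f}")
```

The factor gamma is the price of admission: the larger the noise, the larger gamma becomes, and the more shots are needed to average the corrections down to a reliable answer.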
Post-Selection Error Mitigation implies that we have some idea of what the result of the qubit operations in a quantum calculation should be. We figure this out by running experimental calculations on small quantum computer systems and correlating the results to the same calculations on a classical system that is not affected by quantum noise. Once we have this baseline, we can scale up and run larger calculations on larger numbers of qubits—and select results from the qubits that meet our expectations, throwing out the rest. Post-Selection Error Mitigation can be done with low overhead but for the moment is still difficult to generalize beyond specific types of quantum circuits.
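One common flavor of this idea is to discard measurement shots that violate something we know must hold about the correct answer, such as a conserved quantity. Here is a minimal Python sketch along those lines; the four-qubit counts and the rule that a valid outcome must contain exactly two 1s are invented for illustration:

```python
from collections import Counter

# Made-up measurement counts from a noisy 4-qubit experiment where, by
# construction, a correct outcome must contain exactly two 1s (for example
# because the circuit conserves the number of excitations).
raw_counts = Counter({
    "0011": 412, "0101": 389, "1010": 371, "1100": 405,   # consistent outcomes
    "0001": 57,  "0111": 48,  "1000": 61,  "1111": 22,    # violate the constraint
})

def post_select(counts, expected_ones=2):
    """Keep only bitstrings consistent with what we expect; drop the rest."""
    return {b: n for b, n in counts.items() if b.count("1") == expected_ones}

kept = post_select(raw_counts)
discarded = sum(raw_counts.values()) - sum(kept.values())
total = sum(kept.values())

print(f"Discarded {discarded} of {sum(raw_counts.values())} shots")
for bitstring, n in sorted(kept.items()):
    print(f"  {bitstring}: {n / total:.3f}")
```

The filtering itself is cheap, which is why the overhead is low; the hard part is knowing, for a given circuit, what the expected signature of a correct result actually is.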
Mid-way there
These three error mitigation techniques provide another example of the good-enough approach. We haven’t achieved perfection by eliminating noise or building systems large and robust enough that they can fully tolerate noise. We have, however, been able to use creative mathematical techniques to compensate for the noise, allowing small or medium-sized quantum computers to work at their full capacity and deliver adequate results. That June 2023 paper suggests that a commercially available 127-qubit quantum computer, using Zero-Noise Extrapolation, can outperform a classical computer in solving a particular, albeit simplified, problem in condensed-matter physics—which, unlike the earlier claims made for quantum supremacy, has real-world applicability in materials science, engineering and chemistry. In July, a follow-up paper was published demonstrating quantum utility again, this time solving a problem in statistical probability distributions. With a few hundred, or maybe a thousand, qubits, it’s expected that these quantum implementations can scale up to solve even more complicated and more useful problems—and this will be feasible in a matter of months rather than years.
Good enough is, indeed, good enough—for now. As we continue our long-term quest for quantum advantage and supremacy, we know that quantum computers will err, and err and err again. Using error mitigation, we can achieve quantum utility now by making the effects of the errors less, and less, and less. Piet Hein would be proud.