The Wind in the Willow

Stefano Gogioso · December 11, 2024

In this post, we briefly react to Google’s Willow Announcement. TL;DR:

  • 😴 Tired: The RCS benchmark is a useless problem.
  • 💯 Based: 2x qubits and 5x longer coherence times vs Google’s previous chip.
  • 🚀 Ascended: Experimental demonstration that scalable quantum computing is possible.

The Tired Part 😴

The Random Circuit Sampling (RCS) benchmarking problem is practically useless:

“Look! My quantum computer is doing quantum computer stuff!”

In a nutshell, the benchmark asks a quantum device to sample measurement outcomes from a randomly chosen quantum circuit, approximating that circuit’s output distribution. By design, RCS is the easiest thing a quantum computer can do, and it has no practical applications.
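
To make this concrete, here is a toy brute-force illustration in Python/NumPy. This is emphatically not Google’s circuit family (and real RCS experiments verify samples via cross-entropy benchmarking rather than reconstructing the full distribution), but it shows the shape of the task: run a random circuit, then sample bitstrings from its output distribution.

    import numpy as np

    rng = np.random.default_rng(0)

    def haar_random_1q() -> np.ndarray:
        """Haar-random 2x2 unitary (QR decomposition of a Ginibre matrix)."""
        z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        q, r = np.linalg.qr(z)
        d = np.diag(r)
        return q * (d / np.abs(d))

    def apply_1q(state, u, q, n):
        """Apply single-qubit unitary u to qubit q of an n-qubit statevector."""
        t = state.reshape([2] * n)
        t = np.tensordot(u, t, axes=([1], [q]))
        return np.moveaxis(t, 0, q).reshape(-1)

    def apply_cz(state, q1, q2, n):
        """Apply a CZ gate: flip the sign of amplitudes where both qubits are 1."""
        t = state.reshape([2] * n).copy()
        idx = [slice(None)] * n
        idx[q1] = idx[q2] = 1
        t[tuple(idx)] *= -1
        return t.reshape(-1)

    def random_circuit_probs(n, depth):
        """Output distribution of a toy random circuit: layers of random
        single-qubit gates interleaved with a ladder of CZ gates."""
        state = np.zeros(2 ** n, dtype=complex)
        state[0] = 1.0  # start in |00...0>
        for _ in range(depth):
            for q in range(n):
                state = apply_1q(state, haar_random_1q(), q, n)
            for q in range(n - 1):
                state = apply_cz(state, q, q + 1, n)
        return np.abs(state) ** 2  # Born-rule probabilities

    probs = random_circuit_probs(n=5, depth=8)
    samples = rng.choice(2 ** 5, size=10, p=probs)
    print([format(int(s), "05b") for s in samples])  # bitstrings a QPU would emit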

The hardness of classical simulation is exponential in both the number of qubits and the fidelity: if you push this far enough, you get nonsensical numbers such as the 10 septillion years mentioned in Google’s Willow announcement. Framing this as “Willow solved a standard computation in <5 mins that would take a leading supercomputer over 10^25 years” is just showboating.
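
To see where such numbers come from, consider the memory footprint of brute-force statevector simulation alone. Tensor-network methods are far cleverer, trading memory for time and fidelity, so treat this purely as back-of-the-envelope intuition:

    # Back-of-the-envelope cost of brute-force statevector simulation:
    # an n-qubit state has 2**n complex amplitudes, 16 bytes each.
    def statevector_bytes(n_qubits: int) -> int:
        return 2 ** n_qubits * 16

    for n in (53, 105):  # Sycamore-scale vs Willow-scale qubit counts
        b = statevector_bytes(n)
        print(f"{n} qubits: {b:.3e} bytes (~{b / 1e15:.3g} PB)")

    # 53 qubits: ~144 PB -- already beyond any single machine's RAM.
    # 105 qubits: ~6.5e17 PB -- hence the astronomically large runtime claims.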

Google made similarly bombastic claims in their 2019 paper “Quantum supremacy using a programmable superconducting processor”, albeit with a slightly less nonsensical number of 10,000 years. Within 18 months, advancements in tensor contraction demonstrated that the very same task was tractable on classical hardware. For IBM’s 2023 claim in “Evidence for the utility of quantum computing before fault tolerance”, a classical simulation by hyper-optimized tensor contraction followed within two weeks. We’re not suggesting that this will happen again, but it is a caveat worth mentioning: at the very least, it is likely that quite a few orders of magnitude can be shaved off the claim in the coming years.

The Based Part 💯

The truly important takeaway from the RCS benchmarking results is that Google is on track with their quantum computing development roadmap:

  • number of physical qubits increased from 54 to 105 vs their previous chip
  • coherence times increased from 20μs to 100μs vs their previous chip

This is a genuinely admirable technical achievement, albeit an incremental one. It is both exciting and terrifying, depending on whom you ask:

  • 👍 For many scientific fields, this is the harbinger of a blossoming of quantum-enabled applications.
  • 👎 For crypto and other security-focussed communities, this is a wake-up call: migration to quantum-resistant primitives should not be delayed.

It is important to mention that Google is not alone in this race: quantum hardware leaders such as Quantinuum, IBM and QuEra are also steadily progressing on their roadmaps towards fault-tolerant quantum computing. Not to mention PsiQuantum, the wildcard of the quantum hardware industry.

For more information, see Google’s blog post “Validating random circuit sampling as a benchmark for measuring quantum progress”.

The Ascended Part 🚀

The truly monumental achievement is that Google demonstrated a quantum chip which can be operated below the “error correction threshold”. This is the first experimental demonstration that scalable quantum computing is possible.

Being “below the threshold” means that the more physical qubits we use to encode a single logical qubit, the better that logical qubit becomes. In Google’s experiment, 97 physical qubits (a distance-7 surface code) encoded one logical qubit with 2x the coherence time of the best individual physical qubit.
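
A rough sketch of the arithmetic, for the curious: a distance-d surface code uses d² data qubits plus d²−1 measurement qubits (2·7²−1 = 97 for d=7), and below threshold the logical error rate drops by a roughly constant factor Λ every time d grows by 2. Google reports Λ ≈ 2.14 for Willow; the starting error rate in the sketch below is illustrative, not measured data.

    # Sketch of below-threshold scaling for a distance-d surface code.
    # LAMBDA is the suppression factor reported for Willow; EPS_D3 is an
    # illustrative starting logical error rate, not a measured value.

    def physical_qubits(d: int) -> int:
        """d*d data qubits + (d*d - 1) measurement qubits."""
        return 2 * d * d - 1

    LAMBDA = 2.14   # error suppression per d -> d + 2 (reported for Willow)
    EPS_D3 = 3e-3   # hypothetical logical error per cycle at d = 3

    for d in (3, 5, 7, 9, 11):
        eps_d = EPS_D3 / LAMBDA ** ((d - 3) / 2)
        print(f"d={d:2d}: {physical_qubits(d):4d} physical qubits, "
              f"logical error/cycle ~ {eps_d:.1e}")

    # d=7 -> 97 physical qubits: the logical qubit Google demonstrated.
    # Below threshold, each step up in distance buys a better logical qubit.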

It took almost all physical qubits on the chip to assemble a single logical qubit, which loses its coherence 100x faster than the blink of an eye. But, from this moment forward, bigger chips will translate into better-quality logical qubits and, further down the line, into multiple high-quality logical qubits.

The fault-tolerant quantum computing race has officially started.

There are several important challenges still to overcome:

  • Increasing the number of qubits while keeping the same physical error rate is hard.
  • Performing error-corrected quantum gates on the logical qubits is hard.
  • Decoding errors fast enough on many physical qubits is hard (see the sketch after this list).
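
On that last point, here is a back-of-the-envelope sketch of what a real-time decoder is up against, assuming a surface code cycle time of roughly 1μs (typical for superconducting chips; all numbers are illustrative):

    # Rough syndrome-data rates a real-time decoder must absorb.
    # Assumes a ~1 microsecond error-correction cycle; illustrative only.

    CYCLE_TIME_S = 1e-6  # assumed cycle time

    def syndrome_bits_per_second(d: int, logical_qubits: int = 1) -> float:
        """Each cycle yields d*d - 1 stabilizer bits per logical qubit."""
        return logical_qubits * (d * d - 1) / CYCLE_TIME_S

    for d, n_logical in ((7, 1), (25, 100)):
        rate = syndrome_bits_per_second(d, n_logical)
        print(f"d={d}, {n_logical} logical qubit(s): "
              f"{rate / 1e9:.3f} Gbit/s of syndrome data")

    # The decoder must, on average, chew through this stream at least as
    # fast as it arrives, or the processing backlog grows without bound.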

That said, there’s already enormous potential in having a few high-quality logical qubits with limited or no logical gates: with a high-fidelity photonic interface, they could be used to construct a practical quantum memory. In turn, this would unlock some of the fancier applications on our own roadmap. Exciting times ahead 😎

For more information, see Google’s blog post “Making Quantum Error Correction Work”.
