
On the Quantum Computing Revolution
The centenary year of quantum mechanics is wrapping up; it is time to take stock of the quantum industry. It is a pivotal moment, and it is not entirely unreasonable to believe that, in the coming year, we will see an inflection in investments in the AI sphere, followed by renewed attention to quantum technologies. Before we definitively declare Quantum the “next big thing”, though, it is important to anchor our expectations to firm ground.
The quantum scene is more multifaceted than it may at first glance appear, with tides of hype not doing the best job of representing where the real opportunities are. Of course, the prospect of pioneering a new model of computation — with the hope of re-enacting success stories such as those of Microsoft, IBM, or Apple — sits at the back of the minds of many. There is, however, a fundamental difference with previous computing revolutions: We do not have definitive proof that quantum computers will outperform classical computers in the short term, at least not on computational tasks which we could characterise as broadly disruptive.
Mostly, the capital markets are buying into quantum computing because they are convinced (i) that these devices will be able to solve complex problems by trying all solutions at once, perhaps even in a “multiverse”, or (ii) that the exotic structure of quantum states will unlock more efficient machine learning, or (iii) that these devices will be successful at sampling high-quality solutions to hard optimisation problems of real-world relevance. Sadly, conviction (i) stems from a misunderstanding of the quantum computational model, conviction (ii) has been mostly disavowed for many machine-learning application domains, and conviction (iii) is premature at best, with little in terms of concrete evidence of potential short-term advantage on many interesting problem classes. What quantum computers can be used for — with good confidence that they will outperform classical solutions in the short term — are highly structured and very specific problems, a far cry from the looming multi-purpose computing revolution that some quantum computing manufacturers are trying to spin.
But, wait! — you say — Quantum computers can be used to factor large numbers!
This is an important use case, one where we have strong evidence — trillions of dollars worth of evidence, to be precise — that quantum technology provides exponential advantage over classical algorithms. But it is also a problem with a very regular structure, a structure which quantum algorithms can exploit to find solutions. To better understand this, we look at the discrete logarithm problem, a broader, but closely related, problem exhibiting the same phenomenon: Given a large integer $n$ with thousands of binary digits, as well as two integers $g, h$ modulo $n$, find a secret integer exponent $s$ such that $h = g^s \text{ mod } n$. The classical hardness of the discrete logarithm problem underpins the security of many contemporary cryptographic algorithms, because it implies that the function $s \mapsto g^s \text{ mod } n$ is effectively one-way (for $s$ sufficiently large to trigger modular reductions).
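To make the one-way asymmetry concrete, here is a toy sketch in Python, with hypothetical and cryptographically meaningless parameters: computing $g^s \text{ mod } n$ is a single fast modular exponentiation, while the best generic classical attack amounts to a search over exponents.

```python
# Toy illustration of the discrete logarithm problem. Parameters are
# hypothetical and far too small to be secure: real moduli have
# thousands of binary digits.
n = 1_000_003   # a small prime modulus
g = 5           # the public base
s = 271_828     # the secret exponent

# Forward direction: a single fast modular exponentiation.
h = pow(g, s, n)

# Inverse direction: a generic classical attack is essentially a search
# over exponents, exponential in the bit length of n.
def brute_force_dlog(g, h, n):
    acc = 1
    for candidate in range(n):
        if acc == h:
            return candidate
        acc = (acc * g) % n
    return None

# Recovering s is feasible here only because n is tiny.
assert brute_force_dlog(g, h, n) == s
```

At thousands of bits, the forward direction remains essentially instantaneous while the search becomes astronomically infeasible; that gap is what the security reduction rests on.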
Unlike cryptographic hash functions such as SHA — which are designed on principles of entropy and mixing — the discrete logarithm problem is difficult because it is regular, not because it is chaotic. The function $s \mapsto g^s \text{ mod } n$ has a strict, exact periodicity: For adequately chosen $n$, it possesses few asymmetries which could be exploited to solve the problem any more efficiently than checking a large fraction of the possible exponents. The smoothness of the structure conceals the information from any local probe, but quantum mechanics allows us to switch perspective, probing the global structure instead: Regular information shared across the space of possibilities becomes directly accessible, in a way which is computationally infeasible for classical machines. Indeed, many clever quantum algorithms follow this pattern, one way or another, by revealing global properties of highly structured mathematical objects which were carefully designed to look random at a local level.
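The global structure in question can be seen even at toy scale. The sketch below uses the textbook example $n = 15$, $g = 7$ to exhibit the exact period of $s \mapsto g^s \text{ mod } n$, and the classical post-processing by which Shor's algorithm turns that period into factors of $n$:

```python
from math import gcd

# Toy demonstration of the exact periodicity exploited by quantum
# period-finding, using the textbook example n = 15, g = 7.
n, g = 15, 7

# Multiplicative order r of g: the smallest r > 0 with g^r ≡ 1 (mod n).
r = next(k for k in range(1, n) if pow(g, k, n) == 1)
assert r == 4

# The sequence g^0, g^1, g^2, ... (mod n) repeats with exact period r.
assert [pow(g, s, n) for s in range(12)] == [1, 7, 4, 13] * 3

# Shor's algorithm extracts r as a global property of the whole sequence,
# then factors n classically via gcd(g^(r/2) ± 1, n).
factors = sorted(gcd(pow(g, r // 2) + e, n) for e in (-1, +1))
assert factors == [3, 5]
```

Classically, finding $r$ requires probing the sequence point by point; the quantum algorithm accesses the period directly, as a single global feature of the function.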
In Defence of Quantum Key Distribution
It is in cryptography, rather than computation, that quantum advantage has already been demonstrated: Not in the language of computational complexity, but in the language of randomness and information. While complexity theory is mostly concerned with the internal structure of computations, it is in the interactions between computations that a truly revolutionary use of quantum resources is revealed: By controlling communications between the parties involved, near-term quantum technology can be used to design cryptographic protocols with security guarantees unachievable by classical technology.
Unlike its quantum computational counterpart, the quantum cryptographic advantage is entirely non-speculative: It is mathematically proven, it has multiple implementations, and it is demonstrably within reach of near-term quantum technology. It exploits the fundamental randomness at the very heart of quantum physics — randomness which cannot be faked nor predicted — to create a whole new class of cryptographic resources, protocols, and applications. The best known and most widely deployed of those applications is Quantum Key Distribution (QKD), which is the topic of our discussion today.
Last year, a report by the NSA discouraged the use of QKD for the post-quantum era. Analogously, recent guidance by the DoD CIO for migration to post-quantum cryptography explicitly rules out procurement and deployment of “quantum confidentiality or keying technologies”, including QKD, as a means of achieving security. We group their objections into four broad categories below, which we then address individually:
- QKD Requires Special-Purpose Infrastructure. QKD deployments rely on quantum internet infrastructure: they need dedicated fibre or managed free-space links for communication, and specialised devices for cryptography. It is not something you can deploy as pure software on general-purpose classical hardware — reducing upgrade and patch flexibility — nor something you can easily drop into existing network infrastructure.
- QKD Is Hard to Secure in Practice. Real-world QKD security is limited by engineering and implementation, not just “physics”. Networks rely on trusted relays, with additional overheads and larger attack surfaces. Quantum devices are harder to validate in practice, and imperfections in design or manufacturing have led to vulnerabilities in deployed QKD systems.
- Greater Denial-of-Service Exposure. The point-to-point nature of quantum resource distribution and the high sensitivity of quantum networking hardware make it arguably easier to disrupt quantum communication at a distance, raising the risk of Denial-of-Service (DoS) attacks.
- QKD Is Only a Partial Solution. It is argued that QKD only produces shared key material for confidentiality purposes: It does not by itself guarantee that you’re talking to the desired counterparty, so you still need quantum-resistant public-key cryptography for source authentication across a network. In this scenario, the argument goes, the confidentiality goal can usually be met using the same quantum-resistant primitives used for authentication, at a lower price and with a risk profile which is better understood.
The objections above all contain a kernel of truth, with important warnings which should be heeded as we progress from theory to implementation. But, in their categorically negative posture, they also betray a misunderstanding of both the scope and the possibilities of modern quantum cryptography. Quantum Key Distribution is not something to be distrusted as almost mystical: It is the first example of a fundamentally new cryptographic paradigm. Such vigorous objections from highly influential institutions have the potential to significantly damage the progress of one of the most exciting use cases for quantum technologies in the near future, and should therefore be vigorously rebutted.
On Hardware Requirements for QKD
QKD cannot be implemented purely via software, because it represents a fundamental shift in the nature of computational resources. Quantum resources are physical objects — more like apples than like bits, to adopt a metaphor used by our very own Fabrizio Genovese at Devconnect. Unlike digital information, they cannot be copied, and hence require dedicated analog support for their storage and transmission. One might argue that, with sufficiently advanced molecular technology, an apple could be almost perfectly cloned, but for quantum resources this is a provable mathematical impossibility, not merely a limitation of engineering.
A certain degree of resistance to infrastructural changes is understandable, but when it comes to cryptography such a resistance is a historically losing position. Secret exchange has constantly evolved throughout history, and QKD — as an anti-tampering technology — bears many similarities to a forgotten art popular before the invention of the gummed envelope in the 1830s: Letterlocking, the practice of turning a letter itself into its own tamper-evident container, by carefully folding the paper. QKD achieves this “paper folding” in a way which is mathematically, unequivocally verifiable, and this is precisely what makes it so exciting.
On Practical Security of QKD
In real cryptographic systems, big mathematical ideas are rarely where things routinely break: More mundane details such as timing behaviour, detector response, or small uncontrolled correlations in the hardware provide most of the attack surface instead. Far from an objection, this is precisely what makes QKD research so exciting, and where a fundamental difference exists between old-school protocol designs and modern device-independent ones.
Device-independent protocols are designed to not require trust in the underlying devices: The certification of security is done at the level of observed correlations, not at the level of a putative physical description from which classical data is supposed to emanate. This removes all trust in the physical layer from the equation, pushing it instead to the controlled boundaries of the computation.
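As a concrete illustration of certification at the level of observed correlations, consider the CHSH test, the workhorse of device-independent protocols. The toy sketch below plugs in the ideal correlator for a maximally entangled pair (an assumption of the toy model, not a simulation of real devices) and recovers a value beyond the classical bound of 2, which no local classical description of the devices can reproduce:

```python
from math import cos, pi, isclose

# Ideal quantum correlator for measurements at angles x, y on a maximally
# entangled pair. Toy model: we assume E(x, y) = cos(x - y) directly,
# rather than simulating any underlying physical device.
def E(x, y):
    return cos(x - y)

a0, a1 = 0.0, pi / 2          # Alice's two measurement settings
b0, b1 = pi / 4, 3 * pi / 4   # Bob's two measurement settings

# CHSH combination: any classical (local hidden variable) model obeys S <= 2.
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)

# Ideal quantum correlations reach 2*sqrt(2) ≈ 2.828, violating the bound.
assert S > 2
assert isclose(S, 2 * 2 ** 0.5)
```

The point is that the certificate is the number $S$ itself, computed from observed input-output statistics: no statement about the inner workings of the devices is needed.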
The price we pay is shifted to sensitivity in the engineering: The margin for failure is less forgiving, and one relies more crucially on tight bounds in the parameter-verification part of the protocol. Quantum networking is subject to tighter constraints, because loss and noise grow with distance, and quantum signals cannot be amplified and regenerated as simply as classical signals can. But the literature has consistently shown that these bounds can be improved, and that tightening the security model often comes at much lower expense than early results might have suggested.
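To give a flavour of what such bounds look like in the simplest (device-dependent) setting, the sketch below evaluates the well-known asymptotic BB84 secret-key fraction $r(Q) = 1 - 2h(Q)$, where $Q$ is the error rate observed during parameter estimation; device-independent protocols obey analogous but stricter trade-offs, and pushing their noise tolerance is exactly the research frontier discussed above.

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Asymptotic BB84 secret-key fraction: r(Q) = 1 - 2 h(Q), where Q is the
# quantum bit error rate observed during parameter estimation.
def key_rate(Q):
    return max(0.0, 1 - 2 * h(Q))

assert key_rate(0.0) == 1.0    # noiseless channel: every sifted bit is key
assert key_rate(0.05) > 0.0    # moderate noise: a positive rate survives
assert key_rate(0.12) == 0.0   # past roughly 11% QBER the rate hits zero
```

The shape of the curve is the whole story: every improvement to the bound widens the noise regime in which a protocol still produces key, which translates directly into longer distances and cheaper hardware.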
On Availability in QKD
The lower noise requirements imposed by quantum protocols on the engineering of quantum internet infrastructure make DoS attacks a more pressing concern, and the issues associated with the amplification and preservation of quantum resources render many of the existing mitigation strategies unworkable. As we shall see at the end of the next section, however, availability is essentially the only trade-off left in the device-independent model: confidentiality is unbreakable by design, and authenticity can be easily integrated. At the end of the day, the promise of device-independent QKD is exactly that if a key can be established, then the key is confidential: a protocol that refuses to produce a key because communication was tampered with is doing exactly what it was designed to do.
There are three plausible ways to address this issue. The first, more boring one, is to make the network itself more reliable, both in terms of hardening and in terms of redundancy. The second, more laborious one, is to push the theoretical boundary of device-independent protocols — a discipline that, we shall remind our readers, is still in its infancy — into progressively more tolerant noise regimes. The third, more fundamental one, is to realise that the issue is actually restricted to the distribution of raw quantum resources, and not to the establishment of the confidential key material itself. We elaborate further on this final point.
In order to run QKD, participants must share quantum resources, in the form of entangled pairs of quantum systems. In current designs, these resources are short-lived: They must be distributed on the fly across a quantum network, and immediately used to run the key generation protocol. However, this will not always remain the case: With the ongoing, tremendously accelerated development of reliable quantum memories, it will soon become possible to store quantum states, for use at a later stage.
The ability to keep quantum resources in memory decouples the two phases of QKD:
- The quantum resource distribution phase can be performed over a reliable quantum internet connection, or it can alternatively be moved offline, e.g. by physical exchange of quantum memories. For online distribution, DoS attacks can be mitigated, for example, by discarding tainted batches and repeating the transfer until success.
- The key generation phase can be performed at any time after quantum resources have been exchanged, consuming only the required amount of resources and leaving the remaining quantum states untouched in storage. This stage only requires classical communication between the parties: It can be performed over the very same classical networks that already support our global information infrastructure, protected by the same DoS mitigation techniques.
How is this different, one might ask, from establishing a long pre-existing key and then using batches of bits as required? This is, after all, how one-time pads historically worked. The difference exists, and is monumental. Quantum resources in storage hold no confidential information: the key is generated at time of use, and any tampering of the quantum systems is covered by the device-independent security guarantees. Advance storage of quantum resources is merely a solution to the resource distribution problems, and it involves no assumptions of trust over time.
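A deliberately classical toy model can illustrate the decoupling, with the obvious caveat that real entangled pairs cannot be modelled classically (that impossibility is precisely what device independence certifies). The stored "pair" below carries no outcome, and hence no key material, until it is measured:

```python
import secrets

# Toy model of the two-phase design, for illustration only: a stored
# "pair" holds no outcome, and hence no key material, until measured.
class ToyEntangledPair:
    def __init__(self):
        self._outcome = None  # nothing confidential exists in storage

    def measure(self):
        # The shared random bit comes into existence only at measurement.
        if self._outcome is None:
            self._outcome = secrets.randbits(1)
        return self._outcome

# Phase 1 (quantum, possibly offline): distribute and store resources.
stored = [ToyEntangledPair() for _ in range(256)]
assert all(p._outcome is None for p in stored)  # storage holds no secrets

# Phase 2 (classical network, any later time): consume pairs to make a key.
key = [p.measure() for p in stored[:128]]
assert len(key) == 128
assert all(p._outcome is None for p in stored[128:])  # the rest stay fresh
```

A stolen one-time pad leaks every future key; a stolen batch of quantum resources leaks nothing, because any tampering is caught by the device-independent checks at key generation time.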
On Authentication in QKD
It is fair to acknowledge that many current QKD designs rely on a channel the authenticity of which has otherwise been established, e.g. by secured access or by public-key cryptography. This is, however, a matter of simplification, not one of necessity. It is absolutely possible to integrate authentication into QKD, in a way which exploits the unique features of quantum resources.
The key observation is that, in device-independent QKD, the only way for a round of key generation to succeed is for the two parties to operate on the two halves of a single entangled pair. At the time of key generation across a classical network, possession of the corresponding half of a batch of quantum resources is the only thing needed for authentication: No impostor — defined here as someone not in possession of the matching resources — can successfully run the key generation process with us. As long as quantum resource distribution was performed in an authenticated way — possibly much earlier on, possibly even by physical transfer of memory devices — key generation is automatically authenticated as a consequence, thanks to device independence.
We are then left with the problem of authenticating quantum resource distribution across an insecure quantum network. As long as we accept a small amount of initial shared entropy to be used for bootstrapping, this can be achieved by fairly traditional techniques.
An initial batch of quantum resources can be distributed and authenticated by running device-independent QKD combined with HMAC on a randomly selected subset of the entangled pairs. This bootstraps an initial pool of authenticated resources, which can later be consumed to perform key generation and/or to authenticate the distribution of more resources. Importantly, the security of the key generation process is independent of the confidentiality of the entropy of the original HMAC, because fresh entropy is generated by the QKD process each time. In the worst case scenario, authentication of the initial batch of quantum resources fails, and the process must be repeated with some fresh HMAC entropy: But it only needs to succeed once, and then the process is self-sustaining. (Also, physical memory distribution can be used for the initial bootstrapping.)
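A minimal sketch of the classical side of this bootstrapping step, assuming hypothetical message formats of our own choosing: a small pool of pre-shared entropy keys an HMAC over the announcement of which pairs enter the test round, so that an impostor cannot substitute their own choice.

```python
import hashlib
import hmac
import secrets

# Hypothetical bootstrapping sketch: the message format below is our own
# illustrative choice, not a prescribed protocol.
bootstrap_key = secrets.token_bytes(32)  # small pre-shared entropy pool

def authenticate(message: bytes, key: bytes) -> bytes:
    """Tag a classical protocol message with HMAC-SHA256."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes) -> bool:
    return hmac.compare_digest(authenticate(message, key), tag)

# Alice announces which randomly selected pairs form the test round,
# tagging the announcement with the pre-shared key.
test_subset = sorted(secrets.randbelow(1024) for _ in range(64))
announcement = b"test-subset:" + ",".join(map(str, test_subset)).encode()
tag = authenticate(announcement, bootstrap_key)

assert verify(announcement, tag, bootstrap_key)            # genuine party
assert not verify(announcement, tag, secrets.token_bytes(32))  # impostor
```

Once the first batch of resources is authenticated this way, fresh QKD output can replenish the entropy pool, and the bootstrap key is never needed again.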
Conclusion
Device-independent cryptographic protocols have the potential to deliver the most exciting near-term applications of quantum technology, with a demonstrable advantage over comparable classical techniques.
Our mission at NeverLocal is to make this technology a reality: The final frontier of secure communications, guaranteed by the very laws of Physics, decoupled from any trust in hardware and infrastructure. Join us.