Martin Rowson cartoon on quantum computers

It’s 2042. A schoolgirl sits in her bedroom and sets her computer a problem. It splits into trillions upon trillions of copies of itself, each of which works on a separate strand. After only a few seconds, they all come back together and a single answer pops up on the screen. It is an answer that would have taken the fastest supercomputer in the world more than the age of the Universe to obtain. Satisfied that her homework is done, the girl shuts down her computer and goes out to play. This, in a nutshell, is a quantum computer.

Quantum computers have the potential to change the world as profoundly as electricity did. “If you imagine the difference between an abacus and the world’s fastest supercomputer”, said BBC journalist Julian Brown, “you would still not have the barest inkling of how much more powerful a quantum computer could be compared with the computers we have today.”

Currently, all we have in laboratories around the world are stepping-stones on the road to building a useful quantum computer. The holy grail would be a general-purpose machine that could solve any conceivable problem in a fraction of the time of a conventional computer. But even if this proves impossible to achieve, we could still develop a more limited type of quantum computer that could, for instance, discover new drugs without the need for time-consuming and costly trials. The expectation is that one or other of these kinds of quantum computer could be with us within only 10 to 15 years.

So, how do they work? Either a quantum computer behaves as if it exploits trillions upon trillions of copies of itself in parallel universes, or it actually does. Most physicists believe the former. However, one of the pioneers of quantum computing, David Deutsch of the University of Oxford, believes the quantum computer is something entirely new: the first device ever built that exploits parallel universes, or realities. He has a good reason for believing this. But first, some necessary background on quantum theory and how it makes possible quantum computers.

Weird waves

Quantum theory is our best description of the submicroscopic world of atoms and their constituents. Experiments in the early 20th century revealed something scarcely believable about electrons, photons and all the building blocks of matter: they can behave both as particles (like tiny billiard balls) and waves (like ripples on a pond). The waves are peculiar: they are mathematical entities which are imagined to spread through space according to the Schrödinger equation. Where the waves are big, there is a high chance of finding a particle such as an electron, and where they are small, there is a low chance.

It follows that particles can do all the things that waves can. Imagine the sea on a stormy day. Big rolling waves are marching across it. Now imagine the next day when the storm has passed. The gentle breeze is driving small ripples. If you have ever looked closely at waves at sea, you will know that it is possible to have big rolling waves whose surface is rippled by the wind. This is a general property of all waves. If two or more waves are possible, so too is a combination.

Now imagine a quantum wave that represents an oxygen atom and is highly peaked on one side of a room (so there is an almost 100 per cent chance of finding the atom there), and another quantum wave that represents the same oxygen atom but is highly peaked on the other side of the room (so there is an almost 100 per cent chance of finding it there instead). Remember, if any two waves can exist, so too can a combination. In this case, the combination represents the oxygen atom on both sides of the room at the same time. Bizarre as it seems, a quantum particle can not only be in more than one place at a time, it can also do more than one thing at a time. So, atoms and their constituents can do many things at once. A quantum computer simply exploits this ability in order to do many calculations at once.

More calculations than fundamental particles

I first came across the term quantum computer in 1983, while at the California Institute of Technology in Pasadena taking a series of lectures given by Richard Feynman. Feynman had worked on the atomic bomb and won the Nobel Prize for his work on quantum electrodynamics (and he played the bongos). At the time, he was recovering from cancer surgery, so he was indulged by the Caltech faculty and allowed to teach whatever he liked. His course was called “Potentialities and Limitations of Computers”.

Feynman was interested in the ultimate physical limits of computers – how small components could be made, how fast, and so on. At the time, a transistor – the basic component of a computer – consisted of about 100 billion atoms. Nowadays, it is down to about 25,000. But Feynman realised miniaturisation would ultimately result in transistors made of single atoms, at which point they would dance to the tune of quantum theory. This would be a new kind of beast, he realised: a quantum computer.

Whereas a normal computer consists of transistors, each of which represents a bit (a 0 or a 1, depending on whether the transistor allows an electric current through or not), a quantum computer consists of quantum bits, or qubits, each of which can represent a 0 and a 1 at the same time. Consequently, a qubit can be involved in two calculations simultaneously (one in which it is a 0, and another in which it is a 1). A pair of qubits can represent four possibilities simultaneously (00, 01, 10, 11) and so can be involved in four calculations at once; three qubits, eight calculations; and so on.
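
For readers who want to see the book-keeping spelled out, here is a minimal sketch in Python (purely illustrative – it simply counts possibilities, and is not a quantum simulator) of how the number of 0-and-1 patterns a register of qubits can hold in play doubles with every extra qubit:

# Illustrative counting sketch, not a real quantum simulator: a register
# of n qubits is described by one amplitude for every possible pattern of
# 0s and 1s, so each extra qubit doubles the number of strands available.
from itertools import product

for n in (1, 2, 3):
    patterns = ["".join(bits) for bits in product("01", repeat=n)]
    print(f"{n} qubit(s): {len(patterns)} patterns -> {patterns}")

for n in (10, 20, 50):
    print(f"{n} qubits: {2 ** n:,} patterns")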

The power of a quantum computer becomes clear. With each additional bit, a normal computer increases its power only marginally. With each extra qubit, the power of a quantum computer doubles. Very quickly, such an exponential growth in computing power beats into submission even the biggest supercomputers.

Here is another way to appreciate the power of quantum computing. In 1965, Gordon Moore, who went on to co-found the American chip manufacturer Intel, realised that computing power – the number of bits a computer can process, the number of bits it can store in its memory, and so on – was doubling roughly every two years. This has been going on since 1949 and is known as Moore’s Law. Contrast this with the number of qubits available, which is doubling roughly every five years. Perhaps this appears less impressive. But, remember, each additional qubit doubles the power of a quantum computer. Quantum computers are therefore increasing their power as an exponential of an exponential. To put it another way, after four Moore’s-Law doublings a conventional computer would be 16 times as powerful, whereas a quantum computer whose single qubit had doubled four times into 16 qubits could juggle about 65,000 calculations at once.
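
Here is that comparison as a rough, illustrative Python sketch (the starting point of a single qubit is an assumption made purely for the arithmetic):

# Back-of-the-envelope sketch of the comparison above, with illustrative
# growth rates: a conventional computer's power doubles with each
# Moore's-Law step, while a quantum computer's power doubles with each
# extra qubit.
doublings = 4

conventional_gain = 2 ** doublings   # four doublings -> 16 times the power
qubits = 2 ** doublings              # one qubit doubled four times -> 16 qubits
quantum_strands = 2 ** qubits        # 16 qubits -> 2^16 = 65,536 strands

print(f"Conventional computer after {doublings} doublings: {conventional_gain}x as powerful")
print(f"Quantum computer with {qubits} qubits: {quantum_strands:,} simultaneous calculations")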

Currently, the record-holding quantum computer, IBM’s Eagle processor announced in November 2021, has 127 qubits – more than double the 53 qubits of Google’s Sycamore chip, which grabbed headlines in 2019. If a quantum computer were built that could manipulate a mere 270 qubits, however, it would reach a critical threshold: it would be able to do more calculations simultaneously than there are fundamental particles in the universe.
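
To put rough numbers on that threshold, here is a quick, purely illustrative calculation (the particle count is only the commonly quoted order-of-magnitude estimate):

# Illustrative calculation: compare the number of simultaneous calculations
# available to 270 qubits with the commonly quoted estimate of roughly
# 10**80 fundamental particles in the observable universe.
states = 2 ** 270
particle_estimate = 10 ** 80

print(f"2^270 has {len(str(states))} digits, i.e. roughly 10^{len(str(states)) - 1}")
print(f"That is about {states // particle_estimate} times the particle estimate")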

This is why Deutsch thinks it is legitimate to ask: where is the quantum computer doing its calculations? The universe does not have the physical resources. Your computer, after all, can do only the calculations for which it has sufficient memory. Deutsch’s answer is that the quantum computer exploits physical resources in parallel universes. It is a device, he claims, that literally forces us to take seriously such universes.

Among other things, a general-purpose quantum computer might deliver true superior-to-human intelligence. But we face some challenges when it comes to creating a useful quantum computer.

What problems could a quantum computer actually solve?

Firstly, in order to function, quantum systems need to operate in total isolation. This is because qubits are fragile and lose their ability to do many things at once if they are disturbed by their surroundings. While it’s relatively simple to isolate a single atom in a vacuum, it is harder with larger systems. The solution is to hold the qubits – which may be atoms or electrons or some other kind of quantum object – in an extreme vacuum, so air molecules do not bounce off them, and to keep them cooled to close to absolute zero, the lowest temperature possible, so photons of heat radiation do not bounce off them.

But this is impossible to do perfectly. There will always be stray air molecules and photons bouncing off qubits, causing them to lose their quantumness and “decohere” into normal bits. This can be corrected for, but each useful “logical” qubit requires anything from 10 to 100 physical qubits for the error correction. A conventional computer develops an error – a 0 flipping to a 1 or vice versa – about once every trillion trillion operations. The quantum computers under construction today develop an error about once every 1,000 operations. This is crippling, and makes it hard for these computers to perform even limited useful calculations.
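
A rough sketch of what those error rates mean in practice, using the figures just quoted (an illustrative Python calculation, nothing more):

# Rough sketch of why a one-in-1,000 error rate is crippling: the chance
# of completing a run of N operations without a single error, for the
# error rates quoted above (illustrative figures only).
conventional_error = 1e-24   # roughly once per trillion trillion operations
quantum_error = 1e-3         # roughly once per 1,000 operations

for ops in (1_000, 1_000_000):
    p_conventional = (1 - conventional_error) ** ops   # effectively 1
    p_quantum = (1 - quantum_error) ** ops
    print(f"{ops:>9,} operations: conventional ~{p_conventional:.6f}, "
          f"quantum ~{p_quantum:.3g} chance of an error-free run")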

So, while IBM’s quantum computer uses the most qubits of any existing prototype, most of these are needed simply for the task of error correction. To surmount this challenge, error correction would need to outpace the accumulation of errors – but it’s unclear whether this will ever be possible.

Once we’ve built a powerful quantum computer, we will then be faced with the challenge of giving it useful tasks. A quantum computer operates by splitting into trillions upon trillions of copies of itself, each of which works on a separate strand of the same calculation. In the end, the strands come back together and the computer spits out a single answer. It is not possible to access these separate strands, as that would entail interacting with the quantum computer, destroying its quantumness. Put bluntly, it is very hard to find problems that can be broken into lots of parallel calculations and yet spit out a single useful answer.

Nevertheless, in 1994, the mathematician Peter Shor found just such a problem. He realised that a quantum computer could, in theory, break the RSA code used to protect banking and internet data. RSA encryption relies on something that is easy to do but extremely hard to undo. It is easy to take two very big prime numbers – numbers divisible only by themselves and 1 – and multiply them together, but it is hard to take the resulting number and work backwards to find its prime factors. The security of an encrypted message relies on the fact that it would take centuries, even on the fastest supercomputer, for a code-breaker to find those prime factors. Shor showed that a quantum computer could achieve this in a split second.
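
The asymmetry Shor exploited is easy to demonstrate at toy scale. Below is a purely classical, illustrative Python sketch; the two primes are small ones chosen for the example, whereas real RSA keys use primes hundreds of digits long, which puts the brute-force step far beyond any conventional computer:

# Classical illustration of the asymmetry behind RSA, at toy scale:
# multiplying two primes takes one step, while recovering them from the
# product by trial division takes a vast number of steps.
def smallest_prime_factor(n):
    """Recover a prime factor of n by brute-force trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

p, q = 104729, 1299709        # two modest primes, chosen for illustration
n = p * q                     # the easy direction: a single multiplication
print(f"Multiplying is easy: {p} x {q} = {n}")

factor = smallest_prime_factor(n)   # the hard direction: ~100,000 trial divisions
print(f"Brute force recovers the factors: {factor} and {n // factor}")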

Shor’s algorithm made people sit up. It did not matter that in 1994 no quantum computer existed that could implement the algorithm. The point was that the moment a working quantum computer was built, every piece of secret information exchanged in the world would be readable. There are rumours that intelligence agencies and criminals have long been harvesting financial and internet data in anticipation of that day.

It could turn out that useful algorithms like Shor’s are rare, and that it will never be possible to build a quantum computer capable of attacking a wide range of practical problems such as code-breaking. We may never be able to build a quantum computer powerful enough to create true artificial intelligence.

Understanding how particles behave

Nevertheless, quantum computers could still change the world beyond recognition. This is because there is one thing that a quantum computer can certainly do: it can simulate a quantum system, and thus predict how it behaves. In fact, it was for this purpose that Feynman first envisioned a quantum computer in the early 1980s.

When it comes to how our world works, we are certain of very little. Physicists do not like to admit it, but they have solved only one problem exactly: the two-body problem. They can predict, for example, the way in which an electron orbits a proton in the simplest (hydrogen) atom, or the way the Moon loops around the Earth. Everything else in physics is approximation. For instance, when more electrons are added to an atom, the behaviour of each electron depends on the behaviour of all the others, and it quickly becomes impossible for even supercomputers to model exactly.

However, the behaviour of every atom and molecule depends on the arrangement of its electrons. It is via its electrons that it interacts with the world. The electron arrangement determines its chemistry – how it conducts heat, electricity and so on. Yet because we can only solve the two-body problem exactly, we cannot predict the behaviour of any multi-electron system. A quantum computer could.
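
To get a feel for why conventional computers fall short here, consider the memory needed just to write down the quantum state of a handful of electrons. The sketch below uses a deliberately simplified model – it treats each electron as a single two-state quantum object, which if anything understates the real problem:

# Simplified, illustrative model: treat each electron as a two-state
# quantum object, so describing n of them exactly takes 2**n complex
# amplitudes, each stored as a 16-byte double-precision complex number.
BYTES_PER_AMPLITUDE = 16

for electrons in (30, 40, 50, 60):
    amplitudes = 2 ** electrons
    gigabytes = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{electrons} electrons: {amplitudes:,} amplitudes, ~{gigabytes:,.0f} gigabytes")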

Understanding how atoms and subatomic particles behave could have useful practical applications. Currently, we have to do drug trials because we cannot predict how drug molecules will behave in the body. But, with a quantum computer, we could circumvent all this and predict the behaviour of millions of potential drugs without having to do expensive and time-consuming trials.

Smartphones and Tesla electric cars have been made possible by the development of lithium-ion batteries, which store a lot of energy in a small volume. But the world is running short of lithium. And finding new and better battery materials involves synthesising and testing thousands upon thousands of potential molecules, which is expensive and time-consuming. With a quantum computer, however, it would be possible to predict the behaviour of millions upon millions of molecules in advance, and so improve on lithium-ion batteries.

Another application could help us to make farming more sustainable. About 40 per cent of the world’s population depends on wheat, which requires fertiliser. And fertiliser production relies on the Haber-Bosch process, which takes nitrogen from the air and turns it into ammonia, the basis of fertilisers. The trouble is that this process uses a lot of energy – as much as the airline industry. In fact, 40 per cent of the carbon footprint of a loaf of bread comes from this process.

Thankfully, Haber-Bosch is not the only game in town. Some plants, such as legumes, host bacteria in their roots that can efficiently take nitrogen out of the air. They do it with an enzyme called nitrogenase. However, nitrogenase has so far been too complicated for us to understand. If we could understand how it works by simulating it on a quantum computer, we could not only mimic its process but even improve on it. That would mean we could dispense with the hugely inefficient and energy-hungry Haber-Bosch process, which would have a huge impact on reducing the emissions behind global warming.

We may never be able to make a general-purpose quantum computer capable of giving us access to superior-to-human intelligence. Even so, the world could still be changed beyond all recognition by quantum computers able to perform even limited calculations, revolutionising the fields of medicine, agriculture and technology – and this might be only the start. And if a general-purpose quantum computer is possible, all bets are off.

This piece is from the New Humanist autumn 2022 edition.