I didn’t have the energy to write about quantum computing last night, but fortunately I can make up for it in today’s post, since that was the main subject of the day — quantum information science and computing. Again, I’ll do my best to get the main idea across, but I’m sure it’s something I’ll revisit again in the future.
Raymond Laflamme, one of the leaders in the field, said he believes that “quantum information is going to fundamentally change your lives and the lives of your kids.” Quantum computing certainly makes big promises, and science fiction gets people even more excited by amplifying those promises. So what is quantum computing? How is it better than current, “classical” computing? And how does it actually work?
The advent of practical, scalable, widely available quantum computing will almost certainly be transformative, even disruptive, but not because quantum computers offer boundless computing power. They don’t. The advantage of quantum computers is that they’re very, very good at solving particular kinds of problems, some of which are difficult (i.e., slow) to solve on existing computers. A helpful analogy might be the graphics card in a modern computer, which is extremely fast at doing certain things (processing videos and graphics, and other “embarrassingly parallel” calculations) but not particularly well equipped for general computing, which is why your computer also has a general-purpose central processor. Likewise, quantum computers can carry out some kinds of computations much, much faster than a classical computer, but don’t really speed up other calculations. They can manage such feats as a consequence of the way they work.
At the core of a quantum computer is the quantum bit (q-bit), which is what makes all the rest possible. In a classical computer, each bit is set to either 0 or 1, and a series of bits represents a particular number. Three bits are enough to represent any number up to seven. For example, 101 is the number 5, 011 is the number 3, and 100 is the number 4. Q-bits aren’t 0 or 1. Superposition, which I tried to explain in the last post, means that an individual q-bit is both 0 and 1 at the same time. If you think of bits as light switches, then classical bits are the familiar switches which can be on or off, while a q-bit is a switch which is simultaneously on and off. It may strain the imagination, but it’s true, and it’s at the heart of why quantum computers are better at certain calculations. Since q-bits are both on and off, both 0 and 1, a series of q-bits doesn’t represent a single number. Three q-bits don’t correspond to a particular number, but simultaneously represent every number up to 7. They’re all there at the same time, which makes it possible to do certain calculations much more quickly.
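To make the light-switch picture concrete, here’s a toy sketch in Python (my illustration, not part of the original post). A classical 3-bit register holds exactly one number, while a 3-q-bit register in an equal superposition can be modelled as a list of eight amplitudes, one for each number from 0 to 7:

```python
# Classical 3-bit register: holds exactly one number from 0 to 7.
classical_register = 0b101  # the single number 5

# Toy model of a 3-q-bit register: a list of 8 amplitudes, one per
# number 0..7. In an equal superposition, every number has the same
# amplitude, so all eight numbers are "present" at the same time.
n_qbits = 3
n_states = 2 ** n_qbits             # 8 basis states
amp = (1 / n_states) ** 0.5         # amplitude 1/sqrt(8) for each state
superposition = [amp] * n_states

# The probability of measuring each number is the amplitude squared,
# and the probabilities over all eight outcomes must sum to 1.
probabilities = [a ** 2 for a in superposition]
print(classical_register)      # 5
print(len(superposition))      # 8 numbers represented at once
```

This is only bookkeeping on a classical machine, of course, but it shows the key difference in what a register *holds*: one number versus a weighted combination of all of them.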
Since a set of q-bits represents a range of numbers instead of just a single number, you can do a calculation on the entire range of numbers all at once. For example, you might want to work out which numbers multiply together to give you 21 (other than 21×1). Instead of checking whether 2 works, then whether 3 works, then 4, and so on, a quantum computer allows you to check a whole range of numbers at the same time. If you had a quantum computer with 3 q-bits available for the calculation, it would simultaneously check every number from 0 to 7. That’s the inherent power of quantum computing, but it has to be combined with a clever trick to make it work.
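For comparison, here’s what the one-at-a-time classical approach looks like, as a minimal sketch (mine, not from the original post):

```python
def find_factor(n):
    """Classically check candidate divisors one at a time."""
    for candidate in range(2, n):
        if n % candidate == 0:
            return candidate   # first non-trivial factor found
    return None                # no divisor found: n is prime

print(find_factor(21))  # 3, found after trying 2 and then 3
```

Each pass through the loop tests a single candidate, which is exactly the serial checking that the q-bit register sidesteps by holding all the candidates at once.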
The fact that q-bits are in superposition means a quantum computer can do a series of calculations all at once, but there’s still the question of how to get an answer from them. Once you measure the q-bits — once you look at them — all the wonderful superpositions disappear. Each q-bit randomly becomes either a 1 or 0 instead of being both at the same time. The trick is to set up your calculation in a way which controls that randomness so the correct answer is more likely. Instead of 0s and 1s being equally likely for each q-bit, quantum algorithms are designed to make the right series of 0s and 1s — those that represent the answer — the combination most likely to appear. The details of how that’s done aren’t really familiar to me and are certainly more technical than I’d like to go into here; if you’re interested, Scott Aaronson’s post on the subject is recommended reading. The key point in this blog is to realize that quantum computers work by taking advantage of superpositions to represent (and do calculations on) many numbers at the same time and then use cleverly designed algorithms to make the correct answer be the most likely to appear when we measure the qubits and their superpositions collapse.
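The “controlled randomness” above can be mimicked classically with weighted sampling. In this sketch (my illustration; the boosted distribution is hypothetical, standing in for what a real quantum algorithm would produce), measurement picks one outcome per trial, and an algorithm that has concentrated probability on the right answer makes that answer appear almost every time:

```python
import random

# Toy model of measuring 3 q-bits. Before any algorithm runs, all 8
# outcomes are equally likely; a well-designed quantum algorithm
# reshapes the distribution so the correct answer dominates.
uniform = [1 / 8] * 8            # equal superposition: answer is a coin toss
boosted = [0.02] * 8             # hypothetical post-algorithm distribution...
boosted[5] = 1 - 7 * 0.02        # ...with most of the probability on outcome 5

def most_common_measurement(probs, trials=10_000):
    """Sample many measurements; each one 'collapses' to a single number."""
    outcomes = random.choices(range(len(probs)), weights=probs, k=trials)
    return max(set(outcomes), key=outcomes.count)

print(most_common_measurement(boosted))  # almost always 5
```

The real machinery works with complex amplitudes that can interfere and cancel, not plain probabilities, so this only captures the final step: once the distribution is skewed, an ordinary measurement reads out the answer.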
Edited 2014-08-29: Corrected ‘qubit’ to ‘q-bit’ throughout. That’s what I get for writing when I’m tired.
Great post. I think you explain a difficult concept very well
The Science Geek
Thanks; I’m really glad to hear that!
What do I think? I think quantum computing is all jam-tomorrow Emperor’s New Clothes hype. Look closely, narrow your eyes: Scott Aaronson doesn’t actually explain it, that’s why you don’t understand it. Have a look at his treatment of Joy Christian, who sought to dispel quantum weirdness. Then see the timeline of quantum computing to appreciate that it’s been around for forty years. In that time genuine computing has advanced enormously, and has delivered great benefit. But quantum computing has delivered nothing. IMHO it hinges on quantum mysticism, not quantum physics, and like string theory, it will fade away. But as ever, those who peddle mysticism and weirdness and woo will try to curry favour with science writers to use them as their propagandists.
Mmmkay, then there is one thing that, from just reading, strikes me as odd. When you write
“The key point in this blog is to realize that quantum computers work by taking advantage of superpositions to represent (and do calculations on) many numbers at the same time and then use cleverly designed algorithms to make the correct answer be the most likely to appear when we measure the qubits and their superpositions collapse.”
doesn’t that imply that you need to know the answer in advance? I’m sure I’m missing something here, but maybe you can shed some light on that, Sedeer.
That’s an excellent question. It might seem like you need to know the answer, but you actually only need to know some property of the correct answer and then you design your quantum algorithm to include an operation which depends on that property so the answer you want becomes the most likely final state.
The factoring algorithm, for example, takes advantage of the fact that a certain mathematical sequence calculated for the number N has a periodicity that depends on the factors of N. (If you’re curious, the sequence is x mod N, x² mod N, x³ mod N, x⁴ mod N,…) You can calculate the entire sequence as a superposition and then use a quantum Fourier transform to extract the period of the sequence. Once you have the period, you can figure out the prime factors. I realize that seems a bit vague, and I suggest you read Scott Aaronson’s post (linked above) for a detailed explanation of how it works.
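To show the period-to-factors step concretely, here’s the arithmetic done classically, by brute force, for N = 21 with x = 2 (my worked example; on a quantum computer the period would instead be extracted with the quantum Fourier transform):

```python
from math import gcd

def period(x, n):
    """Smallest r > 0 with x**r % n == 1, found by classical brute force."""
    k, value = 1, x % n
    while value != 1:
        k += 1
        value = (value * x) % n
    return k

N, x = 21, 2
r = period(x, N)   # the sequence 2, 4, 8, 16, 11, 1 repeats with period 6

# When r is even, gcd(x**(r//2) - 1, N) and gcd(x**(r//2) + 1, N)
# reveal non-trivial factors of N.
f1 = gcd(x ** (r // 2) - 1, N)
f2 = gcd(x ** (r // 2) + 1, N)
print(r, f1, f2)   # 6 7 3
```

Here 2⁶ mod 21 = 1, so the period is 6, and gcd(2³ − 1, 21) = 7 and gcd(2³ + 1, 21) = 3 recover the factors of 21. The hard part, finding the period for large N, is exactly what the quantum step speeds up.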
If that still seems unclear, hand-wavy, or mystical, let me know and I’ll try to explain it better or come up with a good analogy.