I didn’t have the energy to write about quantum computing last night, but fortunately I can make up for it in today’s post, since that was the main subject of the day — quantum information science and computing. Again, I’ll do my best to get the main idea across, but I’m sure it’s something I’ll revisit again in the future.

Quantum Computing

Raymond Laflamme, one of the leaders in the field, said he believes that “quantum information is going to fundamentally change your lives and the lives of your kids.” Quantum computing certainly makes big promises, and science fiction gets people even more excited by amplifying them. So what is quantum computing? How is it better than current, “classical” computing? And how does it actually work?

The advent of practical, scalable, widely available quantum computers will almost certainly be transformative, even disruptive, but not because they make boundless computing power available. They don’t. The advantage of quantum computers is that they’re very, very good at solving particular kinds of problems, some of which are difficult (i.e., slow) to solve on existing computers. A helpful analogy is the graphics card in a modern computer, which is extremely fast at certain things (processing video and graphics, and other “embarrassingly parallel” calculations) but not particularly well equipped for general computing, which is why your computer also has a general-purpose central processor. Likewise, quantum computers can carry out some kinds of computations much, much faster than a classical computer, but don’t speed up other calculations at all. They manage such feats as a consequence of the way they work.

At the core of a quantum computer is the quantum bit (q-bit), which is what makes all the rest possible. In a classical computer, bits are set to either 0 or 1, and a series of bits represents a particular number. Three bits are enough to represent any number up to seven. For example, 101 is the number 5, 011 is the number 3, and 100 is the number 4. Q-bits aren’t restricted to 0 or 1. Superposition, which I tried to explain in the last post, means that an individual q-bit is both 0 and 1 at the same time. If you think of bits as light switches, then classical bits are the familiar switches which can be on or off, while a q-bit is a switch which is simultaneously on and off. It may strain the imagination, but it’s true, and it’s at the heart of why quantum computers are better at certain calculations. Since q-bits are both on and off, both 0 and 1, a series of q-bits doesn’t represent a single number. Three q-bits don’t correspond to one particular number, but simultaneously represent every number from 0 to 7. They’re all there at the same time, which makes it possible to do certain calculations much more quickly.
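
If it helps to see the contrast in ordinary code, here’s a minimal sketch, simulated in Python with NumPy purely for illustration (a real quantum computer is not a NumPy array, and the variable names are my own): three classical bits pick out exactly one number, while the state of three q-bits is a vector of amplitudes with an entry for every number from 0 to 7 at once.

```python
import numpy as np

# Classical: three bits name exactly one number, e.g. 101 -> 5.
bits = [1, 0, 1]
value = int("".join(map(str, bits)), 2)
print(value)  # 5

# Quantum (simulated): the state of three q-bits is a vector of
# 2**3 = 8 complex amplitudes, one for each bit string 000..111.
# An equal superposition gives every number the same amplitude.
n = 3
state = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)
for i, amp in enumerate(state):
    print(f"{i:03b} -> amplitude {amp.real:.4f}")
```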

Superconducting q-bit

The grey rectangle is a q-bit made out of superconductors. It’s surprisingly big — on the order of a centimeter across.

Since a set of q-bits represents a range of numbers instead of just a single number, you can do a calculation on the entire range all at once. For example, you might want to work out which numbers multiply together to give you 21 (other than 21×1). Instead of checking whether 2 works, then whether 3 works, then 4, and so on, a quantum computer lets you check a whole range of numbers at the same time. If you had a quantum computer with 3 q-bits available for the calculation, it would simultaneously check every number from 0 to 7. That’s the inherent power of quantum computing, but it has to be combined with a clever trick to make it work.
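
To make that concrete, here’s a toy sketch (again a NumPy simulation of my own, not a quantum library): the classical loop tests one candidate divisor of 21 at a time, while in the simulated superposition a single application of the same “does d divide 21?” test touches all eight candidates at once.

```python
import numpy as np

# Classical search: test candidate divisors of 21 one at a time.
for d in range(2, 8):
    if 21 % d == 0:
        print("found factor:", d)  # 3, then 7

# Simulated quantum view: with 3 q-bits in superposition, one
# application of the divisibility test acts on all eight candidates
# at once. Here we simply mark the basis states that pass.
n = 3
candidates = np.arange(2**n)
marks = np.array([d > 1 and 21 % d == 0 for d in candidates])
print("marked candidates:", candidates[marks])  # [3 7]
```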

The fact that q-bits are in superposition means a quantum computer can do a whole series of calculations at once, but there’s still the question of how to get an answer out of them. Once you measure the q-bits — once you look at them — all the wonderful superpositions disappear. Each q-bit randomly becomes either a 1 or a 0 instead of being both at the same time. The trick is to set up your calculation in a way which controls that randomness so the correct answer is more likely. Instead of 0s and 1s being equally likely for each q-bit, quantum algorithms are designed to make the right series of 0s and 1s — the one that represents the answer — the combination most likely to appear. The details of how that’s done aren’t something I know well, and they’re certainly more technical than I’d like to go into here; if you’re interested, Scott Aaronson’s post on the subject is recommended reading. The key point of this post is that quantum computers work by taking advantage of superposition to represent (and do calculations on) many numbers at the same time, and then use cleverly designed algorithms to make the correct answer the most likely one to appear when we measure the q-bits and their superpositions collapse.
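
For the curious, here’s a rough simulation of what that “controlled randomness” can look like, using a Grover-style amplitude amplification step (one example of such an algorithm; the choice of 101 = 5 as the hidden answer is arbitrary and the code is my own illustrative sketch). Two rounds of flipping the marked amplitude’s sign and reflecting everything about the mean push almost all of the measurement probability onto the right answer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, marked = 3, 5              # three q-bits; pretend the answer is 101 = 5
N = 2**n

# Start in the uniform superposition: every outcome equally likely.
state = np.full(N, 1 / np.sqrt(N))

# Two Grover-style rounds: flip the sign of the marked amplitude,
# then reflect all amplitudes about their mean. Each round shifts
# probability toward the marked state.
for _ in range(2):
    state[marked] *= -1                # "oracle" marks the answer
    state = 2 * state.mean() - state   # inversion about the mean

probs = state**2
print(f"P(measure 101) = {probs[marked]:.3f}")  # ~0.945

# Measurement collapses to a single outcome, weighted by these
# probabilities; repeating the experiment many times shows the bias.
samples = rng.choice(N, size=1000, p=probs / probs.sum())
print("fraction of runs giving 101:", np.mean(samples == marked))
```

Even with the amplification, each run of the machine still returns just one bit string; the algorithm’s job is to make that string overwhelmingly likely to be the answer you wanted.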

Edited 2014-08-29: Corrected ‘qubit’ to ‘q-bit’ throughout. That’s what I get for writing when I’m tired.