|The diamond in the center measures 1mm by 1mm.
One of quantum computing's biggest problems may have just been solved.
A paper just published in Nature claims to have found a way to protect multiple qubits from decoherence over an extended period of time, and its authors built a quantum computer into a diamond to prove it. Since I realize most of my readers don't keep up with this field as much as I do, I'll try my best to explain.
How it works.
Classical computers use bits (binary digits) to store information in memory: the 1s and 0s that get stored between calculation steps in any non-trivial program. If a 1 were to turn into a 0 randomly in the middle of an operation, a program with good error detection might still recover, but if huge numbers of bits flipped while a program was running, there's just no way it could continue to function as desired.
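To make the "good error detection" idea concrete, here is a toy sketch in Python. Real memory uses stronger codes (e.g. Hamming codes, which can also correct errors); a single parity bit is just the simplest possible illustration of detecting a stray flip:

```python
# Toy error detection: one extra "parity" bit makes the total number
# of 1s even, so any single flipped bit is detectable (though not
# correctable -- real ECC memory uses stronger codes for that).

def with_parity(bits):
    """Append an even-parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """True if the word still has an even number of 1s."""
    return sum(bits_with_parity) % 2 == 0

word = [1, 0, 1, 1]
stored = with_parity(word)      # [1, 0, 1, 1, 1]
assert parity_ok(stored)        # nothing has flipped yet

stored[2] ^= 1                  # a stray bit flip in memory
assert not parity_ok(stored)    # the flip is detected
```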
Quantum computers use qubits instead of bits. This is what makes quantum computing as a concept so powerful. Rather than 1s and 0s that are strictly one or the other, qubits exploit a superposition of states, where a single qubit might be partly a 1 and partly a 0. Yet this reliance upon qubits has been a fundamental problem in quantum computing so far. It is just too easy for qubits to decohere and lose whatever information you try to store in them.
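One way to picture "partly a 1 and partly a 0": a single qubit can be described by two amplitudes whose squared magnitudes give the odds of measuring 0 or 1. The sketch below is a classical simulation of that bookkeeping, not a real qubit (real qubits don't let you peek at the amplitudes):

```python
import math
import random

def measure(a, b):
    """Collapse a qubit with amplitudes (a, b) for |0> and |1>.

    Returns 0 with probability |a|^2 and 1 with probability |b|^2.
    """
    assert abs(abs(a) ** 2 + abs(b) ** 2 - 1) < 1e-9, "state must be normalized"
    return 0 if random.random() < abs(a) ** 2 else 1

# An equal superposition: "partly a 1 and partly a 0".
a = b = 1 / math.sqrt(2)
samples = [measure(a, b) for _ in range(10_000)]
print(sum(samples) / len(samples))   # hovers around 0.5
```

Decoherence, in these terms, is the environment scrambling those amplitudes before you get to use them.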
It wasn't until 2008 that a group first figured out how to keep a qubit from losing its information for a grand total of 1.75 seconds. Not only is that too short a time to do anything useful with, but the technique could not handle multiple qubits, so a quantum computer built this way could never use more than one qubit at a time.
Today's news marks the first time anyone has figured out how to shield multiple qubits from decohering for an extended period of time. I won't get into details of how they accomplish this; you can read the paper yourself for that. (Suffice it to say that they used microwave pulses to delay decoherence continuously.) But the point is that they were able to construct a working quantum computer and run a simple program on it to verify the qubits are not decohering.
The fun part of the story.
|Quantum circuit representation of Grover's algorithm.
The usual explanation of Grover's algorithm starts by imagining a phone book organized in alphabetical order. (We'll need a new mental example soon; I can't remember the last time I actually saw a phone book in person.) If we have a specific phone number we're looking for, our only real recourse is to search through the book one entry at a time. We might get lucky and find it as soon as we open the phone book, but we also might be unlucky and not find it until the very last entry. (Technically the second to last, since we assume the number exists in the book, but please ignore this sentence for clarity.) In general, it turns out that, on average, we will find the number we seek on our N/2nd try, where N is the total number of entries in the phone book. For a book with four entries, we'll find it on our second try on average; if the book has a hundred entries, we'll find it on the fiftieth try on average. A phone book listing everyone in New York City would have ~8,000,000 entries; we'd find a particular entry on the four millionth try on average.
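That N/2 average is easy to sanity-check by brute force. A quick simulation (plain Python, with a shuffled list of numbers standing in for the phone book):

```python
import random

def linear_search_tries(book, target):
    """Scan the book entry by entry; return how many entries we checked."""
    for tries, entry in enumerate(book, start=1):
        if entry == target:
            return tries

N = 100
book = list(range(N))
random.shuffle(book)

# Average over many random lookups: about N/2 tries for N = 100.
trials = 20_000
avg = sum(linear_search_tries(book, random.randrange(N))
          for _ in range(trials)) / trials
print(avg)   # close to 50
```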
Grover's algorithm, on the other hand, will find it, on average, in O(√N) tries. For a book of four entries, we'll basically find it on the first try every time. With a hundred entries, we'll find it by about the tenth try. With 8,000,000 entries, we'll find it, on average, by the 2,829th try. That's not a typo; it really is fewer than three thousand tries for an eight-million-entry database. This speed increase is enormous, and such sheer speed has drastic repercussions in the real world.
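The numbers above are simple to reproduce. (Strictly, Grover's algorithm needs about (π/4)·√N iterations; the constant doesn't change the punchline, so the comparison below uses plain √N, as in the text.)

```python
import math

# Classical average-case lookups (N/2) versus square-root scaling.
for n in (4, 100, 8_000_000):
    classical = n // 2
    quantum = math.ceil(math.sqrt(n))
    print(f"{n:>9} entries: classical ~{classical:>9}, quantum ~{quantum:>5}")
```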
The team ran Grover's algorithm on their quantum computer and found the correct entry on their first try 95% of the time. This puts the question of whether their computer works entirely to rest. Their quantum computer not only works, but works well enough to actually be capable of quantum calculation at a 5% error rate. Sure, that's not perfect, but it's already enough accuracy that, if one desired, error correction could be done. Just redo the calculation ten times and take a majority vote; with a 95% accuracy rate, that is more than enough to determine the correct output. The insane speed increase quantum computers have over classical computers makes it more than worthwhile to repeat the same calculation ten times in a row each time.
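At this scale, Grover's algorithm is simple enough to simulate classically. Below is an idealized state-vector sketch in plain Python (textbook Grover, not the team's actual NV-center pulse sequence) on a four-entry database, the same size the diamond computer handled. In the noiseless ideal, a single Grover iteration finds the target with probability 1, which is the benchmark the lab's 95% should be judged against:

```python
# Idealized Grover search on a 4-entry database (2 qubits).
N = 4
target = 2                      # index of the entry we're searching for

# Start in an equal superposition over all N entries.
amps = [1 / N ** 0.5] * N

# One Grover iteration:
# 1. Oracle: flip the sign of the target's amplitude.
amps[target] = -amps[target]
# 2. Diffusion: "inversion about the mean" amplifies the target.
mean = sum(amps) / N
amps = [2 * mean - a for a in amps]

# Measurement probabilities are the squared amplitudes.
probs = [a * a for a in amps]
print(probs)   # all of the probability lands on index 2
```

In the ideal simulation the target comes up every single time; the gap between that and the lab's 95% is exactly the decoherence and control error the microwave pulses were fighting.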
Of course, the quantum computer they built holds only two qubits, and so can only store so much information at a time. Doing Grover's algorithm on an 8,000,000-entry database, for example, would require twenty-three qubits. (In general, O(log(N)) qubits are required for a database of N entries.) But there is nothing stopping someone from creating such a quantum computer in principle, so long as they have enough microwave pulses to keep all the qubits from decohering. The future, it appears, is now.
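The O(log(N)) scaling in that parenthetical is easy to check: indexing N database entries takes ⌈log₂(N)⌉ qubits, the same way ⌈log₂(N)⌉ classical bits can label N addresses.

```python
import math

# Qubits needed to index an N-entry database: ceil(log2(N)).
for n in (4, 100, 8_000_000):
    print(f"{n:>9} entries -> {math.ceil(math.log2(n))} qubits")
```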
So what does this all mean?
First of all, it means that somebody has a working two-qubit quantum computer right now. Realistically, this is not enough to cause much of a ruckus. To put it in context, that is only enough storage to run Grover's algorithm on databases of four entries or fewer. (Although Grover's original paper points out that his algorithm requires only that the processing be done with qubits; the memory can be kept in classical bits.) However, quantum computers do not scale linearly the way classical computers do. As mentioned previously, a mere twenty-three qubits would be enough memory to run Grover's algorithm on a database of 8,000,000 entries. It takes very few qubits to handle very large problems.
Now that the conceptual hurdle has been cleared, someone could, right this very moment, build a quantum computer with a dozen qubits. There is nothing preventing us from accomplishing such a feat now, save sheer expense. But if you can afford a dozen microwave sources and have an appropriately flawed diamond to work with, it is certainly doable. What could one do with such a machine?
To put it bluntly, nearly everything.
|All bounded error, quantum, polynomial time (BQP)
problems are efficiently solved by quantum computers.
A quantum computer at that scale, running algorithms like Shor's, effectively breaks most commercial-grade encryption now in use on the internet. Some encryption does survive, notably lattice-based and McEliece cryptosystems.
Although the last section makes it seem like quantum computers have now arrived, there are still problems that need to be addressed. David DiVincenzo points out that a practical quantum computer must have:
- physical scalability, so the number of qubits can be increased;
- the ability to initialize qubits to arbitrary values;
- quantum gates faster than the decoherence time;
- a universal gate set;
- qubits that can be easily read out.
This new discovery solves the first (albeit expensively) and third issues completely. The second issue is still problematic, but is something that can be programmed around. The fifth issue is a matter of convenience; expense and repeatability make this solvable with money alone. This leaves only the fourth issue: a universal gate set. As this is not yet solved, we will not yet be able to program whatever we want on a quantum computer. But we can still run Grover's algorithm, along with a few other programs for which we know the necessary gates, and I've already shown how directly this affects society.
A note on how the press is covering this.
As a skeptic, I was very disappointed in a quote by Daniel Lidar about their method of delaying decoherence. He told the press "it’s a little like time travel", because the microwave pulses that make the electron qubits switch their direction of rotation do so in a way that makes the decoherence retrace its steps and disappear. But, quite frankly, this is bullshit. That has no more to do with time travel than moving left does when you want to take back a move to the right. However, now that the quote is out there in a story that headlines with the word "quantum", you can be sure lots of quacks will completely misunderstand it, as they so often do.
A few credits.
Although work like this gets credited in the journals, blogs rarely take the time to actually link out to the individual scientists' blogs and social media in articles like this. So I thought I'd buck the trend by giving a shout out to the authors of the study, including Daniel Lidar, Zhihui Wang, Toeno van der Sar, Machiel Blok, Hannes Bernien, Tim Taminiau, David Toyli, David Awschalom, & Ronald Hanson. Well done.
Also, thanks to Wikipedia for its help in understanding basic principles and the University of Southern California for its press release summarizing the findings of the paper. And a tip of the hat to the Skeptic's Guide to the Universe for letting me know about this recent development in the first place. Your podcast is awesome. (c: