Wednesday, April 3, 2013

No entanglement of many qubits

Scott Aaronson, who is coming out with his own book on quantum computing, criticizes another book:
Overall, Lance gives an admirably-accurate summary, and I was happy to see him throw cold water on breathless predictions about QC and other quantum-information technologies finding practical applications in the near future.  However, I think he goes beyond the truth when he writes:
[W]e do not know how to create a significant amount of entanglement in more than a handful of quantum bits.  It might be some fundamental rule of nature that prevents significant entanglement for any reasonable length of time.  Or it could just be a tricky engineering problem.  We’ll have to let the physicists sort that out.
The thing is, physicists do know how to create entanglement among many thousands or even millions of qubits — for example, in condensed-matter systems like spin lattices, and in superconducting Josephson junctions.  The problem is “merely” that they don’t know how to control the entanglement in the precise ways needed for quantum computing.  But as with much quantum computing skepticism, the passage above doesn’t seem to grapple with just how hard it is to kill off scalable QC.  How do you cook up a theory that can account for the massively-entangled states that have already been demonstrated, but that doesn’t give you all of BQP?
No, we cannot create entanglement among thousands of qubits. We can entangle thousands of electrons, or even atoms, but those are not thousands of controllable qubits.

Aaronson's main point is that scalable quantum computing appears to be very difficult or impossible in the lab, but there is no good theory explaining why it should be impossible.
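For concreteness, the kind of state usually meant by "entanglement among many qubits" is a GHZ (cat) state, (|00...0> + |11...1>)/sqrt(2). The sketch below is purely illustrative; it just builds that statevector with numpy and is not meant to represent the condensed-matter or Josephson-junction experiments Aaronson cites. The function name ghz_state and the choice of 10 qubits are hypothetical, chosen only for the example.

    import numpy as np

    def ghz_state(n):
        """Statevector of an n-qubit GHZ ('cat') state: (|0...0> + |1...1>)/sqrt(2)."""
        psi = np.zeros(2 ** n)
        psi[0] = 1 / np.sqrt(2)    # amplitude of |00...0>
        psi[-1] = 1 / np.sqrt(2)   # amplitude of |11...1>
        return psi

    # The GHZ state is entangled across all n qubits: the reduced state of any
    # proper subset of qubits is mixed, so the state is not a product state.
    # The vector has 2**n components, which hints at why precisely controlling
    # many qubits (rather than merely entangling many particles) is so hard.
    psi = ghz_state(10)
    print(psi.shape)   # (1024,)

This shows only the abstract state; the dispute above is over whether laboratory systems with thousands of entangled particles count as thousands of controllable qubits.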
