Saturday, June 27, 2015

Quantum computers will not help interpretations

MIT quantum complexity theorist Scott Aaronson is plugging a PBS essay on his favorite quantum speculations:
a principle that we might call “Occam’s Razor with Computational Aftershave.” Namely: In choosing a picture of physical reality, we should be loath to posit computational effort on Nature’s part that vastly exceeds what could ever in principle be observed. ...

Could future discoveries in quantum computing theory settle once and for all, to every competent physicist’s satisfaction, “which interpretation is the true one”? To me, it seems much more likely that future insights will continue to do what the previous ones did: broaden our language, strip away irrelevancies, clarify the central issues, while still leaving plenty to argue about for people who like arguing.
Lubos Motl defends Copenhagenism:
Quantum mechanics may be understood as a kind of a "black box", like a computer that spits the right result (probability of one observation or another). And we may learn how to perform the calculations that exactly reproduce how the black box works. This is a description that Feynman used to say, too. Some people aren't satisfied with that – they want to see something "inside" the black box. But there is nothing inside. The black box – a set of rules that produce probabilistic predictions for measurements out of past measurements – is the most fundamental description of Nature that may exist. Everything else is scaffolding that people add but they shouldn't because it's not a part of Nature.

Quantum computers won't change anything about the desire of the laymen to see something inside the black box. On the contrary, a quantum computer will be an even blacker box! You press a button, it does something that is completely incomprehensible to a layman, and announces a correct result very quickly, after a short time that experts say to be impossible to achieve with classical computers.
A lot of quantum interpretations fall into the trap of trying to say what is inside that black box, with no way to test whether the answer is right or wrong. Aaronson does this when he talks about how much computation Nature has to do in that box. He implicitly assumes that Nature uses the same mathematical representations that we use when we make macroscopic observations.
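To make that assumption concrete: a direct classical simulation of n qubits keeps track of 2^n complex amplitudes. Here is a minimal Python sketch of that bookkeeping, assuming 16 bytes per amplitude (the size of a double-precision complex number):

    # Memory to store an n-qubit state vector classically,
    # at 16 bytes per complex amplitude (double precision).
    for n in (10, 30, 50, 300):
        amplitudes = 2 ** n
        print(f"{n} qubits: 2^{n} amplitudes, about {16 * amplitudes:.3e} bytes")

Already at 300 qubits the amplitude count exceeds the particle count of the observable universe, which is exactly the "computational effort on Nature's part" that Aaronson wants to razor away. But that effort is only required if Nature really keeps its books this way.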

I do not accept that assumption, as I have posted in essays here.

Eric Dennis comments to Aaronson:
I doubt anyone is really suspicious of the possibility of long coherence times for small systems. We’re suspicious of massive parallelism.
I question those long coherence times. Some of the experiments are like rolling a coin on the floor so that it eventually falls over to heads or tails, with equal probability. Or balancing a rock on the head of a pin so that it eventually falls, with all directions equally likely. Can this be done for a long time? Maybe with the coin, but not with the rock. Can the uncertainty during the long time be used to extract a super-Turing computation? No way.
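For what it is worth, here is the toy picture I have in mind, as a minimal Python sketch rather than a model of any particular experiment. A qubit prepared in an equal superposition and subject to pure dephasing has its off-diagonal density-matrix element decay as exp(-t/T2); the T2 value below is an arbitrary illustrative number:

    import math

    T2 = 100e-6  # assumed dephasing time, 100 microseconds (illustrative only)

    # Off-diagonal element of the density matrix for (|0> + |1>)/sqrt(2)
    # under pure dephasing: rho_01(t) = 0.5 * exp(-t / T2)
    for t in (0.0, 50e-6, 100e-6, 500e-6):
        print(f"t = {t * 1e6:5.0f} us, |rho_01| = {0.5 * math.exp(-t / T2):.4f}")

Once rho_01 is effectively zero, the coin has fallen over, and no super-Turing computation can be extracted from it.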

Update: A company claims:
D-Wave Systems has broken the quantum computing 1000 qubit barrier, developing a processor about double the size of D-Wave’s previous generation, and far exceeding the number of qubits ever developed by D-Wave or any other quantum effort, the announcement said.

It will allow “significantly more complex computational problems to be solved than was possible on any previous quantum computer.”

At 1000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space which dwarfs the 2^512 possibilities available to the 512-qubit D-Wave Two. "In fact, the new search space contains far more possibilities than there are particles in the observable universe."
This annoys Aaronson as much as it annoys me; he says that he has proved that a quantum computer cannot really search all those possibilities simultaneously.
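The proof he has in mind is presumably the optimality of Grover's algorithm: unstructured search over N = 2^n items takes about (pi/4)*sqrt(N) quantum queries, versus roughly N/2 classical ones. That is a quadratic speedup, not a 2^n-fold parallel search. A quick Python sanity check, using the usual rough figure of 10^80 particles in the observable universe:

    import math

    n = 1000
    N = 2 ** n            # the advertised "search space"
    print(N > 10 ** 80)   # True: 2^1000 ~ 1.07e301 does dwarf the particle count

    # Floats overflow at N = 2^1000, so compare query counts by their logarithms.
    log10_classical = n * math.log10(2) - math.log10(2)             # ~ N/2 queries
    log10_grover = 0.5 * n * math.log10(2) + math.log10(math.pi / 4)
    print(f"classical: ~10^{log10_classical:.0f} queries")          # ~10^301
    print(f"Grover:    ~10^{log10_grover:.0f} queries")             # ~10^150

A quadratic speedup over an impossible number of queries is still an impossible number, so the talk of considering 2^1000 possibilities simultaneously is just hype.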

D-Wave probably has a legitimate technological achievement, but it does not have any real qubits, and there are no previous quantum computers.
