But Everett was less than clear in two respects. First, the quantum state can equally be written as a superposition of any set of basis states; what reason is there to single out his ‘branch states’ as distinguished? This is ‘the preferred basis problem’. Second, Everett assumed that a probability measure over branches is a function of branch amplitude and phase, and from this derived the Born rule (the standard probability rule for quantum mechanics). But there is a natural alternative probability measure suggested exactly by his picture of branching, that is not of this form and that does not in general agree with the Born rule: the ‘branch-counting rule’. Let the world split into two, each with a different amplitude: in what sense can one branch be more probable than the other? If Everett is to be believed, both come into existence with certainty, regardless of their amplitudes. After many splittings of this kind, divide the number of branches with one outcome, by the number of branches with another; that should give their relative probability. This is the branch-counting rule, in general in contradiction with the Born rule.

The paper tries hard to count the branches and get probabilities, but there is no way to do it.
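To make the disagreement concrete, here is a minimal simulation of my own (not from the paper): after n two-way splittings with unequal Born weights, the branch-counting measure and the Born measure assign sharply different probabilities to the same event.

```python
from itertools import product

# Illustrative parameters, not from the paper: each splitting has
# Born weight p = 1/3 for outcome 0 and 2/3 for outcome 1.
p = 1 / 3          # Born probability of outcome 0 at each splitting
n = 10             # number of successive splittings

count_measure = 0.0   # branch-counting probability that outcome 0 is the majority
born_measure = 0.0    # Born probability of the same event

for branch in product((0, 1), repeat=n):
    zeros = branch.count(0)
    weight = p**zeros * (1 - p)**(n - zeros)   # Born weight of this branch
    if zeros > n - zeros:                      # outcome 0 occurred in a majority
        count_measure += 1 / 2**n              # branch counting: all branches equal
        born_measure += weight                 # Born rule: weight by squared amplitude

print(f"branch counting: {count_measure:.4f}")
print(f"Born rule:       {born_measure:.4f}")
```

With these parameters the two measures disagree sharply (about 0.377 versus 0.077), which is the point of the criticism: counting branches does not reproduce the Born rule.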
Everett did not reply to either criticism, having left the field even before the publication, in Reviews of Modern Physics in 1957, of his doctoral thesis, all of eight pages in length; he never wrote on quantum mechanics again.
You can pretend that probabilities come from the Born Rule, but that doesn't make any sense either.
The paper cites arguments for plowing ahead with many-worlds anyway. But if the theory cannot make sense out of probabilities, then what good is it? It cannot do anything, and those who promote it are charlatans.
These days, my personal sense is that Many Worlds is the orthodox position … it’s just that not all of its adherents are willing to come out and say so in so many words! Instead they talk about decoherence theory, the Church of the Larger Hilbert Space, etc. — they just refrain from pointing out the Everettian corollary of all this, and change the subject if someone else tries to. 🙂

This says a lot about the sorry state of modern Physics: that a level-headed guy like Aaronson would believe in such a bizarre fairy tale, and that many of the leaders of this cult are apparently unwilling to publicly admit it.
There are two branches, after all. What does it mean to have one branch be more probable than another?

This is no answer. He is saying to follow the Copenhagen interpretation to determine probabilities, and then to bet on those probabilities in a many-worlds theory, even though many-worlds can say nothing about the probability of those worlds.
I’d say that it simply means: if someone asks you to bet on which branch you’ll find yourself in before the branching happens, then you should accept all and only those bets that would make sense if the probabilities were indeed 1/3 and 2/3, or whatever else the Born rule says they are.
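Aaronson's betting criterion can be sketched in a few lines (my own illustration, with made-up payoffs): accept a bet exactly when its expected value under the Born probabilities is positive.

```python
# Hypothetical sketch of the betting criterion: accept a bet exactly when
# its expected value under the Born probabilities is positive.
def accept_bet(payoffs, born_probs):
    """payoffs[i]: net payoff if branch i is observed; born_probs[i]: Born weight."""
    return sum(q * x for q, x in zip(born_probs, payoffs)) > 0

# Branch probabilities 1/3 and 2/3, as in Aaronson's example.
print(accept_bet([3, -1], [1/3, 2/3]))  # True:  expected value 1/3 - 2/3·1 + ... = +1/3
print(accept_bet([2, -1], [1/3, 2/3]))  # False: expected value 2/3 - 2/3 = 0
```

Note that the criterion is just classical expected value; it takes the probabilities as given, which is exactly the blog's complaint below.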
This means many-worlds is nothing more than taking a theory that predicts probabilities, and pretending that the outcomes that do not occur live on as separate realities.
See also my previously posted comments about that Aaronson post.
Another commenter explains:
In other words, he just doesn't want to accept that an observation tells you what is real.

Can someone who is a many-worlds person, or Everettian if you prefer, explain to me why the many-worlds ontology is so appealing or evident to you? I am looking for someone who is a die-hard, every-branch-is-equally-real many-worldser. Was there one moment where it all clicked for you? Do you believe that there is an uncountable infinity of other branches of the wavefunction that are equally real, equally extant? Do I just lack imagination, or do I just not get it?

For me it was when I realized that Many-Worlds is what you get when you just take what the Schrödinger equation says as literally true, and stop torturing it with an unphysical and ill-defined collapse. It got reinforced when I was taking a lecture on QFT and realized that the high-energy people simply ignore collapse, for them the theory is completely unitary. Obvious in retrospect: for them relativistic effects are crucial, and how could they ever reconcile that with a nonlocal collapse? ... And yes, all branches are real. There’s nothing in the math to differentiate them.
This is like saying: When I toss a coin, theory says that heads and tails each have probability 0.5. I realized that Many-Worlds is what you get when you take both possibilities as literally true, and stop artificially ruling out what does not happen.
In other words, you have a theory that a coin toss gives heads with probability 1/2. That is, heads occurs half the time. The many-worlds guy comes along and says you are torturing the model by excluding the tails that do not happen. So now you say all the possibilities occur all the time, just in parallel worlds. We cannot say that heads occurs half the time, because it occurs all the time. Maybe we could say it occurs in half the worlds, but no one has been able to make mathematical sense out of that. So Scott says the probability is still 1/2, because he would bet on heads that way.
This is stupid circular reasoning. He only bets on coin tosses because of the perfectly good theory that he discards. Once he adopts many-worlds, the bet on heads wins in one branch, and loses in the other.
In comment #618, Aaronson addresses the fact that many-worlds is just conjecturing that all possibilities happen, with no dependence on quantum mechanics:
I, in return, am happy to cede to you the main point that you cared about: namely, that in both the Everettverse and the Classical Probabiliverse, the probabilities of branches could properly be called “objective.” ...

Yes, the whole interference argument is a red herring. Many-Worlds is what you get when you deny probabilities, and assume anything can happen. It has nothing to do with quantum foundations.
Here’s something to ponder, though: your position seems to commit you to the view that, even if we only ever saw classical randomness in fundamental physics, and never quantum interference, a Probabiliverse (with all possible outcomes realized) would be the only philosophically acceptable way to make sense of it.
But given how hard it is to convince many people of MWI in this world, which does have quantum interference, can you even imagine how hard it would be to convince them if we lived in a Classical Probabiliverse?
In fact, if anti-Everettians understood this about your position, they could use it to their advantage. They could say: “Everettians, like Deutsch, are always claiming that quantum interference, as in the double-slit experiment, forces us to accept the reality of a multiverse. But now here’s Mateus, saying that even without interference, a multiverse would still be the only way to make sense of randomness in the fundamental laws! So it seems the whole interference argument was a giant red herring…” 🙂
Comment #637 argues:
Free will is kind of a red herring, as free will is not linked to determinism or non-determinism. A slight majority of philosophers accept this (59%? see Compatibilism), but the gist of the argument is thus:

He then goes on to argue that free will is impossible in naturalist theory, but philosophers have rationalized this by saying we can have a feeling of free will in a determinist world.
This is an argument that only a learned philosopher would make, as it doesn't make any sense. Any normal person would say that if the theory cannot allow free will, then the theory is deficient.
In a discussion about the essence of quantum mechanics, Aaronson argues:
If superposition is the defining feature of QM, then interference of amplitudes is the defining feature of superposition. It’s not some technical detail. It’s the only way we know that superpositions are really there in the first place, that they aren’t just probability distributions, reflecting our own ordinary classical ignorance about what’s going on.

He got this reply:
I’m afraid you’re getting things backwards; interference is an artefact of the substance of matter being made of waves instead of particles. A wave goes through two slits at once and interferes with itself – that’s not at all interesting beyond the question of why matter is made of waves instead of particles, which is more along the lines of the nuts-bolts question. You think it matters because you assume the particle view and particle interactions are ontically prior – but that phenomena is well explained by decoherence.

Aaronson responds:
A billion times no! To describe quantum interference as merely about “matter being made of waves and not particles” is one of the worst rhetorical misdirections in the subject’s history (and there’s stiff competition!). It suggests some tame little ripples moving around in 3-dimensional space, going through the two slits and interfering, etc. But that’s not how it is at all.

He is saying that the quantum supremacy of quantum computers has proved that quantum systems have a complexity that is far beyond what you could expect by saying that QM is a wave theory of matter.
If we have a thousand particles in an entangled state, suddenly we need at least 2^1000 parameters to describe the “ripples” that they form. How so? Because these aren’t ripples of matter at all; they’re ripples of probability amplitude. And they don’t live in 3-dimensional space; they live in Hilbert space.
In other words, what quantum interference changes is not merely the nature of matter, but much more broadly, the rules of probability. That change to the rules of probability is quantum mechanics. The changes to the nature of matter are all special byproducts—alongside the changes to the nature of light, communication, computation, and more.
The great contribution of quantum computation to the quantum foundations debate is simply that, at long last, it forced everyone to face this enormity, to stop acting like they could ignore it by focusing on special examples like a single particle in a potential and pretending the rest of QM didn’t exist. Indeed, by now QC’s success in this has been so complete that it’s disconcerting to encounter someone who still talks about QM in the old way, the way with a foot still in classical intuitions … so that a change to the whole probability calculus underlying everything that could possibly happen gets rounded down to “matter acting like a wave.”
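The exponential growth Aaronson describes is easy to exhibit (a toy sketch of mine, not from his post): the state vector of n entangled qubits has 2^n complex amplitudes, so its dense description doubles with every qubit.

```python
from math import sqrt

# Toy illustration (mine, not from the post): an n-qubit state vector
# lives in a 2**n-dimensional Hilbert space, which is the source of
# Aaronson's 2^1000 figure for a thousand entangled particles.
def ghz_state(n):
    """The n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2) as a dense amplitude list."""
    dim = 2 ** n
    psi = [0.0] * dim
    psi[0] = psi[-1] = 1 / sqrt(2)   # equal amplitude on the all-0 and all-1 branches
    return psi

for n in (1, 2, 10, 20):
    print(n, len(ghz_state(n)))   # 2, 4, 1024, 1048576 amplitudes
```

Even this maximally simple entangled state needs an exponentially large vector to write down naively, which is the complexity a mental picture of "ripples in 3-dimensional space" leaves out.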
I have come to the conclusion that superposition and double-slit experiments are not so mysterious, once you accept that light and matter are waves. Then superposition is just what you expect. Aaronson says that there is a hidden complexity that allows factoring of numbers with hundreds of digits.
I might think differently if we have a convincing demonstration of quantum supremacy. We do not. We are decades away from factoring those numbers. It may never happen. Aaronson himself has retracted his quantum supremacy claims.
So there is a question. In QM, is matter just a wave, or is it a supercomputer in disguise? I will believe it is a supercomputer when it does a super-computation.