Wednesday, February 2, 2022

Cannot Count the Many-Worlds Branches

Here is a typical paper trying to make sense out of many-worlds:
But Everett was less than clear in two respects. First, the quantum state can equally be written as a superposition of any set of basis states; what reason is there to single out his ‘branch states’ as distinguished? This is ‘the preferred basis problem’. Second, Everett assumed that a probability measure over branches is a function of branch amplitude and phase, and from this derived the Born rule (the standard probability rule for quantum mechanics). But there is a natural alternative probability measure suggested exactly by his picture of branching, that is not of this form and that does not in general agree with the Born rule: the ‘branch-counting rule’. Let the world split into two, each with a different amplitude: in what sense can one branch be more probable than the other? If Everett is to be believed, both come into existence with certainty, regardless of their amplitudes. After many splittings of this kind, divide the number of branches with one outcome, by the number of branches with another; that should give their relative probability. This is the branch-counting rule, in general in contradiction with the Born rule.

Everett did not reply to either criticism, having left the field even before the publication, in Reviews of Modern Physics in 1957, of his doctoral thesis, all of eight pages in length; he never wrote on quantum mechanics again.

The paper tries hard to count the branches and get probabilities, but there is no way to do it.
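
To make the conflict concrete (my own back-of-the-envelope illustration, not taken from the paper): suppose each measurement splits the world into an "up" branch with amplitude α and a "down" branch with amplitude β, where |α|² + |β|² = 1. After n splittings there are 2ⁿ branches, and the two rules say

$$\text{Born:}\quad \Pr(k \text{ ups}) = \binom{n}{k}\,|\alpha|^{2k}\,|\beta|^{2(n-k)}, \qquad \text{Counting:}\quad \Pr(k \text{ ups}) = \binom{n}{k}\Big/\,2^n .$$

The two agree only in the special case |α|² = 1/2. For any other amplitudes, counting branches contradicts the Born rule, and nobody has a principled way to count them otherwise.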

You can pretend that probabilities come from the Born Rule, but that doesn't make any sense either.

The paper cites arguments for plowing ahead with many-worlds anyway. But if the theory cannot make sense out of probabilities, then what good is it? It cannot do anything, and those who promote it are charlatans.

Scott Aaronson, a recent convert to many-worlds, says:

These days, my personal sense is that Many Worlds is the orthodox position … it’s just that not all of its adherents are willing to come out and say so in so many words! Instead they talk about decoherence theory, the Church of the Larger Hilbert Space, etc. — they just refrain from pointing out the Everettian corollary of all this, and change the subject if someone else tries to. 🙂
This says a lot about the sorry state of modern physics: that a level-headed guy like Aaronson would believe in such a bizarre fairy tale. And apparently many of the leaders of this cult are unwilling to admit it publicly.

He gave this answer after 500 comments:
There are two branches, after all. What does it mean to have one branch be more probable than another?

I’d say that it simply means: if someone asks you to bet on which branch you’ll find yourself in before the branching happens, then you should accept all and only those bets that would make sense if the probabilities were indeed 1/3 and 2/3, or whatever else the Born rule says they are.

This is no answer. He is saying to follow the Copenhagen interpretation to determine probabilities, and then to bet on those probabilities in a many-worlds theory, even though many-worlds can say nothing about the probability of those worlds.

This means many-worlds is nothing more than taking a theory that predicts probabilities, and pretending that the outcomes that do not happen live on as separate realities.

See also my previously posted comments about that Aaronson post.

Another commenter explains:

Can someone who is a many-worlds person, or Everettian if you prefer, explain to me why the many-worlds ontology is so appealing or evident to you? I am looking for someone who is a die-hard, every-branch-is-equally-real many-worldser.

Was there one moment where it all clicked for you? Do you believe that there is an uncountable infinity of other branches of the wavefunction that are equally real, equally extant?

Do I just lack imagination, or do I just not get it?

For me it was when I realized that Many-Worlds is what you get when you just take what the Schrödinger equation says as literally true, and stop torturing it with an unphysical and ill-defined collapse. It got reinforced when I was taking a lecture on QFT and realized that the high-energy people simply ignore collapse, for them the theory is completely unitary. Obvious in retrospect: for them relativistic effects are crucial, and how could they ever reconcile that with a nonlocal collapse? ... And yes, all branches are real. There’s nothing in the math to differentiate them.
In other words, he just doesn't want to accept that an observation tells you what is real.

This is like saying: When I toss a coin, theory says that heads and tails each have probability 0.5. I realized that Many-Worlds is what you get when you take both possibilities as literally true, and stop artificially ruling out what does not happen.

In other words, you have a theory that a coin toss gives heads with probability 1/2. That is, heads occurs half the time. The many-worlds guy comes along and says you are torturing the model by excluding the tails that do not happen. So now you say all the possibilities occur all the time, just in parallel worlds. We cannot say that heads occurs half the time, because it occurs all the time. Maybe we could say it occurs in half the worlds, but no one has been able to make mathematical sense out of that. So Scott says the probability is still 1/2, because he would bet on heads that way.

This is stupid circular reasoning. He only bets on coin tosses because of the perfectly good theory that he discards. Once he adopts many-worlds, the bet on heads wins in one branch, and loses in the other.
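
Here is a toy sketch of the point in Python (my own illustration; the Born weight p and the number of tosses n are made-up parameters, and this is nobody's official model of branching):

```python
from itertools import product

# Toy branching model: every toss splits each existing branch into
# an 'H' branch and a 'T' branch.  The Born rule would weight H by p,
# but branch counting treats every branch as one equal world.
p = 2/3   # hypothetical Born-rule probability of heads
n = 10    # number of tosses

branches = list(product('HT', repeat=n))   # all 2**n branches, counted equally

# Branch-counting "probability" that the first toss came up heads:
count_prob = sum(b[0] == 'H' for b in branches) / len(branches)

print(count_prob)  # 0.5, no matter what p is
print(p)           # 0.666..., what the Born rule says
```

Counting branches always returns 1/2, whatever the Born weight is. That is the contradiction the paper above is wrestling with.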

In comment #618, Aaronson addresses the fact that many-worlds is just conjecturing that all possibilities happen, with no dependence on quantum mechanics:

I, in return, am happy to cede to you the main point that you cared about: namely, that in both the Everettverse and the Classical Probabiliverse, the probabilities of branches could properly be called “objective.” ...

Here’s something to ponder, though: your position seems to commit you to the view that, even if we only ever saw classical randomness in fundamental physics, and never quantum interference, a Probabiliverse (with all possible outcomes realized) would be the only philosophically acceptable way to make sense of it.

But given how hard it is to convince many people of MWI in this world, which does have quantum interference, can you even imagine how hard it would be to convince them if we lived in a Classical Probabiliverse?

In fact, if anti-Everettians understood this about your position, they could use it to their advantage. They could say: “Everettians, like Deutsch, are always claiming that quantum interference, as in the double-slit experiment, forces us to accept the reality of a multiverse. But now here’s Mateus, saying that even without interference, a multiverse would still be the only way to make sense of randomness in the fundamental laws! So it seems the whole interference argument was a giant red herring…” 🙂

Yes, the whole interference argument is a red herring. Many-Worlds is what you get when you deny probabilities, and assume anything can happen. It has nothing to do with quantum foundations.

Comment #637 argues:

Free will is kind of a red herring, as free will is not linked to determinism or non-determinism. A slight majority of philosophers accept this (59%? see Compatibilism), but the gist of the argument is thus:
He then goes on to argue that free will is impossible in a naturalist theory, but philosophers have rationalized this by saying we can have a feeling of free will in a determinist world.

This is an argument that only a learned philosopher would make, as it doesn't make any sense. Any normal person would say that if the theory cannot allow free will, then the theory is deficient.

In a discussion about the essence of quantum mechanics, Aaronson argues:

If superposition is the defining feature of QM, then interference of amplitudes is the defining feature of superposition. It’s not some technical detail. It’s the only way we know that superpositions are really there in the first place, that they aren’t just probability distributions, reflecting our own ordinary classical ignorance about what’s going on.
He got this reply:
I’m afraid you’re getting things backwards; interference is an artefact of the substance of matter being made of waves instead of particles. A wave goes through two slits at once and interferes with itself – that’s not at all interesting beyond the question of why matter is made of waves instead of particles, which is more along the lines of the nuts-bolts question. You think it matters because you assume the particle view and particle interactions are ontically prior – but that phenomenon is well explained by decoherence.
Aaronson responds:
A billion times no! To describe quantum interference as merely about “matter being made of waves and not particles” is one of the worst rhetorical misdirections in the subject’s history (and there’s stiff competition!). It suggests some tame little ripples moving around in 3-dimensional space, going through the two slits and interfering, etc. But that’s not how it is at all.

If we have a thousand particles in an entangled state, suddenly we need at least 2^1000 parameters to describe the “ripples” that they form. How so? Because these aren’t ripples of matter at all; they’re ripples of probability amplitude. And they don’t live in 3-dimensional space; they live in Hilbert space.

In other words, what quantum interference changes is not merely the nature of matter, but much more broadly, the rules of probability. That change to the rules of probability is quantum mechanics. The changes to the nature of matter are all special byproducts—alongside the changes to the nature of light, communication, computation, and more.

The great contribution of quantum computation to the quantum foundations debate is simply that, at long last, it forced everyone to face this enormity, to stop acting like they could ignore it by focusing on special examples like a single particle in a potential and pretending the rest of QM didn’t exist. Indeed, by now QC’s success in this has been so complete that it’s disconcerting to encounter someone who still talks about QM in the old way, the way with a foot still in classical intuitions … so that a change to the whole probability calculus underlying everything that could possibly happen gets rounded down to “matter acting like a wave.”

He is saying that the quantum supremacy of quantum computers has proved that quantum systems have a complexity far beyond what you would expect from saying that QM is a wave theory of matter.
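
For the record, the arithmetic behind his 2^1000 is just the dimension count for a many-particle state space:

$$|\psi\rangle = \sum_{x \in \{0,1\}^{1000}} \alpha_x\,|x\rangle, \qquad 2^{1000} \approx 1.07 \times 10^{301} \text{ complex amplitudes } \alpha_x .$$

Each additional two-level system doubles the number of basis states, so a general entangled state of 1000 of them carries one amplitude for every one of the 2^1000 bit strings.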

I have come to the conclusion that superposition and double-slit experiments are not so mysterious, once you accept that light and matter are waves. Then superposition is just what you expect. Aaronson says that there is a hidden complexity that allows factoring of numbers with hundreds of digits.
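
Indeed, on the wave view the double slit takes one line of algebra (a textbook identity, nothing novel): if ψ₁ and ψ₂ are the waves arriving from the two slits, the intensity on the screen is

$$|\psi_1 + \psi_2|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\!\left(\psi_1^{*}\psi_2\right),$$

and the cross term is the interference pattern. Classical probabilities would just add the first two terms; the cross term is what Aaronson means by amplitudes obeying different rules than probabilities.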

I might think differently if we had a convincing demonstration of quantum supremacy. We do not. We are decades away from factoring such numbers. It may never happen. Aaronson himself has retracted his quantum supremacy claims.

So there is a question. In QM, is matter just a wave, or is it a supercomputer in disguise? I will believe it is a supercomputer when it does a super-computation.

2 comments:

  1. I just went down a Carver Mead rabbit hole. You might perhaps like some of his stuff. Here, for example, http://dx.doi.org/10.1117/12.2046381 (open access), "The nature of light: what are photons?", he suggests that "photons are the 'noise' in an otherwise noiseless process". His is an idiosyncratic perspective that I think doesn't hold together completely, but I found it approximately consonant with wave-like thinking, provided we add noise. I came to that through a link in a comment on Quanta to an interview in American Spectator in 2001, http://worrydream.com/refs/Mead%20-%20American%20Spectator%20Interview.html

  2. Roger,

    I had posted a couple of *further* replies at Scott's blog, but they have not appeared. It's easily possible that Scott has moderated them out.

    That would be nothing new. He had moderated out my comment at the time when Google's Quantum Supremacy claim was being discussed, but the paper had not yet arrived.

... As to my two most recent comments (which Scott's blog doesn't show as of now), I may post these at my personal blog, in the goodness of time --- or I may not. But they *are* relevant / pertinent to QM, esp. to the points being discussed at Scott's thread.

    But yes, I have stopped posting comments to that thread. And I will think thrice before posting any reply at his blog again.

    Let me now welcome the opportunity to post some further points at your blog, here.

    ---

    Referring to what Scott says here:

    > "Because these aren't ripples of matter of all; they're ripples of probability amplitude. And they don't live in 3-dimensional space; they live in Hilbert space."

    Too many errors / sloppy items... Let me go a bit slowly...

    > "Because these aren't ripples of matter of all;"

    The "ripples" refer to \Psi. If \Psi defined over the 3N-D config space doesn't correspond, in some or the other way, to matter, then why does Schrodinger equation work for electrons? Electrons *are* massive particles, aren't they? Apparently, Scott didn't pause to think through this simple logic. But the point is valid --- *in the way* it is valid.

    > "they're ripples of probability amplitude."

Nope. Granted, Scott was trying to simplify. But it still is so much of a gross misrepresentation! They are *not* ripples of probability-whatever at all. Schrodinger built his equation with \Psi and applied it to the H atom, way before Born came up with the probability interpretation. Once again, the "ripples" refer to \Psi. There is a *one-way* street here, and it goes from \Psi to probability (PDF), not the other way around. (Can Scott show it? Could Feynman?) ... That's why calling it a probability-whatever is a gross abuse of language (and a taxing of the listener's reasoning capacity): it identifies them in terms of the consequence and not the cause. Typical Feynmanian way, if I may add.

    > "And they don't live in 3-dimensional space; they live in Hilbert space."

    Half right. Their "life" in the Hilbert space becomes possible only because \Psi is defined over the 3N-D configuration space in the first place.

    Looking at the way Scott has written on these and similar points over the years, I think that he has never ever got *why* the 3N-D config space *becomes* necessary. Lorentz was very quick to grasp this point --- he was the first. (Einstein was second, perhaps third.) Personally and informally (i.e. in my mind), I call it "Schrodinger's Problem", viz., how to map from the 3N-D config space to the 3D physical space, so that the nature of \Psi is clear.

In 1926, Schrodinger thought that he had a handle on it, but soon realized that he didn't. *Many* have tried since (including maybe all physics Nobel laureates since). None has succeeded.

But my point is this: People (like Lorentz) could at least grasp the reasons why and how Schrodinger's Problem comes about --- its roots. I wonder if Scott has ever paused to figure out such things. I think that someone (maybe Umesh Vazirani? or Scott's KG teacher(s)?) handed him the Hilbert space formulation, and he lapped it up, and that was it! What's there to think about? ... He has been talking (including in his classroom lectures) just this way.

    I could go on, with many such things (be it by Scott or by others) but that's neither necessary, nor worth my attention.

    All that I might point out here is this: I think I have solved Schrodinger's Problem. Had I not, I couldn't have written my simulation for the Helium atom.

    Best,
    --Ajit
