Wednesday, January 26, 2022

Why the World is Quantum

Scott Aaronson asks:
Q: Why should the universe have been quantum-mechanical?

If you want, you can divide Q into two subquestions:

Q1: Why didn’t God just make the universe classical and be done with it? What would’ve been wrong with that choice?

Q2: Assuming classical physics wasn’t good enough for whatever reason, why this specific alternative? Why the complex-valued amplitudes? Why the unitary transformations? Why the Born rule? Why the tensor product?

He acts as if quantum mechanics is strange and complicated, and reality would be simpler if it were governed by classical mechanics.

I doubt it. Quantum mechanics allows matter to be composed of stable atoms; in classical electromagnetism, an electron orbiting a nucleus would radiate away its energy and spiral inward. I don't know how you get stable matter in a classical theory. Quantum mechanics also allows for human consciousness and free will. Again, I don't see how else you get those.

Scott explains:

Most importantly, people keep wanting to justify QM by reminding me about specific difficulties with the classical physics of the 19th century: for example, the ultraviolet catastrophe. To clarify, I never had any quarrel with the claim that, starting with 19th-century physics (especially electromagnetism), QM provided the only sensible completion.

But, to say it one more time, what would’ve been wrong with a totally different starting point—let’s say, a classical cellular automaton? Sure, it wouldn’t lead to our physics, but it would lead to some physics that was computationally universal and presumably able to support complex life (at least, until I see a good argument otherwise).

Which brings me to Stephen Wolfram, who several commenters already brought up. As I’ve been saying since 2002 (!!), Wolfram’s entire program for physics is doomed, precisely because it starts out by ignoring quantum mechanics, to the point where it can’t even reproduce violations of the Bell inequality. Then, after he notices the problem, Wolfram grafts little bits and pieces of QM onto his classical CA-like picture in a wholly inadequate and unconvincing way, never actually going so far as to define a Hilbert space or the operators on it.

Even so, you could call me a “Wolframian” in the following limited sense, and in that sense only: I view it as a central task for physics to explain why Wolfram turns out to be wrong!
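As an aside, the CHSH form of the Bell inequality that Scott mentions is easy to check from the quantum formalism. Here is a minimal sketch in Python, assuming the standard singlet-state correlation E(a,b) = -cos(a-b); any local hidden variable theory, such as a classical cellular automaton, is bound by |S| <= 2, while quantum mechanics reaches 2*sqrt(2):

    # CHSH value from the singlet-state correlation E(a,b) = -cos(a - b).
    # Any local hidden variable model (including a classical cellular
    # automaton) satisfies |S| <= 2; quantum mechanics reaches 2*sqrt(2).
    import math

    def E(a, b):
        return -math.cos(a - b)

    a1, a2 = 0.0, math.pi / 2              # Alice's two detector settings
    b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two detector settings

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S), 2 * math.sqrt(2))        # 2.828... > 2, the classical bound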

Here is an explanation of Wolfram's project:
Wolfram’s framework is discrete, finite, and digital (based on a generalization of the cellular automata described in “A New Kind of Science”). Matter, energy, space, time, and quantum behavior emerge from underlying digital graphs. ...

Wolfram’s digital physics is fully deterministic, which seems to exclude free will.

But there is computational irreducibility: The strong unpredictability found in finite digital computations with simple evolution rules.

If you want to know what happens in the future of an irreducible digital computation, you must run the computation through all intermediate steps. There is no shortcut that permits predicting, with total certainty, what will happen in the future, without actually running the computation.

At this moment the question that I’m interested in is: Is computational irreducibility an “acceptable” replacement for free will?
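To make the idea concrete, Wolfram's standard example of irreducibility is the elementary cellular automaton Rule 30, whose evolution, as far as anyone knows, cannot be shortcut. A minimal sketch in Python:

    # Rule 30 elementary cellular automaton, started from one live cell.
    # No known shortcut predicts a given row without computing every
    # intermediate row: Wolfram's "computational irreducibility."
    def step(cells):
        n = len(cells)
        # Rule 30: new cell = left XOR (center OR right), with wraparound
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    cells = [0] * 41
    cells[20] = 1                    # single live cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)

Every row depends on the previous one; nobody knows a formula for, say, the center cell at step n that avoids computing all n rows.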

So in Wolfram's world, you have no free will, but you might be able to make decisions that others cannot predict because of computational complexity?

Computational complexity is Scott's favorite subject, so I guess this has some appeal to him.

Update: Here is a good comment:

Scott #264: The problem is that you can’t even define what this “classical stochastic evolution rule” is. A “freely-willed transition rule” is even woollier.

You are probably aware of the century-old difficulty in even defining what a probability is. We do have a good theory of subjective uncertainty, but that’s not good enough to power the universe, we need true randomness. Do you have an answer? What does it mean to say that state A transitions to state B with probability 2/3 or to state C with probability 1/3?

We are used to letting true randomness be simply an unanalysed primitive. We know how to deal with it mathematically (with the Kolmogorov formalism), and we know how to produce it in practice (with QRNGs), so we don't need to know what it is. But if you are writing down the rules that make a universe tick, that doesn't cut it; you do need a well-defined rule.

The only solution I know is defining true randomness as deterministic branching, aka Many-Worlds. And as I’ve argued before, you do need quantum mechanics to get Many-Worlds.
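His point about stochastic rules is easy to see in code. Any computer implementation of "A goes to B with probability 2/3, else C" bottoms out in a pseudorandom generator, which is itself deterministic. A minimal sketch (the transition rule is a made-up illustration):

    # A "classical stochastic evolution rule": A -> B with probability 2/3,
    # A -> C with probability 1/3. The randomness comes from a seeded
    # pseudorandom generator, so the whole run is actually deterministic.
    import random

    def transition(state, rng):
        if state == "A":
            return "B" if rng.random() < 2 / 3 else "C"
        return state

    rng = random.Random(42)          # fixed seed: reproducible, not "truly" random
    counts = {"B": 0, "C": 0}
    for _ in range(30000):
        counts[transition("A", rng)] += 1
    print(counts)                    # roughly 2:1 in favor of B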

He is right that we understand probability mathematically, and in practice. It is much less clear what is meant when someone talks about true quantum randomness. And this reply:

Maybe someone can answer this, but how does MWI deal with actual probabilities? Let’s say after a measurement there is a 1/3 chance that particle A is spin-up and 2/3 spin-down. As I understand it, MWI means that the universe/wavefunction branches at the point of measurement — in one branch A is spin-up, and in one it is spin-down. What then becomes of the 1/3? There are two branches, after all. What does it mean to have one branch be more probable than another?

There is no good answer. He has put his finger on a fatal flaw in many-worlds. MWI cannot be reconciled with probability.

Another reply:

That’s one of the main difficulties of MWI, there’s no clear agreement among its proponents on how to deal with it (e.g. Sean Carroll has a lot to say on this).

The way I think about it is that everyone agrees on what’s a binary split, a 50/50 branching. And then any other split can be decomposed into a series of 50/50 splits, with special hidden labels.
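For what it is worth, the arithmetic of that decomposition is simple: 1/3 = 0.010101... in base 2, so a 1/3 branch can be simulated by a sequence of fair coin flips, one 50/50 split per binary digit. A minimal sketch:

    # Simulate a 1/3 branch using only fair coin flips: draw bits of a
    # uniform number and compare them to the binary digits of 1/3,
    # which are 0.010101...; each flip is one 50/50 "branching."
    import random

    def one_third(rng, digits="01" * 32):   # digits of 1/3, truncated
        for d in digits:
            bit = rng.randint(0, 1)         # one fair 50/50 split
            if bit != int(d):
                return bit < int(d)         # drawn number fell below 1/3
        return False                        # astronomically unlikely tie

    rng = random.Random(0)
    trials = 100000
    hits = sum(one_third(rng) for _ in range(trials))
    print(hits / trials)                    # approximately 0.3333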

If that is the best answer, it is terrible. The arithmetic may work, but there is no physical definition of a 50/50 branching. The rest is just wishful thinking. With no way to make sense of probability, there is no scientific theory.
