Thursday, July 27, 2023

Arguing for Retrocausality

Aeon essay:
Almost a century ago, physics produced a problem child, astonishingly successful yet profoundly puzzling. Now, just in time for its 100th birthday, we think we’ve found a simple diagnosis of its central eccentricity. ...

The strangeness has a name – it’s called entanglement – but it is still poorly understood. Why does the quantum world behave this strange way? We think we’ve solved a central piece of this puzzle. ...

More recently, we ourselves have written about the advantages of retrocausal approaches to QM, both in avoiding action at a distance, and in respecting ‘time-symmetry’, the principle that the microworld doesn’t care about the distinction between past and future. But an additional striking advantage of retrocausality seems to have been missed. It suggests a simple mechanism for ‘the characteristic trait of quantum mechanics’ (Schrödinger), ‘its weirdest feature’ (Weinberg) – in other words, for the strange connections between separated systems called quantum entanglement.

It is amazing what mental gymnastics people will do to avoid accepting the quantum mechanics of 1927.

The Heisenberg Uncertainty principle is a little strange, but these guys somehow think that it is better to assume that the future determines the past?

Others prefer many-worlds, spooky action at a distance, or superdeterminism. They are all crazy.

Finally, a note for readers who are worried that the cure is worse than the disease – that retrocausality opens the door to a menagerie of paradoxes and problems. Well spotted!
Exactly.

Monday, July 17, 2023

In Search of Quantum Utility in Computation

Quantum supremacy has been claimed and attacked so many times in the last several years, it is hard to keep track of what we are supposed to think.

Here is the latest from the expert:

Speaking of making things common knowledge, several people asked me to blog about the recent IBM paper in Nature, “Evidence for the utility of quantum computing before fault tolerance.” So, uhh, consider it blogged about now! I was very happy to have the authors speak (by Zoom) in our UT Austin quantum computing group meeting. Much of the discussion focused on whether they were claiming a quantum advantage over classical, and how quantum computing could have “utility” if it doesn’t beat classical. Eventually I understood something like: no, they weren’t claiming a quantum advantage for their physics simulation, but they also hadn’t ruled out the possibility of quantum advantage (i.e., they didn’t know how to reproduce many of their data points in reasonable time on a classical computer), and they’d be happy if quantum advantage turned out to stand, but were also prepared for the possibility that it wouldn’t.

And I also understood: we’re now in an era where we’re going to see more and more of this stuff: call it the “pass the popcorn” era of potential quantum speedups for physical simulation problems. And I’m totally fine with it—as long as people communicate about it honestly, as these authors took pains to.

And then, a few days after our group meeting came three papers refuting the quantum speedup that was never claimed in the first place, by giving efficient classical simulations. And I was fine with that too.

Scott Aaronson was probably the biggest proponent of quantum supremacy. He endorsed Google's announcement in 2019. Now he seems resigned to the possibility that the present state of uncertainty could go on for years, with researchers claiming quantum supremacy, and skeptics replicating the results on classical computers.

Furthermore, he now admits the possibility that quantum computers could be built to have some utility, even if they are never shown to have any complexity advantage over classical computers.

I remain a skeptic. Just to fully explain my position, I accept quantum mechanics. I accept Feynman's argument that quantum mechanical predictions cannot be simulated efficiently on a classical computer.

This raises the possibility that a quantum system could be simulated on an analog quantum computer, and deliver an answer faster than digitally solving Schroedinger's equation. I accept that quantum computers could have some utility on such problems, and outperform digital simulations on a Turing machine.
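Feynman's point can be made concrete with a back-of-the-envelope count. A minimal Python sketch (the 16-bytes-per-amplitude figure is my assumption, corresponding to double-precision complex numbers) of how fast the classical storage cost of a full state vector grows:

```python
# Back-of-the-envelope cost of storing a full n-qubit state vector classically.
# An n-qubit state has 2**n complex amplitudes; at 16 bytes per amplitude
# (double-precision real and imaginary parts), the cost doubles per qubit.

def state_vector_bytes(n_qubits: int) -> int:
    """Bytes needed to hold all 2**n complex amplitudes at 16 bytes each."""
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(f"{n} qubits: {state_vector_bytes(n):,} bytes")
# 50 qubits already needs about 18 petabytes just to write the state down,
# before any time evolution is computed.
```

This is only the storage cost; digitally integrating Schroedinger's equation on that vector is even worse, which is why an analog quantum simulator could plausibly win on such problems.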

What I do not accept is that quantum mechanics has some sort of magic for improving the asymptotic complexity performance of ordinary digital computers on certain math problems. Maybe it will be proved someday, but I doubt it. If so, it is decades away, and current public key cryptography will be safe for a long time.

Another Look at Bohr's Anti-Realist Realism

Christian de Ronde writes in a new paper:
Since its Greek origin, physics has been related to physis, namely, the totality of what is. The realist presupposition that gave birth to physics was the idea that theories provide knowledge about the logos (i.e., order) of reality through the creation of systematic, unified schemes capable to account for the multiplicity immanently found within experience (see for discussion [13, 14]). This was the case for more than two millennia of successful developments from Protagoras and Heraclitus to Plato and Aristotle, and then, up to modern times to Galileo and Newton. But even though modernity — with the creation of classical mechanics — could be regarded as the peak of the Greek theoretical realist program, this period can be also seen as the starting point of the anti-realist re-foundation of science. A process that would culminate in post-modern times, during the 20th century. As Karl Popper would famously describe the situation during the late 1950s:
“Today the view of physical science founded by Osiander, Cardinal Bellarmino, and Bishop Berkeley, has won the battle without another shot being fired. Without any further debate over the philosophical issue, without producing any new argument, the instrumentalist view (as I shall call it) has become an accepted dogma. It may well now be called the ‘official view’ of physical theory since it is accepted by most of our leading theorists of physics (although neither by Einstein nor by Schrödinger). And it has become part of the current teaching of physics.” [43, pp. 99-100]
Physical theories would then become to be regarded as an economy of ‘clicks’ in detectors not necessarily linked to the description of reality.
Osiander is the one who wrote the preface to the 1543 Copernicus book, saying that astronomy models can be useful even if the underlying motions are not true. He wrote that the book's ideas were "not put forward to convince anyone that they are true, but merely to provide a reliable basis for computation." Bellarmino argued that Galileo had not actually proved the motion of the Earth. By 1950, everyone accepted that motion is relative.

Bohr is the man famous for saying:

There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.
People liked to put words in Bohr's mouth, so he may not have said exactly that.

Apparently the philosophical issues over realism were not settled in 1950, as plenty of physicists and philosophers argue about them today. The word "realism" is a misnomer, and the advocates of realism are usually trying to get us to believe in properties that cannot be observed. They are imaginary properties that help make a model work.

Tuesday, July 11, 2023

The Inexhaustible Nonsense of David Bohm

A new paper tries to figure out whether a physicist was a determinist:
If David Bohm is known for one thing, it is for developing a fully deterministic interpretation of quantum mechanics. ...

On December 4, 1950, at the height of the cold war and the McCarthy era, Bohm was indicted for contempt of Congress, because of his affiliations with communism and his refusal to testify for the House Un-American Activities Committee (HUAC) (Freire 2019, 56-62). This led to him being suspended from his job at Princeton University. He had been a member of the communist party for a brief period in 1942, and although he had quickly become disappointed with the party, he had remained a Marxist. During the months he was suspended, Bohm had a lot of free time, which he used to develop an alternative interpretation of quantum physics (Bohm to Schatzman, 1952, in Besson 2018, 335).

By July 1951, Bohm had sent his article, “A suggested interpretation of the quantum theory in terms of ‘hidden’ variables”, to Physical Review, where it was published in early 1952. In Bohm’s interpretation, which is similar to an earlier proposal by De Broglie, particles have a well-defined position and momentum at all times and thus a well-defined path, and their movement is guided by a quantum potential which is derived from the Schrödinger equation. This interpretation yields exactly the same predictions as the standard interpretation. It is indeed deterministic;

They call it deterministic, but when it says a particle is in a particular place, the particle could really be miles away. QM predicts probabilities. If Bohm's theory really made the same predictions, then it would predict probabilities.

Bohm sometimes called his interpretation "causal", but it allows particles to be guided by action-at-a-distance from far-away ghosts. This is the opposite of what most people mean by causal.

There is a Marxist element in Bohm’s conception of infinite levels of nature: Bohm himself directly connects it with dialectical materialism (Talbot 2017, 255). 4 In a later interview, Bohm said:
I remember also, before leaving the United States, I picked up something in the Princeton library, some Soviet publication which mentioned Lenin saying that the electron was inexhaustible. (…). That sort of struck a chord because I said, ‘Well, not only the electron, but everything, all matter is inexhaustible’.5
This refers to a passage from Lenin’s book Materialism and Empirio-criticism, in which Lenin argues against positivism. Lenin writes: “The electron is as inexhaustible as the atom, nature is infinite, but it infinitely exists” (Lenin 1947, 243). Also for Lenin, this meant that our knowledge of nature can never be complete. Bohm’s view of an infinite number of levels of nature was probably also influenced by Friedrich Engels, who also proposes a view according to which there are different levels of nature, with qualitatively new properties emerging at higher levels (Sheehan 1985, 34). Engels argues for materialism, but this is not a crude materialism but rather organicist and emergentist: there is always an emergence of qualitatively new properties. ...

The book Causality and Chance in Modern Physics does not mention the names Engels, Marx, Lenin, Hegel, or the word ‘dialectical’, but it is very clearly based on dialectical materialism. In fact, in his correspondence from his time in Brazil, he mentions working on a book on dialectical materialism, which is probably this book (Talbot 2017, 291). ...

Marxism is often associated with determinism, and especially with the idea that there are laws determining the development of society. But there are different schools in Marxist philosophy. Freire himself has pointed out that there was a variety of Marxist views on quantum physics,

Bohm still has a cult following today. I am not sure why. He had a lot of bad ideas.

Some people say quantum mechanics is nonlocal, because certain correlations cannot be explained by a classical theory. Bohm's theory really is nonlocal. Particles are influenced by distant ghosts. It contradicts almost everything we know about cause and effect.

Einstein was also a determinist, a Zionist, and belonged to a bunch of Communist front organizations. I do not know whether his leftist politics influenced his Physics.

I prefer to judge Bohm and Einstein by their Physics. If you look to them for philosophical ideas, remember that these were terrible people with terrible politics.

Update: There just appeared a similar paper, by different authors, also arguing that Bohm was anti-determinist, even though he is mostly famous for introducing a determinist alternative to quantum mechanics. It glosses over his Communism:
After completing his PhD with Robert Oppenheimer at Berkeley in the 1940s and a post at the Institute of Advanced Studies in Princeton, in 1951, Bohm fell victim of the witch-hunt of McCarthyism because of his adherence to Marxism; this led him to a life of exile: firstly to Brazil, then to Israel, and finally to the UK, where he spent the rest of his life (see [25, 17] for biographies of Bohm).
No, I don't think that he had anything to do with Sen. McCarthy, and I don't think McCarthy ever went after anyone for adhering to Marxism.

I don't know why it matters whether Bohm was a determinist. Einstein was an avowed determinist. His opinions on the subject are not given much weight by either his fans or his detractors.

Bohm has his followers at infinitepotential.com, and there is an upcoming conference on his ideas.

Sunday, July 9, 2023

Dr. Bee Plugs Superdeterminism Again

Sabine Hossenfelder is plugging superdeterminism again, on her weekly podcast:
Why is quantum mechanics non-local? (I wish someone had told me this 20 years ago.)
She correctly says that a lot of people call quantum mechanics nonlocal, but that by this they mean that wave function collapse is nonlocal, which is true but not a physical nonlocality.

She also correctly says that last year's Nobel Prize for Bell test experiments did not endorse any nonlocality, even though some reported it that way.

Those experiments are consistent with:

  • Quantum mechanics, non-classical, local, Copenhagen
  • Nonlocal theories of hidden variables
  • Superdeterminism
  • Retrocausality
  • Many-worlds
The latter four possibilities do not make any physical sense, but are mathematically possible, and some people like to talk about them.

The only possibility ruled out by Bell tests is a local theory of classical hidden variables.
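That ruled-out case can be checked in a few lines. A minimal sketch of the standard CHSH argument (the angles are the textbook optimal choices, not anything from the post): every deterministic local hidden-variable strategy obeys |S| ≤ 2, while the singlet-state correlation E(a,b) = -cos(a-b) reaches 2√2:

```python
import math
from itertools import product

# CHSH quantity: S = E(a,b) - E(a,b') + E(a',b) + E(a',b')

def lhv_max() -> int:
    """Max |S| over deterministic local strategies: each side pre-assigns
    an outcome +-1 to each of its two settings. Mixtures of these are
    convex combinations, so they obey the same bound."""
    best = 0
    for A1, A2, B1, B2 in product((-1, 1), repeat=4):
        S = A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2
        best = max(best, abs(S))
    return best

def quantum_S() -> float:
    """|S| for the singlet correlation E(a,b) = -cos(a-b) at the
    standard optimal angles."""
    a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    E = lambda x, y: -math.cos(x - y)
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print(lhv_max())    # 2
print(quantum_S())  # 2*sqrt(2), about 2.828
```

Bell test experiments measure S above 2, so the deterministic-local model enumerated here is the one option off the table.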

Hossenfelder likes superdeterminism, which in this video she calls "violation of measurement independence". It means you can never randomize the inputs to an experiment, so you can never trust the outputs.

She prefers superdeterminism to nonlocal theories of hidden variables, and acts as if those are the only two possibilities. However, both have been rejected by almost everyone. It is a silly preference.

I only post this because her podcasts get a lot of views, and people think that they are learning something. But she has fringe views that are contrary to almost all of modern science.

Friday, July 7, 2023

Simple Explanation of Relativity

I got this from ChatGPT:
Sure! I'd be happy to explain relativity to a 10-year-old. Here we go:

Have you ever seen a fast-moving train pass by? When you're standing still on the platform, the train looks different to you than it does to someone sitting inside the train. That's because of something called relativity.

Relativity is a fancy word that means things can look different depending on how fast you're moving. One important person who studied this was a scientist named Albert Einstein. He came up with two types of relativity: special relativity and general relativity.

Special relativity talks about how things look when they move very, very fast, almost as fast as light. You see, nothing can move faster than light! Imagine you have a friend who's sitting in a spaceship and zooming past you at an incredible speed. If you could see inside the spaceship, you might notice something strange. Your friend's clock would be moving slower than yours, and even the shape of their spaceship might look different!

General relativity, on the other hand, talks about how gravity affects things. Gravity is what keeps us on the ground and makes things fall down. According to Einstein's ideas, gravity is not just a force that pulls things, but it's also the way objects change the shape of space and time around them. So, if you had a really massive object like a planet or a star, it would make a dent in the space around it, kind of like a heavy ball sinking into a soft mattress.

This bending of space and time is why objects move differently near really massive things. It's like when you roll a marble on a curved surface—it follows a different path than if it were on a flat surface. Planets move around the sun in curved paths because of this bending of space and time caused by the sun's gravity.

So, in a nutshell, relativity is all about how things look and move depending on how fast they're going or how strong gravity is. It's a pretty cool idea, and it helps scientists understand how the universe works!

I hope that helps you understand relativity a little better! If you have any more questions, feel free to ask.

I intend to post my own attempt. This is about what I expected, but I don't think it really explains relativity.

Monday, July 3, 2023

Stating Goedel's Theorem with Less Confusion

Here are Gödel's incompleteness theorems, according to Wikipedia:
The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency.
This is often summarized as saying that there are mathematical truths that cannot be proved.

The obvious implication is that logical proof from axioms is inadequate, as apparently there are other truths, and someone found them to be true somehow.

I think this is confusing. How do we know those truths are true, if no one can prove them?

Maybe it would be better to say: If an arithmetic system has a computable set of axioms, then there will be statements that are true in some models of the axioms, and false in others.

It is called "incompleteness", because a complete theory would be able to prove or disprove every statement from the axioms.

The point is that statements are true or false in some model, and the axioms allow many possible models.

You might think that arithmetic could be given a set of axioms such that only one model is possible. But you cannot, and that was known before Goedel.

Some say that Goedel proved that the axiomatic method does not work. But that is incorrect. The axioms do let you prove all the statements that are true in all the models.

This is an important point, that is worth repeating. The axiomatic method would be defective if there were some statements that were true in all the models, but still could not be proved from the axioms. Goedel proved that was not possible.
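The contrast drawn above is between Gödel's completeness theorem (1929) and his incompleteness theorems (1931). A sketch in standard notation, for a consistent first-order theory T:

```latex
% Completeness (1929): provable from the axioms = true in every model.
T \vdash \varphi \quad\Longleftrightarrow\quad T \models \varphi

% Incompleteness (1931): if T is recursively axiomatized and strong
% enough for arithmetic, some sentence G is undecided by T,
\exists G : \quad T \nvdash G \quad\text{and}\quad T \nvdash \neg G
% so G is true in some models of T and false in others.
```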

Here is a typical account, from a new paper:

In 1931 Kurt Gödel shook mathematics (literally) to its foundations with his incompleteness theorems:

I) No consistent finite set of axioms and rules can be used to prove all true theorems about the integers.

Even worse:
II) No consistent finite set of axioms and rules can prove itself to be consistent!

(Footnote: I am avoiding necessary subtleties. To be more precise, Gödel's theorem can be stated as: The set of provable statements, although not recursive, is recursively enumerable, whereas the set of true statements is not.)

This absolutely contradicts Hilbert’s dream that one could axiomatize so as to prove or disprove any given mathematical claim. The first is frustrating, while the second unveils inescapable limitations in the possible formulation of axiomatic theories and indeed mathematical foundations:

Formal, axiomatic set theory ... cannot be a final foundational theory for mathematics, [yet] nothing else [can] be. Hallett [30]
Or as von Neumann put it in 1930:
There can be no rigorous justification for classical mathematics.
So how do mathematicians deal with this existential crisis in their subject? The only answer is that they learn to live with it.
A theorem is a proved statement. So I do not agree that not all true theorems can be proved. The theorems can be proved, by definition.

Nothing can prove itself consistent, because inconsistent theories can prove anything. Goedel adds a subtle twist to that, but proving self-consistency was hopeless anyway.

I tried to find the source of the von Neumann quote, but failed. He did some excellent work on axiomatizing mathematics, and justifying modern mathematics, so I doubt that he was saying that his own work was not rigorous.

This paper has a very nice summary of the recent history of mathematical proofs, including computer-assisted proofs and wrong proofs that got published. He even mentions a famous mathematician who thought that the foundation of math might be inconsistent!