Friday, January 27, 2023

NY Times on Where Physics is Headed

Peter Woit reports:
The New York Times today has Where is Physics Headed (and How Soon Do We Get There?). It’s an interview by Dennis Overbye of Maria Spiropulu and Michael Turner, the chairs of the NAS Committee on Elementary Particle Physics – Progress and Promise. This committee is tasked with advising the DOE and NSF so they can “make informed decisions about funding, workforce, and research directions.”
Turner: But it is a powerful mathematical tool. And if you look at the progress of science over the past 2,500 years, from the Milesians, who began without mathematics, to the present, mathematics has been the pacing item. Geometry, algebra, Newton and calculus, and Einstein and non-Riemannian geometry.
He probably meant "Einstein and non-Euclidean geometry" or "Einstein and Riemannian geometry". Technically, general relativity metrics are indefinite and not Riemannian, so he could have meant non-Riemannian. However, Riemannian geometry was the math that Einstein and Hilbert used for the field equations.
Among the many features of string theory is that the equations seem to have 10⁵⁰⁰ solutions — describing 10⁵⁰⁰ different possible universes or even more. Do we live in a multiverse?

Turner:

I think we have to deal with it, even though it sounds crazy. And the multiverse gives me a headache; not being testable, at least not yet, it isn’t science. But it may be the most important idea of our time. It’s one of the things on the table. Headache or not, we have to deal with it. It needs to go up or out; either it’s part of science or it isn’t part of science.
If it is not testable, and not part of science, why do we have to deal with it? They are just solutions to some equations that have no known relationship to the real world.

Wednesday, January 25, 2023

PBS Nova: Einstein's Quantum Riddle

PBS TV Nova has a new documentary on Einstein's Quantum Riddle, free on PBS and YouTube.

It is all about entanglement as the fundamental mystery of the universe.

I think it is entirely wrong. As noted before, entanglement is not so different from classical physics.

It argues that entanglement can be used to improve communications security, because the encryption relies on the laws of physics.

Monday, January 9, 2023

Carroll and Coyne Against Free Will

Jerry Coyne and Sean M. Carroll posted new rants on free will.

Maybe I am stupid, but these guys don't make much sense to me. Carroll says he believes in free will, but he is also a determinist, and thinks it is theoretically possible to develop technology to predict everything your brain will do. If so, you won't have free will. But that will probably never happen, so you can think of yourself as having free will as a way of coping with everyday life.

In other words, free will is an illusion.

Coyne is more firmly against free will.

[reader comment] According to your theory, the “sane” person should never be found guilty of a crime

[Coyne] Oh for crying out loud, you haven’t followed my writings on this at all. There are very good reasons to convict sane people of a crime: to keep them away from society, to reform them, and to act as a deterrent. Go read the “free will” posts on this site before you make remarks like that.

That answer might make sense if the judge has free will and the criminal does not. But if no one has any free will, what is the point of giving any reasons for doing anything? It is all pre-determined, so just sit back and enjoy the ride. Nothing you or the judge decide will make any difference.

Here is more:

[reader] Your comments make perfect sense from within the materialist/determinist paradigm, but I also think it points out why the materialist/determinist paradigm is just as incoherent as any other theory of consciousness.

1. Is it actually possible to live consistently within the idea that no one chooses any thought they hold? That would render much of Professor Coyne’s popular life work moot. Why try and convince people of the futility of religious or creationist beliefs if they could not have believed otherwise?

[Coyne] 1. I do live that way. Also, even though what I wrote may have been determined, it can still change people’s minds, because it is an environmental factor that can influence other people. Saying that determinism makes my work moot is not only incoherent in itself, but, frankly, offensive. I don’t CARE if I was determined to write what I did. I’m happy to know that I’ve changed people’s minds, which I have.

[reader] 1. I have often felt a serious blind spot by those who call themselves determinists is their unwillingness to give up popular folk notions of personal responsibility. It is an incompatibility to say that our thoughts and behaviors are determined but people who I disagree with can change their positions. I don’t think appealing to any intermediary step such as environment helps as that step will be just as determined. All ideas have consequences and determinism has them for our everyday notions of law and morality.

[Coyne] This is the last bit of the exchange; we’re done.

1. There is no incompatibility. If you kick a friendly dog because you were determined by the circumstances or your personality to do that, the dog will eventually shy away [from] you. Determinism plus behavior change. No problem. You appear to be confused. I’ve already discussed what I mean by “personal responsibility”: Person X did thing Y. Person X is therefore responsible for having done Y. You know this so why is this an issue?

I say humans have more personal responsibility than a dog because of consciousness and free will. Not sure what Coyne is saying. If he is not capable of changing his own mind, then I don't know why he thinks that he can change someone else's mind. If people are just like dogs who have been kicked, then I don't know why they would have personal responsibility.

I have a similar issue with Sam Harris. He is always talking about how no one has free will, and he does not even have the feeling of free will. And yet he spends the rest of his time trying to persuade people of various moral stances. Makes no sense to me.

Here is Carroll on many-worlds, from his blog in 2015:

The particular objection I’m thinking of is:

MWI is not a good theory because it’s not testable.

It has appeared recently in this article by Philip Ball — an essay whose snidely aggressive tone is matched only by the consistency with which it is off-base. Worst of all, the piece actually quotes me, explaining why the objection is wrong. So clearly I am either being too obscure, or too polite.

I suspect that almost everyone who makes this objection doesn’t understand MWI at all. This is me trying to be generous, because that’s the only reason I can think of why one would make it. In particular, if you were under the impression that MWI postulated a huge number of unobservable worlds, then you would be perfectly in your rights to make that objection. So I have to think that the objectors actually are under that impression.

An impression that is completely incorrect. The MWI does not postulate a huge number of unobservable worlds, misleading name notwithstanding. (One reason many of us like to call it “Everettian Quantum Mechanics” instead of “Many-Worlds.”)

Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate.

Got that? He says it would be reasonable to object to many-worlds if you thought it postulated many worlds. But it actually postulates something equivalent to many-worlds, and then derives the many worlds. He says this misunderstanding "saddens me, as an MWI proponent". 

Sorry, but it is a mathematical fact that if you postulate something that implies many-worlds, then you are postulating many-worlds.

A review notes:
Carroll echoes Everett in contending that the key mathematical expression in quantum physics, known as the wave function, should be taken seriously. If the wave function contains multiple possible realities, then all those possibilities must actually exist. As Carroll argues, the wave function is “ontic” — a direct representation of reality — rather than “epistemic,” a merely useful measure of our knowledge about reality for use in calculating experimental expectations. In epistemic interpretations, “the wave function isn’t a physical thing at all, but simply a way of characterizing what we know about reality.”
So once he postulates the equivalent of many worlds, he insists that they are real. No, imaginary unobservable things do not become real by postulating them (or antecedents of them).

Saturday, January 7, 2023

Why you can buy a Bobble-head Einstein

This week's Sabine Hossenfelder video is on Special Relativity: This Is Why You Misunderstand It.
The most important part of Einstein's theories is that they combine space and time into one common entity, space-time. This idea didn't come from Einstein but from Minkowski, but Einstein was the one to understand what it means. Which is why today you can buy a bobble-head Einstein but not a bobble-head Minkowski. Sorry Minkowski.
No, the idea came from Poincare's 1905 paper, and it was further developed by Minkowski in 1907. Einstein missed it in his papers, and even admitted:
Since the mathematicians have invaded the theory of relativity, I do not understand it myself anymore.
Almost everything she says was from Poincare and Minkowski, and not even understood by Einstein until around 1915. She adopts a modern geometrical interpretation that Einstein rejected most of his life.

Monday, January 2, 2023

Textbooks get Plum Pudding Wrong

A couple of Norway professors write:
Most physics textbooks at college and university level introduce quantum physics in a historical context. However, the textbook version of this history does not match the actual history.
Their main complaints are about textbook descriptions of the Plum pudding model and Rutherford model. In particular, the textbooks say that Rutherford introduced unstable electron orbits into Thomson's plum pudding model. This is incorrect. Wikipedia explains:
The Rutherford model served to concentrate a great deal of the atom's charge and mass to a very small core, but didn't attribute any structure to the remaining electrons and remaining atomic mass.
The electron orbits had already been proposed by Thomson and others, and Rutherford was only concerned with the nucleus.

I am glad to see this paper "exposing the flaws in the textbook version of the historical development of quantum theory", but there is no mention of Wikipedia. The paper is organized around the confusion of a hypothetical girl, Emma, who reads Gamow's 1966 book but not Wikipedia.

Physics textbooks love to tell these simplified historical stories. Such as how Galileo dropped two rocks from the Leaning Tower of Pisa and proved Aristotle wrong. Usually the true story is just as good, and more instructive.

Actually, Galileo never dropped anything from the Pisa tower, and Aristotle did not say heavy objects fall faster.

Thursday, December 29, 2022

What is Relativity?

There are two main ways to describe the theory of relativity.

(1) A way of reconciling Maxwell's electromagnetism theory with the motion of the Earth.

(2) Combining space and time into a 4-dimensional spacetime with a non-Euclidean geometry, such that the laws of physics respect that geometry.

Version (1) was the view of Lorentz and Einstein (up to 1912 or so), and also FitzGerald, Maxwell, and other pioneers.

Version (2) was the view of Poincare (1905), Minkowski (1907), Grossmann (1913), and Hilbert (1915). Einstein seemed to eventually adopt this view, but he also denied that geometry had any essential role.

Wikipedia defines:

The theory of relativity usually encompasses two interrelated theories by Albert Einstein: special relativity and general relativity, proposed and published in 1905 and 1915, respectively.[1] Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to the forces of nature.[2] It applies to the cosmological and astrophysical realm, including astronomy.[3]

The theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton.[3][4][5] It introduced concepts including 4-dimensional spacetime as a unified entity of space and time, ...

Typical scholarly historical papers are Elie Zahar's Why did Einstein's Programme supersede Lorentz's, Part I and Part II. (Paywalled)

It is a historical fact that Einstein's programme had no significant influence, and it was the Poincare-Minkowski geometric view that superseded Lorentz's in 1908. Einstein's programme was called the Lorentz-Einstein theory, and almost no one thought he was saying anything different from Lorentz. Vladimir Varicak credited Einstein with a more geometric view in 1911, but Einstein published a rebuttal, denying any difference from Lorentz.

In today's textbooks, relativity is the covariance of Maxwell's equations and other laws under the symmetries of spacetime, notably Lorentz transformations.

Wikipedia is correct that relativity "introduced concepts including 4-dimensional spacetime as a unified entity of space and time", but Einstein did not introduce that concept, and did not accept it in 1908 when everyone else did. It is not clear that he ever completely accepted it, as he denied the importance of geometry.

When I say non-Euclidean geometry, I do not just mean the curvature of general relativity, or the hyperbolic geometry rapidity of special relativity. I mean the geometry that Minkowski so clearly explained in 1907-8, where 4-D spacetime has an indefinite metric and a symmetry group preserving that metric. A geometry is a space with some structure, and a group preserving the structure. For example, see this recent Roger Penrose interview, where he calls it Minkowski geometry. Or see this recent Terry Tao post.
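To make the structure concrete, here is the standard statement of what Minkowski described, in the (+,−,−,−) sign convention (a sketch; conventions vary by textbook):

```latex
% Minkowski spacetime: \mathbb{R}^4 with the indefinite quadratic form
s^2 = c^2 t^2 - x^2 - y^2 - z^2,
\qquad \eta = \operatorname{diag}(+1,-1,-1,-1).
% The symmetry group is the set of linear maps preserving that form:
\mathrm{O}(1,3) = \{\, \Lambda \in \mathrm{GL}(4,\mathbb{R}) :
  \Lambda^{\mathsf{T}} \eta \, \Lambda = \eta \,\}.
```

The geometry, in the sense used above, is the pair: the space with this indefinite metric, together with the Lorentz group O(1,3) (extended by translations to the Poincare group) preserving it.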

Thursday, December 22, 2022

Is Psi Ontic or Epistemic?

Here is a typical paper addressing the reality of the wave function in quantum mechanics:
The ontological models framework distinguishes ψ-ontic from ψ-epistemic wave-functions. It is, in general, quite straightforward to categorize the wave-function of a certain quantum theory. Nevertheless, there has been a debate about the ontological status of the wave-function in the statistical interpretation of quantum mechanics: is it ψ-epistemic and incomplete or ψ-ontic and complete?
I do not see how this question makes any sense. The wave function is not directly observable. It is useful for making predictions. What sense is there in wondering whether or not it is real?

Bell's theorem shows that psi cannot be epistemic in the sense that it shows our knowledge about an underlying local hidden variable theory. But it could still encode our knowledge about the physical state.

The paper also talks about whether psi is statistical. Again, this makes no sense. All physical theories are statistical in the sense that you can run multiple experiments, and compute statistics on the predictions and outcomes. Quantum mechanics is not much different from classical mechanics in this way. All the theories give error bars on predictions, if you do them right.

Supposedly the PBR Theorem proves that psi is ontic. This is very misleading, at best. If I release an electron from my lab, its wave function will soon be miles wide. Does anyone think that the ontic reality of that electron is miles wide? No, that is absurd. Only our uncertainty is miles wide, because we don't know which way the electron went.

The PBR paper was originally submitted under the title, "The quantum state cannot be interpreted statistically". It is baffling how anyone could think that they proved something so absurd. Everything can be interpreted statistically. How could it not be? When the paper was published, the editors forced them to change the title to "On the reality of the quantum state". Of course the paper cannot prove anything about reality either.

The Schroedinger Cat thought experiment was supposed to dispose of the idea that the wave function is ontic. The cat is not really half dead and half alive. The wave function predicts probabilities consistent with observation, and that is all. Schroedinger and Bohr thought that this was obvious.

A lot of people seem to believe that the Copenhagen interpretation requires believing in a cat that is a superposition of dead and alive. Wikipedia says so. But it also says that Bohr himself did not, and believed that something like decoherence would irreversibly put the cat into the dead state or the alive state, even before anyone opens the box.

Here is a recently-released short video on Max Tegmark - Many Worlds of Quantum Theory. I wonder if these physicists realize how ridiculous they sound when they give layman explanations.

He says he believes that the many worlds are real, because we see electrons being in two places at once in the lab. But no one has ever seen an electron in two places at once. The double-slit experiment can be interpreted as an electron going thru both slits, but that is just a wave property of an electron. And even if an electron can be in two places, and humans are made of electrons, it does not follow that humans can be in two places.

Never mind the details. Many worlds theory is an absurdity. He says these worlds are real, and there is no evidence for them whatsoever. Listening to him is like going to a comic book convention and watching someone, dressed in a costume, ramble about some fantasy world as if it were real.

Tegmark ends by saying that many-worlds could be disproved if quantum mechanics were disproved. He does not even admit the obvious possibility that quantum mechanics is correct and many-worlds is not.

Sabine Hossenfelder has a new lecture on The Other Side Of Physics (TEDxNewcastle). It is hard to take anything she says seriously, once you realize that she believes in superdeterminism. She talks about Physics as a tool for answering questions, but that is not true under superdeterminism. That teaches that experimental results cannot be believed, because the outcomes could have been forced by a conspiracy dating back to the Big Bang. In particular, it says that all the quantum experiments could be fake, and that we really live under classical mechanics.

In particular, she argues that the correlations of the Bell test experiments are all wrong because they are artifacts of the experimenters failing to properly randomize the input data, and they always fail because of mysterious constraints.

She also says that Einstein's special theory of relativity showed that there is no such thing as "now". This is because when we see or hear something, it takes time for the signals to reach our brains.

Then she rambles about multiverse theory being ascientific, because the scientific method cannot be applied to it. But surely superdeterminism is even worse, and is unscientific, as it says that the scientific method does not work.

She says that information cannot be destroyed. Seems like more unscientific nonsense to me.

There are certain ideas of Physics that are so outlandish as to discredit anyone who promotes them. I include: many-worlds, superdeterminism, retrocausality, simulation hypothesis, and action-at-a-distance.

Monday, December 19, 2022

Do moving clocks slow down?

Some Russian scholars address this question:
The special theory of relativity has fundamentally changed our views of space and time. The relativity of simultaneity in particular, and the theory of relativity as a whole, still presents significant difficulty for beginners in the theory. ...

The real question is why we can even find real clocks (such as atomic clocks) that work very much like the ideal clocks of relativity. The short answer to this question is that a real clock behaves like an ideal clock because the physics that governs its inner workings is Lorentz invariant. ...

However, this question makes sense in the more ambitious program of Lorentz and Poincaré on how the theory of relativity should be developed. Lorentz noted in 1909 that the main difference between the two programs is that “Einstein simply postulates what we have deduced, with some difficulty and not altogether satisfactorily, from the fundamental equations of the electromagnetic field” [109].

Although the Lorentz-Poincaré program of deriving the Lorentz symmetry rather than postulating it was never completely abandoned [110, 2], it is clear that Einstein’s program replaced it [109, 111], and for good reason: even today, Lorentzian symmetry cannot be deduced from a more fundamental theory.

The phrase “moving clocks slow down” is universally used and has been in common use for over a hundred years, and we realize that our suggestion not to use it, while correct, is not very realistic due to the “intellectual inertia” [112] of the physics community. ...

In our opinion, better pedagogical practice would be to base the special relativity from the very beginning on the Minkowski four-dimensional formalism, which allows the introduction of relativistic concepts without any mention of their Newtonian counterparts [13].

It is amusing to see Russian scholars engaging in the same sort of Einstein idol worship that we see in the West, especially when contrary evidence is staring them in the face.

Yes, Einstein simply postulated what Lorentz and Poincare proved. By this, Lorentz meant that they deduced it from experiments, like Michelson-Morley, and established theory, like Maxwell's equations. Einstein ignored all that, took their conclusions, and made them postulates.

Einstein did not dispute this. His original papers did not have any references, but in later interviews he always said he got the light postulate from Lorentz, and did not use Michelson-Morley himself.

The Minkowski 4-D formalism was invented and published by Poincare in 1905, and popularized by Minkowski in 1908. Einstein eventually accepted it a few years after that.

I am not sure Einstein's program is really so popular in textbooks. All the books I've seen either start with Michelson-Morley, or go straight to 4-D. The books might credit Einstein with these ideas, but he missed them.

For example, here are a couple of recent lectures on spacetime, here and here. Both credit Einstein with discovering relativity, but acknowledge that it was Minkowski's 4-D formalism that caught on among physicists of the day and still dominates, and that Einstein accepted it only reluctantly, a couple of years later, after arguing against it.

In his 1922 Kyoto lecture “How I Created the Theory of Relativity,” Einstein describes the decisive moment when he became enlightened in conversations with his friend Michel Besso: “My interpretation was really about the concept of time. Namely, time could not be defined absolutely, but is in an inseparable relationship with the signal velocity” [107].

Although the alleged cause of Einstein’s redefinition of time is not known for certain, and it is conceivable that Einstein, consciously or unconsciously, may have borrowed from other authors, most plausibly from Poincaré, more than his writings and sayings suggest [108], without any doubt, this new definition of time was the most shocking and paradoxical aspect of special relativity.

Einstein elevated Lorentz’s local time to the status of “true” time, and for his contemporaries it became the successor to Newtonian time. This immediately gave the theory a paradoxical tinge, since Newtonian time is absolute, while Lorentz’s local time varies depending on the inertial frame of reference.

Einstein said his big breakthrough in 1905 was to understand the local time that Lorentz invented in 1895. Lorentz got the Nobel Prize in 1902, and Poincare's nomination credited him with the ingenious invention of local time.

To answer the title question, the clocks do not slow down, relative to their own world lines. They appear to slow down in another frame, because of the 4-D non-Euclidean geometry.
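As a sketch of why, take the Minkowski interval in the (+,−,−,−) convention; proper time is arc length along a world line, so for a clock moving at constant speed v in some frame:

```latex
c^2 \tau^2 = c^2 t^2 - v^2 t^2
\quad\Longrightarrow\quad
\tau = t \sqrt{1 - v^2/c^2} = t/\gamma .
```

The clock ticks off its own proper time τ; the factor 1/γ is a projection effect of the indefinite metric, analogous to a rotated rod having a shorter horizontal projection in Euclidean geometry.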

Discussion of whether the special relativity contractions and dilations are real or apparent has a long history. From Wikipedia:

In the period 1909 to 1913 Varićak had correspondence with Albert Einstein[6] concerning rotation and length contraction where Varićak's interpretations differed from those of Einstein. Concerning length contraction Varićak said that in Einstein's interpretation the contraction is only an "apparent" or a "psychological" phenomenon due to the convention of clock measurements whereas in the Lorentz theory it was an objective phenomenon.[7] Einstein published a brief rebuttal, saying that his interpretation of the contraction was closer to Lorentz's.[8]
See also Reality of length contraction. Einstein still did not accept in 1911 that relativity used a non-Euclidean geometry, and that the contractions and dilations were artifacts of that geometry.

Another new paper argues:

There have been three geometrizations in history. The first one is historically due to the Pythagorean school and Plato, the second one comes from Galileo, Kepler, Descartes and Newton, and the third is Einstein's geometrization of nature. The term geometrization of nature means the conception according to which nature (with its different meanings) is massively described by using geometry.
Einstein would not agree with this. He did not believe that General Relativity geometrizes gravity. He persisted in this view, long after he was credited with geometrizing gravity. Steven Weinberg also did not like that view. Strange, as they sometimes used geometric arguments to solve general relativity problems, and almost everyone else accepts the geometry view.

Wednesday, December 14, 2022

Dynamical Models do not Generate Free Will

Physics professor Scott Aaronson's blog attracts trolls, but a couple of comments there are so glaringly wrong that it is worth showing why.

SR Says:

As pointed out by Mateus Araújo above, QFT is local, so it seems to me that the only ways to reconcile this with the nonlocality of standard QM evidenced by the Bell test are: (1) QFT being totally incorrect in an easily-measurable way, (2) Many-Worlds, so that the appearance of non-locality is only due to our observable universe residing in a slice of the “true” wavefunction, (3) superdeterminism.
If those were really the only consequences of the Bell tests, then the Nobel citation would have said so. It did not.

Fred writes:

“It’s very weird that we feel that we have the free will to perform experiments and the consciousness to feel like we understand them, when, to our best understanding, everything is predetermined by the laws of physics.”

What’s even weirder is that determinism isn’t even the core culprit here! You only have two ingredients at each end of the spectrum: perfect determinism (e.g. a bunch of balls moving around and hitting one another… i.e. every event is caused by a prior event), and pure randomness (everything is uncorrelated noise, i.e. no causality, things happen without a prior cause… very weird too when you think about it).

And then you can mix those two in various amount, on a continuous scale. QM is a mix of determinism and randomness, somewhere in the middle of the scale. MWI + consciousness also seems to lie in the middle of the scale (the wave function of the universe is determined, but my place as a conscious being on that structure seems random, from my subjective point of view).

When it comes to free will: sure, determinism seems to obviously exclude it… but randomness seems to exclude it too! For the general idea of free will isn’t exactly understood by throwing a dice at every moment a supposed “decision point” happens.

He is right that our dynamical models do not generate free will. That is why it is called free will.

He responds:

You might as well call it pixie magic dust then?

A dynamic system is a system whose state evolves with time (a parameter).

My point is that dynamic systems either evolve following causality (current state is derived from prior state) and/or randomness (current state is independent of prior state), and then any degree of mix of those two things (where events depend partly on prior events and partly on some randomness).

Note that randomness is non-determinism, meaning an event without any cause within the system. Whether that randomness is pure (appearing magically within the system) or is a dependence on causes external to the system is basically the same.

That’s it!

What other ingredient would there be?

No, randomness is not an event without any cause within the system, it is an event without cause in the model. Something is random if your model cannot predict it. There is no such thing as pure randomness.

Free will is a form of randomness that no one is even trying to model.

He moved on to the latest scientific hoax:

Looks like fusion energy supremacy has been demonstrated!

Scott Says: Only “scientific supremacy,” not supremacy where you account for the actual energy cost of the lasers which is still 100x too high. Still good though!

They refer to this story:
Scientists with the U.S. Department of Energy have reached a breakthrough in nuclear fusion.

For the first time ever in a laboratory, researchers were able to generate more energy from fusion reactions than they used to start the process. The total gain was around 150%.

"America has achieved a tremendous scientific breakthrough," Energy Secretary Jennifer Granholm said at a press conference.

The achievement came at the National Ignition Facility (NIF), a $3.5 billion laser complex at Lawrence Livermore National Laboratory in California. For more than a decade, NIF has struggled to meet its stated goal of producing a fusion reaction that generates more energy than it consumes.

To make this claim, they ignore the energy required to run the lasers, and to power the confinement structure. As Scott notes, they are really putting 100x energy in.
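The arithmetic behind the two competing gain claims can be laid out in a few lines. This is a sketch using the widely reported figures for the December 2022 NIF shot (about 2.05 MJ of laser light delivered to the target, about 3.15 MJ of fusion yield, and roughly 300 MJ of grid electricity drawn to fire the lasers); the exact numbers are assumptions taken from press accounts.

```python
# Gain arithmetic for the December 2022 NIF shot (reported figures).
laser_to_target_mj = 2.05   # laser energy delivered to the fuel capsule
fusion_yield_mj = 3.15      # fusion energy released
wall_plug_mj = 300.0        # rough electricity drawn to fire the lasers

# The headline "150%" number compares yield to laser light on target.
target_gain = fusion_yield_mj / laser_to_target_mj

# The number relevant to a power plant compares yield to wall-plug energy.
wall_plug_gain = fusion_yield_mj / wall_plug_mj

print(f"target gain:    {target_gain:.2f}")
print(f"wall-plug gain: {wall_plug_gain:.3f}")
```

By the first measure the shot gained about 50%; by the second it recovered about 1% of the energy put in, which is the factor of ~100 that Scott refers to.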

The analogy here is that this new experiment is supposed to show that it is possible to generate fusion energy, even if the overall efficiency is lousy. Quantum supremacy is supposed to show that it is possible to generate a super-Turing computation, even if it is completely useless.

Update: I added this comment:

@fred: You say that events are either caused by the previous state (determinism), or independent of it (random), and magic pixie dust is the only other possibility.

Consciousness is the other possible cause. My conscious state appears inscrutable to you. You cannot model it, and my decisions appear unpredictable. You might as well call them random, if you cannot predict them. I cannot really explain it either, except to say that I am more sure of it than any of my perceptions. Cogito ergo sum. (I think, therefore I am.)

Readers here might say I could be a troll or an AI bot, and hence not to be believed. Fine, decide for yourself. Do you really think that modern physics has explained causality so thoroughly as to rule out human consciousness? Or do you make decisions everyday that science cannot predict or explain?

Update: Another comment:
They got a breakthrough on Nuclear Fusion. No one now can say quantum computing is the nuclear fusion of the 60s.
Not sure if this is intended to be sarcastic. Quantum computing is more like nuclear fusion than ever, with both fields making big splashes with supremacy claims of no practical significance.

Monday, December 12, 2022

Impossible or Fundamentally Impossible

Here is Scott Aaronson's main argument for quantum computing:
I notice that you never responded about a fault-tolerant QC running Shor’s algorithm. Do you believe that that’s fundamentally possible, or not? If not, what physical principle is going to come in and prevent it? Will you agree that the discovery of that principle would be a revolution in physics?
He draws a distinction between what is impossible, and what is fundamentally impossible. I am not sure the distinction makes any sense.

In other words, QC might be impossible, but it is not fundamentally impossible unless some law of physics forbids it. We have not found that law of physics. The Extended Church-Turing Thesis forbids it, but he says it is not fundamental and it "is still on thin ice."

A response:

From an engineering point of view, there are often unforeseen limitations emerging from complex interactions of different domains such as physics of materials, chemistry, thermodynamics, mechanics, economics, etc. Those different knowledge fields are themselves at a much higher conceptual level compared to the underlying basic physics they all share, so their own laws/heuristics only hold in specific domains with specific assumptions, and all those various models (often highly non linear) just don’t overlap.

There’s nothing in the basic laws of physics explicitly saying that you can’t build a stable stack of quarters from here all the way up to the edge of space.

But do you believe it can be done? Given an existing stack of quarters, it’s trivial to just add one more quarter to it, and then by recursion assume the stack can be arbitrarily high. But that’s not how system scalability works in practice: at some point, what works for 100 quarters won’t work for 1000 quarters, because new problems are introduced: e.g. the wind will screw things up, and if you build your stack inside a tube with a vacuum, you’re now facing another totally different engineering challenge (create a 100 mile-high tube that can contain a vacuum). And, even without air, you’d have to deal with the effects of tides, plate tectonics, strength limitations in alloys, etc.

There’s also no specific law of physics telling us whether building room-temperature superconductors is impossible.

Same about building a stealth bomber that can travel faster than mach 5 at sea level.

It also goes the other way: a hundred years ago, it would have seemed impossible (given the technology of the day, but given pretty much the same laws of physics) to build a gravitational wave detector that could measure changes in distance around 1/10,000th of the diameter of a proton, between two mirrors separated by 4km.

So, for the vast majority of hard engineering problems (and building a QC *is* a hard engineering problem), the fact that there’s no clear black and white basic principle saying it’s impossible isn’t really helping much at all. It wouldn’t be the first time we set out to build something, and then it never happens because various requirements just can’t be met within the same system (often it’s quietly killed because money runs out and people move on to other things because some new engineering progress makes an entirely different problem more exciting to work on).

So maybe QC is like that gravitational wave detector. It seemed impossible for a long time, until some huge technological advances were made.

In the same thread, Aaronson ridicules the idea that Einstein might have thought that ER=EPR. Einstein did not even believe in wormholes or quantum mechanics. It is not clear today if anyone really believes this wormhole entanglement nonsense. Somehow Einstein inspired a lot of bogus physics thinking.

Sabine Hossenfelder has a new video on Quantum Uncertainty Simply Explained. She correctly describes the uncertainty principle as an essential part of quantum mechanics, but also explains that all waves obey an uncertainty principle. The uncertainty principle is a way of saying that electrons have wave-like behavior.
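That classical fact is easy to check numerically. Here is a minimal sketch (my own illustration, not from her video): take a Gaussian pulse, compute its RMS width in time and in angular frequency, and the product comes out at the minimum-uncertainty bound of 1/2, with no quantum mechanics anywhere.

```python
import numpy as np

# A purely classical Gaussian wave packet.
sigma = 1.0
t = np.linspace(-50, 50, 4096)
dt = t[1] - t[0]
g = np.exp(-t**2 / (2 * sigma**2))

# RMS width in time, weighted by intensity |g|^2.
w_t = np.abs(g)**2
delta_t = np.sqrt(np.sum(t**2 * w_t) / np.sum(w_t))

# Spectrum via FFT, on an angular-frequency grid.
G = np.fft.fftshift(np.fft.fft(g))
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(t), d=dt))
w_om = np.abs(G)**2
delta_om = np.sqrt(np.sum(omega**2 * w_om) / np.sum(w_om))

print(delta_t * delta_om)  # ~0.5, the minimum time-bandwidth product
```

Any other pulse shape gives a product of at least 1/2; the Gaussian saturates the bound.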

Update: Quanta has an article by Robbert Dijkgraaf saying There Are No Laws of Physics. There’s Only the Landscape. That is because he is a string theorist who studies mathematical abstractions that have nothing to do with reality.

Thursday, December 8, 2022

Holographic Wormhole on a Microchip

From Nature, the leading European science journal:
A holographic wormhole in a quantum computer

Physicists have used a quantum computer to generate an entity known as an emergent wormhole. Quantum systems can be linked by entanglement even when separated by extremely long distances. The authors generated a highly entangled quantum state between the two halves of a quantum computer, creating an alternative description, known as a holographic dual, in the form of an emergent wormhole stretched between two exterior regions. They then simulated a message traversing this wormhole. Such exotic physics is part of efforts to reconcile quantum mechanics with the general theory of relativity.

Dr. Quantum Supremacy responds:
Tonight, David Nirenberg, Director of the IAS and a medieval historian, gave an after-dinner speech to our workshop, centered around how auspicious it was that the workshop was being held a mere week after the momentous announcement of a holographic wormhole on a microchip (!!) — a feat that experts were calling the first-ever laboratory investigation of quantum gravity, and a new frontier for experimental physics itself. Nirenberg asked whether, a century from now, people might look back on the wormhole achievement as today we look back on Eddington’s 1919 eclipse observations providing the evidence for general relativity.

I confess: this was the first time I felt visceral anger, rather than mere bemusement, over this wormhole affair. Before, I had implicitly assumed: no one was actually hoodwinked by this. No one really, literally believed that this little 9-qubit simulation opened up a wormhole, or helped prove the holographic nature of the real universe, or anything like that. I was wrong.

That 1919 eclipse was hyped with a NY Times headline: “Men of Science More or Less Agog Over Results of Eclipse Observations”.

Update: Here is the Quanta video.

Almost a century ago, Albert Einstein realized that the equations of general relativity could produce wormholes. But it would take a number of theoretical leaps and a “crazy” team of experimentalists to build one on Google's quantum computer. Read the full article at Quanta Magazine:
Quanta used to be a respectable magazine. As was Nature, which is now filled with woke nonsense.

Monday, December 5, 2022

What is Entanglement?

Entanglement is supposed to be the essence of quantum mechanics, but I wonder about it. Wikipedia defines:
Quantum entanglement is the physical phenomenon that occurs when a group of particles are generated, interact, or share spatial proximity in a way such that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. The topic of quantum entanglement is at the heart of the disparity between classical and quantum physics: entanglement is a primary feature of quantum mechanics not present in classical mechanics.[1]

Measurements of physical properties such as position, momentum, spin, and polarization performed on entangled particles can, in some cases, be found to be perfectly correlated. For example, if a pair of entangled particles is generated such that their total spin is known to be zero, and one particle is found to have clockwise spin on a first axis, then the spin of the other particle, measured on the same axis, is found to be anticlockwise.

It confusingly says it is a "physical phenomenon", and then says it is a property of how quantum states are described. So it is not a physical property. I removed the word "physical".

My bigger issue is "quantum entanglement is at the heart of the disparity between classical and quantum physics". This is conventional wisdom, so it is reasonable for Wikipedia to say this, but is it true?

Classical systems exhibit entanglement, exactly as it is described here. Suppose you have a system of two balls, and you know their total momentum. Assume that momentum is conserved. Then the balls get separated, and you measure the momentum of one of them. You then know the momentum of the other ball, as the two are perfectly correlated: the momenta must sum to the original total.

You can do the same with angular momentum.
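The two-ball example is trivial to simulate. A minimal sketch, taking the total momentum to be zero as in the Wikipedia spin example above:

```python
import random

# Classical "entanglement": two balls share a known total momentum (zero).
# Measuring one immediately tells you the other, with no spooky action.
def make_pair():
    p = random.uniform(-10.0, 10.0)  # unknown until measured
    return p, -p                     # momenta sum to zero by conservation

for _ in range(5):
    p1, p2 = make_pair()
    assert p1 + p2 == 0  # perfect anticorrelation, purely classical
```

The correlation here comes entirely from the conservation law at the moment the pair was created.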

So how is this any different from the quantum entanglement? I do not see any difference.

Anticipating your objections, you might tell me that the quantum momentum and spin are not known until measured, or that Bell's theorem puts limits on certain correlations. Okay, I accept all that, but what does it have to do with the definition of entanglement? Nothing.
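For concreteness, the limits in question are Bell-CHSH inequalities. A minimal sketch: any local classical model keeps the CHSH combination of correlations at or below 2 in absolute value, while the quantum singlet-state prediction E(a,b) = -cos(a-b), evaluated at the standard angles, reaches 2√2.

```python
import math

# CHSH: local hidden-variable models obey |S| <= 2.
# The quantum singlet state gives E(a, b) = -cos(a - b), which violates it.
def E(a, b):
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2          # Alice's two measurement angles
b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two measurement angles

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the classical bound of 2
```

But again, this violation is a statement about correlations, not about the definition of entanglement.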

Or you might say that entanglement is a nonlocal interaction. Spooky action and all that. But that is a big hoax. Quantum mechanics is a local theory.

A more sophisticated objection says that entanglement is a precious resource that can be used for cryptography, teleportation, and parallel computation. Quantum entangled particles have the necessary magic, and classically entangled ones do not.

This objection is harder to answer, as currently billions of dollars are being spent to try to determine whether any such magic exists. Most say yes, but they have yet to demonstrate anything useful from that magic.

Even if such magic exists, there ought to be a way to define entanglement that makes it clearly a quantum phenomenon, and not a classical one.

Quantum systems are different from classical systems. Bell proved that. The uncertainty principle is different, and so are certain correlations. But I don't see how entanglement is any different.

Lenny Susskind and others have been saying that wormholes and entanglement are the same thing.

YouTube:

Almost a century ago, Albert Einstein realized that the equations of general relativity could produce wormholes. But it would take a number of theoretical leaps and a “crazy” team of experimentalists to build one on Google's quantum computer. Read the full article at Quanta Magazine:
The idea seems to be that if entanglement and wormholes are the same thing, and quantum computers use entanglement to do super-Turing computations, then there should be some wormholes hiding inside a quantum computer. Seems like a joke to me, but I did not read the details.

Update: Peter Woit writes:

The best way to understand the “physicists create wormholes in the lab” nonsense of the past few days is as a publicity stunt ...

I’m hoping that journalists and scientists will learn something from this fiasco and not get taken in again anytime soon. It would be very helpful if both Nature and Quanta did an internal investigation of how this happened and reported the results to the public. Who were the organizers of the stunt and how did they pull it off? ...

his claims in the Quanta video that the result of the Google quantum computer calculation was on a par with the Higgs discovery. Does he really believe this (he’s completely delusional) or not (he’s intentionally dishonest)? ...

Peter Shor says: It seems to me that the string theorists and “it from qubit” community seem to have this unwritten rule that they don’t criticize other members of this community in public.

I used to think that Physics had higher standards than other sciences for truth and professionalism. Apparently not.

Another comment:

I thought you’d be interested to know that the “wormhole created in a quantum computer” story is now being covered in some far-right-wing media. I won’t name them here (they’re very far-right sites, not sure if you’d allow a link here), but they’re essentially saying “isn’t this manifestly stupid? See? Why should we believe scientists when they publish bullshit like this?” and essentially use the story to argue that scientists and science journalists are all a bunch of idiots, hence why should we trust them on vaccines/climate change etc.

This is another consequence of bad publicity stunts like this: it erodes trust in scientists.

Here is one such site. It is so disreputable that Google confiscated its domain name. Possibly the most censored site in the world. It just quotes a NY Times tweet, a Google tweet, a Reuters story, and a couple more tweets, and adds:
This is very obviously fake and it’s goofy that people think it’s real.
I agree with that. It is goofy that people think that this research is real.

Thursday, December 1, 2022

Entanglement Discovered in a Quantum Computer

Lenny Susskind and others have been saying that wormholes and entanglement are the same thing.

YouTube:

Almost a century ago, Albert Einstein realized that the equations of general relativity could produce wormholes. But it would take a number of theoretical leaps and a “crazy” team of experimentalists to build one on Google's quantum computer. Read the full article at Quanta Magazine:
The idea seems to be that if entanglement and wormholes are the same thing, and quantum computers use entanglement to do super-Turing computations, then there should be some wormholes hiding inside a quantum computer. Seems like a joke to me, but I did not read the details.

See Peter Woit for details. At least one physicist calls it a publicity stunt. The quantum computer researchers have burned a lot of money, and need something to show for it.

Update: A comment:

Even if the headline isn’t strictly accurate (a topic for another time, although I think you’re splitting hairs here), what’s the harm? It’s a cool-sounding result which gets people interested in theoretical physics, science more generally. As long as science journalists are driving interest and engagement, I think they’re doing a good job. If you want to discuss bad science journalism, surely a better use of your time would be all the anti-science fake news coming from the populist right in the U.S.
I suspect that this view is common. Over-hyped phony stories generate interest and funding. If you want to be a good Leftist, you should not call out Leftist lies. Instead you should devote that energy to attacking right-wingers!

Update: Scott Aaronson admits that the wormhole story is a big hoax, promoted by physicists who should know better. He also discusses a new paper saying that quantum supremacy is impossible. He says it is no surprise to experts in the field who have known since 2016 that scaling up quantum computers will not work. He is still a believer:

So, though it’s been under sustained attack from multiple directions these past few years, I’d say that the flag of quantum supremacy yet waves. The Extended Church-Turing Thesis is still on thin ice.
That is, he says that he has not been proved wrong yet. Okay, but he hasn't been proved right either.

Monday, November 28, 2022

Aharonov–Bohm effect does not Prove Nonlocality

I heard the suggestion that the Aharonov–Bohm effect proves a form of quantum nonlocality. Wikipedia explains:
The Aharonov–Bohm effect, sometimes called the Ehrenberg–Siday–Aharonov–Bohm effect, is a quantum mechanical phenomenon in which an electrically charged particle is affected by an electromagnetic potential (φ, A), despite being confined to a region in which both the magnetic field B and electric field E are zero.[1] The underlying mechanism is the coupling of the electromagnetic potential with the complex phase of a charged particle's wave function, and the Aharonov–Bohm effect is accordingly illustrated by interference experiments.

The most commonly described case, sometimes called the Aharonov–Bohm solenoid effect, takes place when the wave function of a charged particle passing around a long solenoid experiences a phase shift as a result of the enclosed magnetic field, despite the magnetic field being negligible in the region through which the particle passes and the particle's wavefunction being negligible inside the solenoid. This phase shift has been observed experimentally.[2]

So the effect depends on the potential, and not just the fields.
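The dependence is easy to make quantitative. The phase shift between the two paths around the solenoid is Δφ = qΦ/ħ, where Φ is the enclosed magnetic flux. A back-of-envelope sketch, with illustrative values for the solenoid:

```python
import math

# Aharonov-Bohm phase for an electron encircling a solenoid:
# delta_phi = (q / hbar) * flux, where flux = B * (area inside the solenoid).
# The electron never enters the region where B is nonzero.
q = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s

B = 0.1                  # field inside the solenoid, tesla (illustrative)
radius = 1e-6            # solenoid radius, m (illustrative)
flux = B * math.pi * radius**2

delta_phi = q * flux / hbar
print(delta_phi)  # phase shift in radians, observable mod 2*pi in interference
```

Note that the formula involves only the enclosed flux, a gauge-invariant quantity, even though the local mechanism works through the potential.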

The potential and fields are all locally defined, so what is the problem?

The problem is that only the fields are directly observable, and there is considerable discretion in defining the potential. Sometimes the potential is defined to satisfy a distant condition. This is allowed, because gauge symmetry means it has the same physical effect.

From the viewpoint of differential geometry, the potential is a connection on a complex line bundle, and is a purely local object. It is more fundamental than the fields.

The paradox is that an electron can interfere with itself after going around a non-null-homotopic loop in a region where the line bundle is flat. Arguably there is something nonlocal about that. I don't think so. It is not like action-at-a-distance at all.

Friday, November 25, 2022

Electrons Are Spinning

Scientific American reports:
Quantum Particles Aren’t Spinning. So Where Does Their Spin Come From?

A new proposal seeks to solve the paradox of quantum spin ...

But despite appearances, electrons don’t spin. They can’t spin; proving that it’s impossible for electrons to be spinning is a standard homework problem in any introductory quantum physics course. If electrons actually spun fast enough to account for all of the spinlike behavior they display, their surfaces would be moving much faster than the speed of light (if they even have surfaces at all). Even more surprising is that for nearly a century, this seeming contradiction has just been written off by most physicists as yet another strange feature of the quantum world, nothing to lose sleep over.

No, this is wrong. Electrons do spin. You only get that paradox if you assume that electrons are very tiny spheres or point particles, but quantum mechanics teaches that electrons are non-classical entities with wave-like properties.
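For reference, the "standard homework problem" the article alludes to is a back-of-envelope estimate: model the electron as a uniform classical sphere of the classical electron radius carrying angular momentum ħ/2, and its equator would have to move at over a hundred times the speed of light. That is the whole calculation; whether it tells you anything about real electrons is exactly what is in dispute.

```python
import math

# If the electron were a uniform classical sphere with angular momentum
# hbar/2, how fast would its equator move?
# L = I*omega = (2/5) m r^2 * (v/r)  =>  v = 5*L / (2*m*r)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
m = 9.1093837015e-31     # electron mass, kg
r = 2.8179403262e-15     # classical electron radius, m
c = 2.99792458e8         # speed of light, m/s

v = 5 * (hbar / 2) / (2 * m * r)
print(v / c)  # equatorial speed in units of c, far greater than 1
```

The calculation only shows that the classical-sphere model fails, which is no surprise, since classical models of the electron fail in many other ways too.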

The article goes on to give the history of quantum spin, and how crucial it is for understanding chemistry and many other things.

But all of these fabulous discoveries, applications, and explanations still leave Goudsmit and Uhlenbeck’s question on the table: what is spin? If electrons must have spin, but can’t be spinning, then where does that angular momentum come from? The standard answer is that this momentum is simply inherent to subatomic particles, and doesn’t correspond to any macroscopic notion of spinning.

Yet this answer is not satisfying to everyone. “I never loved the account of spin that you got in a quantum mechanics class,” says Charles Sebens, a philosopher of physics at the California Institute of Technology.

No, this is silly. The QM textbooks teach that position, momentum, energy, angular momentum, and spin are observables that correspond to the classical variables, but cannot be taken literally about electrons as point particles, as the uncertainty principle prevents such a literal treatment. There is not really any difference between spin and the other variables in this respect.

I previously posted Electrons do spin.

Peter Woit explains:

Despite what Sebens and Carroll claim, it has nothing to do with quantum field theory. The spin phenomenon is already there in the single particle theory, with the free QFT just providing a consistent multi-particle theory. In addition, while relativity and four-dimensional space-time geometry introduce new aspects to the spin phenomenon, it’s already there in the non-relativistic theory with its three-dimensional spatial geometry.
Asking whether electrons really spin is like asking whether they orbit the nucleus of an atom. A century ago, physicists tried to model an atom as classical electron orbits, and figured out that it doesn't work. You need a quantum model. But it is still correct to say that the electrons orbit the nucleus.