Monday, December 29, 2025

Heisenberg's 1925 Quantum Mechanics Paper

Here is a good new video: How Heisenberg Discovered It. There is more detail here.

It is largely on Heisenberg's famous 1925 Umdeutung paper.

In his article, Heisenberg described a new framework for quantum theory that was based on observable parameters (parameters that could be measured in scientific experiments), such as transition probabilities or frequencies associated with quantum jumps in spectral lines, rather than unobservable parameters, like the position or velocity of electrons in electron orbits. Thus, Heisenberg used two indices for his re-interpretation of position, corresponding to initial and final states of quantum jumps. Heisenberg used his framework to successfully explain the energy levels of a one-dimensional anharmonic oscillator.

Mathematically, Heisenberg used non-commutative operators in his new multiplication rule, i.e. generally A B ≠ B A for quantum quantities A and B. This insight would later become the basis for Heisenberg's uncertainty principle.
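To see the non-commutativity concretely, here is a minimal sketch (my own illustration, not anything from Heisenberg's paper), using two standard 2x2 spin matrices; any two generic matrices would show the same thing.

    import numpy as np

    # Two Hermitian matrices standing in for quantum quantities A and B.
    # These happen to be the Pauli spin matrices sigma_x and sigma_y.
    A = np.array([[0, 1], [1, 0]], dtype=complex)
    B = np.array([[0, -1j], [1j, 0]], dtype=complex)

    print(A @ B)                      # [[ 1j, 0], [0, -1j]]
    print(B @ A)                      # [[-1j, 0], [0,  1j]]
    print(np.allclose(A @ B, B @ A))  # False: A B is not equal to B A

Born soon recognized that Heisenberg's arrays of transition amplitudes multiply exactly like matrices, which is why the order matters.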

This theory became quantum mechanics, and has been well accepted for a century, described in textbooks, and applied to a trillion-dollar industry.

The bizarre thing is that now the leading popularizers of quantum mechanics seem not to understand Heisenberg's first paragraph. The latest offender is the Veritasium channel. I have criticized many others on this blog. What they have in common is that they refuse to accept that quantum mechanics is about observables, and argue that the theory must mathematically represent unobservables.

In their jargon, the unobservables are called hidden variables, and the belief that they should be incorporated into the theory is called realism.

The 2022 Nobel Prize was given for the experimental proof that local hidden variables are impossible. John von Neumann had said so in his 1932 treatise, and the experiments were just confirmation of what the textbooks have said since the 1930s.

Maybe all the textbooks are wrong, but that is like saying that perpetual motion machines are possible, or that rockets can go faster than light. Anyone making such a claim needs to explain how everyone else has been so wrong for so long.

They do not, of course. They mainly give some silly argument about how QM would be hard to understand if it included the unobservables. Of course that is true, because the whole point of QM since 1925 has been to exclude the unobservables.

When you hear people demand realism, they are essentially demanding commuting hidden variables to represent unobservables. The whole point of QM is to avoid such things.

Another trouble point is the supposed quantum nonlocality. I have come to the conclusion that this is a misunderstanding or rejection of probability, and does not even have anything to do with QM. If a theory predicts probabilities, as all physical theories do, then it says there is a chance that one thing happens and another does not. Suppose the outcomes involve at least two spatially separated events. As soon as you run an experiment and see that the one thing happens, you immediately learn that the other did not, and that inference is the supposed nonlocality. You can call this nonlocal, but that is silly, as the same thing happens in any theory.
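To make the point concrete, here is a purely classical toy model (my own illustrative sketch, nothing quantum in it): a red card and a black card are shuffled into two envelopes and mailed to two distant labs. Before opening, each lab assigns probability 1/2 to red; opening one envelope immediately tells you what the distant one contains, with nothing passing between the labs.

    import random

    # Classical correlation from a shared past: shuffle a red card and a
    # black card into two envelopes bound for widely separated labs.
    def one_trial():
        cards = ["red", "black"]
        random.shuffle(cards)
        return cards[0], cards[1]  # (card opened here, card far away)

    here, far_away = one_trial()
    print("observed here:", here)
    print("so the distant card must be:", "black" if here == "red" else "red")

Updating your knowledge about the distant envelope is instantaneous, but nobody calls a deck of cards nonlocal.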

Friday, December 26, 2025

Einstein's Notion of a Principle Theory

Einstein scholar Galina Weinstein writes:
Einstein's distinction between principle theories and constructive theories is methodological rather than metaphysical. Principle theories such as thermodynamics and relativity articulate empirically distilled constraints that delimit admissible microphysical models, while constructive theories remain provisional and revisable....

In late 1919, following the British eclipse expeditions that confirmed the light-bending prediction of general relativity, Albert Einstein agreed to write an explanatory article for The Times of London. Written in German as “Was ist Relativitätstheorie?” and published in English as “Time, Space, and Gravitation” [7, 8], the article was intended not merely as popularization, but as a methodological clarification of the kind of theory relativity is.

In the essay, Einstein contrasts constructive theories (konstruktive Theorien) with principle theories (Prinziptheorien) [7]. This distinction is not merely classificatory but methodological and epistemological in character [22].

Here is that 1919 Einstein paper:
There are several kinds of theory in physics. Most of them are constructive. These attempt to build a picture of complex phenomena out of some relatively simple proposition. The kinetic theory of gases, for instance, attempts to refer to molecular movement the mechanical, thermal, and diffusional properties of gases. When we say that we understand a group of natural phenomena, we mean that we have found a constructive theory which embraces them.

But in addition to this most weighty group of theories, there is another group consisting of what I call theories of principle. These employ the analytic, not the synthetic method. ...

The special relativity theory is therefore the application of the following proposition to any natural process: "Every law of nature which holds good with respect to a coordinate system K must also hold good for any other system K' provided that K and K' are in uniform movement of translation."

The second principle on which the special relativity theory rests is that of the constancy of the velocity of light in a vacuum.

No, relativity was not developed as a principle theory. FitzGerald proposed the relativity length contraction in this 1889 paper:
I have read with much interest Messrs. Michelson and Morley's wonderfully delicate experiment attempting to decide the important question as to how far the ether is carried along by the earth. Their result seems opposed to other experiments showing that the ether in the air can be carried along only to an inappreciable extent. I would suggest that almost the only hypothesis that can reconcile this opposition is that the length of material bodies changes, according as they are moving through the ether or across it, by an amount depending on the square of the ratio of their velocity to that of light. We know that electric forces are affected by the motion of the electrified bodies relative to the ether, and it seems a not improbable supposition that the molecular forces are affected by the motion, and that the size of a body alters consequently.
This appears partly inspired by this 1888 Heaviside paper. That is, solid objects are held together by electromagnetic forces, and those fields were known to be warped by motion. Relativity was the constructive consequence.

The relativity principle that laws holding in K must also hold for K' was essentially what Lorentz proved in 1895.

Einstein seemed to disavow all of this when quantum mechanics was discovered. He was happy to avoid the question of how the FitzGerald contraction works on the molecular level, but refused to accept a quantum theory that did not explain the Heisenberg uncertainty on an atomic level.

It is interesting that in 1919 Einstein was still using 1895 terminology, and not saying that the laws of nature must be expressed as covariant equations, or that the laws must be well-defined on a non-Euclidean manifold.

Weinstein posted some other goofy papers recently, including this on Einstein EPR entanglement, and this comparing Heisenberg-Schroedinger to the P=NP problem. The Heisenberg and Schroedinger theories were mathematically equivalent, but she has a whole paper analogizing them to things that are completely different.

Monday, December 22, 2025

Aaronson's Latest Quantum Computer Assessment

Dr. Quantum Supremacy, aka Scott Aaronson, posts his latest judgment on the feasibility of quantum computers.

In brief, quantum supremacy has not been achieved, but he still has hopes based on theoretical considerations from 30 years ago, and recent progress in quantum gate fidelity.

And he hints that at some point, researchers might hold back on public announcements, just as 1940s research into fission bombs avoided publishing how to build a bomb.

I think that artificial general super intelligence is potentially a lot more dangerous than quantum computers, and so there would be more reason to hold back on that. Maybe OpenAI or Google or Microsoft is holding back, but I doubt it. They are locked in a high-stakes competition.

He makes this ominous comment:

Similarly, at some point, the people doing detailed estimates of how many physical qubits and gates it’ll take to break actually deployed cryptosystems using Shor’s algorithm are going to stop publishing those estimates, if for no other reason than the risk of giving too much information to adversaries. Indeed, for all we know, that point may have been passed already. This is the clearest warning that I can offer in public right now about the urgency of migrating to post-quantum cryptosystems, a process that I’m grateful is already underway.
The US government is migrating to post-quantum cryptosystems, but I don't think those estimates will help any evil-doers. So far, quantum computers can only factor 15 = 5x3. It will take quantum computers 50 years to crack today's cryptosystems, even if it is possible.

Saturday, December 20, 2025

Veritasium goes Full Retard

This YouTube channel has nearly 20 million subscribers, and a lot of truly excellent videos. But the latest release gets physics badly wrong.
The Experiment That Breaks Relativity

Veritasium 19.7M subscribers

Dec 18, 2025
How an argument between Einstein and Bohr changed quantum mechanics forever.

A tipoff is when it says that all the textbooks are wrong:
7:09 - Physicists tell a version of this story, you know, that you will find in physics textbooks and in pop science books, and that, you know, physicists tell amongst ourselves, that what happened was Einstein and Bohr had a great debate and Einstein was unhappy with quantum mechanics ...

33:29 - We did do this experiment again, and the number we got very much agreed with quantum mechanics, but this is one of the most misunderstood experiments in all of physics. - You'll find in all sorts of physics textbooks and papers and whatnot, that what Bell's theorem proves is that it rules out local hidden variables or local realism. ...

35:00 - It's a really deep misunderstanding that shows up in almost every single textbook on the subject. - So what does Bell's theorem really prove?

No, the quantum mechanics textbooks are right, and this video is wrong.

Here is the textbook explanation of quantum mechanics.

Position and momentum do not commute, and do not have definite values until observed. There is a Heisenberg uncertainty. If you measure both, your answers will depend on the order of measurement.

This contrasts with classical mechanics, where these variables have values independent of measurement.

Einstein and Bell wondered whether quantum mechanics could be reformulated as a classical theory. Bell cleverly formalized classical theories as theories of local hidden variables, and proved a theorem that such theories differ from quantum mechanics. Experiments then confirmed the quantum mechanics that everyone had believed since the 1930s.
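For the record, the arithmetic behind Bell's theorem fits in a few lines. The sketch below uses the standard textbook CHSH setup and the singlet correlation E(a, b) = -cos(a - b); the specific angles are the usual ones and are my choice of illustration, not anything from the video.

    import numpy as np

    # CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
    # Any local hidden variable theory satisfies |S| <= 2.
    # Quantum mechanics predicts E(a, b) = -cos(a - b) for the spin singlet.
    def E(a, b):
        return -np.cos(a - b)

    a1, a2 = 0.0, np.pi / 2             # two detector settings on one side
    b1, b2 = np.pi / 4, 3 * np.pi / 4   # two detector settings on the other

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))   # about 2.83, beyond the local hidden variable bound of 2

The experiments honored in 2022 measured values on the quantum side of that bound.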

That was the end of the matter, for all serious thinkers. But some pursue loopholes to this argument, namely nonlocal theories, many-worlds, and superdeterminism.

This is where Veritasium goes off the rails, making three fundamental errors.

(1) That simple entanglement examples show that quantum mechanics is nonlocal. The given example is to produce two related particles, such that a conservation law guarantees that observing one immediately tells you something about the other.

The same thing happens in classical mechanics. This does not distinguish classical and quantum theories, or local and nonlocal theories.

(2) That "realism" means a classical theory, so if you believe in reality, you have to accept classical mechanics and reject quantum mechanics.

Quantum mechanics is not a classical theory. If you define realism that way, then quantum mechanics does not obey local realism. In particular, position and momentum do not have values until observed.

(3) That many-worlds theory somehow provides a way out of the quantum puzzles of locality and realism.

No, many-worlds theory does not, and cannot, explain anything.

John Bell passed away suddenly at the age of 62. He didn't know it, but he had been nominated for the Nobel Prize just a year earlier. - In a talk he gave in Geneva in January 1990, he said, I think you're stuck with the non-locality. I don't know any conception of locality which works with quantum mechanics. That was eight months before he died.
Yes, some believed that Bell deserved a Nobel for this, but the mainstream view, and the Nobel view, is that Bell was wrong. The 2022 prize was given for some Bell-related work, but the prize citation pointedly avoided giving Bell any credit for his goofy non-locality ideas.

My title refers to this movie clip.

This video is very disappointing. The channel had been very reliable and informative. I have learned a lot. But when you see a video claiming that all the textbooks and top experts are wrong, you probably should not believe it.

In this case, the video is rejecting mainstream physics that has been well accepted for a century, in pursuit of goofy ideas that cannot lead anywhere.

Thursday, December 18, 2025

Proving the Pythagorean Theorem

Here is a new paper with complicated proofs of the Pythagorean theorem. I don't know why anyone bothers, as plenty of proofs are very simple. Some on the Wikipedia page do not even require any words.

My favorite depends on the fact that in similar figures, all linear dimensions are proportional. Areas are proportional to the square of the linear dimensions.

Given a right triangle, drop a perpendicular from the right angle to the hypotenuse. This makes two new triangles, both similar to the original, and combining to make up the original.

Since the triangles are similar, the areas are proportional to the square of their hypotenuses.

The three hypotenuses are the three sides of the original triangle, and adding the areas of the two new triangles gives the area of the original, so adding the squares of the two legs must give the square of the hypotenuse.
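Written out, the argument is one line of algebra. Call the legs a and b and the hypotenuse c, and let k be the ratio of area to hypotenuse-squared, which is the same for all three similar triangles:

    k a^2 + k b^2 = k c^2 \qquad\Longrightarrow\qquad a^2 + b^2 = c^2 .

Dividing out the common factor k gives the theorem.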

I like this proof because it is so simple and direct. It does not rely on any tricky cancellations.

Monday, December 15, 2025

Are Particles Real, or Model Dependent?

I found this new video confusing:
Why particles might not exist | Sabine Hossenfelder, Hilary Lawson, Tim Maudlin

Sabine Hossenfelder, Hilary Lawson, and Tim Maudlin discuss the existence of particles, quantum field theory, and ultimate reality.

Are particles just an invention of the human mind?

From Democritus to Einstein, we have assumed the world is made of tiny building blocks of matter. But the more we’ve looked for them, the more they’ve disappeared. Our best theory now proposes the world is better described by ‘fields’ that don’t have the familiar properties of physical bits, things, or particles. Yet physicists still refer to particles, though few seem to agree on their nature. Some say they ‘approximately exist’ and others say that they don’t exist at all. Stranger still, there are ‘quasiparticles’, phenomena that we can treat as particles and enable us to solve equations, but which we know aren't fundamentally real.

They argued about whether a particle is a vibration in a field.

They also argued about how to think about physics, when there are mathematically equivalent descriptions of it.

11:10 - Even if it were true, we have that all the time in physics and we don't think they have to be equal. So, Lorentz had an understanding of spacetime where there's absolute simultaneity. Einstein got rid of it, but you can prove, in their applications, they make the same predictions. Nobody thinks they're the same theory. All right? There's different theories.
Actually I do think that they are the same theory. Most people did in the early 1900s, as it was called the Lorentz-Einstein theory (LET). Some said that Minkowski's theory was different, because it was based on a Lorentz-invariant spacetime geometry, but Einstein's was the same as Lorentz's. The main difference was that Einstein postulated the Michelson-Morley consequences.

Einstein's famous 1905 relativity paper has a whole section on simultaneity, but never says there is no absolute simultaneity. Here is how the section ends:

It is essential to have time defined by means of stationary clocks in the stationary system, and the time now defined being appropriate to the stationary system we call it “the time of the stationary system.”
The next section ends:
So we see that we cannot attach any absolute signification to the concept of simultaneity, but that two events which, viewed from a system of co-ordinates, are simultaneous, can no longer be looked upon as simultaneous events when envisaged from a system which is in motion relatively to that system.
So two events can be simultaneous in one frame, but not another. The same is true in Lorentz's theory, as motion makes time run more slowly.
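A worked example makes that concrete (standard Lorentz transformation formulas; the setup is mine). Take two events that are simultaneous in one frame, both at t = 0 but separated by a distance L along the x axis, and transform to a frame moving with speed v along x:

    t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad
    t'_1 = \gamma\left(0 - \frac{v \cdot 0}{c^2}\right) = 0, \qquad
    t'_2 = \gamma\left(0 - \frac{vL}{c^2}\right) = -\frac{\gamma v L}{c^2} \neq 0 .

The events are simultaneous in the first frame and not in the second, exactly as Einstein says, and the same transformation appears in Lorentz's theory.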

Even today, it is generally believed that the cosmic microwave background (CMB) defines an absolute simultaneity.

It often happens that there are mathematically equivalent descriptions, in which I do not see how one can be more real than the other.

If particles are defined as having definite positions, velocities, and trajectories, then they are certainly not real. Quantum mechanics only says that they look real when observations are made.

Thursday, December 11, 2025

Quadratic Gravity is Possible

Quanta magazine has an article on quadratic gravity.

I mentioned this before as an obvious alternative way to quantize gravity.

People have been saying for a long time that string theory is the only hope for quantizing gravity. That is completely false. For example, Brian Greene says in a new video:

0:00 String theory is the most potent and powerful approach that we have discovered as a species for blending general relativity and quantum mechanics.
He goes on to say that string theory is still viable as long as researchers want to work on it, and that is how science works.

Tuesday, December 9, 2025

Goedel Proved Hilbert Wrong

I wondered whether Goedel literally proved Hilbert wrong, as is commonly claimed. I doubted it.

David Hilbert didn’t leave a single, tidy quote where he explicitly says, “I demand completeness and consistency provable from within,” in those exact words. His vision for the formalization of mathematics, known as Hilbert’s Program, emerged through his lectures, papers, and talks—particularly in the 1920s — and is pieced together from these sources. I’ll show you where these demands are clearest, based on his own words and how contemporaries understood him, without inventing anything he didn’t say.

Completeness

Hilbert’s call for a complete axiomatic system — where all true statements in a mathematical domain (like arithmetic) can be derived from the axioms—shines through in his 1900 address to the International Congress of Mathematicians in Paris. There, he posed his famous 23 problems, and the second problem implicitly reflects this goal. He asked for a proof of the consistency of arithmetic’s axioms but tied it to a broader vision of a “definite system” that settles all questions. He wrote:

“By the solution of these problems, a definitive clarification of the foundations of analysis and arithmetic will be obtained, and the system of axioms will be so complete that all remaining questions can be decided by purely logical deduction from them.”
(From Hilbert’s 1900 address, as translated in various editions, e.g., Bulletin of the AMS, 1902). The term “complete” here doesn’t mean Gödel-style completeness yet (that formal definition came later), but Hilbert’s intent was a system where no mathematical truth escapes the axioms’ reach. In his 1922 lecture “The Logical Foundations of Mathematics” (published in From Kant to Hilbert, ed. Ewald), he’s more explicit:
“The chief requirement of the theory of axioms must be that it makes it possible to decide, by means of a finite number of logical inferences, whether any given statement formulated in the language of the theory is true or false.”
This is Hilbert pushing for what we’d now call decidability — every statement provable or disprovable within the system—which implies completeness. He wanted math to be a closed, self-sufficient machine.

Consistency Provable from Within

Hilbert’s demand that consistency be provable “from within” the system is trickier—he didn’t phrase it exactly that way, but it’s how his program was framed, and Gödel’s response hinges on this interpretation. In his 1925 paper “On the Infinite” (Über das Unendliche, Mathematische Annalen), Hilbert outlines his method, later dubbed “finitism,” to secure mathematics:

“We must establish the consistency of the axioms by means of a careful description and analysis of the methods actually used in mathematics… The goal is to establish certainty by finitary means.”
He proposed proving consistency using only “finitary” reasoning — simple, concrete logic he believed was unassailable. Crucially, in his 1928 book Grundzüge der theoretischen Logik (with Wilhelm Ackermann), he posed the “consistency problem” as:
“To prove by finite methods that the axioms do not lead to a contradiction.”
Hilbert didn’t say “from within” verbatim, but his finitist approach aimed to use tools internal to the system’s basic logic, avoiding infinite or external assumptions. He saw this as a proof that the system (e.g., for arithmetic) could vouch for itself. At the 1928 International Congress, he reiterated:
“The development of mathematical science must proceed in such a way that… its freedom from contradiction is proved by methods inherent in the nature of the science itself.”
How Gödel Saw It

Gödel’s 1931 paper (On Formally Undecidable Propositions, Monatshefte für Mathematik) directly engages Hilbert’s goals. He notes that Hilbert’s Program seeks a system where “all true mathematical statements can be derived” (completeness) and where “the freedom from contradiction can be proved by finitary means” (consistency). Gödel then shows this is impossible for arithmetic, citing Hilbert’s own framework as the target.

Why “From Within”?

Hilbert’s insistence on finitary methods — logic he thought was intrinsic to math — implied a self-contained proof. Later scholars (e.g., von Neumann, in letters from the 1930s, and modern texts like Gödel’s Theorem: An Incomplete Guide by Torkel Franzén) interpret this as “within the system” because Hilbert rejected leaning on stronger, external systems, which would undermine his foundationalist aim.

The Evidence

Completeness: 1900 address, 1922 lecture—Hilbert wants all truths decidable from axioms.
Consistency from Within: 1925 paper, 1928 book/talk—proof via finitary, internal means.
Hilbert’s words don’t always use modern terms, but his intent, as read by peers and successors, aligns with these demands. Gödel’s theorems hit precisely because they showed these twin pillars — total coverage and self-proved safety — couldn’t stand together. I can’t paste a single smoking-gun quote, but the pattern across these works is what mathematicians like Gödel and historians like Constance Reid (Hilbert, 1970) point to as the program’s heart.

Okay, I guess Hilbert was wrong, but people wrongly conclude that mathematical axiomatization does not work. In fact, mathematics can be axiomatized as ZFC, where ZFC is consistent and powerful enough to prove all the theorems of math. It just isn't strong enough to decide every question, or to prove its own consistency from within.

It was not wrong to try to axiomatize mathematics, not wrong to demand finitary reasoning, and not wrong to want consistency. If he really wanted to prove consistency from within the system, that is not possible, and is not really desirable. Hilbert's real error was to think it is desirable, if he really thought that.

The reason is that an inconsistent system can prove anything. If a system has a contradiction then it can prove 1=2 or anything else. So it does not make any sense for a system to prove itself consistent. It is like asking someone if he is honest. A liar will say that he is honest, and you learn nothing. To assess someone's honesty, you have to ask someone else.
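The logical step behind "a contradiction proves anything" is the classical ex falso argument. Written out for an arbitrary statement Q (standard propositional logic, nothing specific to Hilbert or Goedel):

    P \;\Rightarrow\; P \lor Q \quad\text{(or-introduction)}, \qquad
    P \lor Q,\ \neg P \;\Rightarrow\; Q \quad\text{(disjunctive syllogism)} .

So a system containing both P and not-P proves every Q, including 1 = 2, which is why a consistency proof carried out inside the system itself carries no weight.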

If it is true that Hilbert wanted a math system to prove itself consistent, then he was wrong because that is a nonsensical thing to want. That should have been obvious long before Goedel.

People hear all this and figure that our axiom systems are not strong enough to prove what we want. But actually, Harvey Friedman's grand conjecture says that all our big math theorems can be proved in much weaker axiom systems than ZFC, which is what is usually used.

Monday, December 1, 2025

Explanation of Newtonian Time

Matt Farr posted a new paper on Time in Classical Physics:
Wigner (1995, 334) describes how Newton’s “most important” achievement was the division of the world into “Initial Conditions and Laws of Nature”, noting that “[b]efore Newton there was no sharp separation between the two concepts. […] After Newton’s time the sharp separation of initial conditions and laws of nature was taken for granted and rarely even mentioned.” This is the central feature of the Newtonian schema.
Some people are so locked into this view that they say that indeterminism and free will are inconceivable. When you make a choice from a restaurant menu, it has to be determined by the initial conditions, or else the laws of physics are violated. No, that is just the Newtonian schema.

For example, Sabine Hossenfelder argues:

And according to New Scientist, the superdeterminist view naturally raises the possibility that the laws of physics are at odds with unlimited free will. What are we to make of this? For one thing, this free will assumption in quantum physics, despite its name, has nothing to do with what we normally refer to as free will, in none of the definitions that philosophers like to use.

Regardless of what you think quantum physics exactly means, the laws of physics are always at odds with unlimited free will. This is why they're called laws. If you jump off a bridge, you'll fall down. And no amount of free will is going to make you fall up.

She is saying that the Newtonian schema leaves no room for free will. If your initial conditions have you jumping off a bridge, the laws of physics determine your fall, and free will cannot do anything.

I think she is alluding to philosophers who try to define free will as being compatible with all your choices being determined before you were born. To those philosophers, free will is just in your imagination, and has nothing to do with the laws of physics or any actual choices you make. Most philosophers have such a nihilist view.

Yes, the Newtonian schema assumes that the past determines the future. That is not a law of physics. It is just an assumption. It works well, approximately, in a great many cases. Not in all cases, if you believe in free will.

Some people also argue that the future can determine the past, in the same way that the past determines the future.

The above paper looks at what Newton said about time, and contrasts it with relativity and Lagrangian mechanics. Everyone says Newtonian time is more intuitive than relativistic time, but I am not sure. I have no intuition for anything going faster than light, as Newtonian time allows.

Lagrangian mechanics is another story. Time is just another variable, and it is not so clear how causality works. The paper tries to make sense of it.

New Scientist just released a video:

What Is Reality? Does Quantum Physics Have The Answer?

Over the past century, quantum physics has transformed science and reshaped our understanding of reality. In this special compilation from the New Scientist archive, we trace that evolution, from the birth of quantum mechanics to today’s lab-made “mini universes.”

We explore how quantum ideas revolutionised technology, how they continue to inspire new forms of creativity, and how recent breakthroughs are pushing the limits of what we can understand.

Most of it is not too bad, but it presents an expert physicist saying, about interpretations of quantum mechanics:
5:02 - I think the one that is probably most compelling to the majority of physicists is called the many worlds interpretation. It's compelling because it says that fundamentally we are also in superposition. Every possibility has a realization in different worlds.
No, this is crazy stuff. I hope it is not true that a majority of physicists find this nuttiness compelling.

The Schroedinger Cat was once an example of silly thinking. Now this man is compelled to believe in many-worlds because he wants to believe that he is just like a Schroedinger cat.
