Friday, December 8, 2023

Harvard/MIT claims Quantum Error Correction

Scott Aaronson announces:
The biggest talk at Q2B this year was yesterday’s announcement, by a Harvard/MIT/QuEra team led by Misha Lukin and Vlad Vuletic, to have demonstrated “useful” quantum error-correction, for some definition of “useful,” in neutral atoms (see here for the Nature paper). To drill down a bit into what they did:

They ran experiments with up to 280 physical qubits, which simulated up to 48 logical qubits. ...

They don’t claim to have demonstrated quantum supremacy with their logical qubits—i.e., nothing that’s too hard to simulate using a classical computer.

Assuming the result stands, I think it’s plausibly the top experimental quantum computing advance of 2023 (coming in just under the deadline!). We clearly still have a long way to go until “actually useful” fault-tolerant QC, which might require thousands of logical qubits and millions of logical gates. But this is already beyond what I expected to be done this year, and (to use the AI doomers’ lingo) it “moves my timelines forward” for quantum fault-tolerance. It should now be possible, among other milestones, to perform the first demonstrations of Shor’s factoring algorithm with logically encoded qubits (though still to factor tiny numbers, of course). I’m slightly curious to see how Gil Kalai and the other quantum computing skeptics wiggle their way out now, though I’m absolutely certain they’ll find a way! Anyway, huge congratulations to the Harvard/MIT/QuEra team for their achievement.

What, is it my job to critique these experiments? He says "assuming the result stands", so he is not so sure himself. This is all I know.
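For concreteness, here is what a tiny Shor demonstration amounts to. Only the period-finding step needs a quantum computer; this sketch (the function name and the choices N = 15, a = 7 are just for illustration) finds the period by brute force and then does the classical post-processing.

```python
# Hypothetical sketch of Shor's factoring recipe for a tiny number, the kind
# of demonstration logical qubits might first be used for. The quantum
# computer's only job is to find the period r of a^x mod N; here we find it
# by brute-force search instead.
from math import gcd

def factor_via_period(N, a):
    """Factor N using the period of a^x mod N (found classically here)."""
    if gcd(a, N) != 1:
        return gcd(a, N), N // gcd(a, N)   # a already shares a factor with N
    # Find the period r: smallest r > 0 with a^r = 1 (mod N)
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    if r % 2 == 1:
        return None                        # odd period: try another a
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None                        # trivial square root: try another a
    return gcd(x - 1, N), gcd(x + 1, N)

print(factor_via_period(15, 7))  # period r = 4, factors (3, 5)
```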

Thursday, December 7, 2023

IBM may soon have one Logical Qubit

SciAm reports:
IBM has unveiled the first quantum computer with more than 1,000 qubits — the equivalent of the digital bits in an ordinary computer. But the company says it will now shift gears and focus on making its machines more error-resistant rather than larger.

For years, IBM has been following a quantum-computing road map that roughly doubled the number of qubits every year. The chip unveiled on 4 December, called Condor, has 1,121 superconducting qubits arranged in a honeycomb pattern. ...

Researchers have generally said that state-of-the-art error-correction techniques will require more than 1,000 physical qubits for each logical qubit. A machine that can do useful computations would then need to have millions of physical qubits. ...

A new IBM road map for its quantum research, unveiled today, sees it reaching useful computations — such as simulating the workings of catalyst molecules — by decade’s end.

Okay, one logical qubit, as soon as IBM gets that error correction figured out. And something useful by the year 2029.
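The arithmetic behind those estimates is simple; as a rough sketch, taking the round figures quoted above as assumptions:

```python
# Back-of-envelope qubit overhead, using the rough figures from the article.
physical_per_logical = 1_000   # quoted estimate for state-of-the-art codes
logical_needed = 1_000         # rough order of magnitude for useful work
physical_needed = physical_per_logical * logical_needed
print(f"{physical_needed:,} physical qubits needed")   # 1,000,000 — "millions"

condor_qubits = 1_121          # IBM's Condor chip
print(f"Condor has enough for about "
      f"{condor_qubits / physical_per_logical:.1f} logical qubits")
```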

Monday, December 4, 2023

Philosophy of Quantum Mechanics

David Wallace writes Philosophy of Quantum Mechanics:
This is a general introduction to and review of the philosophy of quantum mechanics, aimed at readers with a physics background and assuming no prior exposure to philosophy. It is a draft version of an article to appear in the Oxford Research Encyclopedia of Physics.
He summarizes the interpretations, and concludes:
Among physicists, the (more operationalist versions of the) probability-based approach, and the Everett interpretation, are roughly as popular as one another, with different sub-communities having different preferences. (The modificatory strategies are much less popular among physicists, although they are probably the most common choice among philosophers of physics.) But more popular than either is the ‘shut-up-and-calculate’ approach [167]: the view that we should not worry about these issues and should get on with applying quantum mechanics to concrete problems.

In its place, there is much to be said for ‘shut up and calculate’. Not everyone needs to be interested in the interpretation of quantum mechanics; insofar as a physicist working on, say, solar neutrinos or superfluidity can apply the quantum formalism without caring about its interpretation, they should go right ahead — just as a biochemist may be able to ignore quantum mechanics entirely, or a behavioral ecologist may be able to ignore biochemistry.

There is some overlap here. The theory described in textbooks could be called probability-based, or shut up and calculate.

Wallace is sympathetic to the Everett many-worlds interpretation, but admits:

More productive criticisms of the Everett interpretation have mostly fallen into two classes, known collectively as the ‘preferred basis problem’ and the ‘probability problem’. ...

It is fairly widely accepted that decoherence provides a solution to the preferred-basis problem. ...

The probability problem remains largely intact when decoherence is considered, and has been the main locus of controversy about the Everett interpretation in 21st-century discussions.

I am coming around to the view that there are really just two interpretations: QM with and without the probabilities. With the probabilities, you can make predictions and do experiments. Without, you get many-worlds, and a lot of philosophers and physicists love it, but I don't see what it has to do with the real world.

Friday, December 1, 2023

Carroll has new course on Many Worlds

Physicist Sean M. Carroll announces a new course:
The Many Hidden Worlds of Quantum Mechanics

One universe is not enough. Learn about the Many-Worlds Interpretation of quantum mechanics in this exciting course taught by a renowned expert. ...

Professor Carroll explains how quantum mechanics predicts the existence of a large number of universes parallel to our own. ...

Use the concepts developed in the course so far to learn how physicist Hugh Everett arrived at a bold new approach to quantum mechanics. Called the Many-Worlds Interpretation, it holds that the wave function represents reality and evolves smoothly into multiple distinct worlds when a quantum measurement takes place. Contrast Everett’s straightforward idea with the opaque Copenhagen Interpretation. ...

Many-Worlds theorist David Deutsch helped pioneer quantum computing, which he argues is an outgrowth of the Many-Worlds Interpretation. ...

By contrast, Many-Worlds is deterministic. We can derive an understanding of probability by thinking about where we are in the quantum wave function. ...

Many-Worlds and competing theories on the foundations of quantum mechanics may seem essential for our understanding of reality, but they were long ignored by no-nonsense practicing physicists. Close the course by witnessing how the tide is turning, as it becomes increasingly clear that the foundational issues are likely the key to unlocking the outstanding mysteries of the cosmos.

This is all nonsense. There is no practical value to Many-Worlds, or to the other "competing theories" he presents. They do not even make sense as scientific theories. This course is crackpot stuff.

I have posted many times with details on why Many-worlds is nonsense. Carroll giving a course on this is like giving a course on Astrology. The students should be warned that they will not learn anything worthwhile.

His Thanksgiving post was a plug for his latest book, saying that he is thankful for quanta.

Thursday, November 23, 2023

Vatican Astronomer explains Cosmology

A Vatican astronomer writes a paper on God and the Big-Bang: Past and Modern Debates Between Science and Theology. Nice to see a modern attempt to reconcile Theology and Cosmology.
The Christian concept of creation is, instead, completely different from that of the God-demiurge of scientists. First, God creates from a state where before there was really nothing (creatio ex nihilo), i.e., neither initial energy nor physical laws. Indeed, he creates both energy and physical laws from nothing and keeps them in existence. God creates the world and all its creatures into being.
I had to look up demiurge:
In the Platonic, Neopythagorean, Middle Platonic, and Neoplatonic schools of philosophy, the demiurge (/ˈdɛmi.ɜːrdʒ/) (sometimes spelled as demiurg) is an artisan-like figure responsible for fashioning and maintaining the physical universe. The Gnostics adopted the term demiurge.
I question whether God had to create energy. When masses become gravitationally bound, they lose potential energy, and maybe no mass was bound at the Big Bang. It is possible that the net energy of the universe is zero, because all the energy we see from starlight is balanced by the negative energy of gravitational wells.
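This is only a back-of-envelope idea, but it can be checked roughly. Using a Newtonian uniform-sphere formula and round figures for the mass and radius of the observable universe (both assumed for illustration), the gravitational binding energy does come out within an order of magnitude of the rest-mass energy:

```python
# Rough check (illustrative numbers, not a rigorous GR calculation):
# is the gravitational binding energy of the observable universe comparable
# in magnitude to its rest-mass energy, as the zero-net-energy idea requires?
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 3.0e8            # m/s
M = 1.5e53           # kg, rough mass of the observable universe (assumed)
R = 4.4e26           # m, rough radius of the observable universe (assumed)

rest_energy = M * c**2
binding_energy = 3 / 5 * G * M**2 / R   # uniform-sphere Newtonian formula

# Ratio comes out roughly 0.15: within an order of magnitude of 1,
# so the idea is at least not numerically absurd.
print(f"|U| / E_rest ~ {binding_energy / rest_energy:.2f}")
```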
Today we see that the ΛCDM standard cosmological model works quite well with observational data; however, as we have explained, it is necessary to use ad hoc “elements,” such as dark matter and dark energy, to explain some otherwise unexplained phenomena. In this sense one could think, with all the reservations and cautions of the case, that there could be an analogy between the theory of the epicycles of Ptolemy’s geocentric system, invented to explain the motion of the planets, and the hypotheses of dark matter and dark energy, introduced to adapt the cosmological model to otherwise unexplained phenomena. In other words, it must be said that, despite all the progress that has been made in science, and in particular in current cosmology, the myth of a “very precise” science, without any shadow, must certainly be debunked. The truth, however, is that even the scientific models that we possess today and which we use to describe nature have limitations, and therefore do not possess to any degree the character of infallibility that a new dogmatic “scientism” would like to attribute to them. Since ancient times there has always existed a tight connection between cosmology and religion.

In ancient cultures, starting from the harmony and order existing in the visible universe – which at that time was simply the starry sky – people have always tried to hypothesize the existence of an “architect” God which is the cause of this harmony. Let us remember, one out of many, the so-called “cosmological proofs of the existence of God,” where from the “contingency” of the world philosophical arguments deduced the necessity of the existence of a first cause, God, Who is also the guarantors of Universe harmony. However, the modern conflicts – for example, the “Galileo case” and the subsequent fracture between science and theology – lead us to think that, following Lemaître, the right approach, in the science-theology debate, is the separation between the theological and scientific planes or magisteria. But this does not prevent a mind, enlightened by the Grace of God as Pius XII was mentioning in his 1951 speech, from seeing in the harmony and order of the universe a beauty that reflects the imprint of the Creator and the Love with which God created and wove the universe. However, this is not proof of the existence of God, but rather an a posteriori observation, valid only for those who are either already believers or accepting God’s Grace to believe.

You may think that this is ridiculous, but it is not any more ridiculous than many-worlds theory, determinism, and a lot of other things that modern scientists accept.

Monday, November 20, 2023

Mathematics is Truth

New podcast: Why is Mathematics True and Beautiful? | Episode 2201 | Closer To Truth
Does mathematics have two transcendent attributes: truth and beauty? What makes math true? What makes math beautiful? Are there different kinds of mathematical existence? How can math combine idealized perfection and explanatory simplicity?

Featuring interviews with Robbert Dijkgraaf, Edward Witten, Max Tegmark, Sabine Hossenfelder, and Jim Holt.

Since the subject is Mathematics, it would have been nice if they had interviewed some mathematicians. Yes, Witten has some very impressive mathematical accomplishments, but he was schooled in Physics and thinks like a physicist.

Inevitably these non-mathematicians say that Math is not truth because it depends on axioms, or because Goedel proved that truth is not provable. Their description of Goedel's work is usually something that Goedel himself would disagree with.

To show that math can be false because of faulty axioms, they point out that dropping one of Euclid's axioms gives rise to non-euclidean geometry, and that was actually useful in general relativity.

No, Math gives absolute truth, and these arguments miss the point. Euclidean geometry is just as true as it ever was. General relativity uses Euclidean geometry to model the spatial tangent space. Yes, different axioms give different theorems, and all those theorems are true for those axioms.
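A simple illustration of the point: on the surface of a sphere, where the parallel postulate fails, a triangle's angles sum to more than 180 degrees, and Girard's theorem says the excess equals the triangle's area on a unit sphere. Both geometries are true; they just follow from different axioms.

```python
# Girard's theorem on the unit sphere: angle sum minus pi equals the area.
# Octant triangle: vertices at (1,0,0), (0,1,0), (0,0,1); each angle is 90°.
from math import pi

angles = [pi / 2, pi / 2, pi / 2]
excess = sum(angles) - pi          # spherical excess: pi/2, not zero
area = excess                      # on a unit sphere, area = excess

print(sum(angles) > pi)            # True: non-Euclidean, yet consistent
print(area)                        # pi/2, the area of one octant
```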

Another podcast in this series addresses Why the “Unreasonable Effectiveness” of Mathematics? | Episode 2203 | Closer To Truth. Again, the interviews are with string theorists and physicists, not mathematicians.

Thursday, November 16, 2023

The Return of the Determinists

Physicist Lawrence M. Krauss writes:
The Return of the Creationists

How can we expect political sense or reason from people who cannot distinguish empirical reality from ancient myth?

I have spent so much of the past few years publicly bemoaning the anti-free-speech craziness — driven mostly by the Left — at American universities and scientific institutions, that I had almost forgotten that, in the not-too-distant past, the most severe threat to rational discourse and policy came from religious fundamentalists.

The pendulum is not yet swinging back, but there are worrying signs that it might. Last month Mike Johnson was elected Speaker of the US House of Representatives, a role that puts him third in line to the presidency. That Johnson’s political and social views are extremely right wing, and that he was a strong supporter of Donald Trump’s efforts to invalidate the results of the 2020 election are well known — but, even more worryingly, he espouses a fundamentalist Biblical literalism, which informs all his views on policy issues.

I remember scientists getting hysterical about Creationism, but now that the dust has settled, where was the harm?

Why would he be worried that Johnson wanted a fair election? Trump merely used lawful processes to challenge the election procedures that were rigged against him.

Johnson’s ideological intransigence may hamper an effective response to foreign conflicts, including those in Israel and Ukraine, and the government’s ability to meet looming deadlines on borrowing. And the fundamental problems that will more profoundly affect American health, welfare, and national security in this century all call for technological solutions based on the real world, not an imagined one. Can a Speaker of the House who treats the Bible as a scientific document rationally address the real challenges the United States faces today?

We can only hope that he heeds the admonition of St. Augustine, who wrote two millennia ago, “We must be on our guard against giving interpretations which are hazardous or opposed to science, and so exposing the word of God to the ridicule of unbelievers.”

It was more like 1600 years ago. St. Augustine was a big defender of free will, among other things.

But Krauss has recently come out as firmly in favor of determinism and firmly against all forms of free will.

If you do not believe in free will, then what is the point of trying to analyze how someone else makes decisions? No one makes any decisions. Everything is determined from conditions eons ago, and there is no point getting excited about it. Nothing you or anyone decides will make any difference on anything.

Krauss says someone who believes in the Bible cannot rationally face today's challenges. I say that anyone who believes in determinism is much worse.

A determinist cannot possibly have a rational opinion about anything. He is just a preprogrammed bot, like ChatGPT. Yes, he can impress us with his expertise, but he is incapable of human judgments.

I do not know how much Johnson accepts Biblical dogma. But Krauss accepts the much more pernicious dogma of determinism.

Whenever these physicists and rationalists talk politics, they don't sound at all rational to me. They are all Trump-haters and anti-Republican, but they can never articulate any way in which Pres. Biden has been better than Pres. Trump.

For example, Krauss says his big issue has been "anti-free-speech craziness". It is the Biden administration that got a judge to issue a gag order against Trump publicly arguing for his innocence. It is currently under appeal. Trump faces five trials, and they are nearly all for him making statements that have always been considered to be within his free speech rights. So if free speech is your issue, and you are rational, then you will almost certainly support Trump.

The current trial is for Trump exaggerating some property values. But Trump has a free speech right to exaggerate all he wants, unless he is cheating someone out of money, and no one has alleged that. The next trial will be for having a business record of paying a lawyer without detailing why. Again, he has a free speech right to abbreviate his records all he wants, unless there is some sort of illegal fraud going on, and no one alleges that. The third trial will be for Trump saying that the election was stolen, and the fourth trial will be for Trump asking Georgia officials to do a recount. The fifth trial will be for retaining copies of White House documents.

Anyone who believes in free speech must side with Trump. Even the left-wing ACLU has sided with Trump against the gag order.

But Krauss says he has no free will, so he could not support Trump even if he wanted to.

Monday, November 13, 2023

The Death of Supersymmetry

Here is a new paper from a supersymmetry enthusiast:
Half a century has now passed since the discovery of supersymmetry. During this time the subject has developed enormously, with stupendous advances on many fronts, some of which are also documented in this book. Supersymmetry has been a major driving force of developments in mathematical physics and pure mathematics. So it is definitely here to stay! Nevertheless, we now (in 2023) have to face up to the fact that supersymmetry, at least in the form championed over many years, is off the table as a realistic option for real life particle physics. 15 years of LHC searches have not produced a shred of evidence for superpartners of any kind. Quite to the contrary, the integrated results from LHC strongly indicate that the SM could happily live up to the Planck scale more or less as is, and without supersymmetry or other major modifications.
It is funny how physicists can persist with ideas that have no hope of finding any connection to the real world.

So what keeps them going? A misguided notion that they are following an Einsteinian ideal.

Independently of whether the ideas sketched above are on the right track or not, I remain attached to the Einsteinian point of view that we should try to understand and explain first of all our universe and our low energy world, and that in the end there should emerge a more or less unique answer. I believe that 50 years of supersymmetry have brought us a wee bit closer to this goal, though not as close as many would have wished. Of course, this point of view runs counter to currently prevalent views according to which the only way out of the vacuum dilemma of string theory is the multiverse. But if Nature must pick the ‘right’ answer at random from a huge (> 10^272,000 ?) number of possibilities, I see no hope that we would ever be able to confirm or refute such a theory.
I don't see how supersymmetry could be getting us closer to any goal, or that the multiverse could be a way out of anything.
Already in 1929, and in connection with his first attempts at unification, Albert Einstein published an article in which he states with wonderful and characteristic lucidity what the criteria should be of a ‘good’ unified theory: (1) to describe as far as possible all phenomena and their inherent links, and (2) to do so on the basis of a minimal number of assumptions and logically independent basic concepts. The second of these goals, also known as the principle of Occam’s razor, he refers to as “logical unity” (“logische Einheitlichkeit”), and goes on to say: “Roughly but truthfully, one might say: we not only want to understand how Nature works, but we are also after the perhaps utopian and presumptuous goal of understanding why Nature is the way it is, and not otherwise.”
This is why I wrote an anti-Einstein book. Einstein's views were bad enough, but his followers quote him to justify absurd research programs.

Update: Sean M. Carroll made some comments in his recent AMA podcast, similar to this paper. That is, supersymmetry was invented to solve certain technical theoretical mysteries in high-energy physics. Had that been valid, supersymmetric particles would have been discovered at the LHC collider. The LHC determined that no such particle exists. However, he insists that supersymmetry is not dead, because theorists still study it, and because there could be supersymmetric particles that are beyond the range of what anyone can observe.

Wednesday, November 8, 2023

Quantum Mechanics said to be Hilariously Ill-defined

Physicist Sean M. Carroll on his monthly AMA podcast:
Q. Can you talk about the future of many-worlds? What would have to be true, or what milestones would have to be achieved for the majority of the physics community to adopt many-worlds as a proper model of foundational physics?

A. I think there's two things going on, one is already happening. Which is, you just need to appreciate the need for a proper model of foundational physics, a proper model of quantum physics in particular. You know we have been muddling along with the Copenhagen Interpretation, which is hilariously ill-defined for a very long time now, but as technology improves, and as physicists are paying more attention to truly quantum phenomena, especially entanglement and measurement, and so forth, they are becoming easier to convince that we need to get quantum mechanics right. So whether it is many-worlds or something else, first we need to convince physicists that it is worth spending the effort to think carefully about the foundations of quantum mechanics.

He goes on to say the second thing, which is that someone has to find some practical utility to many-worlds, and no one has found any yet.

I do not see how a respectable professor can say anything so ridiculous.

QM is not a new or immature theory. Its worldwide impact is about a trillion dollars a year, counting all the semiconductors, lasers, liquid crystals, etc. Those products are built with an understanding of the basic science from textbooks on the Copenhagen interpretation of QM.

Physicists need to be convinced to get it right? What about all those people invested in that trillion dollar economy? Don't they want to get it right?

Carroll is telling them that they need to give up their theory for some many-worlds nonsense that has never shown any practical utility whatsoever.

Got that? On the one hand, a theory worth a trillion dollars a year. On the other, one worth zero. And Carroll is trying to get people to jump to the latter theory.

This is like saying: What would it take for people to stop driving cars, and using wormholes for transportation instead?

And the eminent professor answering: We need two things, to convince everyone that cars are hilariously ill-defined, and to find a way to use wormholes for transportation.

In a sense, he's right, in that people would use wormholes if they became more useful than cars. But there is no chance of that ever happening.

Here is a new article on many-worlds (MWI) being unscientific:

We show, in fact, that a whole class of theories -- of which MWI is a prime example -- fails to satisfy some basic tenets of science which we call facts about natural science. The problem of approaches the likes of MWI is that, in order to reproduce the observed empirical evidence about any concrete quantum measurement outcome, they require as a tacit assumption that the theory does in fact apply to an arbitrarily large range of phenomena, and ultimately to all phenomena. ...

The unblemished success of the theory in such ample range of phenomena is really staggering. Precisely because of that, it is suicidal to leave our best comprehension of such rounding success in hands of any interpretation that, due to its soaring ambition, is incapable of building itself on any concrete empirical ground, and therefore cannot but fall apart sooner or later. ...

In the specific case of MWI, there seems to be an almost religious sentiment that animates its supporters by believing that everything that exists is a single, “simple”, immutable, elegant mathematical object, which supposedly lives in an abstract Hilbert space. In this view, everything we observe and experience, including the space in which we move and live, would just be emerging from the only “real” entity – the universal wave function [44]. With the arguments exposed in this article, we then join Heisenberg here who, to similar claims put forward by Felix Bloch, once simply replied: “Nonsense, space is blue and birds fly through it.”

I do believe that many-worlds is completely contrary to everything we know about science. Believing in it is more backwards than believing in astrology or witchcraft.

Monday, November 6, 2023

Minkowski never mentioned the Erlangen Program

Here is an example of an Einstein historian who looked at the original documents, found that Poincare and Minkowski discovered spacetime, and still found strange reasons for discrediting them.

Thibault Damour wrote in a 2008 paper:

This contribution tries to highlight the importance of Minkowski's ``Raum und Zeit'' lecture in a ``negative'' way, where negative is taken in the photographic sense of reversing lights and shades. Indeed, we focus on the ``shades'' of Minkowski's text, i.e. what is missing, or misunderstood. In particular, we focus on two issues: (i) why are Poincaré's pioneering contributions to four-dimensional geometry not quoted by Minkowski (while he abundantly quoted them a few months before the Cologne lecture)?, and (ii) did Minkowski fully grasp the physical (and existential) meaning of ``time'' within spacetime? We think that this ``negative'' approach (and the contrast between Poincaré's and Minkowski's attitudes towards physics) allows one to better grasp the boldness of the revolutionary step taken by Minkowski in his Cologne lecture.
He finds that Minkowski got crucial relativity and spacetime ideas from Poincare, and credited him in 1907 papers, but not in his famous 1908 paper.
Another odd omission:
I therefore find rather surprising that Minkowski never points out the link between his group-approach to a 4-dimensional geometry and Klein’s famous Erlangen programme (which consisted in defining a geometry by its symmetry group, rather than by the ‘objects’ on which it acts). This is all the more surprising since Klein was the organizer of the mathematics section in which Minkowski was invited to speak. Knowing also all what Minkowski owed to Felix Klein, I would have expected Minkowski to add at least a passing allusion to his Erlangen Programme. For instance, Pauli’s famous article (and book) on Relativity contains a section (§8) on how Relativity fits within Klein’s “Erlangen Programme” [17].
To this day, the flat non-Euclidean geometry of Minkowski space is not appreciated. It is a wonder that Poincare does not mention it either.

Briefly, the Erlangen Program was an 1872 plan to unify the study of non-euclidean geometries by symmetry groups or invariants. Euclidean geometry has the symmetries of rotations, translations, and reflections, and ordinary distance is invariant. Similarly other geometries can be described by symmetries and invariants. Spacetime fits that program, with the Lorentz group being the symmetries, and the metric dx² + dy² + dz² - dt² being the invariant. While Euclidean geometry was defined by Euclid's Elements, non-euclidean geometry is based on the Erlangen program.
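The Erlangen point can be checked numerically: a Lorentz boost is a hyperbolic rotation, and it leaves the Minkowski interval unchanged, just as an ordinary rotation leaves Euclidean distance unchanged. (The particular event and rapidity below are arbitrary.)

```python
# Erlangen-style check: the Minkowski metric is the invariant of the Lorentz
# group, just as distance is the invariant of Euclidean rotations.
from math import cosh, sinh

def interval(t, x, y, z):
    return x**2 + y**2 + z**2 - t**2   # sign convention as in the text

def boost_x(t, x, y, z, rapidity):
    """Lorentz boost along x with the given rapidity (a hyperbolic rotation)."""
    return (t * cosh(rapidity) - x * sinh(rapidity),
            x * cosh(rapidity) - t * sinh(rapidity),
            y, z)

event = (1.0, 0.5, 2.0, -3.0)
boosted = boost_x(*event, rapidity=0.7)
print(abs(interval(*event) - interval(*boosted)) < 1e-12)   # True: invariant
```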

I don't know why Minkowski did not mention the Erlangen program. More curious is why most of the relativity textbooks of the next century do not mention it either. I think physicists have a hostility towards geometry, and towards the mathematicians who appreciate geometry.

Damour concludes by attacking Poincare:

To conclude these somewhat disconnected remarks, let me try to characterize the greatness of the conceptual leap achieved by Minkowski in his Raum und Zeit lecture by contrasting it with the attitude of Poincaré. We recalled above that, at the purely technical level, several (though certainly not all) of the key mathematical structures of “Minkowski spacetime” were already, explicitly or implicitly, contained in Poincare’s Rendiconti paper. But, what made the difference was that Minkowski had the boldness of realizing and publicizing the revolutionary aspects of these structures.
Then he goes on to explain a section of Poincare's 1905 paper where he makes an analogy, saying his new relativity theory is replacing Lorentz's analogously to the way that Copernicus replaced Ptolemy.

The analogy is that in the Ptolemy theory, the Earth's year appears coincidentally in the orbits of the Sun and other planets. With Copernicus, the number has a common origin in the orbit of the Earth. Likewise, in Lorentz, gravity and electromagnetism coincidentally propagate with the speed of light. In Poincare's spacetime theory, the speeds have a common origin in the geometry of spacetime.

And Damour complains that this is not bold or revolutionary!

This citation clearly shows the deeply conservative bend of Poincare in physics. He is happy to contribute to the Lorentz-Ptolemy programme, and he steps back from any move that might shake its kinematical foundations. Minkowski, by contrast, had a lot of ambition and self-confidence (not to say chutzpah), and was keen on breaking new ground in mathematical physics. Without fully understanding what Einstein had done, nor (at least initially) what Poincare had already achieved, he was lucky to unearth elegant and deep mathematical structures that were implicitly contained in their (and others’) work, and had the boldness to embrace with enthusiasm their revolutionary character. One must certainly admire him for this achievement, though one might regret his unfairness towards Poincare.
This is crazy stuff. Minkowski obviously understood everything Einstein did, and much preferred Poincare's geometrical spacetime theory. Poincare said he had a theory as revolutionary as Copernicus. Einstein made no such claim, and only said he had an elaboration of Lorentz's theory. Einstein never goes against Lorentz the way Poincare does.

I would criticize Minkowski for not properly crediting Poincare and the Erlangen program, except that he died about a year later. Maybe he would have credited them better if he had lived.

Thursday, November 2, 2023

Bell inequalities are still poorly understood

Nicolas Gisin writes in a new paper:
On the conceptual side, the violation of Bell inequalities dramatically revolutionized our world-view. Interestingly, Newton’s theory of gravity was also non-local, even signaling.... In contrast to Newton’s non-locality, quantum non-locality is here to stay; the experimental evidence is clear on that point.
He complains that the Nobel committee does not agree with him:
Despite these beautiful experiments and the intellectually fascinating discoveries, Bell inequalities remained dismissed and poorly understood. Even to this day, the clear terminology non-local (equivalently, not-Bell-local) is too often blurred as not satisfying “local-realism”, as if non-realism was a way out [3, 4]. The fact is that assumption (1) is no longer tenable. As an example, consider the scientific background provided by the Nobel Committee [5]. A few lines after correctly presenting Bohm’s non-local hidden variable model, one reads that Bell inequality violation shows “that no hidden variable theory would be able to reproduce all the results of quantum mechanics”, contradicting the just cited Bohm model (which does predict violation of Bell inequalities). The correct statement is that no local variable theory is able to reproduce all results of quantum mechanics. And a few lines further, locality is defined as no-signaling - no communication without any physical object carrying the information, despite the fact that one of the main contribution of quantum information to the foundations of physics is a clear distinction between these two concepts. Next, realism is defined as determinism, even though Bell inequalities also hold in all stochastic theories satisfying (1). All this illustrates that Bell inequalities are still poorly understood by the general physics community. The 2022 Nobel Prize in physics allows one to hope that henceforth Bell inequalities will be part of all physics [courses].
I am with the Nobel Committee. The Bell inequalities told us nothing new. Bohm's theory is unphysical, and need not be considered seriously. Nor does any other nonlocal theory.

I am not even sure quantum information is a worthwhile concept. Quantum cryptography and teleportation have not found any commercial uses.

I cannot find it, but there is a video clip of R.P. Feynman being asked about Bell's theorem. He dismisses it as unimportant. He says it was just a way of formulating what everyone already knew.
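To make the disputed terminology concrete, here is a minimal numpy sketch (my own illustration, not Gisin's) of the CHSH form of a Bell inequality. For the singlet state, quantum mechanics predicts a correlation combination of magnitude 2√2, exceeding the bound of 2 that any local variable theory must satisfy. The measurement angles below are the standard ones that maximize the violation.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def obs(theta):
    """Spin measurement along angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <psi| A(a) x B(b) |psi> for measurement angles a, b."""
    M = np.kron(obs(a), obs(b))
    return np.real(psi.conj() @ M @ psi)

# CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
a, ap = 0, np.pi / 2
b, bp = np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(abs(S))  # about 2*sqrt(2) = 2.828..., above the local bound of 2
```

Note that nothing in the calculation mentions hidden variables at all; the inequality violation only rules out the local variable models, which is Gisin's complaint about the Nobel Committee's wording.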

Monday, October 30, 2023

The decolonisation of mathematics

John Armstrong and India Jackman write:
We describe a mainstream "universalist" approach to the understanding of mathematics. We then conduct a systematic (but not exhaustive) review of the academic literature on the decolonisation of mathematics and identify how this challenges the universalist view. We examine evidence of whether the experience of mathematics in the UK is systemically racist, examining both the decolonial arguments and the empirical evidence.
The paper makes good arguments that mathematics is universal, that attempts to decolonize math have been big failures, and that math culture is not racially discriminatory.

There are some examples of Western books being more likely to credit Western sources, but that is to be expected.

All this may seem obvious, but it is good to see a paper defend Mathematics against the increasing attacks from leftist decolonizers.

Friday, October 27, 2023

Poor Reasons for Rejecting Many-Worlds

Sabine Hossenfelder has posted her video, The Many Worlds Interpretation of Quantum Mechanics -- And why I don't believe it. I don't believe it either, so I was hoping to agree with her. Nope. Let me explain some basic points.

All scientific theories are probabilistic. Just read any scientific paper that reports experiments. There are always probabilities, error bars, p-values, or something to indicate probability. Even classical celestial mechanics, which is commonly thought to be deterministic, is always applied with probabilities. Astronomers estimate the position and other parameters for a planet, and then predict its future position, with probabilities.

Collapse is not really nonlocality. Anytime you make a probabilistic prediction, and then observe a definite value, you are ruling out the other possibilities. In quantum mechanics (QM), this is called collapse of the wave function. It happens in every other scientific theory. It does not mean that nature has any nonlocal properties. It only means that our knowledge of what is possible changes when we obtain new information. The Bayesians say this is essential to all of science.

QM is not inherently probabilistic, any more than any other theory. The wave function is not a probability. What it gives, most directly, is expectation values for observables. If A is an observable (self-adjoint operator), then <ψ|Aψ> is the expectation value of that observable. Usually a range of values is possible, and you can similarly get expectation values for the variance. If you want the probability that a particle is in a certain region, then you calculate the expectation value of the projection operator onto that region.

QM is probabilistic in the sense that it predicts expectation values and variances, but the same can be said of any other scientific theory. They all predict expectation values and variances.
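For concreteness, here is a minimal numpy sketch (my own, for a single qubit) showing that the expectation value, the variance, and an outcome probability all come from the same <ψ|Aψ> recipe, with the probability being the expectation of a projector.

```python
import numpy as np

# A self-adjoint observable (spin-z for a qubit) and a normalized state
A = np.array([[1, 0], [0, -1]], dtype=complex)          # eigenvalues +1, -1
psi = np.array([np.sqrt(0.9), np.sqrt(0.1)], dtype=complex)

exp_A = np.real(psi.conj() @ A @ psi)                   # <psi|A psi>
exp_A2 = np.real(psi.conj() @ (A @ A) @ psi)            # <psi|A^2 psi>
var_A = exp_A2 - exp_A**2                               # variance of A

# Probability of an outcome = expectation value of the projector onto it
P_up = np.array([[1, 0], [0, 0]], dtype=complex)        # projects onto +1
prob_up = np.real(psi.conj() @ P_up @ psi)
print(exp_A, var_A, prob_up)  # 0.8, 0.36, 0.9
```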

Any theory can be converted to a many-worlds theory, by rejecting the probabilities. Any time a theory predicts an event with probability p, you could disregard the probability, and say that observing the event splits the universe into many worlds, some with and some without the event.

Many-worlds theory (MWI) is just applying this to QM. It is not useful because the parallel worlds are imaginary, and because we lose the ability to interpret the probability. There is no real notion of being in a high or low probability world.

At this point, the MWI advocates start talking about the measurement problem, or the Bell problem. But MWI does nothing to solve the measurement problem. Measurements do not become less mysterious by postulating that world-splittings go along with them.

Bell's Theorem only says that QM is not a local hidden variable theory. Hossenfelder is wrong to say that Bell proved QM is nonlocal.

The obvious problem with MWI is that it hypothesizes parallel worlds that cannot be observed. There is no scientific value in discussing such things. The main problem is worse. Accepting MWI is the same as denying probability.

Suppose theory predicts a 90% chance of rain tomorrow. That means that if conditions are repeated 10 times, you can expect 9 rain days. But in MWI, it means that it will rain in some of the parallel worlds, and not others. You might think it will rain in 90% of the parallel worlds, but MWI has no way to count the worlds or make a statement like that. You don't know anything about the likelihood of rain in your world, because the MWI view is that there is no such thing. So arguing for many-worlds is the same as saying that the 90% probability is meaningless.

All of the above is mathematical fact, not opinion. Some physicists, like Sean M. Carroll, choose to believe in MWI, but only because they choose to believe in unobservable parallel worlds with no scientific value, and because they reject probability theory.

What would Carroll say to defend MWI against these arguments? First, he would say that belief in wave function collapse is unnecessary because it is conjectured that solutions to the Schroedinger equation will exhibit a decoherence that is effectively the same as collapse. I say that may be true, but it does not have anything to do with the fact that the parallel worlds have no scientific value.

Second, he would say that you can still believe in the probabilities, even if they have no direct meaning. I say he has become detached from reality.

MWI takes a perfectly good theory about the world, destroys the part that gives it predictive power, and adds zillions of ghosts that can never be seen. What is the point? There is no practical or computational value to any of it. It is contrary to science. No good has ever come out of it.

MWI is one of those ideas that is so crazy that if anyone advocates it, then his views on all other subjects are suspect. Other such ideas are determinism, free will denial, and the simulation hypothesis.

Wednesday, October 25, 2023

New Video on Non-Euclidean Geometry and Relativity

The latest Veritasium video is on:
How One Line in the Oldest Math Text Hinted at Hidden Universes
It is excellent as usual, and concerns the development of non-Euclidean geometry, from Euclid to relativity. It distinguishes special from general relativity, and presents non-euclidean geometry as being essential to general relativity.

In particular it tells how a supernova was seen 4 times in 2014, and predicted to be seen again in 2015, with the multiple images being caused by gravitational lensing.

This is all correct, but special relativity has had a much bigger impact on twentieth-century Physics than general relativity, and special relativity is also founded on non-euclidean geometry.

In the Veritasium model the "one line" is Euclid's Fifth Postulate about the existence of parallel lines. The "hidden universes" are model geometries where the postulate fails. Relativity is non-euclidean because it curves space. But it does not mention the bigger non-euclidean aspect -- light rays have null length. In euclidean geometry, every line segment has a positive length.

Euclidean geometry is based on Euclidean distances, with the distance between two points given by the Pythagorean Theorem. In relativity, time is the fourth dimension, and the Minkowski metric applies, not a Euclidean one.
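The null-length point is easy to check numerically. Here is a minimal sketch (my own illustration, in units with c = 1) comparing the Minkowski interval to the Euclidean distance for a light-like separation:

```python
import numpy as np

def minkowski_interval2(event1, event2, c=1.0):
    """Squared interval c^2*dt^2 - dx^2 - dy^2 - dz^2 (signature +,-,-,-)."""
    dt, dx, dy, dz = np.subtract(event2, event1)
    return (c * dt)**2 - dx**2 - dy**2 - dz**2

origin = (0, 0, 0, 0)   # events are (t, x, y, z)
light = (5, 3, 4, 0)    # a light ray: travels distance 5 in time 5 (c=1)
clock = (5, 0, 0, 0)    # a clock sitting at the origin for 5 units of time

print(minkowski_interval2(origin, light))  # 0.0 -- null, despite a Euclidean
                                           # distance of sqrt(50) in 4-space
print(minkowski_interval2(origin, clock))  # 25.0 -- timelike
```

Every segment of a light ray has interval zero, which is impossible for a line segment in Euclidean geometry.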

The non-euclidean geometry was appreciated early on, at least by Poincare in 1905, Minkowski in 1907, and Varicak in 1910. Einstein disagreed with it.

Monday, October 23, 2023

Whiteness in Physics Classrooms is Epistemicide

I try to avoid political topics, but woke papers are now infecting Physics journals.

Last year, the American Chemical Society published a paper on decolonizing chemistry education, in a special woke issue.

Now the American Physical Society has published Observing whiteness in introductory physics: A case study. It is criticized here, here, here, and here.

The paper tries really hard to relate white supremacy to learning thermodynamics in a physics class. I don't see it, but that is supposed to be the original contribution. My whiteness prevents me from seeing it, the authors would say. More striking to me is the conventional woke wisdom they regurgitate.

Our goal in this paper has been to “make whiteness visible,” in the tradition of Critical Whiteness Studies. In particular, we have sought to make visible how everyday physics classroom interactions reproduce whiteness as social organization, and how physics representations, values, and pedagogical tools play a role in this reproduction. That whiteness is “ordinary” in physics classrooms is not surprising, given critical race theory’s assertion that whiteness is endemic to every aspect of U.S. society [7]. The ordinariness of whiteness’ reproduction is not surprising either, given critical scholarship’s emphasis on the invisibility and hegemony of whiteness.
Since the paper is about whiteness, you have to understand what it means by "white". It is not the skin color or the biological race. The paper is emphatic that there is no such thing. It does occasionally identify particular people as being white, but it is not clear what makes them white, if not race or skin color. It also makes a point of not capitalizing white, as it capitalizes Black, Color, and Hispanic. This is because whites have no culture, and no common identity. Even "Activists of Color" get two capital letters, but whites get none.

These papers are just lying when they say that race is a social construct with no biological reality. Popular racial classifications are nearly identical to results of objective DNA tests.

The impact of whiteness in physics classrooms cannot be understated. One outcome of enforcing a social organization with a (consistent) center and margins is epistemicide [45], or “the extermination of knowledge and ways of knowing” [37] (cited in Ref. [45]). We see glimmers of this in the data we have shared in this paper.
It goes on to explain that some white student solved a thermodynamics problem using an energy interaction diagram at a whiteboard. This was "an example where one form of knowledge building was discontinued (or extinguished) in service of another." Apparently he is called white because he solved a physics problem using white patriarchal thinking, not because of his skin color.

This is analogous to white supremacy and patriarchy driving genocide. We are not killing Students of Color in the classroom, but we are marginalizing their bad science ideas.

The work is funded by the National Science Foundation. Your tax dollars are funding this garbage, and our leading science journals are publishing it. You would probably be ostracized as a racist, if you disagreed with it.

The current Scientific American says:

The Theory That Men Evolved to Hunt and Women Evolved to Gather Is Wrong

The influential idea that in the past men were hunters and women were not isn’t supported by the available evidence ...

The inequity between male and female athletes is a result not of inherent biological differences between the sexes but of biases in how they are treated in sports.

They must think we are really stupid. Or that we will blindly accept scientific authorities. Or that we will be afraid to dispute woke pronouncements. I don't know. Scientific American was a great magazine for about 150 years.

Modern experts like to make fun of medieval scholars for trying to turn lead into gold, and the Sun going around the Earth. But what will our descendants think of 21st century science?

Update: Here is another example of bad science cited to support woke ideas. Coleman Hughes is a Black podcaster who gave a talk in favor of color-blindness, as pushed by M.L. King. He just posted a response to criticism. In brief, the TED folks tried to blackball his talk, claiming that it was disproved by social science saying that we must have racial preferences for Blacks. The social science is bogus, of course.

Update: Biologist Jerry Coyne debunks the rest of that SciAm article. Referring to the above quote:

Here the authors are wading into quicksand. In fact, the entire quote is offensive to reason, for it implies that, if women were treated the same as men in sports, they would do as well. Given the differences between the sexes in morphology and physiology, such a claim flies in the face of everything we know.  ...

In the end, we have still more evidence that Scientific American is no longer circling the drain, but is now in the drain, headed for, well, the sewers. It used to have scientists writing about their field, with no ideological bias, but now has ideologues (these authors are scientist-ideologues) writing about science in a biased and misleading way.