Wednesday, July 28, 2021

Brian Greene Still Clings to String Theory

Physicist Brian Greene used to be the public face of string theory, and on a recent video Q&A, he was asked:
Is it true that physicists are losing their hope in string theory?
The answer is obviously yes, but he refuses to admit it.

Greene's answer is amusingly weak. He is unable to point to any theoretical or experimental progress in string theory. He just babbles.

He starts by getting a drink of water, emphatically saying the answer is no, and claiming that the string theory critics need psychotherapy!

He goes on to defend the multiverse, even though that was not really the question. He says string theory does not require a multiverse, but then he argues for the multiverse, as if it were essential to his belief in string theory.

He implies that only a narrow-minded person would believe in just one universe. He admits that it is desirable to have physical evidence for physical theories, but says that evidence for part of a theory (i.e., one universe) can be taken to enhance belief in other parts (i.e., other universes).

He ends by saying that the state of the field is one of excitement.

Watch the above video from 42:00 to 46:00 if you think that I have distorted him.

If I knew nothing about string theory, I would say that this is a con man promoting goofy ideas that no one could believe. If I watched the rest of the video, I would see that he understands a lot of physics and is good at explaining textbook physics, and I would be very confused.

Dr. Bee's latest video says that string theorists have given up on their original goals, leaving an overhyped research program. The multiverse is so silly it is not even science. She challenges a lot of modern theoretical physics as too speculative to be worthwhile.

Sunday, July 25, 2021

R.I.P. Steven Weinberg

Many fine obituaries of Steven Weinberg are appearing.

His Nobel Prize was for the weak interaction, but he shared it with two others who came up with the same equations at about the same time. His paper had essentially no citations until others developed gauge theory more fully. So while the theory is important to the standard model, I am not so sure his 1967 paper was important.

Weinberg wrote this in the preface to his 1972 general relativity book:

There was another, more personal reason for my writing this book. In learning general relativity, and then in teaching it to classes at Berkeley and MIT, I became dissatisfied with what seemed to be the usual approach to the subject. I found that in most textbooks geometric ideas were given a starring role, so that a student who asked why the gravitational field is represented by a metric tensor, or why freely falling particles move on geodesics, or why the field equations are generally covariant would come away with an impression that this had something to do with the fact that spacetime is a Riemannian manifold.

Of course, this was Einstein's point of view, and his preeminent genius necessarily shapes our understanding of the theory he created. However, I believe that the geometrical approach has driven a wedge between general relativity and the theory of elementary particles. As long as it could be hoped, as Einstein did hope, that matter would eventually be understood in geometrical terms, it made sense to give Riemannian geometry a primary role in describing the theory of gravitation. But now the passage of time has taught us not to expect that the strong, weak, and electromagnetic interactions can be understood in geometrical terms, and that too great an emphasis on geometry can only obscure the deep connections between gravitation and the rest of physics.

This opinion is so bizarre that it is hard to believe it came from such a smart man.

All general relativity scholars give geometric ideas a starring role. But that was not Einstein's view. Einstein explicitly rejected the geometry.

This rejection was puzzling for both Einstein and Weinberg, as parts of general relativity have only geometric explanations, and their books do give those explanations. They did not succeed in purging the geometry.

It was also bizarre for him to deny that geometry was important for the other forces. This was five years after his Nobel-prizewinning paper, and that work was credited with giving a geometrical unification of the electromagnetic and weak forces. If he did not do that, what did he do? Was he repudiating his own work? Did he not understand that gauge theories are geometric?

I am not trying to criticize him here. I am just pointing out some puzzling aspects of his beliefs.

He was also a hardcore atheist and scientific reductionist. He opposed religion, and considered Islam much worse than Christianity. He opposed paradigm shifters and other modern philosophy. He was a typical academic leftist, but had not bought into the current racial wokeness fanaticism.

Okay, that's all fine with me, but he also rejected positivism, and in his later years, endorsed many-worlds quantum theory. Again, these opinions are bizarre, coming from him. Lubos Motl also found them strange:

Years ago, I was only gradually noticing weird comments about quantum mechanics that Weinberg sometimes made.... At any rate, I consider Weinberg to be a 100% anti-quantum zealot ... It's sad.
He was of Jewish descent, but against Judaism the religion. He was extremely pro-Israel, and an extreme idolizer of Einstein. He studied the history of science enough to know what Einstein did and did not do, but excessively praised him anyway.

Friday, July 23, 2021

O'Dowd can explain Magnetism, but not Many Worlds

Matt O'Dowd has another nice PBS TV Physics video, on How Magnetism Shapes The Universe. I learned a few things. Then, at 15:00, he responds to some queries about a previous episode on Many-Worlds theory.

The questions have no good answers. He says that he will have to make some more videos to explain. Good luck with that. No one can tell how many worlds there are, or how often they split, or how one can be more probable than another, or any practical questions about how it all works.

Monday, July 19, 2021

Google Shows 1% of a Logical Qubit

The Google Quantum AI team has published a hot new result in Nature:
Realizing the potential of quantum computing requires sufficiently low logical error rates [1]. Many applications call for error rates as low as 10^-15 (refs. 2-9), but state-of-the-art quantum platforms typically have physical error rates near 10^-3 (refs. 10-14). Quantum error correction [15-17] promises to bridge this divide by distributing quantum logical information across many physical qubits in such a way that errors can be detected and corrected.
As summarized in AAAS Science, they tried to spread a logical qubit over 11 physical qubits. They were not able to correct errors, but they estimate that they could achieve one logical qubit by using 1000 physical qubits.

The Google author promises something much better "in the relatively near future — date TBD."

Getting one qubit is still a long way from doing anything useful.
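For a rough sense of the gap, here is a back-of-envelope sketch. The threshold model and numbers are my own illustrative assumptions, not Google's actual analysis: the standard surface-code scaling is roughly p_L ~ A*(p/p_th)^((d+1)/2), with about 2*d^2 physical qubits per logical qubit at code distance d.

```python
# Toy surface-code scaling estimate (assumed threshold model, illustrative only):
# logical error rate p_L ~ A * (p / p_th)^((d+1)/2),
# physical qubits per logical qubit ~ 2 * d^2 at code distance d.
def qubits_for_target(p_phys, p_target, p_th=1e-2, prefactor=0.1):
    d = 3
    while prefactor * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2  # surface-code distance is odd
    return d, 2 * d * d

distance, physical = qubits_for_target(p_phys=1e-3, p_target=1e-15)
print(distance, physical)  # on the order of a thousand physical qubits
```

Under these assumed parameters, getting from today's 10^-3 physical error rate to a 10^-15 logical error rate needs a code distance near 27, i.e. over a thousand physical qubits per logical qubit, which is consistent with the estimate in the article.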

Tuesday, July 13, 2021

Parallel Universes do not explain anything

Karl-Erik Eriksson compares The Many-Worlds Interpretation and postmodernism:
Sokal's definition of postmodernism is the following: an intellectual current characterized by the more-or-less explicit rejection of the rationalist tradition of the Enlightenment, by theoretical discourses disconnected from any empirical test, and by cognitive and cultural relativism that regards science as nothing more than a "narration", a "myth" or a social construction among many others. [27]

Let us make a description of MWI:

MWI is a world view that regards our experiences of reality as nothing more than one narrative about our world among innumerably many other narratives in a world of worlds. A narrative exists only in one world and cannot be communicated to another world.

This is probably close to what an MWI proponent could also accept.

Then what do postmodernists see in MWI? They see MWI as a world view structured in a way that is similar to postmodernism and therefore useful to support it. Even if postmodernists do not value science, they can value the prestige of science, as shown by Sokal [18]. Therefore, MWI can be felt as a scientific support for postmodernism.

He also cites Brian Greene's arguments for many-worlds:
Stage one — the evolution of wavefunctions according to Schrödinger's equation — is mathematically rigorous, totally unambiguous, and fully accepted by the physics community. Stage two — the collapse of a wavefunction upon measurement — is, to the contrary, something that during the last eight decades has, at best, kept physics mildly bemused, and at worst, posed problems, puzzles and potential paradoxes that have devoured careers. The difficulty [...] is that according to Schrödinger's equation wave functions do not collapse. Wavefunction collapse is an add-on. It was introduced after Schrödinger discovered his equation, in an attempt to account for what experimenters actually see. [[9], p. 201.]
No, the evolution of the wave function is not stage one. The wave function is not directly observable, so we can only infer an estimated wave function by other methods.

This might seem like a minor point, except that Greene is going to argue that the Schroedinger equation is all we need. That cannot be.

John Preskill, in his recent podcast, also said that maybe unitary evolution of the wavefunction is all there is. But that cannot be all there is, because we still need an explanation for why we see discrete measurements.

Note the attempt to denigrate wavefunction collapse as an invention "to account for what experimenters actually see"! Yes, physical theories try to account for what experimenters see. They see collapse. Any theory not explaining collapse is not doing the job.
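For reference, the two "stages" in Greene's description can be written explicitly. Stage one is deterministic unitary evolution under the Schrödinger equation; stage two is the projection postulate:

```latex
i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi
\qquad \text{(stage one: unitary evolution)}

\psi \;\longrightarrow\; \frac{\hat{P}_k \psi}{\lVert \hat{P}_k \psi \rVert}
\quad \text{with probability } \lVert \hat{P}_k \psi \rVert^2
\qquad \text{(stage two: collapse)}
```

where P_k projects onto the eigenspace of the observed outcome. The first rule is linear and reversible; the second is neither, which is exactly why it reads as an "add-on."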

The many-worlds fans say that the collapse is seen as the splitting of the universes. So they have to have collapse as part of the theory, but they say the splitting is a mystery and cannot tell us much more than that.

[Each] of the potential outcomes embodied in the wavefunction still vies for realization. And so we are still wondering how one outcome "wins" and where the many other possibilities "go" when that actually happens. When a coin is tossed, [...] you can, in principle, predict whether it will land heads or tails. On closer inspection, then, precisely one outcome is determined by the details you initially overlooked. The same cannot be said in quantum physics. [...]
There is not really any conceptual difference here.

The coin toss cannot be predicted with certainty. Maybe some overlooked details would enable a better prediction, but that could be true of quantum mechanics too, for all we know.

Much in the spirit of Bohr, some physicists believe that searching for such an explanation of how a single, definite outcome arises is misguided. These physicists argue that quantum mechanics, with its updating to include decoherence, is a sharply formulated theory whose predictions account for the behavior of laboratory measuring devices. And according to this view, that is the goal of science. To seek an explanation of what's really going on, to strive for an understanding of how a particular outcome came to be, to hunt for a level of reality beyond detector readings and computer printouts betrays an unreasonable intellectual greediness.

Many others, including me, have a different perspective. Explaining data is what science is about. But many physicists believe that science is also about embracing the theories data confirms and going further by using them to get maximal insight into the nature of reality. [[9], pp. 212-213.]

I am all for explaining data also, but just saying that anything can happen in parallel universes explains nothing.

Thursday, July 8, 2021

Electrons do Spin

There is a new PBS TV video on Electrons DO NOT Spin.

This guy is usually pretty reliable, but he is way off base here. Of course electrons spin.

His main argument is that if you conceptualize an electron as a particle, then it is hard to see how the charge distribution and angular velocity could result in the observed magnetic moment.

Okay, electrons are not classical particles. If you conceptualize an electron as a classical particle, you will have trouble with position, momentum, and everything else.

Spin is the intrinsic angular momentum, quantized.

Update: I found this explanation:

WHAT IS SPIN?

Spin is an inherent property possessed by the electron. However, it does not rotate. In quantum mechanics, we speak of an electron as having an intrinsic angular momentum called spin. The reason we use this term is that electrons possess an angular momentum and a magnetic moment just like a rotating charged body.

DO ELECTRONS SPIN SIMILAR TO PLANETS?

IT IS MISLEADING TO IMAGINE AN ELECTRON AS A SMALL SPINNING OBJECT DUE TO THE FOLLOWING:

An electron’s spin is quantified. It has only two possible orientations, spin up and down, unlike a tossed ball.
To regard an electron as spinning, it must rotate with a speed greater than light to have the correct angular momentum [Griffiths, 2005, problem 4.25].
Similarly, the electron's charge would have to rotate faster than the speed of light to generate the correct magnetic moment [Rohrlich, 2007, pg. 127].
Unlike a tossed ball, the spin of an electron never changes. It has only two possible orientations: spin up and down.

These arguments are just wrong.

The 1st and 4th are not true. An electron spin can be in any direction, not just up and down. If you put it in a suitable magnetic field, then it will be just up or down, but the same is true about a classical spinning charged ball.

I don't have those textbooks, but they presumably do a computation assuming an electron is a charged particle with extremely small size. But an electron is not a classical particle. While it looks like a point particle in some experiments, it also looks like a wave of concentrated fields. That wave/field is spinning.
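The faster-than-light claim is easy to check numerically. Here is a rough sketch of the kind of computation those textbooks presumably do; the solid-sphere model and the use of the classical electron radius are my own assumptions, and the cited texts may set the problem up somewhat differently:

```python
# Equatorial speed a classical spinning sphere of the electron's mass and
# classical radius would need in order to carry spin angular momentum hbar/2.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e  = 9.1093837015e-31   # electron mass, kg
r_e  = 2.8179403262e-15   # classical electron radius, m
c    = 2.99792458e8       # speed of light, m/s

# Solid sphere with L = (2/5) * m * r * v_eq = hbar/2
# =>  v_eq = 5 * hbar / (4 * m * r)
v_eq = 5 * hbar / (4 * m_e * r_e)
print(v_eq / c)  # roughly 170 times the speed of light
```

The answer comes out far above c, so the paradox is real, but note what it actually shows: a classical ball of that size cannot carry the spin, not that nothing is spinning.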

Monday, July 5, 2021

Why Quantum Systems are Hard to Simulate

I just listened to Mindscape 153 | John Preskill on Quantum Computers and What They’re Good For:
Depending on who you listen to, quantum computers are either the biggest technological change coming down the road or just another overhyped bubble. Today we’re talking with a good person to listen to: John Preskill, one of the leaders in modern quantum information science. We talk about what a quantum computer is and promising technologies for actually building them. John emphasizes that quantum computers are tailor-made for simulating the behavior of quantum systems like molecules and materials; whether they will lead to breakthroughs in cryptography or optimization problems is less clear. Then we relate the idea of quantum information back to gravity and the emergence of spacetime. (If you want to build and run your own quantum algorithm, try the IBM Quantum Experience.)
Preskill coined the term "quantum supremacy", and supposedly that is the biggest accomplishment in the field, but oddly the term is never mentioned.

Sean M. Carroll is an advocate of many-worlds theory, and Preskill said he is comfortable with that belief.

Preskill referred to Feynman kickstarting the field by a lecture that said that quantum systems are hard to simulate. This supposedly implied that quantum computers are inevitable.

As I see it, there are three main views underlying this thinking.

Many-worlds. If computers are constantly splitting into multiple computers doing different computations, then possibly they can be put to work doing parallel computation, and reporting back a result in one world.

This used to be the eccentric view of David Deutsch, but now Carroll, Preskill, Aaronson, and many others are on board.

Negative probability. In this view, quantum entanglement is a mysterious resource that can have negative or imaginary probabilities, so maybe it can do things better than classical computers that are limited by [0,1] probabilities. This is the preferred explanation of Scott Aaronson.

Pinball. The game of pinball is also hard to simulate because a ball can hit a bumper, be forced to one side or the other in a way that is hard to predict, and then the subsequent action of that ball is wildly different, depending on the side of the bumper. It is a simple example of chaos.

Quantum systems can be like pinball. Imagine a series of 100 double-slits. You fire an electron through all the slits. At the first slit, the electron goes through one slit, or the other, or some superposition. The electron has the same choice at the next slit, and it is influenced by what happened at the first slit. Simulating the 100 slits requires keeping track of 2^100 possibilities.

So pinball is hard to simulate, but no one would try to build a super-computer out of pinball machines.
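The blow-up is easy to make concrete. A naive state-vector simulation of n binary branchings stores 2^n complex amplitudes (a minimal sketch, assuming 16 bytes per amplitude):

```python
# Memory needed for a brute-force state vector over n two-way branchings,
# at 16 bytes per complex amplitude (two 64-bit floats).
def state_vector_bytes(n):
    return (2 ** n) * 16

print(state_vector_bytes(30))   # about 17 GB: barely feasible
print(state_vector_bytes(100))  # about 2e31 bytes: hopeless classically
```

Around 30 slits a desktop machine already struggles; at 100 slits the state vector would need more bytes than there are stars in the observable universe.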

My theory here is that your faith in quantum computers depends on which of the above three views you hold.

If quantum experiments are glimpses into parallel worlds, then it is reasonable to expect parallel computation to do useful work. If quantum experiments unlock the power of negative probabilities, then it is also plausible there is a quantum magic to be used in a computer. But if quantum experiments are just pinball machines, then nothing special should be expected.

You might say that we know quantum experiments are not classical, as Bell proved that. My answer is that they are not literally parallel universes or negative probabilities either. Maybe the human mind cannot even grasp what is really going on, so we have to use simple metaphors.

The many-worlds and negative probability views do not even make any mathematical sense. Many-worlds is so nutty that Carroll, Aaronson, and Preskill discredit much of what they have to say by expressing these opinions.

Now Aaronson says:

To confine myself to some general comments: since Google’s announcement in Fall 2019, I’ve consistently said that sampling-based quantum supremacy is not yet a done deal. I’ve said that quantum supremacy seems important enough to want independent replications, and demonstrations in other hardware platforms like ion traps and photonics, and better gate fidelity, and better classical hardness, and better verification protocols. Most of all, I’ve said that we needed a genuine dialogue between the “quantum supremacists” and the classical skeptics: the former doing experiments and releasing all their data, the latter trying to design efficient classical simulations for those experiments, and so on in an iterative process. Just like in applied cryptography, we’d only have real confidence in a quantum supremacy claim once it had survived at least a few years of attacks by skeptics. So I’m delighted that this is precisely what’s now happening.
Wow. He consistently said that?

He was the referee who approved Google's claim of quantum supremacy. He has collected awards for papers on that "sampling-based quantum supremacy". Maybe his referee report should have recommended that Google scale back its claims.

Aaronson has also argued that the quantum computing skeptics have been proven wrong. I guess not. It will still take a few years of consensus building.

I am one of those quantum computing skeptics. I thought that maybe someday an experiment would be done that proves me wrong. But apparently that is not how it works.

The experiments are inconclusive, but the experts will eventually settle into the pro or con sides. Only when that happens will I be seen to be right or wrong.

No, I don't accept that. Expert opinion is shifting towards many-worlds, but it is still a crackpot unscientific theory of no value. When Shor's algorithm on a quantum computer factors a number that could not previously be factored, then I will accept that I was wrong. Currently, Dr. Boson Sampling Supremacy himself admits that I have not been proven wrong.

Update: Here is a SciAm explanation:

If you think of a computer solving a problem as a mouse running through a maze, a classical computer finds its way through by trying every path until it reaches the end.

What if, instead of solving the maze through trial and error, you could consider all possible routes simultaneously?

Even now, experts are still trying to get quantum computers to work well enough to best classical supercomputers.

Wednesday, June 30, 2021

The Aether was not a Fringe Theory

An anonymous Wikipedia comment:
I'm perplexed that the other commenters identify ether theory as a fringe theory. I suppose, by their definition, Newtonian mechanics is also a "fringe theory" simply because it is outdated, and yet nobody is out to debunk Newtonian mechanics (in fact we teach it in schools). "Debunking" ether theory and obscuring its pedagogical utility is clearly an emotional pursuit for these people. It's hard to understand where this religious mentality about ether theory comes from, except in the context of a century of propaganda, as you pointed out. The attitude is clearly political and inappropriate (yet all too common) on Wikipedia.
I agree. If you think that aether theory was somehow wrong or unscientific, then read J.C. Maxwell's 1878 article in the Encyclopedia Britannica. He did not know that the aether had to be invariant under Lorentz transformations, but his essay holds up pretty well.

Here is what Lorentz's famous 1895 relativity paper said. After dismissing some aether theories:

It is not my intention to enter into such speculations more closely, or to express assumptions about the nature of the aether. I only wish to keep myself as free as possible from preconceived opinions about that substance, and I won't, for example, attribute to it the properties of ordinary liquids and gases. If it is the case, that a representation of the phenomena would succeed best under the condition of absolute permeability, then one should admit of such an assumption for the time being, and leave it to the subsequent research, to give us a deeper understanding. ​That we cannot speak about an absolute rest of the aether, is self-evident; this expression would not even make sense. When I say for the sake of brevity, that the aether would be at rest, then this only means that one part of this medium does not move against the other one and that all perceptible motions are relative motions of the celestial bodies in relation to the aether.
Einstein said essentially the same thing in 1905:
The introduction of a “luminiferous ether” will prove to be superfluous inasmuch as the view here to be developed will not require an “absolutely stationary space” provided with special properties, nor assign a velocity-vector to a point of the empty space in which electromagnetic processes take place.
Poincare wrote in a popular 1902 book that the aether is a convenient hypothesis that will someday be thrown aside as useless.

The lesson from relativity was not that the aether was wrong, or unscientific, or nonexistent, or exemplary of 19th century thinking.

The lesson was that theories based on motion against the aether were disproved.

The aether is sometimes defined as whatever transmits electromagnetism. If it is Lorentz invariant, then it poses no problem for relativity.

The existence of a preferred frame poses no problem either. The cosmic microwave radiation can be used to define what a rest frame is, so there is nothing wrong with that concept.

There was a time, maybe a century ago, when one could make fun of ancient scientists for lacking the imagination to understand that a vacuum might be empty space. But a vacuum is not really empty in today's theories either.

Thursday, June 24, 2021

Poincare understood what he wrote

I was referred to this 2014 Howard Stein paper for the bizarre claim that Poincare did not understand what he was doing:
Poincaré is a pre-eminent figure: as one of the greatest of mathematicians; as a contributor of prime importance to the development of physical theory at a time when physics was undergoing a profound transformation; and as a philosopher. However, I think that Poincaré, with all this virtue, made a serious philosophical mistake. In Poincaré’s own work, this error seems to me to have kept him from several fundamental discoveries in physics.
As Stein admits, claiming that Poincare did not understand what he was saying is to hypothesize something that has never happened in the entire history of physics.

It quotes Poincare making the following points.

1. Poincare presents a relativity theory in which electromagnetic and gravitational forces both propagate at the speed of light. This remarkable coincidence has two possible explanations. Lorentz would say that it is because gravity and everything else has an electromagnetic origin.

2. Poincare proposes a more radical explanation, which he analogizes to Copernicus differing from Ptolemy. His idea is that relativity is about how we do spacetime measurements, and therefore applies to all forces.

Stein quotes Poincare saying all this, and is baffled by it, because it is about eight years ahead of Einstein.

I commented on Stein's paper back in 2014.

Here is another silly argument:

No one referred to Poincare's 1905/6 papers in subsequent years and Einstein was unaware of them until the 1950s. Minkowski did not reference Poincare's work. Poincare wrote a semi-popular article in 1908, and that was all. Without the controversy started by E. T. Whitaker and discussed in this WP article, Poincare's 1905/6/8 papers might have languished in obscurity.
No, Poincare's papers were more influential than Einstein's, at the time. Here is Minkowski's big paper on relativity, and as you can see, it references Poincare's big 1905/6 paper twice. These two papers were the most important relativity papers after Lorentz's, and formed the basis of all subsequent work. They were known to Einstein at the time, and to everyone else with an interest in relativity.

Poincare's paper was in French, but Einstein was fluent in French, as he had attended a French-speaking university. Relativity quickly became widely accepted because of these papers, not Einstein's. Both papers would have been tough reading for physicists of the day, but both had shorter summary versions that were widely distributed and read. See translations here: Poincare and Minkowski.

There is still a question that begs for explanation. It may be unique in the history of science. In 1905, Poincare was maybe the most respected scientist or mathematician in all of Europe. Certainly one of the most famous and respected. In 1904 he wrote a paper declaring an entirely new mechanics based on the speed of light. In 1905 he published a long paper on his relativity, comparing it to Copernican heliocentrism, and the work immediately became the basis of all twentieth-century physics.

And yet there are scholars today who claim that no one ever noticed Poincare's papers, and that all the credit should belong to Einstein instead. None of them can tell us anything that Einstein did that was original, so they resort to arguments that Lorentz and Poincare did not know what they were doing!

This is like saying that Copernicus should not get any credit for heliocentrism because he did not understand that the Earth revolved around the Sun in his model.

Einstein went on to spend most of his post-1920 career arguing against quantum mechanics and pursuing silly unified field theories. Other physicists accomplished many great things, but there is a significant faction even today that says goofy things about quantum mechanics and proposes grand untestable theories.

Update: The Wikipedia discussion ended without adding the remarks about Poincare not knowing what he was doing, or that Minkowski failed to cite Poincare. The article remains heavily biased towards the views of Einstein scholars who favor crediting Einstein, but the facts presented are quite historically accurate, and you can decide for yourself.

Monday, June 21, 2021

New book overhypes Quantum Computers

News:
Wired published a long extract from Amit Katwala's book Quantum Computing: How It Works and How It Could Change the World — explaining how it's already being put to use to explore some of science's biggest secrets by simulating nature itself:

Some of the world's top scientists are engaged in a frantic race to find new battery technologies that can replace lithium-ion with something cleaner, cheaper and more plentiful. Quantum computers could be their secret weapon... Although we've known all the equations we need to simulate chemistry since the 1930s, we've never had the computing power available to do it...

Comments add:
QCs are _still_ much slower than much cheaper conventional computers if you use the best algorithms for each technology. And that will remain the case for a long time yet, and possibly forever. Hence there is absolutely _nothing_ that QCs are good for at this time, except separating fools from their money. Also note the excessive use of "could" in the article. In this context "could" = "maybe, maybe not, but certainly not anytime soon". ...

Good grief, the very title Quantum Computing: How It Works and How It Could Change the World tells you it's a work of speculative fiction. In the past it would have been Fusion Power: How It Works and How It Could Change the World or Faster-than-light Travel: How It Works and How It Could Change the World or Telekinesis: How It Works and How It Could Change the World or, if you're that way inclined, Yogic Flying: How It Works and How It Could Change the World. Show me one thing that quantum computing has actually achieved that isn't a specially-crafted artificial problem and that couldn't have been achieved with much less effort with a standard computer.

That's right. Quantum computers have not demonstrated the ability to do practical computations, and will not anytime soon, if ever.

Google has been similarly overpromising self-driving cars, but that technology has been demonstrated in controlled situations, even if it is not yet ready for the mass market.

Monday, June 14, 2021

New Book on Poincare and Relativity

There is a new book on Henri Poincare, and the author has posted a summary on Wikipedia:
Bruce Popp (2020) [1] argues that Poincaré ([Poi05] and [Poi06]) developed a correct relativistic theory of electrodynamics that achieved both substantial and incomplete progress to a theory of special relativity by a different route from Einstein. This route had its origins in work on radioactivity and electrons. His 1905 and 1906 papers are immediately based on his close reading of [Lor04] and the three divergences from Lorentz that Poincaré identified. For example, he understood Lorentz’s presentation of the transformations based on corresponding states was flawed. Poincaré provided the correct form for the transformations and the understanding that they were coordinate transformations. It is this corrected form that Poincaré named “Lorentz transformations” and that match the form and understanding given to them by Einstein [Ein05c]. Poincaré shows that thus corrected the transformations are a group corresponding to a rotation in four-dimensional space with three spatial and one time dimension and that the space-time interval is an invariant of this group. 
Popp emphasizes that while, as this summary suggests, Poincaré would have been justified in making a series of strong statements about his findings, very surprisingly he did not. In fact, Poincaré does not seem to have understood and synthesized what he showed in 1905. Worse, he contradicts himself in later writing adding to confusion about his work and positions, notably concerning the ether. Popp indicates that this is one reason why Poincaré’s alternate path to special relativity is not fully realized. Another is that Poincaré shows no appreciation of the implications for simultaneity and time; in brief there is nothing comparable to Einstein’s discussion of moving watch hands and trains arriving.

Reference: Popp, Bruce D. (2020). Henri Poincaré: Electrons to special relativity: Translation of selected papers and discussion. Cham: Springer International Publishing. ISBN 978-3-030-48038-7.
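The quoted claims about the Lorentz group and the invariant interval are easy to check numerically. Here is a minimal sketch (my own example, not from Popp's book), in units with c = 1:

```python
# A numerical sanity check of the quoted claims: a Lorentz boost preserves
# the spacetime interval t^2 - x^2 - y^2 - z^2, and boosts along one axis
# compose like a group, with velocities combining by the addition law.
import numpy as np

def boost_x(v):
    """Lorentz boost along the x-axis with velocity v (|v| < 1, c = 1)."""
    g = 1.0 / np.sqrt(1.0 - v * v)   # Lorentz factor gamma
    return np.array([
        [g,      -g * v, 0.0, 0.0],
        [-g * v,  g,     0.0, 0.0],
        [0.0,     0.0,   1.0, 0.0],
        [0.0,     0.0,   0.0, 1.0],
    ])

def interval(event):
    t, x, y, z = event
    return t * t - x * x - y * y - z * z

event = np.array([2.0, 1.0, 0.5, -0.3])
assert np.isclose(interval(boost_x(0.6) @ event), interval(event))

# Group closure: two boosts equal one boost at the velocity-addition value.
w = (0.5 + 0.3) / (1 + 0.5 * 0.3)
assert np.allclose(boost_x(0.5) @ boost_x(0.3), boost_x(w))
```

The closure of these transformations under composition, with the interval as an invariant, is exactly the group structure that Poincaré named after Lorentz.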

This is all conventional wisdom, and here are his main arguments.

Poincare did not brag about his work, as Einstein did. Poincare evidently thought that his papers spoke for themselves. He didn't brag about his many other original works either; that was common for scientists. Einstein was the exception, as he made great efforts to claim credit for the work of others.

If Poincare understood what he wrote, then he was years ahead of Einstein. The Einstein fans say that this proves that Poincare did not understand what he wrote, but of course that sort of thing never happens. Obviously Poincare understood what he wrote.

Poincare did not emphasize simultaneity in 1905. As you can read in the Wikipedia article on the subject, Poincare discovered relativistic time synchronization in 1898, and regarded it as a solved problem.

Poincare's contribution has been forgotten. There is some truth to this, as Poincare's work survives mostly through two papers by Minkowski, who died shortly afterwards, and through all the subsequent work that treats relativity as a 4-dimensional theory.

It is amazing how scholars concoct these stories to credit Einstein over Poincare. Our current understanding of relativity is based much more on the work of Poincare than Einstein.

Poincare's works get mentioned in Jordan Ellenberg: Mathematics of High-Dimensional Shapes and Geometries | Lex Fridman Podcast #190. He is praised for his work on celestial mechanics, stability of dynamics, and topology. His discovery of relativity is not even mentioned. Einstein's greatest accomplishment was just a poor plagiarization of one of Poincare's minor papers.

After some discussion, the Wikipedia article on Einstein recently removed:

[Einstein is] universally acknowledged to be one of the two greatest physicists of all time, the other being Isaac Newton.
Someone pointed out that polls by PhysicsWorld and UK BBC showed physicists saying that Einstein was the greatest, with Newton in second place.

There continues to be crazy over-the-top idolization of Einstein. Normally a book about a great scholar will simply describe what he did, without gratuitous insults about him being inferior to some other great man.

The above book does not meet that standard. It recognizes what Poincare did, and then makes nonsensical disparaging remarks in order to say that Einstein was better. Maybe someday I will see an Einstein scholar write something like this:

Einstein's 1905 relativity paper was a nice exposition of Lorentz's theory. But it lacked references to earlier theoretical work by Lorentz, FitzGerald, Poincare, and others, and to crucial experimental work by Michelson-Morley and others. It failed to explain how his theory was any different from Lorentz's. Nobody saw any difference, and called it the Lorentz-Einstein theory. Einstein failed to grasp the spacetime geometry, the Lorentz group, the covariance of Maxwell's equations, or the implications for gravity. Einstein shows no appreciation of relativity as a 4-dimensional theory; in brief there is nothing comparable to Poincare's 1905 work, and nothing that led to further work.
Whittaker did say something similar in his 1954 book. Einstein was still alive, and could not refute it, even though his friend Max Born tried. So all serious scholars know that this Einstein credit for relativity is a hoax. The Einstein worship has only accelerated since then.

Monday, June 7, 2021

The Current War on Science

Science and medicine are being politicized, and there are so many examples that it is tiresome to list them.

During the Trump administration, it was common to hear academic and news media complaints that he was anti-science. But they never had any examples of him acting against accepted research or failing to fund mainstream science programs.

Anthony Fauci was interviewed on Science Friday. He has been embarrassed by emails, but those emails are not that much different from his public statements. He has said a long list of foolish and unscientific things.

Friday he talked about AIDS a lot. He tried to blame it on Pres. Ronald Reagan. He tried to say it was not a gay disease, as proved by Magic Johnson getting it. (Johnson was rumored to be participating in dangerous homosexual practices, even before the AIDS story.)

Fauci and other experts have told us for a year that the coronavirus could not have been a Wuhan lab leak, when that is still the most plausible explanation.

Here is an essay on What Happens When Doctors Can't Tell the Truth?

Here is a paper by a Black woman with a PhD from the Perimeter Institute, home to a lot of crackpot physics:

To provide an example of the role that white empiricism plays in physics, I discuss the current debate in string theory about postempiricism, motivated in part by a question: why are string theorists calling for an end to empiricism rather than an end to racial hegemony? I believe the answer is that knowledge production in physics is contingent on the ascribed identities of the physicists. ...

For these reasons, the area of quantum gravity, a physics subdiscipline considered by many to be the pinnacle of physics prestige, objectivity, universality, and culturelessness, is a natural starting point for a discussion about how social prestige asymmetries affect epistemic outcomes in physics. Ultimately, the discourse about the quantum gravity model of string theory provides an example of how white supremacist racial prestige asymmetry produces an antiempiricist epistemic practice among physicists, white empiricism. In string theory, we find an example wherein extremely speculative ideas that require abandoning the empiricist core of the scientific method and which are endorsed by white scientists are taken more seriously than the idea that Black women are competent observers of their own experiences.

Maybe the Perimeter Institute considers quantum gravity to be "the pinnacle of physics prestige, objectivity", but nothing good has ever come out of that subject.

Environmentalism has been hopelessly politicized for years. If they really cared about global warming, their top priorities would be building nuclear power plants and blocking Third World immigration into the First World.

Larry Krauss has a decent defense of objective science in Quillette. He is probably also a Trump-hating leftist, but I cite him to show that not all academics have bought into the current nonsense.

Authors are constantly chastised for terminology. I see Scott Aaronson still uses "quantum supremacy", but probably only because he has tenure and his enemies have other grounds for attacking him.

Reason reports:

Last month, the Journal of Hospital Medicine published an article titled, "Tribalism: The Good, the Bad, and the Future." It proposed strategies for medical professionals to overcome some of the natural group clustering that occurs in any large workspace: launch interdepartmental projects, socialize outside of the office, etc.
The paper was recalled, and the authors had to put out this apology:
From this experience, we learned that the words "tribe" and "tribalism" have no consistent meaning, are associated with negative historical and cultural assumptions, and can promote misleading stereotypes.4 The term "tribe" became popular as a colonial construct to describe forms of social organization considered "uncivilized" or "primitive." In using the term "tribe" to describe members of medical communities, we ignored the complex and dynamic identities of Native American, African, and other Indigenous Peoples and the history of their oppression.
This is ridiculous, as tribe is a perfectly good word. The authors ended up substituting "silo" for "tribe", but that has a less suitable meaning.

Update: Just today, here is a SciAm article complaining that physicians often note racial info, as it is correlated with an assortment of medical problems:

Yet, a tool used daily by almost every physician, the history of present illness (HPI), may still perpetuate medical racism. ...

Physicians often determine racial and ethnic labels themselves rather than asking patients to self-identify. ...

Beyond the issue of physicians using inaccurate racial labels, research has proven what scholars like W.E.B. Du Bois and Derrick Bell stated for decades: race is a social construct. ...

By using this outdated practice, physicians may be reinforcing the incorrect idea that race differentiation holds scientific value instead of being a clumsy artifact of the profession. ...

But, if physicians are truly trying to discern if patients are carriers of genetic allelic variants ..., then genetic mapping should be used in high-risk patients. ...

To be clear, a “color-blind” approach is not ideal either.

It seems clear that these people will cry racism no matter what the physicians do.

Update: Another example of one-sided politicization:

Well, the latest scientific journal or magazine to go to hell in a handbasket is Scientific American, which under the editorial guidance of Laura Helmuth has published a putrid piece of pure pro-Palestinian propaganda. It's an op-ed piece apparently written by a group of Palestinian BDS activists (one author wishes to be anonymous), purveying the usual distortions, omissions, and outright lies. If there were a counter piece refuting those lies (there is below, but not at Sci Am), it would be somewhat better, but not much. Instead, the op-ed is linked to a Google Document petition (surely not posted by Sci Am) that you can sign in solidarity with Palestine.

First of all, a science magazine has no business taking an ideological stand like this, particularly one replete with lies and distortions. What was Scientific American thinking? Do they fancy themselves to be Mother Jones?

And here is a recent Nature magazine editorial promoting leftist racial nonsense.

Update: From "Meet the Press Daily":

"So if you are trying to get at me as a public health official and a scientist, you're really attacking not only Dr. Anthony Fauci, you're attacking science. And anybody that looks at what is going on, clearly sees that, you have to be asleep not to see that. That is what going on," he added.

"Science and the truth are being attacked," Fauci concluded.

He is the highest paid US govt official, and he certainly needs to be accountable to criticism.

Thursday, June 3, 2021

Einstein book addendum

I wrote an Einstein book several years ago. One of the main arguments was that Einstein does not deserve credit for discovering relativity. The reasons are:

1. All of the important special relativity equations were published by others before Einstein wrote anything on the subject.

2. Einstein's 1905 theory was not seen at the time as being particularly novel or influential.

3. The main concept behind relativity is that spacetime has a non-euclidean geometry. This was published by others, and missed by Einstein.

Historians acknowledge (1), but credit Einstein for some non-mathematical subtlety such as accepting local time, saying the aether was superfluous, or giving a derivation that was not ad hoc. The trouble with these is that what Einstein actually said about local time and the aether was nearly identical to what Lorentz and Poincare said years earlier.

Item (2) is also acknowledged, but not so well known. There were papers written on competing theories, and they referred to the "Lorentz-Einstein theory", as if there were no distinction between the Lorentz and Einstein theories. Einstein tried, but was never able to give a good explanation as to how his theory differed from Lorentz's. Lorentz said that Einstein merely postulated what he and others had deduced from previous theory and experiment. Poincare and Minkowski did explain how their versions of relativity differed from Lorentz's.

As for (3), it is well-known that Minkowski published a non-euclidean geometry treatment of relativity, and that is what caught on with physicists and led to widespread acceptance. Einstein complained that Minkowski had turned the theory into something that he could not recognize. Some assume that Minkowski built on Einstein's ideas, but Lorentz and Poincare were much greater influences, and it is not clear that Minkowski got anything from Einstein.

Even as late as 1910, when someone suggested that Einstein's non-Euclidean geometrical view could avoid a paradox of Lorentzian relativity, Einstein wrote a letter to the journal denying that he had any such view different from Lorentz's. That would have been a great opportunity for Einstein to take credit for a conceptual advance, but he denied it.

In short, here is the paradox. If the Lorentz contraction is applied to a spinning bicycle wheel, the tire contracts while the spoke lengths remain the same. This seems to contradict the Euclidean geometry fact that a circle's circumference is 2π times its radius. Adopting a non-euclidean geometry resolves the paradox.
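Following the post's framing (the numbers are my own), the arithmetic of the wheel paradox is a one-liner:

```python
# A back-of-the-envelope check: the tire contracts by a factor gamma while
# the spokes keep length r, so the measured ratio C/r is 2*pi/gamma, not 2*pi.
import math

r = 1.0      # spoke length (arbitrary units)
v = 0.8      # rim speed as a fraction of the speed of light
gamma = 1.0 / math.sqrt(1.0 - v * v)   # Lorentz factor, here 5/3

circumference = 2 * math.pi * r / gamma   # contracted tire
ratio = circumference / r

print(f"C/r = {ratio:.4f}, Euclidean value 2*pi = {2 * math.pi:.4f}")
# With v = 0.8, C/r is about 3.77, well short of 6.28: Euclidean geometry fails.
```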

Something similar happened in the 1920s, when a general relativity book explained that non-euclidean geometry was the heart of the theory. Einstein published a favorable review of the book, but denied the geometry view.

See also: Einstein did not discover relativity, Einstein book update, and Second Einstein book update

The history of relativity gives the background for the distortions in Physics that came later in the book. Einstein found that he was widely idolized for his supposed genius ability to do non-empirical theorizing. By the late 1920s, he was repudiating his earlier more empirical approach. Dutch physicist Jeroen van Dongen has written a very good new paper on this XX century trend towards non-empirical Physics. He writes:

In the absence of the empirical, Einstein emphasized the merit of his personal epistemological conviction, along with its success as documented in his version of his biography: the epistemic benefit of doing unified field theory was bound up with the virtuous dispositions of his kind of theorist.
This is a polite way of saying that Einstein lied about his life story in order to promote himself and the virtues of his worthless unified field theory research.
For admiration of Einstein as empiricist icon, see e.g. Heisenberg (1989) ; Heisenberg here further recalls his surprise when Einstein explained to him in 1926 that he no longer held empiricist views. In 1927, Heisenberg signaled a difference of opinion regarding the role of `simplicity' and the empirical with Einstein (Heisenberg to Einstein, 10 June 1927, cited on p. 467 in Pais 1982); Einstein himself was well aware of his isolation and the negative judgment of his peers; see Pais (1982), p. 462. See Howard (1994) on the logical empiricists. ...

Dismissal could take a moral tone, for instance when Robert Oppenheimer deemed that Einstein had been "wasting his time." In fact, he had gone "completely cuckoo", Oppenheimer added in private, or, as he put it in public, Einstein had "lost contact with the profession of physics." Clearly, the Einstein of unified field theory was not a proper theorist.

That's right. Respect for Einstein early was based on empirical work. The Nobel Prize was for one his more empirical papers. Then Einstein went non-empirical, and his work was cuckoo.

But a philosophical shift made the non-empirical work more respectable than the empirical. Those logical empiricists were driven out of academia. The Kuhnian paradigm-shifters recast non-empirical work as the true scientific revolutions that everyone admired.

Example of Einstein against empiricism:

In the same letter, Einstein expressed that he was no longer thinking about experiments on the wave and particle properties of light, and that one "will never arrive at a sensible theory in an inductive manner", even if "fundamental experiments" could still be of value - once again deprecating the quantum program's empirical slant.
The history is important because these Kuhnian revolutions never happen. The discoveries of relativity and quantum mechanics in the early XX century were driven by empirical findings.

The patron saints of non-empirical philosophy are Copernicus, Galileo, Einstein, and Kuhn.

The example of Copernicus is particularly apt for today’s discussion. Copernicus proposed his alternative to the fairly successful Ptolemean universe in 1543. Yet, this theoretical proposal was basically beyond any meaningful notion of empirical falsifiability. This situation persisted pretty much until Galileo pointed the newly invented telescope to the heavens and in 1610 observed the phases of Venus.
The phases of Venus were not decisive, and arguments for and against continued until Isaac Newton. Some of the arguments were not fully resolved for centuries.

Kuhn makes a big deal out of this because Copernicus described a "revolution" of the Earth around the Sun, and the theory eventually caught on even tho there was little empirical evidence for it at the time. So he portrayed scientists as a bunch of irrational fad-followers.

In the case of relativity, all of the important early papers referred directly to the Michelson-Morley experiment as the crucial experiment, as well as to other experiments. This was acknowledged by everyone at the time, including Einstein. The view only got revised later, in efforts to credit Einstein and devalue empiricism.

I have posted here many times that I think that the theories of relativity and quantum mechanics could have been anticipated by clever theorists. If you are looking for a locally causal field theory, the math leads directly to relativity and gauge theory. In a way, that is what Maxwell did with electromagnetism.

And once you accept that we needed a wave theory of matter, quantum mechanics is the obvious thing. Nobody knows any better way to even propose such a theory. So these theories could have been developed from pure theory.

Or so it seems in retrospect. It never happened that way.

String theorists would like to tell you that Einstein created relativity out of pure theory, and that this inspired string theorists to do the same today. Forget it. When Einstein shifted to purely theoretical analysis, his work was garbage.

Peter Woit mentions the above paper, and a comment notes that it ends by saying that non-empirical physics like string theory is a Kuhnian paradigm shift, and urging that we “keep funding it as generously as before.”

Tuesday, June 1, 2021

No, this is not Math's Fatal Flaw

A recent YouTube video explains:
This is Math's Fatal Flaw

Not everything that is true can be proven. This discovery transformed infinity, changed the course of a world war and led to the modern computer.

So this discovery was one of the greatest accomplishments of the XX century, and yet it is a "fatal flaw"?

The video is actually pretty good, but I object to all the explanations that say that Mathematics is somehow deficient because the consistency of the axioms cannot be proved from the axioms.

Nobody would ever want the consistency to be provable from the axioms anyway. Such a proof would mean nothing. Inconsistent systems allow such proofs, and nobody wants that.

It would be nice to have an algorithm to determine whether a given math statement is true or false. The above discovery shows that it is not possible. But again, this is not a fatal flaw. It is what makes Math interesting.
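The impossibility claim is Turing's halting theorem, and the standard diagonal argument can be sketched in a few lines of Python (the `halts` oracle here is hypothetical, which is the whole point):

```python
# Sketch of Turing's diagonal argument: suppose halts(f, x) could always
# decide whether f(x) halts. The program `diagonal` below defeats it.
def halts(f, x):
    """Hypothetical perfect halting oracle -- cannot actually exist."""
    raise NotImplementedError

def diagonal(f):
    if halts(f, f):        # if f(f) would halt...
        while True:        # ...then loop forever
            pass
    return "halted"        # otherwise halt immediately

# diagonal(diagonal) halts if and only if it does not halt -- a contradiction,
# so no correct `halts` can exist. Since "program f halts on input x" can be
# coded as an arithmetic statement, no algorithm can decide the truth of all
# arithmetic statements either.
```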

Saturday, May 29, 2021

Burying the Wuhan lab leak hypothesis

Here is another example of scientists politicizing an issue of large public interest.

A Nature/SciAm article reports:

Calls to investigate Chinese laboratories have reached a fever pitch in the United States, as Republican leaders allege that the coronavirus causing the pandemic was leaked from one, and as some scientists argue that this ‘lab leak’ hypothesis requires a thorough, independent inquiry. But for many researchers, the tone of the growing demands is unsettling. ...

Others worry that the rhetoric around an alleged lab leak has grown so toxic that it’s fuelling online bullying of scientists ...

The debate over the lab-leak hypothesis has been rumbling since last year. But it has grown louder in the past month ...

Even if the letter in Science was well intentioned, its authors should have thought more about how it would feed into the divisive political environment surrounding this issue, says Angela Rasmussen, a virologist at the University of Saskatchewan in Saskatoon, Canada. ...

Rasmussen says, “This debate has moved so far from the evidence that I don’t know if we can dial it back.” ...

The United States has since requested that the WHO conduct a "transparent, science-based" phase 2 origins study, and US President Joe Biden announced that he has asked the US intelligence community, in addition to its national labs, to "press China to participate" in an investigation.

Apparently there is a lot of evidence for the Wuhan lab leak hypothesis, but we may never know for sure.

Because Republicans have demanded an investigation, a lot of Democrat scientists have resisted, arguing that no credence should be given to Pres. Trump's lies.

Here is another example. For decades, computer scientists and others have been warning that our elections are insecure and must be fixed. A prominent such article was just published:

Elections must be constructed and conducted such that everyone (all of the winning and losing candidates, as well as those who have supported them) can, with extremely high confidence, rationally believe the results reflect the will of the voters. ... As computer scientists, we must bear responsibility for warning about election vulnerabilities and proposing solutions,
Of course election integrity has been politicized, and they have to disavow being Trump supporters. As a blogger notes:
The letter is well written, but ... They say in the letter:
However, notwithstanding these serious concerns, we have never claimed that technical vulnerabilities have actually been exploited to alter the outcome of any US election.
This seems to be a problem. How do you get people to look both ways when crossing the street, when you have also basically asserted: “No one has ever been hit by a car when crossing the street.”
Just as we don't know the source of the Wuhan virus, we don't know who would have won a properly conducted 2020 election. And we may never know.

Tucker Carlson Tonight on Fox News, the highest-rated cable TV news show, now regularly devotes segments to the politicization of science. Last night he discussed papers being retracted for political reasons, Biden administration appointments of incompetent political hacks to top-level scientific posts, physicians being forced to give harmful medical treatments, physicians fired for accusations of being transphobic, CDC employees not following official advice, and false and alarmist climate science.

Thursday, May 27, 2021

Watering down the Definition of Quantum Supremacy

Scott Aaronson admits:
If your point is just that quantum supremacy claims depend for their credibility on people trying hard to refute them and failing, then I vehemently agree! So you and I should both be glad that this is exactly what’s happening right now.

Regarding your question: no, I would not bet that Google’s Sycamore chip can’t be spoofed by a classical computer in 10 years — or right now, for that matter!

In other words, this Google quantum supremacy means nothing except that Google has made a machine that no one has bothered to simulate yet. And he has no confidence that it cannot be done.

Even if it cannot be done, it is still subject to the teapot supremacy argument.

I thought that quantum supremacy meant demonstrating decisively a computational ability superior to ordinary Turing machine computers. Aaronson seems to think that it just means doing something complicated that critics have not yet matched.

Here is how John Preskill originally used the term:

we hope to hasten the day when well controlled quantum systems can perform tasks surpassing what can be done in the classical world. One way to achieve such "quantum supremacy" would be to run an algorithm on a quantum computer which solves a problem with a super-polynomial speedup relative to classical computers ...

We therefore hope to hasten the onset of the era of quantum supremacy, when we will be able to perform tasks with controlled quantum systems going beyond what can be achieved with ordinary digital computers. ...

I have emphasized the goal of quantum supremacy (super-classical behavior of controllable quantum systems) as the driving force behind the quest for a quantum computer, and the idea of quantum error correction as the basis for our hope that scalable quantum computing will be achievable.

Aaronson is like Dr. Fauci admitting that the coronavirus may have come from the Wuhan lab. He is preparing for research that could be profoundly embarrassing to the entire field.

Update: Gil Kalai responds:

Hi Scott, regarding the analogy between Google’s Sycamore and IBM’s Deep Blue, here are three differences
1. In the Sycamore case, the researchers largely invented a new game to play
2. In the Sycamore case, the researchers themselves also played the part of Kasparov
3. In the Sycamore case, a huge leap compared to previous efforts is claimed
Aaronson previously wrote:
As I’ve said in dozens of my talks, the application of QC that brings me the most joy is refuting Gil Kalai, Leonid Levin, and all the others who said that quantum speedups were impossible in our world.

Monday, May 24, 2021

What makes quantum mechanics mysterious?

Everybody says quantum mechanics is mysterious, but what is the mystery?

Entanglement. A lot of smart physicists have made a big deal out of this, but it is really not mysterious unless combined with some other effect. I recently posted about this.

Double-slit experiment. We get a similar interference pattern for any wave phenomenon. We would expect it for light, even with no QM.

Uncertainty principle. Not too surprising, once you assume a wave nature of matter.

Superposition. Like the half-dead half-alive cat. Just a useful fiction.

Many worlds. This is just a fantasy. You can have the same fantasy about classical theories, if you wish.

Identical particles. It is strange that all electrons are the same. It makes the world very simple, compared to the alternatives.

Discrete energy levels. This is truly one of the nice features of QM, but not usually described as mysterious.

Probabilities. Scott Aaronson says the essence is negative probabilities. I say there is no such thing. QM has regular probabilities, and so does every other theory.

Nonlocality. QM has no true nonlocality, in the sense of something done in one place causing an action at a distance elsewhere.

Canceling infinities. These are mostly artifacts of extreme assumptions, such as having mass and charge concentrated at a point, which gives an infinite density.

Linearity. Some other theories are linear, such as Maxwell's equations.

Super-Turing computation. This would be interesting, if proved. Some say it has been. I don't believe it.

Lack of local hidden variables. Bell made a big deal out of this, but nobody expected local hidden variables anyway.

No counterfactual definiteness. This is closely related to the lack of hidden variables.

So what makes quantum mechanics different from classical mechanics? I say that it is the non-commuting observables and positivist outlook. The above stuff is just not that mysterious.
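A minimal illustration of those non-commuting observables, using the standard Pauli spin matrices (my example, not from the post):

```python
# Quantum observables are operators and generally do not commute; classical
# observables (plain functions of the state) always do. The Pauli matrices
# sigma_x and sigma_z are the standard example.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

commutator = sigma_x @ sigma_z - sigma_z @ sigma_x
assert not np.allclose(commutator, 0)   # nonzero: measurement order matters
```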

Thursday, May 20, 2021

Google promises to make a Logical Qubit

In my skepticism about quantum computing, I have occasionally remarked that all the claims about research machines with dozens of qubits are exaggerated, because they still have not made a true qubit. Here is an explanation of that.

Google announces at its annual I/O conference:

Within the decade, Google aims to build a useful, error-corrected quantum computer. ...

To begin our journey, today we’re unveiling our new Quantum AI campus in Santa Barbara, California. ...

[wild futuristic hype snipped]

To reach this goal, we’re on a journey to build 1,000,000 physical qubits that work in concert inside a room-sized error-corrected quantum computer. That’s a big leap from today’s modestly-sized systems of fewer than 100 qubits.

To get there, we must build the world’s first “quantum transistor” — two error-corrected “logical qubits” performing quantum operations together — and then figure out how to tile hundreds to thousands of them to form the error-corrected quantum computer. That will take years.

To get there, we need to show we can encode one logical qubit — with 1,000 physical qubits.

Got that? Google claims that it has achieved quantum supremacy, but it is still a long way from building a system with even one logical qubit.

Monday, May 17, 2021

Rethinking entanglement of a single particle

Dr. Bee has caused me to rethink entanglement, and reader Ajit sends a paper on Entanglement isn't just for spin
Quantum entanglement occurs not just in discrete systems such as spins, but also in the spatial wave functions of systems with more than one degree of freedom.
It is sometimes said that Einstein discovered entanglement in 1935, and it was immediately recognized as the central defining feature of quantum mechanics. But as the above paper notes, the word was not in common use until about 1987, and did not find its way into textbooks until after that.

As the article explains, entanglement is not some peculiarity of tricky spin experiments. It is a property of all quantum systems.
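One way to see this concretely (a sketch with my own toy wavefunctions, not taken from the paper): discretize a two-particle wavefunction ψ(x1, x2) as a matrix; a product state f(x1)g(x2) has exactly one Schmidt coefficient, while anything non-factorable has more, i.e. it is entangled.

```python
# Toy numerical test of spatial entanglement: count the Schmidt coefficients
# of a discretized two-particle wavefunction via SVD.
import numpy as np

x = np.linspace(-5, 5, 200)
X1, X2 = np.meshgrid(x, x, indexing="ij")

def schmidt_rank(psi, tol=1e-10):
    """Number of singular values above tol (relative to the largest)."""
    s = np.linalg.svd(psi, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

product = np.exp(-X1**2) * np.exp(-2 * X2**2)          # f(x1) * g(x2)
entangled = np.exp(-(X1 - X2)**2 - (X1 + X2)**2 / 4)   # has an x1*x2 cross term

assert schmidt_rank(product) == 1      # unentangled: a single Schmidt term
assert schmidt_rank(entangled) > 1     # entangled: several Schmidt terms
```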

Entanglement is explained as the thing that makes quantum mechanics nonlocal, and hence the essence of why the theory is non-classical and mysterious.

Paul Dirac once said:

Quantum-mechanically, an interference pattern occurs due to quantum interference of the wavefunction of a photon. The wavefunction of a single photon only interferes with itself. Different photons (for example from different atoms) do not interfere.
This is not an exact quote, but he said something similar.

This is a confusing statement, and I would not take it too literally. But in a similar spirit, I would say that a quantum particle can be entangled with itself.

Entanglement is often introduced by describing creation of a pair of particles with equal and opposite spins. But it is much more common. In any atom with several orbital electrons, those electrons are entangled. Nearby particles usually are. The case of the equal and opposite pair is interesting because that gives distant entanglement, but nearby entanglement occurs all the time.

Consider a stream of particles being fired into a double slit. Each particle is interfering with itself, and is entangled with itself. The interference results in the interference pattern on the screen.

The entanglement results in each particle hitting the screen exactly once. If you purely followed the probabilities, there are many places on the screen where the particle might hit. Those possibilities are entangled. If the particle is detected in one spot, it will not be detected in any other.

You cannot understand the experiment as localized probabilities in each spot of the screen.

Viewed this way, I am not sure the 2-particle entanglement story is any more mysterious than the 1-particle story. Maybe explanations of entanglement should just stick to the 1-particle story, as the essence of the matter.

Update: Reader Ajit suggests that I am confusing entanglement with superposition. Let me explain further. Consider the double-slit experiment with electrons being fired thru a double-slit to a screen, and the screen is divided into ten regions.

Shortly before an electron hits the screen, there is an electron-possibility-thing that is about to hit each of the ten regions. Assuming locality, these electron-possibility-things cannot interact with each other. Each one causes an electron-screen detection event to be recorded, or disappears. These electron-possibility-things must be entangled, because each group of ten results in exactly one event, and the other nine disappear. There is a correlation that is hard to explain locally, as seeing what happens to one electron-possibility-thing tells you something about what will happen to the others.

You might object that the double-slit phenomenon is observed classically with waves, and we don't call it entanglement. I say that when a single electron is fired, that electron is entangled with itself. The observed interference pattern is the result.
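The ten-region picture can be put into a toy simulation (the cos² pattern and the region count are my illustrative assumptions, not a real experiment):

```python
# Toy simulation: ten screen regions with interference-style detection
# probabilities. Each electron produces exactly one detection event.
import numpy as np

rng = np.random.default_rng(0)

theta = np.linspace(-1, 1, 10)            # screen region positions
intensity = np.cos(4 * theta) ** 2        # assumed interference intensity
p = intensity / intensity.sum()           # detection probability per region

n = 100_000
hits = rng.choice(10, size=n, p=p)        # one region per electron
counts = np.bincount(hits, minlength=10)

assert counts.sum() == n                  # every electron detected exactly once
assert np.allclose(counts / n, p, atol=0.01)   # histogram matches the pattern
```

Each sampled electron lands in one and only one region; the correlation between the regions (one event, nine non-events) is what the update calls entanglement of the electron with itself.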

Tuesday, May 11, 2021

President Joe Biden is Politicizing Science

Lawrence Krauss has a WSJ article attacking Pres. Biden for politicizing science.
The New Scientific Method: Identity Politics
The National Academy of Sciences fights bias by explicitly introducing more of it.
Lubos Motl praises the article.

In particular there is now an aggressive affirmative action program at the National Academy of Sciences, where less competent women and Blacks are being appointed in order to meet diversity quotas.

Biden's White House Coronavirus Response Coordinator is a Democrat political hack named Jeffrey Zients. Donald Trump had an immunology expert in that job.

The authorities are still not telling us the truth about the virus. See this article by a NY Times science reporter on evidence it came from the Wuhan Institute of Virology.

Update: From a CDC official in a press conference, as reported in the NY Times:

DR. WALENSKY: … There’s increasing data that suggests that most of transmission is happening indoors rather than outdoors; less than 10 percent of documented transmission, in many studies, have occurred outdoors.
The paper goes on to explain that the true number is more like 0.1%. Yes, that is less than 10 percent, but the statement appears to be a distortion intended to justify outdoor mask requirements.

Monday, May 10, 2021

Quantum wavefunction is not everything

Reader Ajit argues that I am not following textbook quantum mechanics properly. He has posted Postulates of quantum mechanics, as stated by various authors.

Checking other versions of the postulates, I find:

The state of a system is completely described by a wavefunction

Associated with any particle moving in a conservative field of force is a wave function which determines everything that can be known about the system.

I wonder why this would be stated as a postulate. It is not used by the theory anywhere, and it is not true.

Sometimes it is stated for a single particle, but it cannot be true if the particle is entangled with another. Sometimes it is stated for scalar wave functions, but that cannot be true if the particle has spin.

You can correct those problems by introducing spinor-valued wave functions of several variables, but then you are still ignoring quantum fields and all sorts of other complexities.

Now you might say: Okay, but if we use the whole Standard Model, or some bigger unified field theory that takes into account all possible interactions, and then construct a wave function of the universe, then that would completely describe the state of the universe.

That would not be quantum mechanics. That would be some theorist's fantasy that has never been carried out.

Quantum mechanics is a theory that takes in some available info, and makes some predictions, but never achieves a complete description of the system. Nobody has any idea how any such complete description would ever be accomplished.

Take a simple example: the Schroedinger Cat. The wavefunction is a superposition of dead and alive states. Is it a complete description of the state of the system? No, of course not. The cat is either dead or alive. You can get a more complete description by opening the box and looking to see whether the cat is dead. The wavefunction is most emphatically not giving a complete description.

I don't know why anyone would say that the wavefunction is a complete description of the system. Other physics theories do not start off with a postulate declaring some sort of god-like omniscience. It doesn't make sense to even say something like that.

And yet this postulate is prominent on various lists of postulates for quantum mechanics. I will have to do some further research to find out who is responsible for this silly idea.

This week's Dr. Bee video is on Einstein's spooky action at a distance. She says that the spookiness is the measurement update (ie, collapse of the wavefunction), not entanglement.

Believing that the wave function is a complete description necessarily leads to these spooky concerns. Any observation affects distant parts of the wavefunction. If the wavefunction is a complete physical thing, then it is spooky.

Monday, May 3, 2021

Does Quantum AI have Free Will?

A new paper argues that a quantum computer could be conscious, and have free will.

Since I am skeptical that quantum computers will ever achieve quantum supremacy, you probably think that I dismiss this as nuts. Actually, I don't.

Turing machines are deterministic, and do not have free will. But I believe humans do. The mechanism is not understood, and may involve quantum mechanics. So maybe a quantum computer can do that, even if it cannot factor large numbers.

The London Guardian has a good essay on the arguments about free will. It says:

Harris argues that if we fully grasped the case against free will, it would be difficult to hate other people: how can you hate someone you don’t blame for their actions? Yet love would survive largely unscathed, ...

I personally can’t claim to find the case against free will ultimately persuasive; it’s just at odds with too much else that seems obviously true about life.

I agree with that last sentence. A lot of intellectuals reject free will, but in the process they also reject a lot of things that seem obviously true.

I do not agree with the love/hate analysis. If I believe that someone has no free will, and is merely a robot preprogrammed to do evil things, then sure, that is a good reason to hate him. He would be a sub-human evil nuisance. A puppet of the devil. As for love, try telling your wife that you only love her because the chemicals in your body have created that illusion. Some psychologists say that, and I don't think it helps.

The article says that philosophers have gotten death threats over such issues.

Jerry Coyne endorses most of the essay, but argues:

Contracausal free will is the bedrock of Abrahamic religions, which of course have many adherents.
No. Islam doesn't accept free will. Moslems are always talking about God's will being carried out, as if no one can do anything about it. Jews have mixed views. Catholics believe strongly in free will, and so do many Protestants, but some, such as Calvinists, do not.

A previous Guardian essay by historian Yuval Noah Harari said:
Unfortunately, “free will” isn’t a scientific reality. It is a myth inherited from Christian theology. Theologians developed the idea of “free will” to explain why God is right to punish sinners for their bad choices and reward saints for their good choices. If our choices aren’t made freely, why should God punish or reward us for them? ...

You cannot decide what desires you have. You don’t decide to be introvert or extrovert, easy-going or anxious, gay or straight. Humans make choices – but they are never independent choices. ...

But now the belief in “free will” suddenly becomes dangerous. If governments and corporations succeed in hacking the human animal, the easiest people to manipulate will be those who believe in free will.

He is a gay Israeli Jewish atheist. Perhaps he is a slave to his programming, but others are not.

Theologians did not invent free will. The Gospels use phrases like "go and sin no more". This assumes that you can choose to sin, or not sin.

You can decide to be an introvert or extrovert. Change is not easy, but people do it.

No, the easiest to manipulate are those who think that they are already slaves.

Coyne argues:

Free will skepticism (sometimes called “hard determinism”). As you must know, this is the view to which I adhere. Though it’s often called “determinism”, with the implication that the laws of physics have already determined the entire future of the universe, including what you will do, that’s not my view. There is, if quantum mechanics be right, a fundamental form of indeterminism that is unpredictable, like when a given atom in a radioactive compound will decay. It’s unclear to what extent this fundamental unpredictability affects our actions or their predictability, but I’m sure it’s played some role in evolution (via mutation) or in the Big Bang (as Sean Carroll tells me). Thus I prefer to use the term “naturalism” rather than “determinism.” But, at any rate, fundamental quantum unpredictability cannot give us free will, for it has nothing to do with either “will” or “freedom”.
I call this argument: Only God has Free Will.

Coyne is an atheist, but he seems to believe in some sort of Spinoza God. Humans have no freedom or free will. We are just puppets being controlled. God is not a predictable robot, and can make choices for us and the world. God even guides evolution of biological species by directing mutations.

The phrase "fundamental quantum unpredictability" means that the human observer can only predict probabilities. It always leaves open the possibility that someone with more info could make a better prediction. If Coyne wants to believe that it is some sort of God making all our choices for us, I guess that possibility is allowed.

For example, a quantum mechanics textbook might say that a uranium atom has a certain probability of radioactive decay in the next hour. And maybe that is all that can be said with the info available. But nowhere will it say that it is impossible to make a better prediction, if the state of the atom could be more precisely determined. As a practical matter, it is hopeless to get the wavefunctions of all the quarks in a uranium nucleus, but the point remains that better info might give a better prediction.
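To make the textbook statement concrete, here is a sketch of the standard exponential-decay probability, 1 − e^(−λt), for a single atom over one hour. The isotope choice is illustrative; the U-238 half-life used is the commonly quoted figure of about 4.47 billion years:

```python
import math

# U-238 half-life, commonly quoted as about 4.468e9 years.
half_life_years = 4.468e9
hours_per_year = 365.25 * 24

# Decay constant per hour: lambda = ln(2) / half-life.
lam_per_hour = math.log(2) / (half_life_years * hours_per_year)

# Probability that this one atom decays in the next hour.
t = 1.0
p_decay = 1 - math.exp(-lam_per_hour * t)
print(p_decay)  # astronomically small, on the order of 1e-14
```

This is the most the textbook will say: a probability conditioned on the available info. Nothing in the formula rules out a better prediction from a more detailed state of the nucleus.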

In my opinion, attributing all the decisions in the world to a Spinoza God is contrary to common sense and experience, and does not really solve anything. It is like a turtle argument that atheists like to mock. In fact, I worry about the mental health of anyone who believes that, as it is similar to schizophrenics who say that they are obeying voices in their heads.

Thursday, April 29, 2021

Feynman quote on Leftist Groupthink

...There was a special dinner at some point, and the head of the theology place, a very nice, very Jewish man, gave a speech. It was a good speech, and he was a very good speaker, so while it sounds crazy now, when I’m telling about it, at that time his main idea sounded completely obvious and true. He talked about the big differences in the welfare of various countries, which cause jealousy, which leads to conflict, and now that we have atomic weapons, any war and we’re doomed, so therefore the right way out is to strive for peace by making sure there are no great differences from place to place, and since we have so much in the United States, we should give up nearly everything to the other countries until we’re all even. Everybody was listening to this, and we were all full of sacrificial feeling, and all thinking we ought to do this. But I came back to my senses on the way home. The next day one of the guys in our group said, “I think that speech last night was so good that we should all endorse it, and it should be the summary of our conference.” I started to say that the idea of distributing everything evenly is based on a theory that there’s only X amount of stuff in the world, that somehow we took it away from the poorer countries in the first place, and therefore we should give it back to them. But this theory doesn’t take into account the real reason for the differences between countries—that is, the development of new techniques for growing food, the development of machinery to grow food and to do other things, and the fact that all this machinery requires the concentration of capital. It isn’t the stuff, but the power to make the stuff, that is important. But I realize now that these people were not in science; they didn’t understand it. They didn’t understand technology; they didn’t understand their time. The conference made me so nervous that a girl I knew in New York had to calm me down. “Look,” she said, “you’re shaking!..."

Thursday, April 22, 2021

New book on Free Will Debate

Philosophers Dan Dennett and Gregg Caruso held a 2018 debate on free will, and now they have expanded it into a book. From a review, they agree more than they disagree:
Both are naturalists (JD p.171) who see no supernatural interference in the workings of the world. That leaves both men accepting general determinism in the universe (JD p.33), which simply means all events and behaviours have prior causes. Therefore, the libertarian version of free will is out. Any hope that humans can generate an uncaused action is deemed a “non-starter” by Gregg (JD p.41) and “panicky metaphysics” by Dan (JD p.53). Nonetheless, both agree that “determinism does not prevent you from making choices” (JD p.36), and some of those choices are hotly debated because of “the importance of morality” (JD p.104). Laws are written to define which choices are criminal offenses. But both acknowledge that “criminal behaviour is often the result of social determinants” (JD p.110) and “among human beings, many are extremely unlucky in their initial circumstances, to say nothing of the plights that befall them later in life” (JD p.111). Therefore “our current system of punishment is obscenely cruel and unjust” (JD p.113), and both share “concern for social justice and attention to the well-being of criminals” (JD p.131).
Their hair-splitting philosophical differences are not that interesting. What interests me is how they could both have such a screwed-up view of life, and still think that they are on opposite sides of a big issue.

Caruso says we have no free will. Dennett says that we think that we do, and it is useful to maintain the illusion, but it is not real.

Without free will, there is no consciousness. Our systems of law, ethics, morality, and politics depend on free will. Christianity is based on it. So is the scientific method. It is hard to imagine how modern civilization could even exist without free will.

These philosophers discard it all based on a belief that all events have prior causes.

When a uranium nucleus decays, is it determined by prior causes? Our best scientists cannot answer this question. But somehow these philosophers can get the answer by playing silly word games? No, it is all nonsense.

Monday, April 19, 2021

Trans Ideology and the New Ptolemaism

The social sciences often make cosmological analogies, and screw them up so badly that I cannot even tell what point they are making.

Here is a new scholarly paper on an academic dispute:

Trans Ideology and the New Ptolemaism in the Academy ...

Ptolemy constructed an inordinately complex model of the universe in order to make all of the empirical data conform to a central, organizing false assumption, namely, that the earth was at the center.

Foucault’s influence in the academy is at least as often lamented as celebrated, and I will not attempt in what follows a comprehensive critique of his work. Instead, I will focus on one tendency his example has encouraged, which, using Rockhill’s analogy, I will call the “new Ptolemaism.” This is a push for scholarship to be insistently insular and to be much less interested in the study of the world than in the study of the study of the world. This kind of work, which is by now very common in the social sciences and humanities, performs the same neat trick every time. It turns out, in every such analysis, that the framing of inquiry turns out to be more significant than the object of inquiry. ... ...

Consider present day calls to remake the academy. There should be more soft sociology of the hard sciences; there should be more women in male dominated disciplines; we should “indigenize” the university. There are two terms in each case; we should reverse the conventional hierarchy of those terms; and the results will be profoundly liberatory, because, Ptolemaically, the university rather than the world is the most important locus of struggle. ...

Gender critical feminists like me notice, of course, that one infinitely more often sees and hears the slogan “transwomen are women” than its counterpart “transmen are men.” To understand why this is the case, you’d have to pay attention to patterns of power in the world rather than to Ptolemaic valence-flipping. One of the signs on my office door that most infuriated feminist academic women colleagues on social media described the parallels between men’s rights activism and trans rights activism. Many feminist academic women clearly saw it as their moral and intellectual duty to decry this assertion.

The Foucault here has nothing to do with the Foucault pendulum, which helped prove Ptolemy wrong about the motion of the Earth. No, it is a French post-modernist and pedophile rapist.

The author might have some valid points about feminism and trans ideology, but the Ptolemy stuff is nonsense, and the Foucault stuff probably is also.

Ptolemy did not construct an inordinately complex model of the universe. It was not any more complicated than any other model achieving similar accuracy. He did assume that the Earth was at the center, but the model is not really any different or more complicated because of that. He described the stars, Sun, planets, and Moon as seen from Earth, so he would have to include the calculations needed for that whether the Earth moved or not. It was not really a false assumption that the Earth was at the center, but a way of defining an Earth-centered coordinate system, which is a completely legitimate way of recording observations.

The motion of the Earth was one of the great scientific issues in the history of mankind, but it is nearly always misrepresented.

This Babylon Bee parody is a lot more entertaining on the subject. To understand it, it helps to have seen the November 3, 1961 episode of The Twilight Zone.

Monday, April 12, 2021

Israeli prize nominee is quantum skeptic

Scott Aaronson writes:
Oded Goldreich is a theoretical computer scientist at the Weizmann Institute in Rehovot, Israel. He’s best known for helping to lay the rigorous foundations of cryptography in the 1980s, ... Since then, I’ve interacted with Oded from time to time, partly around his firm belief that quantum computing is impossible.

Last month a committee in Israel voted to award Goldreich the Israel Prize (roughly analogous to the US National Medal of Science), for which I’d say Goldreich had been a plausible candidate for decades. But alas, Yoav Gallant, Netanyahu’s Education Minister, then rather non-gallantly blocked the award, solely because he objected to Goldreich’s far-left political views (and apparently because of various statements Goldreich signed, including in support of a boycott of Ariel University, which is in the West Bank). ...

[Nick] Is there any kind of correlation between leftist political views and QC skepticism?

Nick #33: I can’t say I’ve noticed any such correlation. On the other hand, maybe not surprisingly, I have noticed a strong correlation between QC skepticism and just general contrarianism, about politics, climate science, high-energy physics, or whatever else.

Some people just don't go along with the program for what everyone is supposed to believe, I guess.

Most of Aaronson's post and comments have to do with whether professors should be denied academic prizes because of their political opinions. This is how far we have gone. No bright young ambitious academic researcher expresses a politically incorrect opinion anymore.

Tuesday, April 6, 2021

Philosophers try to discredit Realism

John Horgan writes in SciAm:
Although my realism has been wobbling lately, I remain a realist. ...

Filmmaker Errol Morris, who studied under Kuhn in the 1970s and ended up loathing him, contends that Kuhnian-style postmodernism makes it easier for politicians and other powerful figures to lie. Philosopher Timothy Williamson makes a similar point in “In defence of realism.” “Imagine a future,” Williamson writes, “where a dictator or would-be dictator, accused of spreading falsehoods, can reply: ‘You are relying on obsolescent realist ideas of truth and falsity; realism has been discredited in philosophy.’”

I agree with that, but I am afraid it is a losing battle.

Not only are philosophers denying realism; physicists, increasingly, are too. And even those who agree with me on interpretations of quantum mechanics have conceded the term realism. That is, they will say that Copenhagen is not a realist interpretation, because we cannot say what the electron's position and momentum simultaneously are.

Thursday, April 1, 2021

Good videos about Quantum Mechanics

I have criticized popular accounts of quantum mechanics, but they are not all bad.

Lubos Motl praises a series of 3 elementary videos.

I also recommend Quantum Mechanics Isn’t Weird, We’re Just Too Big | Qiskit Seminar Series with Philip Ball. Ball is a well-known science writer.

I am sure that there are many others. There have been good textbooks since 1930. Just be wary of anything talking about cats, parallel universes, and nonlocality.

There are lots of good videos on relativity, but I have a quibble with this one on general relativity mishaps. Most of it is about distinguishing the time dilation from velocity, which it calls special relativity, from the time dilation from gravity, which it calls general relativity.

He says that if the GPS satellites were at the right height, the effects would cancel out.

All of that is correct, except that both time dilations are part of what used to be called special relativity. You don't need any metric geometry, and Einstein derived the gravitational time dilation from special relativity alone.

Some people say that special relativity is just about constant velocities (ie, uniform motion), but it was applied to accelerating objects from the very start. The GPS satellite is just an accelerating object. So is the ground receiver, if you figure in the acceleration of gravity.

Monday, March 29, 2021

Civilization was inspired by Astronomy

Thought experiment:
Imagine that over the last 11 thousand years (that is, the period of stable climate following upon the last ice age which allowed the human civilisation to develop) the atmospheric conditions on Earth were different: the skies were always covered, even in the absence of clouds, by a very light haze, not preventing the development of agriculture, but obscuring the stars and turning the sun and the moon into amorphous light spots. Would mathematics have had a chance to develop beyond basic arithmetic and geometry sufficient for measuring fields and keeping records of harvest? I doubt that. Civilisations which developed serious mathematics also had serious astronomy (it was an equivalent of our theoretical physics). But I claim even more: the movement of stars in the sky was the paradigm of precision and reproducibility, the two characteristic features of mathematics. Where else could humans learn the concept of absolute precision?
I agree with that. Without astronomy, we would not have much civilization today.

Thursday, March 25, 2021

Trusting CDC to be science-based

From a Time magazine article, a year ago:
As the new coronavirus COVID-19 spreads in the U.S., people who are well want to stay that way. But since no vaccines are currently available, the strongest weapons Americans have are basic preventive measures like hand-washing and sanitizing surfaces, according to the Centers for Disease Control and Prevention (CDC).

The simplicity of those recommendations is likely unsettling to people anxious to do more to protect themselves, so it’s no surprise that face masks are in short supply—despite the CDC specifically not recommending them for healthy people trying to protect against COVID-19. “It seems kind of intuitively obvious that if you put something — whether it’s a scarf or a mask — in front of your nose and mouth, that will filter out some of these viruses that are floating around out there,” says Dr. William Schaffner, professor of medicine in the division of infectious diseases at Vanderbilt University. The only problem: that’s not effective against respiratory illnesses like the flu and COVID-19. If it were, “the CDC would have recommended it years ago,” he says. “It doesn’t, because it makes science-based recommendations.”

The science, according to the CDC, says that surgical masks won’t stop the wearer from inhaling small airborne particles, which can cause infection.

Remember this, next time you are told to trust the experts at the CDC.

Maybe the CDC was right that the masks were useless. The evidence for masks is dubious. I am not sure myself. But the official CDC recommendations don't seem to be any better than common-sense judgments from the average person.