Thursday, September 20, 2018

Mermin defends Copenhagen Interpretation

N. David Mermin has written many popular essays explaining quantum mechanics, and now he summarizes his views on how to interpret the theory in Making better sense of quantum mechanics.

He prefers something called QBism, but nearly everything he says could be considered a defense of the Copenhagen interpretation.
Much of the ambiguity and confusion at the foundations of quantum mechanics stems from an almost universal refusal to recognize that individual personal experience is at the foundation of the story each of us tells about the world. Orthodox ("Copenhagen") thinking about quantum foundations overlooks this central role of private personal experience, seeking to replace it by impersonal features of a common "classical" external world.
He is drawing a fairly trivial distinction between his QBism view and Copenhagen. He illustrates with this famous (but possibly paraphrased) Bohr quote:
When asked whether the algorithm of quantum mechanics could be considered as somehow mirroring an underlying quantum world, Bohr would answer "There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature."
Mermin's only quibble with this is that he prefers "each of us can say" to "we can say". That is, he doesn't like the way Bohr lumps together everyone's observations and calls it the classical world.

Okay, I guess that distinction makes a difference when discussing Wigner's Friend, a thought experiment where one observer watches another. But for the most part, Mermin likes the Copenhagen interpretation, and successfully rebuts those who say that the interpretation is deficient somehow.

Monday, September 17, 2018

Correcting errors about EPR paradox

Blake C. Stacey writes about the Einstein–Podolsky–Rosen paradox:
Misreading EPR: Variations on an Incorrect Theme

Notwithstanding its great influence in modern physics, the EPR thought-experiment has been explained incorrectly a surprising number of times.
He then gives examples of famous authors who get EPR wrong.

He gets to the heart of the Bohr-Einstein dispute:
EPR write, near the end of their paper, "[O]ne would not arrive at our conclusion if one insisted that two or more physical quantities can be regarded as simultaneous elements of reality only when they can be simultaneously measured or predicted."

The response that Bohr could have made: "Yes."

EPR briefly considered the implications of this idea and then dismissed it with the remark, "No reasonable definition of reality could be expected to permit this."

But that is exactly what Bohr did. A possible reply in the Bohrian vein: "Could a `reasonable definition of reality' permit so basic a fact as the simultaneity of two events to be dependent on the observer's frame of reference? Many notions familiar from everyday life only become well-defined in relativity theory once we fix a Lorentz frame. Likewise, many statements in quantum theory only become well-defined once we have given a complete description of the experimental apparatus and its arrangement."

This is not a quote from anywhere in Bohr's writings, but it is fairly in the tradition of his Warsaw lecture, where he put considerable emphasis on what he felt to be "deepgoing analogies" between quantum theory and relativity.
In spite of all differences in the physical problems concerned, relativity theory and quantum theory possess striking similarities in a purely logical aspect. In both cases we are confronted with novel aspects of the observational problem, involving a revision of customary ideas of physical reality, and originating in the recognition of general laws of nature which do not directly affect practical experience. The impossibility of an unambiguous separation between space and time without reference to the observer, and the impossibility of a sharp separation between the behavior of objects and their interaction with the means of observation are, in fact, straightforward consequences of the existence of a maximum velocity of propagation of all actions and of a minimum quantity of any action, respectively.
This is well put. The aim of EPR is to explain a simple example of entangled particles, and to argue that no reasonable definition of reality would permit two observables that cannot be simultaneously measured.

And yet that is a core teaching of quantum mechanics, from about 10 years earlier. Two non-commuting observables cannot be simultaneously measured precisely. That is the Heisenberg uncertainty principle.
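As a reminder of how this works, the general statement is the Robertson uncertainty relation: for any two observables, the product of the uncertainties is bounded below by the expectation of their commutator.

```latex
% Robertson uncertainty relation for any two observables A and B
\Delta A \,\Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr|
% For position and momentum, [x,p] = i\hbar, which gives Heisenberg's
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

So non-commuting observables cannot both have sharp values in the same state; that is the mathematical content behind the remark above.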

Theories that assign definite simultaneous values to observables are called hidden variable theories. All the reasonable ones have been ruled out by the Bell test experiments.

Complaining that the uncertainty principle violates pre-conceptions about reality is like complaining that relativity violates pre-conceptions about simultaneity. Of course it does. Get with the program.

There are crackpots who reject relativity because of the Twin Paradox, or some other such surprising effect. The physics community treats them as crackpots. And yet the community tolerates those who get excited by EPR, even tho EPR makes essentially the same mistake.

Saying "maximum velocity of propagation" is a way of saying the core of relativity theory, and saying "minimum quantity of any action" is a way of saying the core of quantum mechanics. The minimum is Planck's constant h, or h-bar. The Heisenberg uncertainties are proportional to this constant. That minimum makes it impossible to precisely measure position and momentum simultaneously, just as the finite speed of light makes it impossible to keep clocks simultaneous.

Thursday, September 13, 2018

Joint Hubble Lemaitre credit is a bad idea

I mentioned renaming the Hubble Law, as a way to correct history, but it appears that they have made the matter worse.

The respected science historian Helge Kragh writes:
The Hubble law, widely considered the first observational basis for the expansion of the universe, may in the future be known as the Hubble-Lemaître law. This is what the General Assembly of the International Astronomical Union recommended at its recent meeting in Vienna. However, the resolution in favour of a renamed law is problematic in so far as concerns its arguments based on the history of cosmology in the relevant period from about 1927 to the early 1930s. A critical examination of the resolution reveals flaws of a non-trivial nature. The purpose of this note is to highlight these problems and to provide a better historically informed background for the voting among the union's members, which in a few months' time will result in either a confirmation or a rejection of the decision made by the General Assembly.
He notes:
Until the mid-1940s no astronomer or physicist seems to have clearly identified Hubble as the discoverer of the cosmic expansion. Indeed, when Hubble went into his grave in 1953 he was happily unaware that he had discovered the expansion of the universe.
He says the cited evidence that Hubble met with Lemaitre is wrong. Furthermore, there are really two discoveries being confused -- the cosmic expansion and the empirical redshift-distance law. Hubble had a role in the latter, but not the former.

Monday, September 10, 2018

Where exactly does probability enter the theory?

Peter Woit writes:
A central question of the interpretation of quantum mechanics is that of “where exactly does probability enter the theory?”. The simple question that has been bothering me is that of why one can’t just take as answer the same place as in the classical theory: in one’s lack of precise knowledge about the initial state.
Lee Smolin says he is writing a book, and there are 3 options: (1) orthodox quantum mechanics, (2) many-worlds, (3) hidden variable theories, like pilot waves. All attempts at (2) have failed, so he says "My personal view is that option 3) is the only way forward for physics."

This is a pretty crazy opinion. No one has been able to make sense out of probabilities in a many-worlds theory, and Bell test experiments have ruled out all sensible hidden variable theories.

Lubos Motl posts a rant against them, as usual:
Quantum mechanics was born 93 years ago but it's still normal for people who essentially or literally claim to be theoretical physicists to admit that they misunderstand even the most basic questions about the field. As a kid, I was shocked that people could have doubted heliocentrism and other things pretty much a century after these things were convincingly justified. But in recent years, I saw it would be totally unfair to dismiss those folks as medieval morons. The "modern morons" (or perhaps "postmodern morons") keep on overlooking and denying the basic scientific discoveries for a century, too! And this centennial delay is arguably more embarrassing today because there exist faster tools to spread the knowledge than the tools in the Middle Ages.
Lumo is mostly right, but it is possible to blame uncertainties on lack of knowledge of the initial state. It is theoretically possible that if you had perfect knowledge about a radioactive nucleus, then you would know when it would decay.

However it is also true that measurements are not going to give you that knowledge, based on what we know about quantum mechanics. This is what makes determinism more of a philosophical question than a scientific one.

I agree with Lumo that deriving the Born rule is silly. The Born rule is part of quantum theory. Deriving it from something equivalent might please some theorists, but really is just a mathematical exercise with no scientific significance.

This question about the origin of probabilities only makes sense to those who view probability as the essential thing that makes quantum mechanics different from classical mechanics. I do not have that view. Probabilities enter into all of science. It is hard to imagine any scientific theory that can be tested without some resort to a probabilistic analysis. So I don't think that the appearance of probability requires any special explanation. How else would any theory work?

It is very strange that respectable physicists can have such bizarre views about things that were settled about a century ago. I agree with Lumo about that.

Saturday, September 8, 2018

Another claim for QC real soon

Latest quantum computer hype:
Today the [Berkeley-based startup Rigetti] launched a project in the mold of Amazon Web Services (AWS) called Quantum Cloud Services. "What this platform achieves for the very first time is an integrated computing system that is the first quantum cloud services architecture," says Chad Rigetti, founder and CEO of his namesake company. The dozen initial users Rigetti has announced include biotech and chemistry companies harnessing quantum technology to study complex molecules in order to develop new drugs. The particular operations that the quantum end of the system can do, while still limited and error-prone, are nearly good enough to boost the performance of traditional computers beyond what they could do on their own -- a coming milestone called quantum advantage. "My guess is this could happen anytime from six to 36 months out," says Rigetti.
My guess is that their investors said that they require results in 6 to 36 months.

There is no chance that this company will have any success before the funding runs out.

Tuesday, September 4, 2018

Vote to rename law to Hubble-Lemaitre Law

Astronomers have long credited Hubble for discovering the expansion of the universe, even tho he had little to do with it.

If they can decide that Pluto is not a planet, then they can correct this error. Now they will vote on it:
Astronomers are engaged in a lively debate over plans to rename one of the laws of physics.

It emerged overnight at the 30th Meeting of the International Astronomical Union (IAU), in Vienna, where members of the general assembly considered a resolution on amending the name of the Hubble Law to the Hubble-Lemaître Law.

The resolution aims to credit the work of the Belgian astronomer Georges Lemaître and his contribution—along with the American astronomer Edwin Hubble — to our understanding of the expansion of the universe.

While most (but not all) members at the meeting were in favor of the resolution, a decision allowed all members of the International Astronomical Union a chance to vote. Subsequently, voting was downgraded to a straw vote and the resolution will formally be voted on by an electronic vote at a later date.
As the article explains, the Belgian Catholic priest published both the theory and the experimental evidence for it, before Hubble had a clue. Hubble did later publish some data confirming Lemaitre's paper, as he had a better telescope, but his data was still crude and not much better than Lemaitre's.

It is an amusing historical fact that Einstein, Eddington, and other leading cosmologists clung to the idea of a steady-state universe, while a Catholic priest and Vatican astronomers led the way to convincing everyone that the universe had a beginning in what is now called the Big Bang.
But Hubble was not the first. In 1927, Georges Lemaître had already published an article on the expansion of the universe. His article was written in French and published in a Belgian journal.

Lemaître presented a theoretical foundation for the expansion of the universe and used the astronomical data (the very same data that Hubble used in his 1929 article) to infer the rate at which the universe is expanding.

In 1928, the American mathematician and physicist Howard Robertson also published an article in Philosophical Magazine and Journal of Science, where he derived the formula for the expansion of the universe and inferred the rate of expansion from the same data that were used by Lemaître (a year before) and Hubble (a year after). ...

In January 1930 at the meeting of the Royal Astronomical Society in London, the English astronomer, physicist, and mathematician Arthur Eddington raised the problem of the expansion of the universe and the lack of any theory that would satisfactorily explain this phenomenon.

When Lemaître found out about this, he wrote to Eddington to remind him about his 1927 paper, where he laid the theoretical foundation for the expansion of the universe.
It should be called the Lemaitre Law, or maybe the Lemaitre-Robertson Law, if you want to give an American some credit.

Thursday, August 30, 2018

Modifying gravity is called "cheating"

Gizmodo reports:
A fight over the very nature of the universe has turned ugly on social media and in the popular science press, complete with accusations of “cheating” and ad hominem attacks on Twitter. Most of the universe is hiding, and some scientists disagree over where it has gone.

It’s quite literally a story as old as time. Wherever you look in the cosmos, things don’t seem to add up. Our human observations of the universe’s structure—as far back as we can observe—suggest that there’s around five times more mass than we see in the galaxies, stars, dust, planets, brown dwarfs, and black holes that telescopes have observed directly. We call this mystery mass, or the mystery as a whole, “dark matter.”

Several thousand physicists researching these dark matter-related mysteries will tell you that dark matter is a particle, the way that electrons and protons are particles, that only appears to interact with other known particles via the gravitational pull of its mass. But there are a few dozen physicists who instead think that a set of ideas called “modified gravity” might one day explain these mysteries. Modified gravity would do away with the need for dark matter via a tweak to the laws of gravity. ...

Then, in June, the most sensitive dark matter particle-hunting experiment, called XENON, announced it had once again failed to find a dark matter particle. A story titled “Is Dark Matter Real?” followed in the August issue of Scientific American, ...

“It’s only if you ignore all of modern cosmology that the modified gravity alternative looks viable. Selectively ignoring the robust evidence that contradicts you may win you a debate in the eyes of the general public. But in the scientific realm, the evidence has already decided the matter, and 5/6ths of it is dark.”
In other words, it is a cheat to tweak the laws of gravity to accommodate the anomalous galaxy rotation curves, but not a cheat to hypothesize a new particle.
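To see what is being accommodated, here is a minimal sketch of the Newtonian prediction, using an illustrative galaxy mass rather than data for any particular galaxy: if the visible mass is concentrated in the inner galaxy, orbital speeds should fall off as 1/sqrt(r), whereas observed rotation curves stay roughly flat out to large radii.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_visible = 1.0e41   # illustrative visible mass of a galaxy, kg

def keplerian_speed(r):
    """Circular orbital speed if essentially all of M_visible lies inside radius r."""
    return math.sqrt(G * M_visible / r)

kpc = 3.086e19  # one kiloparsec in meters

# Newtonian prediction at 10 kpc and 40 kpc
v10 = keplerian_speed(10 * kpc)
v40 = keplerian_speed(40 * kpc)

# The predicted speed drops by a factor of sqrt(40/10) = 2 between these radii;
# measured rotation curves show no such drop, which is the dark matter puzzle.
print(v10 / 1000, v40 / 1000, v10 / v40)  # speeds in km/s, then the ratio 2
```

Either extra unseen mass at large radii or a modified force law can flatten the predicted curve; the dispute is over which move counts as the "cheat".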

Hoping for a dark matter particle was one of the main reasons for believing in SUSY, as SUSY requires about 100 new particles. Maybe the lightest one is the dark matter particle.

Another little controversy is whether the evidence for dark matter already contradicts the Standard Model. Not necessarily. Wilczek pushes axions as an explanation that I think is consistent with the SM.

Also, the SM only tries to explain strong, weak, and electromagnetic interactions. Dark matter could be some substance that does not interact with those forces, and thus could exist independently from the SM.
In Gizmodo’s conversations with 13 physicists studying dark matter, a pretty clear picture emerged: Dark matter as an undiscovered population of particles that influence the universe through gravity is the prevailing paradigm for a reason, and will continue as such until a theory comes along with the same predictive power for the universe’s grandest features.
It is odd to call the substance a particle. We only call electrons particles because of how they interact with light, but dark matter does not interact with light.
“Everywhere the dark matter theories make predictions, they get the right answers,” Scott Dodelson, a Carnegie Mellon physics professor, told Gizmodo. But he offered a caveat: “They can’t make predictions as well on small scales,” such as the scales of galaxies.
I am surprised that anyone would brag about a theory that only works on scales much larger than galaxies.

Monday, August 27, 2018

Billion new dollars for quantum computation

Peter Woit announces:
Moving through the US Congress is a National Quantum Initiative Act, which would provide over a billion dollars in funding for things related to quantum computation.
A billion dollars?!

IBM and Google both promised quantum supremacy in 2017. We have no announcement of QS, or any explanation for the failure.

I am not the only one saying it is impossible. See this recent Quanta mag article for other prominent naysayers.

If Congress were to have hearings on this funding, I would expect physicists to be extremely reluctant to throw cold water on lucrative funding for their colleagues. Maybe that is what is keeping Scott Aaronson quiet.

Previously Woit commented:
It’s remarkable to see publicly acknowledged by string theorists just how damaging to their subject multiverse mania has been, and rather bizarre to see that they attribute the problem to my book and Lee Smolin’s. The source of the damage is actually different books, the ones promoting the multiverse, for example this one.
This was induced by some string theorists still complaining about those books, which appeared around 2005.

It is bizarre for anyone to be bothered by some criticism from 13 years ago. The two books did not even say the same thing. You would think that the string theorists would just publish their rebuttal and move on.

Apparently they had no rebuttal, and they depended on everyone going along with the fiction that string theory was working.

Likewise, the quantum computation folks depend on everyone going along with the idea that we are about to have quantum computers (with quantum supremacy), and it will be a big technological advance. We don't need two books on the subject, as it is pretty obvious that IBM and Google are not delivering what they promised.

Saturday, August 25, 2018

Professor arrested for pocketing $4 in tips

Quantum computer complexity theorist Scott Aaronson seems to have survived his latest personal struggle, with his worldview intact.

He bought a smoothie, paid with a credit card, and took the $4 in the tip jar. An employee approached him, and politely explained that the tip jar is for tips. He grudgingly gave $1 back.

The manager then called the cops, and a cop interviewed him to confirm what he had done. He was still oblivious to what was going on, so the cop handcuffed him and arrested him. That got his attention, and the manager agreed to drop the charges when the $4 was returned.

There is a biography about physicist Paul Dirac that calls him "the world's strangest man" because of a few silly anecdotes about him being a stereotypical absent-minded professor. That biographer has not met Scott.

Scott says that it was all the fault of the smoothie maker for not clearly explaining to him that he does not get to take change from the tip jar if he pays with a credit card. Scott is correct that there was a failure of communication, and surely both sides are at least somewhat to blame.

I am not posting this to criticize Scott. Just read his blog where he posts enuf negative info about himself. If I wanted to badmouth him, I would just link to his various posts where he has admitted to being wrong about various things. I am inclined to side with him as a fellow nerd who is frustrated by those who fail to explain themselves in a more logical manner. I am just posting it because I think that it is funny. After all, Scott has been named as one of the 30 smartest people alive and also one of the top 10 smartest people. And yet there are people with about 50 fewer IQ points who have no trouble buying smoothies, or understanding a request to put the tip money back.

Monday, August 6, 2018

Copenhagen is rooted in logical positivism

From an AAAS Science mag book review:
Most physicists still frame quantum problems through the sole lens of the so-called “Copenhagen interpretation,” the loose set of assumptions Niels Bohr and his colleagues developed to make sense of the strange quantum phenomena they discovered in the 1920s and 1930s. However, he warns, the apparent success of the Copenhagen interpretation hides profound failures.

The approach of Bohr and his followers, Becker argues, was ultimately rooted in logical positivism, an early-20th-century philosophical movement that attempted to limit science to what is empirically verifiable. By the mid-20th century, philosophers such as Thomas Kuhn and W. V. O. Quine had completely discredited this untenable view of science, Becker continues. The end of logical positivism, he concludes, should have led to the demise of the Copenhagen interpretation. Yet, physicists maintain that it is the only viable approach to quantum mechanics.

As Becker demonstrates, the physics community’s faith in Bohr’s wisdom rapidly transformed into a pervasive censorship that stifled any opposition.
This is partially correct. Quantum mechanics and the Copenhagen interpretation were rooted in logical positivism. Much of 20th-century physics was influenced, for the better, by logical positivism and related views.

It is also true that 20th-century philosophers abandoned logical positivism, for largely stupid reasons. They decided that there was no such thing as truth.

This created a huge split between the scientific world, which searches for truth, and the philosophical world, which contends that there is no such thing as truth. These views are irreconcilable. Science and Philosophy have become like Astronomy and Astrology. Each thinks that the other is so silly that any conversation is pointless.

Unfortunately, many physicists are now infected with anti-positivist views of quantum mechanics, and say that there is something wrong with it. Those physicists complain, but have gotten nowhere with their silly ideas.

Wednesday, August 1, 2018

Einstein's 1905 relativity had no new dogmas

Lubos Motl writes:
Einstein's breakthrough was far deeper, more philosophical than assumed

Relativity is about general, qualitative principles, not about light or particular objects and gadgets

Some days ago, we had interesting discussions about the special theory of relativity, its main message, the way of thinking, the essence of Einstein's genius and his paradigm shift, and the good and bad ways how relativity is presented to the kids and others. ...

Did the physicists before Einstein spend their days by screaming that the simultaneity of events is absolute? They didn't. It was an assumption that they were making all the time. All of science totally depended on it. But it seemed too obvious that they didn't even articulate that they were making this assumption. When they were describing the switch to another inertial system, they needed to use the Galilean transformation and at that moment, it became clear that they were assuming something. But everyone instinctively thought that one shouldn't question such an assumption. No one has even had the idea to question it. And that's why they couldn't find relativity before Einstein.

Einstein has figured out that some of these assumptions were just wrong and he replaced them with "new scientific dogmas".
They did find relativity before Einstein. With all the formulas. In particular, what Einstein said about simultaneity and synchronization of clocks was straight from what Poincare said five years earlier.

Motl repeats the widespread belief that Einstein found relativity by repudiating conventional wisdom and introducing new dogmas. That is not true at all. The most widely accepted theory on the matter was Lorentz's 1895 theory. Lorentz had already received a Nobel prize for it in 1902.

Einstein's big dogmas were that the speed of light is constant and motion is relative. Einstein later admitted that he got the constant speed of light straight from Lorentz. He also got the relativity postulate from Lorentz, although it was Poincare who really emphasized it, so maybe he got it from Poincare.

Einstein did later argue against rival theories, such as Abraham's, but he never claimed that Lorentz or Poincare were wrong about their relativity theories. Other authors referred to the "Lorentz-Einstein theory", as if there were no difference. Even when Einstein was credited with saying something different from Lorentz, he insisted that his theory was the same as Lorentz's.

Einstein did sometimes pretend to have made conceptual advances in his formulation of special relativity, such as with the aether and local time. But what Einstein said on these matters was essentially the same as what Lorentz said many years earlier.

The formulation of special relativity that is accepted today is the geometric spacetime version presented by Poincare and Minkowski, not Einstein's. Poincare and Minkowski did explain how their view was different from Lorentz's.

Monday, July 30, 2018

Was Copernicus really heliocentric?

Wikipedia currently has a debate on whether the Copernican system was heliocentric. See the Talk pages for Nicolaus Copernicus and Copernican heliocentrism.

This is a diversion from the usual Copernicus argument, which is whether he was Polish or German. He is customarily called Polish, but nation-states were not well defined, and there is an argument that he was more German than Polish.

The main point of confusion is that Copernicus did not really put the Sun at the center of the Earth's orbit. It was displaced by 1/25 to 1/31 of the Earth's orbit radius.

It appears that the center of the Earth's orbit revolved around the Sun, but you could also think of the Sun as revolving around the center of the Earth's orbit.

So is it fair to say that the Sun is at the center of the universe? Maybe if you mean that the Sun is near the center of the planetary orbits. Or that the Sun was at the center of the fixed stars. Or that the word "center" is used loosely to contrast with an Earth-centered system.

In Kepler's system, and Newton's, the Sun is not at the center of any orbit, but at a focus of an ellipse. It was later learned that the Sun itself orbits the center of the Milky Way galaxy, which hosts a supermassive black hole.

I am not sure why anyone attaches such great importance to these issues. Motion is relative, and depends on your frame of reference. The Ptolemy and Copernicus models had essentially the same scientific merit and accuracy. They mainly differed in their choice of a frame of reference, and in their dubious arguments for preferring those frames.

People act as if the Copernicus choice of frame was one of the great intellectual advances of all time.

Suppose ancient map makers put East at the top of the page. Then one day a map maker put North at the top of the page. Would we credit him for being a great intellectual hero? Of course not.

There is an argument that the forces are easier to understand if you choose the frame so that the center of mass is stationary. Okay, but that was not really Copernicus's argument. There were ancient Greeks who thought it made more sense to put the Sun at the center because it was so much larger than the Earth. Yes, they very cleverly figured out that the Sun was much larger. There is a good logic to that also, but it is still just a choice of frame.

Friday, July 27, 2018

How physicists discovered the math of gauge theories

We have seen that if you are looking for theories obeying a locality axiom, and if you adopt a geometrical view, you are inevitably led to metric theories like general relativity, and gauge theories like electromagnetism. Those are the simplest theories obeying the axioms.

Formally, metric theories and gauge theories are very similar. Both satisfy the locality and geometry axioms. Both define the fields as the curvature of a connection on a bundle. The metric theories use the tangent bundle, while the gauge theories use an external group. Both use tensor calculus to codify the symmetries.
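In differential-geometric language, the shared structure fits in one line; for the abelian case the nonlinear term drops out and the curvature reduces to the familiar electromagnetic field tensor.

```latex
% Curvature (field strength) of a connection A on a bundle
F = dA + A \wedge A
% For electromagnetism the group is U(1), so A \wedge A vanishes and
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
% which packages the electric and magnetic fields; two of Maxwell's
% equations are then just the Bianchi identity dF = 0.
```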

The Standard Model of particle physics is a gauge theory, with the group being SU(3)xSU(2)xU(1) instead of U(1), and explains the strong, weak, and electromagnetic interactions.

Pure gauge theories predict massless particles like the photon. The Standard Model also has a scalar Higgs field that breaks some of the symmetries and allows particles to have mass.
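A toy abelian version shows the mechanism; here phi is a complex scalar with charge g under a U(1) gauge field, an illustrative model rather than the full electroweak doublet.

```latex
% Gauge-covariant kinetic term for a charged scalar \phi
|D_\mu \phi|^2 = \bigl|(\partial_\mu - i g A_\mu)\phi\bigr|^2
% If \phi settles into the vacuum value \langle\phi\rangle = v/\sqrt{2},
% expanding around it picks out a mass term for the gauge field:
\tfrac{1}{2}\, g^2 v^2 A_\mu A^\mu \quad\Longrightarrow\quad m_A = g\,v
```

The gauge symmetry is hidden rather than broken by hand, which is why the theory stays renormalizable.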

The curious thing, to me, is why it took so long to figure out that gauge theories were the key to constructing local field theories.
Newton published his theory of gravity in 1687, but was unhappy about the action-at-a-distance. He would have preferred a local field theory, but he could not figure out how to do it. Maxwell figured it out for electromagnetism in 1865, based on experiments of Faraday and others.

Nobody figured out the symmetries to Maxwell’s equations until Lorentz and Poincare concocted theories to explain the Michelson-Morley experiment, in 1892-1905. Poincare showed in 1905 that a relativistic field theory for gravity resolves the action-at-a-distance paradoxes of Newtonian gravity. After Minkowski stressed the importance of the metric, the geometry, and tensor analysis to special relativity in 1908, Nordstrom, Grossmann, Einstein, and Hilbert figured out how to make gravity a local geometrical theory. Hermann Weyl combined gravity and electromagnetism into what he called a “gauge” theory in 1919.

Poincare turned electromagnetism into a geometric theory in 1905. He put a non-Euclidean geometry on spacetime, with its metric and symmetry group, and used the 4-vector potential to prove the covariance of Maxwell’s equations. But he did not notice that his potential was just the connection on a line bundle.

Electromagnetism was shown to be a renormalizable quantum field theory by Feynman, Schwinger, and others in the 1940s. 't Hooft showed that all gauge theories were renormalizable in 1971. Only after that did physicists decide that gauge theories were the fundamental key to quantum field theory, and the Standard Model was constructed in the 1970s. They picked SU(3) for the strong interaction because particles had already been found that closely matched representations of SU(3).

It seems to me that all of relativity (special and general) and gauge theory (electromagnetism and the Standard Model) could have been derived mathematically from general principles, with little or no reference to experiment. It could have happened centuries ago.

Perhaps the mathematical sophistication was not there. Characterizing geometries in terms of symmetry transformations and their invariants was described in Klein's Erlangen Program, 1872. Newton did not use a vector notation. Vector notation did not become popular until about 1890. Tensor analysis was developed after that. A modern understanding of manifolds and fiber bundles was not published until about 1950.

Hermann Weyl was a brilliant mathematician who surely had an intuitive understanding of all these things in about 1920. He could have worked out the details, if he had understood how essential they were to modern physics. Why didn’t he?

Even Einstein, who was supposedly the big advocate of deriving physics from first principles, never seems to have noticed that relativity could be derived from geometry and causality. Geometry probably would not have been one of his principles, as he never really accepted that relativity is a geometrical theory.

I am still trying to figure out who was the first to say, in print, that relativity and electromagnetism are the inevitable consequences of the geometrical and locality axioms. This seems to have been obvious in the 1970s. But who said it first?

Is there even a physics textbook today that explains this argument? You can find some of it mentioned in Wikipedia articles, such as Covariant formulation of classical electromagnetism, Maxwell's equations in curved spacetime - Geometric formulation, and Mathematical descriptions of the electromagnetic field - Classical electrodynamics as the curvature of a line bundle. These articles mention that electromagnetic fields can be formulated as the curvature of a line bundle, and that this is an elegant formulation. But they do not explain how general considerations of geometry and locality lead to the formulation.

David Morrison writes in a recent paper:
In the late 1960s and early 1970s, Yang got acquainted with James Simons, ... Simons identified the relevant mathematics as the mathematical theory of connections on fiber bundles ... Simons communicated these newly uncovered connections with physics to Isadore Singer at MIT ... It is likely that similar observations were made independently by others.
I attended Singer’s seminars in the 1970s, so I can confirm that it was a big revelation to him that physicists were constructing a standard model based on connections on fiber bundles, a subject where he was a leading authority. He certainly had the belief that mathematicians and physicists did not know they were studying the same thing under different names, and he knew a lot of those mathematicians and physicists.

String theorists like to believe that useful physical theories can be derived from first principles. Based on the above, I have to say that it is possible, and it could have happened with relativity. But it has never happened in the history of science.

If there were ever an example where a theory might have been developed from first principles, it would be relativity and electromagnetism. But even in that case, it appears that a modern geometric view of the theory was only obtained many decades later.

Wednesday, July 25, 2018

Clifford suggested matter could curve space

A new paper:
Almost half a century before Einstein expounded his general theory of relativity, the English mathematician William Kingdon Clifford argued that space might not be Euclidean and proposed that matter is nothing but a small distortion in that spatial curvature. He further proposed that matter in motion is not more than the simple variation in space of this distortion. In this work, we conjecture that Clifford went further than his aforementioned proposals, as he tried to show that matter effectively curves space. For this purpose he made an unsuccessful observation on the change of the plane of polarization of the skylight during the solar eclipse of December 22, 1870 in Sicily.
I have wondered why some people credit Clifford, and this article spells it out. He was way ahead of everyone with the idea that our physical space might be non-Euclidean, as an explanation for gravity.

Friday, July 20, 2018

Vectors, tensors, and spinors on spacetime

Several of the symmetries of spacetime are arguably not so reasonable when considering locality.

The first is time reversal. That takes time t to −t, and preserves the metric and some of the laws of mechanics. It takes the forward light cone to the backward light cone, and vice versa.

But locality is based on causality, and the past causes the future, not the other way around. There is a logical arrow of time from the past to the future. Most scientific phenomena are irreversible.

The second dubious symmetry is parity reversal. That is a spatial reflection that reverses right-handedness and left-handedness.

DNA is a right-handed helix, and there is no known life using left-handed DNA. Certain weak interactions (involved in radioactive decay) have a handedness preference. The Standard Model explains this by saying all neutrinos are massless with a left-handed helicity. That is, they spin left compared to the velocity direction. (Neutrinos are now thought to have mass.)

Charge conjugation is another dubious symmetry. Most of the laws of electricity will be the same if all positive charges are changed to negative, and all negative to positive.

Curiously, if you combine all three of the above symmetries, you get the CPT symmetry, and it is believed to be a true symmetry of nature. Under some very general assumptions, a quantum field theory with a local Lorentz symmetry must also have a CPT symmetry.

The final dubious symmetry is the rotation by 360°. It is a surprising mathematical fact that a rotation thru 720° is homotopic to the identity, but a rotation thru 360° is not. You can see this yourself by twisting your belt.
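Here is a quick numerical sketch of this fact (my own illustration, not from any reference), using the SU(2) spin representation, where a rotation about the z-axis by angle θ acts as exp(−iθσz/2). A 360° rotation gives minus the identity; only 720° returns to the start:

```python
import numpy as np

# Pauli matrix sigma_z; a rotation by theta about the z-axis acts on
# spinors as exp(-i * theta/2 * sigma_z), an element of SU(2).
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_rotation(theta):
    # SU(2) element covering the spatial rotation by theta about z
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_z

R_360 = spin_rotation(2 * np.pi)   # rotation thru 360 degrees
R_720 = spin_rotation(4 * np.pi)   # rotation thru 720 degrees

print(np.allclose(R_360, -np.eye(2)))  # True: 360 degrees is NOT the identity
print(np.allclose(R_720, np.eye(2)))   # True: 720 degrees IS the identity
```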

Thus the safest symmetry group to use for spacetime is not the Lorentz group, but the double cover of the connected component of the Lorentz group. That is, the spatial reflections and time reversals are excluded, and a rotation by x° is distinguished from a rotation by x+360°. This is called the spin group.

Geometric theories on spacetime are formulated in terms of vectors and tensors. Vectors and tensors are covariant, so that they automatically transform under a change of coordinates. Alternatively, you can say that spacetime is a geometric coordinate-free manifold with a Lorentz group symmetry, and vectors and tensors are well-defined on that manifold.

If we consider spacetime with a spin group symmetry instead of the Lorentz group, then we get a new class of vector-like functions called spinors. Physicists sometimes think of a spinor as a square root of a vector, as you can multiply two spinors and get a vector.
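A hedged sketch of that "square root" idea (my own illustration): from a 2-component spinor ψ, the bilinear ψ†σψ built with the Pauli matrices gives a real 3-vector, and two factors of the spinor go into each component:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1 + 2j, 3 - 1j])  # an arbitrary 2-component spinor

# Two copies of the spinor produce a vector: v_i = psi-dagger sigma_i psi
v = np.real(np.array([psi.conj() @ s @ psi for s in (sx, sy, sz)]))

print(v)  # a real 3-vector built quadratically from the spinor
# Its length equals |psi|^2, the spinor norm squared:
print(np.isclose(np.linalg.norm(v), np.real(psi.conj() @ psi)))  # True
```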

The basic ingredients for a theory satisfying the geometry and locality axioms are thus: a spacetime manifold, a spin group structure on the tangent bundle, a separate fiber bundle, and connections on those bundles. The fields and other physical variables will be spinors and sections.

This is the essence of the Standard Model. Quarks, electrons, and neutrinos are all spinor fields. They have spin 1/2, which means they have 720° rotational symmetry, and not a 360° rotation symmetry. Such particles are also called fermions. The neutrinos are left-handed, meaning they have no spatial reflection symmetry. The Lagrangian defining the theory is covariant under the spin group, and hence well-defined on spacetime.

Under quantum mechanics, all particles with the same quantum numbers are identical, and a permutation of those identical particles is a symmetry of the system. Swapping two identical fermions introduces a factor of −1, just like rotating by 360°. That is what separates identical fermions, and keeps them from occupying the same state. This is called the Pauli exclusion principle, and it is the fundamental reason why fermions can be the building blocks of matter, and form stable objects.
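A minimal sketch of that sign flip (my own illustration, with two spin states and hand-rolled antisymmetrization): swapping the particles gives a factor of −1, and putting two fermions in the same state gives zero, which is the exclusion principle:

```python
import numpy as np

def two_fermion_state(phi, chi):
    # Antisymmetrized (Slater-determinant-style) state of two identical fermions
    return np.kron(phi, chi) - np.kron(chi, phi)

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Swapping the two identical fermions flips the overall sign.
state = two_fermion_state(up, down)
swapped = two_fermion_state(down, up)
print(np.allclose(swapped, -state))  # True: the factor of -1

# Two fermions in the SAME state: the antisymmetrized state vanishes.
print(np.allclose(two_fermion_state(up, up), 0))  # True: Pauli exclusion
```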

Pauli exclusion for fermions is a mathematical consequence of having a Lorentz invariant quantum field theory.

All particles are either fermions or bosons. The photon is a boson, and has spin 1. A laser can have millions of photons in the same state, and they cannot be used to build a material substance.

The point here is that under general axioms of geometry and locality, one can plausibly deduce that spinor and gauge theories are the obvious candidates for fundamental physical theories. The Standard Model then seems pretty reasonable, and is actually rather simple and elegant compared to the alternatives.

In my counterfactual anti-positivist history of physics, it seems possible that someone could have derived models similar to the Standard Model from purely abstract principles, and some modern mathematics, but with no experiments.

Wednesday, July 18, 2018

Electric charge and local fields

Studying electric charge requires mathematical structures distinct from spacetime. Charge is conserved, but that conservation law is not related to any transformation of space or time. To measure electric fields, we need functions that measure something other than space and time.

The simplest type of such function would be from spacetime to the complex numbers. But such a function has an immediate problem with the locality axiom. A function like that would allow comparing values at spatially separated points, but no such comparison should be possible. It doesn't make sense to relate physics at one spacetime point to anything outside its light cones.

So we need to have a way of doing functions on spacetime, such that function values can only be compared infinitesimally, or along causal curves. The mathematical construction for this is called a line bundle.

For electromagnetism, imagine that every point in spacetime has a phase, a complex number with norm one. The phase does not mean anything by itself, but the change in phase along a causal curve is meaningful, and is the key to describing the propagation of an electromagnetic field.

A bundle connection is, by definition, the mathematical info for relating the phases along a curve. It is usually given in infinitesimal form, so the total phase change along the curve is given by integrating along the curve.

It turns out that a bundle can be curved, just as spacetime can be curved. The curvature is a direct measure of how the phase can change around a closed loop.
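Here is a rough numerical illustration (my own, assuming a constant-curvature U(1) connection in symmetric gauge): the phase accumulated around a closed loop equals the curvature integrated over the enclosed disk:

```python
import numpy as np

B = 0.7   # constant curvature: F = B dx ^ dy

def A(x, y):
    # symmetric-gauge connection 1-form: A = (B/2)(-y dx + x dy)
    return -B * y / 2, B * x / 2

# Total phase change around a circle of radius r: integrate A along the loop.
r = 1.3
t = np.linspace(0, 2 * np.pi, 20001)
x, y = r * np.cos(t), r * np.sin(t)
Ax, Ay = A(x, y)
integrand = Ax * (-r * np.sin(t)) + Ay * (r * np.cos(t))  # A pulled back to the loop
phase = np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))  # trapezoid rule

flux = B * np.pi * r**2   # curvature integrated over the enclosed disk
print(np.isclose(phase, flux))  # True: holonomy = enclosed curvature
```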

You cannot make a flat map of the surface of the Earth without distorting distances, and you cannot make a flat map of a curved bundle either. You can choose some function (called a bundle section) that defines a preferred phase at each point. Then any other section is this one multiplied by some complex function on spacetime. Thus any section can be written fs, where f is an ordinary complex function on spacetime, and s is a special section that gives a phase at each point. You can think of the phase as a norm-1 complex value, but that is misleading because the values at one point cannot be directly compared to the values at other points.

For derivatives, we expect a product formula like d(fs) = f ds + s df.

The difficulty is in interpreting ds, as a derivative requires comparing values to nearby points. A bundle connection is what allows infinitesimal comparisons, so in place of ds there is some ω for which we have a formula: d(fs) = f ω + s df.

The ω is what mathematicians call a connection 1-form, and what physicists call a 4-vector potential. It is not necessarily the derivative of a function. Its derivative dω is what mathematicians call the curvature 2-form, and what physicists call the electromagnetic field. It does not depend on the choice of the section s.
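A quick numerical sketch of that last point (my own, on a finite grid): the curvature computed from ω and from a gauge-transformed ω + df, corresponding to a different choice of section, is the same field:

```python
import numpy as np

# Curvature F = d(omega) on a grid, for a potential A and a gauge-transformed
# A' = A + df; both give the same F, independent of the choice of section.
n = 101
xs = np.linspace(-1, 1, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
h = xs[1] - xs[0]

def curl(ax, ay):
    # the xy-component of dA: dAy/dx - dAx/dy
    return np.gradient(ay, h, axis=0) - np.gradient(ax, h, axis=1)

Ax, Ay = -Y / 2, X / 2                 # symmetric-gauge potential, F = 1
f = np.sin(3 * X) * Y**2               # an arbitrary gauge function
Axp = Ax + np.gradient(f, h, axis=0)   # A' = A + df
Ayp = Ay + np.gradient(f, h, axis=1)

print(np.allclose(curl(Ax, Ay), 1.0))    # True: F = 1 everywhere
print(np.allclose(curl(Axp, Ayp), 1.0))  # True: the same F from A'
```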

Maxwell’s equations reduce to interpreting the derivative of the curvature as the charge/current density.
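In this notation, and in suitable units, the whole theory can be written compactly (a standard formulation, stated here as a sketch):

```latex
F = d\omega, \qquad dF = 0, \qquad d{\star}F = J
```

Here F is the curvature 2-form (the electromagnetic field), dF = 0 is the Bianchi identity (the homogeneous Maxwell equations), ⋆ is the Hodge star determined by the spacetime metric, and J is the charge/current density 3-form.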

The whole theory of electromagnetism is thus the inevitable consequence of looking for a geometric theory obeying locality.

One can also construct field theories with higher dimensional bundles, replacing the complex numbers C with Cn for some n, and the phase with U(n), the group of n×n unitary matrices. Other groups are possible also, such as SU(n), the group of such matrices with determinant 1. The 1-form ω has values in the Lie algebra, which is the skew-Hermitian complex matrices, if the group is U(n).
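A small check (my own sketch) that exponentiating a skew-Hermitian matrix, an element of the Lie algebra of U(n), lands in the unitary group:

```python
import numpy as np

# The Lie algebra of U(n) consists of skew-Hermitian matrices K (K-dagger = -K);
# exponentiating one gives a unitary matrix. Minimal check for n = 3.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
K = M - M.conj().T           # skew-Hermitian by construction

# K = iH with H Hermitian, so exp(K) = V diag(exp(i*lam)) V-dagger via eigh.
H = -1j * K
lam, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(1j * lam)) @ V.conj().T

print(np.allclose(U.conj().T @ U, np.eye(3)))  # True: exp(K) is unitary
```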

When a field theory is quantized, notorious infinities arise. For field theories based on connections on bundles as above, there is a theory of renormalization to cancel the infinities. It is a generalization of the system for quantum electrodynamics. Other theories are difficult or impossible to renormalize. Physicists settled on gauge theories because they are the only ones to give finite predictions in a quantum theory.

Historical note. This formulation of Maxwell’s equations appears to have been known by Hermann Weyl and others by 1920 or so, but it may not have been explicitly published until 1970 or so.

Friday, July 13, 2018

Infinitesimal analysis of geometry and locality

The geometric axiom would appear to be in conflict with the locality axiom. The geometries are defined by symmetry transformations that relate points to distant points. Under locality, points only relate to nearby points, not distant points.

The resolution of this conflict is that the geometric operations actually act infinitesimally. The Lorentz transformations do not actually act on spacetime, but on the tangent space at a given point.

Here “infinitesimal” is a mathematical shorthand for the limits used in computing derivatives and tangents. Locality allows a point to be related to an infinitesimally close point in its light cone, and that is really a statement about the derivatives at the point.

In general relativity, matter curves spacetime, and spacetime does not necessarily have rotational or other symmetries. But locally, to first order in infinitesimals, a point looks like the Minkowski space described earlier. That is, the tangent space is linear with the metric −dx² − dy² − dz² + c²dt², in suitable coordinates.

A causal path in a curved spacetime is one that is within the light cones at every point. As the light cones are in the tangent space, this is a statement about the tangents to the curve. In other words, the velocity along the path cannot exceed the speed of light.

There is a mathematical theory for curved spaces. A manifold has a tangent space at each point, and if it also has a metric, then one can take gradients of functions to get vector fields, find the shortest distance between points, compare vectors along curves, and calculate curvature. All of these things can be defined independently of coordinates on the manifold.

Spacetime (Riemann) curvature decomposes as the Ricci plus Weyl tensors. Technically, the Ricci tensor splits into the trace and trace-free parts, but that subtlety can be ignored for now. The Ricci tensor is a geometric measure of the presence of matter, and is zero in empty space.

There are whole textbooks explaining the mathematics of Riemann curvature, so I am not going to detail it here. It suffices to say that if you want a space that looks locally like Minkowski space, then it is locally described by the metric and Riemann curvature tensor.

The equations of general relativity are that the Ricci tensor is interpreted as mass density. More precisely, it is a tensor involving density, pressure, and stress. In particular, solving the dynamics of the solar system means using the equation Ricci = 0, as the sun and planets can be considered point masses in empty space. Einstein's calculation of the precession of Mercury's orbit was based on studying spacetime with Ricci = 0.
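For reference, the field equations being described are, in standard notation:

```latex
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

In empty space T_{μν} = 0, so taking the trace gives R = 0, and the equations reduce to the vacuum form R_{μν} = 0, the Ricci = 0 used for the solar system.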

Next we look at electric charges.

Wednesday, July 11, 2018

Bee finds free will in reductionism failure

Sabine Hossenfelder writes on the Limits of Reductionism:
Almost forgot to mention I made it 3rd prize in the 2018 FQXi essay contest “What is fundamental?”

The new essay continues my thoughts about whether free will is or isn’t compatible with what we know about the laws of nature. For many years I was convinced that the only way to make free will compatible with physics is to adopt a meaningless definition of free will. The current status is that I cannot exclude it’s compatible.

The conflict between physics and free will is that to our best current knowledge everything in the universe is made of a few dozen particles (take or give some more for dark matter) and we know the laws that determine those particles’ behavior. They all work the same way: If you know the state of the universe at one time, you can use the laws to calculate the state of the universe at all other times. This implies that what you do tomorrow is already encoded in the state of the universe today. There is, hence, nothing free about your behavior.
I don't know how she can get a Physics PhD and write stuff like that.

The universe is not really made of particles. The Heisenberg Uncertainty Principle shows that there are no particles. There are no laws that determine trajectories of particles. We have theories for how quantum fields evolve, and even predict bubble chamber tracks like the ones on the side of this blog. But it is just not true that the state tomorrow is encoded in the state today.

Look at radioactive decay. We have theories that might predict half-life or properties of emissions and other such things, but we cannot say when the decay occurs. For all we know, the atom has a mind of its own and decays when it feels like decaying.

The known laws of physics are simply not deterministic in the way that she describes.

She goes on to argue that the world might still be indeterministic because of some unknown failure of reductionism.

In a way, chaos theory is such a failure of reductionism, because deterministic laws give rise to indeterministic physics. But she denies that sort of failure.

What she fails to grasp is that quantum mechanics is already compatible with (libertarian) free will.

The advocates of many-worlds (MWI) are nuts, but at least they concede that quantum mechanics does not determine your future (in this world). It makes for a range of future possibilities, and your choices will affect which of those possibilities you will get.

Of course the MWI advocates believe that every time you make a choice, you create an evil twin who gets the world with the opposite choice. That belief has no scientific substance to it. But the first part, saying that quantum mechanics allows free choices, is correct.

The other FQXI contest winners also have dubious physics, and perhaps I will comment on other essays.

Friday, July 6, 2018

All physical processes are local

After the geometry axiom, here is my next axiom.

Locality axiom: All physical processes are local.

Locality means that physical processes are only affected by nearby processes. It means that there is no action-at-a-distance. It means that your personality is not determined by your astrological sign. It means that the Moon cannot cause the tides unless some force or energy is transmitted from the Moon to the Earth at some finite speed.

Maxwell's electromagnetism theory of 1861 was local because the effects are transmitted by local fields at the speed of light. Newton's gravity was not. The concept of locality is closely related to the concept of causality. Both require a careful understanding of time. Quantum mechanics is local, if properly interpreted.

Time is very different from space. You can go back and forth in any spatial direction, but you can only go forward in time. If you want to go from position (0,0,0) to (1,1,1), you can go first in the x-direction, and then in the y-direction, or go in any order of directions that you please. Symmetries of Euclidean space guarantee these alternatives. But if you want to go from (0,0,0) at time t=0 to (1,1,1) at t=1, then your options are limited. Locality prohibits you from visiting two spatially distinct points at the same time. Time must always be increasing along your path.

Locality demands that there is some maximum speed for getting from one point to another. Call that speed c, as it will turn out to be the (c for constant) speed of light. No signal or any other causation can go faster than c.

Thus a physical event at a particular point and time can only influence events that are within a radius ct at a time t later. The origin (0,0,0) at t=0 can only influence future events (x,y,z,t) if x² + y² + z² ≤ c²t², t > 0. This is called the forward light cone. Likewise, the origin can only be influenced by past events in the backward light cone, x² + y² + z² ≤ c²t², t < 0. Anything outside those cones is essentially nonexistent to someone at the origin. Such is the causal structure of spacetime.
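The light-cone condition is easy to state in code. Here is a small hypothetical helper (the name `can_influence` is my own, with units where c is arbitrary):

```python
def can_influence(a, b, c=1.0):
    """Whether event b lies in the forward light cone of event a.

    Events are (t, x, y, z) tuples; a can influence b only if b is later
    and within a radius of c times the elapsed time.
    """
    dt = b[0] - a[0]
    dr2 = sum((bi - ai) ** 2 for ai, bi in zip(a[1:], b[1:]))
    return dt > 0 and dr2 <= (c * dt) ** 2

origin = (0, 0, 0, 0)
print(can_influence(origin, (1, 0.5, 0.5, 0.5)))  # True: inside the forward cone
print(can_influence(origin, (1, 2, 0, 0)))        # False: spacelike separation
print(can_influence(origin, (-1, 0, 0, 0)))       # False: the past is not influenced
```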

Since the world is geometrical, there should be some geometry underlying this causal structure. The geometry is not Euclidean. The simplest such geometry assumes a metric where the distance squared equals −dx² − dy² − dz² + c²dt². This will be positive inside the light cones. It does not make sense to relate to something outside the light cones anyway. Four-dimensional space with this non-Euclidean geometry is called Minkowski space.

The symmetries of spacetime with this metric may be found by an analogy to Euclidean space. Translations and reflections preserve the metric, just as with Euclidean space. A spatial rotation is a symmetry: (x,y,z,t) → (x cos u − y sin u, x sin u + y cos u, z, t). We can formally find additional symmetries by assuming that time is imaginary, and the metric is a Euclidean one on (x,y,z,ict). Poincare used this trick for an alternate derivation of the Lorentz transformations in 1905. Rotating by an imaginary angle iu, and converting the trigonometric functions to hyperbolic ones:
(x,y,z,ict) → (x, y, z cos iu − ict sin iu, z sin iu + ict cos iu)
    = (x, y, z cosh u + ct sinh u, iz sinh u + ict cosh u)
This is a Lorentz (boost) transformation with rapidity u, or with velocity v = c tanh u. The more familiar Lorentz factor is cosh u = 1/√(1 − v²/c²).
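A quick numerical check (my own sketch, with c = 1) that the boost derived above preserves the Minkowski interval, and that cosh u is the Lorentz factor for v = tanh u:

```python
import numpy as np

def boost(event, u):
    # Boost with rapidity u along z, as derived above (c = 1):
    # t -> t cosh u + z sinh u,  z -> z cosh u + t sinh u
    t, x, y, z = event
    return (t * np.cosh(u) + z * np.sinh(u), x, y,
            z * np.cosh(u) + t * np.sinh(u))

def interval(e):
    # Minkowski distance squared: -x^2 - y^2 - z^2 + t^2
    t, x, y, z = e
    return t**2 - x**2 - y**2 - z**2

e, u = (2.0, 0.3, -1.1, 0.7), 0.9
print(np.isclose(interval(boost(e, u)), interval(e)))  # True: interval preserved

v = np.tanh(u)                 # the boost velocity, v = c tanh u
gamma = np.cosh(u)             # the Lorentz factor
print(np.isclose(gamma, 1 / np.sqrt(1 - v**2)))  # True
```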

Imaginary time seems like just a sneaky math trick with no physical consequences, but there are some. If a physical theory has functions that are analytic in spacetime variables, then they can be continued to imaginary time, and sometimes this has consequences for real time.

Thus locality has far-reaching consequences. From the principle, it appears that the world must be something like Minkowski space, with Lorentz transformations.

Next, we will combine our two axioms.

Tuesday, July 3, 2018

The world is geometrical

In my anti-positivist counterfactual history, here is the first axiom.

Geometry axiom: The world is geometrical.

The most familiar geometry is Euclidean geometry, where the world is R³ and distance squared is given by dx² + dy² + dz². There are other geometries.

Newtonian mechanics is a geometrical theory. Space is represented by R³, with the Euclidean metric. That is a geometrical object because of the linear structure and the metric. Lines can be defined as the shortest paths between points. Planes, circles, triangles, and other geometric objects can be defined.

It is also a geometrical object because there is a large class of transformations that preserve the metric, and hence also the lines and circles determined by the metric. Those transformations are the rotations, reflections, and translations. For example, the transformation (x,y,z) → (x+5,y,z) preserves distances, and takes lines to lines and circles to circles.
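A two-line check of that claim (my own illustration): a rotation composed with the (x,y,z) → (x+5,y,z) translation leaves all distances unchanged:

```python
import numpy as np

# A rotation about z followed by a translation: an isometry of Euclidean space.
theta = 0.8
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
shift = np.array([5.0, 0.0, 0.0])   # the (x,y,z) -> (x+5,y,z) translation

def transform(p):
    return R @ p + shift

p, q = np.array([1.0, 2.0, 3.0]), np.array([-2.0, 0.5, 1.0])
d_before = np.linalg.norm(p - q)
d_after = np.linalg.norm(transform(p) - transform(q))
print(np.isclose(d_before, d_after))  # True: distances are preserved
```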

Mathematically, a geometry can be defined in terms of some structure like a metric, or the transformations preserving that structure. This view has been known as the Klein Erlangen program since 1872.

The laws of classical mechanics can be written as geometrical equations like F=ma, where F is the force vector, a is the acceleration vector, and m is the mass. All are functions on Euclidean space. What makes F=ma geometrical is not just that it is defined on a geometrical space, or that vectors are used. The formula is geometrical because all quantities are covariant under the relevant transformations.

Classical mechanics does not specify where you put your coordinate origin (0,0,0), or how the axes are oriented. You can make any choice, and then apply one of the symmetries of Euclidean space. Formulas like F=ma will look the same, and so will physical computations. You can even do a change of coordinates that does not preserve the Euclidean structure, and covariance will automatically dictate the expression of the formula in those new coordinates.

One can think of Euclidean space and F=ma abstractly, where the space has no preferred coordinates, and F=ma is defined on that abstract space. Saying that F=ma is covariant with respect to a change of coordinates is exactly the same as saying F=ma is well-defined on a coordinate-free Euclidean space.

The strongly geometrical character of classical mechanics was confirmed by a theorem of Noether that symmetries are essentially the same as conservation laws. Momentum is conserved because spacetime has a spatial translational symmetry, energy is conserved because of time translation symmetry, and angular momentum is conserved because of rotational symmetry.
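A one-line sketch of Noether's theorem in the simplest case (standard textbook form): if the Lagrangian is unchanged by a translation of q, the Euler-Lagrange equations force the conjugate momentum to be constant:

```latex
\frac{\partial L}{\partial q} = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot q} = \frac{\partial L}{\partial q} = 0,
\qquad p \equiv \frac{\partial L}{\partial \dot q} = \text{const.}
```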

Next, we look at geometries that make physical sense.

Friday, June 29, 2018

MWI is the strangest and least believable thing

In Sam Harris and Sean Carroll DEBATE Free Will & Laplace's Demon - YouTube, we get a discussion of many-worlds:
Sam Harris [at 4:00]: This is supposed to be science, right?

It sounds like the strangest and least believable thing [inaudible] So how is it that science, after centuries of being apparently rigorous, and parsimonious, and hard-headed, finally disgorges a picture of reality which seems to be the least believable thing that anyone has ever thought of?

Sean M. Carroll: You've come to the right place!
Carroll goes on to explain his beliefs in many-worlds quantum mechanics, the block universe, Laplace's Demon, determinism, and time reversibility.

The fact that we think we have free will, and we remember the past instead of the future, is all just a psychological illusion caused by the increase in entropy.

I wonder how Carroll ever got to be a physics professor. He is entitled to believe in whatever gods he chooses, but he represents these opinions as consequences of modern science. They are not.

Harris's question nails it. Scientists were rigorous, parsimonious, and hard-headed for centuries, but now the public image of science is dominated by professors like Carroll who present the least believable ideas.

A recent New Scientist article on how to think about the multiverse quotes him:
“One of the most common misconceptions is that the multiverse is a hypothesis,” says Sean Carroll at the California Institute of Technology in Pasadena. “In fact, it is forced upon us. It is a prediction of theories we have good reason to think are correct.”
If the multiverse really were a hypothesis, it would be testable. No, the multiverse is just some anti-positivist philosophical belief that has no empirical support, and does not follow from any accepted theory.

For someone who doesn't believe in libertarian free will, Carroll is intolerant of those who disagree with his leftist agenda. He is sympathetic to a restaurant for refusing to serve a Trump White House employee!

I would think that a science professor, who likes to talk about how great science is, would happily promote the open exchange of political ideas. But no, he apparently thinks that leftist Democrats should not necessarily be civil to Trump supporters.

Wednesday, June 27, 2018

Bell test experiments explained

Eight physicists have written an excellent new paper explaining quantum mechanics from the viewpoint of the Bell test experiments:
Understanding quantum physics through simple experiments: from wave-particle duality to Bell's theorem

Quantum physics, which describes the strange behavior of light and matter at the smallest scales, is one of the most successful descriptions of reality, yet it is notoriously inaccessible. Here we provide an approachable explanation of quantum physics using simple thought experiments that deal with one- and two-particle interference. We derive all relevant quantum predictions using minimal mathematics, without introducing the advanced calculations that are typically used to describe quantum physics. We focus on the two key surprises of quantum physics, namely wave-particle duality, which deals with the counter-intuitive behavior of single quantum particles, and entanglement, which applies to two or more quantum particles and brings out the inherent contradiction between quantum physics and seemingly obvious assumptions regarding the nature of reality. We employ Hardy's version of Bell's theorem to show that so-called local hidden variables are inadequate at explaining the behavior of entangled quantum particles. This means that one either has to give up on hidden variables, i.e. the idea that the outcomes of measurements on quantum particles are determined before an experiment is actually carried out, or one has to relinquish the principle of locality, which requires that no causal influences should be faster than the speed of light. Finally, we describe how these remarkable predictions of quantum physics have been confirmed in experiments. We have successfully used the present approach in a course that is open to all undergraduate students at the University of Calgary, without any prerequisites in mathematics or physics.
It concludes:

Bell's theorem indicates a deep contradiction between quantum physics and EPR's seemingly obvious assumptions about reality. Surprisingly, experiments demonstrate quantum physics to be correct and rule out the intuitively appealing possibility of local hidden variables. This implies that measurement outcomes are either not predetermined at all, or they are determined by nonlocal hidden variables, such as Bohm's pilot wave model.
My only quibble with this is that it is sloppy about the alternatives to hidden variables.

Yes, experiments have ruled out local hidden variables. Nonlocal hidden variables are possible but foolish; who wants a theory where inaccessible variables have magical effects at a distance? Such theories are useless and misleading.

This leaves us to conclude, according to the paper, "that measurement outcomes are either not predetermined at all". In other words, it says, we must reject "the idea that the outcomes of measurements on quantum particles are determined before an experiment is actually carried out".

There are actually two possibilities here. One is that photons, and other quantum objects, have a free will of their own where they can decide what properties they will have, within the range of possibilities. That is, the physics is not deterministic.

The other possibility is that the outcomes are all predetermined in some physical sense, but not as mathematical variables like real numbers.

Suppose you fire a photon at a screen, thru a double slit. Maybe its path is determined once it passes the double slit. Maybe it is determined even before the slit. Maybe the photon is still deciding what it wants to be right up to the point that it hits the screen. Our experiments cannot really distinguish between these possibilities.

What we can say is that you cannot model the photon by introducing some non-measured variables like which slit the photon is passing thru. It doesn't work. The experiments show that.

My view is that we do measurements with real numbers because that is the only way we know how to make observations. But there is no reason to believe that a photon can be fully described by real numbers. Photons are strange objects. So are real numbers. But they are not the same.

If you have a single electron, then it appears that you can describe it by a wave function okay. The above paper does a good job of explaining this. But with two particles, they can be entangled. Then wave functions can still predict experiments, according to quantum mechanics, but they don't truly match what appears to be the photons' internal structure. After all, photons appear to behave locally, but the wave functions seem somewhat nonlocal in the case of an entangled measurement.

Quantum mechanics is a positivist theory. It predicts experiments, and if that is all you require, then it is a complete theory. But if you think that you are representing the true internals of a photon with wave functions, forget it. It cannot be done. The wave functions are just tools for predicting experiments. The photons could be sentient beings with their own free will and God-given souls, for all we know, as long as they are constrained to obey quantum mechanics.

There are a lot of papers on Bell's theorem that make strong conclusions because of faulty assumptions about reality. The above paper does an excellent job of explaining the experiments that must be accepted. The experiments are not really that strange, as they do not show nonlocality or pilot waves or anything that bizarre. Quantum mechanics is not really so strange, if you let go of your preconceptions about how photons can be modeled.

Note: Quantum field theory teaches that photons are not really fundamental, and discussions about photons like the above are convenient simplifications of fields.

Monday, June 25, 2018

Was relativity positivist or anti-positivist?

A famous string theorist wrote:
Albert Einstein famously believed that, given some general principles, there is essentially a unique way to construct a consistent, functioning universe.
That is much of the rationale behind string theory. Ignore experiment, follow abstract principles, and devise laws of nature.

Put another way, he believed in a top-down approach, instead of a bottom-up approach. See recent postings by Allanach, Lumo, and Bee. They would all like to believe in the top-down approach, but they differ in the extent that they are willing to reconsider their beliefs in the face of contrary LHC evidence.

Sabine Hossenfelder (aka Dr. Bee) has a new book, Lost in Math, that criticizes various purely mathematical approaches to theoretical physics that have failed to get any empirical results. I have not read it.

Some people think that Einstein did this top-down approach with relativity. That is, he just adopted some philosophically attractive postulates, and derived special relativity.

That is not exactly what happened. Maxwell developed his electromagnetic theory from experiments by Faraday and others, and noticed a paradox about the motion of the Earth. The Michelson-Morley experiment tested Maxwell's ideas, and found a symmetry of nature that was not obviously apparent in his equations. Lorentz found a way to reconcile the theory and experiment. Poincare showed that the Lorentz transformations generated a symmetry group for 4-dimensional spacetime, and that Maxwell’s equations could be understood geometrically on spacetime. Einstein adopted two of Lorentz's theorems as postulates, and showed how they could be used to derive the Lorentz transformations. Minkowski elaborated on the non-Euclidean geometry of Poincare’s spacetime, and formulated the version of special relativity that became popular at the time.
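For reference, the transformations at the center of this story can be written in modern textbook notation (a standard form, not taken from any one of the papers mentioned above). For a frame moving at velocity $v$ along the $x$-axis:

```latex
x' = \gamma\,(x - vt), \qquad
t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

These transformations leave the quantity $x^2 - c^2 t^2$ invariant, which is the spacetime interval that Poincare and Minkowski treated geometrically.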

There are philosophers who say that getting relativity from experiment is just positivist propaganda. They say Einstein ignored the Michelson-Morley and other experiments, and applied non-empirical thinking. Polanyi even said that special relativity was proposed "on the basis of pure speculation, rationally intuited by Einstein before he had ever heard about it."

So Einstein’s 1905 special relativity paper is widely praised, but some praise it for being positivist, and some for it being anti-positivist!

It was not really either, because it was just an exposition of the ideas of others. The theory was created by Lorentz, Poincare, and Minkowski, and they all explicitly relied on empirical findings in their papers, and did their work independently from Einstein.

I don’t think physics has ever had substantial advances from anti-positivist thinking.

But if there were to be an anti-positivist invention of relativity in some alternate universe, what would it look like? That is what I wish to propose in several subsequent posts.

Friday, June 22, 2018

Quantum mechanics has no action-at-a-distance

Stephen Boughn writes:
There Is No Action at a Distance in Quantum Mechanics, Spooky or Otherwise

I feel compelled to respond to the frequent references to spooky action at a distance that often accompany reports of experiments investigating entangled quantum mechanical states. Most, but not all, of these articles have appeared in the popular press. As an experimentalist I have great admiration for such experiments and the concomitant advances in quantum information and quantum computing, but accompanying claims of action at a distance are quite simply nonsense. Some physicists and philosophers of science have bought into the story by promoting the nonlocal nature of quantum mechanics. In 1964, John Bell proved that classical hidden variable theories cannot reproduce the predictions of quantum mechanics unless they employ some type of action at a distance. I have no problem with this conclusion. Unfortunately, Bell later expanded his analysis and mistakenly deduced that quantum mechanics and by implication nature herself are nonlocal.
He has some additional argument in Making Sense of Bell's Theorem and Quantum Nonlocality.

He is correct. His explanations are routine textbook stuff, with nothing particularly original. None of it would need to be said, except that most of the popular explanations of quantum mechanics insist on saying that the theory is nonlocal. Some physicists even say this. Some even say that Bell proved it.

Boughn's explanations are clear enough, but it would have been nice if he quoted the respectable authorities who say that quantum mechanics is nonlocal. It would prove that he is not attacking a straw man, and justify writing a paper to explain some textbook material.

Thursday, June 21, 2018

For Solstice, celebrate Earth's uniqueness

Astronomers are always claiming that they found a distant planet that might support life, or that there must be millions of Earth-like planets in our galaxy.

But there may be no others. Earth has many unusual properties that are essential for life, and that may never be found elsewhere.

The NY Times reports:
As you mark the longest day of the year, consider the debate among astronomers over whether Earth’s tilt toward the sun helps make life on our world and others possible. ...

The solstice occurs because Earth does not spin upright but leans 23.5 degrees on a tilted axis. Such a slouch, or obliquity, has long caused astronomers to wonder whether Earth’s tilt — which you could argue is in a sweet spot between more extreme obliquities — helped create the conditions necessary for life. ...

Is life only possible on an exoplanet with a tilt similar to ours? ...

Mars’s slouch, for example, is currently akin to Earth’s at 25.19 degrees, but it shifts back and forth between 10 degrees and 60 degrees over millions of years. That means that the seasons and climate of the red planet — which is currently experiencing an extreme dust storm — vary wildly. That could create conditions that make life impossible.

Take Earth as an example. Although our planet’s obliquity is relatively constant, it does change by a mere few degrees. Such slight variations have sent vast sheets of glaciers from the poles to the tropics and entombed Earth within a frozen skin of solid ice. Luckily, Earth has managed to escape these so-called snowball states. But scientists are not sure whether the same will be true for planets like Mars with larger variations in their tilts. ...

As such, a stable tilt just might be a necessary ingredient for life. It’s an interesting finding given that the Earth’s tilt never changes drastically thanks to the Moon. And yet astronomers don’t know how common such moons are within the galaxy, said John Armstrong, an astronomer at Weber State University in Utah. If they turn out to be uncommon across the galaxy, it could mean that such stability — and therefore life — is hard to come by.
That's right, the Earth has a tilt that is stabilized by the Moon, giving regular seasons over billions of years. Having a single large moon is probably very unusual.

Earth also has large amounts of surface water, geologic activity (volcanoes and plate tectonics), a large variety of minerals, etc. It also has a huge outer planet, Jupiter, to further stabilize the orbit. There are probably many other such factors that I don't even know about.

We don't have the ability to detect whether distant planets have these features. But common sense would indicate that they would be very unlikely. I think it is likely that Earth has the only intelligent life of this galaxy.

Wednesday, June 20, 2018

Standard Model of Physics at 50

Yvette Cendes writes in SciAm:
The Standard Model (of Physics) at 50

It has successfully predicted many particles, including the Higgs Boson, and has led to 55 Nobels so far, but there’s plenty it still can’t account for

Just over a half-century ago, the physicist Steven Weinberg published a seminal paper titled "A Model of Leptons" in the journal Physical Review Letters. It was just three pages long, but its contents were revolutionary: in the paper, Weinberg outlined the core of the theory now known as the Standard Model, which governs elementary particles.
If this was really such a great paper, then why didn't anyone notice at the time?

As you can see here, it was only cited once in 1968 (by Salam), once in 1969, and once in 1970.

The paper proposed a model for electroweak interactions. It was the same as quantum electrodynamics, except that the gauge group was U(2) instead of U(1), a Higgs field was added to make the weak force short range, and no one knew how to cancel the infinities.

The idea of using U(2) for weak/electroweak was due to Schwinger and Glashow in 1961, and the idea of using a Higgs was due to Higgs and others in 1964. A scheme for canceling infinities in a gauge theory was published by 't Hooft in 1971.

So Weinberg's paper was no big deal at the time. It did not solve any physical problem. It wasn't much of a theory, because he had no way of computing anything. By 2015, it had 9289 citations, no. 2 on the list.

I am not trying to put down Weinberg, but he and Salam lucked into that Nobel prize. The works of Higgs and 't Hooft were more essential to creating the Standard Model.

The SciAm author ends with a complaint about a physics party:
And one Nobel laureate, who shall go unnamed, proceeded to frame our introduction by stating I was clearly invited because I was pretty, and that I looked old enough to finish my PhD already. (The Nobel Prize in Physics is still such an old boys club that only two women have ever won the prize out of 207 recipients. The last was in 1962 — a greater gap than in any other field, and not for a lack of good scientists.)
I guess she is saying that female physicists were unfairly deprived of Nobel prizes, but maybe they were too pretty, or taking care of babies instead of finishing a PhD.

Monday, June 18, 2018

Video rant against Jewish Physics

I just found an amusing video from a couple of years ago titled Weev talks about relativity. Weev is a well-known internet troll, and doesn't actually say much about relativity, but has two others talking about relativity.

While they talk, the video shows flashes of Donald Trump, and news footage related to Trump. I guess this was posted during Trump's campaign for the Presidency, and Weev was a Trump supporter.

One rants about "Jewish Physics", while the other is a skeptic. The main guy talks about how physics has gotten away from experiment, and says a lot of it is "academic circle jerk ... like the epicircles of astronomy". He means "epicycles".

He argues that Einstein's special relativity was just untestable philosophical ideas, not physics. He just took the math and theory from Lorentz's papers of 10 years earlier, and added some untestable philosophical re-imagining about there being no place of rest. Weev adds that Einstein did not cite his sources.

The skeptic doubts that there could be a Jewish conspiracy about such matters, and the main guy says that it is not really a conspiracy, and not just Jews. It is more a matter that Jews like philosophical unscientific ideas.

The explanations are a little garbled, but most of it is essentially correct. It is true that Einstein's famous 1905 special relativity theory was mathematically and observationally equivalent to what Lorentz had already published, and which Einstein failed to cite. Historians of science agree on this.

It is also true that saying that there is no rest frame, or no aether, is just untestable philosophy. It does not add anything useful to Lorentz's theory.

You might say that it is testable, because you could prove Einstein wrong by finding a rest frame or aether. In fact, you can define a rest frame in terms of the cosmic background microwave radiation, and an aether in terms of the quantum electrodynamic vacuum, but nobody says this proves Einstein wrong.

You might say that Einstein's 1905 paper was a big advance if it were conceptually superior, or if it led to other advances in the field. But that is not true. Einstein's approach was not seen as being much different from Lorentz's at the time. The conceptually superior approach was considered to be the spacetime geometry relativity of Poincare and Minkowski, and subsequent work built on Minkowski's, not Einstein's.

Einstein has sort of a cult following, and most of the followers appear to be non-Jews. So how is this Jewish Physics?

The term "Jewish Physics" is inaccurate and unnecessarily inflammatory, but there can't be any doubt that Einstein is a Jewish saint. He was aggressively promoted and idolized by Jews. I hesitate to call it a religious thing, as Einstein was not very religious and it is the secular Jews who idolize him, not the orthodox Jews.

It is also hard to separate Einstein's science from his ideological beliefs. His deepest beliefs include Zionism, determinism, Jewish pantheism, anti-positivism, and Communism. When he attacked quantum mechanics, he relied on his anti-positivism and determinism. When he promoted his version of relativity over that of Lorentz, he relied on his beliefs, and not any math or empirical science.

You would think that his membership in Communist front organizations would detract from his popularity, but it does not seem to have had any such effect. It is also well-known that Einstein did not really contribute anything to special relativity, as Whittaker explained in his 1953 book.

Einstein did have substantial scientific accomplishments, but not nearly enough to be TIME Man of the Century. Idolizing him is largely ideological.

On another topic, some freshman physics courses give a problem on what would happen if everyone in China jumped up and down at the same time. Pure fiction, right? Apparently everyone in Mexico jumped at the same time, and it caused an earthquake!
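As a rough sanity check on that claim (my own back-of-envelope numbers, not from any physics course or news report): the energy released by everyone in a large country jumping at once is tiny on an earthquake scale.

```python
import math

# Hypothetical round numbers (assumptions for illustration only):
population = 1.4e9   # people in China
mass = 60.0          # average mass per person, kg
height = 0.3         # jump height, m
g = 9.81             # gravitational acceleration, m/s^2

# Total potential energy released when everyone lands.
energy = population * mass * g * height   # joules

# Equivalent moment magnitude via the standard Gutenberg-Richter
# energy relation: log10(E) = 1.5 * Mw + 4.8 (E in joules).
magnitude = (math.log10(energy) - 4.8) / 1.5

print(f"{energy:.2e} J, magnitude ~{magnitude:.1f}")
```

This comes out to roughly a magnitude-4.4 quake, and that is assuming all of the energy couples into seismic waves, which it would not; in practice the ground absorbs nearly all of it. So the Mexico "earthquake" was almost certainly a coincidence.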

Thursday, June 14, 2018

Rovelli defends ancient philosophy for physics

Carlo Rovelli writes Physics Needs Philosophy. Philosophy Needs Physics.
Against Philosophy is the title of a chapter of a book by one of the great physicists of the last generation: Steven Weinberg, Nobel Prize winner and one of the architects of the Standard Model of elementary particle physics. Weinberg argues eloquently that philosophy is more damaging than helpful for physics - although it might provide some good ideas at times, it is often a straightjacket that physicists have to free themselves from. More radically, Stephen Hawking famously wrote that "philosophy is dead" because the big questions that used to be discussed by philosophers are now in the hands of physicists. Similar views are widespread among scientists, and scientists do not keep them to themselves. Neil de Grasse Tyson, a well known figure in the popularisation of science in America, publicly stated in the same vein: "... we learn about the expanding universe, ... we learn about quantum physics, each of which falls so far out of what you can deduce from your armchair that the whole community of philosophers ... was rendered essentially obsolete."
That's right, many modern physicists have concluded that philosophy is dead to them.

Rovelli disagrees, but his argument is almost entirely based on Aristotle and other long-dead philosophers, and by arguing in favor of philosophical thinking.

His examples:
But the direct influence of philosophy on physics is certainly not limited to the birth of modern physics. It can be recognised in every major step. Take the twentieth century. Both major advances made by twentieth century physics were strongly influenced by philosophy. They would have been inconceivable without the philosophy of the time. Quantum mechanics springs from an intuition due to Heisenberg, grounded in the strongly positivist philosophical atmosphere in which he found himself: one gets knowledge by restricting oneself to what is observable. The abstract of Heisenberg's 1925 milestone paper on quantum theory is explicit about this:

"The aim of this work is to set the basis for a theory of quantum mechanics based exclusively on relations between quantities that are in principle observable."

The same distinctly philosophical attitude nourished Einstein's discovery of special relativity: by restricting to what is observable, we recognise that the notion of simultaneity is misleading. Einstein explicitly recognised his debt to the philosophical writings of Mach and Poincaré. Without these inputs, his special relativity would have been inconceivable. Although not the same, the philosophical influences on Einstein's conception of general relativity were even stronger. Once again, he was explicit in recognising his debt to philosophy, this time to the critical thinking of Leibniz, Berkeley, and Mach.
That's right, the discoveries of relativity and quantum mechanics were strongly influenced by logical positivist thinking.

His history is not quite correct. Einstein did not discover special relativity or the problem of simultaneity. He got most of SR from Lorentz, and got synchronization from Poincare. Up to his dying day, he never acknowledged his debt to Poincare. Einstein did not really buy into the positivist philosophy, and disavowed positivist thinking about QM. He explicitly disavowed positivism in 1945.

I agree that logical positivism has been important to physics, but it has been a dead philosophy since WWII.

I defend logical positivism, but I am in very small minority. There are no reputable philosophers who defend it.

Post-WWII philosophers not only reject logical positivism, they reject the scientific method and much of what modern science is all about.

Siding with today's philosophers is essentially the same as being anti-science. Modern philosophy is at war with modern science.

The only philosophers of the last century discussed by Rovelli are Popper and Kuhn, and Rovelli concedes that they have had a negative influence on physics.
I suspect that part of the problem is precisely that the dominant ideas of Popper and Kuhn have misled current theoretical investigations. Physicists have been too casual in dismissing the insights of successful established theories. Misled by Kuhn’s insistence on incommensurability across scientific revolutions, they fail to build on what we already know, which is how science has always moved forward.
Rovelli says his "own technical area", loop quantum gravity, does not make any sense, and he hopes philosophers will help make sense of it. No chance of that. Loop quantum gravity is a dead end. No good physics has come out of that field, and nothing ever will.

His article is really a defense of ancient philosophy. There is no example of any good from modern philosophers.

Wednesday, June 13, 2018

We dummies should not question the super-smart

Dr. Bee has written a book about failed theories in modern physics, and complains:
“By writing [this book], I waived my hopes of ever getting tenure.” ...

I am not tenured and I do not have a tenure-track position, so not like someone threatened me. I presently have a temporary contract which will run out next year. What I should be doing right now is applying for faculty positions. Now imagine you work at some institution which has a group in my research area. Everyone is happily producing papers in record numbers, but I go around and say this is a waste of money. Would you give me a job? You probably wouldn’t. I probably wouldn’t give me a job either. ...

I have never been an easy fit to academia. I guess I was hoping I’d grow into it, but with time my fit has only become more uneasy. At some point I simply concluded I have had enough of this nonsense. I don’t want to be associated with a community which wastes tax-money because its practitioners think they are morally and intellectually so superior that they cannot possibly be affected by cognitive biases. You only have to read the comments on this blog to witness the origin of the problem, as with commenters who work in the field laughing off the idea that their objectivity can possibly be affected by working in echo-chambers. I can’t even.
I haven't read her book, but it is definitely true that a huge amount of money is pumped into worthless theories, but the leading scholars will not tell the truth about them.

This triggers LuMo into one of his usual rants:
Less than 1,000 people are actually being paid as string theorists or something "really close" in the world now, and even if you realistically assume that the average string theorist is paid more than the average person, the fraction of the mankind's money that goes to string theory is some "one millionth" or so. Or 1/100,000 of the money that goes to porn or any other big industry. Moreover, the funds are allocated by special institutions or donors – they're too technical decisions that the taxpayer simply shouldn't make directly. ...

You don't really need to be a string theorist to understand that string theorists are the cream of the cream of the cream. Most people have met someone who belongs to the cream of the cream, e.g. an astronaut. Well, there's some extra selection related to the theoretical physics-related abilities needed to become a string theorist. ...

If someone has dedicated a few years to these matters and he has failed to learn string theory and to understand that it's the only known promising way to go beyond quantum field theory as of 2018, then I can assure you that his IQ is below 150. ...

If you investigate what smart enough people – who have cared about these matters – honestly think about string theory, you may really measure their intelligence in this way. The more they appreciate string theory, the smarter they are.
Okay, I admit it, my IQ is only 149, and I do not see how string theory offers any promise to move quantum field theory forward. Research in the field peaked in the 1990s, and it has not even made any significant progress in the last 20 years. The theory still has no known relationship to any observable phenomenon. It is just a mathematical idea that did not pan out.

Tuesday, June 12, 2018

Elegance is the fuzziest aspect of beauty

Physicist Dr. Bee writes, in connection with her new book:
Elegance is the fuzziest aspect of beauty. It is often described as an element of surprise, the “aha-effect,” or the discovery of unexpected connections. One specific aspect of elegance is a theory’s resistance to change, often referred to as “rigidity” or (misleadingly, I think) as the ability of a theory to “explain itself.”

By no way do I mean to propose this as a definition of beauty; it is merely a summary of what physicists mean when they say a theory is beautiful. General relativity, string theory, grand unification, and supersymmetry score high on all three aspects of beauty. The standard model, modified gravity, or asymptotically safe gravity, not so much.

But while physicists largely agree on what they mean by beauty, in some cases they disagree on whether a theory fulfills the requirements. This is the case most prominently for quantum mechanics and the multiverse.
I do not agree that grand unification and supersymmetry are beautiful. They require 100s of new parameters and particles in the theory, over the standard model's 20 or so.

A comment says:
A beautiful equation is also one that exhibits the fewest free parameters while explaining the most physics. That's why general relativity is beautiful while the Lagrangian of the Standard Model is ugly as hell. They both work, one by itself and the other by brute force, although I would never compare one with the other.
No, I disagree. This is like saying that the periodic table of the chemical elements is ugly as hell, because it has 92+ elements and some irregularities. It was vastly simpler than any other categorization of the 1000s of known substances, and put them into simple patterns.

The standard model is just quarks, electrons, and neutrinos, with some flavors, generations, colors, and anti-particles, and some bosons for transmitting forces.