Thursday, September 20, 2018

Mermin defends Copenhagen Interpretation

N. David Mermin has written many popular essays explaining quantum mechanics, and now he summarizes his views on how to interpret the theory in Making better sense of quantum mechanics.

He prefers something called QBism, but nearly everything he says could be considered a defense of the Copenhagen interpretation.
Much of the ambiguity and confusion at the foundations of quantum mechanics stems from an almost universal refusal to recognize that individual personal experience is at the foundation of the story each of us tells about the world. Orthodox ("Copenhagen") thinking about quantum foundations overlooks this central role of private personal experience, seeking to replace it by impersonal features of a common "classical" external world.
He is drawing a fairly trivial distinction between his QBism view and Copenhagen. He illustrates with this famous (but possibly paraphrased) Bohr quote:
When asked whether the algorithm of quantum mechanics could be considered as somehow mirroring an underlying quantum world, Bohr would answer "There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature."
Mermin's only quibble with this is that he prefers "each of us can say" to "we can say". That is, he doesn't like the way Bohr lumps together everyone's observations and calls it the classical world.

Okay, I guess that distinction makes a difference when discussing Wigner's Friend, a thought experiment where one observer watches another. But for the most part, Mermin likes the Copenhagen interpretation, and successfully rebuts those who say that the interpretation is deficient somehow.

Monday, September 17, 2018

Correcting errors about EPR paradox

Blake C. Stacey writes about the Einstein–Podolsky–Rosen paradox:
Misreading EPR: Variations on an Incorrect Theme

Notwithstanding its great influence in modern physics, the EPR thought-experiment has been explained incorrectly a surprising number of times.
He then gives examples of famous authors who get EPR wrong.

He gets to the heart of the Bohr-Einstein dispute:
EPR write, near the end of their paper, "[O]ne would not arrive at our conclusion if one insisted that two or more physical quantities can be regarded as simultaneous elements of reality only when they can be simultaneously measured or predicted."

The response that Bohr could have made: "Yes."

EPR briefly considered the implications of this idea and then dismissed it with the remark, "No reasonable definition of reality could be expected to permit this."

But that is exactly what Bohr did. A possible reply in the Bohrian vein: "Could a `reasonable definition of reality' permit so basic a fact as the simultaneity of two events to be dependent on the observer's frame of reference? Many notions familiar from everyday life only become well-defined in relativity theory once we fix a Lorentz frame. Likewise, many statements in quantum theory only become well-defined once we have given a complete description of the experimental apparatus and its arrangement."

This is not a quote from anywhere in Bohr's writings, but it is fairly in the tradition of his Warsaw lecture, where he put considerable emphasis on what he felt to be "deepgoing analogies" between quantum theory and relativity.
In spite of all differences in the physical problems concerned, relativity theory and quantum theory possess striking similarities in a purely logical aspect. In both cases we are confronted with novel aspects of the observational problem, involving a revision of customary ideas of physical reality, and originating in the recognition of general laws of nature which do not directly affect practical experience. The impossibility of an unambiguous separation between space and time without reference to the observer, and the impossibility of a sharp separation between the behavior of objects and their interaction with the means of observation are, in fact, straightforward consequences of the existence of a maximum velocity of propagation of all actions and of a minimum quantity of any action, respectively.
This is well put. The aim of EPR is to present a simple example of entangled particles, and to argue that no reasonable definition of reality would make the reality of two observables depend on whether they can be simultaneously measured.

And yet that is a core teaching of quantum mechanics, from about 10 years earlier. Two non-commuting observables cannot be simultaneously measured precisely. That is the Heisenberg uncertainty principle.

Theories that assign definite simultaneous values to observables are called hidden variable theories. All the reasonable ones have been ruled out by the Bell test experiments.

Complaining that the uncertainty principle violates pre-conceptions about reality is like complaining that relativity violates pre-conceptions about simultaneity. Of course it does. Get with the program.

There are crackpots who reject relativity because of the Twin Paradox, or some other such surprising effect. The physics community treats them as crackpots. And yet the community tolerates those who get excited by EPR, even tho EPR makes essentially the same mistake.

Saying "maximum velocity of propagation" is a way of stating the core of relativity theory, and saying "minimum quantity of any action" is a way of stating the core of quantum mechanics. The minimum is Planck's constant h, or h-bar. The Heisenberg uncertainties are proportional to this constant. That minimum makes it impossible to measure position and momentum precisely at the same time, just as the finite speed of light makes it impossible to define simultaneity independently of the observer.
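To make the proportionality explicit, the standard textbook form of the relation (which is not in dispute here) is:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad\text{and more generally}\qquad
\Delta A \,\Delta B \;\ge\; \tfrac{1}{2}\left|\langle [\hat{A},\hat{B}] \rangle\right|
```

The second (Robertson) form shows that the obstruction is exactly the non-commutativity: when the two observables commute, the right-hand side vanishes and simultaneous sharp values are allowed.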

Thursday, September 13, 2018

Joint Hubble-Lemaitre credit is a bad idea

I mentioned renaming the Hubble Law, as a way to correct history, but it appears that they have made the matter worse.

The respected science historian Helge Kragh writes:
The Hubble law, widely considered the first observational basis for the expansion of the universe, may in the future be known as the Hubble-Lemaître law. This is what the General Assembly of the International Astronomical Union recommended at its recent meeting in Vienna. However, the resolution in favour of a renamed law is problematic in so far as concerns its arguments based on the history of cosmology in the relevant period from about 1927 to the early 1930s. A critical examination of the resolution reveals flaws of a non-trivial nature. The purpose of this note is to highlight these problems and to provide a better historically informed background for the voting among the union's members, which in a few months' time will result in either a confirmation or a rejection of the decision made by the General Assembly.
He notes:
Until the mid-1940s no astronomer or physicist seems to have clearly identified Hubble as the discoverer of the cosmic expansion. Indeed, when Hubble went into his grave in 1953 he was happily unaware that he had discovered the expansion of the universe.
He says the cited evidence that Hubble met with Lemaitre is wrong. Furthermore, there are really two discoveries being confused -- the cosmic expansion and the empirical redshift-distance law. Hubble had a role in the latter, but not the former.
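For concreteness, the empirical redshift-distance law at issue is usually written as:

```latex
v \;=\; H_0\, d
```

where v is a galaxy's recession velocity inferred from its redshift, d is its distance, and H_0 is what is now called the Hubble constant. Lemaitre in 1927 and Hubble in 1929 both estimated the rate at several hundred km/s per megaparsec from essentially the same data, far above the modern value of roughly 70.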

Monday, September 10, 2018

Where exactly does probability enter the theory?

Peter Woit writes:
A central question of the interpretation of quantum mechanics is that of “where exactly does probability enter the theory?”. The simple question that has been bothering me is that of why one can’t just take as answer the same place as in the classical theory: in one’s lack of precise knowledge about the initial state.
Lee Smolin says he is writing a book, and there are 3 options: (1) orthodox quantum mechanics, (2) many-worlds, (3) hidden variable theories, like pilot waves. All attempts at (2) have failed, so he says "My personal view is that option 3) is the only way forward for physics."

This is a pretty crazy opinion. No one has been able to make sense out of probabilities in a many-worlds theory, and Bell test experiments have ruled out all sensible hidden variable theories.

Lubos Motl posts a rant against them, as usual:
Quantum mechanics was born 93 years ago but it's still normal for people who essentially or literally claim to be theoretical physicists to admit that they misunderstand even the most basic questions about the field. As a kid, I was shocked that people could have doubted heliocentrism and other things pretty much a century after these things were convincingly justified. But in recent years, I saw it would be totally unfair to dismiss those folks as medieval morons. The "modern morons" (or perhaps "postmodern morons") keep on overlooking and denying the basic scientific discoveries for a century, too! And this centennial delay is arguably more embarrassing today because there exist faster tools to spread the knowledge than the tools in the Middle Ages.
Lumo is mostly right, but it is possible to blame uncertainties on lack of knowledge of the initial state. It is theoretically possible that if you had perfect knowledge about a radioactive nucleus, then you would know when it would decay.

However it is also true that measurements are not going to give you that knowledge, based on what we know about quantum mechanics. This is what makes determinism more of a philosophical question than a scientific one.

I agree with Lumo that deriving the Born rule is silly. The Born rule is part of quantum theory. Deriving it from something equivalent might please some theorists, but really is just a mathematical exercise with no scientific significance.
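The rule in question is a one-line postulate. If a system is in state |ψ⟩ and a measurement can yield outcomes corresponding to orthonormal states |i⟩, then:

```latex
P(i) \;=\; \left|\langle i \mid \psi \rangle\right|^2
```

Attempts to derive it invariably assume something equivalent, such as the non-contextuality hypothesis in Gleason's theorem.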

This question about the origin of probabilities only makes sense to those who view probability as the essential thing that makes quantum mechanics different from classical mechanics. I do not have that view. Probabilities enter into all of science. It is hard to imagine any scientific theory that can be tested without some resort to a probabilistic analysis. So I don't think that the appearance of probability requires any special explanation. How else would any theory work?

It is very strange that respectable physicists can have such bizarre views about things that were settled about a century ago. I agree with Lumo about that.

Saturday, September 8, 2018

Another claim for QC real soon

Latest quantum computer hype:
Today the [Berkeley-based startup Rigetti] launched a project in the mold of Amazon Web Services (AWS) called Quantum Cloud Services. "What this platform achieves for the very first time is an integrated computing system that is the first quantum cloud services architecture," says Chad Rigetti, founder and CEO of his namesake company. The dozen initial users Rigetti has announced include biotech and chemistry companies harnessing quantum technology to study complex molecules in order to develop new drugs. The particular operations that the quantum end of the system can do, while still limited and error-prone, are nearly good enough to boost the performance of traditional computers beyond what they could do on their own -- a coming milestone called quantum advantage. "My guess is this could happen anytime from six to 36 months out," says Rigetti.
My guess is that their investors said that they require results in 6 to 36 months.

There is no chance that this company will have any success before the funding runs out.

Tuesday, September 4, 2018

Vote to rename law to Hubble-Lemaitre Law

Astronomers have long credited Hubble for discovering the expansion of the universe, even tho he had little to do with it.

If they can decide that Pluto is not a planet, then they can correct this error. Now they will vote on it:
Astronomers are engaged in a lively debate over plans to rename one of the laws of physics.

It emerged overnight at the 30th Meeting of the International Astronomical Union (IAU), in Vienna, where members of the general assembly considered a resolution on amending the name of the Hubble Law to the Hubble-Lemaître Law.

The resolution aims to credit the work of the Belgian astronomer Georges Lemaître and his contribution—along with the American astronomer Edwin Hubble — to our understanding of the expansion of the universe.

While most (but not all) members at the meeting were in favor of the resolution, a decision allowed all members of the International Astronomical Union a chance to vote. Subsequently, voting was downgraded to a straw vote and the resolution will formally be voted on by an electronic vote at a later date.
As the article explains, the Belgian Catholic priest published both the theory and the experimental evidence for it, before Hubble had a clue. Hubble did later publish data confirming Lemaitre's result, since he had access to a better telescope, but his data was still crude and not really much better.

It is an amusing historical fact that Einstein, Eddington, and other leading cosmologists clung to the idea of a static universe, while a Catholic priest and Vatican astronomers led the way to convincing everyone that the universe had a beginning in what is now called the Big Bang.
But Hubble was not the first. In 1927, Georges Lemaître had already published an article on the expansion of the universe. His article was written in French and published in a Belgian journal.

Lemaître presented a theoretical foundation for the expansion of the universe and used the astronomical data (the very same data that Hubble used in his 1929 article) to infer the rate at which the universe is expanding.

In 1928, the American mathematician and physicist Howard Robertson also published an article in Philosophical Magazine and Journal of Science, where he derived the formula for the expansion of the universe and inferred the rate of expansion from the same data that were used by Lemaître (a year before) and Hubble (a year after). ...

In January 1930 at the meeting of the Royal Astronomical Society in London, the English astronomer, physicist, and mathematician Arthur Eddington raised the problem of the expansion of the universe and the lack of any theory that would satisfactorily explain this phenomenon.

When Lemaître found out about this, he wrote to Eddington to remind him about his 1927 paper, where he laid the theoretical foundation for the expansion of the universe.
It should be called the Lemaitre Law, or maybe the Lemaitre-Robertson Law, if you want to give an American some credit.

Thursday, August 30, 2018

Modifying gravity is called "cheating"

Gizmodo reports:
A fight over the very nature of the universe has turned ugly on social media and in the popular science press, complete with accusations of “cheating” and ad hominem attacks on Twitter. Most of the universe is hiding, and some scientists disagree over where it has gone.

It’s quite literally a story as old as time. Wherever you look in the cosmos, things don’t seem to add up. Our human observations of the universe’s structure—as far back as we can observe—suggest that there’s around five times more mass than we see in the galaxies, stars, dust, planets, brown dwarfs, and black holes that telescopes have observed directly. We call this mystery mass, or the mystery as a whole, “dark matter.”

Several thousand physicists researching these dark matter-related mysteries will tell you that dark matter is a particle, the way that electrons and protons are particles, that only appears to interact with other known particles via the gravitational pull of its mass. But there are a few dozen physicists who instead think that a set of ideas called “modified gravity” might one day explain these mysteries. Modified gravity would do away with the need for dark matter via a tweak to the laws of gravity. ...

Then, in June, the most sensitive dark matter particle-hunting experiment, called XENON, announced it had once again failed to find a dark matter particle. A story titled “Is Dark Matter Real?” followed in the August issue of Scientific American, ...

“It’s only if you ignore all of modern cosmology that the modified gravity alternative looks viable. Selectively ignoring the robust evidence that contradicts you may win you a debate in the eyes of the general public. But in the scientific realm, the evidence has already decided the matter, and 5/6ths of it is dark.”
In other words, it is a cheat to tweak the laws of gravity to accommodate the slow galaxy rotation, but not cheat to hypothesize a new particle.

Hoping for a dark matter particle was one of the main reasons for believing in SUSY, as SUSY requires about 100 new particles. Maybe the lightest one is the dark matter particle.

Another little controversy is whether the evidence for dark matter already contradicts the Standard Model. Not necessarily. Wilczek pushes axions as an explanation that I think is consistent with the SM.

Also, the SM only tries to explain strong, weak, and electromagnetic interactions. Dark matter could be some substance that does not interact with those forces, and thus could exist independently from the SM.
In Gizmodo’s conversations with 13 physicists studying dark matter, a pretty clear picture emerged: Dark matter as an undiscovered population of particles that influence the universe through gravity is the prevailing paradigm for a reason, and will continue as such until a theory comes along with the same predictive power for the universe’s grandest features.
It is odd to call the substance a particle. We only call electrons particles because of how they interact with light, but dark matter does not interact with light.
“Everywhere the dark matter theories make predictions, they get the right answers,” Scott Dodelson, a Carnegie Mellon physics professor, told Gizmodo. But he offered a caveat: “They can’t make predictions as well on small scales,” such as the scales of galaxies.
I am surprised that anyone would brag about a theory that only works on scales much larger than galaxies.

Monday, August 27, 2018

Billion new dollars for quantum computation

Peter Woit announces:
Moving through the US Congress is a National Quantum Initiative Act, which would provide over a billion dollars in funding for things related to quantum computation.
A billion dollars?!

IBM and Google both promised quantum supremacy in 2017. We have no announcement of QS, or any explanation for the failure.

I am not the only one saying it is impossible. See this recent Quanta mag article for other prominent naysayers.

If Congress were to have hearings on this funding, I would expect physicists to be extremely reluctant to throw cold water on lucrative funding for their colleagues. Maybe that is what is keeping Scott Aaronson quiet.

Previously Woit commented:
It’s remarkable to see publicly acknowledged by string theorists just how damaging to their subject multiverse mania has been, and rather bizarre to see that they attribute the problem to my book and Lee Smolin’s. The source of the damage is actually different books, the ones promoting the multiverse, for example this one.
This was induced by some string theorists still complaining about those books, which appeared around 2005.

It is bizarre for anyone to be bothered by some criticism from 13 years ago. The two books did not even say the same thing. You would think that the string theorists would just publish their rebuttal and move on.

Apparently they had no rebuttal, and they depended on everyone going along with the fiction that string theory was working.

Likewise, the quantum computation folks depend on everyone going along with the idea that we are about to have quantum computers (with quantum supremacy), and it will be a big technological advance. We don't need two books on the subject, as it is pretty obvious that IBM and Google are not delivering what they promised.

Saturday, August 25, 2018

Professor arrested for pocketing $4 in tips

Quantum computer complexity theorist Scott Aaronson seems to have survived his latest personal struggle, with his worldview intact.

He bought a smoothie, paid with a credit card, and took $4 from the tip jar. An employee approached him, and politely explained that the tip jar is for tips. He grudgingly gave $1 back.

The manager then called the cops, and a cop interviewed him to confirm what he had done. He was still oblivious to what was going on, so the cop handcuffed him and arrested him. That got his attention, and the manager agreed to drop the charges when the $4 was returned.

There is a biography about physicist Paul Dirac that calls him "the world's strangest man" because of a few silly anecdotes about him being a stereotypical absent-minded professor. That biographer has not met Scott.

Scott says that it was all the fault of the smoothie maker for not clearly explaining to him that he does not get to take change from the tip jar if he pays with a credit card. Scott is correct that there was a failure of communication, and surely both sides are at least somewhat to blame.

I am not posting this to criticize Scott. Just read his blog where he posts enuf negative info about himself. If I wanted to badmouth him, I would just link to his various posts where he has admitted to being wrong about various things. I am inclined to side with him as a fellow nerd who is frustrated by those who fail to explain themselves in a more logical manner. I am just posting it because I think that it is funny. After all, Scott has been named as one of the 30 smartest people alive and also one of the top 10 smartest people. And yet there are people with about 50 fewer IQ points who have no trouble buying smoothies, or understanding a request to put the tip money back.

Monday, August 6, 2018

Copenhagen is rooted in logical positivism

From an AAAS Science mag book review:
Most physicists still frame quantum problems through the sole lens of the so-called “Copenhagen interpretation,” the loose set of assumptions Niels Bohr and his colleagues developed to make sense of the strange quantum phenomena they discovered in the 1920s and 1930s. However, he warns, the apparent success of the Copenhagen interpretation hides profound failures.

The approach of Bohr and his followers, Becker argues, was ultimately rooted in logical positivism, an early-20th-century philosophical movement that attempted to limit science to what is empirically verifiable. By the mid-20th century, philosophers such as Thomas Kuhn and W. V. O. Quine had completely discredited this untenable view of science, Becker continues. The end of logical positivism, he concludes, should have led to the demise of the Copenhagen interpretation. Yet, physicists maintain that it is the only viable approach to quantum mechanics.

As Becker demonstrates, the physics community’s faith in Bohr’s wisdom rapidly transformed into a pervasive censorship that stifled any opposition.
This is partially correct. Quantum mechanics and the Copenhagen Interpretation were rooted in logical positivism. Much of XX century physics was influenced, for the better, by logical positivism and related views.

It is also true that XX century philosophers abandoned logical positivism, for largely stupid reasons. They decided that there was no such thing as truth.

This created a huge split between the scientific world, which searches for truth, and the philosophical world, which contends that there is no such thing as truth. These views are irreconcilable. Science and Philosophy have become like Astronomy and Astrology. Each thinks that the other is so silly that any conversation is pointless.

Unfortunately, many physicists are now infected with anti-positivist views of quantum mechanics, and say that there is something wrong with it. Those physicists complain, but have gotten nowhere with their silly ideas.

Wednesday, August 1, 2018

Einstein's 1905 relativity had no new dogmas

Lubos Motl writes:
Einstein's breakthrough was far deeper, more philosophical than assumed

Relativity is about general, qualitative principles, not about light or particular objects and gadgets

Some days ago, we had interesting discussions about the special theory of relativity, its main message, the way of thinking, the essence of Einstein's genius and his paradigm shift, and the good and bad ways how relativity is presented to the kids and others. ...

Did the physicists before Einstein spend their days by screaming that the simultaneity of events is absolute? They didn't. It was an assumption that they were making all the time. All of science totally depended on it. But it seemed too obvious that they didn't even articulate that they were making this assumption. When they were describing the switch to another inertial system, they needed to use the Galilean transformation and at that moment, it became clear that they were assuming something. But everyone instinctively thought that one shouldn't question such an assumption. No one has even had the idea to question it. And that's why they couldn't find relativity before Einstein.

Einstein has figured out that some of these assumptions were just wrong and he replaced them with "new scientific dogmas".
They did find relativity before Einstein. With all the formulas. In particular, what Einstein said about simultaneity and synchronization of clocks was straight from what Poincare said five years earlier.
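The relativity of simultaneity that both men described is a direct reading of the Lorentz transformation:

```latex
t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad
x' = \gamma\,(x - v t), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

Two events with the same t but different x get different t' in the moving frame, which is exactly the clock-synchronization issue Poincare analyzed in 1900.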

Motl repeats the widespread belief that Einstein found relativity by repudiating conventional wisdom and introducing new dogmas. That is not true at all. The most widely accepted theory on the matter was Lorentz's 1895 theory. Lorentz had already received a Nobel prize for it in 1902.

Einstein's big dogmas were that the speed of light is constant and motion is relative. Einstein later admitted that he got the constant speed of light straight from Lorentz. He also got the relativity postulate from Lorentz, although it was Poincare who really emphasized it, so maybe he got it from Poincare.

Einstein did later argue against rival theories, such as Abraham's, but he never claimed that Lorentz or Poincare were wrong about their relativity theories. Other authors referred to the "Lorentz-Einstein theory", as if there were no difference. Even when Einstein was credited with saying something different from Lorentz, he insisted that his theory was the same as Lorentz's.

Einstein did sometimes pretend to have made conceptual advances in his formulation of special relativity, such as with the aether and local time. But what Einstein said on these matters was essentially the same as what Lorentz said many years earlier.

The formulation of special relativity that is accepted today is the geometric spacetime version presented by Poincare and Minkowski, not Einstein's. Poincare and Minkowski did explain how their view was different from Lorentz's.

Monday, July 30, 2018

Was Copernicus really heliocentric?

Wikipedia currently has a debate on whether the Copernican system was heliocentric. See the Talk pages for Nicolaus Copernicus and Copernican heliocentrism.

This is a diversion from the usual Copernicus argument, which is whether he was Polish or German. He is customarily called Polish, but nation-states were not well defined, and there is an argument that he was more German than Polish.

The main point of confusion is that Copernicus did not really put the Sun at the center of the Earth's orbit. It was displaced by 1/25 to 1/31 of the Earth's orbit radius.

It appears that the center of the Earth's orbit revolved around the Sun, but you could also think of the Sun as revolving around the center of the Earth's orbit.

So is it fair to say that the Sun is at the center of the universe? Maybe if you mean that the Sun is near the center of the planetary orbits. Or that the Sun was at the center of the fixed stars. Or that the word "center" is used loosely to contrast with an Earth-centered system.

In Kepler's system, and Newton's, the Sun is not at the center of any orbit, but at a focus of an ellipse. It was later learned that the Sun orbits the center of the Milky Way galaxy, which harbors a supermassive black hole.

I am not sure why anyone attaches such great importance to these issues. Motion is relative, and depends on your frame of reference. The Ptolemy and Copernicus models had essentially the same scientific merit and accuracy. They mainly differed in their choice of a frame of reference, and in their dubious arguments for preferring those frames.

People act as if the Copernicus choice of frame was one of the great intellectual advances of all time.

Suppose ancient map makers put East at the top of the page. Then one day a map maker put North at the top of the page. Would we credit him for being a great intellectual hero? Of course not.

There is an argument that the forces are easier to understand if you choose the frame so that the center of mass is stationary. Okay, but that was not really Copernicus's argument. There were ancient Greeks who thought it made more sense to put the Sun at the center because it was so much larger than the Earth. Yes, they very cleverly figured out that the Sun was much larger. There is a good logic to that also, but it is still just a choice of frame.

Friday, July 27, 2018

How physicists discovered the math of gauge theories

We have seen that if you are looking for theories obeying a locality axiom, and if you adopt a geometrical view, you are inevitably led to metric theories like general relativity, and gauge theories like electromagnetism. Those are the simplest theories obeying the axioms.

Formally, metric theories and gauge theories are very similar. Both satisfy the locality and geometry axioms. Both define the fields as the curvature of a connection on a bundle. The metric theories use the tangent bundle, while the gauge theories use an external group. Both use tensor calculus to codify the symmetries.

The Standard Model of particle physics is a gauge theory, with the group being U(2)xSU(3) instead of U(1), and explains the strong, weak, and electromagnetic interactions.

Pure gauge theories predict massless particles like the photon. The Standard Model also has a scalar Higgs field that breaks some of the symmetries and allows particles to have mass.

The curious thing, to me, is why it took so long to figure out that gauge theories were the key to constructing local field theories.
Newton published his theory of gravity in 1687, but was unhappy about the action-at-a-distance. He would have preferred a local field theory, but he could not figure out how to do it. Maxwell figured it out for electromagnetism in 1865, based on experiments of Faraday and others.

Nobody figured out the symmetries of Maxwell's equations until Lorentz and Poincare concocted theories to explain the Michelson-Morley experiment, in 1892-1905. Poincare showed in 1905 that a relativistic field theory for gravity resolves the action-at-a-distance paradoxes of Newtonian gravity. After Minkowski stressed the importance of the metric, the geometry, and tensor analysis to special relativity in 1908, Nordstrom, Grossmann, Einstein, and Hilbert figured out how to make gravity a local geometrical theory. Hermann Weyl combined gravity and electromagnetism into what he called a "gauge" theory in 1919.

Poincare turned electromagnetism into a geometric theory in 1905. He put a non-Euclidean geometry on spacetime, with its metric and symmetry group, and used the 4-vector potential to prove the covariance of Maxwell’s equations. But he did not notice that his potential was just the connection on a line bundle.
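In modern language, the 4-vector potential is the connection, and the electromagnetic field is its curvature:

```latex
F_{\mu\nu} \;=\; \partial_\mu A_\nu - \partial_\nu A_\mu,
\qquad
A_\mu \to A_\mu + \partial_\mu \chi
\ \Rightarrow\
F_{\mu\nu} \to F_{\mu\nu}
```

The components of F are the E and B fields, and the invariance of F under the gauge transformation of A is the line-bundle statement: different choices of A are just different local trivializations of the same connection.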

Electromagnetism was shown to be a renormalizable quantum field theory by Feynman, Schwinger, and others in the 1940s. 't Hooft showed that all gauge theories were renormalizable around 1970. Only after 1970 did physicists decide that gauge theories were the fundamental key to quantum field theory, and the Standard Model was constructed in the 1970s. They picked SU(3) for the strong interaction because particles had already been found that closely matched representations of SU(3).

It seems to me that all of relativity (special and general) and gauge theory (electromagnetism and the Standard Model) could have been derived mathematically from general principles, with little or no reference to experiment. It could have happened centuries ago.

Perhaps the mathematical sophistication was not there. Characterizing geometries in terms of symmetry transformations and their invariants was described in Klein's Erlangen Program, 1872. Newton did not use vector notation, which did not become popular until about 1890. Tensor analysis was developed after that. A modern understanding of manifolds and fiber bundles was not published until about 1950.

Hermann Weyl was a brilliant mathematician who surely had an intuitive understanding of all these things in about 1920. He could have worked out the details, if he had understood how essential they were to modern physics. Why didn’t he?

Even Einstein, who was supposedly the big advocate of deriving physics from first principles, never seems to have noticed that relativity could be derived from geometry and causality. Geometry probably would not have been one of his principles, as he never really accepted that relativity is a geometrical theory.

I am still trying to figure out who was the first to say, in print, that relativity and electromagnetism are the inevitable consequences of the geometrical and locality axioms. This seems to have been obvious in the 1970s. But who said it first?

Is there even a physics textbook today that explains this argument? You can find some of it mentioned in Wikipedia articles, such as Covariant formulation of classical electromagnetism, Maxwell's equations in curved spacetime - Geometric formulation, and Mathematical descriptions of the electromagnetic field - Classical electrodynamics as the curvature of a line bundle. These articles mention that electromagnetic fields can be formulated as the curvature of a line bundle, and that this is an elegant formulation. But they do not explain how general considerations of geometry and locality lead to the formulation.
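The line-bundle formulation those articles allude to can be sketched in a few lines: the potential is a connection on a U(1) line bundle over spacetime, the field strength is its curvature 2-form, and Maxwell's equations split into a Bianchi identity (which holds automatically for any curvature) plus a source equation:

```latex
F = dA, \qquad dF = 0 \;\;\text{(Bianchi identity)}, \qquad d{\star}F = {\star}J .
```

The first two equations are pure geometry; only the last one, relating curvature to the current, is a physical postulate.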

David Morrison writes in a recent paper:
In the late 1960s and early 1970s, Yang got acquainted with James Simons, ... Simons identified the relevant mathematics as the mathematical theory of connections on fiber bundles ... Simons communicated these newly uncovered connections with physics to Isadore Singer at MIT ... It is likely that similar observations were made independently by others.
I attended Singer’s seminars in the 1970s, so I can confirm that it was a big revelation to him that physicists were constructing a standard model based on connections on fiber bundles, a subject where he was a leading authority. He certainly had the belief that mathematicians and physicists did not know they were studying the same thing under different names, and he knew a lot of those mathematicians and physicists.

String theorists like to believe that useful physical theories can be derived from first principles. Based on the above, I have to say that it is possible in principle, and it came closest to happening with relativity. But it has never actually happened in the history of science.

If there were ever an example where a theory might have been developed from first principles, it would be relativity and electromagnetism. But even in that case, it appears that a modern geometric view of the theory was only obtained many decades later.

Wednesday, July 25, 2018

Clifford suggested matter could curve space

A new paper:
Almost half a century before Einstein expounded his general theory of relativity, the English mathematician William Kingdon Clifford argued that space might not be Euclidean and proposed that matter is nothing but a small distortion in that spatial curvature. He further proposed that matter in motion is not more than the simple variation in space of this distortion. In this work, we conjecture that Clifford went further than his aforementioned proposals, as he tried to show that matter effectively curves space. For this purpose he made an unsuccessful observation on the change of the plane of polarization of the skylight during the solar eclipse of December 22, 1870 in Sicily.
I have wondered why some people credit Clifford, and this article spells it out. He was way ahead of everyone with the idea that our physical space might be non-Euclidean, as an explanation for gravity.

Friday, July 20, 2018

Vectors, tensors, and spinors on spacetime

Several of the symmetries of spacetime are arguably not so reasonable when locality is taken into account.

The first is time reversal. That takes time t to −t, and preserves the metric and some of the laws of mechanics. It takes the forward light cone to the backward light cone, and vice versa.

But locality is based on causality, and the past causes the future, not the other way around. There is a logical arrow of time from the past to the future. Most scientific phenomena are irreversible.

The second dubious symmetry is parity reversal. That is a spatial reflection that reverses right-handedness and left-handedness.

DNA is a right-handed helix, and there is no known life using left-handed DNA. Certain weak interactions (involved in radioactive decay) have a handedness preference. The Standard Model explains this by saying all neutrinos are massless with a left-handed helicity. That is, they spin left compared to the velocity direction. (Neutrinos are now thought to have mass.)

Charge conjugation is another dubious symmetry. Most of the laws of electricity will be the same if all positive charges are changed to negative, and all negative to positive.

Curiously, if you combine all three of the above symmetries, you get the CPT symmetry, and it is believed to be a true symmetry of nature. Under some very general assumptions, a quantum field theory with a local Lorentz symmetry must also have a CPT symmetry.

The final dubious symmetry is the rotation by 360°. It is a surprising mathematical fact that a rotation thru 720° is homotopic to the identity, but a rotation thru 360° is not. You can see this yourself by twisting your belt.
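You can also check this numerically. Here is a small sketch (assuming numpy is available) using the standard SU(2) spin-1/2 representation of rotations: a 360° rotation acts on spinors as minus the identity, while a 720° rotation really is the identity.

```python
import numpy as np

# Pauli matrix sigma_z; a rotation by angle theta about the z-axis
# acts on spinors as exp(-i * theta/2 * sigma_z).
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def spinor_rotation(theta):
    """SU(2) element representing a rotation by theta about the z-axis."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_z

R_360 = spinor_rotation(2 * np.pi)  # minus the identity: spinors pick up a sign
R_720 = spinor_rotation(4 * np.pi)  # the identity: back where you started

print(np.allclose(R_360, -np.eye(2)))  # True
print(np.allclose(R_720, np.eye(2)))   # True
```

The half-angle in the exponent is exactly what makes the spin group a double cover: the rotation angle must go all the way to 720° before the spinor returns to itself.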

Thus the safest symmetry group to use for spacetime is not the Lorentz group, but the double cover of the connected component of the Lorentz group. That is, the spatial reflections and time reversals are excluded, and a rotation by x° is distinguished from a rotation by x+360°. This is called the spin group.

Geometric theories on spacetime are formulated in terms of vectors and tensors. Vectors and tensors are covariant, so that they automatically transform under a change of coordinates. Alternatively, you can say that spacetime is a geometric coordinate-free manifold with a Lorentz group symmetry, and vectors and tensors are well-defined on that manifold.

If we consider spacetime with a spin group symmetry instead of the Lorentz group, then we get a new class of vector-like objects called spinors. Physicists sometimes think of a spinor as a square root of a vector, as you can multiply two spinors and get a vector.
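A toy illustration of the square-root idea (my own sketch, assuming numpy): the standard Pauli-matrix map sends a 2-spinor quadratically to a 4-vector, and the resulting vector is always null (lightlike).

```python
import numpy as np

# The identity plus the three Pauli matrices: sigma^mu = (I, sx, sy, sz).
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [I2, sx, sy, sz]

def spinor_to_vector(psi):
    """Map a 2-spinor to the real 4-vector v^mu = psi-dagger sigma^mu psi."""
    return np.array([np.real(psi.conj() @ s @ psi) for s in sigma])

psi = np.array([1 + 2j, 0.5 - 1j])  # an arbitrary 2-spinor
v = spinor_to_vector(psi)

# The image is null: (v^0)^2 - (v^1)^2 - (v^2)^2 - (v^3)^2 = 0.
minkowski_norm = v[0]**2 - v[1]**2 - v[2]**2 - v[3]**2
print(np.isclose(minkowski_norm, 0.0))  # True
```

Since the vector is quadratic in the spinor, rotating the spinor by 360° (multiplying it by −1) leaves the vector unchanged, which is why vectors never see the sign that spinors do.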

The basic ingredients for a theory satisfying the geometry and locality axioms are thus: a spacetime manifold, a spin group structure on the tangent bundle, a separate fiber bundle, and connections on those bundles. The fields and other physical variables will be spinors and sections of those bundles.

This is the essence of the Standard Model. Quarks, electrons, and neutrinos are all spinor fields. They have spin 1/2, which means they have 720° rotational symmetry, and not a 360° rotation symmetry. Such particles are also called fermions. The neutrinos are left-handed, meaning they have no spatial reflection symmetry. The Lagrangian defining the theory is covariant under the spin group, and hence well-defined on spacetime.

Under quantum mechanics, all particles with the same quantum numbers are identical, and a permutation of those identical particles is a symmetry of the system. Swapping two identical fermions introduces a factor of −1, just like rotating by 360°. That is what separates identical fermions, and keeps them from occupying the same state. This is called the Pauli exclusion principle, and it is the fundamental reason why fermions can be the building blocks of matter, and form stable objects.
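A toy numerical illustration of this point (my own sketch, assuming numpy; not part of the Standard Model machinery): antisymmetrizing a two-particle state kills it whenever both particles occupy the same single-particle state, which is exclusion in its simplest form.

```python
import numpy as np

def antisymmetrize(phi, chi):
    """Two-fermion state as the antisymmetric combination
    phi (x) chi - chi (x) phi (a Slater determinant, up to normalization)."""
    return np.kron(phi, chi) - np.kron(chi, phi)

phi = np.array([1.0, 0.0])  # single-particle state |0>
chi = np.array([0.0, 1.0])  # single-particle state |1>

distinct = antisymmetrize(phi, chi)  # nonzero: two fermions in different states
same = antisymmetrize(phi, phi)      # identically zero: excluded

print(np.linalg.norm(distinct) > 0)          # True
print(np.linalg.norm(same) == 0.0)           # True
```

Swapping the two arguments flips the sign of the combined state, the same factor of −1 as a 360° rotation; setting the two states equal then forces the state to vanish.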

Pauli exclusion for fermions is a mathematical consequence of having a Lorentz invariant quantum field theory.

All particles are either fermions or bosons. The photon is a boson, and has spin 1. A laser can have millions of photons in the same state, and they cannot be used to build a material substance.

The point here is that under general axioms of geometry and locality, one can plausibly deduce that spinor and gauge theories are the obvious candidates for fundamental physical theories. The Standard Model then seems pretty reasonable, and is actually rather simple and elegant compared to the alternatives.

In my counterfactual anti-positivist history of physics, it seems possible that someone could have derived models similar to the Standard Model from purely abstract principles, and some modern mathematics, but with no experiments.