Wednesday, December 26, 2012

Slipher data was before Hubble

I have posted papers showing that Hubble does not deserve credit for discovering the big bang. Now Cormac O'Raifeartaigh posts a new paper:
A brief history of the discovery of the expanding universe is presented, with an emphasis on the seminal contribution of VM Slipher. It is suggested that the well-known Hubble graph of 1929 could also be known as the Hubble-Slipher graph. It is also argued that the discovery of the expanding universe matches the traditional view of scientific advance as a gradual process of discovery and acceptance, and does not concur with the Kuhnian view of science progressing via abrupt paradigm shifts.
He explains how Lemaitre and others did the theoretical work, and Hubble relied on Slipher, in part, for experimental work. The paper also says that Hubble never really accepted the big bang, and left open the possibility that his data might be consistent with other models.

The paper also argues that the discovery of the big bang was nothing like a Kuhnian paradigm shift. If there were any merit to Kuhn's philosophy, then the big bang should have been a paradigm shift. The whole concept of a paradigm shift is bogus.

Update: Another new paper says:
Much debate has ensued recently over who deserves credit for being first to discover the Universe is expanding. Lemaître’s theoretical discovery of expansion in 1927 was not translated into English until two years after Hubble’s observational discovery of expansion in 1929 (Lemaître 1927b, translated 1931). Further, that translation omitted the Hubble constant of expansion that Lemaître calculated in 1927. That omission, only recently revealed in these pages by frequent contributor to JRASC Sidney van den Bergh of the Dominion Astrophysical Observatory, peaked the debate (see van den Bergh 2011, JRASC 105,151). At least a dozen papers regarding Hubble’s priority in the discovery of expansion have been published in the last two years alone, more papers regarding Hubble’s priority than were written in all the years since 1929 combined.

Monday, December 24, 2012

Incinerated by a black hole firewall

Jennifer Ouellette writes in SciAm:
Conventionally, physicists have assumed that if the black hole is large enough, Alice won’t notice anything unusual as she crosses the horizon. In this scenario, colorfully dubbed “No Drama,” the gravitational forces won’t become extreme until she approaches a point inside the black hole called the singularity. There, the gravitational pull will be so much stronger on her feet than on her head that Alice will be “spaghettified.”

Now a new hypothesis is giving poor Alice even more drama than she bargained for. If this alternative is correct, as the unsuspecting Alice crosses the event horizon, she will encounter a massive wall of fire that will incinerate her on the spot. As unfair as this seems for Alice, the scenario would also mean that at least one of three cherished notions in theoretical physics must be wrong. ...

Paradoxes in physics have a way of clarifying key issues. At the heart of this particular puzzle lies a conflict between three fundamental postulates beloved by many physicists. The first, based on the equivalence principle of general relativity, leads to the No Drama scenario: Because Alice is in free fall as she crosses the horizon, and there is no difference between free fall and inertial motion, she shouldn’t feel extreme effects of gravity. The second postulate is unitarity, the assumption, in keeping with a fundamental tenet of quantum mechanics, that information that falls into a black hole is not irretrievably lost. Lastly, there is what might be best described as “normality,” namely, that physics works as expected far away from a black hole even if it breaks down at some point within the black hole — either at the singularity or at the event horizon.
I haven't read these papers, but this conflict sounds crazy to me. Unitarity is not a fundamental tenet of quantum mechanics. It is not a physical principle at all, and there is no physical evidence for it. It is only thought to be important by those with some philosophical disagreement with wave function collapse.
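The "No Drama" intuition in the quoted passage is easy to check with a back-of-envelope tidal estimate. Here is a minimal sketch (my own illustration, using the Newtonian tidal formula evaluated at the Schwarzschild radius; it is not taken from the firewall papers):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def tidal_accel_at_horizon(mass_kg, height_m=2.0):
    """Newtonian head-to-foot tidal acceleration for an observer of the
    given height at the Schwarzschild radius r_s = 2GM/c^2."""
    r_s = 2.0 * G * mass_kg / c**2
    return 2.0 * G * mass_kg * height_m / r_s**3

# A 10-solar-mass hole spaghettifies Alice at the horizon...
print(f"stellar: {tidal_accel_at_horizon(10 * M_sun):.1e} m/s^2")
# ...while crossing the horizon of a 4-million-solar-mass hole is gentle.
print(f"supermassive: {tidal_accel_at_horizon(4e6 * M_sun):.1e} m/s^2")
```

Since r_s grows linearly with mass, the tidal strain at the horizon falls off as 1/M^2, which is why the classical expectation is that a big enough black hole has an uneventful horizon; the firewall claim is striking precisely because it contradicts this.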

The idea that Alice will be incinerated by a massive wall of fire upon entering a black hole sounds a little bit like the story of Lot's wife in the Bible.

A comment gives the source for Einstein's "spooky action at a distance":
"I cannot make a case for my attitude in physics which you would consider at all reasonable. I admit, of course, that there is a considerable amount of validity in the statistical approach which you were the first to recognise clearly as necessary given the framework of the existing formalism. I cannot seriously believe in it because the theory cannot be reconciled with the idea that physics should represent a reality in time and space, free from spooky actions at a distance. I am, however, not yet firmly convinced it can really be achieved with a continuous field theory, although I have discovered a possible way of doing this which so far seems quite reasonable...But I am quite convinced that someone will eventually come up with a theory whose objects, connected by laws, are not probabilities but considered facts, as used to be taken for granted until quite recently. I cannot however, base this conviction on logical reasons, but can only produce my little finger as witness, that is I offer no authority which would be able to command any kind of respect outside of my own hand." (Einstein to Born, 2 Mar. 1947.)

Born comments: "I too had considered this postulate [that physics should represent a reality in time and space] to be one which could claim absolute validity. But the realities of physical experience had taught me that this postulate is not an a priori principle but a time-dependent rule which must be, and can be, replaced by a more general one."
Einstein wants to reject quantum mechanics in favor of some sort of hidden variable theory, and his buddy Max Born cannot convince him that the probabilities are essential. In my opinion, they were both wrong.

String theorist Lumo adds his criticism of Ouellette's article:
Well, more precisely, it's nice and informative if you assume that her task was to uncritically promote the views of Joe Polchinski, Leonard Susskind, Raphael Bousso, and a few others. From a more objective viewpoint, the article's main message is wrong and the text misinterprets the state of the research, too.

Over the last decade or so, my great respect for some of the most famous names in high-energy physics was diminishing and this trend has become undeniable by now. It seems to me that my previous worries about the apparent deterioration of meritocracy within the field have turned out to be a tangible reality.

Saturday, December 22, 2012

String theory races for Kuhnian glory

Chemist Ashutosh (Ash) Jogalekar writes in SciAm:
Freeman Dyson has a perspective in this week’s Science magazine in which he provides a summary of a theme he has explored in his book “The Sun, the Genome and the Internet”. Dyson’s central thesis is that scientific revolutions are driven as much or even more by tools than by ideas. This view runs somewhat contrary to the generally accepted belief regarding the dominance of Kuhnian revolutions – described famously by Thomas Kuhn in his seminal book “The Structure of Scientific Revolutions” – which are engineered by ideas and shifting paradigms. In contrast, in reference to Harvard university historian of science Peter Galison, Dyson emphasizes the importance of Galisonian revolutions which are driven mainly by experimental tools.

As a chemist I find myself in almost complete agreement with the idea of tool-driven Galisonian revolutions.
Dyson writes:
Thomas Kuhn was a theoretical physicist before he became a historian. He saw the history of science through the eyes of a theorist. He gave us an accurate view of events in the world of ideas. His favorite word, “paradigm,” means a system of ideas that dominate the science of a particular place and time. A scientific revolution is a discontinuous shift from one paradigm to another. The shift happens suddenly because new ideas explode with a barrage of new insights and new questions that push old ideas into oblivion. I remember the joy of reading Kuhn's book, The Structure of Scientific Revolutions, when it first appeared in 1962. It made sense of the relativity and quantum revolutions that had happened just before the theoretical physicists of my generation were born. Those were revolutions led by deep thinkers — Einstein and Heisenberg and Schrödinger and Dirac — who guessed nature's secrets by dreaming dreams of mathematical beauty. Their new paradigms were created out of abstract ideas. In those revolutionary years from 1900 to 1930, ideas led the way to understanding.
These guys are badly confused. A Kuhnian paradigm shift is a reformulation of a scientific theory that has no measurable advantage over the previous theory. The canonical examples are Copernican heliocentrism and Einstein's 1905 relativity paper. Kuhn wrote a whole book on the dawn of quantum mechanics, and did not say that it was a revolution or paradigm shift.

The term paradigm shift is also used by crackpots who complain that the science establishment is ignoring their silly ideas.

The Copernican revolution was the idea of the Earth revolving around the Sun. It was a different point of view, but it did not approximate the observed orbits any more accurately.

Einstein's 1905 relativity was essentially the same as Lorentz's, with minor technical differences.

Dyson ends up concluding that string theory and multiverse speculations are entirely Kuhnian:
At the beginning of the 21st century, we find ourselves in a situation reminiscent of the 1950s. Once again, the community of physicists is split into Kuhnians and Galisonians. The most ambitious of the Kuhnian programs is string theory, building a grand and beautiful structure out of abstract mathematics and hoping to find it somehow mirrored in the architecture of the universe. This program is not an isolated one-man show like Einstein’s unified field theory. String theory is a collective enterprise combining the efforts of thousands of people in hundreds of universities. These people are the best and the brightest of their generation, most of them young and many of them brilliant. Their work is admired by the pure mathematicians who share their ideas and speak their language. String theory, as a solid part of modern mathematics, is here to stay. But meanwhile, Galisonian science is continuing to forge ahead, exploring nature without paying attention to string theory. The great recent discoveries in the physical sciences were dark matter and dark energy, two mysterious monsters together constituting 97% of the mass of the universe. These discoveries did not give rise to new paradigms. ...

We are standing now as we stood in the 1950s, between a Kuhnian dream of sudden illumination and a Galisonian reality of laborious exploring. On one side are string theory and speculations about multiverses; on the other are all-sky surveys and observations of real black holes. The balance today is more even than it was in the 1950s. String theory is a far more promising venture than Einstein’s unified field theory. Kuhn and Galison are running neck and neck in the race for glory. We are lucky to live in a time when both are going strong.
I agree that string theory and multiverse theory are Kuhnian in that they have no hope of making any measurable progress in our understanding of the world, and that belief in them is not rational. It is only because of Einsteinian-Kuhnian thinking that there is any glory in such activity.

In a companion essay (also behind a paywall), molecular biologist Sydney Brenner argues:
It seems remarkable that historians once thought that science progressed by the steady addition of knowledge, building the edifice of scientific truth, brick by brick. In his 1962 book The Structure of Scientific Revolutions, Thomas Kuhn argued that progress occurs in revolutionary steps by the introduction of new paradigms, which may be new theories—new ways of looking at the world—or new technical methods that enhance observation and analysis. Between Kuhn’s revolutions, scientific knowledge does advance by accretion, as there is much to do to consolidate the new science. But then, inevitably, unsolved problems accumulate and, in many cases, the inconsistencies have been put to one side and everybody hopes that they will quietly go away. The edifice becomes rickety; some of its foundations are insecure and many of the bricks have not been well-baked. This is when a new revolutionary wave in the form of new ideas or new techniques appears, which allows us to condemn and demolish the unsafe or corrupt parts of the edifice and rebuild truth. Often there is great resistance to the new wave, but as Max Planck pointed out, it succeeds because the opponents grow old and die. The process is then repeated: The radicals become liberals, the liberals become conservatives, the conservatives become reactionaries, and the reactionaries disappear. Students of evolution will recognize this process in the theory of punctuated equilibrium: Organisms stay much the same for very long periods of time; this is interrupted by bursts of change when novelty appears, followed again by stasis. The life sciences have undergone a radical revolution in my lifetime, and it is interesting to view this from the vantage point of the present to understand its full meaning and impact. In the first half of the 20th century, physics underwent two revolutions: Einstein’s theory of relativity, connected with large scales of time and space, and quantum mechanics, concerned with the very small and dealing with fundamental questions of matter and energy.
...

We can now see exactly what constituted the new paradigm in the life sciences: It was the introduction of the idea of information and its physical embodiment in DNA sequences of four different bases. Thus, although the components of DNA are simple chemicals, the complexity that can be generated by different sequences is enormous. In 1953, biochemists were preoccupied only with questions of matter and energy, but now they had to add information.
It is amazing how otherwise-intelligent scientists fall for this nonsense. The discovery of DNA sequences was not like what Kuhn described as a paradigm shift. It was not just an incommensurable new view on an old theory. It did not succeed because the opponents grew old and died. The first Nobel prize for relativity was in 1902 to Lorentz, and for quantum mechanics in 1918 (Planck) and 1922 (Bohr). Heisenberg, Schrödinger, and Dirac got their prizes in 1932-33 for work done only about 5 years earlier. Compare that to string theory and multiverse theory, where no one has ever gotten a Nobel prize.

Thursday, December 20, 2012

Not living in the matrix

Physicist Silas Beane explains:
The idea that we live in a simulation is just science fiction, isn't it?
There is a famous argument that we probably do live in a simulation. The idea is that in future, humans will be able to simulate entire universes quite easily. And given the vastness of time ahead, the number of these simulations is likely to be huge. So if you ask the question: 'do we live in the one true reality or in one of the many simulations?', the answer, statistically speaking, is that we're more likely to be living in a simulation. ...

But can we improve our own simulations?
The size of the universe we simulate is just a fermi, that's a box with sides 10^-15 metres long. But we can use Moore's Law to imagine what we might be able to simulate in future. If the current trends in computing continue, we should be simulating a universe the size of a human within a century and within five centuries, we could manage a box 10^26 metres big. That's the size of the observable universe.
This is a pretty crazy extrapolation of Moore's Law. The law might be good for another 10 years, but that's all. Yes, the idea that we are living in a simulation is pure science fiction.
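The arithmetic behind the extrapolation is easy to reproduce. A quick sketch (the scaling assumptions are mine, not from the interview: I take simulation cost to grow with the volume of the box, and computing power to double every 18 months):

```python
import math

fermi = 1e-15     # edge of the currently simulated box, metres
universe = 1e26   # edge of the observable universe, metres

volume_factor = (universe / fermi) ** 3   # ~1e123 more volume to simulate
doublings = math.log2(volume_factor)      # Moore's-law doublings needed
years = doublings * 1.5                   # at one doubling per 18 months

print(f"{doublings:.0f} doublings, about {years:.0f} years")
```

Even on these generous assumptions, the box needs hundreds of doublings, i.e. centuries of uninterrupted exponential growth in computing power, which is exactly the extrapolation being criticized here.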

I also don't know how to reconcile this with the recent SciAm claim that fermions cannot be simulated on a computer. How can we hope to simulate the universe if we cannot even simulate a single proton?

Wednesday, December 19, 2012

History of twin paradox

This new paper on the history of the relativity twin paradox has this amusing argument for the psychological significance of using twins to illustrate time dilation:
II 5 Why Twins?
As mentioned above, although the idea of twins was given by Weyl 1922, it is only around the fifties of the preceding century that the name twin paradox was very often used. To find the hidden reason for this habit, I propose to investigate the collective image of twinship as it was elaborated through myths and religions and to relate it to the use of the word.

I begin by quoting B. Beit-Hallahmi and M. Paluszny (1974) from their paper on twinship:
"…there are common psychological elements in both mythological and scientific approaches to twinship. The major elements are fascination and ambivalence. Fascination with twin births has always been combined with a great deal of apprehension and ambivalence. In both primitive and modern societies, multiple births have been viewed as a potential source of familial and social conflict and complication."

Effectively one finds in the Bible and several mythologies frequent situations of conflict between twins. In the Bible one can mention Cain and Abel with the murder of Abel and also Jacob and Esau. Esau wants to kill his twin brother Jacob but Jacob has time to escape.

In the Greek mythology (See the works of Aeschylus (1966)), the fight between Atreus (preferred by Zeus) and Thyestes (preferred by the people) with all the successive conflicts in the Atrides family is well known. In the Roman mythology, the two twins Remus and Romulus (see the details of the story in Plutarch), who are the founders of Rome, are fighting for the government of the city. It results that Romulus kills his twin Remus.

In a completely different context, Levi-Strauss (1995) analyzed the myths of some North America and South America Indians and considers the antagonism between twins as a source of disequilibrium. In fact, the twins' conflict is not completely general, since there are also some cases where the twins manage their life in peace. For example, in the Greek mythology the twins Castor and Pollux are in good relationship and are taken as an example of a peaceful twinship. Nevertheless the collective image of twins is frequently related to conflict and violence. This double presence of two identical persons is seen as something scandalous and abnormal. One of them must disappear or at least be put in a bad position.

I propose to adopt this point of view considering the twins of the paradox. In the unconscious mind, the twinship must be destroyed and in the paradox this is what happened: one of the twins remains young and the other old, when young is better than old. I propose to interpret the habit to give the name "twin paradox" to the clock paradox as an intrusion of the subconscious in the language of physicists. All the efforts of the physicists are to show the asymmetry in the twins as it must be.
It should not be so surprising for one twin to be older than the other. Even without relativistic effects, if one twin went on a long trip and was put in cryogenic hibernation, as in 2001: A Space Odyssey, then he would return noticeably younger, just as in the twin paradox.
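The relativistic asymmetry itself is just the standard time-dilation factor. A minimal sketch (textbook special relativity, ignoring the acceleration phases; not taken from the paper):

```python
import math

def traveler_years(earth_years, beta):
    """Proper time elapsed for the travelling twin on an out-and-back
    trip at constant speed beta (fraction of c), where earth_years is
    the Earth-frame duration; acceleration phases are ignored."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return earth_years / gamma

# A 10-year round trip (Earth frame) at 0.8c ages the traveller about 6 years.
print(f"{traveler_years(10.0, 0.8):.2f} years")
```

The stay-at-home twin ages the full 10 years, so the travelling twin returns about 4 years younger, whatever one thinks of the psychology of twinship.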

Monday, December 17, 2012

Aether no more than a metaphysical construct

Steve Carlip and Philip Gibbs summarize the history of special relativity:
In 1879 it was thought that light must propagate through a medium in space just as sound propagates through the air and other substances. The two scientists Michelson and Morley set up an experiment to attempt to detect the ether, by observing relative changes in the speed of light as the Earth changed its direction of travel relative to the sun during the year. To their surprise, they failed to detect any change in the speed of light.

Fitzgerald then suggested that this might be because the experimental apparatus contracted as it passed through the ether, in such a way as to countermand the attempt to detect the change in velocity. Lorentz extended this idea to changes in the rates of clocks to ensure complete undetectability of the ether. Einstein then argued that those transformations should be understood as changes of space and time rather than of physical objects, and that the absoluteness of space and time introduced by Newton should be discarded. Just after that, the mathematician Minkowski showed that Einstein's theory of relativity could be understood in terms of a four dimensional non-euclidean geometry that considered space and time as one entity, ever after called spacetime.

This refers to Fitzgerald 1889, Lorentz 1895, Einstein 1905, and Minkowski 1908. It ignores the 1899-1905 work of Lorentz and Poincare, in which they perfected the Lorentz transformations and applied them to space and time, before Einstein and more generally than he did.
But what if we pursued the original theory of Fitzgerald and Lorentz, who proposed that the ether is there, but is undetectable because of physical changes in the lengths of material objects and the rates of clocks, rather than changes in space and time? For such a theory to be consistent with observation, the ether would need to be completely undetectable using clocks and rulers. Everything, including the observer, would have to contract and slow down by just the right amounts. Such a theory could make exactly the same prediction in all experiments as the theory of relativity; but in that case the ether would be no more than a metaphysical construct unless there was some other way of detecting it — which nobody has found. In the view of Einstein, such a construct would be an unnecessary complication, to be best eliminated from the theory.
That is correct about Lorentz aether theory, and it is what Poincare always said. As early as 1889, he said:
Whether the ether exists or not matters little - let us leave that to the metaphysicians; what is essential for us is, that everything happens as if it existed, and that this hypothesis is found to be suitable for the explanation of phenomena. After all, have we any other reason for believing in the existence of material objects? That, too, is only a convenient hypothesis; only, it will never cease to be so, while some day, no doubt, the ether will be thrown aside as useless.
Poincare also said this in his popular 1902 book, which Einstein read with his physicist friends. His friends said that he was greatly fascinated by it.

Sometimes Einstein fans imply that Lorentz and Poincare had an inferior understanding of relativity because they did not realize that experiments had reduced the aether to an undetectable metaphysical construct. But that is exactly how Poincare described it. Einstein was not as clear about it, and used language similar to Lorentz.

I explain these points in my book.

Thursday, December 13, 2012

Essay winners not so contrarian

I am disappointed in the winners of the FQXi essay contest. Maybe I am just a sore loser, because my essay is not one of the 20 winners, out of 271 submissions. But I do not think that the winners followed the contest objectives very well.

Most of the submissions violated this rule in the instructions:

“Successful and interesting essays will not use this topic as an opportunity to trot out their pet theories simply because those theories reject assumptions of some other or established theory. Rather, the challenge here is to create new and insightful questions or analysis about basic, often tacit, assumptions that can be questioned but often are not”
Most essays just promoted some crackpot theory without really explaining how or why the textbook theories are wrong.

Unfortunately, FQXi has taken down the page with these rules. You can find most of them here, or maybe in your browser cache.

Of the winning essays, most of them promote some completely mainstream and accepted idea, but try to make it sound original by attacking some silly straw man. Other essays presented some vague and speculative ideas about quantum gravity or some similar field where ideas cannot be tested.

In 2nd place, Ellis gives examples of causation being more easily understood with a top-down view, such as entropy increasing. Weinstein suggests that action-at-a-distance might explain some quantum mechanics and cosmology.

In 3rd place, Barbour doubts that reductionism will explain entanglement. Dribus rejects spacetime in favor of a "causal metric hypothesis". Hossenfelder speculates about quantum gravity. Wharton says the universe is not a computer.

My essay got a lot of attention, and many favorable comments. It made some much more provocative statements than the winning essays, but I thought that was the point. Most of the essays don't actually say Which of Our Basic Physical Assumptions Are Wrong.

The rules also say:
Interesting: An interesting essay is:
  • Original and Creative: Foremost, the intellectual content of the essay must push forward understanding of the topic in a fresh way or with new perspective. While the essay may or may not constitute original research, if the core ideas are largely contained in published works, those works should be the author's. At the same time, the entry should differ substantially from any previously published piece by the author.
  • Technically correct and rigorously argued, to the degree of a published work or grant proposal.
  • Well and clearly written, so that it is comprehensible and enjoyable to read.
  • Accessible to a diverse, well-educated but non-specialist audience, aiming in the range between the level of Scientific American and a review article in Science or Nature.
So perhaps the judges thought that my essay was not "Technically correct and rigorously argued". If so, then I would have preferred them to say so in the online comments, so I could defend myself. As it is, I do not know what they disliked. If I had submitted it to a journal, at least I would have gotten a rejection report. I guess I could still submit it somewhere else, but it is not really a technical physics advance. It is an essay suited for this contest.

Saturday, December 8, 2012

Lundmark also predated Hubble

I posted last year (and previously) that Lemaitre discovered the expanding universe, and was the real father of the big bang.

Now Ian Steer posts Who discovered Universe expansion?:
[The distance estimates of] Swedish astronomer Knut Lundmark were much more advanced than formerly appreciated. ...

Lundmark was the first person to find observational evidence for expansion, in 1924 — three years before Lemaître and five years before Hubble. Lundmark’s extragalactic distance estimates were far more accurate than Hubble’s, consistent with an expansion rate (Hubble constant) that was within 1% of the best measurements today. ...

Hubble’s research in 1929 yielded a value for the Hubble constant that was inaccurate by almost an order of magnitude. It was adopted because it was derived from multiple methods — including one still in use (brightest stars) — and was cross-checked with multiple galaxies with distances based on proven Cepheid star variables.
I credit Lemaitre because he was the first to publish a cosmological model with red-shift proportional to distance, and to publish observational data to back it up. I don't know how Hubble stole the credit.

Friday, December 7, 2012

Physics essay winners announced

The 2012 Questioning the Foundations Winning Essays were just announced. The winning essay presents a distinction between kinematics and dynamics, with the main difference being that a dynamical theory tries to explain change over time. It argues that this distinction is not so important, and that it is better to think about the causal structure for how variables are influenced by variables at earlier times. Its preferred examples involve strange interpretations of quantum mechanics that violate local causality.

General relativity is an example of a theory that has a causal structure but does not really distinguish between kinematics and dynamics. But the paper says:
While proponents of different interpretations of quantum theory and proponents of different approaches to quantizing gravity may disagree about the correct kinematics and dynamics, they typically agree that any proposal must be described in these terms.
No, I very much doubt that any proponents of quantum gravity agree that the kinematics and dynamics must be separated, because everyone has said for a century that they are not separate for gravity.

The object of the essay contest was to answer the question: "Which of Our Basic Physical Assumptions Are Wrong?" I was disappointed that most of the essays do not really answer the question.

Thursday, December 6, 2012

String theory defense published

Peter Woit writes that you can now get a state-of-the-art defense of the theory:
The journal Foundations of Physics has been promising a special issue on “Forty Years of String Theory: Reflecting on the Foundations” for quite a while now, with a contribution first appearing back when it really was 40 years since the beginnings (more like 43 now). The final contribution has now appeared, an introductory essay by the editors (’t Hooft, Erik Verlinde, Sebastian de Haro and Dennis Dieks).

The overall tone of the collection is one of defensive promotion of the subject. The fact that string theory’s massively overhyped claims to give a unified theory of particle physics have led to miserable failure is mostly completely ignored.
Bob Jones adds:
Most people would say that string theory is an idea about quantum gravity. ... You can complain all you want about the lack of testable predictions in particle physics, but these objections seem pretty irrelevant since most string theorists aren’t trying to do phenomenology and since string theory has achieved so much success in other areas…
I may be stupid, but I fail to see that string theory has anything to do with quantum gravity.

The best theory of gravity is general relativity. String theorists nearly always assume that no gravity is present. To account for gravity, they sometimes just say that the underlying space could satisfy the equations of general relativity. Relativity uses a 4-dimensional spacetime, and string theorists use 10 or 11 dimensions, so they extend the equations. But that's all. They do not quantize gravity. They claim to have a theory of everything that combines quantum field theory with gravity, but they have no quantum gravitational fields. They say that the theory has a graviton, but quantum mechanics is about observables and the graviton is impossible to observe.

I wonder how a non-physicist would understand the excuse that "string theorists aren’t trying to do phenomenology". Wikipedia defines:
The term phenomenology in science is used to describe a body of knowledge that relates empirical observations of phenomena to each other, in a way that is consistent with fundamental theory, but is not directly derived from theory.
Someone might ask how one could be a scientist and not care about phenomenology. The answer is that string theorists have never been able to relate their theories to phenomena, so they ignore phenomena.

Leonard Susskind writes:
Just to be precise about what constitutes string theory, let me give a narrow definition — no doubt much too narrow for many string theorists. But it has the virtue that we know that it mathematically exists. By string theory I will mean the theory of supersymmetric string backgrounds including 11-dimensional M-theory and compactifications that preserve some degree of supersymmetry. These backgrounds are generally either flat (zero cosmological constant) or anti de Sitter space with negative cosmological constant.

With that definition of string theory, there is no doubt: string theory is not the theory of nature — the world is not supersymmetric, and it has positive cosmological constant. Exactly how the definition has to be expanded in order to describe the observed universe is not known. Nevertheless string theory has had a profound, and I believe lasting, influence on how gravity and quantum mechanics fit together.
Steven B. Giddings tries to summarize the outlook for making string theory a quantum gravity theory:
To summarize the situation, string theory has been a continuous source of new ideas in mathematics and physics, and showed a lot of initial promise for resolving the problems of quantum gravity. However, the more profound problems are yet to be convincingly addressed, and there are deep puzzles about how they might be addressed by string theory. ...

We seek a consistent framework for describing quantum processes, in which spacetime locality emerges in an approximation. This, together with the requirement that it produce an S-matrix (and other local dynamics) with familiar properties of gravity seems a very tall order. This is actually encouraging, as it suggests the problem is sufficiently constrained to guide the resolution of this profoundly challenging set of problems. It remains to be seen what role string theory plays in this, and whether it can provide further clues.
Keep in mind that these are top string theorists trying to defend the theory. But it is painfully clear that any connection between string theory and quantum gravity is wildly speculative.

Update: I see Lumo has a long tortured explanation of whether string theory makes presumptions about gravitational fields. There is no simple answer.

Sunday, December 2, 2012

New translation of anti-Galileo argument

Christopher M. Graney has just posted this paper:
In January of 1616, the month before the Roman Inquisition would infamously condemn the Copernican theory as being "foolish and absurd in philosophy", Monsignor Francesco Ingoli addressed Galileo Galilei with an essay entitled "Disputation concerning the location and rest of Earth against the system of Copernicus". A rendition of this essay into English, along with the full text of the essay in the original Latin, is provided in this paper. The essay, upon which the Inquisition condemnation was likely based, lists mathematical, physical, and theological arguments against the Copernican theory. Ingoli asks Galileo to respond to those mathematical and physical arguments that are "more weighty", and does not ask him to respond to the theological arguments at all. The mathematical and physical arguments Ingoli presents are largely the anti-Copernican arguments of the great Danish astronomer Tycho Brahe; one of these (an argument based on measurements of the apparent sizes of stars) was all but unanswerable. Ingoli's emphasis on the scientific arguments of Brahe, and his lack of emphasis on theological arguments, raises the question of whether the condemnation of the Copernican theory was, in contrast to how it is usually viewed, essentially scientific in nature, following the ideas of Brahe.
Galileo is always cited as proof that religion is anti-science, such as in this recent episode of Rationally Speaking. Sometimes the Pope is ridiculed for refusing to look into a telescope to see Galileo's proof of the Earth's motion. But it is hard to see how the Church could be blamed for using Tycho's arguments, as Tycho was a more accomplished astronomer than Galileo.

Friday, November 30, 2012

Physics claimed to be irreducibly continuous

SciAm now has a podcast on a previously mentioned essay:
Conventional wisdom says that quantum mechanics is a theory of discreteness, describing a world of irreducible building blocks. It stands to reason that computers—which process information in discrete chunks—should be able to simulate nature fully, at least in principle. But it turns out that certain asymmetries in particle physics cannot be discretized; they are irreducibly continuous. In that case, says David Tong, author of "Is Quantum Reality Analog after All?" in the December 2012 issue of Scientific American, the world can never be fully simulated on a computer.
Tong's essay is quite sensible, but I do not believe his claim that "certain asymmetries in particle physics cannot be discretized". I had thought that lattice gauge theories had advanced to where they could approximate experiments. I would be amazed if someone has actually proved that convergence is impossible.
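As a toy illustration (not Tong's chiral-fermion problem, just a generic hypothetical discretization), here is how a lattice approximation normally converges as the spacing shrinks:

```python
import math

# Central-difference lattice approximation to the derivative of sin(x).
# The exact answer is cos(x); the error shrinks as the lattice spacing
# a is refined (roughly 100x per 10x, since the scheme is second order).
def lattice_derivative(f, x, a):
    return (f(x + a) - f(x - a)) / (2 * a)

x = 1.0
exact = math.cos(x)
errors = [abs(lattice_derivative(math.sin, x, a) - exact)
          for a in (0.1, 0.01, 0.001)]
```

The claimed obstruction for chiral fermions would have to be something much deeper than this kind of ordinary convergence failing.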

Here is the audio download.

Update: Tong's references are A Method for Simulating Chiral Fermions on the Lattice, Chiral Symmetry and Lattice Fermions, and Chiral gauge theories revisited. I don't see where any of these papers say that chiral fermion lattice gauge theories are impossible.

Wednesday, November 28, 2012

Boson of the Year

Time magazine has announced its nominations:
Take a moment to thank this little particle for all the work it does, because without it, you'd be just inchoate energy without so much as a bit of mass. What's more, the same would be true for the entire universe. It was in the 1960s that Scottish physicist Peter Higgs first posited the existence of a particle that causes energy to make the jump to matter. But it was not until last summer that a team of researchers at Europe's Large Hadron Collider — Rolf Heuer, Joseph Incandela and Fabiola Gianotti — at last sealed the deal and in so doing finally fully confirmed Einstein's general theory of relativity. The Higgs — as particles do — immediately decayed to more-fundamental particles, but the scientists would surely be happy to collect any honors or awards in its stead.
Einstein was Time's Man of the Century, and he always has to be credited somehow. For nearly a century they've been saying that something finally confirmed Einstein's general theory of relativity, but this time the Higgs boson has nothing to do with it.

Update: Lumo later finds some other errors in the Time announcement, following Strassler.

Monday, November 26, 2012

Many worlds contrary to positivism

Lumo explains how quantum mechanics is a positivist theory, and belief in the many-worlds interpretation (MWI) is based on a fundamental philosophical error:
But the key feature of the fundamentally subjective theory called quantum mechanics is that if no questions are being asked, no questions need to be answered. If the objects in our environments aren't asking any questions for us, we don't need to answer them and we don't need to imagine that the world is doing anything else than evolving the probability amplitudes according to the continuous Schrödinger's equation (or equivalent equations in other pictures). In particular, there's no "collapse" if there's no subject asking questions and learning answers! What is often called the "collapse" is the process of learning and it is a fundamentally subjective process. ...

Philosophy is never a good science. But when it comes to philosophies, many laymen in quantum mechanics – and even people not considered laymen in quantum mechanics by the society or by themselves – often think that "realism" is the right philosophy behind modern science. This viewpoint, based on millions of years of our everyday monkey-like experience, has strengthened by the 250 years of successes of classical physics and it was – unfortunately – energized by Marxism that repeated the untrue equation "science = materialist ideology" many times. Marx, Lenin, and related bastards surely belong among those who have encouraged people to never leave the mental framework of classical physics. But it is "positivism" which is the philosophy that is closest to the founders of the modern science, especially relativity and quantum mechanics.

Positivism says that all reliable knowledge – the truth we are allowed to become fans of – has to boil down to empirical observations and mathematical and logical treatments of such empirical data. It sounds uncontroversial among science types but many of them don't realize how dramatically it differs from the "materialist ideology". In particular, positivism assumes nothing about the "existence of objective reality".
He is right about this. MWI has no merit, I have argued.

I do agree with Lumo that anyone who advocates MWI (or some of the other peculiar interpretations like Bohm's) has a fundamental misunderstanding of quantum mechanics, and even of what science is all about. They are as misguided as creationists and mystics.

I don't require everyone to adopt my positivist philosophy, but the MWI advocates refuse to even acknowledge that quantum mechanics, as envisioned by Bohr and Heisenberg, is positivist.

As I have also argued, one of the big errors of MWI can be seen in terms of probability, but I disagree somewhat with how Lumo explains it. He says:
After the quantum revolution, we know that all empirical evidence coming from repeated experiments may be summarized as measured probabilities of various outcomes of diverse experiments. Once again, all the empirical knowledge about the physical processes that we have may be formulated as a collection of probabilities. Probabilities are everything we may calculate from quantum mechanics (and from other parts of science, too). So they're surely not a detail.

Orthodox quantum mechanics promotes probabilities to fundamental concepts and uses the standard probability calculus – which existed a long time before quantum mechanics – to give you rules how to verify whether the probabilistic predictions of a theory are right. The basic laws of quantum mechanics are intrinsically probabilistic.
Yes, I agree that probability is important in all of science, because it gives a tool for analyzing repeated experiments. But I do not agree that orthodox quantum mechanics promotes it to being more fundamental than that. Probability is only important in quantum mechanics to the same extent it is important in other sciences.

Nate Silver's final prediction was that Pres. Barack Obama had a 92% chance of being reelected, and Peter Norvig said 85-98%. They used reliable scientific methods to forecast the election, but we cannot say whether their probabilities were correct because we cannot directly observe the probability. (For discussion of how well these predictions did, see here and here.)
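For anyone curious how such forecasts get scored at all, here is a sketch of the standard Brier score, using the probabilities quoted above and treating the observed outcome as 1 (the scoring rule is standard; the comparison is just illustrative):

```python
# Brier score: squared error between the forecast probability and the
# 0/1 outcome; lower is better.
def brier(forecast_prob, outcome):
    return (forecast_prob - outcome) ** 2

outcome = 1  # Obama was reelected
silver = brier(0.92, outcome)    # about 0.0064
norvig_low = brier(0.85, outcome)
norvig_high = brier(0.98, outcome)
coin_flip = brier(0.5, outcome)  # 0.25

# Both forecasts score far better than a coin flip, but a single
# scored outcome cannot tell us whether 92% itself was "correct".
```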

The probabilities do have meaning, even tho they are not directly observable. But the probability has no meaning in MWI, because the MWI advocates would say that Mitt Romney won the election in many of the parallel universes. They could try to argue that Obama won in more universes than Romney, but there is no way to make sense out of that because we cannot see the other universes. Lumo explains this well.

Update: Lumo is enraged by this comment:
Do you consider it ironic to be attacked by some for pointing out the rationality and utility of String theories but attacked by others for pointing out the irrationality and lack of usefulness of MWI?
That hit a nerve. String theory has no utility either. It does not explain any observable phenomena, or resolve any theoretical puzzles, or simplify any physics, or have any rational justification. But Lumo is ideologically committed to string theory.

Sunday, November 25, 2012

Looking for discrete space and tiny black holes

I have argued that quantum gravity is unobservable, but now there is a proposal in Nature magazine:
Space is not smooth: physicists think that on the quantum scale, it is composed of indivisible subunits, like the dots that make up a pointillist painting. This pixellated landscape is thought to seethe with black holes smaller than one trillionth of one trillionth of the diameter of a hydrogen atom, continuously popping in and out of existence.

That tumultuous vista was proposed decades ago by theorists struggling to marry quantum theory with Einstein's theory of gravity -- the only one of nature's four fundamental forces not to have been incorporated into the standard model of particle physics. If it is true, the idea could provide a deeper understanding of space-time and the birth of the Universe.

Scientists have attempted to use the Large Hadron Collider, gravitational wave detectors and observations of distant cosmic explosions to determine whether space is truly grainy, but results have so far been inconclusive.
Inconclusive? There is no evidence of any kind that space is grainy. It is wildly speculative and implausible to say that space is composed of indivisible subunits, and that tiny black holes continuously pop in and out of existence.

The current SciAm has an essay on that subject:
Editors' note: Last year the Foundational Questions Institute's third essay contest posed the following question to physicists and philosophers: “Is Reality Digital or Analog?” The organizers expected entrants to come down on the side of digital. After all, the word “quantum” in quantum physics connotes “discrete” —hence, “digital”. Many of the best essays held, however, that the world is analog. Among them was the entry by David Tong, who shared the second-place prize. The article here is a version of his essay.
It was from last year's essay contest. You can get the original essay free on the 2011 winner page. It is a sensible essay.

Saturday, November 24, 2012

Entropic gravity and dying SUSY

Peter Woit reports:
Erik Verlinde over the past couple years has gotten 6.5 million euros in prizes and grants to fund his work on entropic gravity (see here). Now, he’ll head up a new institution, the Delta Institute for Theoretical Physics, funded with 18.3 million euros from the Dutch scientific funding agency NWO as part of its Gravitation Programme.
To truly succeed at being a big-shot, you have to have a really crazy idea, I guess.

The most obvious physical fact about gravity is that it is a conservative force. That means that it is reversible. The most obvious fact about entropy is that it is irreversible. Yet Verlinde says gravity and entropy are the same thing. I don't see how this can make sense on any level.
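To see concretely what "conservative" and "reversible" mean here, consider a toy simulation (my own illustration, nothing to do with Verlinde's papers): integrate a particle falling under constant gravity, flip its velocity, and it retraces its path exactly.

```python
g, dt, steps = 9.8, 0.001, 1000

def step(x, v):
    # velocity-Verlet update for constant downward gravity;
    # this integrator is exactly time-reversible
    x = x + v * dt - 0.5 * g * dt * dt
    v = v - g * dt
    return x, v

x, v = 100.0, 0.0   # drop from 100 m at rest
for _ in range(steps):
    x, v = step(x, v)

v = -v              # time reversal: flip the velocity
for _ in range(steps):
    x, v = step(x, v)
# x is back at 100.0 (up to roundoff): gravity is reversible.
# There is no analogous trick for undoing an entropy increase.
```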

NewScientist reports:
Does a lack of evidence for supersymmetry at the LHC (main story) count against this elegant extension to the standard model?

"SUSY's plausibility is reduced," says Nobel laureate Steven Weinberg, "but not to zero." Others say the theory is flexible; that the latest results merely help to whittle down a list of possible incarnations.

Still the results have stoked a debate about the attention SUSY gets. "The theory, specifically as something we would observe at the LHC, was wildly over-promoted," wrote Matthew Strassler from Rutgers University in New Jersey on his blog.

That might have led other promising theories to suffer, says Raymond Volkas of the University of Melbourne, Australia, as popular theories can reduce interest in others. "Many people feel they have to work on the bandwagon ideas."
A few years ago, most of the theoretical physicists were sold on SUSY, in spite of the lack of hard evidence. It was mostly based on mystical unification ideas. Now the evidence is piling up against SUSY.

Friday, November 23, 2012

No mental picture without irrelevancies

I remarked that physicists keep looking for hidden variables, in spite of an 80-year consensus that such theories contradict quantum mechanics. Satyajit Das writes in response to the annual Edge.org question:
In 1927, Heisenberg showed that uncertainty is inherent in quantum mechanics. ...

The play repeats their meeting three times, each with different outcomes. As Heisenberg, the character, states: "No one understands my trip to Copenhagen. Time and time again I've explained it. To Bohr himself, and Margrethe. To interrogators and intelligence officers, to journalists and historians. The more I've explained, the deeper the uncertainty has become."

In his 1930 text The Principles of Quantum Mechanics, Paul Dirac, a colleague of Heisenberg, contrasted the Newtonian world and the Quantum one: "It has become increasingly evident… that nature works on a different plan. Her fundamental laws do not govern the world as it appears in our mental picture in any direct way, but instead they control a substratum of which we cannot form a mental picture without introducing irrelevancies."
Dirac was right. It is very hard to form a mental picture of the atom without introducing irrelevant hidden variables.

I think that today's search for hidden variables is a misguided attempt to validate some flawed mental picture. They are usually not explicit about their faulty assumptions.

Thursday, November 22, 2012

Many-worlds not conservative

The UK Guardian Science Weekly podcast recommends these books as the best science books of the year (gnl.sci.121119.jp.science_weekly_ads.mp3):
The Better Angels of Our Nature by Steven Pinker
The Information by James Gleick
My Beautiful Genome by Lone Frank
Moonwalking with Einstein by Joshua Foer
The Hidden Reality by Brian Greene
The Viral Storm by Nathan Wolfe
Brian Greene is interviewed (at 35:30 to 38:30), and makes three points: (1) the new idea of quantum mechanics is that you cannot predict with certainty; (2) the weirdness is that the other possible outcomes occur in parallel universes; and (3) this is the most conservative way of interpreting the mathematics.

All three points are false. Greene is treated as if he were some sort of physics genius, but what he says is as nutty as that of New Age spiritualists like Deepak Chopra.

As I have written, quantum mechanics needs no probabilities, and the multiverse is philosophy. And the Many-worlds interpretation is not any simpler or more conservative than other interpretations.

If the theory predicts that two alternatives are possible, such as a cat being alive or dead, and then one of those is observed, then MWI says that the other one happens in a parallel universe. That is supposed to be mathematically "conservative" because then you do not have to adjust your formulas to say that the unseen alternative is no longer possible. That is not being simple or conservative. That is just refusing to accept what you have observed.
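In ordinary probability terms, the step that MWI refuses to take is just conditioning: set the unseen alternative to zero and renormalize. A trivial sketch:

```python
# Ordinary conditioning: the observed outcome keeps its weight,
# everything unseen goes to zero, and the result is renormalized.
def condition(dist, observed):
    total = dist[observed]
    return {outcome: (p / total if outcome == observed else 0.0)
            for outcome, p in dist.items()}

prior = {"alive": 0.5, "dead": 0.5}
posterior = condition(prior, "alive")   # {"alive": 1.0, "dead": 0.0}
```

Every other probabilistic science does this update without apology; only MWI treats it as an embarrassment to be explained away.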

The Guardian reporters go on to credit Greene with expanding our notion of science by speculating about ideas that can never be settled by observation. No, Greene is giving physics a bad name.

I found this image on an atheist site. If I accept this analogy, where is Greene's flashlight? He has no better way of illuminating anything than the philosophers and theologians.

Wednesday, November 21, 2012

New quantum crypto work

Some new quantum crypto research was announced: Quantum Cryptography Conquers Noise Problem, and Quantum Cryptography At The End Of Your Road.

None of this has any practical utility. The whole subject is based on misunderstandings of both cryptography and quantum mechanics.

Update: John Markoff of the NY Times reports:
Scientists at Toshiba and Cambridge University have perfected a technique that offers a less expensive way to ensure the security of the high-speed fiber optic cables that are the backbone of the modern Internet.

The research, which will be published Tuesday in the science journal Physical Review X, describes a technique for making infinitesimally short time measurements needed to capture pulses of quantum light hidden in streams of billions of photons transmitted each second in data networks. Scientists used an advanced photodetector to extract weak photons from the torrents of light pulses carried by fiber optic cables, making it possible to safely distribute secret keys necessary to scramble data over distances up to 56 miles.

Such data scrambling systems will most likely be used first for government communications systems for national security. But they will also be valuable for protecting financial data and ultimately all information transmitted over the Internet.
No, there is no value in using this technology for the internet. (I do not capitalize "internet" because it is a generic term, not a trademark or brand name.)
The approach is based on quantum physics, which offers the ability to exchange information in a way that the act of eavesdropping on the communication would be immediately apparent. The achievement requires the ability to reliably measure a remarkably small window of time to capture a pulse of light, in this case lasting just 50 picoseconds — the time it takes light to travel 15 millimeters.

The secure exchange of encryption keys used to scramble and unscramble data is one of the most vexing aspects of modern cryptography.

Public key cryptography uses a key that is publicly distributed and a related secret key that is held privately, allowing two people who have never met physically to securely exchange information. But such systems have a number of vulnerabilities, including potentially to computers powerful enough to decode data protected by mathematical formulas.
This is confused. Quantum key distribution is no substitute for public key cryptography because public key cryptography allows messages to be signed and authenticated.
If it is possible to reliably exchange secret keys, it is possible to use an encryption system known as a one-time pad, one of the most secure forms. Several commercially available quantum key distribution systems exist, but they rely on the necessity of transmitting the quantum key separately from communication data, frequently in a separate optical fiber, according to Andrew J. Shields, one of the authors of the paper and the assistant managing director for Toshiba Research Europe. This adds cost and complexity to the cryptography systems used to protect the high-speed information that flows over fiber optic networks.

Weaving quantum information into conventional networking data will lower the cost and simplify the task of coding and decoding the data, making quantum key distribution systems more attractive for commercial data networks, the authors said.
Again, this is badly confused. A one-time pad requires a secret key as long as the message. The whole point of using secret keys is to have keys much shorter than the message. It is only secure in a technical mathematical sense, but not secure in the ordinary sense of the word because it has no way of detecting data corruption errors. So no one ever uses a one-time pad, even if they can reliably exchange secret keys.
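For readers unfamiliar with it, a one-time pad is just an XOR with a random key as long as the message. This toy sketch also shows the corruption problem: flip ciphertext bits and the plaintext changes silently.

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR with a truly random key as long as the message.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"PAY 100"
key = secrets.token_bytes(len(msg))   # key must match message length
ct = otp(msg, key)
assert otp(ct, key) == msg            # decryption is the same XOR

# Malleability: XOR a chosen difference into the ciphertext and the
# plaintext changes accordingly, with nothing to detect the tampering.
tampered = bytes([ct[0] ^ ord("P") ^ ord("S")]) + ct[1:]
assert otp(tampered, key) == b"SAY 100"
```

So even "perfect" secrecy gives no integrity, which is why real systems add authentication rather than rely on a pad.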

Markoff is a smart guy who has been covering cryptology issues for many years, but he has been conned with this story.

Update: As of Friday Nov. 23, this Markoff article is the most emailed article on the NY Times site. That is what happens when research is overhyped.

Monday, November 19, 2012

New kinds of proofs

I argued below on the limits of math that math is limited by what is provable in axiomatic set theory (ZFC). Some comments disagreed with me.

MIT Computer Scientist Scott Aaronson writes on Nov. 17:
Last Friday, I was at a “Symposium on the Nature of Proof” at UPenn, to give a popular talk about theoretical computer scientists’ expansions of the notion of mathematical proof (to encompass things like probabilistic, interactive, zero-knowledge, and quantum proofs).  This really is some of the easiest, best, and most fun material in all of CS theory to popularize.  Here are iTunes videos of my talk and the three others in the symposium: I’m video #2, logician Solomon Feferman is #3, attorney David Rudovsky is #4, and mathematician Dennis DeTurck is #5.  Also, here are my PowerPoint slides.  Thanks very much to Scott Weinstein at Penn for organizing the symposium.
This suggests that there are new kinds of proof that have replaced the old style of math proofs. But in fact all the work he described consists of regular math theorems provable in ZFC like everything else.

Aaronson does describe some other ways of convincing someone that something is true. For example, someone with a complete proof can release info to convince others that he has a proof, without releasing the whole proof or even enough info to reconstruct the proof. Some of these ideas have practical applications, such as when the holder of a private cryptographic key convinces an authority that he possesses the key, without revealing the key. That evidence might even be called a "proof of possession". But none of these ideas has really changed the notion of a mathematical proof. If you want to get a math paper published in a math journal, you have to supply a proof from axioms just as always.
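Here is a crude sketch of the "proof of possession" idea as a challenge-response (a shared key and an HMAC for simplicity; real systems use public-key signatures, and this is not a zero-knowledge protocol):

```python
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)        # the prover's secret key
challenge = secrets.token_bytes(16)  # verifier sends a fresh nonce

# The prover answers the challenge without revealing the key itself.
response = hmac.new(key, challenge, hashlib.sha256).digest()

# The verifier (who shares the key in this toy version) recomputes
# the expected answer and compares in constant time.
expected = hmac.new(key, challenge, hashlib.sha256).digest()
ok = hmac.compare_digest(response, expected)
```

The response convinces the verifier, but it is evidence, not a mathematical proof in the ZFC sense.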

Sunday, November 18, 2012

Watson still bragging about stealing credit

I just listened to Jim Watson, scifri201211162.mp3. The first caller chewed him out for being a jerk, and I agree.

Watson is famous for his 1953 work with Crick where they proposed a modification of Linus Pauling's helix model of DNA. Their basis was some unpublished experimental work taken from Rosalind Franklin at a rival lab. When Pauling saw their papers, he corrected their modification and found the chemical structure of DNA.

Lynne Osman Elkin says:
One of the things I proposed last year at AAAS [the American Association for the Advancement of Science annual meeting] is that I think it should be called the Watson-Crick-Franklin structure. As far as I'm concerned, she was a de facto collaborator. Maybe she didn't give them her information directly. But every time they hit a stumbling point, it was her information that they got from Wilkins that straightened it out.
Watson was promoting a new edition of his 1968 book that brags about stealing Franklin's data and unpublished theories, and then tricking her into publishing her paper after the famous Watson-Crick paper. The public was thereby fooled into thinking that her work was done to confirm the Watson-Crick model, when it was actually the chief inspiration for the model.

In the Friday interview, Watson continued to badmouth her, and to refuse to give her credit for what she did. He even boasted that she died without knowing how he and Crick had cheated her.

Watson also claimed to not remember saying that Africans are stupid, and then mumbled something about not wanting to be quoted. His answers to other questions, such as about what defines good science, were surprisingly lame.

I have written extensively about how Einstein has been over-credited for relativity, thereby fueling speculation about how credit relates to his Jewishness. In the case of DNA, Watson, Crick, and Franklin were all atheists, and only Franklin was of Jewish descent. If there is a Jewish conspiracy to credit Jews, then Franklin should have been credited. It has been alleged that sexism was at work, but I do not see any evidence that she was mistreated because she was a woman or a Jew. It appears to me that Watson and Crick were just dishonest and greedy for the credit, and others failed to stick up for her.

Saturday, November 17, 2012

Dissecting Einstein's brain

A SciAm blog reports:
Now a new study in Brain, based on the most comprehensive collection of post-mortem images compiled to date, shows that Einstein’s cerebral cortex, responsible for higher-level mental processes, differs much more dramatically than previously thought from that of your Everyman of average intelligence. The paper, in fact, publishes for the first time the “road map” to the father of relativity’s brain, photographs that image 240 blocks of dissected tissue from the autopsy performed at the University of Pennsylvania by Thomas Harvey. ...

Were there other things that were also unusual?

One of the most interesting things about Einstein’s brain has to do with his sensory and motor cortices. We found an unusual region lower down in the motor cortex that processes information from the face and tongue and laryngeal apparatus. ...

Do you think this has anything to do with that famous photo of Einstein sticking his tongue out?

I’ve been asked that four times in the last three days. The first time the question caught me by surprise and I said I thought it was just a coincidence. Then I got to thinking about it and went to a mirror to see whether I could get my tongue out as far as Einstein had, and I came pretty close. So I think that wonderful photograph was probably Einstein just being spontaneous and impetuous.
This is silly. The comparison should be to other physicists, and I doubt that anyone would find anything unusual. I would be more interested in studying the brains of people with unusual talents, such as calculating prodigies.

Thursday, November 15, 2012

Impossible to observe a graviton

Nearly everyone says that there is a contradiction between quantum mechanics and gravity, and hence we need a theory of quantum gravity to unify all the forces. But no one has ever been able to demonstrate any such contradiction.

Famous mathematical physicist Freeman Dyson writes:
A simple calculation, based on the known laws of gravitation and quantum mechanics, leads to a striking result. To detect a single graviton with a LIGO apparatus, the mirrors must be exactly so heavy that they will attract each other with irresistible force and collapse into a black hole. In other words, nature herself forbids us to observe a single graviton with this kind of apparatus.

I propose as a hypothesis, based on this single thought-experiment, that single gravitons may be unobservable by any conceivable apparatus.

If this hypothesis were true, it would imply that theories of quantum gravity are untestable and scientifically meaningless. The classical universe and the quantum universe could then live together in peaceful coexistence. No incompatibility between the two pictures could ever be demonstrated. Both pictures of the universe could be true, and the search for a unified theory could turn out to be an illusion.
There is no problem quantizing weak gravitational fields, or linear approximations. That is good enough for all conceivable observations. But the theoretical research puzzles have to do with gravitons and absurd black hole scenarios that will never be observed.

Raphael Bousso argues in his explanation:
As Quantum Mechanics surely spells trouble for General Relativity, the existence of singularities suggests that General Relativity may also spell trouble for Quantum Mechanics. It will be fascinating to watch this battle play out.
It has played out. There is no trouble.

As I explain in my book, How Einstein Ruined Physics, theoretical physics has gone down a bad path of arguing about things that can never be observed. Most physicists will say that quantum gravity is one of the biggest problems in physics, but it is not a scientific problem at all.

Tuesday, November 13, 2012

The silly Everettian revolution

Frank Tipler writes:
The most revolutionary, beautiful, elegant, and important idea to be advanced in the past two centuries is the idea that reality is made up of more than one universe. By an infinity of parallel universes, in fact. By "parallel universe" I mean universes exactly like ours, containing individuals exactly like each and every one of us. ...

A truly mind-boggling idea, because were it to be true, it would infinitely expand reality. It would expand reality infinitely more than the Copernican Revolution ever did, because at most, all that Copernicus did was increase the size of this single universe to infinity. The parallel universes concept proposes to multiply that single infinite Copernican universe an infinite number of times. Actually, an uncountable infinity of times.

... That is, if you accept quantum mechanics — and more than a century of experimental evidence says you have to — then you have to accept the existence of the parallel universes.

Like the Copernican Revolution, the Everettian Revolution will take decades before it is accepted by all educated people, and it will take even longer for the full implications of the existence of an infinite number of parallel universes to be worked out. The quantum computer, invented by the Everettian physicist David Deutsch, is one of the first results of parallel universe thinking.
The crazy part of this is where it says that many-worlds is a consequence of quantum mechanics. Freeman Dyson was one of the creators of quantum field theory, and he says that many-worlds is not even science.

No, quantum mechanics does not give any mathematical or physical reason to accept parallel universes.

Here is one explanation that is falling out of favor:
Researchers at the Large Hadron Collider have detected one of the rarest particle decays seen in Nature.

The finding deals a significant blow to the theory of physics known as supersymmetry.

Many researchers had hoped the LHC would have confirmed this by now.
Supersymmetry was supposed to solve at least five different problems in physics.

Czech string theorist Lubos Motl wrote:
By far the most important argument in favor of supersymmetry is the fact that it seems to be implied by string theory, the only known - and, most likely, the only mathematically possible - consistent unifying theory of fundamental forces including gravity.
String theory is dead. Links to other views are here.

Sunday, November 11, 2012

Edge.org essays on explanations

I remarked that physicists keep looking for hidden variables, in spite of an 80-year consensus that such theories contradict quantum mechanics. Satyajit Das writes in response to the annual Edge.org question:
In 1927, Heisenberg showed that uncertainty is inherent in quantum mechanics. ...

The play repeats their meeting three times, each with different outcomes. As Heisenberg, the character, states: "No one understands my trip to Copenhagen. Time and time again I've explained it. To Bohr himself, and Margrethe. To interrogators and intelligence officers, to journalists and historians. The more I've explained, the deeper the uncertainty has become."

In his 1930 text The Principles of Quantum Mechanics, Paul Dirac, a colleague of Heisenberg, contrasted the Newtonian world and the Quantum one: "It has become increasingly evident… that nature works on a different plan. Her fundamental laws do not govern the world as it appears in our mental picture in any direct way, but instead they control a substratum of which we cannot form a mental picture without introducing irrelevancies."
Dirac was right. It is very hard to form a mental picture of the atom without introducing irrelevant hidden variables.

I think that today's search for hidden variables is a misguided attempt to validate some flawed mental picture. Those doing the searching are usually not explicit about their faulty assumptions.

For an example of someone who expects mathematics to explain everything, Antony Garrett Lisi writes:
What is tetrahedral symmetry doing in the masses of neutrinos?! Nobody knows. But you can bet there will be a good explanation. It is likely that this explanation will come from mathematicians and physicists working closely with Lie groups. The most important lesson from the great success of Einstein's theory of General Relativity is that our universe is fundamentally geometric, and this idea has extended to the geometric description of known forces and particles using group theory. It seems natural that a complete explanation of the Standard Model, including why there are three generations of fermions and why they have the masses they do, will come from the geometry of group theory. This explanation does not yet exist, but when it does it will be deep, elegant, and beautiful — and it will be my favorite.
Yes, we have a geometric description of the known forces and particles, but I am not so sure that our universe is fundamentally geometric.

Eric R. Weinstein also promotes the geometrical theory of everything:
But the most important lesson is that, at a minimum, Einstein's minor dream of a world of pure geometry has largely been realized as the result of a large group effort. All known physical phenomena can now be recognized as fashioned from the pure, if still heterogeneous, marble of geometry through the efforts of a new pantheon of giants. Their achievements, while still incomplete, explain in advance of unification that the source code of the universe is overwhelmingly likely to determine a purely geometric operating system written in a uniform programming language.
Lisa Randall writes:
The Standard Model's success nonetheless illustrates another beautiful idea essential to all of physics, which is the concept of an "effective theory." The idea is simply that you can focus on measurable quantities when making predictions and leave understanding the source of those quantities to later research when you have better precision.
Not just physics: this idea has been essential to all of science for millennia. She is just describing a theory that has been stripped of its pretensions of being some sort of perfect knowledge.

It is funny that she describes effective theory as if it were just one type of physical theory. She also studies string theory, which never makes any predictions yet claims to be a theory of everything. She acts as if these are just two kinds of scientific theories. No, there is one kind of scientific theory. String theory is not science.

All of the Edge essays are on this page.

Saturday, November 10, 2012

Bad week for hidden variables

The Quantum Frontiers blog writes:
Today, it became abundantly clear that it’s been a tough week for hidden variable theories. ...

Let me explain. Hidden variable theories were proposed by physicists in an attempt to explain the ‘indeterminism’ which seems to arise in quantum mechanics, and especially in the double-slit experiment. ...

However, the two recent Science papers in question used quantum-information variants of this delayed-choice experiment to hammer additional nails in the coffin of local hidden variable theories.
As I explained this week about a third paper against hidden variables, these theories have been dead since 1930.
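For readers unfamiliar with what these experiments actually test, the gap between quantum mechanics and local hidden variables is captured by the CHSH form of Bell's theorem. The sketch below is my own illustration, not taken from the papers discussed: it compares the quantum singlet-state correlator E(a,b) = -cos(a-b) against a toy deterministic hidden variable model. Quantum mechanics reaches |S| = 2√2 ≈ 2.83, while the toy local model stays at the classical bound of 2.

```python
import math
import random

def E_quantum(a, b):
    # Correlator predicted by quantum mechanics for the singlet state,
    # with detector angles a and b.
    return -math.cos(a - b)

def E_lhv(a, b, trials=200_000, rng=random.Random(0)):
    # A hypothetical local hidden variable model: both particles carry a
    # shared random angle lam; each detector outputs the sign of
    # cos(setting - lam), with the second detector anti-correlated.
    total = 0
    for _ in range(trials):
        lam = rng.uniform(0, 2 * math.pi)
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = -1 if math.cos(b - lam) >= 0 else 1
        total += A * B
    return total / trials

def chsh(E):
    # Standard CHSH settings that maximize the quantum violation.
    a, a2, b, b2 = 0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print(chsh(E_quantum))  # ≈ 2.828, violating the classical bound of 2
print(chsh(E_lhv))      # ≈ 2, respecting the bound |S| <= 2
```

No deterministic shared-variable model of this kind can exceed 2; that is the content of the CHSH inequality, and it is the quantum prediction of 2√2 that experiments confirm.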

So why do physicists keep disproving hidden variable theories if no one believes in them anyway? Einstein and other quantum mechanics skeptics expressed a preference for hidden variable theories.

I argued in my FQXi essay that the pursuit of hidden variable theories is rooted in a false belief in the mathematization of nature. People refuse to acknowledge that they are even making the assumption.

Friday, November 9, 2012

Minkowski conclusions independent of Einstein

Galina Weinstein writes in a new paper:
In the summer of 1905, Minkowski and Hilbert led an advanced seminar on mathematical physics, on electrodynamical theory. Minkowski told Born later that it came to him as a great shock when Einstein published his paper in which the equivalence of the different local times of observers moving relative to each other was pronounced; for he had reached the same conclusions independently. He never made a priority claim and always gave Einstein his full share in the great discovery. In his famous talk, "Space and Time" Minkowski wrote that the credit of first recognizing sharply that t and t' are to be treated the same, is of A. Einstein.
In Space and Time, Minkowski does credit Einstein for recognizing that the local time of one electron is as good as that of any other. It is the only part of relativity that Minkowski credits to Einstein. Minkowski wrote:
Lorentz denoted the combination t' of (t and x) as the local time (Ortszeit) of the uniformly moving electron, and used a physical construction of this idea for a better comprehension of the contraction-hypothesis. But to perceive clearly that the time of an electron is as good as the time of any other electron, i.e., that t and t' are to be treated equivalently, has been the service of A. Einstein.
This is quite similar to what Einstein had just written in his 1907 review paper:
It required only the recognition that the auxiliary quantity introduced by H. A. Lorentz, and called by him "local time", can be defined as simply "time."
Einstein seems to be taking credit for this idea, but Poincare clearly had this idea before Einstein and Minkowski. Lorentz may have had it also, as Poincare attributed it to Lorentz. Poincare even nominated Lorentz for a Nobel Prize in 1902 to recognize his idea of local time. Minkowski studied Poincare's papers in his seminar, and should have known that Poincare had the idea before Einstein. As Weinstein explains:
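For reference, the formula at issue is simple. Lorentz's local time for a frame moving with velocity $v$ along the $x$-axis is, in the exact form of his 1904 theory (a standard textbook rendering, not quoted from Weinstein's paper):

$$t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.$$

Poincare's operational reading was that $t'$ is simply what clocks in the moving frame show when they are synchronized by exchanging light signals, which is exactly the point about treating $t$ and $t'$ equivalently that Minkowski credited to Einstein.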
Scott Walter writes, "This story of Minkowski’s recollection of his encounter with Einstein’s paper on relativity is curious, in that the idea of the observable equivalence of clocks in uniform motion had been broached by Poincaré in one of the papers studied during the first session of the electron-theory seminar. It is possible, of course, that Poincaré’s operational definition of local time escaped Minkowski’s attention, or that Minkowski was thinking of an exact equivalence of timekeepers".25 Before 1905 Poincaré stressed the importance of the method of clocks and their synchronization by light signals. He gave a physical interpretation of Lorentz's local time in terms of clock synchronization by light signals, and formulated a principle of relativity. John Stachel explains: "Poincaré had interpreted the local time as that given by clocks at rest in a frame moving through the ether when synchronized as if – contrary to the basic assumptions of Newtonian kinematics – the speed of light were the same in all inertial frames. Einstein dropped the ether and the 'as if': one simply synchronized clocks by the Poincaré convention in each inertial frame and accepted that the speed of light really is the same in all inertial frames when measured with clocks so synchronized".26
Stachel has made a career out of editing Einstein's papers and finding ways to credit Einstein, and is correct about Poincare having the operational definition of local time that Minkowski credited to Einstein.

The only things left for Stachel to credit to Einstein are dropping Poincare's aether and the "as if". But Poincare's explanation did not involve the aether or an "as if". Stachel is quoting himself, not Poincare.

I did not know that Minkowski told Born that he reached some relativity conclusions independently of Einstein. But Minkowski appears to have gotten those conclusions from Poincare. His famous Space And Time paper does not mention Poincare, but his previous paper cites Poincare and directly follows him with the Lorentz group, four-dimensional spacetime, and electromagnetic covariance. So yes, Minkowski probably did come to all those conclusions independently of Einstein because he got them from Poincare.

At any rate, it was Minkowski's 1908 Space And Time article that got everyone excited about relativity and popularized the new geometrical interpretation of spacetime. Einstein's 1905 paper was considered an elaboration of Lorentz's ideas. As Weinstein documents, Minkowski wrote to Einstein in late 1907 asking for a reprint of his 1905 paper, just a couple of months before Minkowski presented his own results. So Minkowski's seminar and work were based almost entirely on Lorentz and Poincare. The supposedly original idea of the operational identification of local time had been published years earlier by Poincare.

Wednesday, November 7, 2012

New paper tries to disprove quantum mechanics

I commented in May on work supposedly showing that the wavefunction is real. Now the theorists are trying experimental tests, and have posted this paper:
Can different quantum state vectors correspond to the same physical state? An experimental test

A century on from the development of quantum theory, the interpretation of a quantum state is still discussed. If a physicist claims to have produced a system with a particular wave function, does this represent directly a physical wave of some kind, or is the wave function merely a summary of knowledge, or information, about the system? A recent no-go theorem shows that models in which the wave function is not physical, but corresponds only to an experimenter's information about a hypothetical real state of the system, must make different predictions from quantum theory when a certain test is carried out. Here we report on an experimental implementation using trapped ions. Within experimental error, the results confirm quantum theory. We analyse which kinds of theories are ruled out. ...

The result of Ref. [3] can be considered as a no-go theorem for interpretations of quantum theory, analogous to Bell's theorem. Each theorem states that a certain class of theories must make different predictions from quantum theory -- locally causal theories in the case of Bell's theorem and ψ-epistemic theories in the case of Ref. [3]. We have suggested that a natural threshold is defined by quantum state discrimination.
So they are claiming that they have disproved ψ-epistemic theories, just as Bell and Bell test experiments have disproved locally causal theories.

No, Bell did not disprove locally causal theories. He only disproved a type of hidden variable theory that no one believed in anyway. Likewise, these guys are only trying to disprove ψ-epistemic theories that no one believes. I explain further here and here.

Quantum mechanics was discovered in the 1920s, and incorporated into physics textbooks in the 1930s. Ever since, the mainstream interpretation has been that it is a locally causal theory and a ψ-epistemic theory. So why are all these physicists, 80 years later, trying to do experiments that prove quantum mechanics correct but also disprove local causality and ψ-epistemic ontology?

Maybe this paper has some clever new way of testing quantum mechanics. I don't know. But it is not telling us anything except what was common knowledge 80 years ago.

I will be interested to see whether this paper gets publicized as some profound new result, and if the editors require that the claims be scaled back before publication. If I were the referee, I would require that the article say that it was only confirming what Bohr, Heisenberg, and the textbooks said 80 years ago. I doubt that the editor will require any such change.

Tuesday, November 6, 2012

Aether dominates accepted physics

The Wikipedia article on the Relativity priority dispute says:
But they [most historians of science like Holton] argue that it was Einstein who completely eliminated the classical ether ...
Yes, most historians like Holton say that, but it is also directly contrary to what other reputable sources say. For example, a very prominent physicist says:
“Quite undeservedly, the ether has acquired a bad name. There is a myth repeated in many popular presentations and textbooks, that Albert Einstein swept it into the dustbin of history. The real story is more complicated and interesting. I argue here that the truth is more nearly the opposite: Einstein first purified, and then enthroned, the ether concept. As the 20th century has progressed, its role in fundamental physics has only expanded. At present, renamed and thinly disguised, it dominates the accepted laws of physics.” [Fantastic Realities: 49 Mind Journeys And a Trip to Stockholm, By Frank Wilczek, Betsy Devine, 2006, p.293]
Yes, Wilczek is correct: the aether is essential to modern theories of electromagnetism. The biggest physics news of the 21st century is the discovery of the Higgs boson, and that was the confirmation of a 50-year-old aether theory.

It is crazy to talk of eliminating the aether as if that were some great accomplishment. Explaining the failure to detect aether velocity led to the theory of relativity, but Einstein himself said that it was a mistake to deny the aether. The aether (under various names) is essential to modern physics.

Sunday, November 4, 2012

Explaining quantum weirdness

Johannes Koelman explains some quantum paradox with analogies, and then gives a choice of interpretations of the weirdness:
1) Deny Its Existence
2) Accept Non-Local Socks
3) Embrace Precognition
4) Deny Free Will
5) Avoid The "Would Have Happened If" Fallacy
He takes the last one, of course. It is the positivist view that Bohr advocated all along. The other views cannot be ruled out, but they are pretty crazy.

Most explanations of quantum mechanics are filled with nonsense. For example, the latest Scientific American says:
Throughout the 20th century scientists and mathematicians have had to accept that some things will always remain beyond the grasp of reason. In the 1930s Kurt Gödel famously showed that even in the rational universe of mathematics, for every paradox that deep thinking slaps down, new ones pop up. Economists and political theorists found similar limitations to rational rules for organizing society, and historians of science punctured the belief that scientific disputes are resolved purely by facts. The ultimate limits on reason come from quantum physics, which says that some things just happen and you can never know why. ...

Quantum mechanics may be a better model for human behavior than classical logic, which fails to predict the human impulse to cooperate and act altruistically. Instead of trying to force our thinking into a rational framework, we are better off expanding the framework.
No, Goedel and economists did not show that. The claim about "historians of science" seems to be a reference to Kuhnian paradigm shifts. That is where philosophers say that physicists are irrational.

Friday, November 2, 2012

Meeting on naturalism

A bunch of atheist intellectuals just had an informal discussion meeting, and a philosopher noted:
Coyne declared himself to be an incompatibilist (no surprise there), accusing compatibilists of conveniently redefining free will in order to keep people from behaving like beasts. However, Jerry himself admitted to having changed his definition of free will, and I think in an interesting direction. His old definition was the standard idea that if the tape of the history of the universe were to be played again you would somehow be able to make a different decision, which would violate physical determinism. Then he realized that quantum indeterminacy could, in principle, bring in indeterminism, and could even affect your conscious choices (through quantum effects percolating up to the macroscopic level). So he redefined free will as the idea that you are able to make decisions independently of your genes, your environments and their interactions. To which Dennett objected that that’s a pretty strange definition of free will, which no serious compatibilist philosopher would subscribe to. ...

During the follow-up discussion Weinberg declared his leaning toward Dennett’s position, despite his (Weinberg’s) acceptance of determinism.
This shows the confusion caused by physical determinism.

I say that determinism is absurd, and so are the consequences Coyne draws from it. I am surprised to see the famous Steven Weinberg in the determinist camp, because most physicists believe that quantum mechanics proves that the world is fundamentally probabilistic and non-deterministic.

I am not advocating non-determinism either. There is no scientific evidence one way or the other. The main argument for non-determinism is quantum mechanics, but that is no more probabilistic than any other scientific theory.

These intellectuals move on to free will, morality, God, and various other issues. Needless to say, their foolish conclusions are even sillier than their faulty premises.

Wednesday, October 31, 2012

Einstein agreed with Lorentz

Sometimes Einstein books give the impression that he created a new relativity theory in 1905, and that it was only later noticed that Lorentz had the same formulas earlier. In fact, the prevailing opinion in 1905 was that Einstein's theory was the same as Lorentz's. Neither Einstein, nor Lorentz, nor anyone else said that there was any significant difference between Lorentz's and Einstein's theories.

Galina Weinstein has a new paper on Variation of Mass with Velocity: "Kugeltheorie" or "Relativtheorie":
Kaufmann concluded, from 1905 onwards, that the mathematical expression proposed by Alfred Bücherer could also be in accord with his measurements and that one could not definitively decide between that expression and that of Abraham as it was derived from his experiments. In the same paper, Kaufmann noted that the two theories of Lorentz and Einstein yielded the same equations of motion for the electron, and he gave the first clear account of the basic theoretical difference between Lorentz's and Einstein's views.29

In the annual general meeting of the German Society of Scientists and Physicians (Deutsche Gesellschaft der Naturforscher und Ärzte) in Stuttgart, on the 19th of September 1906, scientists discussed three world pictures, the electromagnetic theories of Abraham, Bücherer, or the other picture based on Lorentz and Einstein's "Principle of Relativity". A discussion revolving around the foundations of physics was held after Planck's lecture. The participants in the discussion were, among others, Kaufmann, Planck, Bücherer, Abraham, and Arnold Sommerfeld. Scientists did not yet distinguish between Lorentz's theory and Einstein's theory. There were two main theories relating to the electron: Abraham's and Lorentz-Einstein's. An inclination towards Einstein and Lorentz's theories, on the part of scientists such as Planck and Max Laue, was evident.
Lorentz did credit Einstein with having a slightly different approach, but neither expressed any differences in the conclusions, except for minor technical errors. Lorentz said that the chief difference was that Einstein postulated what he had proved. No one saw much difference until several years later when Minkowski's approach became popular, and Einstein adopted it.

Walter Kaufmann's 1906 paper says:
Then, a work by H. A. Lorentz[13] appeared in the year 1904, in which the attempt was made to remove the difficulties which still existed in the optics of moving bodies, by somewhat modified fundamental assumptions on the electron and also on the molecular forces acting in-between the material body-particles. ... Lorentz now showed, that one could arrive at such a result, when it is assumed that the dimensions of all physical bodies, including their individual molecules and electrons, would change their shape in a very specific way with velocity ...

It is now very remarkable, that, starting from quite different assumptions, Einstein[17] recently arrived at results, which are in agreement with those of Lorentz concerning the consequences accessible to observation, though in which the previously mentioned difficulties of epistemological kind have been avoided. Einstein introduced the principle of relative motion, at least as regards translations, as a postulate. He thus places the theorem at the top, that physical phenomena observable in any rigid system, must be independent from whether the system (together with the observer) is moving relatively to any other system. ... The results accessible to observation are thus the same with respect to both authors; however, while Lorentz only shows that his hypotheses lead to the desired result without excluding that the same can also be achieved in another way, it is shown by Einstein, that when the desired result, namely the principle of relative motion, is placed at the top of the whole of physics, then the kinematics of the rigid body must necessarily be changed in the way stated, and that the equations of electrodynamics[18] must assume the form stated by Lorentz. ...

The measurement results are not compatible with the fundamental assumption of Lorentz-Einstein.
Note that Kaufmann is trying to refute Lorentz and Einstein, and considers refuting one the same as refuting the other. Most of the paper just mentions Lorentz without Einstein, such as the section titled, "Comparison with the theories of Abraham, Lorentz and Bucherer."

Kaufmann appears not to have been aware of Poincare's theory, or of how Einstein got the relativity postulate from Poincare.

Kaufmann says Einstein "places the theorem at the top". That is his version of Lorentz's famous statement that Einstein simply postulates what we have deduced. I give a more technical explanation of what this means.

Kaufmann's argument is a little misleading where he says that Einstein showed "that the equations of electrodynamics must assume the form stated by Lorentz." This sounds like a strong statement, but if you read the footnote, Einstein assumes Maxwell's equations in the rest frame, and hence in the moving frame via the relativity principle. So Einstein is really postulating the equations of electrodynamics. It was Lorentz and Poincare who tried to prove the equations for the moving frame, assuming the equations for the rest frame.
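To make that footnote concrete: covariance of Maxwell's equations under a boost with velocity $v$ along the $x$-axis forces the fields to transform as (standard modern notation, not Kaufmann's):

$$E'_x = E_x, \quad E'_y = \gamma(E_y - vB_z), \quad E'_z = \gamma(E_z + vB_y),$$
$$B'_x = B_x, \quad B'_y = \gamma\left(B_y + \frac{v}{c^2}E_z\right), \quad B'_z = \gamma\left(B_z - \frac{v}{c^2}E_y\right),$$

with $\gamma = 1/\sqrt{1 - v^2/c^2}$. Lorentz and Poincare derived these transformations from the equations themselves; Einstein's route was to postulate the relativity principle and read the transformations off as a consequence.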