Wednesday, July 30, 2014

Denmark ignored Galileo

There is a widespread myth that Galileo invented the telescope, discovered heliocentrism, and was suppressed by a Pope who would not tolerate new ideas.

The respected physics historian Helge Kragh writes in Galileo in early modern Denmark, 1600-1650:
The scientific revolution in the first half of the seventeenth century, pioneered by figures such as Harvey, Galileo, Gassendi, Kepler and Descartes, was disseminated to the northernmost countries in Europe with considerable delay. In this essay I examine how and when Galileo's new ideas in physics and astronomy became known in Denmark, and I compare the reception with the one in Sweden. It turns out that Galileo was almost exclusively known for his sensational use of the telescope to unravel the secrets of the heavens, meaning that he was predominantly seen as an astronomical innovator and advocate of the Copernican world system. Danish astronomy at the time was however based on Tycho Brahe's view of the universe and therefore hostile to Copernican and, by implication, Galilean cosmology. Although Galileo's telescope attracted much attention, it took about thirty years until a Danish astronomer actually used the instrument for observations. By the 1640s Galileo was generally admired for his astronomical discoveries, but no one in Denmark drew the consequence that the dogma of the central Earth, a fundamental feature of the Tychonian world picture, was therefore incorrect.
This is not surprising. The Dane Tycho invented the instruments that made the best astronomical observations in the world, and that data was used for the best models. Galileo had nothing to compete with that.

Galileo said "Mathematics is the language in which God has written the universe," but his telescopic observations and heliocentric arguments were not very mathematical.

Kragh concludes:
Whereas Galileo was well known and highly reputed in the first two decades of the seventeenth century, it took longer before he was discovered by astronomers and natural philosophers in the Nordic countries. Tycho Brahe was aware of him at an early date, but he was an exception. The first time Galileo was mentioned in print by a Danish scholar was in 1617, and five years later he appeared in a Swedish publication. Yet, still around 1640 there were only few references to his scientific work. What eventually attracted attention to the innovative Italian were almost exclusively his astronomical discoveries made by means of the amazing telescope. His advocacy of the Copernican world system was noted, but without making any impact. In the first half of the century there still were no Copernicans in either Denmark or Sweden. Astronomers were either Tychonians or supporters of the Ptolemaic theory.

Galileo’s international fame undoubtedly rested on his telescopic discoveries, but of course he also did pioneering work in mechanics and other branches of natural philosophy. First of all, he introduced the experimental method. There seems to be no mention in the Danish scholarly literature of the physical rather than astronomical Galileo. One looks in vain for awareness of or comments on his theory of the pendulum, his laws of freely falling bodies or his ideas about inertial motion; nor is his views on atomism, the void and the nature of heat to be found in the learned literature. These parts of Galileo’s work were foreign to Danish natural philosophers who predominantly thought in terms of Aristotelian concepts and tended to interpret the Bible quite literally. The situation in Sweden was not very different. Finally it is worth mentioning that apparently the process against Galileo in 1633 did not create much interest. It was known but not, as far as I can tell, discussed in print until much later.
Kepler's astronomy was a whole lot more important than Galileo's during this period. Galileo was the first to publish observations of the moons of Jupiter, but others drew the same conclusions once they got telescopes. Kepler had a sophisticated mathematical model that was way beyond Tycho's, and Tycho's was way beyond Galileo's. There was no good reason for Danes to pay much attention to Galileo's astronomy. Galileo's confrontation with the Pope made a good story, but scientifically, it was not that important.

Sunday, July 27, 2014

Born rule is incompatible with many-worlds

Physicist Sean M. Carroll makes another bad attempt at explaining quantum mechanics:
One of the most profound and mysterious principles in all of physics is the Born Rule, named after Max Born. In quantum mechanics, particles don’t have classical properties like “position” or “momentum”; rather, there is a wave function that assigns a (complex) number, called the “amplitude,” to each possible measurement outcome. The Born Rule is then very simple: it says that the probability of obtaining any possible measurement outcome is equal to the square of the corresponding amplitude. (The wave function is just the set of all the amplitudes.)
This is really confused. Particles certainly do have properties like position and momentum; these are observed all the time. The whole point of a particle accelerator is to put particles at a particular position with a particular momentum. They only have these properties when they are observed that way, but then they are not even particles unless they are observed that way.

The description of the wave function is over-simplified. For a system as trivial as one electron, the wave function is already more complicated, and the probability is not just the square of the amplitude.

More importantly, once you accept the quantum mechanics premise that observables are operators on a Hilbert space, then there is nothing mysterious about the Born rule. There is no other way to make sense out of observables being operators. It is only mysterious to many-worlds advocates like Carroll, because they do not believe in probabilities. They believe that all possibilities happen in alternate universes and that there is no way to quantify those universes.
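Stated concretely, the rule just squares the magnitudes of the amplitudes. A minimal numerical sketch in Python (the state amplitudes here are invented purely for illustration):

```python
import numpy as np

# A normalized two-level state written in the measurement basis.
# Amplitudes are complex; probabilities come from their squared magnitudes.
psi = np.array([3 + 4j, 0 + 5j]) / np.sqrt(50)  # |3+4j|^2 + |5j|^2 = 50

probs = np.abs(psi) ** 2           # Born rule: p_i = |<i|psi>|^2
print(probs)                       # [0.5 0.5]
print(np.isclose(probs.sum(), 1))  # probabilities sum to 1: True
```

The complex phases drop out entirely, which is why two very different amplitudes (3+4j and 5j) can give identical probabilities.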

A comment explains:
As a theory, Many Worlds is in a bad state, and this paper is an example of why.

If someone tells me that there are many quantum worlds in a single wavefunction, I expect that they can tell me exactly what part of a wavefunction is a world, and how many worlds there are in a given wavefunction.

As Sean says in his article, a naive attempt to be concrete about what a world is, and how many there are in a given wavefunction, leads to something which *disagrees* with experiment.

But rather than regard this as a point against Many Worlds, and rather than try new ways to carve up the wavefunction into definite worlds… instead we have contorted sophistical arguments about how you should *think* in a multiverse, as the explanation of the Born rule.

The intellectual decline comes when people stop regarding Born probabilities as frequencies, and stop wanting a straightforward theory in which you can “count the worlds”.

Common sense tells me that if A is observed happening twice as often as B, and if we are to believe in parallel universes, then there ought to be twice as many universes where A happens, or where A is seen to happen. But a detailed multiverse theory in which this is the case is hard to construct (Robin Hanson is one of the few to have tried).

Instead what we are getting (from Deutsch, from Wallace, now here) are these rambling arguments about decision theory, rationality, and epistemology in a multiverse. They all aim to produce a conclusion of the form, “you should think that A is twice as likely as B”, without having to exhibit a clear picture of reality in which A-worlds are twice as *common* as B-worlds.
Lumo picks Carroll apart in greater detail, and concludes:
I am really annoyed by the proliferation of this trash and I am annoyed by the fact that this trash is being repetitively pumped into the public discourse by the media and blogs run by narcissist crackpots like Sean Carroll, building upon Goebbels' claim that a lie repeated 100 times becomes the truth. At the end, the reason why I am so annoyed is that people don't have time to appreciate the clever, precious, consistent, and complete way how Nature fundamentally describes phenomena, and the people – like Heisenberg et al. – who have found those gems. These people are the true heroes of the human civilization. Instead, we're flooded by junk by Carroll-style crackpots whose writings don't make any sense and who are effectively spitting on Heisenberg et al.
He is over-the-top, as usual, but he is right that this advocacy of many-worlds is crackpot stuff.

Friday, July 25, 2014

New Poincare scientific biography

There is a new review of Henri Poincare: A Scientific Biography:
Poincaré was, with the possible exception of Hilbert, the deepest, most prolific, and most versatile mathematician of his time. His collected works fill eleven large volumes, and that does not include several volumes on mathematical physics and another several volumes of essays on science and philosophy for the educated reader. ...

How did Poincaré find himself in non-Euclidean geometry? Bolyai and Lobachevskii developed nonEuclidean geometry in the 1820s, and Beltrami put it on a firm foundation (using Riemann’s differential geometry) in 1868. So non-Euclidean geometry was already old news, in some sense, when Poincaré began his research in the late 1870s. But in another sense it wasn’t. Non-Euclidean geometry was still a fringe topic in the 1870s, and Poincaré brought it into the mainstream by noticing that non-Euclidean geometry was already present in classical mathematics. ...

For most of his career, Poincaré was as much a physicist as a mathematician. He taught courses on mechanics, optics, electromagnetism, thermodynamics, and elasticity, and contributed to the early development of relativity and quantum theory. He was even nominated for the Nobel Prize in physics and garnered a respectable number of votes. ...

Another interesting thread that runs through the book is Poincaré’s interest in physics, particularly his near-discovery of special relativity. Gray shows how Poincaré took many of the right steps, starting from Maxwell’s equations and getting as far as introducing the Lorentz group. But he lacked Einstein’s physical insight, and the mathematical insight that could have made up for this, Minkowski’s space-time, was not yet available. As Gray memorably puts it (p. 378):
For Poincaré ... to have grasped the full implications of special relativity he would have had to be not Einstein, but Minkowski.
It is funny how these authors go out of their way to praise Einstein, even when the comments do not make any sense. Any discussion of special relativity always credits Einstein as the discoverer.

While he says that Poincare lacked Einstein's insight, he also says that Poincare grasped it all except for Minkowski's space-time.

Since Poincare had more of the theory than Einstein, the only way to credit Einstein is to claim that Poincare was deficient in some way. Sometimes the claim is that he did not understand what he wrote. In this review, the argument is that he did not explain what Minkowski wrote three years later, something Einstein did not even understand until about five years after that.

Poincare and Einstein both published their big special relativity papers in 1905, with Poincare announcing his results first. Minkowski published his famous papers in 1907 and 1908, based on Poincare, not Einstein. Minkowski's 1908 paper emphasized non-Euclidean geometry, and that was what caused relativity to catch on among physicists.

Poincare's 1905 paper had space and time united in a 4-dimensional spacetime, the Lorentz group and algebra as a 4D symmetry, 4-vectors, and the covariance of Maxwell's equations. In short, he presented relativity as a non-Euclidean geometry. Einstein had none of this and did not even mention it in a relativity survey paper he wrote a year later.

Minkowski extended Poincare's geometry with Minkowski diagrams and worldlines. Again, Einstein had none of this, and admitted that he did not understand it.
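The geometric content described above is easy to illustrate: a Lorentz boost is a linear map on 4-vectors that leaves the Minkowski interval invariant. A minimal sketch in Python, with illustrative numerical values and units where c = 1:

```python
import numpy as np

v = 0.6                        # boost velocity (illustrative value), units c = 1
g = 1.0 / np.sqrt(1 - v**2)    # Lorentz factor gamma = 1.25

# Boost along x, acting on 4-vectors (t, x, y, z)
L = np.array([[ g,   -g*v, 0, 0],
              [-g*v,  g,   0, 0],
              [ 0,    0,   1, 0],
              [ 0,    0,   0, 1]])

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric

def interval(x):
    return x @ eta @ x   # t^2 - x^2 - y^2 - z^2

event = np.array([2.0, 1.0, 0.5, -0.3])  # an arbitrary event
boosted = L @ event

print(np.isclose(interval(event), interval(boosted)))  # True: interval preserved
```

The boosts, together with rotations, form the Lorentz group: exactly the 4D symmetry structure in Poincare's 1905 paper.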

The book credits Poincare with seeing the gravitational implications in 1905, but suggests that he might have had an epistemological confusion by failing to distinguish between the length of a measuring rod, and what he measures the length of a measuring rod to be. That is, distinguishing the actual length from the apparent length. Overbye's Einstein book also mentioned this distinction.

But you can check the papers yourself for this distinction. Einstein does not make it in his famous 1905 paper. Poincare does in his long 1905 paper:
Or this part which would be, so to speak, common to all the physical phenomena, would be only apparent, something which would be due to our methods of measurement.
Poincare explains how this new view is different from Lorentz's, but Einstein never made any such claim.

Poincare's special relativity was vastly superior to Einstein's work in every aspect. This is detailed in my book, How Einstein Ruined Physics. Einstein is only credited with nonsensical arguments.

Wednesday, July 23, 2014

Aaronson sarcastic about quantum info in 2040

Scott Aaronson writes an article on How Might Quantum Information Transform Our Future?:
Picture, if you can, the following scene. It’s the year 2040. You wake up in the morning, and walk across your bedroom to your computer to check your email and some news websites. Your computer, your mail reader, and your web browser have some new bells and whistles, but all of them would be recognizable to a visitor from 2014: on casual inspection, not that much has changed. But one thing has changed: if, while browsing the web, you suddenly feel the urge to calculate the ground state energy of a complicated biomolecule, or to know the prime factors of a 5000-digit positive integer — and who among us don’t feel those urges, from time to time? — there are now online services that, for a fee, will use a quantum computer to give you the answer much faster than you could’ve obtained it classically. Scientists, you’re vaguely aware, are using the new quantum simulation capability to help them design drugs and high-efficiency solar cells, and to explore the properties of high-temperature superconductors. Does any of this affect your life? Sure, maybe it does — and if not, it might affect your children’s lives, or your grandchildren’s. At any rate, it’s certainly cool to know about. ...

As magical as it all sounds, this is the wondrous science-fiction future that my sixteen years of research in quantum computing and information lead me to believe is possible. Assuming, of course, that we actually do build scalable quantum computers.
So even if scalable quantum computers are invented, the impact on our lives will be negligible.

I guess he is being sarcastic here, so it is hard to tell whether he thinks quantum computers will ever be possible, or whether he is making fun of those who do. He alternates between over-hyping the subject and criticizing those who do.

Predicting the future is tricky, of course, but computer technology has been on a stable predictable path for a long time. Processing power has followed Moore's Law for 50 years. Artificial intelligence, such as voice and image recognition, has progressed more or less on schedule. Some say that we are headed for the Singularity around 2040. That seems optimistic to me, but they are not even assuming quantum computer benefits.

But there is still no experiment demonstrating that quantum computers are feasible, and I doubt that there ever will be.

I agree with Gil Kalai:
I find the article entertaining and enjoyable in spite of me being one of the “skeptics” who think that superior computation through quantum computers is not possible. (And I really mean “not possible” :) .) ... Of course, impossibility of computationally superior quantum computing is not in conflict with quantum mechanics.
Even if quantum computers are possible, the main application will be destructive -- breaking our current computer security and requiring complicated and expensive work-arounds to achieve what is easy today.

Aaronson adds:
To be honest, I have no idea whether QKD will ever find a significant market or not. But at least the technology already exists (and “works,” over short enough distances), if a nontrivial market were ever to develop.

It’s true that there are classical cryptosystems that are probably secure even against quantum computers. However, the trouble is that all such systems currently known are either
(a) private-key, and hence cumbersome to use, or else
(b) public-key systems like the lattice-based systems, which currently require key sizes and message sizes large enough to make them impractical for most applications.

Of course, it’s possible that more practical quantum-secure public-key cryptosystems will eventually be discovered. Certainly lots of people have been thinking about that. But if no such systems are discovered, and if (on the other side) the technology of QKD were to improve so that it could handle much higher bit-rates and distances, then there really could be a good use case for QKD.
Quantum key distribution will probably never have any practical utility. Sure it works, but much simpler methods give much better security.

Monday, July 21, 2014

Myth of the lone genius

Joshua Wolf Shenk writes in a NY Times op-ed:
But the lone genius is a myth that has outlived its usefulness. Fortunately, a more truthful model is emerging: the creative network, ...

Today, the Romantic genius can be seen everywhere. Consider some typical dorm room posters — Freud with his cigar, the Rev. Dr. Martin Luther King Jr. at the pulpit, Picasso looking wide-eyed at the camera, Einstein sticking out his tongue. These posters often carry a poignant epigraph — “Imagination is more important than knowledge” — but the real message lies in the solitary pose.

In fact, none of these men were alone in the garrets of their minds. Freud developed psychoanalysis in a heated exchange with the physician Wilhelm Fliess, whom Freud called the “godfather” of “The Interpretation of Dreams”; King co-led the civil rights movement with Ralph Abernathy (“My dearest friend and cellmate,” King said). Picasso had an overt collaboration with Georges Braque — they made Cubism together — and a rivalry with Henri Matisse so influential that we can fairly call it an adversarial collaboration. Even Einstein, for all his solitude, worked out the theory of relativity in conversation with the engineer Michele Besso, whom he praised as “the best sounding board in Europe.”
Freud's dream theory? That stuff is nonsense, and Freud was a crackpot, not a genius.

Besso? Maybe he is the only one that Einstein honestly thanked, but his work depended on many others.

When people talk about Einstein as a lone genius, they are usually talking about his 1905 special relativity paper, or maybe the 1905 photon paper. His later work on general relativity is better documented, and it is well known that he relied very heavily on Grossmann, Levi-Civita, Hilbert, and others.

The 1905 papers were supposedly done in isolation, while he was working at the Swiss patent office. But he had the papers of Lorentz and Poincare, and his relativity paper had no new ideas that were not explained better by them.

The myth that Einstein worked in isolation has promoted the idea that a lone genius can ponder ideas that were known for 50 years, look at them differently, and revolutionize physics. Even philosophers and historians of science perpetuate this Einstein myth. Einstein's paper was merely a presentation of recent research.

Friday, July 18, 2014

Voigt discovered Lorentz transformations

There is a new paper on the history of special relativity, titled The wave equation in the birth of spacetime symmetries:
Woldemar Voigt published in 1887 the article [1]: “On Doppler’s Principle” which has unfortunately received little recognition by physicists and historians of physics [2–6]. Apparently, he was the first — or at least one of the firsts — who demanded form invariance of a physical law to obtain a set of transformation equations. This remarkable idea began the search for physical symmetries in field theories. More precisely, Voigt demanded form invariance of the homogeneous wave equation in inertial frames and obtained a set of spacetime transformations now known as the Voigt transformations ...

In the creation of special relativity, we traditionally find the names of Lorentz, Larmor, Poincaré and Einstein. They appear to be the main actors. Voigt is relegated to being a minor player, in the best of cases. But this tradition is not faithful to the history of physics, since Voigt was the first in applying the two postulates of special relativity. He deserves a place in textbooks. The idea of demanding that the wave equation should not change its form when observed by different inertial frames, was the great conceptual contribution of Voigt, since it opened the gate to the world of physical symmetries. This is the legacy of Voigt’s 1887 paper.
Voigt's transformations were not exactly the same as the Lorentz ones, and he did not have some of the other essential relativity breakthroughs of Lorentz and Poincare.
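For concreteness, Voigt's transformations are the Lorentz boost rescaled by a factor of 1/gamma. Both leave the form of the wave equation invariant (both map the light cone to itself), but only the Lorentz version preserves the Minkowski interval. A minimal sketch, with an illustrative boost speed and units where c = 1:

```python
import numpy as np

c, v = 1.0, 0.6              # units with c = 1; illustrative boost speed
g = 1.0 / np.sqrt(1 - v**2)  # Lorentz factor gamma

def voigt(t, x):
    # Voigt (1887): the Lorentz boost divided by gamma
    return t - v * x, x - v * t

def lorentz(t, x):
    return g * (t - v * x), g * (x - v * t)

# Both map the light cone x = c t to x' = c t',
# so both keep the wave equation form-invariant.
t, x = 2.0, 2.0              # an event on the light cone
for f in (voigt, lorentz):
    tp, xp = f(t, x)
    print(np.isclose(xp, c * tp))   # True for both

# But Voigt does not preserve the interval off the light cone:
tp, xp = voigt(3.0, 1.0)
print(np.isclose(tp**2 - xp**2, 3.0**2 - 1.0**2))  # False
```

This is why Voigt counts as a pioneer of form invariance without having the full Lorentz symmetry.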

Deducing symmetries from the equations of physics became one of the great ideas of twentieth-century physics. Voigt was a pioneer in 1887.

Thursday, July 17, 2014

Connecting the aether to hidden variable theories

I often post on how people get the history of relativity wrong, and how they misinterpret quantum mechanics. Sometimes these are related, as in this essay:
To begin with, it is crucial to make a distinction between two different senses in which a theory might be said to be "relativistic". First, a theory might be empirically relativistic. This means that what it predicts for the outcomes of experiments will exhibit the usual relativistic properties — for example, it should predict the familiar relativistic behavior of clocks and meter sticks in relative motion. More generally, it should agree with classical relativistic mechanics about the behavior of macroscopic objects and it should predict (ignoring for the moment gravitation and general relativity) that an experimenter cannot tell whether an appropriately isolated laboratory has been set into uniform motion.

Despite the central role it is given in certain philosophies of science, however, observation is not everything. We thus need to recognize (at least) a second sense in which a theory might be said to be "relativistic" — namely, that it is compatible with relativity through and through, and not just at the (relatively superficial) level of empirical predictions. Such a theory will be said to be fundamentally relativistic. To make the distinction clear, it is helpful to contrast two different versions of classical electromagnetism. Let us call these the Lorentzian and the Einsteinian theories.

According to the Lorentzian theory, there is a physically meaningful notion of absolute rest (defined by the so-called "ether" rest frame) and a physically meaningful notion of absolute time. These correspond to the existence of a preferred family of coordinate systems over spacetime and the dynamics of the theory is defined, with respect to these coordinate systems, by the usual equations of electromagnetism. ...

By contrast, the Einsteinian version of classical electromagnetism is of course relativistic, not just empirically, but fundamentally. The notion of a really-existing but unobservable "ether" rest frame is dispensed with and all uniform states of motion are regarded as equivalent.

Sometimes it is thought (and taught) that certain experiments from the late 19th or early 20th century refuted the Lorentzian theory in favor of the Einsteinian one. But this is not correct. With regard to their empirical predictions, there is no difference between the Lorentzian and Einsteinian theories. ...

For our purposes, there are three important lessons here. The first is that the empirical violation of Bell-type inequalities does not require theories that fail to be empirically relativistic. But this is hardly sufficient to assuage the worry: if empirical relativity is the only kind of relativity that can be saved, it's not clear that relativity, in any substantial sense, is being saved. The second lesson is thus that, if we want to insist on preserving compatibility with relativity, it is fundamental relativity (not mere empirical relativity) that we must insist on.

But the third lesson is that perhaps abandoning fundamental relativity should be on the table as a serious option. Doing so would not necessitate empirical predictions at odds with the experimental results that are normally taken to support relativity. And it is clear that the use of a dynamically preferred but unobservable "ether" frame would make it very easy for theories to incorporate the non-local interactions that Bell's theorem (and the associated experiments) require.

Lorentz never said that there was a "physically meaningful notion of absolute rest" or a "physically meaningful notion of absolute time." After all, he achieved empirical relativity, so what would the physical meaning be?

This distinction between the Lorentzian and Einsteinian theories is an entirely modern invention, and a nonsensical one. In 1906, it was called the Lorentz-Einstein theory, and no one noticed any difference. There is some point in writing physics in covariant equations, but Poincare and Minkowski added that to special relativity, not Einstein.

There is no difference between a theory with a preferred frame, and one without one. Mathematically, a space is called a manifold if it has no preferred frame, but textbooks often define a manifold in terms of a particular frame. It is obvious that it makes no difference.

Some people, like Einstein, Bell, and the authors of the above essay, have a strange belief that somehow quantum mechanics will make more sense if it more directly refers to unobservable hidden variables. It is like believing in an unobservable aether. Bell actually said that relativity makes more sense that way.

The above essay is wrong when it says that Bell's theorem requires nonlocal interactions. And it is crazier still to think that some trivial choice of frame for the aether will somehow make those interactions more comprehensible. If anything, Bell's theorem says that hidden variables cannot help understand certain quantum paradoxes. They are better explained by orthodox quantum mechanics.

Monday, July 14, 2014

Humans not animals with capacity to think

A new paper claims to prove:
Mill’s godson Bertrand Russell also had no doubt that causality and determinism were needed to do science. “Where determinism fails, science fails”, he said. Russell could not find in himself “any specific occurrence that I could call ’will’”. Charles Sanders Peirce wrote “that the state of things existing at any time, together with certain immutable laws, completely determine the state of things at every other time (for a limitation to future time is indefensible). Thus, given the state of the universe in the original nebula, and given the laws of mechanics, a sufficiently powerful mind could deduce from these data the precise form of every curlicue of every letter I am now writing”. ...

A rigorous analysis, from the foundations of physics, it is possible to prove that humans are not even animals or beings with the capacity to think (or decide), in fact, no being is capable of such an ability (according to physics). If the principles upon which physics is constructed are correct, a being able to decide is an impossibility, as we are a fraction of matter subjected to specific laws, the same laws used to describe a stone or any other object apply to humans as well (since humans, as we call it, does not have a privilege state in the universe). Every configuration of the universe in a given moment is predetermined by the conditions of the universe an instant before, dictating in a precise and absolute way the evolution of the entire system, inasmuch as the physical laws are strict and well defined.
This is a fantasy, of course.

I post this to point out that many people believe that determinism is a logical consequence of a scientific worldview. Those people have a big problem with quantum mechanics.

Saturday, July 12, 2014

Defending physics from post-empiricism

Peter Woit attacks a string theory philosophy book on a new philosophy blog:
The book in question was Richard Dawid’s String Theory and the Scientific Method [4], which comes with blurbs from Gross and string theorist John Schwarz on the cover. Dawid is a physicist turned philosopher, and he makes the claim that string theory shows that conventional ideas about theory confirmation need to be revised to accommodate new scientific practice and the increasing significance of “non-empirical theory confirmation.”
This is on the blog that defended philosophy from attacks by physicists. I posted some comments:
Does anyone here think that modern philosophers have contributed something useful to this topic? String theory has failed to make testable prediction, or to reproduce existing theory. The Standard Model has succeeded in LHC experiments. Thus the Standard Model has won. Modern philosophy of science has contributed nothing.

Yes, there is a competition to find the best theories for high-energy physics. Many leading physicists were openly hoping that the LHC would falsify the Standard Model so that competing theories could gain traction.

I do think that philosophy of science should be able to say something about whether string theory is a worthwhile scientific endeavor.

I read Pigliucci's complaint that Tyson said that "philosophy has basically parted ways from the frontier of the physical sciences" in the early 20th century. Sorry, but Tyson is right. It is nearly impossible to find any philosopher who has anything worthwhile to say about 20th century physics. Dawid's post-empiricism is just the latest example of foolishness, as Woit explains.
If I were to pick one failure of 20th century philosophers on the subject of physics, it would be the textbook resolution of the Bohr–Einstein debates. From the famous R.P. Feynman Lectures on Physics:
Another thing that people have emphasized since quantum mechanics was developed is the idea that we should not speak about those things which we cannot measure. (Actually relativity theory also said this.)
Philosophers love to give opinions about things that cannot be measured. That is okay with me. My problem is with those who say that it is somehow a shortcoming of the physical theory that it only explains observables.

Thursday, July 10, 2014

Four quantum interpretations defended

Physicist Sean M. Carroll explains a video with four major interpretations of quantum mechanics:
The other participants are David Albert, Sheldon Goldstein, and Rüdiger Schack, with the conversation moderated by Brian Greene. The group is not merely a randomly-selected collection of people who know and love quantum mechanics; each participant was carefully chosen to defend a certain favorite version of this most mysterious of physical theories.

David Albert will propound the idea of dynamical collapse theories, such as the Ghirardi-Rimini-Weber (GRW) model. They posit that QM is truly stochastic, with wave functions really “collapsing” at unpredictable times, with a tiny rate that is negligible for individual particles but becomes rapid for macroscopic objects.

Shelly Goldstein will support some version of hidden-variable theories such as Bohmian mechanics. It’s sometimes thought that hidden variables have been ruled out by experimental tests of Bell’s inequalities, but that’s not right; only local hidden variables have been excluded. Non-local hidden variables are still very viable!

Rüdiger Schack will be telling us about a relatively new approach called Quantum Bayesianism, or QBism for short. (Don’t love the approach, but the nickname is awesome.) The idea here is that QM is really a theory about our ignorance of the world, similar to what Tom Banks defended here way back when.

My job, of course, will be to defend the honor of the Everett (many-worlds) formulation. I’ve done a lot less serious research on this issue than the other folks, but I will make up for that disadvantage by supporting the theory that is actually true. And coincidentally, by the time we’ve started debating I should have my first official paper on the foundations of QM appear on the arxiv: new work on deriving the Born Rule in Everett with Chip Sebens.
QBism is essentially the same as the Copenhagen interpretation, as promoted by Bohr and Heisenberg around 1930.

GRW has the virtue of being potentially testable, but there is no evidence for it yet. Albert likes it mainly out of a philosophical rejection of positivism, and a belief that GRW somehow better satisfies his beliefs in scientific realism.
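The scale separation GRW relies on is easy to check: each nucleon localizes spontaneously at a rate of roughly 10^-16 per second, and for a bound macroscopic body the rates add. A back-of-the-envelope sketch (the rate constant is the commonly quoted GRW value; the particle counts are illustrative):

```python
# GRW spontaneous-collapse rates: negligible for one particle,
# fast for a macroscopic object, because the per-nucleon rates add.

GRW_RATE = 1e-16  # localizations per second per nucleon (standard GRW value)

def mean_collapse_time(n_nucleons):
    """Expected time before the first localization event, in seconds."""
    return 1.0 / (GRW_RATE * n_nucleons)

single = mean_collapse_time(1)     # one nucleon: ~1e16 s, ~300 million years
dust   = mean_collapse_time(1e18)  # a dust grain: ~0.01 s
body   = mean_collapse_time(1e27)  # a kilogram-scale object: ~1e-11 s

print(f"single nucleon: {single:.1e} s")
print(f"dust grain:     {dust:.1e} s")
print(f"kg-scale body:  {body:.1e} s")
```

This is why GRW is potentially testable: the collapse rate interpolates between the quantum and classical regimes, and experiments can squeeze the allowed parameter range.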

The argument for Bohmian mechanics is even more philosophically misguided. It is based on a belief that a theory of nonlocal hidden variables with action-at-a-distance is somehow more scientific than a theory based on observables and experiment. The truth is nearly the opposite.

The many-worlds interpretation adds mysterious unobservable parallel universes, without any scientific payoff.

Quantum mechanics has certain paradoxes, but so do other theories. Relativity has the twin paradox, the reality of length contraction, and the fact that two events can appear simultaneous to one observer but not to another.

We do not have reputable physicists running around complaining that relativity does not match their intuitions. Relativity explains the observations, and if it upsets your intuitions, then there is something wrong with your intuitions.
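The twin paradox, for instance, dissolves into plain arithmetic once the time-dilation factor is computed. A minimal sketch (the speed and trip duration are illustrative numbers, not from any source above):

```python
import math

def lorentz_gamma(beta):
    """Time-dilation factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

# Illustrative round trip: 0.8c, lasting 10 years of Earth time.
beta = 0.8
earth_years = 10.0
traveler_years = earth_years / lorentz_gamma(beta)

print(f"gamma = {lorentz_gamma(beta):.4f}")               # 1.6667
print(f"traveling twin ages {traveler_years:.1f} years")  # 6.0
```

The asymmetry is resolved by noting that the traveling twin accelerates at the turnaround; the numbers are unambiguous, whatever one's intuitions say.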

Likewise quantum mechanics matches the observations. When Albert, Carroll, Greene, and Goldstein complain that it is not scientifically realistic or has some other philosophical defect, they are just showing their stubborn refusal to accept what was established by 1930.

Lumo explains:
The many worlds "interpretation" doesn't allow us to calculate – or imprint – the generally different probabilities of different outcomes into the "strands" of the film. Even the fans of this religion admit it's the case but some of them say that they are "working on a fix" which is supposed to be enough. (A similarly "modest" fix makes Genesis compatible with all the detailed data about the cosmic microwave background, the DNA, and genetics.)

What they don't realize is that the inability of this theory or "interpretation" to calculate any probabilities isn't just a moderate vice or disadvantage. It is a complete, rigorous proof that this philosophy has nothing to do with the empirical data or science whatever and people who are defending it don't have the slightest clue what they are talking about. Everything that modern science predicts are probabilities or their functions. If your ideas don't predict any probabilities, they don't predict anything at all. They have nothing to do with science.

Moreover, there can't be any fix. There can't even exist a candidate theory that would "extract" the probabilities from something else.
You might think that an event would be considered more probable if it occurs in most of the parallel universes. However, the MWI advocates deny any scientific meaning to counting the universes. Mitchell Porter explains:
If you are a many-worlds theorist, and you want to explain e.g. why QM says event A is twice as probable as event B, the logical explanation is that event A is twice as common as event B, when all the parallel worlds are considered. But Deutsch, Wallace, and now Carroll and Sebens, all reject this approach.

Carroll and Sebens explicitly recommend against trying to count parallel worlds / branches of the wavefunction. For example, on page 15 they speak of “the unrealistic assumption that the number of branches in which a certain outcome occurs is well-defined”.
They simply don't believe in probabilities of outcomes because they believe all outcomes happen. They are denying modern science as we know it, and so are Albert, Greene, and Goldstein.
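The objection is easy to make concrete: Born-rule probabilities are squared amplitudes, so unequal amplitudes cannot be recovered by treating each branch as equally likely. A minimal sketch (the amplitudes are illustrative):

```python
# Born rule vs. naive branch counting for a two-outcome measurement.
# State: sqrt(1/3)|A> + sqrt(2/3)|B>  (amplitudes chosen for illustration)

amplitudes = {"A": (1 / 3) ** 0.5, "B": (2 / 3) ** 0.5}

# Born rule: probability = |amplitude|^2
born = {k: a * a for k, a in amplitudes.items()}

# Naive branch counting: one branch per outcome, each weighted equally
count = {k: 1 / len(amplitudes) for k in amplitudes}

print(born)   # B is twice as probable as A, matching experiment
print(count)  # says 50/50, contradicting the Born rule
```

Either the branches are counted, which gives the wrong answer, or some extra postulate is needed to weight them, which is what the derivation programs of Deutsch, Wallace, and Carroll–Sebens try to supply.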

Lumo writes again:
The people who are trying to revive the objective reality – whether they are assuming Bohmian "real particles plus guiding waves" or various forms of "objective GRW-like collapses" or "many worlds" or any other classical visualization what's going on – are not closer than proper quantum mechanical physicists to genuine science. Instead, they are stubbornly defending the indefensible, a notion (of the objective reality) similar to a tooth fairy that cannot be extracted from our real perception of observations and that may actually be shown to be wrong by a careful enough (but not excessively complicated) analysis of several representative processes in the microscopic world.
I would not phrase it that way, but I do think that belief in those interpretations is rooted in some faulty idea about what science is all about. Instead of accepting that science is about observations, they hold a belief that science is about hidden variables.

Monday, July 7, 2014

Magazines promote nonlocal pilot waves

Smithsonian Mag has an article against quantum mechanics:
What If There's a Way to Explain Quantum Physics Without the Probabilistic Weirdness?
An old idea is back in vogue as physicists find support for "pilot wave theory," a competitor to quantum mechanics ...

According to modern notions of quantum physics, at the very smallest scales — in the realm of electrons and photons and quarks — the world is not obvious, direct and deterministic. Rather, the world is one of probabilities. Electrons seem to exist in a cloud of possibilities, inhabiting an area but no particular space. It isn't until you look that this aura of probability collapses and the electron inhabits a particular place.

For some people, such a probabilistic interpretation of the world is simply unnerving. For others, though, the probabilistic interpretation seems unnecessary from a scientific perspective. There might be another way to explain the weird behavior seen in the double-slit experiment that doesn't devolve into quantum mechanics' usual probabilistic weirdness, says Quanta Magazine.

Known as “pilot wave theory” this line of thinking goes that, rather than electrons and other things being both quasi-particles and quasi-waves, the electron is a discrete particle that is being carried along by a separate wave. What this wave is made of no one knows. But recent experimental research shows that, in the lab, particles being carried around by waves will exhibit many of the same weird behaviors that were thought to be exclusive to the domain of quantum mechanics (as seen in the video above).

Not being able to explain what the wave is is a problem, but so is the inherent randomness of modern quantum physics.
This old idea is a crazy idea. It is based on a strange ideological prejudice against probabilities.
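The "probabilistic weirdness" the article points to in the double-slit experiment is just the rule that amplitudes add before being squared. A minimal sketch (equal slit amplitudes and a variable phase are illustrative assumptions):

```python
import cmath

def two_slit_probability(phase):
    """Detection probability with both slits open: |psi1 + psi2|^2."""
    psi1 = 1.0 / 2 ** 0.5                    # amplitude through slit 1
    psi2 = cmath.exp(1j * phase) / 2 ** 0.5  # amplitude through slit 2
    return abs(psi1 + psi2) ** 2

# Classical mixing of particles would give |psi1|^2 + |psi2|^2 = 1
# everywhere; adding amplitudes first gives interference fringes.
print(two_slit_probability(0.0))        # ~2: constructive interference
print(two_slit_probability(cmath.pi))   # ~0: destructive interference
```

Any competing theory, pilot waves included, must reproduce exactly these statistics to agree with experiment.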

The concept of probability pervades modern science. It is often used in quantum mechanics, but no more so than in any other branch of science. There is no reason to get rid of it. Nor is it possible, as any experimental attempt to confirm Bohmian mechanics or pilot wave theory will use the same statistics as quantum mechanics.

Quantum mechanics does have some mysterious aspects to it, but probability is the least of them.

The pilot wave theory approach is to try to nail down the electron as a particle, which it is not. So it becomes a particle attached to a pilot wave, which is a very strange entity that is more mysterious than anything in orthodox quantum mechanics.

So how is that better? There are philosophers who have a mystical belief in nonlocality. They would like to believe that their consciousness is at one with the universe. The Bohm pilot wave theory is nonlocal. So they like it.

The Quanta Mag article also says:
Later, the Northern Irish physicist John Stewart Bell went on to prove a seminal theorem that many physicists today misinterpret as rendering hidden variables impossible. But Bell supported pilot-wave theory. He was the one who pointed out the flaws in von Neumann’s original proof. And in 1986 he wrote that pilot-wave theory “seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored.”

The neglect continues. A century down the line, the standard, probabilistic formulation of quantum mechanics has been combined with Einstein’s theory of special relativity and developed into the Standard Model, an elaborate and precise description of most of the particles and forces in the universe. Acclimating to the weirdness of quantum mechanics has become a physicists’ rite of passage. The old, deterministic alternative is not mentioned in most textbooks; most people in the field haven’t heard of it. Sheldon Goldstein, a professor of mathematics, physics and philosophy at Rutgers University and a supporter of pilot-wave theory, blames the “preposterous” neglect of the theory on “decades of indoctrination.” At this stage, Goldstein and several others noted, researchers risk their careers by questioning quantum orthodoxy.
Neglect? I would not say "combined with Einstein's theory" because Einstein stubbornly refused to accept quantum mechanics. The textbooks do explain why he lost the Bohr–Einstein debates. In short, Bohr, Heisenberg, Schroedinger, von Neumann, and quantum mechanics were correct, and Bohm, Einstein, Bell, and hidden variables were wrong.

Orthodox quantum mechanics has led to about a trillion dollar computer chip economy. Bohm, pilot waves, Bell, and hidden variables have led to nothing.

Update: Lumo piles on:
I've been overwhelmed by the sheer amount of idiocy about quantum mechanics that we may encounter in the would-be scientific mainstream media. A new wave of nonsense claiming that someone overthrew quantum mechanics is appearing on a daily basis. ...

I really can't understand what drives people to saying and even writing these breathtakingly childishly stupid things. If you have a slightly retarded 6-year-old baby, it may ask you why cars move. And you explain to her that there is an elephant inside the car. ...

Clearly, quantum mechanics was too hard and too new for some physicists from the beginning – including a revolutionary named Einstein – so these people clearly didn't belong among the "we" of the people who properly understood quantum mechanics. But even the people who effectively understood quantum mechanics would find some differences in their "interpretation" of quantum mechanics – and the most reasonable ones would point out that the very phrase "interpretation of quantum mechanics" is silly. Quantum mechanics is the new theory so once we describe its rules and axioms, we know them and there's nothing to interpret.

Sunday, July 6, 2014

More doubts about big bang gravity wave

Quanta Mag reports doubts about BICEP2:
Most of them were careful to point out that the revolutionary claim from the scientists involved in the experiment, which used the BICEP2 telescope (for Background Imaging of Cosmic Extragalactic Polarization, second generation), must be confirmed by additional experiments that could rule out alternative explanations. But because BICEP2’s very technical paper asserted that the possibility of the gravitational-wave signal being false was very low, the discovery seemed like a done deal to most people.

But not to David Spergel. Spergel earned his stripes in astrophysics as a team member on the Wilkinson Microwave Anisotropy Probe (WMAP), a radio telescope that launched into orbit in 2001. ...

QUANTA MAGAZINE: You are a leading critic of the BICEP2 team’s claim to have discovered the existence of primordial gravitational waves. Why?

DAVID SPERGEL: The BICEP2 astrophysicists should have been more cautious. The evidence they have reported fails to convince me that the signal they interpret as being caused by gravitational waves is not, in fact, caused by galactic dust.
So maybe they saw a shock wave from the biggest event in the history of the universe. Or maybe they just saw dust. No one can be sure.
Why didn’t they use the raw Planck data?

Planck’s raw data will not be made public until the end of the year.

Did you ask the BICEP2 scientists for access to their data?

Yes, but they declined to release it, saying that their data is not calibrated.

Is it common practice to hold onto such data?

It’s fine to keep raw data secret until it is properly calibrated by scientists who understand the limitations of the instrument that collected the data. But if you are willing to publish a paper based upon your data, then you should make that data public.
That is a little fishy. When scientists hold press conferences, and start divvying up the Nobel prizes, they ought to be willing to share their data with peers for analysis.

Wednesday, July 2, 2014

Carroll goes nuts with many worlds

After defending bad science philosophy, physicist Sean M. Carroll goes off the deep end with his own bad quantum philosophy, and posts Why the Many-Worlds Formulation of Quantum Mechanics Is Probably Correct:
There are other silly objections to EQM, of course. The most popular is probably the complaint that it’s not falsifiable. That truly makes no sense. It’s trivial to falsify EQM — just do an experiment that violates the Schrödinger equation or the principle of superposition, which are the only things the theory assumes. Witness a dynamical collapse, or find a hidden variable. Of course we don’t see the other worlds directly, but — in case we haven’t yet driven home the point loudly enough — those other worlds are not added on to the theory. They come out automatically if you believe in quantum mechanics.
This is nonsense. I don't want to keep picking on Carroll, but it seems that more and more physicists are reciting such nonsense in favor of MWI. (Carroll also calls it Everettian quantum mechanics, EQM.)

Lumo explains how Carroll is wrong:
At the same time, salesmen like Carroll offer you lots of incredible statements such as the statement that this "many worlds interpretation" directly follows from quantum mechanics, is directly justified by quantum mechanics (more justifiable in quantum mechanics than in classical physics), and unlike proper quantum mechanics, it doesn't introduce any new physical laws. All these statements are untrue. They are really the polar opposite of the truth.

First, many worlds surely don't follow from quantum mechanics.

Quantum mechanics is the universal framework of modern physics which is a natural science. As every natural science, physics predicts or explains the observations that are actually being made in one Universe.
The 2013 paper, On Quantum Theory, by Berthold-Georg Englert also explains:
Quantum theory had essentially taken its final shape by the end of the 1920s and, in the more than eighty years since then, has been spectacularly successful and reliable — there is no experimental fact, not a single one, that contradicts a quantum-theoretical prediction. Yet, there is a steady stream of publications that are motivated by alleged fundamental problems: We are told that quantum theory is ill-defined, that its interpretation is unclear, that it is nonlocal, that there is an unresolved “measurement problem,” and so forth.

It may, therefore, be worth reviewing what quantum theory is and what it is about.
That is correct. Quantum mechanics, as it has been described in textbooks for decades, is a perfectly good theory. It is strikingly successful, and yet physicists from Einstein in the 1930s to many well-known figures today act as if the theory is broken. They even say that realizing that quantum mechanics requires many worlds is like the Copernican revolution.

This modern rejection of quantum mechanics is just as crazy as if modern physicists went around claiming that particles can go faster than light because they don't believe relativity. I wonder how they ever passed their PhD qualifying exams without understanding quantum mechanics.

Carroll is proof that a physicist can lose the capacity for scientific thinking if his brain is infected with lousy philosophy.

The principal reason for rejecting MWI is that it postulates an infinity of unobservable worlds without any physical benefit. It adds no practical or conceptual advantages over textbook quantum mechanics, and it is unscientific in its supernatural beliefs. It even has disadvantages, because it makes probabilities nearly impossible to interpret. Its advocates claim that it cures some philosophical defect of quantum mechanics, but there is no such defect.

The quantum computing folks, like David Deutsch, love MWI because the extra universes are supposedly where the super-Turing computation takes place. No such computation has ever been observed, and the whole field is a big funding scam.

Tuesday, July 1, 2014

Poincare also led celestial and quantum mechanics

I have posted about how Henri Poincaré discovered the essence of special relativity, and that we got the Lorentz group, spacetime, non-Euclidean geometry, electromagnetic covariance, etc. from him, and not Einstein.

Poincare was primarily known as a mathematician. So did he do any other physics, besides relativity? Yes, he was also a leader in celestial mechanics and quantum mechanics. See The algebraic cast of Poincaré's Méthodes nouvelles de la mécanique céleste and Poincaré's proof of the quantum discontinuity of nature. (And also articles here and here on quantum mechanics, and Lorentz 1914.)

Poincare introduced his 1911 quantum paper by saying:
Here is the profoundest revolution that natural philosophy has undergone since Newton.
This was from the guy who revolutionized space, time, electromagnetism, and gravity several years earlier. That paper, more than any other single paper, convinced Europe of Planck's quantum hypothesis. Poincare was the first to compare special relativity to Copernicus, and the first to say that quantum mechanics was revolutionary. Now these are considered the two great physics revolutions of the 20th century.

Mark Yasuda wrote:
Although it's rather well known that Henri Poincare anticipated a number of results in special relativity prior to Einstein's 1905 publication, it seems that fewer people are aware that Poincare also played a role in another revolution in physics at the beginning of the 20th century -- namely, quantum mechanics (I guess there are some people who might also argue that he participated in another revolution for his pioneering work in dynamical systems). Recently, I had the good fortune to come across Russell McCormmach's excellent article "Henri Poincare and the Quantum Theory" (Isis; Volume 58 (191); 1967; pages 37-55). Besides discussing Poincare's work, it offers some fascinating glimpses into the first Solvay Conference that took place in October and November of 1911. Below are a few (six) excerpts that I've selected from McCormmach's article, which I highly recommend for people interested in the historical development of physics during this time period:

1. At the time, Maurice de Broglie remarked to F. A. Lindemann that of all those present Poincare and Einstein were in a class by themselves (p 40).

2. Lorentz recalled that in the discussions Poincare had shown "all the vivacity and penetration of his spirit, and that one had admired the facility with which he entered vigorously into even those questions of physics which were new to him" (p. 40).

3. ... it was Planck, however, who stimulated Poincare's most penetrating, questioning spirit ....

4. In a descriptive essay he spelled out the essence of Planck's theory as it appeared to him: "A physical system is capable of only a finite number of distinct states; it jumps from one of those states to another without going through a continuous series of intermediate states." The image of a physical system jumping from one discrete state to another put him in a speculative frame of mind. He considered the possibility that a particle might trace only certain allowed paths in phase space, shifting discontinuously between them. And he supposed that the universe as well as an electron ought to experience quantum jumps. Since there would be no distinguishable instants within the motionless states between universal jumps, there should exist an "atom of time." Such were the kinds of ideas going through Poincare's mind shortly before he died; there was nothing timid or grudging about his late acquaintance with the quantum theory (p 50).

5. ... above all it was the unquestioned authority of Poincare in mathematical matters which secured him an attentive audience. Jeans undoubtedly voiced a majority sentiment when he said that "we shall probably feel inclined to trust to the accuracy of Poincare's mathematics." (pp 51-52).

6. Whereas Jeans had strongly opposed the quantum theory in Brussels ..., he came out vigorously in support of quanta at the Birmingham meeting of the British Association in September 1913, fourteen months after Poincare's death. There is no doubt about what caused him to change his mind. Jeans had read Poincare's paper and been converted by it. ... The French scientist's arguments had been so completely persuasive that from this time on every theory would have to "logically involve either the belief that Poincare is wrong, or the belief that he is right, together with all that this involves. ... And Jeans himself felt compelled to accept the quantum hypothesis in its entirety." (p. 53).
The 1927 Solvay Conference was famous for bringing together the founders of quantum mechanics to discuss what it meant. But that first 1911 Solvay Conference was for getting quantum mechanics started. It must have seemed strange at the time for a rich businessman to invite a few scholars in an obscure field to a conference.

Saturday, June 28, 2014

Free will claimed to be random

I have argued that the free will experiments do not tell us whether we have free will, and that people use the word "random" as just a euphemism for data that is not understood. Here is an example:
The concept of free will could be little more than the result of background noise in the brain, according to a recent study.

It has previously been suggested that our perceived ability to make autonomous choices is an illusion – and now scientists from the Center for Mind and Brain at the University of California, Davis, have found that free will may actually be the result of electrical activity in the brain.

According to the research, published in the Journal of Cognitive Neuroscience, decisions could be predicted based on the pattern of brain activity immediately before a choice was made.

Volunteers in the study were asked to sit in front of a screen and focus on its central point while their brains’ electrical activity was recorded. They were then asked to make a decision to look either left or right when a cue symbol appeared on the screen, and then to report their decision.

The cue to look left or right appeared at random intervals, so the volunteers could not consciously or unconsciously prepare for it.

The brain has a normal level of so-called background noise; the researchers found that the pattern of activity in the brain in the seconds before the cue symbol appeared - before the volunteers knew they were going to make a choice - could predict the likely outcome of the decision. ...

“This random firing, or noise, may even be the carrier upon which our consciousness rides, in the same way that radio static is used to carry a radio station.”

This latest experiment is an extension of psychologist Benjamin Libet’s 1970s research into the brain’s electrical activity immediately before a decision.
The article makes it sound as if something profound has been discovered. Not true. It is saying that in certain experiments, free will is attributable to random processes in the brain, meaning that human decision making is based on processes that are not understood. This is a null result, as it leaves us with what has been thought for millennia.

There is a similar confusion in quantum mechanics, with physicists always saying that something is random. The use of the word "random" doesn't mean anything except that the outcome is not known to be predictable with the available data.
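Any pseudorandom generator makes the point: a completely deterministic rule produces output that looks random to anyone who lacks the seed. A minimal sketch using a textbook linear congruential generator (the constants are the widely used Numerical Recipes values):

```python
# A deterministic linear congruential generator. Its output passes
# casual statistical checks, yet anyone holding the seed can predict
# every value -- "random" just means unpredictable with the data you have.

def lcg(seed, n, a=1664525, c=1013904223, m=2 ** 32):
    """Return n pseudorandom integers from the classic NR parameters."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

stream = lcg(seed=42, n=10000)

# Looks statistically unbiased...
frac_even = sum(1 for x in stream if x % 2 == 0) / len(stream)
print(f"fraction even: {frac_even:.3f}")  # close to 0.5

# ...but is perfectly reproducible given the seed.
print(lcg(seed=42, n=3) == stream[:3])  # True
```

Calling the stream "random" is a statement about the observer's information, not about the process itself, which is exactly the ambiguity in both the brain study and quantum talk.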

It seems possible that free will is connected with quantum randomness. That is, when reductionist brain research attempts to isolate human brain decisions, some sort of quantum uncertainty could block a completely deterministic model of the brain.