Thursday, August 28, 2014

Quantum computer joke

I like intellectual geek jokes, like this one:
There are only 10 types of people in the world: those who understand binary, and those who don't.
Here is a high-brow joke:
“Werner Heisenberg, Kurt Gödel, and Noam Chomsky walk into a bar. Heisenberg turns to the other two and says, ‘Clearly this is a joke, but how can we figure out if it's funny or not?’ Gödel replies, ‘We can't know that because we're inside the joke.’ Chomsky says, ‘Of course it's funny. You're just telling it wrong.’ ”
Just the idea of Heisenberg, Gödel, and Noam Chomsky having to talk to each other is funny already. I am not sure if this is a reference to Chomsky being an expert on language and human nature, or his goofy politics, or his combative demeanor, or what.

Scott Aaronson just told a good one:
Let me start with a story that John Preskill told me years ago.  In the far future, humans have solved not only the problem of building scalable quantum computers, but also the problem of human-level AI.  They’ve built a Turing-Test-passing quantum computer.  The first thing they do, to make sure this is actually a quantum computer, is ask it to use Shor’s algorithm to factor a 10,000-digit number.  So the quantum computer factors the number.  Then they ask it, “while you were factoring that number, what did it feel like?  did you feel yourself branching into lots of parallel copies, which then recohered?  or did you remain a single consciousness — a ‘unitary’ consciousness, as it were?  can you tell us from introspection which interpretation of quantum mechanics is the true one?”  The quantum computer ponders this for a while and then finally says, “you know, I might’ve known before, but now I just … can’t remember.”
I don't get it, but I am probably not smart enough. This story has all the appearances of being both profound and funny. Maybe someone will explain it to me.

Wednesday, August 27, 2014

SUSY particles predicted for LHC

Gordon Kane writes in SciAm:
In “Supersymmetry and the Crisis in Physics,” Joseph Lykken and Maria Spiropulu discuss hopes that evidence of supersymmetry, which proposes that all known particles have hidden superpartners, will be found at CERN's Large Hadron Collider within a year's time — and the effects on physics as a whole if it is not. ...

Predictions based on such theories should be taken seriously. I would like to bet that some superpartners will be found at the LHC, but I have trouble finding people who will bet against that prediction.
That article was in the May issue, so Kane's prediction will be proved wrong on May 1, 2015. Make your bets now, before Kane changes his mind.

Monday, August 25, 2014

No need for quantum interpretation

I just found this sensible March 2000 Physics Today article, Quantum Theory Needs No ‘Interpretation’, by Christopher A. Fuchs and Asher Peres:
Recently there has been a spate of articles, reviews, and letters in PHYSICS TODAY promoting various “interpretations” of quantum theory (see March 1998, page 42; April 1998, page 38; February 1999, page 11; July 1999, page 51; and August 1999, page 26). Their running theme is that from the time of quantum theory’s emergence until the discovery of a particular interpretation, the theory was in a crisis because its foundations were unsatisfactory or even inconsistent. We are seriously concerned that the airing of these opinions may lead some readers to a distorted view of the validity of standard quantum mechanics. If quantum theory had been in a crisis, experimenters would have informed us long ago!

Our purpose here is to explain the internal consistency of an “interpretation without interpretation” for quantum mechanics. Nothing more is needed for using the theory and understanding its nature. ...

The thread common to all the non-standard “interpretations” is the desire to create a new theory with features that correspond to some reality independent of our potential experiments. But, trying to fulfill a classical worldview by encumbering quantum mechanics with hidden variables, multiple worlds, consistency rules, or spontaneous collapse, without any improvement in its predictive power, only gives the illusion of a better understanding. Contrary to those desires, quantum theory does not describe physical reality. What it does is provide an algorithm for computing probabilities for the macroscopic events (“detector clicks”) that are the consequences of our experimental interventions. This strict definition of the scope of quantum theory is the only interpretation ever needed, whether by experimenters or theorists.
Fuchs now promotes Quantum Bayesianism, which is essentially the same as the original Copenhagen interpretation.

I go even farther, and say that probability is just an interpretation, and is not necessary for quantum mechanics. Probability is a mathematical convenience for evaluating experiments in quantum mechanics or any other branch of science, but it is not an observable physical thing.

Quantum mechanics without interpretation has been called the Instrumentalist interpretation:
Any modern scientific theory requires at the very least an instrumentalist description that relates the mathematical formalism to experimental practice and prediction. In the case of quantum mechanics, the most common instrumentalist description is an assertion of statistical regularity between state preparation processes and measurement processes. ...

By abuse of language, a bare instrumentalist description could be referred to as an interpretation, although this usage is somewhat misleading since instrumentalism explicitly avoids any explanatory role; that is, it does not attempt to answer the question why.
This article applies it to the Stern-Gerlach experiment. Sometimes the Ensemble interpretation is said to be the minimalist one, but that does not predict individual outcomes.
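The Fuchs-Peres "algorithm for computing probabilities" is easy to make concrete for the Stern-Gerlach experiment. Here is a minimal sketch, my own illustration in Python rather than anything from the articles above: prepare a spin up along z, measure along an axis tilted by an angle theta, and the Born rule gives the click statistics without any story about what happens in between.

```python
import math
import random

def p_up(theta):
    # Born rule: a spin prepared "up" along z, measured along an axis
    # tilted by theta, clicks "up" with probability cos(theta/2)**2.
    return math.cos(theta / 2) ** 2

def simulate(theta, shots, rng=random.Random(1)):
    # The "detector clicks": each shot is an up or down click.
    clicks = sum(1 for _ in range(shots) if rng.random() < p_up(theta))
    return clicks / shots

theta = math.pi / 3
print(p_up(theta))               # 0.75, up to rounding
print(simulate(theta, 100_000))  # close to 0.75
```

The instrumentalist description asserts only this statistical regularity between the preparation and the clicks, and answers no "why" question.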

Sunday, August 24, 2014

Three arguments for string theory

This is summarized from a David Gross lecture:
String theory is a framework, not a specific theory making specific down-to-earth predictions about realistically doable or ongoing experiments that could decide about its fate, ...

The three arguments that either instinctively or knowingly contribute to the competent physicists' faith and growing confidence in string theory are:

UEA: unexpected explanatory coherence argument. If the theory weren't worth studying, it would probably almost never lead to unexpected answers, explanations, and ways to solve problems previously thought to be independent
NAA: no alternative argument. There's no other game in town. The argument has existed in the case of the Standard Model – in recent decades, NAA was getting increasingly important.
MIA: meta inductive argument. String theory is a part of the same research program that includes theories whose success has already been established.
Are you kidding me? Theologians make better arguments.

Here is another opinion, that string theory resulted from physics being trapped in a wrong philosophy:
Horgan: Do you ever think it’s time for physicists to abandon the quest for a unified theory?

Rovelli: The “quest for a unified theory” is a misconception. Physicists never really searched for it. They stumbled upon string theory, which to some appeared as a possible unification of everything, and, for lack of imagination, put too much energy into strings. When the enthusiasm for strings begun to fade, many felt lost. Now that supersymmetry is not showing up where string theorists expected it, it is a disarray. ...

Here is an example: theoretical physics has not done great in the last decades. Why? Well, one of the reasons, I think, is that it got trapped in a wrong philosophy: the idea that you can make progress by guessing new theory and disregarding the qualitative content of previous theories. This is the physics of the “why not?” Why not studying this theory, or the other? Why not another dimension, another field, another universe? Science has never advanced in this manner in the past. Science does not advance by guessing. It advances by new data or by a deep investigation of the content and the apparent contradictions of previous empirically successful theories. Quite remarkably, the best piece of physics done by the three people you mention is Hawking’s black-hole radiation, which is exactly this. But most of current theoretical physics is not of this sort. Why? Largely because of the philosophical superficiality of the current bunch of scientists.
It is not just the string theorists who suffer from a unified theory delusion. Einstein suffered from that since about 1925, and so have nearly all of the leading theoretical physicists of the last 40 years.

Leftist-atheist-evolutionist Jerry Coyne is upset that the Rovelli interview is not sufficiently atheistic, and writes:
Well, if you see compatibility as the ability of human minds to do science and believe in fairy tales, then Rovelli’s right. ...

Most of the rest of the first paragraph is good; it’s useful to realize that religious people dislike science because it forces us to live with doubt, and many believers aren’t comfortable with that. (Richard Feynman particularly emphasized that difference, as in the video below)
The Feynman video does not say anything about religion.

Coyne's blog attacks religious people all the time. But Coyne always writes with great certainty while the religious people express doubt. So Coyne seems to have missed Feynman's point, and is attacking a straw man.

Update: Now Coyne says that he does not want to diss philosophy, and disagrees with those who do, but he cannot find any example of where philosophy helped science.

Saturday, August 23, 2014

Bad argument that math is science

Astrophysicist and exoplanet searcher Coel Hellier writes A scientism defence of Logical Positivism:
Like everyone else I read Ayer’s Language, Truth and Logic as a teenager and, like many people of a scientific bent, I loved it. The Logical Positivism that it espoused can be summarised as the claim that knowledge is of two types: (1) logical reasoning from axioms, such as used by mathematics; and (2) claims about the universe that can (in principle) be verified empirically. Anything else — such as metaphysics — is literally meaningless.
That is correct up to the word "meaningless". There might be some meaning to metaphysics, but it is not the sort of knowledge that comes with a demonstration that it is true or false.

He also writes Defending scientism: mathematics is a part of science on his blog, and tries to defend these views on the Scientia philosophy blog:
I will take one statement as standing proxy for the whole of mathematics (and indeed logic). That statement is: 1 + 1 = 2

Do you accept that statement as true? If so (and here I presume that you answered yes), then why?

I argue that we accept that statement as true because it works in the real world. All of our experience of the universe tells us that if you have one apple in a bag and add a second apple then you have two apples in the bag. Not three, not six and a half, not zero, but two. 1 + 1 = 2 is thus a very basic empirical fact about the world [3]. ...

I have argued that all human knowledge is empirical and that there are no “other ways of knowing.” Further, our knowledge is a unified and seamless sphere, reflecting (as best we can discern) the unified and seamless nature of reality. ... I thus see no good reason for the claim that mathematics is a fundamentally different domain to science, with a clear epistemological demarcation between them.
I also defend logical positivism, but not this. He has abandoned the "logical" part of logical positivism. Logic and math are forms of knowledge that need no empirical verification, and usually do not get any.

Here are comments I left:
Coel, the flaw in your argument is in the triviality of your math examples. "1+1=2" is not much of a theorem, and is more accurately the definition of "2". Try applying your argument to a real theorem, such as the infinity of primes, as someone suggested.

There certainly is a sharp qualitative difference between the work of Riemann and Einstein. The mathematical theory of general relativity was worked out by Minkowski, Grossmann, Levi-Civita, and Hilbert -- all mathematicians. Einstein did not prove any theorems and rarely even made any mathematically precise statements.

Your comments about Godel's theorem are about like saying that the irrationality of the square root of 2 shattered hopes for geometry, or that comets shattered hope for astronomy. And it surely does not help your argument, unless you can explain how the theorem can be empirically understood or validated.

Your lesson from this is that "scientific results are always provisional". Maybe so, but mathematical results are not. Godel's theorem is not provisional.

You can, of course, define "science" any way you please, but you have failed to give a definition that includes mathematics. To you, science is empirical and provisional, but you do not give a single mathematical result with these properties. Do you really want to argue that "1+1=2" is a provisional result subject to empirical acceptance or rejection? Will you please tell us how this equation might be rejected?

You deny a "clear epistemological demarcation", but you do not give an example on the boundary of math and science. Your closest example is string theory, but you must realize that most of that subject is viewed by outsiders as neither science nor mathematics.

Coel, you say that it is "epistemologically identical", except that one uses empiricism and plausibility arguments and the other uses axioms and logic. In other words, not similar at all.

Part of the problem here is that what mathematicians mean by math is quite a bit different from what most scientists mean. I have heard science and engineering professors tell their students not to take math classes for math majors, because they have proofs in them. The professors act as if a proof is some sort of mysticism or voodoo with no applicability. Most of them do not understand what a proof is.

Coel argues that knowledge is science, and science is provisional, but that is just not true about mathematical knowledge. Mathematical truths are not provisional or subject to any empirical tests. He suggests that "1+1=2" can be tested by looking to see if alternate definitions can be used to predict eclipses. But there are lots of alternative number systems that are completely mathematically valid, even if they are not used to predict eclipses.

I could test "1+1=2" by laying 2 1-foot pieces of string together, and measuring the length. If I do, I am likely to get 2.01 or something else not exactly 2. The mathematician says that 1+1 is exactly 2. So what have you tested? You certainly have not validated the mathematical truth that 1+1 is exactly 2. You have an empirical result about the usefulness of the equation, and that's all.

So when Coel says that math is epistemologically identical to science, he is not talking about how mathematicians do math.

SciSal is correct that most math has nothing to do with modeling the real world.

String theory is an odd beast. There are some mathematicians who prove theorems about string models, and physicists who look for empirical tests. But the vast majority of string theorists are not concerned with either of these pursuits. They are more like people playing Dungeons & Dragons in their own imaginary universe.

Coel, you say that math axioms are codified regularities of nature. The most common axiom system for math is ZFC (Zermelo-Fraenkel set theory with Choice). Can you explain how those axioms relate to nature?

Coel, you repeatedly deny any distinction between a definition, a theorem, and an equation that empirically seems approximately valid. So I would lump you in with those other science and engineering professors who do not recognize the value of a proof.
I believe that my comments and other comments refute his position.
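To put some flesh on the infinity-of-primes example from those comments: Euclid's argument is constructive, and a short program, my own illustration in Python, makes the epistemological point. The code turns any finite list of primes into a prime not on the list. No finite pile of observations could establish that this works for every possible input; the two-line argument in the comment does.

```python
# Euclid's construction: from any finite list of primes, produce a prime
# that is not on the list.

def smallest_prime_factor(n):
    """Smallest prime factor of n >= 2, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n has no smaller factor, so n itself is prime

def new_prime(primes):
    """Given a finite list of primes, return a prime not in the list."""
    product = 1
    for p in primes:
        product *= p
    # product + 1 leaves remainder 1 on division by every listed prime,
    # so its smallest prime factor cannot be any of them.
    return smallest_prime_factor(product + 1)

print(new_prime([2, 3, 5]))             # 31 (2*3*5 + 1 is itself prime)
print(new_prime([2, 3, 5, 7, 11, 13]))  # 59 (a factor of 30031 = 59 * 509)
```

That is the sharp demarcation Hellier denies: the correctness of new_prime for all inputs is settled by proof, not by running it.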

Friday, August 22, 2014

Essays on steering the future

FQXi has announced the winners of its 2014 essay contest. Here is an excerpt from the first-place winner:
We do not get anywhere with bemoaning that most people do not understand climate models or do not read information brochures about genetically modified crops. It is time to wake up. We’ve tried long enough to educate them. It doesn’t work. The idea of the educated and well-informed citizen is an utopia. It doesn’t work because education doesn’t please people. They don’t like to think. It is too costly and it’s not the information they want. What they want is to know how much an estimated risk conflict with their priorities, how much an estimated benefit agrees with their values. They tolerate risk and uncertainty, but they don’t tolerate science lectures. If a webpage takes more than 3 seconds to load it’ll lose 40% of visitors. Split-second looks at photos. That’s the realistic attention span. That’s what we have to work with.
I liked my essay better, of course.

Thursday, August 21, 2014

Maudlin and the non-locality fallacy

Some people foolishly exaggerate the importance of Bell's theorem. Here is a recent example.

Philosopher Tim Maudlin writes What Bell Did:
On the 50th anniversary of Bell's monumental 1964 paper, there is still widespread misunderstanding about exactly what Bell proved. This misunderstanding derives in turn from a failure to appreciate the earlier arguments of Einstein, Podolsky and Rosen. I retrace the history and logical structure of these arguments in order to clarify the proper conclusion, namely that any world that displays violations of Bell's inequality for experiments done far from one another must be non-local. Since the world we happen to live in displays such violations, actual physics is non-local.

The experimental verification of violations of Bell’s inequality for randomly set measurements at space-like separation is the most astonishing result in the history of physics. Theoretical physics has yet to come to terms with what these results mean for our fundamental account of the world. ...

Unfortunately, many physicists have not properly appreciated what Bell proved: they take the target of his theorem — what the theorem rules out as impossible — to be much narrower and more parochial than it is. Early on, Bell’s result was often reported as ruling out determinism, or hidden variables. Nowadays, it is sometimes reported as ruling out, or at least calling in question, realism. But these are all mistakes. What Bell’s theorem, together with the experimental results, proves to be impossible (subject to a few caveats we will attend to) is not determinism or hidden variables or realism but locality, in a perfectly clear sense. What Bell proved, and what theoretical physics has not yet properly absorbed, is that the physical world itself is non-local.
He also just posted a Reply to Werner. He does not reference the Werner criticisms, and the closest I could find was this Reinhard F. Werner post:
I am one of those who see in “local realism” a conjunction of two concepts: locality and realism. Bell’s argument shows that this conjunction is not in agreement with the observed facts. The separation between the concepts is not difficult, something that I expect students to understand. Quantum mechanics as I understand it takes the local option, in the sense of not containing spooky signals. Of course, if you insist on a classical “realist” description they are all over the place. It is clear that if you are altogether unwilling to even debate realism (or “classicality”) you can soak your language in it to such a degree that it would seem like an undeniable demand of basic logic. But that is just sloppy thinking, which is not improved by any degree of shouting or religious devotion.

“Realism” has a double meaning in this context. On one hand, it is a basic principle of science, the demand to check any claims against reality, to go for empirical content rather than storytelling. On the other hand, it stands for a particular way of constructing a theory, namely assuming that every individual system has an in principle complete description in terms of its properties (“classicality”). The irony of quantum mechanics is that it brings these two into conflict. Those insisting on the second kind of realism, like the Bohmian school, thereby lose sight of the first: Bohmian trajectories have no connection to empirical fact, and even the Bohmian theory itself claims no connection. So they are just a piece of fantasy. You may call the trajectories the reality givers (I even heard “realizors”) of the theory, and base an “ontology” on them. But they are still but a figment of your imagination.
(Maudlin was replying to a different Werner essay, but that is not online yet, according to the comment below.)

Werner is on the mark. There are two kinds of realism here. Quantum mechanics is contrary to the sort of realism associated with classical or hidden-variable theories.

Maudlin and the other Bell fans yearn for some sort of classical realism, and prefer to reject locality instead. However, it is foolish to conclude that "the physical world itself is non-local." If that were true, then Nobel prizes would have been given to Bell and his followers long ago.

Maudlin writes:
Finally, it has become fashionable to say that another way to avoid Bell’s result and retain locality is to abandon realism. But such claims never manage to make clear at the same time just what “realism” is supposed to be and just how Bell’s derivation presupposes it. I have heard an extremely distinguished physicist claim that Bell presupposes realism when he uses the symbol λ in his derivation.
That distinguished physicist is correct. Bell assumes that λ parameterizes a hidden variable theory that functions according to classical (non-quantum) rules. If realism means that the world is ruled by classical hidden variables, then realism has been disproved by quantum mechanics and Bell's theorem.
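The role of λ can be made concrete with a toy calculation. Below is my own illustration in Python of one particular deterministic local hidden variable model: a shared λ is drawn, and each wing's outcome depends only on λ and its own detector setting. Its CHSH combination sits at the Bell bound of 2, while the quantum singlet correlation E(a,b) = -cos(a-b) reaches 2*sqrt(2) at the same angles.

```python
import math
import random

def lhv_outcome(setting, lam):
    # Deterministic local model: the outcome at one wing depends only on
    # its own detector setting and the shared hidden variable lambda.
    return 1 if math.cos(lam - setting) >= 0 else -1

def lhv_correlation(a, b, trials=200_000, rng=random.Random(0)):
    # Estimate E(a, b) for an anticorrelated singlet-like pair: wing B
    # reports the opposite of what wing A would report at the same setting.
    total = 0
    for _ in range(trials):
        lam = rng.uniform(0, 2 * math.pi)
        total += lhv_outcome(a, lam) * (-lhv_outcome(b, lam))
    return total / trials

def chsh(E, a, a2, b, b2):
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Settings that maximize the quantum violation
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4

S_lhv = chsh(lhv_correlation, a, a2, b, b2)
S_qm = chsh(lambda x, y: -math.cos(x - y), a, a2, b, b2)

print(S_lhv)  # close to 2, the Bell bound for local models
print(S_qm)   # 2*sqrt(2), about 2.828, the quantum prediction
```

Bell's theorem says that no choice of local model or distribution for λ can push the first number above 2; the experiments agree with the second number.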

Monday, August 18, 2014

Book says purpose is to understand the physical world

A reader quotes Quantum Mechanics and the Particles of Nature: An Outline for Mathematicians by Anthony Sudbery:
"Moreover, it cannot be true that the sole purpose of a scientific theory is to predict the results of experiments. Why on earth would anyone want to predict the results of experiments? Most of them have no practical use; and even if they had, practical usefulness has nothing to do with scientific inquiry. Predicting the results of experiments is not the purpose of a theory, it is a test to see if the theory is true. The purpose of a theory is to understand the physical world"
The book is out of print, but you can download the full text here, and the above quote is on p.214.

In contrast, R.P. Feynman's textbook said:
Another thing that people have emphasized since quantum mechanics was developed is the idea that we should not speak about those things which we cannot measure. (Actually relativity theory also said this.)
In relativity, someone could ask: What is the real time? Are the events really simultaneous? Do the rods really contract or just appear to contract? Why do the twins age differently?

At some point, you have to accept the fact that if you ask questions that cannot be resolved by experiment, then you may not get a satisfactory answer. You may not get that understanding of the physical world that you desire.

One of the main points of XX century physics is to stick to what is measurable. Sudbery wants more out of a scientific theory. I take the more positivist view. Speculate about unobservables all you want, but face the fact that if there is no experiment to say whether you are right or wrong, then you have left the domain of science.

Thursday, August 14, 2014

Philosophers reject logical positivism

It is sometimes said that quantum mechanics brought a new philosophy of science, inspired by logical positivism. Eg, see this comment:
Bohr is an example of a scientist who made some philosophical remarks in defense of a scientific theory.

Where do you think that philosophy came from? Did you ever hear of the Vienna Circle, that collaboration of philosophers, mathematicians and scientists?

Bohr collaborated with people like Schlick, Neurath, Godel and others on this philosophical approach. This was a completely new way of looking at science, the suspension of the automatic assumption that it was about a ‘real world’ or ‘external reality’.

This is not to say that they denied an external reality (although some, like Schrodinger did) but that it was not a hypothesis that physics needed.

You could certainly argue that this is one of the things that enabled physics in the 20th century to make such bold leaps.
Maybe so, but those philosophical ideas have been rejected by nearly all philosophers of the last 50 years. I have posted how positivism has been attacked by Michael Polanyi and Steven Weinberg.

This is at the core of why I say that philosophers are at war with science. They are the equivalent of Flat Earthers, because they are rejecting what underlies XX century science.

Joshua Engel tries to answer Has logical positivism been successfully debunked? on Quora:
"Debunked" is the wrong word for it. "Abandoned as unsuitable for its purpose" is more apt. And that's only because its ultimate purpose is so lofty, and the power of so intense, that it is the only philosophical movement I've ever known to actually be able to determine that it was on the wrong track.

As a name for the not-completely-rigorous discipline of scientific materialism, "logical positivism" remains in use. What's often called "logical positivism" isn't: it's commonly associated with "falsifiability", which is actually antithetical to the key idea of logical positivism that statements could be positively proved to be true.

In its original, more rigorous definition, "logical positivism" has been abandoned for a number of reasons. The most important of these, I think, is in the Problem of Induction: how can you possibly derive a rigorous definition of truth from finite numbers of observations? The Logical Positivist school had been inspired by Wittgenstein's effort to make language rigorous and thus to translate the entire world into a single, true logical formalism, but there's an inherent gap between observation and reality that they could never cover.

The goal had been to replace "metaphysics" with observation and proof, but this turned out to be a self-swallowing problem: you always end up creating your formalism according to what you perceive now, and this determines the kinds of observations you make. In other words, there's always a metaphysics there, whether you notice it or not.
Olaf Simons defends logical positivism on his blog, and answers:
The idea of “debunking” positivism is intriguing since logical positivism has indeed attracted immensely popular refutations. Some of them went viral as killer applications to be used against the odd logical positivist, while logical positivism itself has rather silently disappeared from the map of active philosophies. Did it vanish because of these refutations? Did Karl Popper kill logical positivism? Or did it kill itself thanks to Ludwig Wittgenstein’s contribution to this particular philosophy?

Quora is a medium of rather short and opinion driven statements. My short assessment is that this is a philosophy that virtually imploded in the shock wave Wittgenstein’s Tractatus sent out. The popular refutations should deserve a perspective even though. They betray a public desire to see this philosophy as dead as if it had never been invented. The very idea of “debunking” logical positivism betrays this desire.
Here is one of the arguments:
Logical positivism: all statements that can't be empirically verified are meaningless.

Response to logical positivism: you can't empirically verify that claim…
Apparently some philosophers accept this as a refutation, but it is nonsense. A more precise statement would be that logical positivism is concerned with empirically verifiable statements, and yes, you can verify its claims if you state them correctly.

This reminds me of people who argue that Goedel somehow refuted logical analysis because he showed that certain logic systems could not prove all truths internally.

The 2nd main argument against logical positivism is that Karl Popper showed that falsificationism is better. That is, proving things false is better than proving things true.

The 3rd main argument is that paradigm shift theory is better. That is, scientists accept what is popular, not what has been shown to be true.

These arguments are very weak. Logical positivism remains the best way to understand modern science. Philosophical attacks on positivism are mainly attacks on modern science.

Monday, August 11, 2014

Rothman on oversimplified textbooks

A reader quotes an essay:
Physicist Tony Rothman said terrible things about all of Physics in the 2011 May-June issue of American Scientist in the article "The Man Behind the Curtain: Physics is not always the seamless subject it pretends to be"

"Nevertheless, as a physicist travels along his (in this case) career, the hairline cracks in the edifice become more apparent, as does the dirt swept under the rug, the fudges and the wholesale swindles, with the disconcerting result that the totality occasionally appears more like Bruegel's Tower of Babel as dreamt by a modern slumlord, a ramshackle structure of compartmentalized models soldered together into a skewed heap of explanations as the whole jury-rigged monstrosity tumbles skyward" ...

"One doesn't have to go so far in quantum theory to be confused. The concept of electron "spin" is basic to any quantum mechanics course, but what exactly is spinning is never made clear. Wolfgang Pauli, one of the concept's originators, initially rejected the idea because if the electron was to have a finite radius, as indicated by certain experiments, then the surface would be spinning faster than the speed of light. On the other hand, if one regards the electron as a point particle, as we often do, then it is truly challenging to conceive of a spinning top whose radius is zero, not to mention the aggravation of infinite forces"

Then he attacks the 2-slit experiment:

"....Rather than describing how the light interacts with the slits, thus explaining why it behaves as it does, we merely demand that the light wave meet certain conditions at the slit edge and forget about the actual forces involved. The results agree well with observation, but the most widely used of such methods not only avoids the guts of the problem but is mathematically inconsistent"

He goes on and on...attacking Lagrangian mechanics, ....etc

"The great swindle of introductory physics is that every problem has an exact solution. Not only that, students are expected to find it."
Of course the books say what is spinning -- the electron spins. What they do not do, and cannot do, is to give a classical model of a mechanical spinning electron to visualize. Quantum mechanics shows that the electron behaves differently from those classical objects, but there really is an electron and it really does spin.
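What the books can say precisely, with no classical picture at all, is how spin transforms. A spin-1/2 state picks up a factor of -1 under a full 360-degree rotation, which no classical spinning top does. Here is a minimal sketch, my own illustration in Python, using the standard rotation operator exp(-i*theta*sigma_z/2) written out for the diagonal case:

```python
import cmath
import math

# Rotating a spin-1/2 state about the z axis by an angle theta multiplies
# the up and down amplitudes by exp(-i*theta/2) and exp(+i*theta/2):
# the rotation operator exp(-i*theta*sigma_z/2) is diagonal in this basis.
def rotate_z(state, theta):
    up, down = state
    return (cmath.exp(-1j * theta / 2) * up,
            cmath.exp(+1j * theta / 2) * down)

state = (1 + 0j, 0j)                  # spin up along z
once = rotate_z(state, 2 * math.pi)   # one full 360-degree turn
twice = rotate_z(state, 4 * math.pi)  # two full turns

print(once)   # approximately (-1, 0): a single full turn flips the sign
print(twice)  # approximately (1, 0): only 720 degrees acts as the identity
```

The sign flip is not just formalism; it shows up in neutron interferometry, so "what is spinning" has an operational answer even without a mechanical model.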

He also complains that some systems can be solved numerically, but not in closed-form exact formulas. Some are also chaotic, making long-term prediction an impossibility. And real-life problems are more complicated than the over-simplified textbook examples.

That essay concludes:
“Explanation” in physics generally means to find a causal mechanism for something to happen, a mechanism involving forces, but textbook optics affords no such explanation of slit experiments. ...

Such examples abound throughout physics. Rather than pretending that they don’t exist, physics educators would do well to acknowledge when they invoke the Wizard working the levers from behind the curtain. Even towards the end of the twentieth century, physics was regarded as received Truth, a revelation of the face of God. Some physicists may still believe that, but I prefer to think of physics as a collection of models, models that map the territory, but are never the territory itself. That may smack of defeatism to many, but ultimate answers are not to be grasped by mortals. Physicists have indeed gone further than other scientists in describing the natural world; they should not confuse description with understanding.
Some people argue that Newtonian gravity theory is not really an "explanation" of the solar system, because it fails to give a causal mechanism for how the mass of the Sun exerts its force on the planets. I thought that was where Rothman was going in the penultimate paragraph, but then he seems to say that physics does not require any such explanation.

It seems entirely appropriate to me for physics textbooks to explain what they can, and not speculate much about what they cannot explain. Maybe after discussing the hydrogen atom, the book should say that a carbon atom is a whole lot more complicated. But isn't that obvious?

So of course physics textbooks are going to give particle and wave descriptions of light, and give the quantum mechanics description. If you find that intellectually dissatisfying, that's too bad, because that is the best we can do.

Like other physics expositors, he cannot resist talking about Einstein, and mentions:
Einstein did not consider his theory of gravitation — general relativity — complete until he could derive his field equations from an action, a feat that the mathematician David Hilbert accomplished five days before Einstein himself.
I am a little surprised he phrases it that way, because there is no evidence that Einstein derived the field equations from an action. Hilbert and Einstein had many meetings in 1915 before they each published those general relativity papers. Hilbert's paper describes the action, and Einstein's 5-days-later paper does not.

The way it appears to me, Einstein was tentative about general relativity until two things happened in the fall of 1915: (1) he learned that Hilbert could use a covariant action to derive essentially the same field equations that Grossmann published in 1913; and (2) he was able to use those equations to extend Poincare's analysis of the relativistic effect on Mercury's orbit. Einstein then had to hurry up and publish so he could claim full credit, or else Hilbert would get credit.

Friday, August 8, 2014

Statistical interpretation is not minimalist

A reader suggests the Statistical Interpretation of quantum mechanics, aka the Ensemble interpretation, is superior to others.

My main complaint is that this claims to be a "minimalist interpretation". It is not. Sometimes the many-worlds interpretation also claims to be minimalist. It is not either. The most minimal of the mainstream interpretations is the instrumentalist interpretation.

The statistical interpretation essentially says that only probabilities are observable, and only in large systems. But quantum mechanics is commonly used to make predictions about small systems, such as the energy levels of a two-particle hydrogen atom. Turning this into a statistical observation about an ensemble of hydrogen atoms adds a lot of extraneous junk. A minimalist interpretation has to say something about one atom.
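For example, the textbook formula E_n = -13.6 eV / n^2 makes a statement about a single hydrogen atom, not an ensemble. A minimal sketch, where the constant is the approximate Rydberg energy:

```python
# Quantum-mechanical energy levels of the hydrogen atom: E_n = -R / n^2,
# where R is approximately 13.6 eV. This predicts spectral lines for a
# single atom, with no ensemble required.
RYDBERG_EV = 13.605693  # approximate Rydberg energy in electron-volts

def energy_level(n):
    """Energy of the n-th bound level of hydrogen, in eV."""
    return -RYDBERG_EV / n**2

# The n=2 -> n=1 transition energy, about 10.2 eV (the Lyman-alpha line).
lyman_alpha = energy_level(2) - energy_level(1)
```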

I prefer the positivist interpretation, as being truly minimalist.

Probability is a mathematical layer with multiple interpretations, and any quantum mechanics interpretation that requires probability is not minimalist.

Part of the confusion is that people think that probability is essential to quantum mechanics, via the Born rule. Probability is important to quantum mechanics in the same way that it is important to other scientific theories, but not essential.

Wednesday, August 6, 2014

A political book, not a math book

I have criticized physicist Sean M. Carroll many times, as he gets outsized publicity for his silly ideas, and now his wife reviews several math books in the NY Times:
HOW NOT TO BE WRONG
The Power of Mathematical Thinking
By Jordan Ellenberg
Penguin Press, $27.95.

Every math teacher cringes at the inevitable question from students: “When am I ever going to use this?” Ellenberg, a math professor at the University of Wisconsin, admits that even though we’ll never need to compute long lists of integrals in our daily lives, we still need math. It’s “a science of not being wrong about things,” he writes, and it gives us tools to enhance our reasoning, which is prone to false assumptions and cognitive biases.
She does not mention the political bias, exposed by these Amazon reviews:
Mr Ellenberg seems like he knows his math, but he's so entrenched in liberal academia that it probably never dawned on him that others might have different views. I only got to chapter six. When he started in on yet another example of statistical abuse, this time defending President Obama from his detractors, I just couldn't stick it any more.

I don't demand that an author agree with me politically. I'm perfectly aware (as Mr Ellenberg seems not to be) that thoughtful people can disagree. There was just no reason for every second or third example chosen by Mr Ellenberg to illustrate his ideas to be political and absolutely no reason at all for all of them to lean one way. Both parties commit egregious violations of mathematical principles every day, why go out of your way to alienate half of your audience? I'm surprised that the editor at Penguin let this pass. ...

The first indication I had that this isn't a book about how to analyze things objectively through mathematics was when I read the inside flap and it raised the question about who really won Florida in the 2000 election. I skipped ahead to that part of the book, hoping to see a sterile mathematical analysis. But there wasn't one.

There was a lot of totally subjective talk about why Scalia and the other justices ruled the way they did in Bush vs. Gore, with thinly veiled criticism of Scalia. This included the suggestion that Scalia doesn't care about finding the truth in a murder case, based on the author's superficial, sound-bite reference to a case that Scalia once ruled on.

Then he said that the overall count would have been more accurate if the court had allowed a recount in the counties that Gore asked for. This is totally false. The overall count would not have been more accurate if the recount - which could result in more votes being tallied - was only in counties deliberately chosen because they were likely to have more votes for one particular candidate. If you chose a few counties at random for a more thorough recount it would theoretically make the overall count more accurate, but not when you add the bias of choosing counties favorable to one candidate. A math expert should know this.

And, astonishingly, he disregarded the post-election recount sponsored by the New York Times and other media outlets. How can one do a mathematical analysis of the election without using this recount?

Well, he didn't do a mathematical analysis. He just concluded with conjecture about why he thinks the justices ruled the way they did - something that he has no way of knowing, and that isn't a mathematical analysis - with the implication that the result of the election would have been different had they not ruled that way. And, again, he gave NO math to support this.

From what I could see, the rest of the book wasn't any better. It's a political book, not a math book.
Ellenberg does not rebut these statements in the Amazon comments.

Many math and physics professors are living in a liberal bubble where they have no practice in critical political thinking skills.

Another Jennifer Ouellette review says:
We take it as given today that a continuous straight line is made up of an infinite number of distinct tiny parts — a concept that in the 17th century became the foundation of calculus. But it wasn’t always the case. As Alexander, a U.C.L.A. historian, reminds us, there was a time when “infinitesimals” were considered downright heretical. In 1632 the Society of Jesus forbade their use, and they ignited much contentious debate within London’s Royal Society. The debate still raged in the 1730s, when the Anglican bishop George Berkeley mockingly dismissed infinitesimals as “ghosts of departed quantities.”

The argument had little to do with how we look at a simple line, and everything to do with the major cultural shifts at the time, as the rise of scientific thinking challenged longstanding precepts of faith, and aristocratic privilege was beset by a wave of liberal egalitarianism.
No, the bishop did not consider infinitesimals heretical. His "ghosts" were not the infinitesimals but the derivatives, the ghostly limits of a sequence of ratios.

A line has been defined as an infinite set of points ever since Euclid, two millennia ago. The concept did not challenge medieval precepts of faith or anything like that. Berkeley's book is quoted here:
The infidel mathematician is believed to have been either Edmond Halley or Isaac Newton. He argued that although the calculus led to true results, its foundations were no more secure than those that underpin religion. He stated that the calculus involved a logical fallacy and described derivatives thus:

"And what are these fluxions? The velocities of evanescent increments? And what are these same evanescent increments? They are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them ghosts of departed quantities?"

In modern language, this could be read as:

"What are these 'instantaneous' rates of change? The ratios of vanishing increments? And what are these 'vanishing' increments? They are neither finite quantities nor 'infinitesimal' quantities, nor yet nothing. May we not call them the ghosts of departed quantities?"

His interesting theory as to why the calculus actually worked was that it was the result of two compensating errors.

As a consequence of the controversy surrounding Berkeley's publication, the foundations of calculus were rewritten in a much more formal and rigorous manner using limits.
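Berkeley's "compensating errors" point is easy to restate in modern terms; here is a minimal numerical sketch for the fluxion of x²:

```python
# Numerical sketch (not Berkeley's own notation) of his complaint about
# fluxions: the difference quotient for f(x) = x^2 is exactly 2x + h, so
# the calculus divides by h as if h were nonzero, then drops h as if it
# were zero. The two "compensating errors" leave behind the correct
# derivative 2x, the "ghost of a departed quantity".
def difference_quotient(x, h):
    return ((x + h)**2 - x**2) / h  # algebraically equal to 2*x + h

# At x = 3 the quotient is 6 + h, tending to the derivative 6 as h shrinks.
values = [difference_quotient(3.0, h) for h in (1.0, 0.1, 0.001)]
```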
A Scottish university site says:
Berkeley's criticisms were well founded and important in that they focused the attention of mathematicians on a logical clarification of the calculus. He developed an ingenious theory to explain the correct results obtained, claiming that it was the result of two compensating errors. Ren writes in [30]:-
By reviewing Berkeley's lifetime and the content of the "Analysts", we conclude that his critique was correct and that it impelled the improvement of the foundations of calculus objectively. It is helpful for the normal development of mathematics to accept various forms of critique positively.
Many of the other references which we give also discuss Berkeley's attack on the calculus; see [5], [11], [19], [21], [26], [30], and [33]. De Moivre, Taylor, Maclaurin, Lagrange, Jacob Bernoulli and Johann Bernoulli all made attempts to bring the rigorous arguments of the Greeks into the calculus.
Yes, achieving rigor in calculus was a good thing, and not an ignorant prejudice of medieval faith and aristocratic privilege, as Ouellette would have you believe. I don't know whether she shares her husband's atheist ideology, but she is taking cheap shots at religious scholars who did legitimate work.

It is easy to laugh at people who did not have a modern understanding centuries ago. Putting calculus on firm logical foundations took a long time from a lot of smart mathematicians.

Monday, August 4, 2014

More Galileo myths

The Renaissance Mathematicus writes:
How much can you get wrong in an eight hundred word biographical sketch of a very famous sixteenth and seventeenth-century mathematicus and philosophicus? – One helluva lot it seems?

If someone is doing the Internet equivalent of being a big-mouthed braggart and posting an article with the screaming title, “10 Absurdly Famous People You Probably Don’t Know Enough About” you would expect them to at least get their historical facts right, wouldn’t you? Well you would be wrong at least as far as “absurdly famous” person number seven is concerned, Galileo Galilei. Tim Urban the author of this provocative article on the ‘Wait But Why’ blog appears to think that history of science is something that you make up as you go along based on personal prejudice mixed up with some myths you picked up some night whilst drunk in a bar.
He is a little hard on the site -- it was just reciting the standard Galileo myths.

He previously posted:
Now anybody reading, in particular, the popular literature on Galileo with a half way critical mind will very rapidly become aware that it is all permeated with an incredible level of hyperbole, it would appear that Signor Galileo is superhuman. If we just take some of the statements from the, on the whole fairly good, Wikipedia article we have the following collection of exaggerated statements:

Galileo has been called the “father of modern observational astronomy” the “father of modern physics” the “father of science” and “the Father of Modern Science.” Stephen Hawking says, “Galileo, perhaps more than any other single person, was responsible for the birth of modern science.”

At the bottom of the article we have the following title: The Person of the Millennium: The Unique Impact of Galileo on World History.

I have also in my reading come across the following claims: “G is the inventor of the scientific method”, “G was the first to apply mathematics to science”, “G discovered the first mathematical law of science” and so on and so forth…

The image of Galileo Galilei has been inflated like a dirigible airship that floats above the early modern period obscuring the efforts and achievements of all the other scientists, his shadow only being broken by the light of that other god of science Isaac Newton. Unfortunately this image is total bullshit and its propagation leads to a major distortion in our understanding of the historical development of science.
On another post, he makes an odd distinction between theory and hypothesis:
The heliocentric hypothesis says that heliocentricity offers a possible model to explain the observed motion of the planets; it says nothing about the truth-value of this model. The heliocentric theory says that the universe is in reality heliocentric. In 1616 the Church banned the heliocentric theory but not the hypothesis. This might at first seem like splitting hairs but in reality it is a very important distinction. Astronomers were completely free to go on discussing and researching the possibility of heliocentricity but until they produced actual proof that the universe is indeed heliocentric they were not allowed to claim that it was. So in reality the Church was here not even attempting to actively suppress a line of scientific activity. As a side note it should be pointed out from an epistemological standpoint the Church was right to deny the correctness of the heliocentric theory at that time, which does not however excuse their primitive attempt to ban it.
Typical definitions of theory are this from Webster's:
a plausible or scientifically acceptable general principle or body of principles offered to explain phenomena ("the wave theory of light")
Or WordNet:
a tentative theory about the natural world; a concept that is not yet verified but that if true would explain certain facts or phenomena ("A scientific hypothesis that survives experimental testing becomes a scientific theory")
The Church never banned the heliocentric theory. Scientists were free to use heliocentric principles to explain and predict the night sky.

There was probably more free speech in 17th century Italy than anywhere else in the world.

In the recent 5 Common Evolution Myths, Debunked, endorsed by evolutionist professor Jerry Coyne, the first myth is some goofy quibble about the definition of the word theory:
Myth 1: It's just a theory

The truth: The word "theory" has a different meaning inside the scientific community than it does elsewhere.

In everyday language, you and I would use "theory" to describe a whim feeling: a theory that eating the crust on your sandwiches makes you taller, or, say, that Marty Hart's daughter on True Detective was actually involved with the Tuttle clan the whole time (pshh).

Either would totally work in this case. In the general sense, an idea doesn't necessarily need to make sense, or even be true, to be considered a theory.

A scientific theory, on the other hand, refers to a comprehensive explanation for a variety of phenomena.

It begins as a hypothesis. Then, if enough evidence exists to support it, through repeated and thorough testing, it moves to the next step in the scientific method — a theory — where it is accepted as a credible explanation.

One example is atomic theory, which shows how matter is composed of atoms.

Evolution, similarly, is accepted by the vast majority of scientists and backed up by research in fields such as embryology, molecular biology and paleontology.
No, the dictionary definition of "theory" matches both everyday and scientific use.

Consider the above science examples, "wave theory of light" and "atomic theory". These are extremely useful theories, even without necessarily accepting the underlying hypotheses as correct. Many physicists would say that light consists of particles, not waves, and that atomic theory oversimplifies atoms. Scientific theories are not necessarily proven. Sometimes the evidence for them is extremely weak or non-existent, such as with string theory.

For some reason, evolutionists and Galileo myth-promoters want to define a theory as something that is "accepted as a credible explanation." It is a spineless way of arguing that something is true without demonstrating it.

If Galileo were really the father of modern science, he would have understood that he could have a theory like heliocentrism to explain the astronomical observations, without heliocentrism being necessarily provable. If he had understood that, then he would not have had any trouble with the Pope. Galileo's mistake was to claim that he could prove heliocentrism, whereas relativity shows that it was impossible for him to have any such proof.

In Coyne's case, promoting evolution is intertwined with promoting atheism and leftist politics. So he doesn't really want to talk about what can be proven. He wants to be able to say that there is an atheistic theory for life on Earth, and therefore we should accept atheism.

Wednesday, July 30, 2014

Denmark ignored Galileo

There is a widespread myth that Galileo invented the telescope, discovered heliocentrism, and was suppressed by a Pope who would not tolerate new ideas.

The respected physics historian Helge Kragh writes in Galileo in early modern Denmark, 1600-1650:
The scientific revolution in the first half of the seventeenth century, pioneered by figures such as Harvey, Galileo, Gassendi, Kepler and Descartes, was disseminated to the northernmost countries in Europe with considerable delay. In this essay I examine how and when Galileo's new ideas in physics and astronomy became known in Denmark, and I compare the reception with the one in Sweden. It turns out that Galileo was almost exclusively known for his sensational use of the telescope to unravel the secrets of the heavens, meaning that he was predominantly seen as an astronomical innovator and advocate of the Copernican world system. Danish astronomy at the time was however based on Tycho Brahe's view of the universe and therefore hostile to Copernican and, by implication, Galilean cosmology. Although Galileo's telescope attracted much attention, it took about thirty years until a Danish astronomer actually used the instrument for observations. By the 1640s Galileo was generally admired for his astronomical discoveries, but no one in Denmark drew the consequence that the dogma of the central Earth, a fundamental feature of the Tychonian world picture, was therefore incorrect.
This is not surprising. The Dane Tycho invented the instruments that made the best astronomical observations in the world, and that data was used for the best models. Galileo had nothing to compete with that.

Galileo said Mathematics is the language in which God has written the universe, but his telescopic observations and heliocentric arguments were not very mathematical.

Kragh concludes:
Whereas Galileo was well known and highly reputed in the first two decades of the seventeenth century, it took longer before he was discovered by astronomers and natural philosophers in the Nordic countries. Tycho Brahe was aware of him at an early date, but he was an exception. The first time Galileo was mentioned in print by a Danish scholar was in 1617, and five years later he appeared in a Swedish publication. Yet, still around 1640 there were only few references to his scientific work. What eventually attracted attention to the innovative Italian were almost exclusively his astronomical discoveries made by means of the amazing telescope. His advocacy of the Copernican world system was noted, but without making any impact. In the first half of the century there still were no Copernicans in either Denmark or Sweden. Astronomers were either Tychonians or supporters of the Ptolemaic theory.

Galileo’s international fame undoubtedly rested on his telescopic discoveries, but of course he also did pioneering work in mechanics and other branches of natural philosophy. First of all, he introduced the experimental method. There seems to be no mention in the Danish scholarly literature of the physical rather than astronomical Galileo. One looks in vain for awareness of or comments on his theory of the pendulum, his laws of freely falling bodies or his ideas about inertial motion; nor is his views on atomism, the void and the nature of heat to be found in the learned literature. These parts of Galileo’s work were foreign to Danish natural philosophers who predominantly thought in terms of Aristotelian concepts and tended to interpret the Bible quite literally. The situation in Sweden was not very different. Finally it is worth mentioning that apparently the process against Galileo in 1633 did not create much interest. It was known but not, as far as I can tell, discussed in print until much later.
Kepler's astronomy was a whole lot more important than Galileo's during this period. Galileo was the first to publish observations of the moons of Jupiter, but others drew the same conclusions once they got telescopes. Kepler had a sophisticated mathematical model that was way beyond Tycho's, and Tycho's was way beyond Galileo's. There was no good reason for Danes to pay much attention to Galileo's astronomy. Galileo's confrontation with the Pope made a good story, but scientifically, it wasn't that important.

Sunday, July 27, 2014

Born rule is incompatible with many-worlds

Physicist Sean M. Carroll makes another bad attempt at explaining quantum mechanics:
One of the most profound and mysterious principles in all of physics is the Born Rule, named after Max Born. In quantum mechanics, particles don’t have classical properties like “position” or “momentum”; rather, there is a wave function that assigns a (complex) number, called the “amplitude,” to each possible measurement outcome. The Born Rule is then very simple: it says that the probability of obtaining any possible measurement outcome is equal to the square of the corresponding amplitude. (The wave function is just the set of all the amplitudes.)
This is really confused. Particles certainly do have properties like position and momentum; these are observed all the time. The whole point of a particle accelerator is to give particles a particular position and momentum. Granted, they only have these properties when observed that way, but then they are only particles when observed that way, too.

The description of a wave function is over-simplified. For a system as trivial as one electron, the wave function is already more complicated, and the probability is not just the square of the amplitude.

More importantly, once you accept the quantum mechanics premise that observables are operators on a Hilbert space, then there is nothing mysterious about the Born rule. There is no other way to make sense out of observables being operators. It is only mysterious to many-worlds advocates like Carroll, because they do not believe in probabilities. They believe that all possibilities happen in alternate universes and that there is no way to quantify those universes.
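The rule itself, as quoted above, fits in a couple of lines; a toy sketch, assuming a generic hypothetical two-outcome state:

```python
# Toy illustration of the Born rule: a state is a list of complex
# amplitudes, one per measurement outcome, and the probability of each
# outcome is the squared modulus |amplitude|^2 of its amplitude.
state = [(1 + 0j) / 2**0.5, (0 + 1j) / 2**0.5]  # equal superposition
probs = [abs(a)**2 for a in state]              # Born rule: |a|^2
# Each probability is (up to rounding) 0.5, and they sum to 1.
```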

A comment explains:
As a theory, Many Worlds is in a bad state, and this paper is an example of why.

If someone tells me that there are many quantum worlds in a single wavefunction, I expect that they can tell me exactly what part of a wavefunction is a world, and how many worlds there are in a given wavefunction.

As Sean says in his article, a naive attempt to be concrete about what a world is, and how many there are in a given wavefunction, leads to something which *disagrees* with experiment.

But rather than regard this as a point against Many Worlds, and rather than try new ways to carve up the wavefunction into definite worlds… instead we have contorted sophistical arguments about how you should *think* in a multiverse, as the explanation of the Born rule.

The intellectual decline comes when people stop regarding Born probabilities as frequencies, and stop wanting a straightforward theory in which you can “count the worlds”.

Common sense tells me that if A is observed happening twice as often as B, and if we are to believe in parallel universes, then there ought to be twice as many universes where A happens, or where A is seen to happen. But a detailed multiverse theory in which this is the case is hard to construct (Robin Hanson is one of the few to have tried).

Instead what we are getting (from Deutsch, from Wallace, now here) are these rambling arguments about decision theory, rationality, and epistemology in a multiverse. They all aim to produce a conclusion of the form, “you should think that A is twice as likely as B”, without having to exhibit a clear picture of reality in which A-worlds are twice as *common* as B-worlds.
Lumo picks Carroll apart in greater detail, and concludes:
I am really annoyed by the proliferation of this trash and I am annoyed by the fact that this trash is being repetitively pumped into the public discourse by the media and blogs run by narcissist crackpots like Sean Carroll, building upon Goebbels' claim that a lie repeated 100 times becomes the truth. At the end, the reason why I am so annoyed is that people don't have time to appreciate the clever, precious, consistent, and complete way how Nature fundamentally describes phenomena, and the people – like Heisenberg et al. – who have found those gems. These people are the true heroes of the human civilization. Instead, we're flooded by junk by Carroll-style crackpots whose writings don't make any sense and who are effectively spitting on Heisenberg et al.
He is over-the-top, as usual, but he is right that this advocacy of many-worlds is crackpot stuff.