Wednesday, December 31, 2014

Why supersymmetry is wrong

I believe that supersymmetry (SUSY) is wrong, and no SUSY particles will ever be found.

Most theoretical physicists over the last 40 years have been telling us that SUSY particles will surely be found in the next big accelerator. How can I disagree with experts with a much higher IQ?

Simple. My main arguments are:

1. The theoretical arguments for SUSY are faulty.

2. Billions of dollars and the world's smartest people have been devoted to finding SUSY, and all efforts have failed.

3. Existence of SUSY would have a lot of surprising consequences that seem implausible to me, such as dozens of new particles.

One of the arguments for SUSY is a mystical belief in grand unification. That is, at the Big Bang singularity, there was just pure energy, and fermions and bosons were all the same thing. Physicists point to a long history of explaining nature by looking for symmetries that appear to be broken. And there is some very nice mathematics for dealing with supersymmetries, and a lot of people have faith that all good math is good for something.

Sounds great, but science is all about explaining the real world, and fermions are nothing like bosons.

The SUSY proponents make it sound as if they are on the side of elegance and simplicity, but they are not. A SUSY universe would be vastly more complicated than the Standard Model.

You can ask: If God were creating the universe in order to make human life possible, what would be the most straightforward structure?

I am not sure that question makes any sense, but I sure don't see SUSY as part of the answer.

Someone might say "Your argument is just a hand wave. You have no proof that SUSY particles do not exist, or any principle that prevents them from existing."

That's right, I don't. You can prove me wrong by just finding a SUSY particle. All you have to do is build a $10B particle smasher, and have 5k people run it for 10 years. But that has been done now, and no SUSY.

I am against quantum computing for similar reasons.

I have listed 5 arguments against quantum computers, and why the universe is not one.

Scott Aaronson claims to rebut the 11 arguments against quantum computers in his Democritus book, and presumably again in the new book he is promising. (Read this review, and you will see why he needs a new book.) He explains how some people are skeptical about wave functions with coefficient values of about 10^-45, and how a quantum computer will depend on these being both extremely small and nonzero. Quantum mechanics has, in some cases, made predictions to 10-digit accuracy, but nothing like this. He says:
The obvious repudiation of argument 4, then, is that I can take a classical coin and flip it a thousand times. Then, the probability of any particular sequence is 2^-1000, which is far smaller than any constant we could ever measure in nature. Does this mean that probability theory is some "mere" approximation of a deeper theory, or that it's going to break down if I start flipping the coin too many times?
It is one thing to say that a probability is very small, and then confirm it by noticing that it never happens. It is quite another to expect a lot of fancy arithmetic on such numbers to represent a physical device that is going to factor a 200-digit number.
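For scale, Aaronson's coin number is easy to check (my own arithmetic here, nothing from his book):

    import math

    # Probability of one particular sequence of 1000 fair coin flips.
    log10_p = -1000 * math.log10(2)
    print(log10_p)   # about -301, i.e. the probability is ~10^-301

    # The disputed wave-function coefficients are ~10^-45. The coin
    # probability is smaller by over 250 orders of magnitude, which is
    # his point; mine is that nobody builds a machine out of it.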

He gives this response to a negative Amazon review claiming that quantum computing is impossible:
Simple question: if Christian were right, then wouldn't it be even MORE worthwhile to try to build scalable quantum computers---since the failure of those computers to behave as conventional theory predicted would overthrow our current understanding of QM, prove Christian right to the entire world, and ignite a scientific revolution?
This is like saying, "if scientists are right about thermodynamics, then it is all the more worthwhile to build perpetual motion machines to watch them fail." Physicists have spent 20 years trying to build scalable quantum computers, and watching them fail. A better response to this commenter would have been to refer him to the Bell test experiments, as they have confirmed quantum mechanics without confirming quantum computing.

There are SUSY advocates who go on believing regardless of the evidence. SciAm reported:
It is not an exaggeration to say that most of the world’s particle physicists believe that supersymmetry must be true — the theory is that compelling. These physicists’ long-term hope has been that the LHC would finally discover these superpartners, providing hard evidence that supersymmetry is a real description of the universe…

Indeed, results from the first run of the LHC have ruled out almost all the best-studied versions of supersymmetry. The negative results are beginning to produce if not a full-blown crisis in particle physics, then at least a widespread panic. ...

What if supersymmetry is not found at the LHC, he asked, before answering his own question: then we will make new supersymmetry models that put the superpartners just beyond the reach of the experiments.
That is how it is with quantum computers, with the enthusiasts saying for 20 years that the super-Turing payoff is just out of reach.

Monday, December 29, 2014

The crisis in p-hacking

Here is a new article in American Scientist:
The Statistical Crisis in Science

Data-dependent analysis—a “garden of forking paths”—explains why many statistically significant comparisons don't hold up.

Andrew Gelman, Eric Loken

There is a growing realization that reported “statistically significant” claims in scientific publications are routinely mistaken. Researchers typically express the confidence in their data in terms of p-value: the probability that a perceived result is actually the result of random variation. The value of p (for “probability”) is a way of measuring the extent to which a data set provides evidence against a so-called null hypothesis. By convention, a p-value below 0.05 is considered a meaningful refutation of the null hypothesis; however, such conclusions are less solid than they appear.
It is a good article, but one of the authors admits:
Russ correctly noted that the above statement is completely wrong, on two counts:

1. To the extent the p-value measures “confidence” at all, it would be confidence in the null hypothesis, not confidence in the data.

2. In any case, the p-value is not not not not not “the probability that a perceived result is actually the result of random variation.” The p-value is the probability of seeing something at least as extreme as the data, if the model (in statistics jargon, the “null hypothesis”) were true. ...

Russ also points out that the examples in our paper all are pretty silly and not of great practical importance, and he wouldn’t want readers of our article to get the impression that “the garden of forking paths” is only an issue in silly studies.
I guess it is just an editing error. It is amazing that a major science magazine can hire expert statisticians to explain what is wrong with p-values, and get it wrong in the first paragraph.
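To be clear about the corrected definition, here is a little simulation (my own illustration, not from the article): a p-value is the probability, under the null hypothesis, of seeing something at least as extreme as the data.

    import random

    def p_value(observed_heads, n_flips=100, n_sims=20_000):
        # Null hypothesis: the coin is fair. The p-value is the chance,
        # under that null, of a head count at least as far from
        # n_flips/2 as the one actually observed.
        extreme = abs(observed_heads - n_flips / 2)
        hits = 0
        for _ in range(n_sims):
            heads = sum(random.random() < 0.5 for _ in range(n_flips))
            if abs(heads - n_flips / 2) >= extreme:
                hits += 1
        return hits / n_sims

    print(p_value(61))   # about 0.035, below the 0.05 convention

It says nothing about the probability that the observed result was a fluke, which is what the magazine's first paragraph claimed.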

Sunday, December 28, 2014

Why string theory has unraveled

Smithsonian magazine published a Brian Greene article under the title "Is String Theory About to Unravel?" Usually the editor chooses the title, without necessarily any input from the author.

He gives a short history of string theory and how he became one of the theory's biggest enthusiasts, but the past 30 years are mostly a story of theoretical and experimental failures. He even concedes:
I now hold only modest hope that the theory will confront data during my lifetime. ... Looking back, I’m gratified at how far we’ve come but disappointed that a connection to experiment continues to elude us.
So the editor probably read the article and concluded that the theory was unraveling. But either Greene or the other string theorists complained, and now the title reads, "Why String Theory Still Offers Hope We Can Unify Physics".

Arguments for unverifiable theories usually cite Einstein:
Unification has become synonymous with Einstein, but the enterprise has been at the heart of modern physics for centuries. Isaac Newton united the heavens and Earth, revealing that the same laws governing the motion of the planets and the Moon described the trajectory of a spinning wheel and a rolling rock. About 200 years later, James Clerk Maxwell took the unification baton for the next leg, showing that electricity and magnetism are two aspects of a single force described by a single mathematical formalism.

The next two steps, big ones at that, were indeed vintage Einstein. In 1905, Einstein linked space and time, showing that motion through one affects passage through the other, the hallmark of his special theory of relativity. Ten years later, Einstein extended these insights with his general theory of relativity, providing the most refined description of gravity, the force governing the likes of stars and galaxies. With these achievements, Einstein envisioned that a grand synthesis of all of nature’s forces was within reach. ...

While spectacularly successful at predicting the behavior of atoms and subatomic particles, the quantum laws looked askance at Einstein’s formulation of gravity. ...

I like to think that Einstein would look at string theory’s journey and smile, enjoying the theory’s remarkable geometrical features while feeling kinship with fellow travelers on the long and winding road toward unification.
It is not so well known, but Einstein hated geometric formulations of relativity. The string theorists like to pretend that Einstein would have liked the geometric aspects of string theory, but that is unlikely. And his vision of force unification was foolishness, as he never even considered the strong or weak nuclear forces, and never accepted quantum mechanics.

The only string theory accomplishment given is this:
Take black holes, for example. String theory has resolved a vexing puzzle by identifying the microscopic carriers of their internal disorder, a feature discovered in the 1970s by Stephen Hawking.
He is referring to black hole entropy being proportional to the area of the event horizon. The string theorists have another argument for this formula. So they have a new view of some old and untestable formula.
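For the record, the formula is the Bekenstein-Hawking entropy, S = kc³A/(4Għ), where A is the horizon area. A quick Python check of the magnitude for a solar-mass black hole (standard constants; my own illustration):

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    hbar = 1.055e-34   # reduced Planck constant, J s
    k = 1.381e-23      # Boltzmann constant, J/K
    M = 1.989e30       # one solar mass, kg

    r = 2 * G * M / c**2              # Schwarzschild radius, about 3 km
    A = 4 * math.pi * r**2            # horizon area
    S = k * c**3 * A / (4 * G * hbar) # Bekenstein-Hawking entropy
    print(S / k)                      # about 1e77, in units of k

A huge number, but not one that anyone can measure.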

Friday, December 26, 2014

Velocity-addition formula interpretation

Jerzy Kocik writes a new paper on a Geometric diagram for relativistic addition of velocities:
A geometric diagram that allows one to visualize the Poincaré formula for relativistic addition of velocities in one dimension is presented. An analogous diagram representing the angle sum formula for trigonometric tangent is given. ...

IV. HISTORICAL NOTE The relativistic velocity addition formula first appears in the letter of Poincaré to Lorentz, roughly in May 1905, and was presented to the French Academy of Science in 6 June of the same year. Basic ingredients of relativity principle were presented by Poincaré at the world scientific conference in St Louis in September 1904. Poincaré also noted that the expression d² = x² + y² + z² − c²t² defines a relativistic invariant. Although he used 4-vectors, Poincaré did not pursue the geometric consequences. It was Hermann Minkowski who followed up on this idea in 1907. Albert Einstein decisively removed the concept of ether and simplified the derivation of Lorenz’ transformation rules in 1905.
He gives a very nice and simple geometric interpretation of the special relativity velocity-addition formula. I am surprised that no one discovered it before.
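For reference, the formula combines collinear velocities u and v into (u + v)/(1 + uv/c²). A quick Python check (my own, not from the paper) that this is the hyperbolic analogue of the tangent angle-sum formula Kocik draws:

    import math

    def add_velocities(u, v, c=1.0):
        # Poincare's formula for adding collinear velocities.
        return (u + v) / (1 + u * v / c**2)

    print(add_velocities(0.8, 0.8))   # 0.9756..., still below c

    # Same answer via rapidities: velocities add like hyperbolic
    # tangents, just as slopes add like tangents of angles.
    print(math.tanh(math.atanh(0.8) + math.atanh(0.8)))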

The paper also gives an accurate historical account. The velocity addition formula is maybe the most important 1905 Einstein discovery that was not already published by Lorentz. But Poincare had it earlier, and Einstein did not have the geometric interpretation.

Wednesday, December 24, 2014

Philosophers like many-worlds

A couple of prominent Oxford philosophers, Harvey R Brown and Christopher G Timpson, write a new paper:
Bell on Bell's theorem: The changing face of nonlocality

Between 1964 and 1990, the notion of nonlocality in Bell's papers underwent a profound change as his nonlocality theorem gradually became detached from quantum mechanics, ... In our view, the significance of the Bell theorem, both in its deterministic and stochastic forms, can only be fully understood by taking into account the fact that a fully Lorentz-covariant version of quantum theory, free of action-at-a-distance, can be articulated in the Everett interpretation.
Action-at-a-distance theories are contrary to relativistic causality, and contrary to the reductionist methodology of science. If proved scientifically, we would have to accept action-at-a-distance, but we would rather not.

So these guys use that as an excuse to prefer the many-worlds interpretation of quantum mechanics, where zillions of unobservable parallel universes are postulated.

This is just more evidence that philosophers are detached from modern science. The MWI is promoted as an interpretation, meaning there isn't really any empirical evidence for it. So what good is some physical argument about action-at-a-distance? I suppose that is supposed to be better than other philosophers who promote interpretations that introduce action-at-a-distance, like Bohmian mechanics, but I am not sure. The parallel universes are just as contrary to scientific thinking.

Monday, December 22, 2014

Relativity depended on the crucial experiment

I posted a defense of the integrity of physics based on the scientific method. It should not be necessary, but the method is under attack from theoretical physicists promoting untestable theories, and from philosophers denying that science discovers truth.

To attack the importance of the experimentum crucis (crucial experiment), some scholars (strangely and wrongly) cite the history of relativity. For example, here is Pentcho Valev commenting on the Nature article:
Don't rely too much on the experiment. Experimental results can be interpreted so as to "confirm" the opposite of what was actually confirmed. For instance, in 1887 (prior to FitzGerald and Lorenz advancing the ad hoc length contraction hypothesis) the Michelson-Morley experiment unequivocally confirmed the variable speed of light predicted by Newton's emission theory of light and refuted the constant (independent of the speed of the source) speed predicted by the ether theory and adopted by Einstein in 1905 (nowadays the opposite is taught):
These efforts were long misled by an exaggeration of the importance of one experiment, the Michelson-Morley experiment, even though Einstein later had trouble recalling if he even knew of the experiment prior to his 1905 paper. This one experiment, in isolation, has little force. Its null result happened to be fully compatible with Newton's own emission theory of light. Located in the context of late 19th century electrodynamics when ether-based, wave theories of light predominated, however, it presented a serious problem that exercised the greatest theoretician of the day. Norton
It is true that the 1887 Michelson-Morley experiment could be interpreted to be consistent with an emitter theory or an aether drift theory, but the experts of the day had rejected those theories on other grounds.

But it is just wrong to say that Michelson-Morley had little force. The experiment was done following suggestions by Maxwell and Lorentz that it would be crucial. The length contraction was first suggested by FitzGerald in 1889 as a logical consequence of the experiment. Lorentz's relativity theories of 1892 and 1895 were explicitly created to explain the experiment. Poincare's criticisms of 1900 and Lorentz's improvement of 1904 were based on the accuracy of the experiment.
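The numbers show why the experiment was considered crucial. Here is a back-of-envelope estimate (round textbook values for the 1887 apparatus; my own illustration, not from any of these papers):

    L = 11.0              # m, effective arm length (multiple reflections)
    v = 3.0e4             # m/s, Earth's orbital speed
    c = 3.0e8             # m/s
    wavelength = 5.9e-7   # m, the sodium light used

    # Ether theory predicts a path difference of about L*v^2/c^2 between
    # the two arms, doubled when the apparatus is rotated 90 degrees.
    shift = 2 * L * v**2 / c**2 / wavelength
    print(shift)   # about 0.4 fringe expected; only a few hundredths seen

An expected effect 10 to 40 times the sensitivity of the instrument, and it was not there. That is what Lorentz and FitzGerald had to explain.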

The most popular early relativity paper was Minkowski's 1908 paper saying:
But all efforts directed towards this goal [of optically detecting the Earth's motion], and even the celebrated interference-experiment of Michelson have given negative results. In order to supply an explanation for this result, H. A. Lorentz formed a hypothesis ... According to Lorentz every body in motion, shall suffer a contraction in the direction of its motion, ...

This hypothesis sounds rather fantastical. For the contraction is not to be thought of as a consequence of resistances in the ether, but purely as a gift from above, as a condition accompanying the state of motion.

I shall show in our figure, that Lorentz's hypothesis is fully equivalent to the new conceptions about time and space, by which it becomes more intelligible.
It is a historical fact that relativity was developed with Michelson-Morley as the crucial experiment.

Einstein's 1905 paper gives an exposition in terms of two postulates that everyone agrees he got from Lorentz and Poincare. Maybe Einstein understood how those postulates depended on Michelson-Morley in 1905 and maybe he did not. Statements by Einstein and by historians are confusing on the matter. He did seem to understand later, as he said so in his 1909 paper. But it does not matter when Einstein learned the significance of Michelson-Morley, because he was not the one who discovered the length contraction, local time, time dilation, constant speed of light, relativity principle, covariance of Maxwell's equations, spacetime geometry, and all the other crucial aspects of relativity. Those who discovered those things relied on Michelson-Morley, and that is a historical fact.

Modern philosophers reject the positivism of most scientists, and their foolish views depend largely on a misunderstanding of how relativity depended on experiment.

Saturday, December 20, 2014

Disqualified from using the title Skeptic

A bunch of prominent skeptics are upset that they do not have a monopoly on the word:
As Fellows of the Committee for Skeptical Inquiry, we are concerned that the words “skeptic” and “denier” have been conflated by the popular media. ...

Real skepticism is summed up by a quote popularized by Carl Sagan, “Extraordinary claims require extraordinary evidence.” Inhofe’s belief that global warming is “the greatest hoax ever perpetrated on the American people” is an extraordinary claim indeed. He has never been able to provide evidence for this vast alleged conspiracy. That alone should disqualify him from using the title “skeptic.” ...

Please stop using the word “skeptic” to describe deniers.
Sen. Inhofe wrote a book on the subject, and usually says that man-made catastrophic global warming is the hoax. He is not a scientist, and his argument is largely political.

I thought that Sagan's quote was about scientific claims. They are taking Inhofe way too literally if they are expecting a comparison to other hoaxes. His book certainly does provide evidence for his claims.

My dictionary defines "skeptic" as "Someone who habitually doubts accepted beliefs". It does not say that you have to have extraordinary evidence for your doubt.

I suspect that the core of this complaint is political. Looking at the list of skeptic names, the ones I recognize are all left-wingers. Inhofe is a Republican who quotes the Bible and opposes govt regulation of the energy industry.

Inhofe is probably wrong in many of the things he says, but he certainly qualifies as a skeptic.

I consider myself a skeptic of string theory, multiverse, and quantum computing, as I explain here, but I certainly did not think that I had to be qualified to be a skeptic. Would these folks call me a multiverse denier, because I am not in their skeptic club and do not share their politics?

Friday, December 19, 2014

Defend the integrity of physics

I often comment that the more scientific physicists should more actively criticize the fringe views. For example, Sean M. Carroll's blog has an article on how some new variant of the many-worlds interpretation supposedly clears up all the quantum mysteries and makes quantum mechanics just like classical Newtonian mechanics. Criticism is left to Lubos Motl, who explains:
Classical physics allows us to believe in the existence of a "meta-observer" who knows all the exact positions and velocities (or values of fields). This metaobserver may be called God. But in contrast with Sebens' claim, classical physics doesn't assert that God in this sense must exist.

The second sentence suggests that in quantum mechanics, a particle is in two places at once. But even though this language is often used in popular presentations and even physicists' sloppy formulations, it's not the case. When a particle is described by a wave function that is a superposition of "here" and "there", it doesn't mean that the particle is in two places at once. It means that the particle is "here OR there" (not "AND") and we can't know which is the right. But the locations "here" and "there" are only potential locations, not "objectively real locations" of anything, and they are mutually exclusive because the two position eigenstates are orthogonal to each other.

A quantum mechanical particle can't be in two places at once.
Yes, I agree with that. I would even go further, and say that there are not really particles in quantum mechanics. There is no particle in the sense of a specific position and momentum. The particle-like objects also have wave properties, and only truly look like particles in the instant of a particular observation.

The double-slit experiment is often explained as the particle being in both slits at the same time. I guess you can think of it that way, but it is not helpful because the particles are never observed in both places at once.

After much more criticism, Lumo goes into a full rant:
So "dreamers" not only want to "undo" the quantum revolution. They pretty much want to undo the transition of physics to "field theory" as well and claim that the world is described by a classical deterministic Newtonian mechanics. ...

The only real problem, Mr Sebens, is the combination of complete stupidity and arrogance, values that you reconcile so smoothly and naturally.

I am totally infuriated by this junk because complete idiots who are, scientifically speaking, at least 300 years behind the times claim – and use similar idiots in the media to claim – that they are close to cutting-edge science.
I would say only 100 years behind the times.

Einstein is famous for saying quantum mechanics was all wrong. That was over 80 years ago. He had nothing better. Neither do any of the other quantum critics. At some point, you have to dismiss these people as crackpots.

Nature mag has a dose of common sense, endorsed by Peter Woit:
Scientific method: Defend the integrity of physics

Attempts to exempt speculative theories of the Universe from experimental verification undermine science, argue George Ellis and Joe Silk.

This year, debates in physics circles took a worrying turn. Faced with difficulties in applying fundamental theories to the observed Universe, some researchers called for a change in how theoretical physics is done. They began to argue — explicitly — that if a theory is sufficiently elegant and explanatory, it need not be tested experimentally, breaking with centuries of philosophical tradition of defining scientific knowledge as empirical. We disagree. As the philosopher of science Karl Popper argued: a theory must be falsifiable to be scientific.

Chief among the 'elegance will suffice' advocates are some string theorists. Because string theory is supposedly the 'only game in town' capable of unifying the four fundamental forces, they believe that it must contain a grain of truth even though it relies on extra dimensions that we can never observe. Some cosmologists, too, are seeking to abandon experimental verification of grand hypotheses that invoke imperceptible domains such as the kaleidoscopic multiverse (comprising myriad universes), the 'many worlds' version of quantum reality (in which observations spawn parallel branches of reality) and pre-Big Bang concepts.

These unprovable hypotheses are quite different from those that relate directly to the real world and that are testable through observations — such as the standard model of particle physics and the existence of dark matter and dark energy. As we see it, theoretical physics risks becoming a no-man's-land between mathematics, physics and philosophy that does not truly meet the requirements of any.
I think that most scientists would agree with this. I am not sure about Lumo, as he likes string theory, but even he criticizes most of these untestable ideas as unscientific.
In our view, cosmologists should heed mathematician David Hilbert's warning: although infinity is needed to complete mathematics, it occurs nowhere in the physical Universe.
Also correct. Infinities come up a lot in physics, as in the unrenormalized charge of an electron, the center of a black hole, the initial singularity of the big bang, the infinite extent of space, etc. But none of these should be taken too seriously. They are mathematical models. Arguments about infinitely many universes are just silly.

Popper was the last philosopher of science that scientists took seriously. He is mainly liked for what he said in the 1930s about falsifiability as a criterion to demarcate what is or is not science. Peter Morgan responds:
There are limitations to a Popperian account of scientific methodology. The critique has been manifold and substantive at least since Quine. A simply stated criticism is that we don't falsify a theory, we only quantify how accurate it is in different ranges of application. We don't abandon a scientific theory, we find a new theory that is more accurate in part of the old theory's range of application (but there may be other issues, such as greater usability and tractability and good enough accuracy for practical purposes over a wide enough range, so that we are often happy, for example, to use classical mechanics instead of GR or QM).
Popper did say some foolish things about scientific methodology. But he is being cited here for how he distinguished scientific theories, like relativity, from unscientific ones, like Freudian interpretations of dreams.

Freudians and other promoters of false or untestable or fringe theories have always hated Popper. He is also hated by leftists for writing The Open Society and Its Enemies, a political refutation of Marxism. So yes, people like to pretend that Popper has been discredited. I even criticize Popper's anti-positivism. But his basic idea of using falsification for science demarcation is sound.

String theorist Gordon Kane protests:
Ellis and Silk rely heavily on the absence of evidence of superpartners in the first LHC run. How will they respond if superpartners are found in the next run? Does that count as a successful test and reverse their opinion? The literature contains clear and easily understood predictions published before LHC from compactified string theories that gluinos, for example, should have been too heavy to find in Run 1 but will be found in Run 2 (gluino mass of about 1.5 TeV). Third, it is common that some tests of theories at a given time are not technically possible but become possible later. Entanglement in quantum theory is an example.
Yes, supersymmetry in the LHC energy ranges was a testable prediction. I think everyone agrees to that. So far the evidence shows that there is no such thing, but we will not know for sure for another couple of years.

I do not know why he says entanglement was not testable. Entanglement was essential to quantum mechanics going back to the 1920s, and was tested many times. Just about any multi-particle test of quantum mechanics is a test of entanglement.

Sure enuf, Lumo defends string theory:
String theory is known to be almost certainly right because it's the most elegant, explanatory, consistent, and unifying reconciliation of general relativity and quantum field theory. ...
Chief among the 'elegance will suffice' advocates are some string theorists. Because string theory is supposedly the 'only game in town' capable of unifying the four fundamental forces, they believe that it must contain a grain of truth even though it relies on extra dimensions that we can never observe.
They completely misrepresent the status of string theory. String theory is not "supposedly" the only game in town. It is the only game in town. And it doesn't contain just a "grain" of truth. It contains the whole truth. Due to its basic mathematical properties, it cannot be just another approximation that has to be deformed to obtain the right theory. It must be the right theory exactly.
Jim Baggott writes:
However, scientists must learn to be willing to let go of Popper's principle of falsifiability. This lost support among philosophers of science quite some time ago, ...

I agree with Ellis and Silk that the very integrity of science is at stake here. But I think that to deal with these issues properly, we must first understand why there has come to be such an appetite for what I have elsewhere referred to as 'fairy-tale' physics. In his critical 1987 review of John Barrow and Frank Tipler's book 'The Anthropic Cosmological Principle', Danish science historian Helge Kragh wrote: 'This kind of escapist physics, also cultivated by authors like Wheeler, Sagan and Dyson, appeals to the religious instinct of man in a scientific age.' It is also true that this kind of 'escapist' physics is now an integral part of the staple of some aspects of our contemporary culture - from Michio Kaku's 'Science Fantastic', to Morgan Freeman's 'Through the Wormhole' to Christopher Nolan's blockbuster 'Interstellar', not to mention the string of bestselling books that have appeared on string theory and multiverse in the last 10 years or so. We may worry that consumers are getting a very distorted impression of what science is all about, but we can only counter this by helping to make 'real' science at least equally accessible and entertaining.
He is right that falsifiability lost support among philosophers of science, but those philosophers are unable to say anything worthwhile about that "escapist physics", and they will not stand up for the integrity of science. Falsifiability is still a useful concept for debunking that untestable physics speculation.

Cliff makes a series of unsupported attacks on the testability concept, such as:
General relativity also flagrantly defies your standard: We did modify this theory after it failed to describe the rotation of galaxies, by postulating dark matter. GR+DM is still by far the most successful theory of galactic motion, so clearly any putative definition of science had better be able to accommodate this kind of tweaking that you want to exclude.
No, this is a poor argument. Dark matter does not require any modification to general relativity. The observed galaxy rotation, combined with Newtonian gravity or GR, implies that the galaxies have more mass than what we directly see. Some people thought that it might be dust, black holes, neutrinos, or something like that, but the consensus is that these possibilities are not enuf. So the leading view is that there is some dark matter that is otherwise unknown to us. It is also possible that gravity is slightly distorted on galactic scales. But all these ideas are testable and falsifiable, so Cliff makes no sense.
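The inference is plain Newtonian arithmetic: for a circular orbit, the enclosed mass is M(r) = v²r/G. A minimal sketch with round illustrative numbers (not a fit to any real galaxy):

    G = 6.674e-11       # m^3 kg^-1 s^-2
    M_sun = 1.989e30    # kg
    kpc = 3.086e19      # m

    v = 2.2e5           # m/s, a typical flat rotation speed (~220 km/s)
    for r_kpc in (10, 30, 50):
        r = r_kpc * kpc
        M = v**2 * r / G            # mass enclosed, for a circular orbit
        print(r_kpc, M / M_sun)     # grows linearly: ~1e11 to ~6e11 suns

A flat rotation curve means the implied mass keeps growing with radius, far past where the visible stars and gas give out. That is an observation to be explained, not a modification of the gravity theory.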

Tuesday, December 16, 2014

Cox gives bad quantum explanation

In Britain, Brian Cox is a rock star physicist, and the leading physics popularizer after Hawking. In this BBC broadcast interview, he talks about himself and tries to explain quantum mechanics.

I am happy to see real physicists explain their subject to the public, but I am not sure he is helping. First he says that he is an advocate of the many-worlds interpretation (MWI), as more and more other physicists are becoming converts, and does not like the Copenhagen interpretation. He says that MWI is simpler because it avoids wave function collapse. Under Copenhagen, the reality of the Schroedinger cat becomes actualized when you open the box, and the cat is alive or dead. In MWI, cats live and die in parallel universes.

Then he explains [at 21:20] quantum mechanics in terms of particles that hop from place to place according to probabilities given by a very simple rule called the action, as in the Feynman path integral formulation.

This is incoherent. The appeal of the MWI is that the wave function is supreme, and there are no particles, no hopping, and no probabilities. Electrons only look like hopping particles when you collapse the wave function, or you get fooled by some interference between the electron and the measuring equipment. And MWI cannot give probabilities because all possibilities happen in parallel universes, and there is no way to say that some universes are more likely than others.

The Feynman action (ie, path integral formulation) is not a very simple rule.
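To see why, here is a toy discretized version of the rule for a free particle (my own simplification; the grid, the units, and the omitted normalization are all illustrative choices). Even in this stripped-down form, the "simple rule" requires a Lagrangian, a time slicing, and a sum of phases over all paths:

    import cmath
    import itertools

    hbar, m, dt = 1.0, 1.0, 0.1
    grid = [0.5 * i for i in range(-6, 7)]   # coarse grid of positions

    def amplitude(x0, xN, slices=3):
        # Feynman's rule: sum exp(i*S/hbar) over all paths from x0 to xN,
        # where S is the classical action of each path (free particle here).
        total = 0j
        for mids in itertools.product(grid, repeat=slices - 1):
            path = (x0,) + mids + (xN,)
            S = sum(0.5 * m * ((b - a) / dt) ** 2 * dt
                    for a, b in zip(path, path[1:]))
            total += cmath.exp(1j * S / hbar)
        return total

    print(abs(amplitude(0.0, 0.0)))
    print(abs(amplitude(0.0, 1.0)))

And this toy omits the continuum limit and the normalization, which are where the real mathematical difficulties live.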

I previously criticized Cox for saying that quantum mechanics causes your body to shift, "albeit absolutely imperceptibly", in response to events on the other side of the universe.

I get the impression that these physics publicity seekers have discovered that they get much more attention if they babble nonsense about quantum mechanics. I really do not see how this is any better than peddling some pseudoscience like astrology. I would think that the respectable physicists would be annoyed that people like Cox are making the subject seem silly, and correct him.

Sunday, December 14, 2014

Poincare screwed out of history books

Cracked lists 23 Amazing People Who Were Screwed Out of History Books, citing http://plato.stanford.edu/entries/poincare/.

It drew some criticism:
#21 is objectively false. AuntieMeme must have misunderstood the literature. Poincare argued that LORENTZ should be credited with the invention of special relativity. The cited source describes how he PHILOSOPHICALLY argued that spacetime should warp as it does in general relativity, but did not formulate any kind of model or equations. Source: https://en.wikipedia.org/wiki/Relativity_priority_dispute
Poincare certainly published the equations for the special relativity of spacetime. He proposed some equations for general relativity, but died before they were perfected. That Wikipedia article has a lot of useful info on the subject.

Another comment:
The thing about Einstein and Henri Poincare is a bit misleading. First of all, Poincare had only helped to develop the theory of Special Relativity (which can be derived simply from trigonometry and some fundamental kinematics). Poincare had NOTHING to do with General Relativity which is a far more powerful theory.

Second, at the time Einstein published his research, Poincare hadn't formally published his, rather he only published a short blurb about using the concept of relativity to interpret some of Lorrentz's work on four dimensional transformations. So although he discussed the concept that objects don't act any different if they are still or moving, he didn't derive any equations to explain how that affects the universe.

Third, Poincare did absolutely nothing to try to prove it was correct. Einstein was the first one to propose an experiment for confirming it.
All three points are false. Poincare helped develop general relativity by introducing a spacetime metric and geometry, by presenting relativistic theories of gravity, by formulating gravity waves, and by partially explaining an anomaly in Mercury's orbit.

Poincare's work was mostly published before Einstein, with one big 1905 paper being published at about the same time. He certainly had all the equations Einstein did.

Poincare agreed with Lorentz that the Michelson-Morley and other experiments forced special relativity. Einstein was the one who ignored the experimental evidence, and is famous for just postulating it in his 1905 paper.

Another comment:
Poincare is one of the most famous mathematicians of all time, but he didn't invent Special Relativity. Even he gladly admitted to that. He was toying with the right mathematical ideas, but it was Einstein who went all-in on the concepts and really got what was going on conceptually for the first time. That was where Einstein's genius really shown. (And shown again, several times, with the development of Quantum Mechanics.) ...

As for QM, until 1926, most of the work on Quantum could quite reasonably be said to have been Einstein's. Bohr and others had done some, but all had quite quickly retreated from the implications of their work. Einstein was the single person who was interested in exploring what QM really implied and making sense of it as a really complete theory and not an ad hoc hypothesis. (Much as people had been doing with SR before he arrived.) Unfortunately, after 1926 and the advent of Schrodinger's Wave Mechanics and Heisenberg's Matrix Mechanics (equivalent approaches, of course), Einstein got fed up with where things were going. To a fair extend, I'd argue he was fed up with the Copenhagen interpretation in particular, although I don't know how he'd have dealt with the alternatives. That's when he seems to have turned on QM, although, again, it's a lot more subtle that "his disagreed". And remember, Schrodinger *also* didn't like how things went. That's why he cooked up the cat thought experiment, after all.
No, it was Poincare who went all-in on the relativity concepts, and worked out the spacetime geometry. Einstein did not dare contradict Lorentz's theory. Einstein's work on quantum mechanics was mostly to disagree with it.

Another comment:
The one about Einstein and Poincare is complete falsehood. Poincare's first paper on relativity came out three months before Einstein's. There is no way Einstein could have used it for the basis of his own paper, as is implied. (Do you know how long it takes to get a paper submitted and published?) Also, Poincare was wrong about relativity. He denied until his death that energy carried mass, considering mathematics that indicated this 'fictitious'.

However, he was a brilliant mathematician and physicist, and Einstein had acknowledged that his and Poincare's mathematical treatment as equivalent in his 1906 paper, "The principle of the conservation of center of mass motion and the inertia of energy." Einstein continued to acknowledge Poincare publicly for the rest of his life.

In fact this is exactly backwards. It was Poincare who refused to acknowledge Einstein. And just to reiterate, Poincare was wrong about relativity and Einstein was right. Poincare interpreted mathematics that indicated time dilation and mass-energy equivalence as fallacious variables that had no real world effect, rather than real physical effects as Einstein correctly did.
Einstein's papers were not refereed, and were published quickly. In interviews, he claimed that he wrote his paper in about the six weeks before his submission date, and he could have studied Poincare's short 1905 paper for about four weeks of that. Einstein understood that he had to publish quickly.

Einstein did not acknowledge Poincare publicly. All his life, Einstein denied reading Poincare's papers, even tho that 1906 paper does credit Poincare for E=mc².

Saying that Poincare had "fallacious variables that had no real world effect" is backwards. I think he means fictitious variables. Lorentz and Poincare derived their formulas from real physical experiments. Einstein derived his from postulates.

Poincare did refuse to mention Einstein. Poincare's major relativity papers were all submitted before Einstein had published anything on the subject. Poincare never used any of Einstein's ideas. Poincare did once write a gracious letter of recommendation for Einstein.

Friday, December 12, 2014

Simple teleportation over-hyped again

LiveScience reports:
A new distance record has been set in the strange world of quantum teleportation.

In a recent experiment, the quantum state (the direction it was spinning) of a light particle instantly traveled 15.5 miles (25 kilometers) across an optical fiber, becoming the farthest successful quantum teleportation feat yet. Advances in quantum teleportation could lead to better Internet and communication security, and get scientists closer to developing quantum computers.
No, almost every part of this story is wrong. There is no such thing as teleportation. The quantum state is not just the direction a particle is spinning. Nothing traveled instantly. This cannot lead to better Internet and communication security, or get scientists closer to developing quantum computers.

All they did was to split a laser beam into two identical light pulses, send one 15 miles thru optical fiber, and detect the polarization at both ends to see that they match.

The light pulse goes at the speed of light, not instantly. The only teleportation is that a bit of info is being conveyed by the polarization to the detection device at the other end of the fiber.

The application to cryptography and quantum computers is all a big hoax. It does not exist.
Quantum teleportation doesn't mean it's possible for a person to instantly pop from New York to London, or be instantly beamed aboard a spacecraft like in television's "Star Trek." Physicists can't instantly transport matter, but they can instantly transport information through quantum teleportation. This works thanks to a bizarre quantum mechanics property called entanglement.
It is not that bizarre. As you read this, you are getting the info from light pulses transmitted over fiber in massive internet trunk lines.

The only addition in the new experiment is that a copy of the polarized light pulse is at both ends. This is a cute demonstration of quantum mechanics, but the effect has been well understood for decades and there are no practical applications.
Quantum entanglement happens when two subatomic particles stay connected no matter how far apart they are. When one particle is disturbed, it instantly affects the entangled partner. It's impossible to tell the state of either particle until one is directly measured, but measuring one particle instantly determines the state of its partner.
They are only connected in the sense that they were produced in a way that produces equal and opposite light pulses. No one has ever done an experiment to prove that disturbing one particle has any effect on the other.

That last sentence may surprise you if you read popularizations of quantum mechanics. But I fully accept quantum mechanics, as described by the textbooks. I assure you that Nobel prizes would be given if anyone did such an experiment.

What we do know is that it is possible to put two particles in an entangled state such that measurements give equal (or opposite) results. Measuring one might tell you what the other measurement is going to be, but there is no causal effect.
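Those statistics are easy to simulate. Here is a minimal Python sketch of spin-singlet measurements (standard textbook probabilities; it merely samples the predicted joint distribution, so it proves nothing about mechanisms either way):

    import math
    import random

    def singlet_pair(theta):
        # Outcomes for one spin-singlet pair, with the two analyzers
        # misaligned by angle theta. Quantum mechanics predicts
        # P(outcomes differ) = cos^2(theta/2).
        a = 1 if random.random() < 0.5 else -1   # each side alone: a fair coin
        b = -a if random.random() < math.cos(theta / 2) ** 2 else a
        return a, b

    theta = math.pi / 3
    pairs = [singlet_pair(theta) for _ in range(100_000)]
    E = sum(a * b for a, b in pairs) / len(pairs)
    print(E)                  # about -0.5
    print(-math.cos(theta))   # the quantum prediction, -cos(theta)

Note that each side alone sees a fair coin no matter how the other analyzer is set, which is why no signal can be sent this way.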
Physicists think quantum teleportation will lead to secure wireless communication — something that is extremely difficult but important in an increasingly digital world. Advances in quantum teleportation could also help make online banking more secure.
We already have secure online communication for banking. Confidential and authenticated messages can be sent all over the world using the internet.

The above technology, if perfected, might be able to send a few bits confidentially from one router to another. It cannot send messages across routers, and it cannot authenticate messages. It is commercially worthless.

Wednesday, December 10, 2014

Update on Tyson quotes

I mentioned Neil Tyson getting caught using some bogus quotes, and now a NY Times op-ed puts a charitable spin on it:
The science of memory distortion has become rigorous and reliable enough to help guide public policy. It should also guide our personal attitudes and actions. In Dr. Tyson’s case, once the evidence of his error was undeniable, he didn’t dig his hole deeper or wish the controversy away. He realized that his memory had conflated his experiences of two memorable and personally significant events that both involved speeches by Mr. Bush. He probably still remembers it the way he described it in his talks — but to his credit, he recognizes that the evidence outweighs his experience, and he has publicly apologized.

Dr. Tyson’s decision is especially apt, coming from a scientist. Good scientists remain open to the possibility that they are wrong, and should question their own beliefs until the evidence is overwhelming. We would all be wise to do the same. ...

Politicians should respond as Dr. Tyson eventually did: Stop stonewalling, admit error, note that such things happen, apologize and move on.
Eventually? I don't want to beat a dead horse, but the only one to call Tyson on his quotes was not even mentioned in this article, due to editing. It was TheFederalist blog, and it fills in the details. It will be interesting to see whether he continues to use the misquotes.

Monday, December 8, 2014

Philosophical realism is not what you think

One of the themes of this blog is to counter bad ideas of science. I am not interested in the fringe pseudosciences, but the ideas bad enuf to be taught in Harvard classrooms.

The chief offenders are the theoretical physicists pushing wacky untestable ideas, and the philosophers and historians pushing paradigm shift theory.

Philosopher and former biologist Massimo Pigliucci explains how nearly all philosophers and historians of science have adopted a view of science that is diametrically opposed to what most scientists think. He attributes the difference to a lack of self-awareness on the part of scientists.

I say that scientists reject modern philosophy because philosophers have turned anti-science, but he just accuses me of ignorance.

Pigliucci's favorite philosophy-of-science topic is the realism-antirealism debate. You might think that this would be something mildly sensible like the Bohr-Einstein debates of the 1930s, but it is not. A realist is someone who decides what is real according to the prevailing paradigm. An anti-realist is someone who relies on empirical evidence. All the modern philosophers of science are realists.

For example, he says that Ptolemy and Newton were fundamentally wrong, because Copernicus and Einstein respectively replaced them with qualitatively different theories. Likewise classical mechanics was wrong because it was replaced by quantum mechanics. Only an anti-realist would say that Newtonian gravity is correct as applied to getting Apollo 11 to the Moon.

The paradigm shifters go further, and say that Newtonian mechanics cannot even be compared to relativity because mass means something different in the two theories.

The philosophers are wrong on many levels. Mass does not mean something different, and Newtonian mechanics certainly is correct in its domain of applicability. More importantly, the philosophers are wrong about the qualitative differences.

These theories do not really contradict. Consider Ptolemy and Copernicus. It is a simple mathematical fact that Sun-centered and Earth-centered coordinate systems are mathematically equivalent. XX century physics recognizes that, and both are considered valid. Ptolemy was concerned with the appearance of the sky, and did not attempt a 3D model. He does have some arguments for the Earth being stationary, but that has almost nothing to do with his model. I do not know whether he realized that his main epicycles could be interpreted as the Earth's revolution, but that is essentially what Copernicus did.

Thus Copernicus's contribution was essentially to take Ptolemy's 2D model and turn it into a 3D model by making a frame choice and an interpretation of the major epicycles. That contribution had some conceptual advantages, but it was more or less mathematically equivalent as it did not make much difference to the predictions of what we see in the sky.
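The equivalence is just a change of origin. A toy sketch with circular orbits (round radii and periods; my own illustration):

    import math

    def heliocentric(radius, period, t):
        # Circular Sun-centered orbit; radius in AU, period in years.
        angle = 2 * math.pi * t / period
        return radius * math.cos(angle), radius * math.sin(angle)

    def geocentric(radius, period, t):
        # Earth-centered position = Sun-centered planet minus
        # Sun-centered Earth. The subtracted Earth term is exactly the
        # major epicycle of an outer planet in Ptolemy's picture.
        px, py = heliocentric(radius, period, t)
        ex, ey = heliocentric(1.0, 1.0, t)   # Earth: 1 AU, 1 year
        return px - ex, py - ey

    for t in (0.0, 0.25, 0.5):
        print(geocentric(5.2, 11.86, t))     # Jupiter as seen from Earth

Subtracting the Earth's orbit is all it takes to pass between the two pictures; the predictions for what we see in the sky are the same.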

This example is crucial to paradigm shift theory, because the Copernicus model had no measurable advantages over Ptolemy. This is used to support the goofy idea that scientific progress is illusory and irrational.

The Einstein story is also crucial to philosophers. They say that Einstein ignored the evidence and created relativity based on what he thought was right, abruptly overthrowing everything beforehand. Nearly every part of their story is wrong, as I have detailed on this blog and in my book.

The quantum mechanics revolution is cited less often, because T. Kuhn denied that it was a paradigm shift. It had measurable advantages over the previous theories that were immediately accepted, and therefore did not fit his pattern for scientific revolutions and paradigm shifts.

I left this comment on Pigliucci's site:
SciSal, Newtonian mechanics certainly is an approximation to relativity, and this is commonly explained in relativity textbooks and on Wikipedia. Space and time are not two aspects of the same thing in relativity. In spite of your claim that causality does not appear in fundamental physics, space-time in relativity has a causal structure that treats space very differently from time.

It is also not quite right to say that Newtonian mechanics is different because space and time are fixed. Both Newtonian and relativistic mechanics can use coordinate changes in space and time. You say that relativity space-time is continuously dynamic, but it is really the gravitational potential that is continuously dynamic, and a Newtonian potential is also continuously dynamic.

It is true that relativistic gravity has an interpretation in terms of curvature, but so does Newtonian gravity.

You also claim that classical and quantum mechanics are radically different in how they treat space and time, and hence cannot be applied to the same problems. I disagree. See the correspondence principle for how classical mechanics is a macroscopic approximation to quantum mechanics, contrary to Cartwright.

Alexander is quite right that calling Newtonian physics "wrong" is a rather strange use of the word wrong. Perhaps the historian/philosophers use the word to mean something different from everyone else. Phoffman56 explains this correctly, and he is right that the philosopher quibbles have no credibility at all with physicists.

SciSal argues that Einstein space-time is some sort of exception to the view of steady progress in science. I disagree. The history of relativity was one of steady progress over decades, and was not really so radically different. You can still talk about forces as being the rate of change of momentum, and many other Newtonian concepts carry over.
A theme of this blog is the steady progress of science. See my Latin motto above. I thought this progress was obvious, but most philosophers deny it.
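The approximation point in my comment is concrete. A minimal check (standard Taylor expansion; my own illustration): relativistic momentum reduces to the Newtonian mv at everyday speeds.

    import math

    def p_newton(m, v):
        return m * v

    def p_relativistic(m, v, c=3.0e8):
        return m * v / math.sqrt(1 - (v / c) ** 2)

    for v in (30.0, 3.0e4, 3.0e7):   # a car, Earth's orbital speed, 0.1c
        pn, pr = p_newton(1.0, v), p_relativistic(1.0, v)
        print(v, (pr - pn) / pn)     # fractional correction ~ v^2/(2c^2)

The correction is one part in 10^14 for a car. That is the sense in which Newtonian mechanics is correct in its domain of applicability.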

Pigliucci promotes his idea of science in his 2010 book, Nonsense on Stilts: How to Tell Science from Bunk, and in this interview promoting the book. In it, he argues that Newtonian theory of gravity is wrong, because it does not have Einstein's concept of space-time. He acknowledges that string theory might be wrong, but it is not pseudoscience because it is done by smart people and it makes all the same predictions as the Standard Model. He attacks Bjorn Lomborg for speaking about global warming alarmism because his degree is not in climate science.

This is pretty nutty, and Pigliucci himself would be an example of someone speaking outside his expertise. The difference between classical mechanics and relativity is not what he thinks. String theory is not able to make any of the Standard Model predictions, and is pseudoscience in the sense of not making any testable predictions. Lomborg is concerned with the economic choices that people make in response to global warming, and he has expertise in that subject (even with the unpopularity of his view that other problems are more important).

Pigliucci represents a common academic philosophy view of what science is about, and he is way off base. His book has many favorable Amazon reviews, such as this ideological one:
A major political party in America has been taken over by the forces of darkness. Purveyors of scientific illiteracy continue to bombard American citizens with their claims about "legitimate rape," the willful ignorance of creationism, denial of global climate change and other aspects of reality. Pigliucci provides common-sense ammuniton for those who will resist this wave of anti-science reality denial.
But you might want to check out the negative reviews, as they have some very specific and damning criticisms.

Friday, December 5, 2014

Using the disentanglement principle

Yale Law and Psychology professor Dan Kahan writes:
There is a small segment of highly science-literate citizens who can reliably identify what the prevailing scientific view is on the sources and consequences of climate change. But they are no less polarized than the rest of society on whether human activity is causing global warming! ...

Al Gore is right that that climate debate is “a struggle for the soul of America” — and that is exactly the problem. If we could disentangle the question “what do we know” from the question “whose side are you on,” then democratic engagement with the best evidence would be able to proceed. ...

To their immense credit, science education researchers have used empirical methods to address this challenge. What they’ve discovered is that a student’s “disbelief” in evolution in fact poses no barrier whatsoever to his or her learning of how random mutation and genetic variance combine with natural selection to propel adaptive changes in the forms of living creatures, including humans.

After mastering this material, the students who said they “disbelieved” still say they “disbelieve” in evolution. That’s because what people say in response to the “do you believe in evolution” question doesn’t measure what they know; it measures who they are.

Indeed, the key to enabling disbelievers to learn the modern synthesis, this research shows, is to disentangle those two things — to make it plain to students that the point of the instruction isn’t to make them change their “beliefs” but to impart knowledge; isn’t to make them into some other kind of person but to give them evidence along with the power of critical discernment essential to make of it what they will.
What he is saying here is that if you disentangle the science from the politics and religion, then students readily learn and accept the science!

I would have thought this would be obvious, but empirical evidence has turned this into a big revelation.

Apparently teachers of evolution and climate science cannot resist inserting assertions of belief that students do not necessarily accept. Cut that out, and students are not so anti-science after all.

The catch here is that you might train one of those highly science-literate citizens, but he might not accept your policy conclusions.

The current Atlantic magazine has an article on You Can't Educate People Into Believing in Evolution. Why are they so eager to "educate people into believing"? They would do better if they educated people into learning the facts, and letting them believe what they want.
“The psychological need to see purpose, that is really interesting," said Jeffrey Hardin, a professor of zoology at the University of Wisconsin, at the Faith Angle Forum in Miami on Tuesday. “Many Christians consider Neo-Darwinian theory to be dysteleological, or lacking in purpose." Hardin is himself an evangelical Christian; he often speaks with church communities about evolution in his work with the BioLogos Foundation. In these conversations, he said, many evangelicals point to statements like that of paleontologist George Gaylord Simpson, who wrote in his 1967 book, The Meaning of Evolution, "Man is the result of a purposeless and materialistic process that did not have him in mind. He was not planned." When this is echoed by outspoken atheists like Richard Dawkins and Sam Harris, Hardin said, "Evangelicals look at it and go, ‘I can’t accept that, and therefore I cannot accept thinking at all about evolutionary biology.'"
Physicist-atheist Steven Weinberg famously said at the end of a book about the Big Bang:
It is very hard to realize that this all is just a tiny part of an overwhelmingly hostile universe. It is even harder to realize that this present universe has evolved from an unspeakably unfamiliar early condition, and faces a future extinction of endless cold or intolerable heat. The more the universe seems comprehensible, the more it also seems pointless.
Maybe if they disentangle the science from the leftist-atheist dogma, then people will at least learn the science. But to many leftist professors, putting political spin on evolutionary science is essential to their worldview.

Wednesday, December 3, 2014

Philosophers reject physics causality

Massimo Pigliucci is a biologist-turned-philosopher who posts opinions on a variety of philosophical topics, with his favorites being in philosophy of science. He attacks pseudoscience like creationism, of course, but also blasts mainstream scientists for ignoring the philosophy of the last 50 years.

In my opinion, scientists of the last 50 years have rightfully ignored philosophers, because the philosophers have nearly all adopted an anti-science agenda that is irrelevant to modern science.

His latest is On the (dis)unity of the sciences:
Fodor’s target was, essentially, the logical positivist idea (still exceedingly common among scientists, despite the philosophical demise of logical positivism a number of decades ago) that the natural sciences form a hierarchy of fields and theories that are (potentially) reducible to each next level, forming a chain of reduction that ends up with fundamental physics at the bottom. So, for instance, sociology should be reducible to psychology, which in turn collapses into biology, the latter into chemistry, and then we are almost there.

But what does “reducing” mean, anyway?
He mostly complains about reductionism as believed by most scientists, and expressed in extreme form by Steven Weinberg.

The main gripe is that scientists usually feel that they are part of a collective enterprise that is systematically progressing towards the true workings of the universe, as part of one coherent methodology called science. But academic philosophers all concluded decades ago that there is no such thing as scientific truth, that science does not make progress, that scientists jump on paradigms like fads, that science is a huge hodge-podge of contradictory theories, and that even within a subject like physics, science is just a bunch of loosely-related ideas that have little to do with each other. Scientists are just not self-aware enuf to see what they are doing.

When you discover that most scientists subscribe to a philosophy that all philosophers reject, that should be a clue that the philosophers are on the wrong track.

This is why scientists ignore modern philosophers: the philosophers are crackpots, and are so completely wrong it is hard to explain why they are so wrong. The simplest way to see it is that scientists accomplish things, and philosophers have zero influence on science or anything else.

He also says, in response to a commenter:
“I think the philosophical debate over scientific realism vs anti-realism is itself confused”

I beg to differ. It is one of the clearest debates one can find in philosophy.
But another commenter pointed out that the Stanford Encyclopedia of Philosophy says:
It is perhaps only a slight exaggeration to say that scientific realism is characterized differently by every author who discusses it, and this presents a challenge to anyone hoping to learn what it is.
So nobody agrees on what realism means, and yet the debate over it is supposedly one of philosophy's clearest.

Pigliucci responds, referring to this study:
Richard, yeah, it’s not just a slight exaggeration, it’s a pretty serious misrepresentation of the situation. A recent survey of a number of opinions of philosophers, co-authored by David Chalmers, clearly pointed to realism having won the day, especially within the epistemically relevant community, philosophers of science.
Yes, the big majority said that they favor scientific realism, but the survey did not define the term or establish that there was a common definition. These philosophers can babble for 100s of pages on the subject, but never clarify their terminology enuf to say who was the realist and who the anti-realist in the Bohr-Einstein debates. The whole subject has the flavor of people who have never heard of XX-century science.

Several years ago I would have said that of course I was a scientific realist. But then I learned that when the term is applied to quantum mechanics, it usually means people like Einstein, Bohm, and Bell who believed that some hidden variable theory would disprove quantum mechanics. I do not know how "realism" came to mean a mystical belief in hidden variables that have never been observed or found any useful application to the real world, but because of that, put me down as an anti-realist, and put the philosophers down as hopelessly confused.

Crazy as all that is, here is where Pigliucci really goes off the deep end:
Ah, yes, causality! I stayed away from that one too, partly because I’m thinking of a separate essay about it. The interesting thing is that the concept of causality plays an irreplaceable role in the special sciences, but hardly appears at all in fundamental physics. That’s something to explain, of course, and I’d like to hear a physicist’s opinion of it.
I responded:
I am baffled by this comment, as causality is absolutely central to fundamental physics. I cannot think of any part of physics that can function without it. Saying that causality does not appear in physics is like saying energy does not appear in physics. What is physics without energy and causality?

Perhaps you have been influenced by some misguided philosopher like Bertrand Russell. He wrote a 1913 essay saying that “the law of causality, as usually stated by philosophers, is false, and is not employed in science.” He also said that physics never even seeks causes. I do not know how he could say anything so silly, as all the physics textbooks use causality.

Vojinovic: “notion of causality is related to the notion of determinism”

Not really. There are people who work on deterministic and non-causal interpretations of quantum mechanics, and non-deterministic and causal interpretations. So they are not so tightly related.

I also disagree with your claim that radioactive decay is causeless, and that a causal chain of events makes no sense with quantum phenomena. The atomic nucleus consists of oscillating quarks and gluons, and the decay might be predictable if we could get a wave function for everything in that nucleus. That seems impossible, but there is still a causal theory for how that decay takes place, and the decay can be analyzed as a causal chain of events. Saying that radioactive decay is causeless is like saying a coin toss is causeless. It may seem like it has a random outcome to the casual observer, but there are causal explanations for everything involved.
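To make the coin-toss analogy concrete, here is a minimal simulation sketch of my own (the decay constant is made up): individual decay times look as random as coin tosses, but the ensemble obeys a precise exponential law with a definite half-life.

```python
import numpy as np

# Sketch: individual decays look random, but the ensemble follows a
# precise exponential law, just as coin-toss statistics are lawful.
rng = np.random.default_rng(0)
decay_constant = 0.1  # hypothetical decay rate, per second
lifetimes = rng.exponential(1.0 / decay_constant, size=100_000)

half_life_estimate = np.median(lifetimes)      # empirical half-life
half_life_theory = np.log(2) / decay_constant  # T = ln(2) / lambda

print(f"simulated half-life:   {half_life_estimate:.3f} s")
print(f"theoretical half-life: {half_life_theory:.3f} s")
```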

Quantum mechanics is all about finding causal explanations for things like discrete atomic spectra, where such an explanation must have seemed impossible.

Yes, there are philosophers of physics today who work on those non-causal interpretations. As far as I know, nothing worthwhile has ever come out of that work.
Pigliucci posted this response:
No, it is your comment that is one more example of groundless condescending while refusing to engage with the actual argument in a way that may persuade others to actually take you seriously. ...

I’m afraid you either don’t get what Cartwright is saying or you don’t understand physics. Or possibly both. ...

It isn’t a question of what type of causality, it is an issue of causality period. According to my understanding of fundamental physics (and yes, I did check this with several physicists, Sean Carroll among them) the concept of causality plays little or no role at that level of description / explanation. Quantum mechanical phenomena “occur” with this or that probability, following the predictions of a set of deterministic equation, but one doesn’t need to deploy talk of causality at all.

On the contrary, one simply can’t do anything in the special sciences without bringing up causes. This is true for non fundamental physics, chemistry, biology (especially ecology and evolutionary biology), and so forth.
How did Sean M. Carroll get to be a bigger physics authority than Steve Weinberg? Carroll is just some sort of lab assistant at Caltech, not part of the regular faculty, and he trolls the internet to fund his kooky ideas, like the many-worlds interpretation and his take on the arrow of time. He is not sure whether your behavior has been determined by the Big Bang and the laws of physics, but he is sure that causality has no place in physics?

I realize how people can disagree with me on quantum computing, and even free will. But I really do not see how anyone can deny that causality is central to physics.

If I really misunderstand physics that badly, maybe I should shut down this blog. Can someone please give me some example of some part of physics that does not involve causality?

The only example given was radioactive decay, but you can find causal explanations from modern quantum field theory on the Wikipedia page. Someone claimed that there are no causal chains in quantum mechanics, but Feynman diagrams depict exactly such chains.

It is true that physics textbooks do not use the word "causality" a lot. Instead of "A causes B", they might say "solving the equations of motion with initial conditions A gives B" or "A is the Cauchy data for B". Instead of saying "A has no causal effect on B" they might say "B is outside the light cone of A" or "A and B are space-like separated".
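That vocabulary is easy to make concrete. Here is a small sketch of my own (units where c = 1): compute the Minkowski interval between two events and classify whether one can causally affect the other.

```python
import numpy as np

def separation(event_a, event_b):
    """Classify two events (t, x, y, z), in units where c = 1."""
    dt, dx, dy, dz = np.subtract(event_b, event_a)
    interval = dt**2 - (dx**2 + dy**2 + dz**2)  # Minkowski interval s^2
    if interval > 0:
        return "time-like: A can causally affect B"
    if interval < 0:
        return "space-like: B is outside the light cone of A"
    return "light-like: reachable only by a light signal"

print(separation((0, 0, 0, 0), (2, 1, 0, 0)))  # time-like
print(separation((0, 0, 0, 0), (1, 5, 0, 0)))  # space-like
```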

It doesn't matter whether the theory is deterministic or not. Even if you are using a wave function to predict a probability, you still have to solve an initial value problem for the Schroedinger equation, which means that a wave function at one time is causing a wave function at future times. When explaining a laser, you still say that pumping causes electrons to jump to excited states, and that stimulated emission then causes coherent photons.
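Here is a rough numerical sketch of that initial value problem, my own illustration with hbar = m = 1: the wave function at t = 0 is the Cauchy data, and unitary evolution under the Schroedinger equation determines the wave function at every later time.

```python
import numpy as np
from scipy.linalg import expm

# Free particle on a periodic grid, units hbar = m = 1.
n, dx, dt = 200, 0.1, 0.05
x = dx * np.arange(n)

# Kinetic energy from the second-difference Laplacian.
lap = (np.eye(n, k=1) + np.eye(n, k=-1) - 2 * np.eye(n)) / dx**2
lap[0, -1] = lap[-1, 0] = 1 / dx**2  # periodic boundary conditions
H = -0.5 * lap

# Cauchy data: a Gaussian wave packet with momentum 2.
psi = np.exp(-((x - 10) ** 2) + 2j * x)
psi /= np.linalg.norm(psi)

U = expm(-1j * H * dt)  # unitary one-step propagator
for _ in range(100):    # the state at t = 0 determines the state at t = 5
    psi = U @ psi

print("norm preserved:", np.isclose(np.linalg.norm(psi), 1.0))
```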

Causality is so important in quantum field theory that it is one of the four Wightman axioms for such theories.

At the cosmological scale, physicists are always talking about things like what causes supernovas, even if they have never predicted one.

Why would someone think that causality is important in chemistry, but not fundamental physics? I am really trying to understand how someone could be so wrongheaded. What do they think physics is all about, if not causality?

Please tell me if I am wrong about causality. If so, I might have to post a correction, like Scott or Neil.

Monday, December 1, 2014

Aaronson lecture on Computational Physics

MIT complexity theorist Scott Aaronson posts a video lecture (download here) on "Computational Phenomena in Physics":
Scott Aaronson discusses the quest to understand the limits of efficient computation in the physical universe, and how that quest has been giving us new insights into physics over the last two decades. He explores the following questions: Can scalable quantum computers be built? Can they teach us anything new about physics? Is there some new physical principle that explains why they cannot be built? What would quantum computers be good for? Can quantum computing help us resolve which interpretation of quantum mechanics is the right one? Which systems in nature can be universal computers, and which cannot? Aaronson will end by describing a striking recent application of computational complexity theory to the black hole information loss problem.
At 37:00 he says he takes the "conservative" view that quantum computers (QC) are possible, or else "we would have to rewrite all the physics books."
For me, I like to say the single most important application of a quantum computer, even more important than a quantum simulation, is disproving the people who said it was impossible. The rest is icing.
You think his personal biases are clouding his judgment?

I say QC are impossible, and that no textbooks will have to be rewritten. I defy him to show me one quantum textbook that would need more than one page of change.

I doubt that his audience was convinced that QC is possible. He compared it to other impossibilities like faster-than-light travel and perpetual motion machines. He also proposed that a form of P!=NP might be a principle of physics. That is, even a QC would not be able to solve the traveling salesman problem or other NP-hard problems in polynomial time.
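To see why NP-hardness is a plausible candidate for a physical limit, consider brute-force traveling salesman. A toy sketch of my own (random cities, made-up sizes): the search enumerates all (n-1)! directed tours, so the cost explodes factorially no matter how fast the hardware is.

```python
import itertools
import math
import random

random.seed(0)
n = 9  # already 40,320 directed tours; n = 20 would be ~1.2e17
pts = [(random.random(), random.random()) for _ in range(n)]

def tour_length(order):
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % n]])
               for i in range(n))

# Fix city 0 as the start and try every ordering of the rest.
best = min(itertools.permutations(range(1, n)),
           key=lambda p: tour_length((0,) + p))
print("directed tours examined:", math.factorial(n - 1))
print("shortest tour length: %.3f" % tour_length((0,) + best))
```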

It is just as plausible that it is a principle of physics that no super-Turing computation is possible, and QC is like that perpetual motion machine or faster-than-light travel.

At 45:10, he says D-Wave has spent $150M to build a QC, but it is really no faster than a Turing machine. He explains how D-Wave could be running into physical limits that could prevent it from ever achieving its computational goals.

So why is it so hard to believe that other QC attempts will also run into physical limits?

He quotes David Deutsch as saying that QC should be possible because the computation can take place in parallel worlds of the many-worlds interpretation. But he also explains the flaw in that reasoning: it would lead you to believe that a QC could solve NP-hard problems, and thus that QC is more powerful than it really is.

I should not be too hard on Aaronson because at least he appears to be desperately afraid that QC will turn out to be impossible. Almost everyone else refuses to admit the possibility.

Aaronson's latest post is somewhat embarrassing. He says that he is best known for a theorem in quantum computer complexity theory. He says it was easy, that he proved it in about an hour, and that he published it ten years ago. It is widely cited, but recently people found serious errors in the proof. To fix it, he posted a query publicly on MathOverflow, and people found clever work-arounds for his problem. He also says that some serious errors were found in other papers he wrote ten years ago. Hmmmm. I don't want to pile on here, but he is obviously not always right about everything, and he may not be right about his predictions for QC.

Friday, November 28, 2014

Broader significance of quantum entanglement

Quantum Frontiers writes:
This is a jubilee year.* In November 1964, John Bell submitted a paper to the obscure (and now defunct) journal Physics. That paper, entitled “On the Einstein Podolsky Rosen Paradox,” changed how we think about quantum physics. ...

But it’s not obvious which contemporary of Bell, if any, would have discovered his inequality in Bell’s absence. Not so many good physicists were thinking about quantum entanglement and hidden variables at the time (though David Bohm may have been one notable exception, and his work deeply influenced Bell.) Without Bell, the broader significance of quantum entanglement would have unfolded quite differently and perhaps not until much later. We really owe Bell a great debt.
No, this is silly. Bell had a clever idea for disproving quantum mechanics. Had he succeeded, he would be hailed as a great genius.

But the likelihood is that dozens of physicists had similar insights, but decided that disproving quantum mechanics was an impossible task. All attempts, including those of Bell, Bohm, and their followers, have failed.

Lumo responds:
However, I strongly believe that

the fathers of quantum mechanics could collectively solve the particular thought experiment and see the incompatibility of the quantum vs local realist predictions; even without that, the amount of evidence they had supporting the need for the new, quantum core of physics has been overwhelming since the mid 1920s

much of the explicit findings and slogans about entanglement had been known for 29 years, since the 1935 works by Einstein, Podolsky, Rosen; and Schrödinger

Bell's results didn't really help in the creation of the quantum computing "engineering industry" which would only start in 1970 and which has little to do with all the quasi-philosophical debates surrounding entanglement

most frustratingly, Bell's correct results were served in a mixed package along with lots of wrong memes, unreasonable expectations, and misleading terminology and the negative price of these "side effects" is arguably larger than the positive price of Bell's realizations
He goes on to explain why John von Neumann's 1932 argument against hidden variables is not as silly as Bohm and Bell pretend.

Whether or not you find von Neumann's argument persuasive, the consensus since the 1930s has been that hidden variables do not work. Bohm and Bell were just chasing a wrong idea.

The whole idea that Bell's theorem is necessary to understand the broader significance of quantum entanglement is nutty. Entanglement was always essential to multi-particle systems, and Bell did not influence that. He only influenced dead-end searches for other interpretations.

UK BBC News reports:
Belfast City Council is to name a street after John Stewart Bell, one of Northern Ireland's most eminent scientists.

However, his full name will not be used, as the council has "traditionally avoided using the names of people" when deciding on street names.

Instead, the street in the city's Titanic Quarter will be called Bell's Theorem Way or Bell's Theorem Crescent.

Dr Bell is regarded as one of the 20th Century's greatest physicists. ...

Bell's Theorem, more formally known as On the Einstein-Podolsky-Rosen paradox, demonstrated that Einstein's views on quantum mechanics - the behaviour of very small things like atoms and subatomic particles - were incorrect.
Greatest physicist? The consensus back in 1935 was that Bohr won the Bohr-Einstein debates.

MIT physicist David Kaiser writes in the NY Times:
Bell’s paper made important claims about quantum entanglement, one of those captivating features of quantum theory that depart strongly from our common sense. ...

The key word is “instantaneously.” The entangled particles could be separated across the galaxy, and somehow, according to quantum theory, measurements on one particle should affect the behavior of the far-off twin faster than light could have traveled between them.
No, that would be nonlocality, and that has not been demonstrated.
In his article, Bell demonstrated that quantum theory requires entanglement; the strange connectedness is an inescapable feature of the equations. ...

As Bell had shown, quantum theory predicted certain strange correlations between the measurements of polarization as you changed the angle between the detectors — correlations that could not be explained if the two photons behaved independently of each other. Dr. Clauser and Mr. Freedman found precisely these correlations.
The correlations are not that strange. If the two photons are generated in a way that gives them equal and opposite properties, and you make identical measurements on them after they have separated a large distance, then you get equal and opposite results. If you make slightly different measurements, then you get correlations that are predicted by quantum mechanics, but not by hidden variable theories.
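The size of the quantum effect is easy to compute. Here is a sketch of my own, using the spin-singlet form of the correlation (for photon polarization the angles are halved, but the logic is the same): the CHSH combination of four correlations reaches 2*sqrt(2) in quantum mechanics, while any local hidden variable theory is capped at 2.

```python
import math

# Singlet-state prediction: E(a, b) = -cos(a - b) for analyzer angles a, b.
E = lambda a, b: -math.cos(a - b)

a, a2 = 0.0, math.pi / 2              # Alice's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

# CHSH combination; local hidden variables require |S| <= 2.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print("quantum |S| = %.4f" % abs(S))  # 2.8284 = 2*sqrt(2)
print("local hidden variable bound = 2")
```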

He discusses closing experimental loopholes, and then proposes to address superdeterminism:
How to close this loophole? Well, obviously, we aren’t going to try to prove that humans have free will. But we can try something else. In our proposed experiment, the detector setting that is selected (say, measuring a particle’s spin along this direction rather than that one) would be determined not by us — but by an observed property of some of the oldest light in the universe (say, whether light from distant quasars arrives at Earth at an even- or odd-numbered microsecond). These sources of light are so far away from us and from one another that they would not have been able to receive a single light signal from one another, or from the position of the Earth, before the moment, billions of years ago, when they emitted the light that we detect here on Earth today.

That is, we would guarantee that any strange “nudging” or conspiracy among the detector settings — if it does exist — would have to have occurred all the way back at the Hot Big Bang itself, nearly 14 billion years ago.
It is hard to see how anyone would be convinced of this. The problem is that if there is superdeterminism, then we do not have the free will to make the random choices in the experiment design or inputs, and we will somehow be guided to make choices that make the experiment turn out right. We cannot toss coins either, because they are also determined. Everything is determined. But somehow the choice keyed to distant quasar light is supposed to be the most random one we can make.

LuMo agrees:
Aspect says the usual thing that up to Bell, it was a matter of taste whether Bohr or Einstein was right. Well, there was a difference: Bohr actually had a theory that predicted all these entanglement measurements while Einstein didn't have a damn thing – I mean, he didn't have any competing theory. How can a "perfectly viable theory agreeing with observations" vs "no theory" may be "up to one's taste", I don't know. Moreover, it is not really true that it wasn't understood why the predictions of quantum mechanics were not imitable by any local classical theory.
Bell and his followers (like Aspect) successfully revived the Bohr-Einstein debates, but the outcome is always the same. Bohr and quantum mechanics turn out correct, and the naysayers have no substantive alternative.

Wednesday, November 26, 2014

NY Times history of Big Bang stories

I have posted (such as here and here) how LeMaitre is the father of the big bang, even tho Hubble is usually credited. LeMaitre had the theory and the data, and published well before Hubble. Hubble had a bigger telescope and published data that seemed more convincing at the time, but we now know that his data was wildly inaccurate.

I assumed that Hubble got more credit in America because he was an American, and because LeMaitre published in an obscure journal. But now I find that the NY Times credited LeMaitre all along:
In 1927, Georges Lemaître, a Roman Catholic priest and astronomer from Belgium, first proposed the theory that the universe was born in a giant primeval explosion. Four years later, on May 19, 1931, The New York Times mentioned his new idea under a Page 3 headline: “Le Maître Suggests One, Single, Great Atom, Embracing All Energy, Started the Universe.” And with that, the Big Bang theory entered the pages of The Times.

Over the years, The Times mentioned the theory often, and used a variety of terms to denote it — the explosive concept, the explosion hypothesis, the explosion theory, the evolutionary theory, the Lemaître theory, the Initial Explosion (dignified with capital letters). Occasionally, descriptions approached the poetic: On Dec. 11, 1932, an article about Lemaître’s visit to the United States referred to “that theoretical bursting start of the expanding universe 10,000,000,000 years ago.”
The article goes on with some terminology history:
It was not until Dec. 31, 1956, that The Times used Hoyle’s term and then only derisively: “this ‘big bang’ concept,” the anonymous reporter called it in an article discussing discoveries that “further weakened the ‘big bang’ theory of the creation of the universe.” ...

By March 11, 1962, it was becoming clear that the phrase “big bang” had been co-opted by proponents of the theory. An article in The Times Magazine said that the sudden “explosion and expansion is by some called ‘Creation’; by others merely ‘the big bang.' ” ...

Today, when almost all astronomers have accepted the theory, The New York Times Manual of Style and Usage unequivocally requires Big Bang theory — uppercase, no quotation marks.
The theory has changed a little bit since LeMaitre, with the addition of inflation and dark energy.

Monday, November 24, 2014

Listening to authorities claiming settled science

Scott Aaronson posts Kuperberg’s parable on why we should accept non-expert authority figures on subjects like global warming.

In the parable, the physician cannot explain to the patient the risks of cigarette smoking, and can only say that the science is settled that it is bad. This is not enuf info for the patient, who asks more questions about the actual risk. The physician does not have the expertise to answer the questions, and just wants the patient to do as he is told.

This reveals an authoritarian mindset. I have met people who believe in always doing whatever the physician or other professional recommends. They seem baffled if I say that I like to make my own decisions. The fact is that when a physician makes a recommendation on some subject like eating saturated fats, he is probably not following the settled science.

I realize that there are a lot of people with irrational objections to scientific evidence, and that it may be a waste of time for the physician to learn the details of smoking risks. But he could refer a patient who wants to be informed to a reliable source.

I follow the hard science, and that is almost never wrong. But the science authorities are often wrong. They join fads and over-hype results.

I see physicists jumping on stupid fads all the time. Climate science is probably ten times worse.

There is hard science showing that CO2 concentrations are increasing, that the increases are attributable to humans burning fossil fuels, and that CO2 absorbs infrared radiation. I do not question any of that. But the climate authorities want us to accept all sorts of other things, like the recent USA-China agreement where China makes a non-binding commitment to stick to projections of emission reductions starting in 2030. It seems like just a public relations stunt to me.

There are huge uncertainties in the temperature projections, and considerable doubt about the policy implications. A slightly warmer world might do more good than harm. It is a crazy idea that we should all drive electric cars because some authority figure says the climate science is settled.

Greg Kuperberg explains his parable:
If you look carefully, the patient in effect demanded certitude from the doctor, only to then use it against her. The doctor was willing to provide the general correlation between smoking and fatal illness, and to discuss causal models. The patient brushed that away as well, first by citing that correlation does not imply causation, then by implying that a general theoretical model might not apply to him specifically.

These are all clever debate positions, but what’s behind them is a misinterpretation of the role of uncertainty in science; and a misreading of the purpose of talking to scientists or doctors. The role of scientists is not to win debates — maybe debates with each other sometimes, but not debates with people in general. The role of scientists is to explain and advise to their best abilities. Part of a good explanation is admitting to uncertainties. But then the clever debater can always say, “Aha! Until you know everything, you don’t really know anything.” (As Scott already pointed out.)
No, this is wrongheaded for multiple reasons. First, the physician is not a scientist. He does not do experiments on patients. He is not trained to do or evaluate scientific research. A physician might occasionally publish his findings, but physicians nearly always treat patients according to accepted textbook wisdom.

Physicians need to explain and advise, but that is not the role of scientists. Scientists need to demonstrate evidence showing how their hypotheses are superior to the alternatives. And yes, their role includes estimating the uncertainty in their assertions. But the doctor in the parable fails to do all those things.

It really is true that a lot of medical studies show a correlation without causation. And there are population risks that do not apply to particular individuals. Table salt is an example. The advice is commonly given that cutting back on salt will make people healthier, but salt has no known adverse effects for most people.

A central point in Al Gore's movie is a chart showing a correlation between CO2 concentrations and warming. What he does not explain is that the CO2 increase seems to occur 800 years after the warming. Yes, people are right to ask these questions. Aaronson and Kuperberg show their authoritarian mindset when they say that we should just accept official advice without question.

Scott adds:
I think it’s unfortunate that people focus so much on the computer climate models, and in retrospect, it seems clear that climate researchers made a mistake when they decided to focus so heavily on them—it was like announcing, “as soon as these models do a poor job at predicting any small-scale, lower-order phenomenon, you’re free to ignore everything we say.” ...

On reflection, I’d like to amend what I said: building detailed computer models of the climate, and then improving them when they’re wrong, is exactly the right thing to do scientifically. It’s only a mistake politically, to let anyone get the idea that the case for action on climate hinges on the outputs of these models. ...

In my own life, when people talk nonsense about quantum computing, I often feel the urge to give them a complex, multilayered reply that draws heavily on my own recent research, even if a much simpler reply would suffice. After all, that way I get to showcase the relevance of my research! Yet whenever I succumb to that urge, I almost always decide right afterward that it was a mistake.
He is currently writing a book whose main purpose is addressing the quantum computing nonsense that people still had after reading his previous book. I wonder whether he will address my arguments against QC.

I guess he is saying here that he has done interesting research into the computational complexity classes that arise from QC, and he thinks that such research is worthwhile, but it all collapses if QC turns out to be impossible. This is similar to the arguments for string theory -- mainly that lots of very smart people have discovered some fascinating mathematical models, and it would be a shame if it had nothing to do with the real world.

It is argument from authority, in spite of all real world evidence being to the contrary.

Update: Here is Scott's latest, giving another example of the leftist-authoritarian mind at work:
But doctors have always gone beyond laying out the implications of various options, to tell and even exhort their patients about what they should do. That’s socially accepted as part of the job of a doctor.

The reason for this, I’d say, is that humans are not well-modeled as decision-theoretic agents with a single utility function that they’re trying to maximize. Rather, humans are afflicted by laziness, akrasia, temptation, and countless things that they “want to want, but don’t want.” Part of the role of the doctor is to try to align the patient’s health choices with his or her long-term preferences, rather than short-term ones.

In the specific case of marijuana, I’d think it reasonable for the doctor to say something like this: “well, marijuana is much safer and less addictive than tobacco, so if you’re going to smoke something, then I’m happy it’s the former. But please consider getting your marijuana through a vaporizer, or mixing it in smoothies or baking it in cakes, rather than smoking it and ingesting carcinogenic ash into your lungs.”

Akrasia is an obscure Greek word meaning "the state of acting against one's better judgment" out of a lack of self-control or weakness of will.

In other words, the people are sheep who need authorities to tell them what is good for them. But such advice should be modified to not conflict with popular leftist causes, like dope-smoking. He also says:
I haven’t studied the IPCC reports enough to say, but my guess is that they massively understate the long-term danger — for example, by cutting off all forecasts at the year 2100.
He also says that he is rejecting policies that cause: "no extra suffering today, but massive suffering 100 years from now that causes the complete extinction of the human race".

The climate experts are only predicting a sea level rise of a couple of feet in the next century, not complete extinction of the human race. So it seems clear that he is not concerned with the hard evidence for what is really going to happen. He wants some like-minded leftist authoritarians to dictate policy to the masses, and have them accept it as progress towards a better world.

Update: Lubos Motl piles on. He frequently criticizes global warming alarmism. (That is, he says there may be some human-induced warming but it is not economically significant.)
However, the doctor wasn't able to offer any numbers to the smoker. I will do it for you momentarily. More importantly, the doctor was a Fachidiot who considers the recommendations she hears from fellow doctors as a holy word – and seems to completely overlook the human dimension of the problem, especially the fact that people have some positive reasons to smoke, reasons she is completely overlooking. (In the same way, people may have very good reasons to veto a surgery or some treatment, too.) She looks at the patient as if she were looking at a collection of tissues and the only goal were to keep these tissues alive for a maximum amount of time. We sometimes hear that doctors are inhuman and look at their patients as if they were inanimate objects; however, it's rarely admitted that the physicians' recommendations "you have to stop A, B, C" are important examples of this inhuman attitude!
This parable does seem to illustrate different thinking styles. I am really surprised that smart people like Kuperberg and Aaronson so aggressively defend such an anti-factual persuasion method.