Friday, November 28, 2014

Broader significance of quantum entanglement

Quantum Frontiers writes:
This is a jubilee year.* In November 1964, John Bell submitted a paper to the obscure (and now defunct) journal Physics. That paper, entitled “On the Einstein Podolsky Rosen Paradox,” changed how we think about quantum physics. ...

But it’s not obvious which contemporary of Bell, if any, would have discovered his inequality in Bell’s absence. Not so many good physicists were thinking about quantum entanglement and hidden variables at the time (though David Bohm may have been one notable exception, and his work deeply influenced Bell.) Without Bell, the broader significance of quantum entanglement would have unfolded quite differently and perhaps not until much later. We really owe Bell a great debt.
No, this is silly. Bell had a clever idea for disproving quantum mechanics. Had he succeeded, he would have been hailed as a great genius.

The likelihood is that dozens of physicists had similar insights but decided that disproving quantum mechanics was an impossible task. All attempts, including those of Bell, Bohm, and their followers, have failed.

Lumo responds:
However, I strongly believe that

the fathers of quantum mechanics could collectively solve the particular thought experiment and see the incompatibility of the quantum vs local realist predictions; even without that, the amount of evidence they had supporting the need for the new, quantum core of physics has been overwhelming since the mid 1920s

much of the explicit findings and slogans about entanglement had been known for 29 years, since the 1935 works by Einstein, Podolsky, Rosen; and Schrödinger

Bell's results didn't really help in the creation of the quantum computing "engineering industry" which would only start in 1970 and which has little to do with all the quasi-philosophical debates surrounding entanglement

most frustratingly, Bell's correct results were served in a mixed package along with lots of wrong memes, unreasonable expectations, and misleading terminology and the negative price of these "side effects" is arguably larger than the positive price of Bell's realizations
He goes on to explain why John von Neumann's 1932 argument against hidden variables is not as silly as Bohm and Bell pretend.

Whether or not you find von Neumann's argument persuasive, the consensus since the early 1930s has been that hidden variables do not work. Bohm and Bell were just chasing a wrong idea.

The whole idea that Bell's theorem is necessary to understand the broader significance of quantum entanglement is nutty. Entanglement was always essential to multi-particle systems, and Bell did not influence that. He only influenced dead-end searches for other interpretations.

UK BBC News reports:
Belfast City Council is to name a street after John Stewart Bell, one of Northern Ireland's most eminent scientists.

However, his full name will not be used, as the council has "traditionally avoided using the names of people" when deciding on street names.

Instead, the street in the city's Titanic Quarter will be called Bell's Theorem Way or Bell's Theorem Crescent.

Dr Bell is regarded as one of the 20th Century's greatest physicists. ...

Bell's Theorem, more formally known as On the Einstein-Podolsky-Rosen paradox, demonstrated that Einstein's views on quantum mechanics - the behaviour of very small things like atoms and subatomic particles - were incorrect.
Greatest physicist? The consensus back in 1935 was that Bohr won the Bohr-Einstein debates.

MIT physicist David Kaiser writes in the NY Times:
Bell’s paper made important claims about quantum entanglement, one of those captivating features of quantum theory that depart strongly from our common sense. ...

The key word is “instantaneously.” The entangled particles could be separated across the galaxy, and somehow, according to quantum theory, measurements on one particle should affect the behavior of the far-off twin faster than light could have traveled between them.
No, that would be nonlocality, and that has not been demonstrated.
In his article, Bell demonstrated that quantum theory requires entanglement; the strange connectedness is an inescapable feature of the equations. ...

As Bell had shown, quantum theory predicted certain strange correlations between the measurements of polarization as you changed the angle between the detectors — correlations that could not be explained if the two photons behaved independently of each other. Dr. Clauser and Mr. Freedman found precisely these correlations.
The correlations are not that strange. If the two photons are generated in a way that gives them equal and opposite properties, and you make identical measurements on them after they have separated a big distance, then you get equal and opposite results. If you make slightly different measurements, then you get correlations that are predicted by quantum mechanics, but not by hidden variable theories.
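
For concreteness, here is a minimal sketch of those quantum correlations (assuming the standard polarization-entangled photon pair and the usual CHSH detector angles; the function and variable names are mine, and the numbers are textbook quantum predictions, not data from any particular experiment):

```python
import numpy as np

def E(a, b):
    # Quantum prediction for the polarization correlation of an entangled
    # photon pair, with the two detectors set at angles a and b (radians).
    return np.cos(2 * (a - b))

# Standard CHSH detector settings
a1, a2 = 0.0, np.pi / 4
b1, b2 = np.pi / 8, 3 * np.pi / 8

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)  # about 2.83; any local hidden-variable model is bounded by 2
```

Identical settings give perfectly correlated (or anti-correlated, depending on the source convention) results; it is the slightly different settings where the quantum prediction and the hidden-variable bound part ways.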

He discusses closing experimental loopholes, and then proposes to address superdeterminism:
How to close this loophole? Well, obviously, we aren’t going to try to prove that humans have free will. But we can try something else. In our proposed experiment, the detector setting that is selected (say, measuring a particle’s spin along this direction rather than that one) would be determined not by us — but by an observed property of some of the oldest light in the universe (say, whether light from distant quasars arrives at Earth at an even- or odd-numbered microsecond). These sources of light are so far away from us and from one another that they would not have been able to receive a single light signal from one another, or from the position of the Earth, before the moment, billions of years ago, when they emitted the light that we detect here on Earth today.

That is, we would guarantee that any strange “nudging” or conspiracy among the detector settings — if it does exist — would have to have occurred all the way back at the Hot Big Bang itself, nearly 14 billion years ago.
It is hard to see how anyone would be convinced by this. The problem is that if there is superdeterminism, then we do not have the free will to make the random choices in the experiment design or inputs, and we will somehow be guided to make choices that make the experiment turn out right. We cannot toss coins either, because they are also determined. Everything is determined. But somehow that distant quasar light is supposed to be the most random input we can find.

LuMo agrees:
Aspect says the usual thing that up to Bell, it was a matter of taste whether Bohr or Einstein was right. Well, there was a difference: Bohr actually had a theory that predicted all these entanglement measurements while Einstein didn't have a damn thing – I mean, he didn't have any competing theory. How can a "perfectly viable theory agreeing with observations" vs "no theory" may be "up to one's taste", I don't know. Moreover, it is not really true that it wasn't understood why the predictions of quantum mechanics were not imitable by any local classical theory.
Bell and his followers (like Aspect) successfully revived the Bohr-Einstein debates, but the outcome is always the same. Bohr and quantum mechanics turn out correct, and the naysayers have no substantive alternative.

Wednesday, November 26, 2014

NY Times history of Big Bang stories

I have posted (such as here and here) how LeMaitre is the father of the big bang, even tho Hubble is usually credited. LeMaitre had the theory and the data, and published well before Hubble. Hubble had a bigger telescope and published data that seemed more convincing at the time, but we now know that his data was wildly inaccurate.

I assumed that Hubble got more credit in America because he was an American, and because LeMaitre published in an obscure journal. But now I find that the NY Times credited LeMaitre all along:
In 1927, Georges Lemaître, a Roman Catholic priest and astronomer from Belgium, first proposed the theory that the universe was born in a giant primeval explosion. Four years later, on May 19, 1931, The New York Times mentioned his new idea under a Page 3 headline: “Le Maître Suggests One, Single, Great Atom, Embracing All Energy, Started the Universe.” And with that, the Big Bang theory entered the pages of The Times.

Over the years, The Times mentioned the theory often, and used a variety of terms to denote it — the explosive concept, the explosion hypothesis, the explosion theory, the evolutionary theory, the Lemaître theory, the Initial Explosion (dignified with capital letters). Occasionally, descriptions approached the poetic: On Dec. 11, 1932, an article about Lemaître’s visit to the United States referred to “that theoretical bursting start of the expanding universe 10,000,000,000 years ago.”
The article goes on with some terminology history:
It was not until Dec. 31, 1956, that The Times used Hoyle’s term and then only derisively: “this ‘big bang’ concept,” the anonymous reporter called it in an article discussing discoveries that “further weakened the ‘big bang’ theory of the creation of the universe.” ...

By March 11, 1962, it was becoming clear that the phrase “big bang” had been co-opted by proponents of the theory. An article in The Times Magazine said that the sudden “explosion and expansion is by some called ‘Creation’; by others merely ‘the big bang.' ” ...

Today, when almost all astronomers have accepted the theory, The New York Times Manual of Style and Usage unequivocally requires Big Bang theory — uppercase, no quotation marks.
The theory has changed a little bit since LeMaitre, with the addition of inflation and dark energy.

Monday, November 24, 2014

Listening to authorities claiming settled science

Scott Aaronson posts Kuperberg’s parable on why we should accept non-expert authority figures on subjects like global warming.

In the parable, the physician cannot explain to the patient the risks of cigarette smoking, and can only say that the science is settled that it is bad. This is not enuf info for the patient, who asks more questions about the actual risk. The physician does not have the expertise to answer the questions, and just wants the patient to do as he is told.

This reveals an authoritarian mindset. I have met people who believe in always doing whatever the physician or other professional recommends. They seem baffled if I say that I like to make my own decisions. The fact is that when a physician makes a recommendation on some subject like eating saturated fats, he is probably not following the settled science.

I realize that there are a lot of people with irrational objections to scientific evidence, and that it may be a waste of time for the physician to learn the details of smoking risks. But he could refer a patient who wants to be informed to a reliable source.

I follow the hard science, and that is almost never wrong. But the science authorities are often wrong. They join fads and over-hype results.

I see physicists jumping on stupid fads all the time. Climate science is probably ten times worse.

There is hard science showing that CO2 concentrations are increasing, that the increases are attributable to humans burning fossil fuels, and that CO2 absorbs infrared radiation. I do not question any of that. But the climate authorities want us to accept all sorts of other things, like the recent USA-China agreement where China makes a non-binding commitment to stick to projections of emission reductions starting in 2030. It seems like just a public relations stunt to me.

There are huge uncertainties in the temperature projections, and considerable doubt about the policy implications. A slightly warmer world might do more good than harm. It is a crazy idea that we should all drive electric cars because some authority figure says the climate science is settled.

Greg Kuperberg explains his parable:
If you look carefully, the patient in effect demanded certitude from the doctor, only to then use it against her. The doctor was willing to provide the general correlation between smoking and fatal illness, and to discuss causal models. The patient brushed that away as well, first by citing that correlation does not imply causation, then by implying that a general theoretical model might not apply to him specifically.

These are all clever debate positions, but what’s behind them is a misinterpretation of the role of uncertainty in science; and a misreading of the purpose of talking to scientists or doctors. The role of scientists is not to win debates — maybe debates with each other sometimes, but not debates with people in general. The role of scientists is to explain and advise to their best abilities. Part of a good explanation is admitting to uncertainties. But then the clever debater can always say, “Aha! Until you know everything, you don’t really know anything.” (As Scott already pointed out.)
No, this is wrongheaded for multiple reasons. First, the physician is not a scientist. He does not do experiments on patients. He is not trained to do or evaluate scientific research. A physician might occasionally publish his findings, but physicians nearly always treat patients according to accepted textbook wisdom.

Physicians need to explain and advise, but that is not the role of scientists. Scientists need to demonstrate evidence showing how their hypotheses are superior to the alternatives. And yes, their role includes estimating the uncertainty in their assertions. But the doctor in the parable fails to do all those things.

It really is true that a lot of medical studies show a correlation without causation. And there are population risks that do not apply to particular individuals. Table salt is an example. The advice is commonly given that cutting back on salt will make people healthier, but salt has no known adverse effects for most people.

A central point in Al Gore's movie is a chart showing a correlation between CO2 concentrations and warming. What he does not explain is that the CO2 increase seems to occur 800 years after the warming. Yes, people are right to ask these questions. Aaronson and Kuperberg show their authoritarian mindset when they say that we should just accept official advice without question.

Scott adds:
I think it’s unfortunate that people focus so much on the computer climate models, and in retrospect, it seems clear that climate researchers made a mistake when they decided to focus so heavily on them—it was like announcing, “as soon as these models do a poor job at predicting any small-scale, lower-order phenomenon, you’re free to ignore everything we say.” ...

On reflection, I’d like to amend what I said: building detailed computer models of the climate, and then improving them when they’re wrong, is exactly the right thing to do scientifically. It’s only a mistake politically, to let anyone get the idea that the case for action on climate hinges on the outputs of these models. ...

In my own life, when people talk nonsense about quantum computing, I often feel the urge to give them a complex, multilayered reply that draws heavily on my own recent research, even if a much simpler reply would suffice. After all, that way I get to showcase the relevance of my research! Yet whenever I succumb to that urge, I almost always decide right afterward that it was a mistake.
He is currently writing a book whose main purpose is addressing the quantum computing nonsense that people still have after reading his previous book. I wonder whether he will address my arguments against QC.

I guess he is saying here that he has done interesting research into the computational complexity classes that arise from QC, and he thinks that such research is worthwhile, but it all collapses if QC turns out to be impossible. This is similar to the arguments for string theory -- mainly that lots of very smart people have discovered some fascinating mathematical models, and it would be a shame if it had nothing to do with the real world.

It is an argument from authority, in spite of all real-world evidence being to the contrary.

Update: Here is Scott's latest, giving another example of the leftist-authoritarian mind at work:
But doctors have always gone beyond laying out the implications of various options, to tell and even exhort their patients about what they should do. That’s socially accepted as part of the job of a doctor.

The reason for this, I’d say, is that humans are not well-modeled as decision-theoretic agents with a single utility function that they’re trying to maximize. Rather, humans are afflicted by laziness, akrasia, temptation, and countless things that they “want to want, but don’t want.” Part of the role of the doctor is to try to align the patient’s health choices with his or her long-term preferences, rather than short-term ones.

In the specific case of marijuana, I’d think it reasonable for the doctor to say something like this: “well, marijuana is much safer and less addictive than tobacco, so if you’re going to smoke something, then I’m happy it’s the former. But please consider getting your marijuana through a vaporizer, or mixing it in smoothies or baking it in cakes, rather than smoking it and ingesting carcinogenic ash into your lungs.”

Akrasia is an obscure Greek word meaning "the state of acting against one's better judgment" out of a lack of self-control or weakness of will.

In other words, the people are sheep who need authorities to tell them what is good for them. But such advice should be modified to not conflict with popular leftist causes, like dope-smoking. He also says:
I haven’t studied the IPCC reports enough to say, but my guess is that they massively understate the long-term danger — for example, by cutting off all forecasts at the year 2100.
He also says that he is rejecting policies that cause: "no extra suffering today, but massive suffering 100 years from now that causes the complete extinction of the human race".

The climate experts are only predicting a sea level rise of a couple of feet in the next century, not complete extinction of the human race. So it seems clear that he is not concerned with the hard evidence for what is really going to happen. He wants some like-minded leftist authoritarians to dictate policy to the masses, and have them accept it as progress towards a better world.

Update: Lubos Motl piles on. He frequently criticizes global warming alarmism. (That is, he says there may be some human-induced warming but it is not economically significant.)
However, the doctor wasn't able to offer any numbers to the smoker. I will do it for you momentarily. More importantly, the doctor was a Fachidiot who considers the recommendations she hears from fellow doctors as a holy word – and seems to completely overlook the human dimension of the problem, especially the fact that people have some positive reasons to smoke, reasons she is completely overlooking. (In the same way, people may have very good reasons to veto a surgery or some treatment, too.) She looks at the patient as if she were looking at a collection of tissues and the only goal were to keep these tissues alive for a maximum amount of time. We sometimes hear that doctors are inhuman and look at their patients as if they were inanimate objects; however, it's rarely admitted that the physicians' recommendations "you have to stop A, B, C" are important examples of this inhuman attitude!
This parable does seem to illustrate different thinking styles. I am really surprised that smart people like Kuperberg and Aaronson so aggressively defend such an anti-factual persuasion method.

Sunday, November 23, 2014

Attack on free will is misunderstood Marxism

I have wondered how American professors could adopt such a foolish disbelief in free will, and maybe this is a clue. It is rooted in misunderstood Marxism!

The New Yorker reports:
[B. F.] Skinner was enthralled. Two years after reading the Times Magazine piece, he attended a lecture that Pavlov delivered at Harvard and obtained a signed picture, which adorned his office wall for the rest of his life. Skinner and other behaviorists often spoke of their debt to Pavlov, particularly to his view that free will was an illusion, and that the study of human behavior could be reduced to the analysis of observable, quantifiable events and actions.

But Pavlov never held such views, according to “Ivan Pavlov: A Russian Life in Science” (Oxford), an exhaustive new biography ...

Pavlov’s research originally had little to do with psychology; it focussed on the ways in which eating excited salivary, gastric, and pancreatic secretions. ... That research won him the 1904 Nobel Prize in Physiology or Medicine. But a dog’s drool turned out to be even more meaningful than he had first imagined: it pointed to a new way to study the mind, learning, and human behavior. ...

The Soviets came to regard Pavlov as a scientific version of Marx. The comparison could not entirely have pleased Pavlov, who rebelled at the “divine” authority accorded Marx (“that fool”) and denied that his own “approach represents pure materialism.” Indeed, where others thought that the notion of free will would come to be discarded once we had a full understanding of how the mind worked, Pavlov was, at least at times, inclined to think the opposite. “We would have freedom of the will in proportion to our knowledge of the brain,” he told Gantt in 1927, just as “we had passed from a position of slave to a lord of nature.”

That year, Stalin began a purge of intellectuals. Pavlov was outraged. At a time when looking at the wrong person in the wrong way was enough to send a man to the gulag, he wrote to Stalin saying that he was “ashamed to be called a Russian.”
I realize that people doubted free will before this. Wikipedia says Bertrand Russell rejected free will along with Christianity as a teenager (in around 1890), but also says that he was a big advocate of freedom of thought all his life. I don't know how he reconciles that.

Skinner was voted in 2002 as the most influential psychologist of the 20th century. He denied free will. His determinism was based on behavior, while others focused on DNA and genes. Marxism was not necessarily opposed to free will, but it very much advocated a sort of historical determinism. They all seemed to think that 20th century science was going to make humanity so predictable that there would be no room left for free will.

I am just trying to understand what would lead leftist Harvard professors to reject free will. This is just a clue, and I am sure it is an incomplete explanation.

Friday, November 21, 2014

Free will is stochastic

Leftist-atheist-evolutionist Jerry Coyne attacks neuroscientist Michael Gazzaniga on the subject of free will:
Gazzaniga’s thesis is that, although determinism reigns at the brain level, so that our actions are determined in advance (though not 100% predictable), humans nevertheless still have free will and moral responsibility. In other words, he’s a compatibilist. Compatibilism is, of course, the notion that “free will” can still exist despite physical determinism of our behaviors, including “choice”. ...

As an incompatibilist, I reject the notion that humans have moral responsibility for their actions, since the concept of “moral responsibility” involves “ability to choose otherwise.” ...

There are dozens of different (and sometimes incompatible!) ways to define “free will” to make it compatible with determinism, which leads me to suspect that compatibilists are like theologians, who redefine God so it always remains compatible with the latest findings of science ...

Gazzaniga’s whole thesis is undercut by this misguided statement: “My contention is that ultimately responsibility is a contract between two people rather than a property of a brain, and determinism has no meaning in this context.” ...

And let me say this one more time: philosophers who are truly concerned with changing society based on reason wouldn’t be engaged in compatibilism, they’d be engaged in working out the consequences of determinism, especially its implications for how we reward and punish people.
It sure is funny how he can reject free will, and still be full of opinions on what everybody should be doing. Rejecting personal moral responsibility is too nutty for this blog, but I want to address some of his dubious scientific assertions.
One of the most obvious resemblances of theology to compatibilism is the continual redefinition of “free will” so that (like God) it’s always preserved despite scientific advances. When Libet and Soon et al. showed that they could predict a person’s behavior several seconds in advance of that person’s conscious decision, the compatibilists rushed to save their definition, declaring that these experiments are completely irrelevant to the notion of free will. They’re not. For if free will means anything, it means that our choices are coincident with our consciousness of making them (to libertarians, our consciousness makes those choices, and we could have chosen otherwise).
John Maynard Keynes (and others) have been quoted as saying:
When my information changes, I alter my conclusions. What do you do, sir?
So there is nothing wrong with theologians revising their views based on scientific evidence. Now that I have seen Libet's experiments, I would not be surprised at all if my urge to eat a cheeseburger was detectable in my brain before I was consciously aware of it. But I still have the free will to choose to eat or not eat that cheeseburger.

Coyne says that he follows this Anthony R. Cashmore paper:
Hence, the popular debate concerning the relative importance of genes and environment on behavior, is commonly inadequate for two reasons: both because it ignores the question of responsibility (or lack of) and because of the additional stochastic component that influences biology (12). ...

The introduction of stochasticism would appear to eliminate determinism. However there are three additional points that need to be addressed here. The first point is that, at least in some instances, what at first glance may appear to be stochastic might simply reflect microenvironmental differences and may not be the direct consequence of some inherent stochastic property of atomic particles. The second point is that some physicists, for example ’t Hooft (14), do not necessarily accept the apparent unpredictability associated with the quantum mechanical view of matter (It was concern about this unpredictability that prompted Einstein to offer the viewpoint that “God does not play dice”). Finally, even if the properties of matter are confirmed to be inherently stochastic, although this may remove the bugbear of determinism, it would do little to support the notion of free will: I cannot be held responsible for my genes and my environment; similarly, I can hardly be held responsible for any stochastic process that may influence my behavior! ...

I believe that free will is better defined as a belief that there is a component to biological behavior that is something more than the unavoidable consequences of the genetic and environmental history of the individual and the possible stochastic laws of nature.
It is true that there is a split among physicists about that "stochastic component". Einstein was an ordinary determinist, with his beliefs backed by religion more than science, and rejected quantum mechanics for that reason. 't Hooft has expressed sympathy for superdeterminism, an extreme view. Most physicists seem to be persuaded that quantum mechanics proves that nature is inherently random. A few deny randomness by supporting the many-worlds or Bohmian interpretations, and all the unobservable ghosts that go along with those views.

I stick to the hard science, and a lot of smart people are seriously confused. A stochastic process is just one that is parameterized by some measure space whose time evolution is not being modeled.

Unless you are modeling my urges for cheeseburgers, my appetite is a stochastic process. By definition. Saying that it is stochastic does not rule out the idea that I am intentionally choosing that burger.

Certain quantum mechanical experiments, like radioactive decay or the Stern–Gerlach experiment, are stochastic processes according to state-of-the-art quantum mechanics. That just means that we can predict certain statistical outcomes, but not every event. Whether these systems are truly deterministic, we do not know, and it is not clear that such determinism is really a scientific question.
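
As a toy illustration (made-up numbers, not any particular isotope or experiment; the variable names are mine), the stochastic character means the statistics are fixed while individual events are not:

```python
import numpy as np

rng = np.random.default_rng(0)
half_life = 5730.0                    # toy value, roughly carbon-14, in years
rate = np.log(2) / half_life          # decay constant

decay_times = rng.exponential(1 / rate, size=100_000)

print(decay_times.mean())   # close to 1/rate, about 8267 years, as the statistics predict
print(decay_times[:3])      # but no formula tells us any individual decay time
```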

The nature-nurture folks are fond of twin studies, where they separate influences into genetic (aka heritable), environmental (aka shared environment), and stochastic (aka non-shared environment) factors. Until these studies, most human behaviors were thought to be mostly environmental. These studies consistently show that they are about half genetic and half stochastic, with very little attributable to the (shared) environment. That is, when two kids grow up in the same household and community, their similarities are attributable to their genes (if siblings or identical twins) or to unknown factors that have nothing to do with that household and community. A 2000 review (pdf) says, "When genetic similarity is controlled, siblings often appear no more alike than individuals selected at random from the population." These findings are hard to accept, as it seems obvious that parental child-rearing practices should be a big factor.
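
Here is a rough sketch of how such studies split up the variance (the classical Falconer-style ACE decomposition; the twin correlations below are illustrative made-up numbers, not figures from the cited review):

```python
def ace(r_mz, r_dz):
    # Falconer-style decomposition of trait variance:
    # A = additive genetic, C = shared environment, E = non-shared ("stochastic")
    a = 2 * (r_mz - r_dz)
    c = 2 * r_dz - r_mz
    e = 1 - r_mz
    return a, c, e

# Illustrative correlations: identical twins 0.50, fraternal twins 0.25
print(ace(0.50, 0.25))   # (0.5, 0.0, 0.5): half genetic, half unexplained, no shared environment
```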

The word "stochastic" here is chosen to include all factors other than heredity (aka genetic determinism) and environment. The twin studies do not know what it is, but need some term for unexplained variance. But now Cashmore goes a step further, and defines free will as what goes beyond heredity, environment, and stochastic process. Then he concludes that there is no such thing as free will.

These guys have some underlying misunderstanding of mathematics. Or physics. Or philosophy of science, I am not sure. If we have free will, and psychologists or sociologists go around studying people making decisions, then that free will would be seen as a stochastic process. We cannot model those decisions deterministically, and can only say that they are choosing from some auxiliary space of possibilities in a way that is not externally controlled. That is exactly what a stochastic process is.

Cashmore and Coyne say that they should not be responsible for their behavior if it is influenced by a stochastic process. They say that as if it were an obvious truth, and give no justification. But free will is a stochastic process. And of course they are responsible. The essence of their argument is to say that life is either random or non-random, and they are not responsible either way. Coyne adamantly contends that this is an irrefutable logical argument that only religious people would reject, and that is another reason for exterminating the evils of religion.

Again, I just deal with the hard science here, and not the theology. Cashmore and Coyne are simply logically incorrect in their dismissal of free will. Their view has no support from science or common sense.

Tuesday, November 18, 2014

The universe is not a quantum computer

Scott Aaronson and his fellow MIT professor are again bragging about the possibility of quantum computers, while dancing around the question of their impossibility.
[Lloyd] Immediately after the Physics Nobel Laureate Richard Feynman presented his vision of the quantum computer in 1982, researchers began to investigate how one could use the superposition of quantum states to do things that are not possible with a conventional computer. ...

[Scott] If we were to discover that there is a deep reason why quantum computers are impossible, then that would ironically be the most exciting result. Because we would then have to rewrite the physics books in order to take account of the new understanding.

[Shmi] Scott, how can a quantum computer used to simulate quantum systems be impossible? After all, it’s just another quantum system, only somewhat more controlled. What might be the distinction between the simulator and the system itself that makes a quantum simulation impossible?

[Scott] You should probably be asking the QC skeptics that question, rather than me. ;-) But I assume they would answer something like: “the quantum systems that can occur in Nature are all extremely special, in ways that probably allow us to simulate them efficiently using a classical computer. So yes, you can presumably build a computer that simulates those systems, but what you had built wouldn’t deserve the title ‘quantum computer.’ What we don’t think you can build is a machine that implements a completely arbitrary sequence of 1- and 2-qubit unitary transformations, in order (for example) to efficiently factor a 10,000-digit number using Shor’s algorithm. And of course, consistent with our prediction, such machines are nowhere found in Nature.”
I am one of those QC skeptics, so I answer here. I have posted 5 reasons I believe QC is impossible, as well as various agreements and disagreements with Scott.

R.P. Feynman proposed a universal quantum simulator in 1982. He pointed out that if you program an ordinary Turing machine (i.e., a classical computer) to simulate the rules of quantum mechanics, then it suffers an exponential slowdown. But if a quantum computer had the mysteries of quantum mechanics baked in, then it could efficiently simulate a quantum system.
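
A minimal sketch of where the exponential slowdown comes from (this is just the standard state-vector bookkeeping, nothing specific to Feynman's proposal; the function name is mine): tracking the full quantum state of n two-level systems takes 2^n complex amplitudes.

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    # The joint state of n two-level quantum systems has 2**n complex amplitudes.
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(n, state_vector_bytes(n) / 1e9, "GB")
# 10 qubits: a few kilobytes; 30 qubits: ~17 GB; 50 qubits: ~18 million GB
```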

I have no argument with any of that, or with the basics of quantum mechanics.

Shmi argues above that any system could be said to simulate itself, and therefore a quantum system is simulating itself as a quantum computer. So quantum computers must exist, and he does not see how anyone can deny them. Scott calls this quantum computing in their sleep. (“Ein Quantencomputer hingegen wuerde solche Aufgaben im Schlaf erledigen …” -- that is, a quantum computer would handle such tasks in its sleep.)

It is a good question, because I am not denying quantum systems. Let me explain.

Consider a very simple quantum system, such as an electron trapped in a small cavity. Assume that the cavity is just sufficient to confine the electron. According to classical mechanics, the electron bounces around the cavity and cannot get out.

Experimentally, the electron eventually leaks out, at a seemingly random time.

Quantum mechanics teaches that the electron is really a wave of some sort, and eventually tunnels thru the potential energy barrier. Using a wave function for the electron, quantum mechanics can predict probabilities for the location of the electron and the time of escape. Physicists are satisfied with these probabilities because they believe in an inherent randomness of nature, so the quantum mechanical formulas are considered to be a complete solution to the problem.
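
For a rough sense of the numbers, here is a minimal sketch of the textbook square-barrier tunneling estimate (the barrier parameters and the helper name are mine, chosen only for illustration, not a model of any particular cavity):

```python
import numpy as np

hbar = 1.054571817e-34    # J*s
m_e = 9.1093837015e-31    # electron mass, kg
eV = 1.602176634e-19      # J per electron-volt

def tunneling_probability(E_eV, V_eV, width_nm):
    # Crude square-barrier estimate: T ~ exp(-2*kappa*L), with
    # kappa = sqrt(2*m*(V - E))/hbar for barrier height V and width L.
    kappa = np.sqrt(2 * m_e * (V_eV - E_eV) * eV) / hbar
    return np.exp(-2 * kappa * width_nm * 1e-9)

# Toy numbers: a 1 eV electron facing a 2 eV barrier that is 1 nm thick
print(tunneling_probability(1.0, 2.0, 1.0))   # roughly 4e-5 per attempt
```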

Some people believe that there is, yet to be discovered, some underlying reality that will explain deterministically how the electron gets out.

This is all reasonable and straightforward, but simulating it has high complexity. Here is why. The simulator can never be sure when the electron leaves the cavity. As soon as the electron leaves, it will likely trigger a chain reaction of other events. So the simulator has to keep track of all possible times for the electron to leave, and then all those chain reactions. An electron leaving at an early time might trigger a very different chain reaction from the electron leaving at a later time.

Each of those chain reactions is another quantum process, and therefore subject to its own exploding number of possibilities. Thus the simulator has to keep track of many possibilities, each of which leads to many other possibilities, and so on. Soon, there are way too many possibilities to keep track of.

You can use an approximation that divides time into discrete steps. But the number of possibilities increases exponentially with the number of steps. This is impractical on a Turing computer. The only hope, it seems, is to use a fancy quantum computer that can keep track of multiple states simultaneously without an exponential complexity explosion.
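
A toy way to see the blow-up in this branching picture (assuming, purely for illustration, that each discrete step splits every branch into just two possibilities, escaped or still confined; the function name is mine):

```python
def branch_count(steps, outcomes_per_step=2):
    # Each time step splits every existing branch into its possible outcomes,
    # and each downstream chain reaction splits further in the same way.
    return outcomes_per_step ** steps

for steps in (10, 40, 80):
    print(steps, branch_count(steps))
# 10 steps: 1024 branches; 80 steps: about 1.2e24, hopeless to track one by one
```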

This is a valid argument that quantum simulations have very high Turing complexity. But does it prove the existence of a quantum computer?

For that, we need to run the argument backwards. We know that there are quantum systems in nature, and simulating them must have high Turing complexity, so those systems could be viewed as computers that can outperform a Turing computer. That is the (flawed) argument.

Let's go back to the electron in a cavity. Is it doing a quantum computation? To view it that way, you have to imagine that it is magically and silently keeping track of an exponentially increasing set of possibilities, and encoding them all in its wave function. Once you convince yourself that this electron is doing a computation that no Turing machine can do, you are then supposed to believe that advanced technology will be able to extract that ability, and apply it to factoring 200-digit numbers.

The flaw in the argument is the assumption that the natural process is the same as how we model it in the simulation. It is not. The electron is just a dumb particle-like wave-like object that bounces around a cavity until it eventually busts out. It is not doing computations on its wave function.

The typical wave function will represent an electron that is 3/4 in the cavity and 1/4 outside the cavity. Nothing like that is ever observed in nature. Every time we measure the electron, we always find it either entirely inside or entirely outside the cavity. The wave function is just our way of representing our uncertainty about whether the electron has escaped yet. In philosophical jargon, the wave function is epistemic, not ontic.
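
To make that concrete, here is a minimal sketch (toy amplitudes, standard Born-rule sampling, variable names mine) of what such a wave function actually predicts for measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy state: amplitude sqrt(3/4) for "inside the cavity", sqrt(1/4) for "outside"
amplitudes = np.array([np.sqrt(0.75), np.sqrt(0.25)])
probabilities = np.abs(amplitudes) ** 2        # Born rule gives [0.75, 0.25]

outcomes = rng.choice(["inside", "outside"], size=10, p=probabilities)
print(outcomes)   # every measurement is all-or-nothing; nothing is ever "3/4 inside"
```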

To answer Scott's question, I doubt that a classical computer can efficiently simulate a quantum system. Too many possibilities proliferate. The wave function may well be the best possible mathematical representation of an electron, but still not a true representation of the physical object. The probabilities help us make sense out of what the electron is doing, but I very much doubt that what the electron is really doing is a computation on probabilities.

I am reminded of artificial intelligence (AI) researchers who try to program a computer to tell the difference between the image of a dog and a cat. They would have surely concluded that it is impossible, were it not for the fact that a 3-year-old child does it easily. The correct conclusion is that the 3yo child is doing something different from what the AI researchers are doing. Likewise the electron is doing something different from what the quantum computing simulators are doing.

Scott wants a proof that quantum computing is impossible, or else he dismisses me as a crackpot who cannot back up what I say. I do not have a proof. I cannot rule out the possibility that an electron really is typically 3/4 in a cavity, even tho no one has ever seen it that way, and that this split in possible electron states can somehow be used to do a computation that is otherwise impossible. It just seems very far-fetched to me.

I do say that the arguments for quantum computing are fallacious. The above argument is essentially to say that simulating a quantum system is too complex so some computational ability can be extracted from that complexity. It is like saying that distinguishing a dog and cat is too complex for our Turing machines, so let's harness the brainpower of 3yo kids to factor 200-digit numbers.

The usual argument for quantum computing is a variation of the above argument. It says that QC is a logical consequence of quantum mechanical rules that have been verified for 90 years. It is true that superpositions have been used to predict probabilities of events for almost 90 years, but QC goes further than that, and uses superpositions to predict superpositions and extract computations.

To me, the observables and events are real, and the probabilities and superpositions are mathematical artifacts. No one ever observes a probability. That is just our way of organizing what we see. And we do not observe a superposition either.

Quantum mechanics was created with a positivist philosophy that views science as being all about explaining observations and measurement. Nowadays it is more common to attach interpretations to wave functions with descriptions like "ontic" and "realist". The positivist view questions whether these have any scientific meaning. There is no experiment that shows that the ontic or realist interpretations have any tangible merit. Just metaphysical fluff, the positivist would say.

That metaphysical fluff is crucial for the belief in QC.

I know that there are 100s of papers claiming to see Schroedinger cat states, where something is alleged to be analogous to a cat being half alive and half dead. And 100s more claiming to use QC or quantum mechanics to do a computation. I admit that every transistorized computer is using quantum mechanics to do a computation.

What we do not see is anyone using cat states to do a super-Turing computation. Scott admits this, as he bashes those who make exaggerated claims. If someone does that, I will be proved wrong. Until then, QC is just a sci-fi fantasy like those Interstellar wormholes.

Update: Here is Scott's 2012 offer, with the argument that simulating reality is an argument for QC:
An even more dramatic way to put the point is this: if quantum computing is really impossible, then we ought to be able to turn that fact on its head. Suppose you believe that nothing done by “realistic” quantum systems (the ones found in Nature) can possibly be used to outperform today’s classical computers. Then by using today’s classical computers, why can’t we easily simulate the quantum systems found in Nature? What is the fast classical algorithm for simulating those quantum systems? How does it work? Like a wily defense attorney, the skeptics don't even try to address such questions; their only interest is in casting doubt on the prosecution's case.

The reason I made my $100 000 bet was to draw attention to the case that quantum computing skeptics have yet to offer. If quantum computing really does turn out to be impossible for some fundamental reason, then once I get over the shock to my personal finances, I'll be absolutely thrilled. Indeed, I'll want to participate myself in one of the greatest revolutions in physics of all time, a revolution that finally overturns almost a century of understanding of quantum mechanics. And whoever initiates that revolution will certainly deserve my money.
I say that QC is impossible for non-revolutionary reasons. The impossibility of QC will not overturn a century of understanding of quantum mechanics. It will only overturn 30 years of misunderstandings of quantum mechanics.

I do not know the fastest classical algorithm for simulating a quantum system. If the simulation has to keep track of many divergent probabilistic branches, then I expect it would be slow. The real reason we cannot efficiently simulate nature is because our mathematical models of electrons do not fully capture their physical reality.

Sunday, November 16, 2014

Astrophysical problems with Interstellar movie

The dubious physics of movies like Gravity has gotten attention, but the cosmologists seem much more fired up about Interstellar. I have not seen it yet.

Lee Billings writes in SciAm:
Christopher Nolan’s new film, Interstellar, is a near-future tale of astronauts departing a dying Earth to travel to Saturn, then through a wormhole to another galaxy, all in search of somewhere else humanity could call home. ...

If you watch movies for what they do to your mind rather than to your heart, though, the film may leave you less than starry-eyed. Despite being heavily promoted as hewing close to reality—Caltech physicist Kip Thorne wrote the first version of the story, and served as a consultant and producer on the film—some of the science in Interstellar is laughably wrong. Less lamented but just as damning, some parts of the story having nothing to do with science lack the internal self-consistency to even be wrong. ...

Much has already been written about the film’s scientific faults, pointing out fundamental problems with the astrophysics, planetary science and orbital mechanics that underpin key plot points. Just as much ink has been spilled (or pixels burned) saying that such details shouldn’t get in the way of a good story, that this movie wasn’t made for the edification of scientists but for the entertainment of the general public.
Based on reviews, it sure seems to me that this movie was made for the edification of scientists. They took every peculiar aspect of relativity, and worked it into the plot as well as they could: time dilation, a black hole, a wormhole, the twin paradox, closed timelike curves, and loss of information in black holes. And probably a couple more that I might not catch even when I see the movie.

Sure, they could have made a more realistic movie, or a more coherent plot. Or just invented their own science fiction, like Star Trek. No, they made a deliberate decision to pack as much relativity in the movie as they could, and still have a marketable movie.

Does it work? I will form an opinion when I watch. I'll try to view it for what it is: a relativity show on a $150M budget.

Update: NY Times Dennis Overbye says that he had to study the physics and watch the movie a second time to appreciate it:
The second time I saw the movie, clued in by Dr. Thorne’s new book, “The Science of Interstellar,” I enjoyed it more, and I could appreciate that a lot of hard-core 20th- and 21st-century physics, especially string theory, was buried in the story — and that there was a decipherable, if abstruse, logic to the ending. But I wonder if a movie that requires a 324-page book to explicate it can be considered a totally successful work of art. ...

At one point, director Nolan asked for a planet on which the dilation of time because of immensely powerful gravity was so severe that one hour there would correspond to seven years on Earth — an Einsteinian effect that plays a big role in the plot. Dr. Thorne’s first reaction was “no way.” But after thinking about it, he says he found a way, which would require the planet to be very close to a massive black hole spinning at nearly its maximum rate. The hole would spin space around with it, like a mixer swirling thick dough.

The planet could get its heat and light from the disk of heated material swirling around the hole, Dr. Thorne calculated, as long as the hole was not feeding too strongly — a rather carefully tuned but not impossible situation. The black hole itself sprang directly from Dr. Thorne’s equations, and its renderings by the movie’s visual effects supervisor, Paul Franklin, showed details that Dr. Thorne plans to write papers about.

Wormholes are another thing that easily pass the beer test. Einstein himself pointed out that such shortcuts through space-time were at least allowed by his equations, but nobody knows how to make one or to keep it from collapsing, or how to install one near Saturn without its gravitational field’s disrupting the entire solar system.

Ditto the fifth dimension, a logical consequence of various brands of string theory. ...

“So why did they take care with relativity but not even bother with planetary science?” he went on. “Arthur C. Clarke is spinning in his stargate!”
So the relativity is bizarre, but it makes more sense than the rest of the movie.
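
Out of curiosity about the one-hour-equals-seven-years figure, here is a minimal sketch using the simple non-rotating Schwarzschild formula (which is exactly why Thorne needed a rapidly spinning hole, since no stable orbit exists this close to a non-spinning one; the variable names are mine):

```python
import math

target = 7 * 365.25 * 24      # hours of far-away time per one hour on the planet, ~61400

def dilation_factor(r_over_rs):
    # Schwarzschild time dilation: dt_far / dt_local = 1 / sqrt(1 - rs/r)
    return 1.0 / math.sqrt(1.0 - 1.0 / r_over_rs)

# Radius (in units of the Schwarzschild radius rs) needed for that factor
r_over_rs = 1.0 / (1.0 - 1.0 / target ** 2)
print(r_over_rs, dilation_factor(r_over_rs))   # r barely above rs, factor ~61400
```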

Update: Kip Thorne has more movie explanations.

Friday, November 14, 2014

Hawking scientific contributions

The new Hawking movie got two NY Times reviews, one complaining about the facts, and one with other complaints:
But it is in showing the application of that intelligence that “The Theory of Everything” tumbles into a black hole of biopic banality. My colleague Dennis Overbye recently enumerated some of the film’s historical and scientific lapses in The New York Times, but those are not really my concern. Taking liberties with facts is a prerogative of storytelling. ...

The substance of Stephen’s work is shoehorned into a few compressed, overly dramatic scenes. Equations are scribbled on chalkboards. Fellow physicists proclaim, “It’s brilliant!” or “It’s rubbish!” One of Stephen’s friends diagrams a hypothesis in beer foam on a pub table. Several earnest, potted conversations take place about whether Stephen believes in God, as his wife does. But nothing establishes the stakes in these arguments or dramatizes the drive for knowledge.

Movies often have a hard time with science, as they do with art.
There are lots of TV shows that explain science, on the Discovery, Science, PBS, and NatGeo channels. There is no good reason for the movies to have a hard time with science.

The TV sitcom The Big Bang Theory is able to get its scientific facts correct, even tho it has much greater production pressures. Obviously they make it a priority to check the scripts with experts.

The 1998 movie Pi does not even bother to get the digits of pi correct. It gets about the first ten, and then gives random digits. What could be easier than getting pi correct? Obviously the makers had no interest in accuracy.

Shaunmaguire writes:
In anticipation of The Theory of Everything which comes out today, ... I wanted to provide an overview of Stephen Hawking’s pathbreaking research. ...

Singularity theorems applied to cosmology: Hawking’s first major results, starting with his thesis in 1965, was proving that singularities on the cosmological scale — such as the big bang — were indeed generic phenomena and not just mathematical artifacts. This work was published immediately after, and it built upon, a seminal paper by Penrose. ...

Singularity theorems applied to black holes: ... In the very late ‘60s and early ’70s, Hawking, Penrose, Carter and others convincingly argued that black holes should exist. ...

No hair theorem: ... In the early ’70s, Hawking, Carter, Israel and Robinson proved a very deep and surprising conjecture of John Wheeler–that black holes have no hair! ...

Black holes evaporate (Hawking Radiation): ...

Black hole information paradox: ... Most famously, the information paradox considered what would happen if an “encyclopedia” were thrown into the black hole. GR predicts that after the black hole has fully evaporated, such that only empty space is left behind, that the “information” contained within this encyclopedia would be destroyed. ... This prediction unacceptably violates the assumptions of quantum mechanics, which predict that the information contained within the encyclopedia will never be destroyed. ...
It is nice to see a movie about a physicist. I have not seen it yet. It probably emphasizes his disease and love life, but butchers his physics.

Let's not overstate this. His work on relativity singularities was comparable to Penrose's, and was instrumental in convincing people of the big bang and black holes.

But do the singularities really exist? If relativity is correct, the singularities would be unobservable. The laws of physics as we know them would break down in the vicinity of the singularity, so we cannot say with any confidence what would be happening. At the big bang, inflation is widely believed, but no one knows how it would work.

So while the Penrose-Hawking theorems make useful arguments, they do not prove singularities in the real universe.

By now, Hawking is probably better known for the black hole evaporation and information paradox. Again, this has never been observed, and it is harder to separate his sensible ideas from his crazier ones.

It seems crazy to me to say that the information in an encyclopedia can never be destroyed. Just burn it, then the info is gone for good. Quantum mechanics does not teach that you can get the info back. The many-worlds interpretation does say that the info escapes to an inaccessible parallel universe. There is no way to have evidence for that, and it just seems like a childish belief to me.

Hawking has spent the last 30 years on this sort of nonsense. He is frequently mentioned as a Nobel Prize candidate, but I doubt it. The Nobel committee only likes theories if they are confirmed by experiment.

His personal story is remarkable. He was diagnosed with Lou Gehrig's disease and given 2 years to live, but it appears that he was misdiagnosed. He probably has some other degenerative neural disease.

Tuesday, November 11, 2014

Evolutionist complains Christians fund free will

Leftist-atheist-evolutionist Jerry Coyne is ranting again against free will, and against the evils of religion, which he blames for all of these supposedly wrong ideas.

In his terminology, "libertarian free will" refers to the religious idea that we are able to act on our choices, and "incompatibilism" refers to his belief that scientific atheism requires believing that everything we do has been determined since the big bang. In the middle is "compatibilism", which says that determinism is compatible with our ordinary belief in free will, and hence you can believe in free will whether the laws of nature are deterministic or not.

I stick to the hard science, and it has not resolved either determinism or free will, and probably never will. They are metaphysical beliefs. So I am a compatibilist, as I do not believe that they are inherently contradictory as philosophers use these terms.

In Coyne's analysis, if someone fails to reject free will, then some religious conspiracy must have bribed him into presenting that view:
The question is actually “are we free?” and, in the main, the interlocutors answer “yes.” After all, Templeton wants science to show that we still have free will, something that Dan Dennett mentioned the other day when reviewing Alfred Mele’s new book that defends free will (Dennett likes the book but suggested that Mele’s objectivity might have been compromised because his views are congenial to the source of his funding). Mele is in charge of two multimillion-dollar Templeton grants (here and here; see his response to Dennett here). One of them is the free will project touted in the ad.
My guess is that Coyne is getting his funding from leftist atheist evolutionists, so I guess that explains his views. Or maybe they were pre-ordained since the big bang.

What especially bugs him is the idea that people will live more Christian lives if they know that they have the free will to make their own decisions. He has to spoil it by telling everyone that they are mindless soulless automatons descended from apes.
When the wife of the Bishop of Worcester was famously told about Darwin's theory of evolution, she reportedly remarked: "Descended from the apes! My dear, let us hope that it is not true, but if it is, let us pray that it will not become generally known."
The research does seem to show that people take more moral responsibility for their decisions if they believe that they had the free will to make those decisions.
Besides, what makes this whole argument from consequences bizarre is that research shows that most people’s conception of free will is not compatibilist, but libertarian. Yes, there’s one study showing a compatibilist belief in general, but most studies show the opposite. In my own discussions with scientists, many of them, while not explicit dualists, still believe in a libertarian free will in which there is an “I” who makes decisions at any given time, and could have decided otherwise. (I was surprised to learn that physicist Steve Weinberg, an atheist, believes this.)
Back to physics here. It should not be surprising that a big shot atheist physicist believes in free will. I do not think that anyone's Nobel Prize acceptance speech said that he did nothing to deserve it except to carry out the commands that he was programmed to do.

Seriously, the laws of physics are not known to be deterministic. Einstein famously complained about this in the 1930s, and the consensus was that Einstein was wrong.

The loudest argument for determinism from today's physicists comes from the many-worlds interpretation of quantum mechanics. They say that what appears to be chance is really the universe splitting into parallel universes. When all universes are taken into account, they are all determined from the big bang. Our free will is just an illusion arising from the fact that we do not care about the other universes that have no interaction with our own.

Coyne adds:
I disagree that determinism is a useless concept, and it can be tested scientifically, as in the tests of Bell’s inequality (it doesn’t hold on the quantum level). ... And there is no scientific evidence that quantum mechanics plays a role in human behavior on a macro level.
He is badly confused. Tests of Bell's inequality do not show anything about determinism. They only show that certain local hidden-variable (deterministic, non-quantum) models are wrong. But the experiments favor quantum mechanics, so all the non-quantum models are wrong, as far as we know.
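
For what the experiments actually test, here is a minimal sketch of the CHSH arithmetic (my own, assuming the textbook singlet correlation E(a,b) = -cos(a-b)): any local hidden-variable model must give |S| <= 2, quantum mechanics predicts 2*sqrt(2), about 2.83, and the experiments come down on the quantum side.

    # Toy CHSH comparison: local hidden-variable bound vs. quantum mechanics.
    # Assumes the standard singlet-state correlation E(a, b) = -cos(a - b).
    import math

    def E(a, b):
        # Correlation for spin measurements along angles a and b (radians).
        return -math.cos(a - b)

    # The usual optimal settings for Alice (a, a2) and Bob (b, b2).
    a, a2 = 0.0, math.pi / 2
    b, b2 = math.pi / 4, 3 * math.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(f"Quantum CHSH value |S| = {abs(S):.3f}")   # about 2.828 = 2*sqrt(2)
    print("Local hidden-variable bound: |S| <= 2")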

Quantum mechanics plays a role in all chemical reactions. Humans use chemical reactions in everything they do. If I decide to eat oatmeal for breakfast, we do not have a quantum mechanical description for how that decision works, but we don't have any other description either.

A difference between right-wingers and left-wingers is that left-wingers are frequently preoccupied with the motives of others. In this case, Coyne is upset that someone might defend free will out of Christian motives, or because of funding from a Christian organization. You see this also in the recent American election, where many Democrats campaigned on motives while Republicans campaigned on results. I guess you see the world a lot differently if you do not believe in free will.

Update: Coyne complains again about the same Dennett article.

Monday, November 10, 2014

Audio explanation of quantum computers

SciAm is hosting the Tech Talker on Detangling Quantum Computers:
For example, it is really hard for normal computers to find factors of prime numbers. [at 1:30]
It is just as hard for quantum computers to find factors of prime numbers. Primes do not have any nontrivial factors. Okay, I am nitpicking. I have heard this error many times.
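
What the podcast presumably means is factoring a product of two large primes, which is the problem behind RSA and the one Shor's algorithm is supposed to attack. Here is a toy sketch of the distinction (my own, with small illustrative numbers):

    # Trial-division factoring, just to illustrate the nitpick.
    def nontrivial_factors(n):
        # Return the prime factorization of n, or [] if n is prime.
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if factors and n > 1:
            factors.append(n)
        return factors

    print(nontrivial_factors(104729))           # 104729 is prime: nothing to find
    print(nontrivial_factors(104729 * 104723))  # a semiprime: [104723, 104729]
    # The hard case is a semiprime whose prime factors have hundreds of digits;
    # trial division is hopeless there, which is what RSA relies on.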

From 3:00 to 3:30, it explains superposition as one observer measuring an electron as spin up at the same time as another measures it as spin down.

No, that is impossible.

I am not sure it is even useful to talk about an electron being in a superposition of spin states. It always has a spin in some direction. Only if you force it into another direction does it look like a possibility of up or down in that direction.
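
The standard textbook arithmetic backs this up. In a minimal sketch (my own, with an arbitrary tilt angle), a spin-1/2 particle polarized along an axis tilted by theta from z gives "up along z" with probability cos^2(theta/2); the so-called superposition is just what a definite spin direction looks like in somebody else's measurement basis.

    # Spin-1/2 pointing along an axis tilted by theta from z, then measured along z.
    import numpy as np

    theta = np.pi / 3                        # tilt of the spin axis (illustrative)
    psi = np.array([np.cos(theta / 2),       # amplitude for "up along z"
                    np.sin(theta / 2)])      # amplitude for "down along z"

    print(f"P(up)   = {abs(psi[0])**2:.3f}")  # cos^2(theta/2) = 0.75
    print(f"P(down) = {abs(psi[1])**2:.3f}")  # sin^2(theta/2) = 0.25
    # The particle has a definite spin direction; the up/down split only appears
    # because the measurement is forced along a different axis.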

And there is no mention of the fact that quantum computers are completely hypothetical fantasies, like time travel machines.

On a recent Slate podcast, I heard:
The word quantum is Latin for: (a) uncertainty (b) parallel (c) natural (d) what amount
The correct answer is (d). Other plausible wrong answers would be "small" or "discrete".

Sunday, November 9, 2014

Philosopher a fan of crackpot evolutionist

Biologist-philosopher Massimo Pigliucci did not like me trashing modern philosophers as having no worthwhile influence on physics, and wrote:
You have obviously read no philosophy of science whatsoever.
He did not like my reaction to this comment:
As for philosophical accomplishments, I can name two:
1) The Scientific Method — Science was not a separate field until recently; not until the 19th or 20th century, I believe. Galileo, Newton, Copernicus, Kepler, along with all the other ‘scientific minds’ of the Scientific Revolution, were natural philosophers. Science is the child of philosophy.
2) The Sex/Gender Distinction — Unless I am mistaken, it is a firmly held belief in various fields (feminism, anthropology, psychology(?), etc.), and a common belief among lay men, that there is a distinction between one’s reproductive role (sex) and the cultural, behavioral norms they must follow qua sexed person (gender). This distinction was introduced by Simone De Beauvoir, an Existentialist philosopher. This seems like a big accomplishment for philosophy.
I did suspect that this comment was a joke. Sure, the scientific method was big, but that goes back millennia. But the sex/gender distinction? Really? Is that the best philosophy has done in 300 years? The distinction is of interest in transgender and gender-studies circles, but not to much of anyone else.

Pigliucci is a big fan of Stephen Jay Gould, and wrote a 1998 5-star review of Gould's most famous book, The Mismeasure of Man:
Steven J. Gould is most famous among the general public for his collections of essays from his long Natural History series, "This View of Life". But the best of Gould's writing is perhaps to be found in his single-theme books. And The Mismeasure of Man is arguably the finest among them. The volume is about the long history of the search for scientific justification of racism, and the many faux pas that science has committed when it comes to the study of human intelligence. ...

He illustrates this with an array of definitely intelligent people whose brain sizes covered almost the whole gamut displayed by non-pathological individuals. However, this is indeed one of the troublesome aspects of this book and, I dare say, of Gould's writing in general. He dismisses contrary evidence or arguments so fast that one gets the impression of seeing a magician performing a trick. One cannot avoid the feeling of having been duped by the quickness of the magician's movement, instead of having observed a genuine phenomenon. In this particular instance, I can vouch for Gould as a biologist, but I'm not so sure that the general public is willing to trust him on his word. After having dismissed both craniometry and the aberrant work of Cesare Lombroso on the anthropological stigmata of criminals, Gould moves on to his main target: IQ and intelligence testing. ...

That is, the scores on each test are correlated to each other, because they all reflect one underlying quantity, which Spearman named "g", or general intelligence. Spearman therefore provided one of the two pillars of the eugenic movement: there seemed indeed to be one way to rank individuals by their intelligence with the use of one number: this was the score on the g-factor, instead of the score on any of the available IQ tests. Burt's major achievement was a supposed confirmation of the second fundamental piece of the eugenic puzzle: his studies of genetically identical twins suggested a high heritability (incorrectly read as a high level of genetic determination) of intelligence. So, not only do individuals differ in intelligence, but this is easy to measure and genetically determined. Environment, and with it education and social welfare, cannot alter the innate difference among individuals, genders, and races. QED. Well, not really.
No, this book is crackpot stuff. It has no scientific merit, and is wrong on almost every level. You can read the Wikipedia article for details.

Yes, there is overwhelming evidence that there are innate differences among individuals, and a lot of work on how much can be attributed to genetics and how much to the environment. Very few traits are entirely genetic or environmental. This stuff is not particularly controversial, except for Gould fans.
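
To give a flavor of how that attribution is done, the oldest back-of-the-envelope tool is Falconer's formula from twin studies, h^2 = 2(r_MZ - r_DZ). Here is a sketch with purely illustrative correlations (the numbers are mine, chosen for illustration, not taken from Gould or Pigliucci):

    # Falconer's formula: a crude partition of trait variance from twin correlations.
    r_mz = 0.86   # illustrative correlation for identical (MZ) twins
    r_dz = 0.60   # illustrative correlation for fraternal (DZ) twins

    h2 = 2 * (r_mz - r_dz)   # heritability (genetic share of variance)
    c2 = r_mz - h2           # shared-environment share
    e2 = 1 - r_mz            # unshared environment plus measurement error

    print(f"genetic h^2 = {h2:.2f}, shared env c^2 = {c2:.2f}, unshared e^2 = {e2:.2f}")
    # With these inputs: roughly half genetic, a third shared environment, the rest unshared.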

Pigliucci posts a lot of good essays and interviews, and of course he is on the warpath against pseudoscience and creationism. But he is just another philosopher of science with a warped idea of science.

Friday, November 7, 2014

Against quantum realism

Lubos Motl writes:
All the major "realist" attempts to reform the foundations of quantum mechanics – de Broglie-Bohmian mechanics, Ghirardi-Rimini-Weber-like collapse theories, and Everett-style many worlds – are known to suffer from serious diseases. To a large extent, "realism" itself is the problem.

I am still willing to admit that there is no truly "rock-solid proof" of the statement that "there cannot be any realist reinterpretation or 'improvement' of quantum mechanics".
The quest for "realism" in physics is an Einsteinian disease, and led me to write my book on the subject.

We have 80 years of theoretical and experimental evidence, all pointing to the supposedly realist models of quantum mechanics being wrong. Isn't that enough?

A Nature article reports:
The bizarre behaviour of the quantum world — with objects existing in two places simultaneously and light behaving as either waves or particles — could result from interactions between many 'parallel' everyday worlds, a new theory suggests. ...

Charles Sebens, a philosopher of physics at the University of Michigan in Ann Arbor, says he is excited about the new approach. He has independently developed similar ideas, to which he has given the paradoxical name of Newtonian quantum mechanics. ...

The next step for the team will be to come up with ways in which their idea can be tested. If the many-interacting-worlds approach is true, it will probably predict small differences from quantum theory, Wiseman says.
This is the game plan. Refuse to accept 1930 physics. Pretend that the world is more like your Newtonian intuitions. Except add some bizarre parallel universes or other nonsense that would be stranger and more non-intuitive than quantum mechanics. Find some way to test it in order to disprove quantum mechanics. Then discover that quantum mechanics still works, just like in 1930. This story has played out a thousand times. Reporting this is like reporting someone's idea for a perpetual motion machine.

It is bizarre that these models are even called realist. Bohmian mechanics supposedly tells you where the electron is, but then requires you to believe that the electron has a sort of guardian angel that has a magical and nonlocal influence on it. Many-worlds requires vast parallel universes that can never be observed.

Webster's defines realism:
1: concern for fact or reality and rejection of the impractical and visionary
2a : a doctrine that universals exist outside the mind; specifically : the conception that an abstract term names an independent and unitary reality
b : a theory that objects of sense perception or cognition exist independently of the mind — compare nominalism
3: the theory or practice of fidelity in art and literature to nature or to real life and to accurate representation without idealization
I guess the idea is that the Copenhagen interpretation of quantum mechanics makes predictions based on our knowledge of the system, and is therefore in the mind. A theory that posits unobservable ghosts would be outside the mind, and therefore realist.

This is backwards. Realism, to me, means accepting what is demonstrably observed, and not living in some fantasy world of unobservables.

So I say that quantum mechanics is realist because it is based on observables and verifiable predictions. Everyone else says that quantum mechanics is an inadequate theory because it is not realist, and it needs to be made realist by hypothesizing unobservable and magical hidden variables or pilot waves or parallel universes.

I can't blame de Broglie for having some of these ideas in 1923. They were intensely debated in the late 1920s, and thought to be settled then. But it appears that no amount of evidence will convince the Einsteinian physicists who insist that there is a missing ingredient to quantum mechanics that is going to make it realist.

Sean M. Carroll is one who believes that quantum mechanics is missing realism:
But despite its triumphs, quantum mechanics remains somewhat mysterious. Physicists are completely confident in how they use quantum mechanics -- they can build theories, make predictions, test against experiments, and there is never any ambiguity along the way. And nevertheless, we're not completely sure we know what quantum mechanics really is. There is a respectable field of intellectual endeavor, occupying the time of a substantial number of talented scientists and philosophers, that goes under the name of "interpretations of quantum mechanics." A century ago, there was no such field as "interpretations of classical mechanics" -- classical mechanics is perfectly straightforward to interpret. We're still not sure what is the best way to think and talk about quantum mechanics.

This interpretational anxiety stems from the single basic difference between quantum mechanics and classical mechanics, which is both simple and world-shattering in its implications:
According to quantum mechanics, what we can observe about the world is only a tiny subset of what actually exists.
Attempts at explaining this principle often water it down beyond recognition. "It's like that friend of yours who has such a nice smile, except when you try to take his picture it always disappears." Quantum mechanics is a lot more profound than that. ...

Most modern physicists deal with the problems of interpreting quantum mechanics through the age-old strategy of "denial." ...

At some point, however, we need to face the music.
For him, facing the music means believing in unobservable parallel universes.

The idea that we can only observe a tiny subset of reality is just a mystic belief. Or religious. Jesus said, "My kingdom is not of this world." John 18:36. Maybe so, but physics can only deal with what is of this world.

Thursday, November 6, 2014

DNS troubles

This blog was down for a couple of days, because of technical server issues. It should be back up when you read this.

Monday, November 3, 2014

Paradigm shift book is still highly cited

Nature magazine has published the Google Scholar top-100 cited list, and near the top is the 1962 philosophy book The Structure of Scientific Revolutions. It popularized the paradigm shift.

This book dominated modern philosophy of science, but it is garbage. Its main arguments are to deny that science makes progress toward truth, deny that new scientific theories are adopted from quantitative evidence, and deny that scientists are rational.

Philosophers and other anti-science academics love this book because it undermines the validity of science. Science crackpots love this book too, because it presents science as driven by fads. Instead of arguing that their theories are superior in some way, they can just gripe that they are out of fashion.

MIT debate on multiverse

A couple of MIT physics professors debated whether we live in the multiverse:
Arguing in favor was Max Tegmark, a cosmologist at MIT. His MIT colleague, Frank Wilczek, (winner of the Nobel for his work on the strong force) took the opposing position. ...

And then there's the many-worlds interpretation, in which the wavefunction keeps working, but reality breaks, or at least fragments a bit. In this view, there are particles in every location the wavefunction predicts we'll find them. It's just that those locations end up in different universes once the measurement happens. So, as soon as someone makes a measurement, they split the Universe into multiple universes, each with the particle in a different location specified by the wavefunction. ...

Wilczek was a bit more detailed in his criticisms. He's perfectly happy to accept the wavefunction's existence, but feels that "many worlds is metaphysical baggage added on." His main issue is that all the worlds but one aren't accessible to our experience, and therefore can't be explored scientifically. "I am very worried about ascribing full credence to something other than reality," Wilczek said.

Tegmark for his part, thought that was a very narrow view. "This has less to do with QM than the way we perceive reality," he said, later adding, "it's partly 'what's real is what we can observe,' which seems like an ostrich with its head in the sand." But scientifically, Tegmark said that "If Schroedinger's equation applies to every system, no matter how large, we should have many worlds."...

If inflation is right—and most physicists think it is—then the Universe we see is only a small fraction of the whole. And, if you could somehow go past the observable universe, you'd come to a region where inflation is an ongoing process, rapidly expanding space and ultimately creating additional universes.

"If space is infinite, and the odds of being you is not zero—which it's not, since you exist—then you must exist in other places," Tegmark said. "Maybe you'll first find a Shmioneer Works, but if you keep looking, you'll find another [you]." And this weirdness is a necessary outcome of a theory that most physicists agree on. So, in Tegmark's view, you can't dismiss the many worlds interpretation just because it's weird.
The reason for rejecting the multiverse is not that it is weird. It is no weirder than a lot of religious beliefs that people have.

The reasons for rejecting the multiverse are that (1) there is no evidence for it, and no prospects for getting any; (2) it does not solve any theoretical problems; and (3) it is methodologically incoherent, as it destroys the meaning of the Born rule.
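
On point (3), here is a toy illustration of the Born-rule problem (my own sketch, not anything from the debate). The Born rule says probabilities are squared amplitudes, and that is what experiments confirm; naive branch counting in many-worlds makes every split look like 50/50, and the bare formalism does not say why observed frequencies should track the amplitudes instead.

    # Born rule vs. naive branch counting for a lopsided qubit.
    import numpy as np

    amps = np.array([np.sqrt(0.9), np.sqrt(0.1)])   # state 0.949|0> + 0.316|1>
    born = np.abs(amps) ** 2                         # Born-rule probabilities

    rng = np.random.default_rng(0)
    freq = np.bincount(rng.choice(2, size=100_000, p=born), minlength=2) / 100_000

    print("Born rule:      ", born)         # [0.9, 0.1], what experiments see
    print("Observed freq:  ", freq)         # close to [0.9, 0.1]
    print("Branch counting:", [0.5, 0.5])   # one branch each, no trace of the 0.9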

If inflation is true, then it is reasonable to say that part of the universe is outside our observable horizon, and hence outside our possible knowledge. The scientific approach would be to acknowledge that there might be something out there, but leave it at that. To say that there are infinitely many copies of ourselves out there is strange and unscientific.

Tegmark is trying to trick you into one of the various paradoxes about infinity, but he has admitted that one of these paradoxes has driven him to conclude that no infinities occur in nature. So I don't know why he keeps bringing up infinities.

I am glad to see physicists debating the many-worlds interpretation (MWI), because it is a silly idea that cannot hold up under debate. It has become somewhat trendy for physicists to announce that they believe in it, but it is stupid on every level -- philosophically, mathematically, and physically.

Saturday, November 1, 2014

Black holes in the movies

Dennis Overbye gripes about a new movie in the NY Times:
But the movie doesn’t deserve any prizes for its drive-by muddling of Dr. Hawking’s scientific work, leaving viewers in the dark about exactly why he is so famous. Instead of showing how he undermined traditional notions of space and time, it panders to religious sensibilities about what his work does or does not say about the existence of God, which in fact is very little. ...

But when it came to science, I couldn’t help gnashing my teeth after all. Forget for a moment that early in the story the characters are sitting in a seminar in London talking about black holes, the bottomless gravitational abysses from which not even light can escape, years before that term had been coined. Sadly, a few anachronisms are probably inevitable in a popular account of such an arcane field as astrophysics.

It gets worse, though. Skip a few scenes and years ahead. Dr. Hawking, getting ready for bed, is staring at glowing coals in the fireplace and has a vision of black holes fizzing and leaking heat.

The next thing we know he is telling an audience in an Oxford lecture hall that black holes, contrary to legend and previous theory, are not forever, but will leak particles, shrink and eventually explode, before a crank moderator declares the session over, calling the notion “rubbish.”

The prediction of Hawking radiation, as it is called, is his greatest achievement, the one he is most likely to get a Nobel Prize for. But it didn’t happen with a moment of inspiration staring at a fireplace. And in telling the story this way, the producers have cheated themselves out of what was arguably the most dramatic moment in his scientific career. ...

His discovery has turned out to be a big, big deal, because it implies, among other things, that three-dimensional space is an illusion. Do we live in a hologram, like the picture on a credit card? Or the Matrix?

None of this, alas, is in the movie. That is more than bad history.
Bad history, but I am sure that it would be bad physics either way. No one has shown that 3D space is an illusion. Hawking radiation from black holes has not been observed, and probably won't be. And he won't get that Nobel Prize. (He might possibly get some credit for some analogous radiation that is not from black holes, but I doubt it.) No, we do not live in a hologram just because some funny stuff happens at the event horizon of a black hole.
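
The arithmetic behind "probably won't be" is simple. A quick back-of-the-envelope sketch (my own), using the standard formula T = hbar*c^3 / (8*pi*G*M*k_B):

    # Hawking temperature of a solar-mass black hole.
    import math

    hbar  = 1.055e-34   # J*s
    c     = 2.998e8     # m/s
    G     = 6.674e-11   # m^3 / (kg * s^2)
    k_B   = 1.381e-23   # J/K
    M_sun = 1.989e30    # kg

    T = hbar * c**3 / (8 * math.pi * G * M_sun * k_B)
    print(f"T_Hawking ~ {T:.1e} K")   # about 6e-8 K, versus the 2.7 K microwave background
    # A stellar-mass black hole absorbs far more from the CMB than it radiates,
    # so the emission is hopelessly small for any real astrophysical black hole.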

Meanwhile, Sean M. Carroll raves about a new movie:
I haven’t seen it yet myself, nor do I know any secret scoop, but there’s good reason to believe that this film will have some of the most realistic physics of any recent blockbuster we’ve seen. ...

Kip recognized that a wormhole was what was called for, but also realized that any form of faster-than-light travel had the possibility of leading to travel backwards in time. Thus was the entire field of wormhole time travel born. ...

I know that Kip has been very closely involved with the script as the film has developed, and he's done his darnedest to make sure the science is right, or at least plausible. (We don't actually know whether wormholes are allowed by the laws of physics, but we don't know that they're not allowed.) ...

And that’s not all! Kip has a book coming out on the science behind the movie, which I’m sure will be fantastic. And there is also a documentary on “The Science of Interstellar” that will be shown on TV, in which I play a tiny part.
I should reserve judgment until I see the movie. It will probably be entertaining. But if the plot uses wormholes for Earthlings to escape global warming and colonize another planet, then I would not call it realistic physics.

Update: Interstellar has some great reviews, but the Bad Astronomer hated the silly and nonsensical physics plot (relativity, black hole, wormhole).