Friday, April 18, 2014

Counterfactuals: Soft Science

Research papers in the soft sciences usually use a type of counterfactual called a null hypothesis. The object of such a paper is often to disprove the null hypothesis by collecting data and applying statistical analysis.

A pharmaceutical company might do a controlled clinical study of a new drug. Half of the test subjects get the new drug, and half get a placebo. The null hypothesis is that the drug and the placebo are equally effective. The company hopes to prove that the null hypothesis is false, and that the drug is better than the placebo. The proof consists of a trial with more subjects doing better with the drug, and a statistical argument that the difference was unlikely to be pure luck.

To see how a statistical disproof of a null hypothesis would work, consider a trial consisting of 100 coin tosses. The null hypothesis is that heads and tails are equally likely. That means that we would get about 50 heads in a trial, on average, with the typical variation measured by a number called "sigma" (the standard deviation). The next step in the analysis is to figure out what sigma is. In this case, for a fair coin, sigma is 5 (the square root of 100 × 0.5 × 0.5). That means that the number of heads in a typical trial run will differ from 50 by about 5. Two thirds of the trials will be within one sigma, or between 45 and 55. About 95% will be within two sigmas, or between 40 and 60. About 99.7% will be within three sigmas, or between 35 and 65.

Thus you can prove that a coin is biased by tossing it 100 times. If you get more than 65 heads, then either you were very unlucky or the chance of heads was more than 50%. A company can show that its drug is effective by giving it to 100 people, and showing that it is better than the placebo 65 times. Then the company can publish a study saying that the probability of getting data this extreme under the (counterfactual) null hypothesis is 0.01 or less. That probability is called the p-value. A p-value of 0.01 means that the company can claim that the drug is effective, with 99% confidence.
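
The three-sigma cutoff can be checked directly. Here is a small sketch (a hypothetical calculation, not from any published study) that computes the exact one-sided probability of getting at least 65 heads from a fair coin:

```python
from math import comb

def p_value_heads(n, k):
    """One-sided p-value: probability of getting at least k heads
    in n tosses, computed under the null hypothesis of a fair coin."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 65 heads is three sigmas above the expected 50, so the p-value
# comes out well under 0.01.
p = p_value_heads(100, 65)
print(f"p = {p:.5f}")
```

The exact binomial sum makes the point without any normal approximation: a fair coin almost never produces 65 or more heads in 100 tosses.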

The p-value is the leading statistic for getting papers published and drugs approved, but it does not really confirm a hypothesis. It just shows an inconsistency between a dataset and a counterfactual hypothesis.

As a practical matter, the p-value is just a statistic that allows journal editors an easier decision on whether to publish a paper. A p-value under 0.05 is considered statistically significant, and not otherwise. It does not mean that the paper's conclusions are probably true.

A Nature mag editor writes:
scientific experiments don't end with a holy grail so much as an estimate of probability. For example, one might be able to accord a value to one's conclusion not of "yes" or "no" but "P<0.05", which means that the result has a less than one in 20 chance of being a fluke. That doesn't mean it's "right".

One thing that never gets emphasised enough in science, or in schools, or anywhere else, is that no matter how fancy-schmancy your statistical technique, the output is always a probability level (a P-value), the "significance" of which is left for you to judge – based on nothing more concrete or substantive than a feeling, based on the imponderables of personal or shared experience. Statistics, and therefore science, can only advise on probability – they cannot determine The Truth. And Truth, with a capital T, is forever just beyond one's grasp.
This explanation is essentially correct, but some scientists who should know better argue that it is wrong and anti-science. A fluke is an accidental (and unlikely) outcome under the (counterfactual) null hypothesis. The scientific paper says that either the experiment was a fluke or the null hypothesis was wrong. The frequentist philosophy that underlies the computation does not allow giving a probability on a hypothesis. So the reader is left to deduce that the null hypothesis was wrong, assuming the experiment was not a fluke.

The core of the confusion is over the counterfactual. Some people would rather ignore the counterfactual, and instead think about a subjective probability for accepting a given hypothesis. Those people are called Bayesian, and they argue that their methods are better because they more completely use the available info. But most science papers use the logic of p-values to reject counterfactuals because assuming the counterfactual requires you to believe that the experiment was a fluke.

Hypotheses are often formulated by combing datasets and looking for correlations. For example, if a medical database shows that some of the same people suffer from obesity and heart disease, one might hypothesize that obesity causes heart disease. Or maybe that heart disease causes obesity. Or that overeating causes both obesity and heart disease, but they otherwise don't have much to do with each other.

The major caution to this approach is that correlation does not imply causation. A correlation can tell you that two measures are related, but correlation is symmetric and cannot say which causes which. Establishing causality requires some counterfactual analysis. The simplest way in a drug study is to randomly give some patients a placebo instead of the drug in question. That way, the intended treatment can be compared to counterfactuals.
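
A toy simulation illustrates the point. In this hypothetical model (invented purely for illustration), overeating drives both obesity and heart disease, yet neither causes the other; the two still come out strongly correlated:

```python
import random

random.seed(0)

n = 10_000
overeating = [random.random() for _ in range(n)]
# Obesity and heart disease each depend on overeating plus independent
# noise; there is no direct causal link between them in this model.
obesity = [x + 0.3 * random.random() for x in overeating]
disease = [x + 0.3 * random.random() for x in overeating]

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(corr(obesity, disease))  # strongly positive, despite no causal link
```

The correlation is symmetric in its two arguments, which is exactly why it cannot distinguish "obesity causes heart disease" from the reverse, or from a common cause.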

A counterfactual theory of causation has been worked out by Judea Pearl and others. His 2000 book begins:
Neither logic, nor any branch of mathematics had developed adequate tools for managing problems, such as the smallpox inoculations, involving cause-effect relationships. Most of my colleagues even considered causal vocabulary to be dangerous, avoidable, ill-defined, and nonscientific. "Causality is endless controversy," one of them warned. The accepted style in scientific papers was to write "A implies B" even if one really meant "A causes B," or to state "A is related to B" if one was thinking "A affects B."
His theory is not particularly complex, and could have been worked out a century earlier. Apparently there was resistance to analyzing counterfactuals. Even the great philosopher Bertrand Russell hated causal analysis.

Many theories seem like plausible explanations for observations, but ultimately fail because they offer no counterfactual analysis. For example, the famous theories of Sigmund Freud tell us how to interpret dreams, but do not tell us how to recognize a false interpretation. A theory is not worth much without counterfactuals.

Tuesday, April 15, 2014

How often are scientific theories overturned?

The Dilbert cartoonist posts whimsical ideas all the time, but only gets hate mail if he says something skeptical about biological evolution or global warming. Those are sacred cows of today's intellectual left.

He now writes:
Let's get this out of the way first...

In the realm of science, a theory is an idea that is so strongly supported by data and prediction that it might as well be called a fact. But in common conversation among non-scientists, "theory" means almost the opposite. To the non-scientist, calling something a theory means you don't have enough data to confirm it.

I'll be talking about the scientific definition of a theory in this post. And I have one question that I have seen asked many times (unsuccessfully) on the Internet: How often are scientific theories overturned in favor of new and better theories? ...

Note to the Bearded Taint's Worshippers: Evolution is a scientific fact. Climate change is a scientific fact. When you quote me out of context - and you will - this is the paragraph you want to leave out to justify your confused outrage.
He has taken his definition of theory from his evolutionist critics, like PZ Myers, but I do not see the term used that way. Physicists use terms like "string theory" even tho it is not supported by any facts or data at all.

I also don't see non-scientists using the word to mean the opposite. Not often, anyway. The first I saw was when Sean M. Carroll was on PBS TV explaining BICEP2 and cosmic inflation. As you can see in the video, the dopey PBS news host asks:
Those predictions have always been theories. How do you then go about proving a theory not to be a theory, and is that what we have actually done here? Has it been proven? [at 2:50]
With exceptions like this, my experience is that scientists and non-scientists use the term "theory" in the same way. Eg, global warming is a theory whether you accept the IPCC report or not.

A comment points out the Wikipedia article: Superseded scientific theories.

To answer the question, you first have to agree on what an overturned theory is. Did Copernicus overturn Ptolemy? Did general relativity overturn Newtonian gravity?

I would say that these theories were embellished, but not overturned. The old theories continued to work just as well for nearly all situations.

You might say that the Bohr atom has been overturned, but it was never more than a heuristic model, and it is still a good heuristic model. Not as good as quantum mechanics, but still a useful way of thinking about atoms.

Monday, April 14, 2014

The equation and the bomb

The London Guardian reports:
This is the most famous equation in the history of equations. It has been printed on countless T-shirts and posters, starred in films and, even if you've never appreciated the beauty or utility of equations, you'll know this one. And you probably also know who came up with it – physicist and Nobel laureate Albert Einstein. ...

It would be nice to think that Einstein's equation became famous simply because of its fundamental importance in making us understand how different the world really is to how we perceived it a century ago. But its fame is mostly because of its association with one of the most devastating weapons produced by humans – the atomic bomb. The equation appeared in the report, prepared for the US government by physicist Henry DeWolf Smyth in 1945, on the Allied efforts to make an atomic bomb during the Manhattan project. The result of that project led to the death of hundreds of thousands of Japanese citizens in Hiroshima and Nagasaki.
Is this saying that the equation was first connected to the bomb in 1945? The first bomb had already been built by then.

I am not sure the equation did have much to do with the atomic bomb. The energy released by uranium fission was explained by the electrostatic potential energy. That is, protons repel each other because of their like electric charge, and so a lot of energy must have been needed to bind them together in a nucleus. Splitting the nucleus is like releasing a compressed metal spring.
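
A back-of-the-envelope sketch (standard textbook numbers, not taken from the Guardian article) shows the scale of that electrostatic energy, using the Coulomb energy of a uniformly charged sphere, E = (3/5) k (Ze)^2 / R:

```python
# Rough Coulomb (electrostatic) energy of a uranium nucleus.
# The constants are standard; the point is only the order of magnitude.
KE2_MEV_FM = 1.44            # k * e^2 in MeV * femtometers
Z = 92                       # protons in uranium
R_FM = 1.2 * 238 ** (1 / 3)  # empirical nuclear radius formula, A = 238

coulomb_energy = 0.6 * Z ** 2 * KE2_MEV_FM / R_FM
print(f"~{coulomb_energy:.0f} MeV")  # hundreds of MeV
```

Hundreds of MeV of electrostatic energy stored in the nucleus is the right ballpark for the roughly 200 MeV released per fission, with no appeal to E=mc2 needed for the estimate.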

Understanding the energy of the H-bomb requires considering the strong nuclear force. One of the first applications of quantum mechanics was George Gamow figuring out in 1928 that protons could tunnel thru the electrostatic repulsion to explain fusion in stars.

The relation between mass and energy was first given by Lorentz in 1899. He gave formulas for how the mass of an object increases as energy is used to accelerate it. This was considered the most striking and testable aspect of relativity theory, and it was confirmed in experiments in 1902-1904. Einstein wrote a paper in 1906 crediting Poincare with E=mc2 in a 1900 paper.

Saturday, April 12, 2014

Aaronson attempts a more honest sell

Computer scientist Scott Aaronson previously urged telling the truth when selling quantum computing to the public, and he has posted an attempt on PBS:
A quantum computer is a device that could exploit the weirdness of the quantum world to solve certain specific problems much faster than we know how to solve them using a conventional computer. Alas, although scientists have been working toward the goal for 20 years, we don’t yet have useful quantum computers. While the theory is now well-developed, and there’s also been spectacular progress on the experimental side, we don’t have any computers that uncontroversially use quantum mechanics to solve a problem faster than we know how to solve the same problem using a conventional computer.
It is funny how he can claim "spectacular progress" and yet no speedup whatsoever. It is as if the Wright brothers had claimed spectacular progress in heavier-than-air flight, but never left the ground. Or progress in perpetual motion machines.
But is there anything that could support such a hope? Well, quantum gravity might force us to reckon with breakdowns of causality itself, if closed timelike curves (i.e., time machines to the past) are possible. A time machine is definitely the sort of thing that might let us tackle problems too hard even for a quantum computer, as David Deutsch, John Watrous and I have pointed out. To see why, consider the “Shakespeare paradox,” in which you go back in time and dictate Shakespeare’s plays to him, to save Shakespeare the trouble of writing them. Unlike with the better-known “grandfather paradox,” in which you go back in time and kill your grandfather, here there’s no logical contradiction. The only “paradox,” if you like, is one of “computational effort”: somehow Shakespeare’s plays pop into existence without anyone going to the trouble to write them!
Now this is science fiction.
But cooling takes energy. So, is there some fundamental limit here? It turns out that there is. Suppose you wanted to cool your computer so completely that it could perform about 10^43 operations per second — that is, about one operation per Planck time (where a Planck time, ~10^-43 seconds, is the smallest measurable unit of time in quantum gravity). To run your computer that fast, you’d need so much energy concentrated in so small a space that, according to general relativity, your computer would collapse into a black hole!
Okay, if my computer ever runs that fast, I'll worry about being sucked into a black hole.

He also claims that if they can ever make true qubits, then they could simulate some dumbed-down models of quantum mechanics. And maybe the qubits could help with quantum gravity, if anyone can figure out what that is.

I guess this is why quantum computing is usually hyped with dubious claims about breaking internet security systems.

Thursday, April 10, 2014

New movie on geocentrism

NPR radio reports on a trailer for a new documentary:
It has the look and feel of a fast-paced and riveting science documentary.

The trailer opens with actress Kate Mulgrew (who starred as Capt. Janeway in Star Trek: Voyager) intoning, "Everything we think we know about our universe is wrong." That's followed by heavyweight clips of physicists Michio Kaku and Lawrence Krauss.

Kaku tells us, "There is a crisis in cosmology," and Krauss says, "All of these things are rather strange, and we don't know why they are occurring right now."

And then, about 1:17 into the trailer, comes the bombshell: The film's maker, Robert Sungenis, tells us, "You can go on some websites of NASA and see that they've started to take down stuff that might hint to a geocentric [Earth-centered] universe."

The film, which the trailer promises will be out sometime this spring, is called The Principle. Besides promoting the filmmaker's geocentric point of view, it seems to be aimed at making a broader point about man's special place in a divinely created universe.
Max Tegmark is also in the trailer. Kaku, Krauss, Tegmark, and mainstream physics documentaries say kooky stuff all the time. If this movie implies that Kaku and Krauss have some sympathies for geocentrism, it should not be any more embarrassing than many other interviews.

A central premise of relativity is that motion is relative, and that the covariant equations of cosmology can be written in any frame. So a geocentric frame is a valid frame to use. This movie apparently goes farther and says that the geocentric frame is superior, but I don't see how that is any wackier than many-worlds or some of the theories coming out of physics today.

Krauss denies responsibility:
It is, after all, impossible in the modern world to shield everyone from nonsense and stupidity. What we can do is provide the tools, through our educational system, for people to be able to tell sense from nonsense. These tools include the scientific method, skeptical questioning, empirical evidence, verifying sources, etc.

So, for those of you who are scandalized that a film narrated by a well-known TV celebrity with some well-known scientists promotes geocentrism, here is my suggestion: Let’s all stop talking about it from today on.
That celebrity says:
I understand there has been some controversy about my participation in a documentary called THE PRINCIPLE. Let me assure everyone that I completely agree with the eminent physicist Lawrence Krauss, who was himself misrepresented in the film, and who has written a succinct rebuttal in SLATE. I am not a geocentrist, nor am I in any way a proponent of geocentrism. ... I was a voice for hire, and a misinformed one, ...
Lumo says they deserve some criticism:
I think that their hype about the coming revolutions in cosmology is untrue, easily to be misinterpreted so that it is dangerously untrue, and this hype ultimate does a disservice to science although any hype is probably good enough for those who want to remain visible as "popularizers of science".
This reminds me of gripes about the 2004 movie What the Bleep Do We Know!?. Some scientists grumbled about it exaggerating the mysteriousness of quantum mechanics.

Wednesday, April 9, 2014

Counterfactuals: Probability

Once you agree that the past is definite and the future is uncertain, then probability theory is the natural way to discuss the likelihood of alternatives. That is, if you believe in counterfactuals, then different things could happen, and quantifying those leads to probability.

Probability might seem like a simple concept, but there are different probability interpretations. The frequentist school believes that chance is objective, and the Bayesians say that probability is just a measure of someone's subjective belief.

The frequentists say that they are more scientific because they are more objective. The Bayesians say that they are more scientific because they more fully use the available info.
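
The contrast can be made concrete with a small sketch (hypothetical numbers throughout): a Bayesian treats the coin's bias as something to assign degrees of belief to, and updates those beliefs with Bayes' rule as data comes in:

```python
from fractions import Fraction
from math import comb

# Uniform prior over three hypothetical coin biases.
biases = [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)]
prior = {b: Fraction(1, 3) for b in biases}

def update(prior, heads, tosses):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    likelihood = {b: comb(tosses, heads) * b ** heads * (1 - b) ** (tosses - heads)
                  for b in prior}
    evidence = sum(prior[b] * likelihood[b] for b in prior)
    return {b: prior[b] * likelihood[b] / evidence for b in prior}

posterior = update(prior, heads=65, tosses=100)
# After 65 heads in 100 tosses, most of the belief shifts to the
# 3/4-bias hypothesis.
print(max(posterior, key=posterior.get))
```

A frequentist would refuse to put a probability on the bias at all, and would instead report a p-value against the fair-coin null hypothesis, as in the counterfactuals post above.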

Mathematically, the whole idea of chance is a useful fiction. It is just a way of using formulas for thinking about uncertainty. There is no genuine uncertainty in math. A random variable is just a function on some sample space, and the formulas are equally valid for any interpretation.

Coin tosses are considered random for the purpose of doing controlled experiments. It does not matter to the experiment if some theoretical analysis of Newtonian forces on the coin is able to predict the coin being heads or tails. The causal factors on the coin will be statistically independent of whatever is being done in the experiment. There is no practical difference between the coin being random and being statistically independent from whatever else is being measured.

It is sometimes argued that radioactive decay is truly random, but there is really no physical evidence that it is any more random than coin tosses. We can measure the half-life of potassium, but not predict individual decays. According to our best theories, a potassium-40 nucleus consists of 120 quarks bouncing around a confined region. Maybe if we understood the strong interaction better and had precise data for the wave function, we could predict the decay.

The half-life of potassium-40 is about a billion years, so any precise prediction seems extremely unlikely. But we do not know that it is any different from putting dice in a box and shaking it for a billion years.
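
What the half-life does let us compute is statistics, not individual decays. Here is a sketch using the standard exponential-decay model, with the rough billion-year figure mentioned above:

```python
HALF_LIFE_YEARS = 1.25e9  # potassium-40, approximately

def decay_probability(t_years):
    """Chance that a single nucleus decays within t_years, assuming
    the standard exponential-decay model."""
    return 1 - 0.5 ** (t_years / HALF_LIFE_YEARS)

print(decay_probability(1.0))              # roughly 5.5e-10 per year
print(decay_probability(HALF_LIFE_YEARS))  # 0.5, by definition of half-life
```

The model predicts aggregate frequencies for a large sample of nuclei, and says nothing about whether any single decay is truly random or deterministic underneath.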

All fields of science seek to quantify counterfactuals, and so they use probabilities. They may use frequentist or Bayesian statistics, and may debate which is the better methodology. Only quantum physicists try to raise the issue to one of fundamental reality, and argue whether the probability is psi-ontic or psi-epistemic. The terms come from philosophy, where ontology is about what is real, and epistemology is about knowledge. So the issue is whether the wave function psi is used to calculate probabilities that are real, or that are about our knowledge of the system.

It seems like a stupid philosophical point, but the issue causes endless confusion about Schroedinger's cat and other paradoxes. Physicist N. David Mermin argues that these paradoxes disappear if you take a Bayesian/psi-epistemic view, as was common among the founders of quantum mechanics 80 years ago. He previously argued that quantum probability was objective, like what Karl Popper called "propensity". That is the idea that probability is something physical, but nobody has been able to demonstrate that there is any such thing.

Max Tegmark in the March 12, 2014 episode of Through the Wormhole uses multiple universes to deny randomness:
Luck and randomness aren't real. Some things feel random, but that's just how it subjectively feels whenever you get cloned. And you get cloned all the time. ... There is no luck, just cloning.
There are more and more physicists who say this nonsense, but there is not a shred of evidence that anyone ever gets cloned. There is just a silly metaphysical argument that probabilities do not exist because all possible counterfactuals are real in some other universe. These universes do not interact with each other, so there can be no way to confirm it.

Scott Aaronson argues that the essence of quantum mechanics is that probabilities can be negative. But the probabilities are not really negative. The wave function values can be positive, negative, complex, spinor, or vector, and they can be used to calculate probabilities, but those probabilities are never negative.
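
A tiny sketch makes the distinction concrete. The amplitudes below are negative and complex, but the Born-rule probabilities computed from them are non-negative and sum to one (a toy two-state example, not any specific physical system):

```python
# Toy two-state superposition: one negative amplitude, one imaginary.
psi = [-(0.5 ** 0.5), 1j * 0.5 ** 0.5]

# Born rule: probability = |amplitude|^2, which can never be negative.
probs = [abs(a) ** 2 for a in psi]
print(probs)       # each close to 0.5, both non-negative
print(sum(probs))  # close to 1.0
```

Signs and phases in the amplitudes matter for interference, but they are squared away before anything is ever interpreted as a probability.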

There is no experiment to tell us whether the probabilities are real. It is not a scientific question. Even tho the Bayesian view solves a lot of problems, as Mermin says, most physicists today insist that phenomena like radioactive decay and spin quantization prove that the probabilities are real.

Quantum mechanics supposedly makes essential use of probabilities. But that is only Born's interpretation. Probabilities are no more essential to quantum mechanics than to any other branch of science, as I explained here.

Monday, April 7, 2014

How Should Humanity Steer the Future?

Time for the annual FQXi essay contest:
Contest closes to entries on April 18, 2014. ...

The theme for this Essay Contest is: "How Should Humanity Steer the Future?".

Dystopic visions of the future are common in literature and film, while optimistic ones are more rare. This contest encourages us to avoid potentially self-fulfilling prophecies of gloom and doom and to think hard about how to make the world better while avoiding potential catastrophes.

Our ever-deepening understanding of physics has enabled technologies and ways of thinking about our place in the world that have dramatically transformed humanity over the past several hundred years. Many of these changes have been difficult to predict or control—but not all.

In this contest we ask how humanity should attempt to steer its own course in light of the radically different modes of thought and fundamentally new technologies that are becoming relevant in the coming decades.

Possible topics or sub-questions include, but are not limited to:

* What is the best state that humanity can realistically achieve?

* What is your plan for getting us there? Who implements this plan?

* What technology (construed broadly to include practices and techniques) does your plan rely on? What are the risks of those technologies? How can those risks be mitigated?

(Note: While this topic is broad, successful essays will not use this breadth as an excuse to shoehorn in the author's pet topic, but will rather keep as their central focus the theme of how humanity should steer the future.)

Additionally, to be consonant with FQXi's scope and goals, essays should be sure to touch on issues in physics and cosmology, or closely related fields, such as astrophysics, biophysics, mathematics, complexity and emergence, and the philosophy of physics.
I am drafting a submission, based on recent postings to this blog. No, I am not going to shoehorn anything about quantum computing, as that subject may have gotten me blackballed in the past.

Saturday, April 5, 2014

New Penrose interview

Roger Penrose is writing a book on "Fashion, Faith, and Fantasy", and gives this interview:
Sir Roger Penrose calls string theory a "fashion," quantum mechanics "faith," and cosmic inflation a "fantasy." Coming from an armchair theorist, these declarations might be dismissed. But Penrose is a well-respected physicist who co-authored a seminal paper on black holes with Stephen Hawking. What's wrong with modern physics—and could alternative theories explain our observations of the universe?
He has his own speculative theories, such as the BICEP2 data not being evidence of inflation or gravity waves, but of magnetic fields in a previous universe before the big bang.

A lot of people are skeptical about string theory and inflation models. He thinks that quantum mechanics is incomplete because we do not understand wave function collapse.

He is one of the leading mathematical physicists alive today, and his ideas should be taken seriously.

Friday, April 4, 2014

Counterfactuals: Causality

The concept of counterfactuals requires not just a reasonable theory of time but also a reasonable theory of causality.

Causality has confounded philosophers for centuries. Leibniz believed in the Principle of Sufficient Reason that everything must have a reason or cause. Bertrand Russell denied the law of causality, and argued that science should not seek causes.

Of course causality is central to science, and to how we personally make sense out of the world.

It is now commonplace for scientists to deny free will, particularly among popular exponents of atheism, evolution, and leftist politics. Philosopher Massimo Pigliucci rebuts Jerry Coyne and others, and John Horgan rebuts Francis Crick.

The leading experiments against free will are those by Benjamin Libet and John-Dylan Haynes. They show that certain brain processes take more time than is consciously realized, but they do not refute free will. See also contrary experiments.

The other main argument against free will is that a scientific worldview requires determinism. Eg, Jerry Coyne argues against contra-causal free will, and for biological determinism of behavior. Einstein hated quantum mechanics because it allowed for the possibility of free will.

A common belief is that the world must be either deterministic or random, but the word "random" is widely misunderstood. Mathematically, a random process is defined by the Kolmogorov axioms, and a random variable is a function on a measure-1 probability space. That is, it is just a way of parameterizing outcomes based on some measurable set of samples. Whether or not this matches your intuition about random variables depends on your choice of probability interpretation.
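
The mathematical definition is concrete enough to compute with. In this sketch, the sample space is the 36 outcomes of two dice, and a random variable is literally just a function on that space:

```python
from fractions import Fraction
from itertools import product

# Sample space: all 36 equally likely outcomes of rolling two dice.
omega = list(product(range(1, 7), repeat=2))

# A random variable is just a function on the sample space.
def total(outcome):
    return outcome[0] + outcome[1]

# Probability of an event: the measure of the outcomes satisfying it,
# with each outcome weighted 1/36.
def prob(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

print(prob(lambda w: total(w) == 7))  # 1/6
```

Nothing in the definition says anything about chance or unpredictability; it is just functions and measures, equally valid under any interpretation.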

Wikipedia has difficulty defining what is random:
Randomness means different things in various fields. Commonly, it means lack of pattern or predictability in events.

The Oxford English Dictionary defines "random" as "Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice; haphazard." This concept of randomness suggests a non-order or non-coherence in a sequence of symbols or steps, such that there is no intelligible pattern or combination.
In mathematics, the digits of Pi (π) can be said to be random or not random, depending on the context. Likewise scientific observations may or may not be called random, depending on whether there is a good explanation. Leading evolutionists Richard Dawkins and S.J. Gould had big disputes over whether evolution was random.

There is no scientific test for whether the world is deterministic or random or something else. You can drop a ball repeatedly and watch it fall the same way, so that makes the experiment appear deterministic. You will also see small variations that appear random. You can also put a Geiger detector on some uranium, and hear intermittent clicks at seemingly random intervals. But the uranium nucleus may be a deterministic chaotic system of quarks. We can never know, as any attempt to observe those quarks will disturb them.
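
The point that a deterministic system can look random is easy to illustrate. The logistic map below (a standard textbook example, chosen purely for illustration) is fully determined by its seed, yet its orbit spends about half its time above 1/2, like a fair coin:

```python
# The logistic map x -> 4x(1-x) is deterministic and chaotic.
def logistic_orbit(x0, n):
    xs = []
    x = x0
    for _ in range(n):
        x = 4 * x * (1 - x)
        xs.append(x)
    return xs

orbit = logistic_orbit(0.123456789, 100_000)
frac_high = sum(1 for x in orbit if x > 0.5) / len(orbit)
print(frac_high)  # close to 0.5, though nothing here is random
```

Looking only at the output statistics, there is no way to tell this deterministic orbit apart from genuine coin tosses, which is the blog's point about the uranium clicks.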

Likewise there can be no scientific test for free will. You would have to clone a man, replicate his memories and mental state, and see if he makes the same decisions. Such an experiment could never be done, and would not convince anyone even if it could be, as it is not clear how free will would be distinguished from randomness. Free will is a metaphysical issue, not a scientific one.

Even if you believe in determinism, it is still possible to believe in free will.

A debate between determinists Dan Dennett and Sam Harris was over statements like:
If determinism is true, the future is set — and this includes all our future states of mind and our subsequent behavior. And to the extent that the law of cause and effect is subject to indeterminism — quantum or otherwise — we can take no credit for what happens. There is no combination of these truths that seems compatible with the popular notion of free will.
But that is exactly what quantum mechanics is -- a combination of those facts that is compatible with the popular notion of free will.

In biology, this dichotomy between determinism and randomness has been called the Causalist-Statisticalist Debate.

At the core of their confusion is a simple counterfactual:
Consider the case where I miss a very short putt and kick myself because I could have holed it. It is not that I should have holed it if I had tried: I did try, and missed. It is not that I should have holed it if conditions had been different: that might of course be so, but I am talking about conditions as they precisely were, and asserting that I could have holed it. There is the rub. [Austin’s example]
The problem here is that they think that determinism is a philosophical necessity, and so they fail to grasp the meaning of a counterfactual.

In public surveys, people overwhelmingly reject this deterministic view:
Imagine a universe (Universe A) in which everything that happens is completely caused by whatever happened before it. This is true from the very beginning of the universe, so what happened in the beginning of the universe caused what happened next, and so on right up until the present. For example one day John decided to have French Fries at lunch. Like everything else, this decision was completely caused by what happened before it. So, if everything in this universe was exactly the same up until John made his decision, then it had to happen that John would decide to have French Fries.
And so an atheist biologist writes:
To me, the data show that the most important task for scientists and philosophers is to teach people that we live in Universe A.
That is a tough sell, as Universe A is contrary to common sense, experience, and our best scientific theories.

Steven Weinberg has argued that the laws of physics are causally complete, but also that we are blindly searching for the final theory that will solve the mysteries of the universe. A final theory would explain quark masses and cancel gravity infinities.

Einstein had an almost religious belief in causal determinism, and many others seem to believe that a scientific outlook requires such a view. On the other hand, a majority of physicists today assert (incorrectly) that quantum mechanics has somehow proved that nature is intrinsically random.

Quantum mechanics is peculiar in that it leaves open the possibility of free will. It is the counterexample to the notion that a scientific theory must be causal and deterministic, and therefore contrary to free will. If you tried to concoct a fundamental physical theory that could accommodate free will, it would be hard to imagine one better suited than quantum mechanics.

Some interpretations of quantum mechanics are deterministic and some are not, and so, as Scott Aaronson explains, determinism is not a very meaningful concept in the context of quantum mechanics.

If you reject free will and the flow of time, and believe that everything is determined by God or the Big Bang, then counterfactuals make no sense. Most time travel in fiction makes no sense either. The concept of counterfactuals depends on the possibility of alternate events, and on time moving forward into an uncertain future.

Regardless of brain research and the scientific underpinnings of free will, counterfactuals are essential to how human beings understand the progress of time and the causality of events.

Tuesday, April 1, 2014

Urging the truth about quantum computing

MIT computer scientist Scott Aaronson has confessed he has physics envy:
I confess that my overwhelming emotion on watching Particle Fever was one of regret — regret that my own field, quantum computing, has never managed to make the case for itself the way particle physics and cosmology have, in terms of the human urge to explore the unknown.

See, from my perspective, there’s a lot to envy about the high-energy physicists.  Most importantly, they don’t perceive any need to justify what they do in terms of practical applications.  Sure, they happily point to “spinoffs,” like the fact that the Web was invented at CERN.  But any time they try to justify what they do, the unstated message is that if you don’t see the inherent value of understanding the universe, then the problem lies with you. ...

Now contrast that with quantum computing.  To hear the media tell it, a quantum computer would be a powerful new gizmo, sort of like existing computers except faster.  (Why would it be faster?  Something to do with trying both 0 and 1 at the same time.)
He blames the media?! No, every single scientist in this field tells glowing stories about the inevitable breakthrus in quantum cryptography and computing. Including Aaronson.

Lots of scientists over-hype their work, but the high energy physicists and astronomers have scientific results to show. Others are complete washouts after decades of work and millions in funding. String theorists have never been able to show any relationship between the real world and their 10-dimensional models. Quantum cryptography has never found any practical application to information security. Quantum computing has never found even one scalable qubit or any quantum speedup. Multiverse theories have no testable implications and are mathematically incoherent.

Of course a conspiracy of lies brings in the grant money:
Foolishly, shortsightedly, many academics in quantum computing have played along with this stunted vision of their field — because saying this sort of thing is the easiest way to get funding, because everyone else says the same stuff, and because after you’ve repeated something on enough grant applications you start to believe it yourself. All in all, then, it’s just easier to go along with the “gizmo vision” of quantum computing than to ask pointed questions like:

What happens when it turns out that some of the most-hyped applications of quantum computers (e.g., optimization, machine learning, and Big Data) were based on wildly inflated hopes — that there simply isn’t much quantum speedup to be had for typical problems of that kind, that yes, quantum algorithms exist, but they aren’t much faster than the best classical randomized algorithms? ...

I’ll tell you: when this happens, the spigots of funding that once flowed freely will dry up, and the techno-journalists and pointy-haired bosses who once sang our praises will turn to the next craze.  And they’re unlikely to be impressed when we protest, “no, look, the reasons we told you before for why you should support quantum computing were never the real reasons!  and the real reasons remain as valid as ever!”

In my view, we as a community have failed to make the honest case for quantum computing — the case based on basic science — because we’ve underestimated the public.  We’ve falsely believed that people would never support us if we told them the truth: that while the potential applications are wonderful cherries on the sundae, they’re not and have never been the main reason to build a quantum computer.  The main reason is that we want to make absolutely manifest what quantum mechanics says about the nature of reality.  We want to lift the enormity of Hilbert space out of the textbooks, and rub its full, linear, unmodified truth in the face of anyone who denies it.  Or if it isn’t the truth, then we want to discover what is the truth.
If the quantum computer scientists were honest, they would admit that they are just confirming an 80-year-old quantum theory.

Update: Scott adds:
Quantum key distribution is already practical (at least short distances). The trouble is, it only solves one of the many problems in computer security (point-to-point encryption), you can’t store the quantum encrypted messages, and the problem solved by QKD is already solved extremely well by classical crypto. Oh, and QKD assumes an authenticated classical channel to rule out man-in-the-middle attacks. ... I like to say that QKD would’ve been a killer app for quantum information, in a hypothetical world where public-key crypto had never existed.
That's right, and quantum cryptography is commercially worthless for those reasons. Those who claim some security advantage are selling snake oil.

Update: Scott adds:
Well, it’s not just the people who flat-out deny QM. It’s also the people like Gil Kalai, Michel Dyakonov, Robert Alicki, and possibly even yourself (in previous threads), who say they accept QM, but then hypothesize some other principle on top of QM that would “censor” quantum computing, or make the effort of building a QC grow exponentially with the number of qubits, or something like that, and thereby uphold the classical Extended Church-Turing Thesis. As I’ve said before, I don’t think they’re right, but I think the possibility that they’re right is sufficiently sane to make it worth doing the experiment.
I would not phrase it that way. Scott's bias is that he is a theoretical computer scientist, and he just wants some mathematical principles so he can prove theorems.

I accept quantum mechanics to the extent that it has been confirmed, but not the fanciful extrapolations like many-worlds and quantum computing. I am skeptical about those because they seem unjustified by known physics, contrary to intuition, and most of all, because attempts to confirm them have failed.

I am also skeptical about supersymmetry (SUSY). I do not know any principle that would censor SUSY. The main reason to be skeptical is that SUSY is a fanciful and wildly speculative hypothesis that is contradicted by the known experimental evidence. Likewise I am skeptical about quantum computing.

Update: Scott prefers to compare QC to the Higgs boson rather than to SUSY, presumably because the Higgs has been found, and adds:
My own view is close to that of Greg Kuperberg in comment #73: yes, it’s conceivable that the skeptics will turn out to be right, but if so, their current explanations for how they could be right are grossly inadequate. ...

If, hypothetically, QC were practical but only on the surface of Titan, then I’d count that as a practical SUCCESS! The world’s QC center could simply be installed on Titan by robotic spacecraft, and the world’s researchers could divvy up time to dial in to it, much like with the Hubble telescope.
Spoken like a theorist. He does not want his theorems to be vacuous.

Monday, March 31, 2014

High precision needed for quantum computing

Craig Feinstein asks:
Leonid Levin said, "Exponential summations used in QC require hundreds if not millions of decimal places accuracy. I wonder who would expect any physical theory to make sense in this realm."
Peter Shor replies:
If you believe the fault-tolerant threshold theorem for quantum computers, you do not require hundreds of digits of accuracy.

Levin does not believe this theorem. More precisely, he believes that the hypotheses required for the theorem to work do not apply to the actual universe.

I believe his mental model of quantum mechanics resembles the idea that the physics of the universe is being simulated on a classical machine which has floating point errors. I don't believe this is true. ...

The real question is whether the rules of the universe are exact unitary evolution or something else. If they're exact unitary evolution and you have locality of action (quantum field theories, including QED, satisfy these) then the fault-tolerant threshold theorem holds. If the universe has extra levels of weirdness under the quantum field theory, then it's not clear the hypotheses are satisfied.
I am not sure who is right here. Quantum mechanics is a linear theory and has been verified to high precision in some contexts. But a linear theory is nearly always an approximation to a nonlinear theory, and I don't think that the quantum computer folks have shown that they are operating within a valid approximation.

Shor assumes "unitary", but there are interpretations of quantum mechanics that are not unitary, and no one has proved them wrong. So how do we know nature is really unitary?

If being unitary is some physically observed law, like conservation of momentum, then we should have error bars that show us just how close to unitary the world is, and with what confidence in different situations.
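To make the worry concrete, here is a toy numerical sketch (a hypothetical illustration of compounding error, not the actual fault-tolerance setting, and the per-gate deviation eps is a made-up number): if each gate overshot exact unitarity by even one part in a million, the deviation would grow to order one after a million gates.

```python
import numpy as np

eps = 1e-6      # hypothetical per-gate deviation from exact unitarity
theta = 0.3
# A single-qubit rotation, scaled by (1 + eps) so it is almost,
# but not exactly, norm-preserving.
U = (1.0 + eps) * np.array([[np.cos(theta), -np.sin(theta)],
                            [np.sin(theta),  np.cos(theta)]])

psi = np.array([1.0, 0.0])
for n in (1, 1000, 1000000):
    out = np.linalg.matrix_power(U, n) @ psi
    # The norm grows like (1 + eps)**n; at n = 10**6 it is near e ≈ 2.718.
    print(n, np.linalg.norm(out))
```

A state norm drifting from 1 to 2.7 means probabilities no longer add up, which is the sort of thing error bars on unitarity would have to rule out.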

If being unitary is a metaphysical necessary truth, derived from the conservation of probability, then how have so many textbooks managed to get by with the Copenhagen interpretation?

I say that quantum computing is a vast extrapolation of known physics, and extrapolations are unreliable.

In other news:
An international team of researchers has created an entanglement of 103 dimensions with only two photons, beating the previous record of 11 dimensions.

The discovery could represent an advance toward better encryption of information and quantum computers with much higher processing speeds, according to a statement by the researchers.

Until now, to increase the “computing” capacity of these particle systems, scientists have mainly turned to increasing the number of qubits (entangled particles), up to 14 particles. ...

“The most immediate practical use is expected to be in secure communication,” Huber explained to KurzweilAI in an email interview.
I haven't read the paper but I am pretty sure that there is no practical application to secure communication. I expected them to claim that all those dimensions could be used for quantum computing.

Sunday, March 30, 2014

Lectures on the impossibility of quantum computers

Steve Flammia writes:
Gil Kalai has just posted on his blog a series of videos of his lectures entitled “why quantum computers cannot work.”  For those of us that have followed Gil’s position on this issue over the years, the content of the videos is not surprising. The surprising part is the superior production value relative to your typical videotaped lecture (at least for the first overview video).

I think the high gloss on these videos has the potential to sway low-information bystanders into thinking that there really is a debate about whether quantum computing is possible in principle. So let me be clear.
There is no debate! The expert consensus on the evidence is that large-scale quantum computation is possible in principle.
... For now, though, the reality is that quantum computation continues to make exciting progress every year, both on theoretical and experimental levels, and we have every reason to believe that this steady progress will continue. ...

And most importantly, we are open to being wrong.
No, there is no significant progress. No one has made scalable qubits, and no one has demonstrated a quantum speedup.

He sure doesn't sound like someone who is open to being wrong. Papers on this subject by physicists subscribing to this consensus never admit that the whole field is based on speculative premises. I am a skeptic.

Saturday, March 29, 2014

Evidence closes in on singularity

Modern physics teaches that there are certain singularities in general relativity (black holes and the big bang) and in quantum field theory (renormalization). I have expressed skepticism about whether there is truly a singularity in the black hole and at the big bang. Max Tegmark has also expressed skepticism about actual infinities in nature.

Now that BICEP2 has given us evidence from close to the alleged big bang singularity, Matt Strassler and Lubos Motl have reopened the debate about whether there really is a singularity. Those are sensible mainstream views. Others will push back harder, and speculate about before the big bang and into the multiverse.

I have to agree with Strassler that the evidence points to energies high enough that our physical theories break down, so we cannot go further. I also agree with Tegmark that we never observe true singularities in nature. I am a positivist, and I believe in what has been demonstrated. Infinities and singularities are wonderful mathematical tools, but math is not the same as physics.

Depending on how the inflation evidence plays out, I am not sure the big bang has anything to do with general relativity or a spacetime singularity. The physics was not dominated by gravity or the standard model, as we know them. Something mysterious called an inflaton field was releasing huge amounts of energy. I am not even sure about the reports that BICEP2 saw gravity waves. Maybe they saw inflaton waves. Some physicists have said that this proves gravity is quantized. I don't know how they can say that, when no one knows what the inflaton is or how it relates to gravity.

I expect the meaning of BICEP2 to be settled in the next year or so, but unwarranted speculation about time and multiverses to go on for the foreseeable future.

Update: Strassler argues:
Who is still telling the media and the public that the universe really started with a singularity, or that the modern Big Bang Theory says that it does? I’ve never heard an expert physicist say that. And with good reason: when singularities and other infinities have turned up in our equations in the past, those singularities disappeared when our equations, or our understanding of how to use our equations, improved.

Moreover, there’s a point of logic here. How could we possibly know what happened at the very beginning of the universe? No experiment can yet probe such an early time, and none of the available equations are powerful enough or usable enough to allow us to come to clear and unique conclusions.
Lumo responds:
But by endorsing the idea that the Big Bang singularity exists, we don't claim that the classical general relativity is exactly accurate and all of its conclusions about quantities' being infinite at the singularity are strictly right. We never mean such things.

Thursday, March 27, 2014

Mermin resolves metaphysical issues in Nature

I posted before on Mermin taking Bohr seriously, SciAm pushes Quantum Bayesianism, and Counterfactuals: Time on the metaphysics of time. Now Cornell physicist N. David Mermin has an essay in the current issue of Nature:
Schrödinger wrote in a little-known 1931 letter to German physicist Arnold Sommerfeld that quantum mechanics “deals only with the object–subject relation”. Another founder of quantum mechanics, Danish physicist Niels Bohr, insisted in a 1929 essay that the purpose of science was not to reveal “the real essence of the phenomena” but only to find “relations between the manifold aspects of our experience”. ...

People who believe wavefunctions to be as real as stones have invested much effort in searching for objective physical mechanisms responsible for such changes in the wavefunction: ...

Another celebrated part of the muddle produced by the exclusion of the perceiving subject is 'quantum non-locality', the belief of some quantum physicists and many mystics, parapsychologists and journalists that an action in one region of space can instantly alter the real state of affairs in a faraway region. Thousands of papers have been written about this mysterious action at a distance over the past 50 years. A clue that the only change is in the expectations of the perceiving subject is that to learn anything about such alterations one must consult somebody in the region where the action took place. ...

The issue for Einstein was not the famous revelation of relativity that whether or not two events in two different places happen at the same time can depend on your frame of reference. It was simply that physics seems to offer no way to identify the Now even at a single event in a single place, although a local present moment — Now — is evident to each and every one of us as undeniably real. How can there be no place in physics for something as obvious as that? ...

When I recently mentioned to an eminent theoretical physicist that I was writing an essay explaining how the QBist view of science solves the strictly classical problem of the Now, he said: “Ah, you're going to explain why we all have that illusion.” And a distinguished philosopher of science recently derided the attitude that there ought to be a Now on my world-line as “chauvinism of the present moment”.
My only quarrel with Mermin is that he acts as if he is saying something new. He is just reciting the view of Bohr and everyone else not infected with Einstein's disease.

There are physicists and philosophers today who (1) believe wavefunctions to be as real as stones; (2) assert quantum non-locality; and (3) deny Now as just chauvinism of the present moment. They have bizarre and foolish philosophies that lead to unresolvable paradoxes. Mermin's common sense explanations from a century ago are perfectly adequate.

Wednesday, March 26, 2014

Counterfactuals: Hard Science

Counterfactual reasoning is used all the time in the hard sciences. When you learn the formulas for gravity, the first thing you do is to answer questions like, “If you drop a rock off a 100-foot cliff, how long will it take to hit the ground?”
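The rock question reduces to solving h = ½gt² for t. A minimal sketch in Python (feet and seconds; taking g ≈ 32.2 ft/s² near the Earth's surface and ignoring air resistance):

```python
import math

def fall_time(height_ft, g=32.2):
    """Seconds for a dropped object to fall height_ft feet,
    ignoring air resistance: h = (1/2) g t^2, so t = sqrt(2h/g)."""
    return math.sqrt(2.0 * height_ft / g)

# A rock dropped off a 100-foot cliff hits the ground in about 2.5 seconds.
print(round(fall_time(100.0), 2))
```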

Dropping a rock from a cliff could also be called hypothetical reasoning, because one can easily imagine conducting the experiment. However physics also has all sorts of thought experiments that have no hope of ever being carried out. For example, explanations of relativity frequently involve spaceships taking people near the speed of light or being swallowed up in a black hole.

Counterfactuals are essential to the scientific method. Science is all about doing experiments that favor some hypothesis over some counterfactual.

The ability to make a precise prediction from a counterfactual is what distinguishes the hard sciences from the soft.

A famous example is Hendrik Lorentz's discovery of space and time transformations (now called Lorentz transformations) to explain the Michelson-Morley experiment. The counterfactual was aether motion, as Lorentz interpreted experiments to show that no such motion was detectable. He then used his formulas to predict relativistic mass, which was then confirmed by experiment. (Einstein later published similar theories, but the consensus of historians is that he paid no attention to the experiments.)

An example that failed to disprove the counterfactual was the 1543 Copernicus heliocentric model of the solar system. The established theory was Ptolemy's, but both theories predicted the sky with about the same accuracy. Experiments and models with much greater accuracy were achieved by Tycho Brahe and Johannes Kepler around 1600. The 20th century theory of relativity taught that motion is relative, and heliocentricity could never be proven.

Kepler's theory was superior not only for its accurate predictions, but for its counterfactual predictions. He had a complete theory of what kinds of orbits were possible in the solar system, so he could have made predictions about any new planet or asteroid that might be discovered. But he did not have a causal mechanism.

Causality is closely connected with counterfactual analysis. If an event A is followed by an event B, we only say that A caused B if counterfactuals for A would have been followed by something other than B. If rain follows my rain dance, I only argue that the dance caused the rain if I have a convincing argument that it would not have rained if I had not danced. A truly causal argument would provide a connected chain of events from the dance to the rain, with every link in the chain causing the next link.

Isaac Newton found a more powerful theory of mechanics by positing a gravitational force between any two massive objects, and saying that the force causes the orbital motion. Laplace argued in 1814 that all of nature is predictable with causal mechanics, given sufficient data.

This Newtonian causality was not true causality, because it required action-at-a-distance. One planet could exert a force on another planet over millions of miles, without any intermediate effects. A truly causal theory required the invention of the concept of field, such as electric or gravitational field, that can propagate thru empty space from one object to another. James Clerk Maxwell worked out such a theory for electric and magnetic fields in 1865, and that was the first relativistic theory.

A field is a physical way of describing certain counterfactuals. Saying that there is an electric field, at a particular point in space and time, is another way of saying what would happen if an electric charge were put at that point. The field is one of the most important concepts in all of physics, because it allows reducing the universe to the mechanics of locally defined objects. Thus physics is rooted in counterfactuals at every level. You could say that reductionism works in physics because of clever schemes for distributing counterfactual info over space and time.
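As a small illustration of the field-as-counterfactual idea, the Coulomb field of a point charge can be read as a function answering "what force would a test charge feel if it were placed here?" (the source charge, distance, and test charge below are made-up values for the example):

```python
K = 8.99e9  # Coulomb constant, N*m^2/C^2

def e_field(source_charge, distance):
    """Magnitude (N/C) of the electric field of a point charge."""
    return K * source_charge / distance**2

# The field value at r = 2 m encodes a counterfactual: *if* a test
# charge q were put there, it *would* feel a force F = q * E.
E = e_field(1e-6, 2.0)   # field of a 1 microcoulomb charge at 2 m
F = 1e-9 * E             # force on a hypothetical 1 nC test charge
print(E, F)
```

No test charge needs to be present: the field assigns the answer to that counterfactual question to every point of space.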

A trendy topic in theoretical astrophysics is the multiverse. This involves a loose collection of unrelated ideas, but they all involve hypothetical universes outside of our observational abilities. It is a giant counterfactual exercise, with no experiment to decide who is right.

A particular fascination is the possibility of intelligent life in other universes. It appears that our universe is finely tuned for life. That is, it is hard to imagine the development of life in most of the counterfactual universes.

The most bizarre approach to counterfactuals is the many-worlds interpretation (MWI) of quantum mechanics. It simply posits that every possible counterfactual has an objective reality in an alternate universe. The extra universes do not really explain anything because they do not communicate with each other. There can be no experimental evidence for the other universes. There is no theoretical reason either, except that some physicists are unhappy with counterfactuals being just counterfactuals.

The many-worlds seems like an endorsement of counterfactual thinking, but it corrupts such thinking by declaring the counterfactuals real. A counterfactualist might argue, "if a new ice age were beginning, then we would probably notice cooler temperatures, but we don't, so we are not in a new ice age." But in many-worlds, all exceptionally improbable events take place in different universes, and we could be in one of them. Thus many-worlds leaves no good rationale for rejecting counterfactuals.