Wednesday, April 30, 2014

Multiverse compared to theology of evil

A UK Telegraph op-ed suggests that atheists are mentally ill:
In the last few years scientists have revealed that believers, compared to non-believers, have better outcomes from breast cancer, coronary disease, mental illness, Aids, and rheumatoid arthritis. Believers even get better results from IVF. Likewise, believers also report greater levels of happiness, are less likely to commit suicide, and cope with stressful events much better. Believers also have more kids.

What’s more, these benefits are visible even if you adjust for the fact that believers are less likely to smoke, drink or take drugs. And let’s not forget that religious people are nicer.
Leftist-atheist-evolutionist Jerry Coyne responds:
That’s still an admission of ignorance, and doesn’t tell us why God lets little kids get cancer, or sweeps them away in tsunamis. ...

And if there are benefits in God’s plan to natural evils, let the believers tell us what they are. If they say they don’t know, well, then, they’ll have to allow us scientists to say that we don’t yet know whether there are multiverses, or why the laws of physics are as they are. The difference is that at least science has a chance of finding answers. ...

I tend to avoid calling believers mentally ill, partly because branding so much of society as suffering from illness tends to arouse ire, but mainly because I consider religious belief to be not a full-blown illness, but a situational neurosis or delusion.
Coyne attacks religious believers on a daily basis, but somehow thinks that it is scientific to believe in the multiverse.

I don't mind scientists attacking religion, but I wish they would be scientific about it. It is no argument to say "if God exists he must be evil because he allows tsunamis to kill kids." Just what does he think that a good God would do about it? Create a world in which tsunamis kill adults but not kids? Create a world with no tsunamis or other natural hazards?

I fail to see why a benevolent God would prefer a world without tsunamis. If I had infinite power, getting rid of tsunamis would be one of the last things I would do. Coyne gives no explanation.

Coyne is an example of a bad scientist who does not understand counterfactuals. He has a weird faith in materialist determinism and a lack of free will. The multiverse is no more scientific than various theologies.

Psychiatrists define mental illness as behavior that is outside social norms. I would like to think that prominent scientists and atheists are free from delusions, compared to religious preachers, but it is hard to find examples.

Update: Coyne has now banned a user for saying "without Christianity, there’ll be no modern science", and giving Wikipedia links to Christian and atheist scientists. Coyne hates the counterfactual, I guess.

Monday, April 28, 2014

Cosmos promotes environmental alarmism

I am enjoying the new Cosmos TV show, and episode 7 tells the story of how measurements of lead byproducts of uranium decay were used to estimate the age of the Earth. It ended with:
Today, medical consensus is unanimous: There is no such thing as a non-toxic level of lead in humans, however small.

Today, scientists sound the alarm on other environmental dangers.
Really? So one atom of lead is poisonous, and all experts agree to that?

I checked the Wikipedia article on Lead poisoning, and it cites authorities who "define lead poisoning as blood lead levels higher than 10 µg/dL. ... and someone with elevated lead levels may have no symptoms."
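
As a back-of-the-envelope check on what that threshold means in atoms, here is a sketch in Python; the 5-liter blood volume is a hypothetical round number, for illustration only:

```python
# Rough count of lead atoms at the 10 ug/dL action level, assuming a
# hypothetical 5 liters (50 dL) of blood -- illustrative numbers only.
avogadro = 6.022e23
molar_mass_pb = 207.2            # g/mol for lead
grams = 10e-6 * 50               # 10 micrograms per deciliter, times 50 dL
atoms = grams / molar_mass_pb * avogadro
print(f"{atoms:.1e} atoms")      # about 1.5e+18 -- a long way from one atom
```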

The lesson of the show is that the alarmists were right about lead, so we should accept what the alarmists say on other topics also. And don't believe those greedy oil companies who censor research and lie to Congress, or those Bible readers who do not even believe in the Old Earth.

I do not think this is helpful. One atom of lead is not toxic, and saying otherwise is a distortion of science in pursuit of impossible goals. It feeds alarmist radicals instead of objective opinion.

Friday, April 25, 2014

Aaronson back to quantum over-hype

Scott Aaronson has a new article on randomness, and promises:
Determining whether numbers truly can have no pattern has implications for quantum mechanics, not to mention the stock market and data security. ...

In my next column, I’ll discuss quantum mechanics and a famous theorem called the Bell inequality, and how they let us generate random numbers under the sole assumption that it’s impossible to send signals faster than light. Notably, the latter approach won’t suffer from the problem of uncomputability — so unlike Kolmogorov complexity, it will actually lead to practical methods for generating guaranteed random numbers. ...

Part II, to appear in the next issue, will be all about quantum entanglement and Bell’s Theorem, and their very recent use in striking protocols for generating so-called “Einstein-certified random numbers”—something of much more immediate practical interest.
If you cannot wait for his sequel, you can read the technical details in A quantum random-number generator for encryption, security and Infinite Randomness Expansion and Amplification with a Constant Number of Devices. The latter starts:
Bell’s Theorem states that the outcomes of local measurements on spatially separated systems cannot be predetermined, due to the phenomenon of quantum entanglement [Bel64]. This is one of the most important “no-go” results in physics because it rules out the possibility of a local hidden variable theory that reproduces the predictions of quantum mechanics. However, Bell’s Theorem has also found application in quantum information as a positive result, in that it gives a way to certify the generation of genuine randomness: if measurement outcomes of separated systems exhibit non-local correlations (e.g. correlations that violate so-called Bell Inequalities), then the outcomes cannot be deterministic.

While Bell’s Theorem does give a method to certify randomness, there is a caveat. The measurement settings used on the separated systems have to be chosen at random! Nevertheless, it is possible to choose the measurement settings in a randomness-efficient manner such that the measurement outcomes certifiably contain more randomness (as measured by, say, min-entropy) than the amount of randomness used as input. This is the idea behind randomness expansion protocols, ...
As usual, Aaronson is overselling the quantum mysticism. Randomness is a metaphysical concept, and quantum mechanics cannot say whether anything is random. I thought he understood this, as I cited him for explaining that quantum mechanics can be interpreted as deterministic or non-deterministic.

Bell's theorem says that there are observable differences between quantum mechanics and hidden variable theories. It was very exciting to those who wanted to disprove quantum mechanics, but all the experiments confirmed what everyone thought for decades, so it was no big deal.

The theory looks at two identical particles emitted from the same process. If you measure both the same way, you get equal and opposite results. If you measure the second one differently, then quantum mechanics gives formulas for probabilities, based on the measurement of the first.

Thus measuring the second particle seems determined if measured like the first, and random otherwise. So you can randomly decide how to do the measurement, and then get a random result.
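
To make those formulas concrete, here is a minimal Python sketch of the textbook singlet-state predictions (standard angle conventions; nothing here is specific to the papers above):

```python
import numpy as np

# Textbook singlet-state predictions for spin measurements along axes
# separated by angle theta (in radians).
def correlation(theta):
    return -np.cos(theta)          # E(a,b) for the singlet state

def p_opposite(theta):
    return np.cos(theta / 2) ** 2  # chance the two outcomes disagree

print(p_opposite(0.0))             # 1.0: same axis, equal and opposite results
print(p_opposite(np.pi / 2))       # 0.5: perpendicular axes look random

# CHSH combination at the usual angles a=0, a'=90, b=45, b'=135 degrees:
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = (correlation(a - b) + correlation(a2 - b)
     + correlation(a2 - b2) - correlation(a - b2))
print(abs(S))                      # 2*sqrt(2) ~ 2.83, above the classical bound 2
```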

Impressed by that result? No, of course not. It is trivial. Random just means that it is not easily predicted by a correlation with something else. All this gets is a second-particle measurement that seems random relative to the first. But it is not random in any absolute sense, and might be predicted by knowledge of how the pair was produced.

If you want a random bit, just toss a coin. That will be random in the sense of being uncorrelated with whatever else you are doing. What this research does is to toss two correlated coins, do some fancy manipulations so that they are effectively not correlated, and then conclude that one coin is random because it cannot be predicted by the other.
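
Classical randomness extraction gives the flavor of such manipulations. The von Neumann trick, for example, turns a biased coin into unbiased bits; this is an analogy, not the protocol in the papers above:

```python
import random

def biased_coin(p=0.7):
    return random.random() < p     # a coin with a 70% chance of heads

# Von Neumann extractor: examine pairs of tosses; HT -> 1, TH -> 0,
# discard HH and TT. The output is unbiased whatever the bias p is.
def extract(n_bits, p=0.7):
    out = []
    while len(out) < n_bits:
        a, b = biased_coin(p), biased_coin(p)
        if a != b:
            out.append(int(a))
    return out

bits = extract(10_000)
print(sum(bits) / len(bits))       # close to 0.5 despite the biased source
```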

The second coin is giving you a reference point for the analysis, but it is not adding to the randomness. The whole idea that the second coin is a source of randomness is fallacious.

Aaronson claims that he can get certified truly random bits from the sole assumption of the lack of faster-than-light communication from one coin to the other. But that is nonsense. That assumption can only get you independence from the first coin, but no more.

All of this Bell hocus pocus has no practical application to cryptography, as much easier methods are in common use, such as pseudorandom generators built on SHA-256.
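
For comparison, here is roughly how the common classical approach works, as a minimal sketch rather than a production design:

```python
import hashlib
import os

# Common classical practice: take entropy from the operating system's
# pool and whiten it with a hash such as SHA-256. No Bell tests needed.
seed = os.urandom(32)                    # 256 bits from the OS entropy pool
random_bytes = hashlib.sha256(seed).digest()
print(random_bytes.hex())
```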

I thought that Aaronson was cleaning up his act.

Wednesday, April 23, 2014

History of infinitesimals

A new book on the history of math, Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World, has an Amazon review starting:
The opening chapters of "Infinitesimal" are about a board of Jesuits in the 17th century ruling on legitimacy of a mathematical topic. That which would be binding on every university of the Jesuit order — the most prestigious of the time.
Another reviewer says:
This is a gripping story of human passion, that further debunks the myth that mathematics is "objective". While the "true" mathematics may be objective, the mathematics created by humans is almost as subjective as poetry or religion, and the engaging story told so masterfully by Amir Alexander, will not only teach you about the historical roots and the not-so-hidden agendas of the pioneers that led to the invention of calculus, but would also challenge the conventional wisdom that mathematics is objective and ideologically neutral.
The NY Times review says:
To the Jesuits, tradition, resoluteness and authority seemed bound up with Euclid and Catholicism; chaos, confusion and paradoxes were associated with infinitesimals and the motley array of proliferating Protestant sects.

Indeed, Galileo, later to be found guilty of heresy, supported some of his ideas with infinitesimal-flavored arguments. Dr. Alexander writes that he was as courageous as possible under the circumstances, not only a great mathematician and physicist but a witty and compelling writer willing to take on the Jesuits. ...

Since the Jesuits succeeded in banning infinitesimals in Italy, the last part of Dr. Alexander’s finely detailed, dramatic story traces their subsequent history north to England. There one of the key figures is Thomas Hobbes, the 17th-century philosopher of authoritarianism, a strong advocate of law, order — and, like the Jesuits, of the top-down hierarchical nature of Euclidean geometry.

Hobbes’s hated antagonist, the mathematician John Wallis, used infinitesimals freely, along with any other ideas he thought might further mathematical insight. And further it they did, leading over time to calculus, differential equations, and science and technology that have truly shaped the modern world.
That makes it sound as if the infinitesimal was heresy. But Galileo was not convicted of heresy, and infinitesimals had nothing to do with it.

This book appears to have a lot of good math history, but you might get the wrong impression about infinitesimals. They are essential to analysis. Math was not axiomatized and made properly rigorous until the 20th century, so early infinitesimal arguments were often non-rigorous. But textbooks today explain how infinitesimal arguments can be a shorthand for completely rigorous arguments using limits or other constructions. The Jesuits were right to be suspicious about infinitesimals, and they are sometimes abused today, but they are now known as good math, if done properly.

I did not know about this alternative development:
A major development in the refounding of the concept of infinitesimal took place in the nineteen seventies with the emergence of synthetic differential geometry, also known as smooth infinitesimal analysis (SIA). Based on the ideas of the American mathematician F. W. Lawvere, ...

Since in SIA all functions are continuous, it embodies in a striking way Leibniz's principle of continuity Natura non facit saltus.
SIA uses some goofy logic, but is apparently a consistent treatment of infinitesimals (in addition to the preferred treatments).

Monday, April 21, 2014

Counterfactuals: Reductionism and Objectivity

Reductionism is the idea that the world can be understood as the sum of its parts. Objectivity is the idea that there is an external reality independent of our biases and measurements. Together, these ideas are implicit in much or all of science.

How could these go wrong? Maybe there is some supernatural phenomenon that is not amenable to scientific study. There could be emergent features that do not reduce. There could be objects that behave differently every time you look at them, without any pattern. Maybe you can reduce a system to particles, but find that those particles still have some complexity, yet cannot be reduced any further.

Quantum mechanics is a theory that seems to run up against the limits of reductionism and objectivity. Naive reductionism would lead you to reduce an electron to its charge, mass, spin, position, and momentum, but the uncertainty principle prevents it. Attempts at further reduction and realism have nearly always led to some hidden variable theory. However these theories have all failed. We may never reduce electrons to mathematics.

The subject of reductionism and determinism really creeps people out when applied to genomic influence on behavior and IQ. There is overwhelming evidence that many traits are heritable, but we lack a genetic theory to explain it.

To believe in counterfactuals, you have to believe that all possible events can be divided into the real and the fictitious. And that you can analyze them as if they had an objective existence. I have this belief because it seems essential to scientific thinking.

We should not accept a concept 100% just because it is convenient for science. Reductionism is convenient for science, but there may be limits to it. Maybe some counterfactuals do not make any sense. So there may be limits to counterfactual reasoning, but it is hard to imagine science without it.

Sunday, April 20, 2014

Agree that Mermin is rehashing Bohr

I previously commented on a Nature article:
My only quarrel with Mermin is that he acts as if he is saying something new. He is just reciting the view of Bohr and everyone else not infected with Einstein's disease.
Before that, I said:
I am not sure why a new name is needed to reiterate what Bohr said. After all, Bohr won those Bohr-Einstein debates back in the 1930s.
Lubos Motl now says the same thing:
The particular buzzword "Quantum Bayesianism" (or "Qbism") is meant to describe this 2001 preprint (and 2002 PRA paper) by Carlton M. Caves, Christopher A. Fuchs, Ruediger Schack. It describes probabilities in quantum mechanics in the way they are. Your humble correspondent would probably agree with all papers in literature presenting Qbism. I just have a trouble with the "credits", with the claim that "Quantum Bayesianism" is some really new 21st century contribution to physics (and also with the prominent role that is given to Thomas Bayes). It is really just a new brand that describes the very same thing that the Copenhagen school understood well. They just didn't expect that the meaning of probabilities in quantum mechanics – which is really simple and obvious for anyone who is not prejudiced – would remain a source of controversy among professional physicists for at least 90 years so they didn't write long essays and they weren't inventing new words to describe the same thing.
He also mocks Mermin's discussion of the problem of Now, just as I also did. So I actually agree with him about something.

Apparently Mermin and Lumo have to keep repeating 80-year-old explanations of quantum mechanics because so many get it wrong. Here is the latest example:
"Even great physicists will tell you that nobody understands quantum mechanics, although we use it every day," said Stanford philosophy Professor Thomas Ryckman. ...

One of them, the philosopher maintains, is Albert Einstein's unorthodox critique that quantum theory was incomplete and that a larger mathematical description of reality was possible. "Because his views went against the prevailing wisdom of his time, most physicists took Einstein's hostility to quantum mechanics to be a sign of senility," Ryckman said. ...

"Because he was arguing against a very empirically successful theory," said Ryckman, "his scientific biographer Abraham Pais asserted that after 1925, 'Einstein might as well have gone fishing.'" ...

As it turns out, in the 1960s, a physicist visiting Stanford named John S. Bell wrote a paper reviving Einstein's critique of quantum mechanics, arguing that if the late scientist were right, the quantum formalism would be describing a reality greatly at odds with our everyday experience of familiar objects. "By the 1980s it was possible to do an actual experiment to test this," Ryckman said, "and in fact it was shown that the world of quantum particles is indeed 'entangled.'"
In other words, those experiments showed that the 1930 understanding of quantum mechanics was correct, and that Einstein and Bell were barking up the wrong tree.
Ryckman attempts to restore the great physicist's reputation in his new book, Einstein, co-written with Arthur Fine, professor emeritus of philosophy at the University of Washington. The book is slated to be published by the Routledge Philosophers series in 2015, the centennial of the theory of relativity.
Ryckman is yet another crackpot idolizing Einstein.
"There is little mainstream research in the foundations of quantum mechanics," Ryckman said. "The reason is that most physicists consider it unproductive and not likely to be successful. This is the attitude that is taught to students."
There is too much research that is nothing more than rehashed disproved ideas.

Friday, April 18, 2014

Counterfactuals: Soft Science

Research papers in the soft sciences usually use a type of counterfactual called a null hypothesis. The object of such a paper is often to disprove a hypothesis by collecting data and applying statistical analysis.

A pharmaceutical company might do a controlled clinical study of a new drug. Half of the test subjects get the new drug, and half get a placebo. The null hypothesis is that the drug and the placebo are equally effective. The company hopes to prove that the null hypothesis is false, and that the drug is better than the placebo. The proof consists of a trial with more subjects doing better with the drug, and a statistical argument that the difference was unlikely to be pure luck.

To see how a statistical disproof of a null hypothesis would work, consider a trial consisting of 100 coin tosses. The null hypothesis is that heads and tails are equally likely. That means that we would get about 50 heads in a trial, on average, with the variation being a number called "sigma". The next step in the analysis is to figure out what sigma is. In this case, for a fair coin, sigma is 5. That means that the number of heads in a typical trial run will differ from 50 by about 5. Two thirds of the trials will be within one sigma, or between 45 and 55. 95% will be within two sigmas, or between 40 and 60. 99.7% will be within three sigmas, or between 35 and 65.
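
A quick simulation confirms this arithmetic (a minimal sketch in Python):

```python
import math
import random

n, p = 100, 0.5
sigma = math.sqrt(n * p * (1 - p))   # standard deviation of the head count
print(sigma)                         # 5.0

# Run many 100-toss trials and check the one/two/three sigma coverage.
trials = [sum(random.random() < p for _ in range(n)) for _ in range(100_000)]
for k in (1, 2, 3):
    frac = sum(abs(t - n * p) <= k * sigma for t in trials) / len(trials)
    print(f"within {k} sigma: {frac:.3f}")
# Prints roughly 0.73, 0.96, 0.998 -- near the 68/95/99.7 rule, pushed up
# a little because the head count only takes whole-number values.
```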

Thus you can prove that a coin is biased by tossing it 100 times. If you get more than 65 heads, then either you were very unlucky or the chance of heads was more than 50%. A company can show that its drug is effective by giving it to 100 people, and showing that it is better than the placebo more than 65 times. Then the company can publish a study saying that the probability of getting data that extreme under the (counterfactual) null hypothesis is 0.01 or less. That probability is called the p-value. A p-value of 0.01 means that the company can claim that the drug is effective, with 99% confidence.
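
The exact tail probability behind that claim can be computed directly (a sketch; math.comb needs Python 3.8 or later):

```python
import math

n = 100
# Chance of more than 65 heads from a fair coin: the one-sided p-value.
p_value = sum(math.comb(n, k) for k in range(66, n + 1)) / 2 ** n
print(p_value)    # about 0.001, comfortably below the 0.01 claimed above
```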

The p-value is the leading statistic for getting papers published and drugs approved, but it does not really confirm a hypothesis. It just shows an inconsistency between a dataset and a counterfactual hypothesis.

As a practical matter, the p-value is just a statistic that allows journal editors an easier decision on whether to publish a paper. A p-value under 0.05 is considered statistically significant, and not otherwise. It does not mean that the paper's conclusions are probably true.

A Nature mag editor writes:
scientific experiments don't end with a holy grail so much as an estimate of probability. For example, one might be able to accord a value to one's conclusion not of "yes" or "no" but "P<0.05", which means that the result has a less than one in 20 chance of being a fluke. That doesn't mean it's "right".

One thing that never gets emphasised enough in science, or in schools, or anywhere else, is that no matter how fancy-schmancy your statistical technique, the output is always a probability level (a P-value), the "significance" of which is left for you to judge – based on nothing more concrete or substantive than a feeling, based on the imponderables of personal or shared experience. Statistics, and therefore science, can only advise on probability – they cannot determine The Truth. And Truth, with a capital T, is forever just beyond one's grasp.
This explanation is essentially correct, but some scientists who should know better argue that it is wrong and anti-science. A fluke is an accidental (and unlikely) outcome under the (counterfactual) null hypothesis. The scientific paper says that either the experiment was a fluke or the null hypothesis was wrong. The frequentist philosophy that underlies the computation does not allow giving a probability on a hypothesis. So the reader is left to deduce that the null hypothesis was wrong, assuming the experiment was not a fluke.

The core of the confusion is over the counterfactual. Some people would rather ignore the counterfactual, and instead think about a subjective probability for accepting a given hypothesis. Those people are called Bayesian, and they argue that their methods are better because they more completely use the available info. But most science papers use the logic of p-values to reject counterfactuals because assuming the counterfactual requires you to believe that the experiment was a fluke.

Hypotheses are often formulated by combing datasets and looking for correlations. For example, if a medical database shows that some of the same people suffer from obesity and heart disease, one might hypothesize that obesity causes heart disease. Or maybe that heart disease causes obesity. Or that overeating causes both obesity and heart disease, but they otherwise don't have much to do with each other.

The major caution to this approach is that correlation does not imply causation. A correlation can tell you that two measures are related, but the relation is symmetric and cannot say what causes what. To establish causality requires some counterfactual analysis. The simplest way in a drug study is to randomly give some patients a placebo instead of the drug in question. That way, the intended treatment can be compared to counterfactuals.
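
A toy simulation shows the difference that randomization makes. In this hypothetical model, overeating causes both obesity and heart disease, while obesity itself does nothing:

```python
import random

random.seed(0)

# Hypothetical model: overeating raises heart-disease risk; obesity itself
# has no causal effect on disease.
def sample(force_obesity=None):
    overeats = random.random() < 0.5
    obesity = overeats if force_obesity is None else force_obesity
    disease = random.random() < (0.4 if overeats else 0.1)
    return obesity, disease

def disease_rate(pop, obese):
    group = [d for o, d in pop if o == obese]
    return sum(group) / len(group)

# Observational data: obesity and disease look strongly associated.
obs = [sample() for _ in range(100_000)]
print(disease_rate(obs, True), disease_rate(obs, False))   # about 0.4 vs 0.1

# Randomly assigning obesity breaks the association, exposing the
# counterfactual fact that obesity is not the cause in this model.
rct = [sample(force_obesity=random.random() < 0.5) for _ in range(100_000)]
print(disease_rate(rct, True), disease_rate(rct, False))   # both about 0.25
```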

A counterfactual theory of causation has been worked out by Judea Pearl and others. His 2000 book begins:
Neither logic, nor any branch of mathematics had developed adequate tools for managing problems, such as the smallpox inoculations, involving cause-effect relationships. Most of my colleagues even considered causal vocabulary to be dangerous, avoidable, ill-defined, and nonscientific. "Causality is endless controversy," one of them warned. The accepted style in scientific papers was to write "A implies B" even if one really meant "A causes B," or to state "A is related to B" if one was thinking "A affects B."
His theory is not particularly complex, and could have been worked out a century earlier. Apparently there was resistance to analyzing counterfactuals. Even the great philosopher Bertrand Russell hated causal analysis.

Many theories seem like plausible explanations for observations, but ultimately fail because they offer no counterfactual analysis. For example, the famous theories of Sigmund Freud tell us how to interpret dreams, but do not tell us how to recognize a false interpretation. A theory is not worth much without counterfactuals.

Tuesday, April 15, 2014

How often are scientific theories overturned?

The Dilbert cartoonist posts whimsical ideas all the time, but only gets hate mail if he says something skeptical about biological evolution or global warming. Those are sacred cows of today's intellectual left.

He now writes:
Let's get this out of the way first...

In the realm of science, a theory is an idea that is so strongly supported by data and prediction that it might as well be called a fact. But in common conversation among non-scientists, "theory" means almost the opposite. To the non-scientist, calling something a theory means you don't have enough data to confirm it.

I'll be talking about the scientific definition of a theory in this post. And I have one question that I have seen asked many times (unsuccessfully) on the Internet: How often are scientific theories overturned in favor of new and better theories? ...

Note to the Bearded Taint's Worshippers: Evolution is a scientific fact. Climate change is a scientific fact. When you quote me out of context - and you will - this is the paragraph you want to leave out to justify your confused outrage.
He has taken his definition of theory from his evolutionist critics, like PZ Myers, but I do not see the term used that way. Physicists use terms like "string theory" even tho it is not supported by any facts or data at all.

I also don't see non-scientists using the word to mean the opposite. Not often, anyway. The first I saw was when Sean M. Carroll was on PBS TV explaining BICEP2 and cosmic inflation. As you can see in the video, the dopey PBS news host asks:
Those predictions have always been theories. How do you then go about proving a theory not to be a theory, and is that what we have actually done here? Has it been proven? [at 2:50]
With exceptions like this, my experience is that scientists and non-scientists use the term "theory" in the same way. Eg, global warming is a theory whether you accept the IPCC report or not.

A comment points out the Wikipedia article: Superseded scientific theories.

To answer the question, you first have to agree on what an overturned theory is. Did Copernicus overturn Ptolemy? Did general relativity overturn Newtonian gravity?

I would say that these theories were embellished, but not overturned. The old theories continued to work just as well for nearly all situations.

You might say that the Bohr atom has been overturned, but it was never more than a heuristic model, and it is still a good heuristic model. Not as good as quantum mechanics, but still a useful way of thinking about atoms.

Monday, April 14, 2014

The equation and the bomb

The London Guardian reports:
This is the most famous equation in the history of equations. It has been printed on countless T-shirts and posters, starred in films and, even if you've never appreciated the beauty or utility of equations, you'll know this one. And you probably also know who came up with it – physicist and Nobel laureate Albert Einstein. ...

It would be nice to think that Einstein's equation became famous simply because of its fundamental importance in making us understand how different the world really is to how we perceived it a century ago. But its fame is mostly because of its association with one of the most devastating weapons produced by humans – the atomic bomb. The equation appeared in the report, prepared for the US government by physicist Henry DeWolf Smyth in 1945, on the Allied efforts to make an atomic bomb during the Manhattan project. The result of that project led to the death of hundreds of thousands of Japanese citizens in Hiroshima and Nagasaki.
Is this saying that the equation was first connected to the bomb in 1945? The first bomb had already been built by then.

I am not sure the equation did have much to do with the atomic bomb. The energy released by uranium fission was explained by the electrostatic potential energy. That is, protons repel each other due to their like electric charge, and so a lot of energy must have been needed to bind them together in a nucleus. Splitting the nucleus is like releasing a compressed metal spring.
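
A rough Coulomb estimate supports this picture; the fragment charges and separation below are illustrative guesses, not measured values:

```python
k = 8.99e9          # Coulomb constant, N*m^2/C^2
e = 1.602e-19       # elementary charge, C

# Electrostatic repulsion between two touching fission fragments, taking
# illustrative charges Z=46 and Z=36 with centers about 1.2e-14 m apart.
energy_joules = k * (46 * e) * (36 * e) / 1.2e-14
print(energy_joules / 1.602e-13, "MeV")   # about 200 MeV, the right scale
```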

Understanding the energy of the H-bomb requires considering the strong nuclear force. One of the first applications of quantum mechanics was George Gamow figuring out in 1928 that protons could tunnel thru the electrostatic repulsion to explain fusion in stars.

The relation between mass and energy was first given by Lorentz in 1899. He gave formulas for how the mass of an object increases as energy is used to accelerate it. This was considered the most striking and testable aspect of relativity theory, and it was confirmed in experiments in 1902-1904. Einstein wrote a paper in 1906 crediting Poincare with E=mc^2 in a 1900 paper.

Saturday, April 12, 2014

Aaronson attempts a more honest sell

Computer scientist Scott Aaronson previously urged telling the truth when selling quantum computing to the public, and he has posted an attempt on PBS:
A quantum computer is a device that could exploit the weirdness of the quantum world to solve certain specific problems much faster than we know how to solve them using a conventional computer. Alas, although scientists have been working toward the goal for 20 years, we don’t yet have useful quantum computers. While the theory is now well-developed, and there’s also been spectacular progress on the experimental side, we don’t have any computers that uncontroversially use quantum mechanics to solve a problem faster than we know how to solve the same problem using a conventional computer.
It is funny how he can claim "spectacular progress" and yet no speedup whatsoever. It is as if the Wright brothers claimed spectacular progress in heavier-than-air flight, but had never left the ground. Or progress in perpetual motion machines.
But is there anything that could support such a hope? Well, quantum gravity might force us to reckon with breakdowns of causality itself, if closed timelike curves (i.e., time machines to the past) are possible. A time machine is definitely the sort of thing that might let us tackle problems too hard even for a quantum computer, as David Deutsch, John Watrous and I have pointed out. To see why, consider the “Shakespeare paradox,” in which you go back in time and dictate Shakespeare’s plays to him, to save Shakespeare the trouble of writing them. Unlike with the better-known “grandfather paradox,” in which you go back in time and kill your grandfather, here there’s no logical contradiction. The only “paradox,” if you like, is one of “computational effort”: somehow Shakespeare’s plays pop into existence without anyone going to the trouble to write them!
Now this is science fiction.
But cooling takes energy. So, is there some fundamental limit here? It turns out that there is. Suppose you wanted to cool your computer so completely that it could perform about 10^43 operations per second — that is, about one operation per Planck time (where a Planck time, ~10^-43 seconds, is the smallest measurable unit of time in quantum gravity). To run your computer that fast, you’d need so much energy concentrated in so small a space that, according to general relativity, your computer would collapse into a black hole!
Okay, if my computer ever runs that fast, I'll worry about being sucked into a black hole.

He also claims that if they can ever make true qubits, then they could simulate some dumbed-down models of quantum mechanics. And maybe the qubits could help with quantum gravity, if anyone can figure out what that is.

I guess this is why quantum computing is usually hyped with dubious claims about breaking internet security systems.

Thursday, April 10, 2014

New movie on geocentrism

NPR radio reports on a trailer for a new documentary:
It has the look and feel of a fast-paced and riveting science documentary.

The trailer opens with actress Kate Mulgrew (who starred as Capt. Janeway in Star Trek: Voyager) intoning, "Everything we think we know about our universe is wrong." That's followed by heavyweight clips of physicists Michio Kaku and Lawrence Krauss.

Kaku tells us, "There is a crisis in cosmology," and Krauss says, "All of these things are rather strange, and we don't know why they are occurring right now."

And then, about 1:17 into the trailer, comes the bombshell: The film's maker, Robert Sungenis, tells us, "You can go on some websites of NASA and see that they've started to take down stuff that might hint to a geocentric [Earth-centered] universe."

The film, which the trailer promises will be out sometime this spring, is called The Principle. Besides promoting the filmmaker's geocentric point of view, it seems to be aimed at making a broader point about man's special place in a divinely created universe.
Max Tegmark is also in the trailer. Kaku, Krauss, Tegmark, and mainstream physics documentaries say kooky stuff all the time. If this movie implies that Kaku and Krauss have some sympathies for geocentrism, it should not be any more embarrassing than many other interviews.

A central premise of relativity is that motion is relative, and that the covariant equations of cosmology can be written in any frame. So a geocentric frame is a valid frame to use. This movie apparently goes farther and says that the geocentric frame is superior, but I don't see how that is any wackier than many-worlds or some of the theories coming out of physics today.

Krauss denies responsibility:
It is, after all, impossible in the modern world to shield everyone from nonsense and stupidity. What we can do is provide the tools, through our educational system, for people to be able to tell sense from nonsense. These tools include the scientific method, skeptical questioning, empirical evidence, verifying sources, etc.

So, for those of you who are scandalized that a film narrated by a well-known TV celebrity with some well-known scientists promotes geocentrism, here is my suggestion: Let’s all stop talking about it from today on.
That celebrity says:
I understand there has been some controversy about my participation in a documentary called THE PRINCIPLE. Let me assure everyone that I completely agree with the eminent physicist Lawrence Krauss, who was himself misrepresented in the film, and who has written a succinct rebuttal in SLATE. I am not a geocentrist, nor am I in any way a proponent of geocentrism. ... I was a voice for hire, and a misinformed one, ...
Lumo says they deserve some criticism:
I think that their hype about the coming revolutions in cosmology is untrue, easily to be misinterpreted so that it is dangerously untrue, and this hype ultimately does a disservice to science although any hype is probably good enough for those who want to remain visible as "popularizers of science".
This reminds me of gripes about the 2004 movie What the Bleep Do We Know!?. Some scientists grumbled about it exaggerating the mysteriousness of quantum mechanics.

Wednesday, April 9, 2014

Counterfactuals: Probability

Once you agree that the past is definite and the future is uncertain, then probability theory is the natural way to discuss the likelihood of alternatives. That is, if you believe in counterfactuals, then different things could happen, and quantifying those leads to probability.

Probability might seem like a simple concept, but there are different probability interpretations. The frequentist school believes that chance is objective, and the Bayesians say that probability is just a measure of someone's subjective belief.

The frequentists say that they are more scientific because they are more objective. The Bayesians say that they are more scientific because they more fully use the available info.

Mathematically, the whole idea of chance is a useful fiction. It is just a way of using formulas for thinking about uncertainty. There is no genuine uncertainty in math. A random variable is just a function on some sample space, and the formulas are equally valid for any interpretation.

Coin tosses are considered random for the purpose of doing controlled experiments. It does not matter to the experiment if some theoretical analysis of Newtonian forces on the coin is able to predict the coin being heads or tails. The causal factors on the coin will be statistically independent of whatever is being done in the experiment. There is no practical difference between the coin being random and being statistically independent from whatever else is being measured.

It is sometimes argued that radioactive decay is truly random, but there is really no physical evidence that it is any more random than coin tosses. We can measure the half-life of potassium, but not predict individual decays. According to our best theories, a potassium-40 nucleus consists of 120 quarks bouncing around a confined region. Maybe if we understood the strong interaction better and had precise data for the wave function, we could predict the decay.

The half-life of potassium-40 is about a billion years, so any precise prediction seems extremely unlikely. But we do not know that it is any different from putting dice in a box and shaking it for a billion years.
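
On the standard exponential-decay model, all we can say about one nucleus is a probability computed from the half-life (a sketch; the half-life figure is approximate):

```python
half_life_years = 1.25e9        # potassium-40, approximately

# Chance that a single nucleus decays within t years: 1 - 2^(-t/half-life).
def p_decay(t_years):
    return 1 - 2 ** (-t_years / half_life_years)

print(p_decay(1.0))              # about 5.5e-10 in any given year
print(p_decay(half_life_years))  # 0.5 over one half-life, by definition
```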

All fields of science seek to quantify counterfactuals, and so they use probabilities. They may use frequentist or Bayesian statistics, and may debate which is the better methodology. Only quantum physicists try to raise the issue to one of fundamental reality, and argue whether the probability is psi-ontic or psi-epistemic. The terms come from philosophy, where ontology is about what is real, and epistemology is about knowledge. So the issue is whether the wave function psi is used to calculate probabilities that are real, or that are about our knowledge of the system.

It seems like a stupid philosophical point, but the issue causes endless confusion about Schroedinger's cat and other paradoxes. Physicist N. David Mermin argues that these paradoxes disappear if you take a Bayesian/psi-epistemic view, as was common among the founders of quantum mechanics 80 years ago. He previously argued that quantum probability was objective, like what Karl Popper called "propensity". That is the idea that probability is something physical, but nobody has been able to demonstrate that there is any such thing.

Max Tegmark in the March 12, 2014 episode of Through the Wormhole uses multiple universes to deny randomness:
Luck and randomness aren't real. Some things feel random, but that's just how it subjectively feels whenever you get cloned. And you get cloned all the time. ... There is no luck, just cloning.
There are more and more physicists who say this nonsense, but there is not a shred of evidence that anyone ever gets cloned. There is just a silly metaphysical argument that probabilities do not exist because all possible counterfactuals are real in some other universe. These universes do not interact with each other, so there can be no way to confirm it.

Scott Aaronson argues that the essence of quantum mechanics is that probabilities can be negative. But the probabilities are not really negative. The wave function values can be positive, negative, complex, spinor, or vector, and they can be used to calculate probabilities, but those probabilities are never negative.
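
The point is easy to check numerically: amplitudes may be negative or complex, but the Born-rule probabilities computed from them never are. A minimal sketch:

```python
import numpy as np

psi = np.array([1.0, -1.0j]) / np.sqrt(2)  # hypothetical two-state amplitudes
probs = np.abs(psi) ** 2                   # Born rule: probability = |psi|^2
print(probs, probs.sum())                  # [0.5 0.5] and 1.0, never negative
```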

There is no experiment to tell us whether the probabilities are real. It is not a scientific question. Even tho the Bayesian view solves a lot of problems, as Mermin says, most physicists today insist that phenomena like radioactive decay and spin quantization prove that the probabilities are real.

Quantum mechanics supposedly makes essential use of probabilities. But that is only Born's interpretation. Probabilities are no more essential to quantum mechanics than to any other branch of science, as I explained here.

Monday, April 7, 2014

How Should Humanity Steer the Future?

Time for the annual FQXi essay contest:
Contest closes to entries on April 18, 2014. ...

The theme for this Essay Contest is: "How Should Humanity Steer the Future?".

Dystopic visions of the future are common in literature and film, while optimistic ones are more rare. This contest encourages us to avoid potentially self-fulfilling prophecies of gloom and doom and to think hard about how to make the world better while avoiding potential catastrophes.

Our ever-deepening understanding of physics has enabled technologies and ways of thinking about our place in the world that have dramatically transformed humanity over the past several hundred years. Many of these changes have been difficult to predict or control—but not all.

In this contest we ask how humanity should attempt to steer its own course in light of the radically different modes of thought and fundamentally new technologies that are becoming relevant in the coming decades.

Possible topics or sub-questions include, but are not limited to:

* What is the best state that humanity can realistically achieve?

* What is your plan for getting us there? Who implements this plan?

* What technology (construed broadly to include practices and techniques) does your plan rely on? What are the risks of those technologies? How can those risks be mitigated?

(Note: While this topic is broad, successful essays will not use this breadth as an excuse to shoehorn in the author's pet topic, but will rather keep as their central focus the theme of how humanity should steer the future.)

Additionally, to be consonant with FQXi's scope and goals, essays should be sure to touch on issues in physics and cosmology, or closely related fields, such as astrophysics, biophysics, mathematics, complexity and emergence, and the philosophy of physics.
I am drafting a submission, based on recent postings to this blog. No, I am not going to shoehorn anything about quantum computing, as that subject may have gotten me blackballed in the past.

Saturday, April 5, 2014

New Penrose interview

Roger Penrose is writing a book on "Fashion, Faith, and Fantasy", and gives this interview:
Sir Roger Penrose calls string theory a "fashion," quantum mechanics "faith," and cosmic inflation a "fantasy." Coming from an armchair theorist, these declarations might be dismissed. But Penrose is a well-respected physicist who co-authored a seminal paper on black holes with Stephen Hawking. What's wrong with modern physics—and could alternative theories explain our observations of the universe?
He has his own speculative theories, such as the BICEP2 data not being evidence of inflation or gravity waves, but of magnetic fields in a previous universe before the big bang.

A lot of people are skeptical about string theory and inflation models. He thinks that quantum mechanics is incomplete because we do not understand wave function collapse.

He is one of the leading mathematical physicists alive today, and his ideas should be taken seriously.

Friday, April 4, 2014

Counterfactuals: Causality

The concept of counterfactuals requires not just a reasonable theory of time but also a reasonable theory of causality.

Causality has confounded philosophers for centuries. Leibniz believed in the Principle of Sufficient Reason that everything must have a reason or cause. Bertrand Russell denied the law of causality, and argued that science should not seek causes.

Of course causality is central to science, and to how we personally make sense out of the world.

It is now commonplace for scientists to deny free will, particularly among popular exponents of atheism, evolution, and leftist politics. Philosopher Massimo Pigliucci rebuts Jerry Coyne and others, and John Horgan rebuts Francis Crick.

The leading experiments against free will are those by Benjamin Libet and John-Dylan Haynes. They show that certain brain processes take more time than is consciously realized, but they do not refute free will. See also contrary experiments.

The other main argument against free will is that a scientific worldview requires determinism. Eg, Jerry Coyne argues against contra-causal free will, and for biological determinism of behavior. Einstein hated quantum mechanics because it allowed for the possibility of free will.

A common belief is that the world must be either deterministic or random, but the word "random" is widely misunderstood. Mathematically, a random process is defined by the Kolmogorov axioms, and a random variable is a function on a measure-1 sample space. That is, it is just a way of parameterizing outcomes based on some measurable set of samples. Whether or not this matches your intuition about random variables depends on your choice of Probability interpretation.
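
For reference, the axioms are short. A probability measure P on a sample space Omega satisfies:

```latex
P(E) \ge 0 \ \text{for all events } E, \qquad P(\Omega) = 1, \qquad
P\!\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} P(E_i)
\ \text{for pairwise disjoint } E_i .
```

A random variable is then just a measurable function from Omega to the real numbers; nothing in the definition involves any genuine uncertainty.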

Wikipedia has difficulty defining what is random:
Randomness means different things in various fields. Commonly, it means lack of pattern or predictability in events.

The Oxford English Dictionary defines "random" as "Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice; haphazard." This concept of randomness suggests a non-order or non-coherence in a sequence of symbols or steps, such that there is no intelligible pattern or combination.
In mathematics, the digits of Pi (π) can be said to be random or not random, depending on the context. Likewise scientific observations may or may not be called random, depending on whether there is a good explanation. Leading evolutionists Richard Dawkins and S.J. Gould had big disputes over whether evolution was random.

There is no scientific test for whether the world is deterministic or random or something else. You can drop a ball repeatedly and watch it fall the same way, so that makes the experiment appear deterministic. You will also see small variations that appear random. You can also put a Geiger detector on some uranium, and hear intermittent clicks at seemingly random intervals. But the uranium nucleus may be a deterministic chaotic system of quarks. We can never know, as any attempt to observe those quarks will disturb them.

Likewise there can be no scientific test for free will. You would have to clone a man, replicate his memories and mental state, and see if he makes the same decisions. Such an experiment could never be done, and would not convince anyone even if it could be, as it is not clear how free will would be distinguished from randomness. Free will is a metaphysical issue, not a scientific one.

Even if you believe in determinism, it is still possible to believe in free will.

A debate between determinists Dan Dennett and Sam Harris was over statements like:
If determinism is true, the future is set — and this includes all our future states of mind and our subsequent behavior. And to the extent that the law of cause and effect is subject to indeterminism — quantum or otherwise — we can take no credit for what happens. There is no combination of these truths that seems compatible with the popular notion of free will.
But that is exactly what quantum mechanics is -- a combination of those facts that is compatible with the popular notion of free will.

In biology, this dichotomy between determinism and randomness has been called the Causalist-Statisticalist Debate.

At the core of their confusion is a simple counterfactual:
Consider the case where I miss a very short putt and kick myself because I could have holed it. It is not that I should have holed it if I had tried: I did try, and missed. It is not that I should have holed it if conditions had been different: that might of course be so, but I am talking about conditions as they precisely were, and asserting that I could have holed it. There is the rub. [Austin’s example]
The problem here is that they think that determinism is a philosophical necessity, and so they fail to grasp the meaning of a counterfactual.

In public surveys, people overwhelmingly reject this deterministic view:
Imagine a universe (Universe A) in which everything that happens is completely caused by whatever happened before it. This is true from the very beginning of the universe, so what happened in the beginning of the universe caused what happened next, and so on right up until the present. For example one day John decided to have French Fries at lunch. Like everything else, this decision was completely caused by what happened before it. So, if everything in this universe was exactly the same up until John made his decision, then it had to happen that John would decide to have French Fries.
And so an atheist biologist writes:
To me, the data show that the most important task for scientists and philosophers is to teach people that we live in Universe A.
That is a tough sell, as Universe A is contrary to common sense, experience, and our best scientific theories.

Steven Weinberg has argued that the laws of physics are causally complete, but also that we are blindly searching for the final theory that will solve the mysteries of the universe. A final theory would explain quark masses and cancel gravity infinities.

Einstein had an almost religious belief in causal determinism, and many others seem to believe that a scientific outlook requires such a view. On the other hand, a majority of physicists today assert (incorrectly) that quantum mechanics has somehow proved that nature is intrinsically random.

Quantum mechanics is peculiar in that it leaves the possibility of free will. It is the counterexample to the notion that a scientific theory must be causal and deterministic, or otherwise contrary to free will. If you tried to concoct a fundamental physical theory that could accommodate free will, it is hard to imagine one better suited than quantum mechanics.

Some interpretations of quantum mechanics are deterministic and some are not, and so, as Scott Aaronson explains, determinism is not a very meaningful concept in the context of quantum mechanics.

If you reject free will and the flow of time, and believe that everything is determined by God or the Big Bang, then counterfactuals make no sense. Most of time travel in fiction makes no sense either. The concept of counterfactuals depends on the possibility of alternate events, and on time moving forward into an uncertain future.

Regardless of brain research and the scientific underpinnings of free will, counterfactuals are essential to how human beings understand the progress of time and the causality of events.

Tuesday, April 1, 2014

Urging the truth about quantum computing

MIT computer scientist Scott Aaronson has confessed he has physics envy:
I confess that my overwhelming emotion on watching Particle Fever was one of regret — regret that my own field, quantum computing, has never managed to make the case for itself the way particle physics and cosmology have, in terms of the human urge to explore the unknown.

See, from my perspective, there’s a lot to envy about the high-energy physicists.  Most importantly, they don’t perceive any need to justify what they do in terms of practical applications.  Sure, they happily point to “spinoffs,” like the fact that the Web was invented at CERN.  But any time they try to justify what they do, the unstated message is that if you don’t see the inherent value of understanding the universe, then the problem lies with you. ...

Now contrast that with quantum computing.  To hear the media tell it, a quantum computer would be a powerful new gizmo, sort of like existing computers except faster.  (Why would it be faster?  Something to do with trying both 0 and 1 at the same time.)
He blames the media?! No, every single scientist in this field tells glowing stories about the inevitable breakthrus in quantum cryptography and computing. Including Aaronson.

Lots of scientists over-hype their work, but the high energy physicists and astronomers have scientific results to show. Others are complete washouts after decades of work and millions in funding. String theorists have never been able to show any relationship between the real world and their 10-dimensional models. Quantum cryptography has never found any practical application to information security. Quantum computing has never found even one scalable qubit or any quantum speedup. Multiverse theories have no testable implications and are mathematically incoherent.

Of course a conspiracy of lies brings in the grant money:
Foolishly, shortsightedly, many academics in quantum computing have played along with this stunted vision of their field — because saying this sort of thing is the easiest way to get funding, because everyone else says the same stuff, and because after you’ve repeated something on enough grant applications you start to believe it yourself. All in all, then, it’s just easier to go along with the “gizmo vision” of quantum computing than to ask pointed questions like:

What happens when it turns out that some of the most-hyped applications of quantum computers (e.g., optimization, machine learning, and Big Data) were based on wildly inflated hopes — that there simply isn’t much quantum speedup to be had for typical problems of that kind, that yes, quantum algorithms exist, but they aren’t much faster than the best classical randomized algorithms? ...

I’ll tell you: when this happens, the spigots of funding that once flowed freely will dry up, and the techno-journalists and pointy-haired bosses who once sang our praises will turn to the next craze.  And they’re unlikely to be impressed when we protest, “no, look, the reasons we told you before for why you should support quantum computing were never the real reasons!  and the real reasons remain as valid as ever!”

In my view, we as a community have failed to make the honest case for quantum computing — the case based on basic science — because we’ve underestimated the public.  We’ve falsely believed that people would never support us if we told them the truth: that while the potential applications are wonderful cherries on the sundae, they’re not and have never been the main reason to build a quantum computer.  The main reason is that we want to make absolutely manifest what quantum mechanics says about the nature of reality.  We want to lift the enormity of Hilbert space out of the textbooks, and rub its full, linear, unmodified truth in the face of anyone who denies it.  Or if it isn’t the truth, then we want to discover what is the truth.
If the quantum computer scientists were honest, they would admit that they are just confirming an 80-year-old quantum theory.

Update: Scott adds:
Quantum key distribution is already practical (at least short distances). The trouble is, it only solves one of the many problems in computer security (point-to-point encryption), you can’t store the quantum encrypted messages, and the problem solved by QKD is already solved extremely well by classical crypto. Oh, and QKD assumes an authenticated classical channel to rule out man-in-the-middle attacks. ... I like to say that QKD would’ve been a killer app for quantum information, in a hypothetical world where public-key crypto had never existed.
That's right, and quantum cryptography is commercially worthless for those reasons. Those who claim some security advantage are selling snake oil.

Update: Scott adds:
Well, it’s not just the people who flat-out deny QM. It’s also the people like Gil Kalai, Michel Dyakonov, Robert Alicki, and possibly even yourself (in previous threads), who say they accept QM, but then hypothesize some other principle on top of QM that would “censor” quantum computing, or make the effort of building a QC grow exponentially with the number of qubits, or something like that, and thereby uphold the classical Extended Church-Turing Thesis. As I’ve said before, I don’t think they’re right, but I think the possibility that they’re right is sufficiently sane to make it worth doing the experiment.
I would not phrase it that way. Scott's bias is that he is theoretical computer scientist, and he just wants some mathematical principles so he can prove theorems.

I accept quantum mechanics to the extent that it has been confirmed, but not the fanciful extrapolations like many-worlds and quantum computing. I am skeptical about those because they seem unjustified by known physics, contrary to intuition, and most of all, because attempts to confirm them have failed.

I am also skeptical about supersymmetry (SUSY). I do not know any principle that would censor SUSY. The main reason to be skeptical is that SUSY is a fanciful and wildly speculative hypothesis that is contradicted by the known experimental evidence. Likewise I am skeptical about quantum computing.

Update: Scott prefers to compare QC to the Higgs boson rather than to SUSY, presumably because the Higgs has been found, and adds:
My own view is close to that of Greg Kuperberg in comment #73: yes, it’s conceivable that the skeptics will turn out to be right, but if so, their current explanations for how they could be right are grossly inadequate. ...

If, hypothetically, QC were practical but only on the surface of Titan, then I’d count that as a practical SUCCESS! The world’s QC center could simply be installed on Titan by robotic spacecraft, and the world’s researchers could divvy up time to dial in to it, much like with the Hubble telescope.
Spoken like a theorist. He does not want his theorems to be vacuous.