Saturday, November 28, 2015

Quantum gravity looks for a miracle

Peter Woit recommends: Lance Dixon Explains Quantum Gravity
What questions do researchers hope to answer with quantum gravity?

Quantum gravity could help us answer important questions about the universe.

For example, quantum effects play a role near black holes ...

Researchers also hope to better understand the very first moments after the Big Bang, ...

One has to realize, though, that processes on Earth occur at much smaller energy scales, with unmeasurably small quantum corrections to gravity. With the LHC, for instance, we can reach energies that are a million billion times smaller than the Planck scale. Therefore, quantum gravity studies are mostly “thought experiments,” ...

What would be a breakthrough in the field?

It would be very interesting if someone miraculously found a theory that we could use to consistently predict quantum gravitational effects to much higher orders than possible today.
As you can infer, quantum gravity has no relation to observational science. All they are doing is thought experiments, and trying to do mathematical calculations that are consistent with their prejudices about the world. That supposed consistency is all they care about, as there is no observation or measurement that will ever have any bearing on what they do.
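Dixon's "million billion" figure is at least easy to verify with round numbers: the Planck energy is about 1.22 × 10^19 GeV, while the LHC reaches about 13 TeV. A quick sketch (the specific round values are my own assumptions):

```python
# Ratio of the Planck energy to the LHC's collision energy, round numbers.
planck_energy_gev = 1.22e19   # Planck energy, ~1.22e19 GeV
lhc_energy_gev = 1.3e4        # LHC, ~13 TeV

ratio = planck_energy_gev / lhc_energy_gev
print(f"ratio = {ratio:.1e}")   # about 1e15, i.e. a million billion
```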

Scott Aaronson writes about partying with his fellow rationalists, and adds:
he just wanted to grill me all evening about physics and math and epistemology. Having recently read this Nature News article by Ron Cowen, he kept asking me things like: “you say that in quantum gravity, spacetime itself is supposed to dissolve into some sort of network of qubits. Well then, how does each qubit know which other qubits it’s supposed to be connected to? Are there additional qubits to specify the connectivity pattern? If so, then doesn’t that cause an infinite regress?” I handwaved something about AdS/CFT, where a dynamic spacetime is supposed to emerge from an ordinary quantum theory on a fixed background specified in advance. But I added that, in some sense, he had rediscovered the whole problem of quantum gravity that’s confused everyone for almost a century: if quantum mechanics presupposes a causal structure on the qubits or whatever other objects it talks about, then how do you write down a quantum theory of the causal structures themselves?
These are all unanswerable questions, of course. There is no such thing as a network of qubits. A qubit is an idealization that could be used to build quantum computers, if such a thing exists and can be scaled up. But replacing spacetime with qubits is just a stupid quantum gravity pipe dream that has no bearing on reality.

The Nature article says:
A successful unification of quantum mechanics and gravity has eluded physicists for nearly a century. Quantum mechanics governs the world of the small — the weird realm in which an atom or particle can be in many places at the same time, and can simultaneously spin both clockwise and anticlockwise. Gravity governs the Universe at large — from the fall of an apple to the motion of planets, stars and galaxies — and is described by Albert Einstein’s general theory of relativity, announced 100 years ago this month. The theory holds that gravity is geometry: particles are deflected when they pass near a massive object not because they feel a force, said Einstein, but because space and time around the object are curved.

Both theories have been abundantly verified through experiment, yet the realities they describe seem utterly incompatible. And from the editors’ standpoint, Van Raamsdonk’s approach to resolving this incompatibility was strange. All that’s needed, he asserted, is ‘entanglement’: the phenomenon that many physicists believe to be the ultimate in quantum weirdness. Entanglement lets the measurement of one particle instantaneously determine the state of a partner particle, no matter how far away it may be — even on the other side of the Milky Way.
No, measuring a particle has no effect on a partner on the other side of the galaxy. It appears that people are so spooked by entanglement that you can babble nonsense and be taken seriously by a leading science journal.

It is not true that there is any incompatibility between relativity and quantum mechanics, in any practical sense. The supposed problems are in black holes or at the big bang, where much of what we know breaks down for other reasons. There is no foreseeable way to scientifically resolve what the quantum gravity folks want.

Here is Aaronson's example of a test for quantum gravity:
For example, if you could engineer a black hole to extreme precision (knowing the complete quantum state of the infalling matter), then wait ~10^67 years for the hole to evaporate, collecting all the outgoing Hawking radiation and routing it into your quantum computer for analysis, then it’s a prediction of most quantum theories of gravity that you’d observe the radiation to encode the state of the infalling matter (in highly scrambled form), and the precise way in which the state was scrambled might let you differentiate one theory of pre-spacetime qubits at the Planck scale from another one. (Note, however, that some experiments would also require jumping into the black hole as a second step, in which case you couldn’t communicate the results to anyone else who didn’t jump into the hole with you.)
This is science fiction from beginning to end. One rarely even knows the complete quantum state for a single particle, and never for anything complicated.
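The ~10^67-year figure itself is just the standard Hawking evaporation estimate. A back-of-the-envelope check for a solar-mass black hole, using t = 5120 π G² M³ / (ħ c⁴) with SI constants:

```python
import math

# Evaporation time of a Schwarzschild black hole via Hawking's formula:
#   t = 5120 * pi * G^2 * M^3 / (hbar * c^4)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34    # reduced Planck constant, J s
c = 2.998e8         # speed of light, m/s
M = 1.989e30        # one solar mass, kg
seconds_per_year = 3.156e7

t_seconds = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
t_years = t_seconds / seconds_per_year
print(f"t = {t_years:.1e} years")   # on the order of 10^67 years
```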

Van Raamsdonk endorses the wacky idea that entanglement and wormholes are the same thing, on a different scale.

The rationalists (such as the Less Wrong crowd) grapple with their own futuristic compatibility problem. They are convinced that AI robots will take over the Earth, and then will have no use for illogical humans. So they are determined to improve the logical functioning of humans so that they will be able to coexist with the machine super-intelligence. I think that the robots will exterminate us when they discover that our leading scientists believe that detecting a particle can have an instantaneous effect on the other side of the galaxy.

Thursday, November 26, 2015

NY Times celebrates general relativity centennial

NY Times editor and Einstein biographer Dennis Overbye writes on the general relativity centennial:
This is the general theory of relativity. It’s a standard trope in science writing to say that some theory or experiment transformed our understanding of space and time. General relativity really did. ...

Hardly anybody would be more surprised by all this than Einstein himself. The space-time he conjured turned out to be far more frisky than he had bargained for back in 1907. ...

Gravity was not a force transmitted across space-time like magnetism; it was the geometry of that space-time itself that kept the planets in their orbits and apples falling.

It would take him another eight difficult years to figure out just how this elastic space-time would work, during which he went from Bern to Prague to Zurich and then to a prestigious post in Berlin.
He makes it sound as if Einstein invented space-time geometry in 1907, and spent 8 years figuring out how to interpret gravity as geometry.

Physics books often say this also, but it is wrong. Poincare discovered the spacetime geometry in 1905, and was the first to apply it to gravity. As I wrote last month:
Many people assume that Einstein was a leader in this movement, but he was not. When Minkowski popularized the spacetime geometry in 1908, Einstein rejected it. When Grossmann figured out how to geometrize gravity with the Ricci tensor, Einstein wrote papers in 1914 saying that covariance is impossible. When a relativity textbook described the geometrization of gravity, Einstein attacked it as wrongheaded.
Einstein did make some contributions to the theory, but he should only be credited for what he did.

I previously criticized Overbye for the way he credited Einstein for special relativity:
But in Einstein's formulation did objects actually shrink? In a way, the message of relativity theory was that physics was not about real objects, rather, it concerned the measurement of real objects. ...

No such declaration of grandeur, of course, intruded on the flat and somewhat brisk tone of the paper. Albert simply presented his argument and in many cases left it to the reader to fill in the gaps and to realize the implications.
A more complete quote is here. That is a nice statement of the message of relativity, but it was Poincare's view, and not Einstein's.

Monday, November 23, 2015

Extraordinary pages in the history of thought

A SciAm blogger writes:
“The treatise itself, therefore, contains only twenty-four pages — the most extraordinary two dozen pages in the whole history of thought!”

This Hungarian postage stamp does not depict János Bolyai; no portraits of him survive. Tamás Dénes has an article on the "real face of János Bolyai."

“How different with Bolyai János and Lobachévski, who claimed at once, unflinchingly, that their discovery marked an epoch in human thought so momentous as to be unsurpassed by anything recorded in the history of philosophy or of science, demonstrating as had never been proved before the supremacy of pure reason at the very moment of overthrowing what had forever seemed its surest possession, the axioms of geometry.”

— George Bruce Halsted, on János Bolyai’s treatise on non-Euclidean geometry, The Science of Absolute Space.
Bolyai's non-Euclidean geometry was published in 1832.

Another geometry book says:
“The discoverers of noneuclidean geometry fared somewhat like the biblical king Saul. Saul was looking for some donkeys and found a kingdom. The mathematicians wanted merely to pick a hole in old Euclid and show that one of his postulates which he thought was not deducible from the others is, in fact, so deducible. In this they failed. But they found a new world, a geometry in which there are infinitely many lines parallel to a given line and passing through a given point; in which the sum of the angles in a triangle is less than two right angles; and which is nevertheless free of contradiction.”
The blog concludes:
These quotes all seem a bit dramatic for geometry, but it’s easy not to know how truly revolutionary the discovery of non-Euclidean geometry was. Halsted’s description of Bolyai’s paper as “the most extraordinary two dozen pages in the whole history of thought” certainly sounds hyperbolic, but can you find 24 other pages that compete with it?
The creation of non-Euclidean geometry by Bolyai, Gauss, and others was indeed a great vindication of geometry and the axiomatic method. It later became central to 20th century physics when Poincare and Minkowski created a theory of relativity based on it.

Wednesday, November 18, 2015

Carroll's big picture

Physicist Sean M. Carroll has a new book to be released next year, The Big Picture: On the Origins of Life, Meaning, and the Universe Itself:
This is Sean Carroll's most ambitious book yet: how the deep laws of nature connect to our everyday lives. He has taken us From Eternity to Here and to The Particle at the End of the Universe. Now for The Big Picture. This is a book that will stand on the shelf alongside the great humanist thinkers from Stephen Hawking and Carl Sagan to Daniel Dennett and E.O. Wilson. It is a new synthesis of science and the biggest questions humans ask about life, death, and where we are in the cosmos.

Readers learn the difference between how the world works at the quantum level, the cosmic level, and the human level; the emergence of causes from underlying laws; and ultimately how human values relate to scientific reality. This tour of, well, everything explains the principles that have guided the scientific revolution from Darwin and Einstein to the origins of life, consciousness, and the universe, but it also shows how an avalanche of discoveries over the past few hundred years has changed the world for us and what we think really matters. As Carroll eloquently demonstrates, our lives are dwarfed by the immensity of the universe and redeemed by our capacity to comprehend it and give it meaning.
Keep in mind that this is a guy who believes in the many-worlds interpretation (MWI) of quantum mechanics, where your every fantasy is really being played out in some parallel universe.

Update: Publication delayed until May 10, 2016.

Monday, November 16, 2015

Philosopher says scientism leads to leftism

A reader sends an example of a philosopher who supposedly has a scientific outlook. In this podcast, Alex Rosenberg trashes Economics as not being a science. He notes that while economists brag about being able to make certain types of predictions, they are usually completely unable to make any quantitative predictions with any reliability.

Rosenberg has a book on The Atheist's Guide to Reality: Enjoying Life without Illusions.

One chapter is titled "Never let your consious be your guide". I am not sure if he misspelled "conscience", or he is making a pun, or what. Either way, he says "science provides clear-cut answers": You have no mind, you have no free will, and you have no capacity for introspection, rational decision, or moral judgment.

In this interview, he proudly declares his devotion to Scientism, even tho some consider it a pejorative term. His true science is Physics, and he says he believes in bosons and fermions, and nothing more. But he never actually cites any scientific facts or theories to support any of his opinions. He shows no sign of knowing anything about Physics, except that particles are classified into bosons and fermions.

Even saying that everything is made of bosons and fermions is a little misleading. We have spacetime and quantum fields. Even the quantum vacuum is a nontrivial thing, and I would not say it is just made of bosons and fermions.

After nearly an hour of babbling about the merits of philosophical analysis, he gets to another subject that passes his scientific requirements: political leftism. That's right, the whole thing is just a build-up for a bunch of left-wing political opinions.

He denies that there are any natural rights, but strongly argues for a "core morality". This makes no sense to me.

He says he is "pro-choice" (ie, pro-abortion) but denies that people have any choices because his scientism implies a causal determinism that eliminates free will. So why is he in favor of free choice when there is no such thing?

He says that his scientism implies that we should be soft on crime, because it is inhumane to punish people for doing what they were pre-programmed to do. He also says it is wrong to reward people for hard work or successful business, because again, they are just doing what they were programmed to do. He concedes that we might want a market economy, because science shows that it works better, even tho economics is not a science. But people should never be allowed to get rich, because science proves that they will never deserve the riches.

This is all nonsense. It all rests on Laplacian determinism, but there is no scientific proof of that, and the great majority of physicists today reject it. Even if it were true, his reasoning doesn't even make any sense. He is like a Marxist who advocates pseudo-science with great certainty to fellow humanities professors who like the leftist conclusions and are too innumerate to see the fallacies.

A book review says:
Science – i.e. common sense – tells us that atheism is pretty much a certainty. The reason is quite straightforward. The second law of thermodynamics tells us that disorder and homogeneity steadily increase everywhere in the universe. Whatever physics has left to tell us, it almost certainly won’t contradict this fundamental law. But a purposeful agent, arranging things according to a conscious plan, would be transforming disorder to order. And this is never possible according to the second law (strictly speaking, it is possible, but so improbable as to be ruled out). This rules out most conceptions of God straight away. ...

Vividly and painstakingly, Rosenberg undermines our fundamental belief that we consciously direct our actions. He argues that it is impossible that any of the processes occurring in our brain can have representative content. They can’t be about anything at all. ... This means that our actions can’t, however much they seem to, be caused by thoughts about intended results, or indeed thoughts about anything at all. ...

Despite appearances, then, we aren’t capable of ordering the world by conscious design. In fact, our actions are never caused by conscious intentions. ...

Human life as a whole has no meaning, no goal, and no purpose. Neither our individual lives nor human history in general are governed by purposes – not our own, and not anybody else’s. ... Indeed, there is no ‘you’; the self is another one of those convenient fictions we use to roughly indicate blind processes that have no subjective centre.

The same goes for human history. All apparent progress – the spread of democracy or the global progression towards prosperity – are just local equilibria in a blind arms-race driven by blind selection.
This is neither science nor common sense. God could have created the universe in a very low entropy big bang, and intended that life evolve on Earth while entropy increases in the Sun.

Of course our actions are caused by conscious intentions. And human history has made progress.

I would rather stick to the science, but he does not have much science here, if any. What he has is philosophical nuttiness directed towards some sort of politically leftist nihilism.

Sunday, November 15, 2015

Where are the bits in the brain?

Information is reducible to bits, right? Zeros and ones. Maybe the quantum computer enthusiasts add qubits, and maybe the Bell spooks might say that non-bit information can travel faster than light.
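For the record, a qubit is not exotic: it is a normalized pair of complex amplitudes, and a measurement extracts only one classical bit from it. A minimal sketch (the equal-superposition state is just an illustration):

```python
import math
import random

# A qubit is a normalized pair of amplitudes (alpha, beta); measuring it
# yields one classical bit: 0 with probability |alpha|^2, else 1.
alpha = 1 / math.sqrt(2)
beta = 1 / math.sqrt(2)   # equal superposition

def measure():
    return 0 if random.random() < abs(alpha) ** 2 else 1

random.seed(0)
samples = [measure() for _ in range(100000)]
print(sum(samples) / len(samples))   # close to 0.5
```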

Brain scientists do not buy into this quantum stuff, so I thought that they would believe in bits. But apparently not:
Most neuroscientists believe that the brain learns by rewiring itself—by changing the strength of connections between brain cells, or neurons. But experimental results published last year, from a lab at Lund University in Sweden, hint that we need to change our approach. They suggest the brain learns in a way more analogous to that of a computer: It encodes information into molecules inside neurons and reads out that information for use in computational operations. ...

From a computational point of view, directions and distances are just numbers. ...

Neuroscientists have not come to terms with this truth. I have repeatedly asked roomfuls of my colleagues, first, whether they believe that the brain stores information by changing synaptic connections — they all say, yes — and then how the brain might store a number in an altered pattern of synaptic connections. They are stumped, or refuse to answer.
I guess not much is known about how the brain stores information.

Saturday, November 14, 2015

The Bell theorem hypotheses

A commenter disputes my contention that Bell's Theorem depends on an assumption of local hidden variables.

This may seem like an obscure technical point, but it helps explain differing views about Bell's Theorem:
Cornell solid-state physicist David Mermin has described the appraisals of the importance of Bell's theorem in the physics community as ranging from "indifference" to "wild extravagance".
There has been a consensus since about 1930 that hidden variable theory is contrary to the core of quantum mechanics. So another experiment confirming Bell's Theorem is just another affirmation of the legitimacy of those Nobel prizes in 1932 and 1933.

That explains the indifference. So the Bell spooks need a better argument to justify their wild extravagance.

Here is one such attempt, from Wikipedia:
Whereas Bell's paper deals only with deterministic hidden variable theories, Bell's theorem was later generalized to stochastic theories[18] as well, and it was also realised[19] that the theorem is not so much about hidden variables, as about the outcomes of measurements that could have been taken instead of the one actually taken. Existence of these variables is called the assumption of realism, or the assumption of counterfactual definiteness.
This is just a sneaky way of defining hidden variables. Quantum mechanics teaches that you cannot precisely measure position and momentum at the same time. If you assume that particles somehow have precise values for those in the absence of a measurement, then you are in the land of anti-QM hidden variables. If there were any merit to this, then much of XX century physics would be wrong.

John Bell himself later tried to fudge his theorem by pretending to be able to deduce hidden variables from other hypotheses:
Already at the time Bell wrote this, there was a tendency for critics to miss the crucial role of the EPR argument here. The conclusion is not just that some special class of local theories (namely, those which explain the measurement outcomes in terms of pre-existing values) are incompatible with the predictions of quantum theory (which is what follows from Bell's inequality theorem alone), but that local theories as such (whether deterministic or not, whether positing hidden variables or not, etc.) are incompatible with the predictions of quantum theory. This confusion has persisted in more recent decades, so perhaps it is worth emphasizing the point by (again) quoting from Bell's pointed footnote from the same 1980 paper quoted just above: "My own first paper on this subject ... starts with a summary of the EPR argument from locality to deterministic hidden variables. But the commentators have almost universally reported that it begins with deterministic hidden variables."
The commentators are correct. The argument does begin with hidden variables.

In my view, the core of the problem here is that some physicists and philosophers refuse to buy into the positivist philosophy that comes with quantum mechanics. Quantum mechanics is all about making predictions for observables, but some people, from Einstein to today, insist on trying to give values for hypothetical things that can never be observed.

The comment asks:
Could you tell me what you mean by "deterministic and separable", and in particular could you tell me how your concept of "deterministic and separable" can account for the perfect anti-correlation without hidden variables?

By separable, I mean that if two objects are separated by a space-like 4-vector, then one can have no influence over the other. This is local causality, and says that physical causes act within light cones. It has been one of the main 3 or 4 principles of physics since about 1860. It is the core of relativity theory.

Determinism is a dubious concept. It has no scientific meaning, as there is no experiment to confirm or deny it. It has never been taught in textbooks, and few physicists have believed in it, with Einstein being a notable exception.

If determinism were true, then you could prepare two identical Potassium-40 atoms, wait about a billion years, and watch them both decay at the same time. But it is hopelessly impossible to prepare identical atoms, so this experiment cannot be done.
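The point can be illustrated with a toy simulation: assuming decay is intrinsically stochastic with the K-40 half-life of about 1.25 billion years, two identically prepared atoms give unrelated decay times. A sketch, not a claim about the underlying physics:

```python
import math
import random

# Two K-40 atoms with the same half-life: if decay is intrinsically
# stochastic, identical preparation does not yield identical decay times.
half_life = 1.25e9                     # years
mean_life = half_life / math.log(2)    # exponential mean lifetime

random.seed(0)
t1 = random.expovariate(1 / mean_life)
t2 = random.expovariate(1 / mean_life)
print(t1, t2)   # two unrelated decay times, in years
```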

Some interpretations of quantum mechanics claim to be deterministic and some claim the opposite, and they all claim to be consistent with all the experiments. So this notion of determinism does not seem to tell us anything about quantum mechanics.

As I wrote last year:
If determinism means completely defined by observables or hidden variables obeying local differential equations, then quantum mechanics and chaos theory show it to be impossible.
So I am not pushing determinism. It is an ill-defined and unscientific concept.

If some EPR-like process emits two equal and opposite electrons, then maybe those electrons are physically determined by the emission process. Or maybe they are intrinsically stochastic and have aspects that are not determined until a measurement. Or maybe it is all determined by events pre-dating the emission. I do not think that these distinctions make any mathematical or scientific sense. You can believe in any of these as you please, and positivists like myself will be indifferent.
When the experimenters get together later to compare their results, they make an astounding discovery: Every time the two experimenters happened to measure a pair of entangled electrons along the same direction, they ALWAYS got opposite results (one UP and one DOWN), and whenever they measured in different directions they got the same result (both UP or both DOWN) 3/4 of the time.
What makes this astounding is if you try to model the electron spin by assuming that there is some underlying classical spin to explain all this. That means that there is always a quantitative value for that spin, or some related hidden variable, that defines the reality of the electron. Some people call this "realism", but it is more than that. It is identifying the electron with the outcomes of potential measurements that are never made.
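Those two numbers are just the textbook singlet-state predictions. For detectors at relative angle theta, quantum mechanics gives P(same result) = sin²(theta/2), which reproduces both the perfect anti-correlation along a common axis and the 3/4 figure for axes 120 degrees apart:

```python
import math

# Singlet-state prediction: probability that two detectors, set at
# relative angle theta, report the SAME spin result is sin^2(theta/2).
def p_same(theta):
    return math.sin(theta / 2) ** 2

print(p_same(0.0))               # 0.0: same axis, always opposite
print(p_same(2 * math.pi / 3))   # 0.75: axes 120 degrees apart
```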

Everything we know about electrons says that this is impossible. You can measure position, and mess up the momentum. You can measure the X-spin, and mess up the Y-spin. And we can only make these measurements by forcing the electron into special traps, thereby making obvious changes to the electron.

Thus, contrary to widespread beliefs, Bell's Theorem and its experimental tests say nothing about locality or determinism or randomness. They only rule out some hidden variable theories that everyone (but Einstein) thought were ruled out in 1930.

I am not saying anything radical or unusual here. This is just textbook quantum mechanics. Those textbooks do not say that Bell proves nonlocality or indeterminism. I don't think so, anyway. That is why so many physicists are indifferent to the subject, and no Nobel prize has been given for this work. It is just 1930s physics with some wrong philosophical conclusions.

Lubos Motl just posted a rant against a BBC documentary:
I have just watched the first of the two episodes of the 2014 BBC Four documentary The Secrets of Quantum Physics. ...

Needless to say, after having said that Einstein's view has been conclusively disproven, Al-Khalili says that he believes in Einstein's view, anyway. Hilarious. Sorry, Mr Al-Khalili, but you just violate the basic rules of logic in the most obvious way. You should really reduce the consumption of drugs.

Before I watched the full episode, and even in 1/2 of it or so, I wanted to believe that it would be a documentary on quantum mechanics that avoids the complete idiocy and pseudoscience. Sadly, my optimism was misplaced. This is another excellent representative of the anti-quantum flapdoodle that is being spread by almost all conceivable outlets of the mass culture.
I haven't watched it, but it sounds like a lot of other popular accounts of quantum mechanics. They get the early history pretty well, and then they tell how Einstein and Bell did not believe it, and tried to prove QM wrong. Then the experiments proved QM right and Einstein and Bell wrong. But then they end up with some crazy conclusions that do not make any sense.

Update: There is some discussion in the comments below about whether Bell assumes hidden variables, and whether he leaves Bohr's positivistic view of quantum mechanics as a possibility. You can see for yourself in Bell's 1981 socks paper, which is here, here, and here. He uses the letter lambda for the hidden variables.

Friday, November 13, 2015

Teen wins prize for relativity video

USA Today reports:
Meet Ryan Chester, 18, whose quirky video explaining Albert Einstein’s mind-bending Special Theory of Relativity won the inaugural Breakthrough Junior Challenge Sunday. The prize nets the North Royalton (Ohio) High School senior $250,000 toward college; $50,000 for Rick Nestoff, the physics teacher that inspired him; and $100,000 for a new school science lab.

Parents, take note: That's not too shabby a haul in exchange for a zippy 7-minute clip.
It is a slick video, and it follows closely a common textbook explanation in terms of Einstein's postulates (ie, Poincare's relativity principle and Lorentz's constancy of the speed of light), the Michelson-Morley experiment, and spacetime without an aether.

Thursday, November 12, 2015

Science writers on finding truth

SciAm writer John Horgan reviews the book of fellow science writer George Johnson, contrasting it with his own. They also discuss it in this Bloggingheads video.

I think that the XX century will always be considered the greatest for finding fundamental enduring truths, and Horgan has a similar view:
In 1965 Richard Feynman, prescient as always, prophesied that science would reach this impasse. "The age in which we live is the age in which we are discovering the fundamental laws of nature, and that day will never come again." After the great truths are revealed, Feynman continued, "there will be a degeneration of ideas, just like the degeneration that great explorers feel is occurring when tourists begin moving in on a new territory."
They agree on this:
On the issue of superstrings, Johnson and I are in complete agreement. Superstring theory seems to be a product not of empirical investigation of nature but of a kind of religious conviction about reality's symmetrical structure. Some particle physicists have the same view. Sheldon Glashow, one of the architects of the electroweak theory, has ridiculed superstring researchers as the "equivalents of medieval theologians."
Where they disagree is where Johnson seems to be influenced by foolish philosophers who deny truth and treat science as just another social activity driven by fads:
Johnson proposes that science, too, might be "a construction of towers that just might have been built another way." Although he never mentions The Structure of Scientific Revolutions, Johnson's view evokes the one set forth in 1962 by the philosopher Thomas Kuhn. In that book's coda, Kuhn compared the evolution of science to the evolution of life. Biologists, he noted, have banished the teleological notion that evolution advances toward anything--including the clever, featherless biped known as Homo sapiens. In the same way, Kuhn suggested, scientists should eschew the illusion that science is evolving toward a perfect, true description of nature. In fact, Kuhn asserted, neither life nor science evolve toward anything, but only away from something; science is thus as contingent, as dependent on circumstance, as life is. Johnson espouses a similar view, though more eloquently than Kuhn did. ...

My main disagreement with Johnson is that he plies his doubts too even-handedly. The inevitable result is that all theories, from the most empirically substantiated to those that are untestable even in principle, seem equally tentative. I also cannot accept Johnson's evolutionary, Kuhnian model of scientific progress. ...

Johnson further attempts to undermine the reader's faith in physics by reviewing ongoing efforts to make sense of quantum mechanics.
Johnson's view is probably (unfortunately) the majority among historians, philosophers, social scientists, science writers, and physics popularizers.

Tuesday, November 10, 2015

Shimony's opinion of Bell's Theorem

Following up my EPR comments, I post this to show that my opinions are not much different from respected experts on the subject.

Abner Shimony spent his whole career writing papers on quantum spookiness, and here is his opinion of the consequence of Bell's Theorem:
There may indeed be "peaceful coexistence" between Quantum nonlocality and Relativistic locality, but it may have less to do with signaling than with the ontology of the quantum state. Heisenberg's view of the mode of reality of the quantum state was briefly mentioned in Section 2 - that it is potentiality as contrasted with actuality. This distinction is successful in making a number of features of quantum mechanics intuitively plausible - indefiniteness of properties, complementarity, indeterminacy of measurement outcomes, and objective probability. But now something can be added, at least as a conjecture: that the domain governed by Relativistic locality is the domain of actuality, while potentialities have careers in space-time (if that word is appropriate) which modify and even violate the restrictions that space-time structure imposes upon actual events. The peculiar kind of causality exhibited when measurements at stations with space-like separation are correlated is a symptom of the slipperiness of the space-time behavior of potentialities. This is the point of view tentatively espoused by the present writer, but admittedly without full understanding. What is crucially missing is a rational account of the relation between potentialities and actualities - just how the wave function probabilistically controls the occurrence of outcomes. In other words, a real understanding of the position tentatively espoused depends upon a solution to another great problem in the foundations of quantum mechanics - the problem of reduction of the wave packet.
What he is trying to say is that Heisenberg's quantum states represent potential measurements, not actual reality. There are apparent nonlocalities in the potentialities, but that is just in our minds. It does not violate relativistic causality, because that is about actual reality. There is no nonlocality in actual reality.

He still wants to believe in some sort of spookiness, but this shows that there is no proof of nonlocality in actual reality.

Monday, November 9, 2015

Explaining the EPR paradox

A reader asks me to explain my view of the EPR-Bohm paradox, since I seem to reject what everyone else says.

Actually, my view is just textbook quantum mechanics. I do not subscribe to hidden variables, superdeterminism, action-at-a-distance, or anything exotic.

I am also a logical positivist, so I stick to what we know, and try not to jump to unsupported conclusions.

Experiments show that electrons and photons have both particle and wave properties. Quantum mechanics teaches that an electron is a mysterious quantum that is not exactly a particle or a wave, but something else. It obeys various physical laws, like conservation of energy and momentum, and Heisenberg uncertainty.

The EPR paper looks at a physical process that emits two equal and opposite electrons. Only I prefer to call them quanta, because they are not really particles with definite position and momentum at the same time.

Mathematically, the two quanta are represented as a single quantum state. A measurement of one collapses the state, according to the rules of quantum mechanics. Quantitative predictions are in excellent agreement with experiment.

In particular, you can measure the momentum of one quantum, and know that the other must be equal and opposite. Physically there is nothing strange about this, as it is a consequence of momentum being conserved.

But it is a little strange if you combine this with Heisenberg uncertainty, which normally prevents us from making a precise statement about momentum until it is measured. Measuring one quantum allows us to say something about a distant quantum.

Bohm pointed out that you get the same paradox when measuring spin, and tried to argue for hidden variables.
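As a side check on the formalism (not part of the original argument), the singlet-state spin correlations in Bohm's version of the paradox can be computed directly. This is a minimal sketch assuming NumPy, with both spins measured along axes in the x-z plane:

```python
import numpy as np

def spin_op(theta):
    # Spin observable (outcomes +1/-1) along an axis at angle theta in the x-z plane
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# Singlet state of two spin-1/2 quanta: (|01> - |10>) / sqrt(2)
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def correlation(a, b):
    # Quantum expectation value <singlet| A (x) B |singlet>
    return singlet @ np.kron(spin_op(a), spin_op(b)) @ singlet

print(round(correlation(0.0, 0.0), 6))   # -1.0: same axis, perfectly anticorrelated
print(bool(np.isclose(correlation(0.3, 1.1), -np.cos(0.3 - 1.1))))  # True
```

The result E(a, b) = -cos(a - b) is the quantum prediction that Bell-type experiments compare against.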

One way people have tried to explain this is with action-at-a-distance, with measuring one quantum having an instantaneous physical effect on the other. But that is so contrary to everything else we know about physics that such an explanation could only be a last resort.

Another is to model the quantum with hidden variables. All such attempts have failed. In particular, if you think of the quantum as having a definite position, momentum, and spin before the measurement, you get contradictions with experiment.

So what is left? There is still the original Bohr logical positivist view. Things are clearer if you avoid trying to answer unanswerable questions. Physics is about observables. Concepts like position, momentum, and spin are not really properties of quanta. They are properties of observations.

We have no true mathematical representation of the quantum. We have a mechanics of observables. We predict observations, and avoid trying to say what is not observed.

When people try to visualize the quantum, they inevitably form some classical (pre-quantum) picture that we know is wrong. So they get paradoxes and say that quantum mechanics is incomprehensible.

Or they try some mathematical model that ends up being a hidden variable model, and it does not work.

So again, here is what happens. A physical process emits two equal and opposite quanta. They seem particle-like and wave-like, but they are physical objects that lack a true mathematical formulation. From the physics, we know that they are equal and opposite, and from quantum mechanics formulas, we can make certain predictions about observables. In particular, observations about the two quanta are correlated. Correlation is not necessarily causation.

Does some physical process like radioactive decay determine the state of an emitted quantum? I have an open mind on that, because I don't see how the question makes any sense. How can any experiment tell us one way or the other? You can believe it or not believe it; it is all the same to me.

Physicists make arguments that EPR-like experiments prove true randomness. I have posted denials of that, and I do not even believe that there is any such thing as true randomness. Randomness is a mathematical formalism that is actually deterministic, or a figure of speech for how certain theories fail to predict certain outcomes. That's all. When physicists talk of true randomness, they are usually talking nonsense.
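The point that the randomness formalism is deterministic can be shown in a few lines of Python with the standard library: a "random" stream is fully determined by its seed, and rerunning with the same seed reproduces it exactly.

```python
import random

# Two generators with the same seed produce identical "random" sequences.
rng1 = random.Random(42)
rng2 = random.Random(42)

seq1 = [rng1.randint(0, 9) for _ in range(10)]
seq2 = [rng2.randint(0, 9) for _ in range(10)]

print(seq1 == seq2)  # True: the formalism is deterministic
```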

What about the quanta being separable? Action-at-a-distance seems like hokum to me. It is as implausible as a perpetual motion machine. There is no evidence for it, and a lot of reasons for thinking it impossible.

I say that the physical quanta are separate and cannot influence each other. At the same time, our knowledge of the two quanta is linked, and information about one tells us something about the other.

In the terminology of Bell's theorem, I am rejecting counterfactual definiteness, just as all mainstream physicists have for decades.

Counterfactual definiteness says that the photons in the double-slit experiment must entirely go thru one slit or the other, just as if they were measured as particles at the slits. But mainstream quantum mechanics teaches that this is completely false, and such measurement would destroy the interference pattern. The light goes thru both slits at once.
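A toy amplitude calculation makes the contrast concrete. The unit amplitudes below are illustrative, not a real diffraction model: with both slits open the amplitudes add before squaring and interfere, while a which-path measurement adds the intensities and the fringes vanish.

```python
import cmath

def both_slits_open(phi):
    # Amplitudes add, then square: interference, intensity from 0 to 4
    amp = 1 + cmath.exp(1j * phi)
    return abs(amp) ** 2

def which_path_measured(phi):
    # Intensities add: always 2, no fringes
    return abs(1) ** 2 + abs(cmath.exp(1j * phi)) ** 2

print(both_slits_open(0.0))                  # 4.0: bright fringe
print(round(both_slits_open(cmath.pi), 6))   # 0.0: dark fringe
print(which_path_measured(cmath.pi))         # 2.0: flat
```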

You cannot do a completely passive observation of a quantum giving it a particular position, momentum, or spin. Any such measurement changes it, because quanta do not have such properties.

Rejection of counterfactual definiteness is essential to XX century physics, and is embodied by these slogans:
Another thing that people have emphasized since quantum mechanics was developed is the idea that we should not speak about those things which we cannot measure. (Actually relativity theory also said this.) [Feynman]

Unperformed experiments have no results. [Peres]
Somehow the 21st century has brought us more and more physicists who would rather believe in spookiness or parallel universes. A serious disease has infected Physics.

You will probably say I am cheating because I am not seeking a complete mathematical description of an electron, or that it is a cop-out to say that the wave function is just a representation of our knowledge.

My answer is that this issue goes to the heart of what science is all about. The job of physics is to use mathematics to predict outcomes for experiments. It is not to provide mathematical representations for things you cannot observe. All of the EPR paradoxes are based on naive expectations for counterfactuals, and not observations. Stick to observables, and the experiments are not so strange.

Friday, November 6, 2015

Philosophers confused over causality

Physicists usually have a dim view of philosophers. One reason is that philosophers reject causality.

Here is a new philosophy paper with more nonsense on the issue:
Issues surrounding the role of causation/causal reasoning in physics have recently been the subject of considerable philosophical discussion (e.g., Norton, 2009, Smith, 2013, Frisch, 2014). There is a spectrum (or perhaps, more accurately, a multi-dimensional space) of different possible positions on this issue: Some, echoing Russell, 1912 take the view that features of fundamental physical laws or the contexts in which these laws are applied imply that causal notions play little or no legitimate role in physics – or at least that they play no “fundamental” role. Others take the even stronger position that causal notions are fundamentally unclear in general and that they are simply a source of confusion when we attempt to apply them to physics contexts (and presumably elsewhere as well. ) A more moderate position is that while causal notions are sometimes legitimate in physics, they are unnecessary in the sense that whatever scientifically respectable content they have can be expressed without reference to causality. Still others (e. g., Frisch, 2014) defend the legitimacy and even centrality of causal notions in the interpretation of physical theories. Those advocating this last position observe that even a casual look at the physics literature turns up plenty of references to “causality” and “causality conditions”. Examples include a micro-causality condition in quantum field theory which says that operators at spacelike separation commute and which is commonly motivated by the claim that events at such separation do not interact causally, and the clustering decomposition assumption referred to in section 5 which is also often motivated as a causality condition. 
Another example is the preference for “retarded” over “advanced” solutions to the equations of classical electromagnetism (as well as the use of retarded rather than advanced Green’s functions in modeling dispersion relations, as discussed below) where this is motivated by the claim that the advanced solutions represent “non-causal” behavior in which effects temporally precede their causes (violation of another causality condition). Similarly, there is a hierarchy of causality conditions often imposed in models of General Relativity, with, for example, solutions involving closed timelike curves being rejected by some on the grounds they violate “causality”. Causal skeptics (e.g., Norton, 2009 and to some extent, Smith, 2013) respond, however, that these conditions are either unmotivated (because, e.g., vague or unreasonably aprioristic) or superfluous in the sense that what is defensible in them can be restated without reference to any notion of causation.
This is hopelessly confused. If a philosopher cannot see that causality is essential to modern physics, then he has no understanding of physics.

Denying that physics is about causality is like denying that physics is about energy.

Even Frisch, who defends causal notions in physics, is confused. See his paper (pdf), where he only allows for causality in the narrowest subfields of physics. His arguments have no merit either.

There are several arguments in play, all nutty. One is that a theory making precise predictions is not causal, because the outcomes are constrained by the math, not the cause. So Newton's F = ma is not causal. Another is that action at a distance is not causal, because it cannot be broken into a chain of events. So Newtonian gravity is not causal. Another is that time-symmetric theories cannot be causal, because the past has to cause the future without the future causing the past. Most physics differential equations are time symmetric, so this eliminates a lot of theories. Another is that indeterministic theories cannot be causal, because outcomes should be determined by the causes, and not by chance. So quantum mechanics cannot be causal.

Of course stochastic theories can be causal. Without getting into mathematical definitions, consider the statement "smoking causes lung cancer". Everyone understands this. But smoking does not cause lung cancer in every single smoker, and is not the sole cause of lung cancer. Smoking raises the probability of lung cancer. It has always been understood that causality can work with probability in this way.
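A tiny simulation illustrates probabilistic causation; the rates below are made up for illustration, not epidemiology. The cause shifts the probability of the effect without determining it, and the effect also occurs without the cause.

```python
import random

rng = random.Random(0)

def develops_cancer(smoker):
    # Hypothetical rates: smoking raises the probability but does not guarantee
    # the outcome, and non-smokers get cancer too.
    p = 0.15 if smoker else 0.01
    return rng.random() < p

n = 100_000
smokers_with_cancer = sum(develops_cancer(True) for _ in range(n))
nonsmokers_with_cancer = sum(develops_cancer(False) for _ in range(n))

print(smokers_with_cancer > nonsmokers_with_cancer)  # True: the cause shifts the probability
print(nonsmokers_with_cancer > 0)                    # True: the effect occurs without the cause
```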

Statements like "the moon causes the tides" are also widely accepted under either Newtonian gravity or general relativity. Under Newtonian gravity, you might object that there was no local causality, but there was non-local causality.

Philosophers do not just attack physics. Some have attacked evolutionary biology, in the Causalist-Statisticalist Debate. Here is a recent review of this issue. The argument is that principles like "survival of the fittest" have no predictive power, and do not give causal explanation. They just give a framework for generating statistics about nature.

Wednesday, November 4, 2015

More on Musser's spooky book

SciAm editor George Musser complains that I trashed his book without reading it. Okay, fair point. His book is on Amazon as Spooky Action at a Distance: The Phenomenon That Reimagines Space and Time -- and What It Means for Black Holes, the Big Bang, and Theories of Everything.

The endorsement from Frank Wilczek says:
Locality has been a fruitful and reliable principle, guiding us to the triumphs of twentieth-century physics. Yet the consequences of local laws in quantum theory can seem 'spooky' and nonlocal, and some theorists are questioning locality itself. Spooky Action at a Distance is a lively introduction to these fascinating paradoxes and speculations.
Wilczek is a distinguished and level-headed physicist. I read this as saying that he firmly believes in locality as a great triumph of XX century physics. Some quantum experiments may seem spooky and nonlocal, so they are fun to talk about, but he is not endorsing any spooky or nonlocal interpretations or speculations that are in the book.

I can agree with that. Maybe Musser put "spooky" in the title to sell more books. If so, I do not fault him for that, as long as he describes the physics correctly.

I was once appalled by a 1979 book called The Dancing Wu Li Masters: An Overview of the New Physics. While the book was filled with goofy speculations, the actual description of the known physics was pretty accurate. So I ultimately decided that it was a decent book.

The current SciAm has a preview of his book, where he relates nonlocality to general relativity:
When I first learned about the quantum phenomenon known as nonlocality in the early 1990s, I was a graduate student. But I didn't hear about it from my quantum-mechanics professor: he didn't see fit to so much as mention it. Browsing in a local bookshop, I picked up a newly published work, The Conscious Universe, which startled me with its claim that “no previous discovery has posed more challenges to our sense of everyday reality” than nonlocality. The phenomenon had the taste of forbidden fruit. ...

Points in the gravitational field must be interlinked with one another so that they can flop around while collectively still producing the same internal arrangement of objects. These linkages violate the principle that individual locations in space have an autonomous existence. Marolf has put it this way: “Any theory of gravity is not a local field theory. Even classically there are important constraint equations. The field at this point in spacetime and the field at this point in spacetime are not independent.” ...

In short, Einstein's theory is nonlocal in a more subtle and insidious way than Newton's theory of gravity was. Newtonian gravity acted at a distance, but at least it operated within a framework of absolute space. Einsteinian gravity has no such element of wizardry; its effects ripple through the universe at the speed of light. Yet it demolishes the framework, violating locality in what was, for Einstein, its most basic sense: the stipulation that all things have a location. General relativity confounds our intuitive picture of space as a kind of container in which material objects reside and forces us to search for an entirely new conception of place.
I have no quarrel with this, except that I would not use the word "locality" this way. To me, locality means that the physics of a point can be understood from the events, matter, and fields in its local neighborhood. General relativity satisfies locality in that sense. Musser says that relativity makes a global definition of location more difficult. Yes, that's right, but I would say that is a consequence of locality, not a contradiction to locality.

Update: Motl has just posted a good explanation of locality, and why it is a mistake to give up locality in order to get a more intuitive understanding of quantum mechanics. The spookiness of nonlocality is always less intuitive. I will respond to the first comment below tomorrow.

Tuesday, November 3, 2015

SciAm book promotes spooky action

Physicist Sabine Hossenfelder blogs this book review:
Spooky Action at a Distance: The Phenomenon That Reimagines Space and Time -- and What It Means for Black Holes, the Big Bang, and Theories of Everything
By George Musser
Scientific American, To be released November 3, 2015

“Spooky Action at a Distance” explores the question Why aren’t you here? And if you aren’t here, what is it that prevents you from being here? Trying to answer this simple-sounding question leads you down a rabbit hole where you have to discuss the nature of space and time with many-world proponents and philosophers. In his book, George reports back what he’s found down in the rabbit hole.

Locality and non-locality are topics as confusing as controversial, both in- and outside the community, and George’s book is a great introduction to an intriguing development in contemporary physics. It’s a courageous book. I can only imagine how much headache writing it must have been, after I once organized a workshop on nonlocality and realized that no two people could agree on what they even meant with the word. ...

In his book, George lays out how the attitude of scientists towards nonlocality has gone from acceptance to rejection and makes a case that now the pendulum is swinging back to acceptance again. I think he is right that this is the current trend (thus the workshop).
This sums up what is wrong with physics today. Clear-eyed XX century physicists purged the medieval spooky mysticism of nonlocality from science, and now it is back with no one even knowing what it is.

She goes on to say that the book is a confusing mish-mash of buzzwords and personalities, without ever explaining the physics or saying what is accepted:
I found the book somewhat challenging to read because I was constantly trying to translate George’s metaphors back into equations and I didn’t always succeed. But then that’s a general problem I have with popular science books and I can’t blame George for this. I have another complaint though, which is that George covers a lot of different research in rapid succession without adding qualifiers about these research programs’ shortcomings. There’s quantum graphity and string theory and black holes in AdS and causal sets and then there’s many worlds. The reader might be left with the mistaken impression that these topics are somehow all related with each other. ...

For my taste it’s a little too heavy on person-stories, but then that seems to be the style of science writing today.
This sums up what is wrong with popular science writing. SciAm used to be better than this.

A friend of hers wrote the book, and she recommends it. Sigh.

The Wikipedia article on action at a distance is pretty good. It explains how Maxwell developed his electromagnetic theory by seeking to get rid of action at a distance. This was one of the most important intellectual developments of all time.
To date, all experiments testing Bell-type inequalities in situations analogous to the EPR thought experiment have results consistent with the predictions of quantum mechanics, suggesting that local hidden variables theories can be ruled out. Whether or not this is interpreted as evidence for nonlocality depends on one's interpretation of quantum mechanics.
That's right. You can interpret quantum mechanics to give a scientific causal view of the world, or you can interpret it to allow for unverifiable spooky actions. Your choice. Apparently the current trend among physicists and popular science writers is for the latter.
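For the record, the quantitative gap those Bell-test experiments probe is easy to reproduce. Plugging the quantum singlet prediction E(a, b) = -cos(a - b) into the CHSH combination, with the standard choice of angles, exceeds the bound of 2 that any local hidden variable theory must satisfy:

```python
import math

def E(a, b):
    # Quantum singlet correlation for spin measurements along angles a and b
    return -math.cos(a - b)

# Standard CHSH measurement angles
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(round(abs(S), 6))   # 2.828427, i.e. 2*sqrt(2)
print(abs(S) > 2)         # True: exceeds the local hidden variable bound of 2
```

Local hidden variable models are capped at |S| = 2; the experiments agree with the quantum value. As the Wikipedia passage says, whether that is "nonlocality" is a matter of interpretation.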

Lawrence M. Krauss has a pretty good entanglement article in the latest New Yorker mag:
No area of physics causes more confusion, not just among the general public but also among physicists, than quantum mechanics. On the one hand, it’s the source of New Age mythology, and has enabled hucksters to peddle new self-help cures; on the other, for the philosophically inclined, it has provided some illusory hope of free will in an otherwise deterministic universe. Of the aspects of quantum mechanics that confuse and dismay observers, perhaps nothing approaches the property called “entanglement.” Einstein, who never really accepted entanglement’s existence, called it, derisively, “spooky action at a distance.”

Unfortunately for Einstein, entanglement, “spooky” or not, is apparently real, as researchers in the Netherlands demonstrated last week, just in time for Halloween. In doing so, the researchers affirmed once again that quantum mechanics, as strange as it may seem, works in every way we can test it.
Yes, physicists themselves cannot agree on whether entanglement is spooky.

He attacks the essay I cited last week:
Similarly, last week, the Pulitzer prize-winning writer Marilynne Robinson published an essay in which she challenges the nature and relevance of modern science. The essay argued that entanglement “raises fundamental questions about time and space, and therefore about causality.” She went on to say that this called into question the ability of science to explain reality as a whole. It’s easy to understand how Robinson arrived at this incorrect idea: when a measurement of one electron here can instantaneously affect the measurement of another electron on the opposite side of the universe, faster than the speed of light, it does seem as though causality has been thrown out the window.
Yes, fundamental questions about causality would be raised if the "measurement of one electron here can instantaneously affect the measurement of another electron on the opposite side of the universe". But it cannot.

Krauss's explanation is above average, but defective:
As long as the two electrons remain entangled, then this link endures — even if they are separated across the galaxy. If I measure one electron in my lab, the second electron is affected by the measurement of the first electron with no time delay — instantaneously — even though a signal travelling at the speed of light would take millennia to cross the distance between them.

That instantaneous link is the “spooky action at a distance” of which Einstein was so skeptical.
No, measuring an electron does not affect a distant electron.

Quantum mechanics gives a mathematical representation of the electron pair, and a way of making probabilistic predictions about spin. Measuring one electron does affect the prediction being made for the other. But the math is not the same as the physics. The actual electrons may be deterministic and separable.

Spooky action at a distance was debated by physicists in the time of Newton, of Maxwell, and of Bohr and Einstein. I thought that mainstream physics was solidly convinced that no such thing exists. Maybe that was true 50 years ago. What happened? Why have all these otherwise-hard-headed guys gone mushy about physics that was perfected by 1930? Physics is in a sorry state when you have to read the Lubos Motl blog or my own to find something sensible on this subject.

Monday, November 2, 2015

Speculation about NSA and quantum computing

I posted a couple of months ago that the NSA is cautious about quantum computers. The NSA is a secretive government spy agency that does very little to explain itself, so there is a lot of speculation about why it would become cautious about quantum computing. Does it know something we don't?

One could also ask why Microsoft and Google are excited about quantum computing. Do they know something that we don't?

As I have often noted on this blog, quantum computing has been a colossal failure, and has no hope of any commercial applications in the foreseeable future. It is doubtful whether it is even physically possible.

Cryptography professor Matthew Green writes:
If you’re looking for a nice dose of crypto conspiracy theorizing and want to read a paper by some very knowledgeable cryptographers, I have just the paper for you. Titled “A Riddle Wrapped in an Enigma” by Neal Koblitz and Alfred J. Menezes, it tackles one of the great mysteries of the year 2015. Namely: why did the NSA just freak out and throw its Suite B program down the toilet?
These guys are leading experts in elliptic curve cryptography, and long-time NSA watchers. So their speculation is probably better than mine.

The popular press has somehow convinced everyone that Snowden proved the NSA tricked people into using an elliptic-curve pseudorandom number generator (Dual_EC_DRBG) with an NSA trapdoor, thereby allowing the NSA to spy on everyone.

This story is exaggerated. The so-called trapdoor was publicly known without Snowden, and no one had to use it. The basic elliptic curve technology remains sound.
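To illustrate what that sound technology consists of, here is a toy elliptic-curve Diffie-Hellman exchange over a tiny textbook curve. The curve, generator, and secrets are illustrative miniatures; real systems use curves like P-256 with 256-bit primes.

```python
# Toy curve y^2 = x^3 + 2x + 2 (mod 17), a standard classroom example.
P_MOD, A = 17, 2

def ec_add(P, Q):
    # Group law on the curve; None represents the point at infinity.
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    # Scalar multiplication by repeated doubling
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (5, 1)  # generator point on this curve
alice_secret, bob_secret = 7, 11
shared_a = ec_mul(bob_secret, ec_mul(alice_secret, G))   # Bob applies his secret to Alice's public point
shared_b = ec_mul(alice_secret, ec_mul(bob_secret, G))   # Alice applies hers to Bob's

print(shared_a == shared_b)  # True: both sides derive the same shared point
```

Security rests on the difficulty of recovering the secret scalar from the public point, which is infeasible at real-world curve sizes but trivially breakable at this toy scale.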

It is curious that the NSA has deprecated the P-256 elliptic curve, which has no publicly known weaknesses; a closely related curve, secp256k1, secures all Bitcoin transactions. The Bitcoin network is hugely successful and out of control, and maybe the NSA is trying to cast fear, uncertainty, and doubt (FUD) on it.

My guess is that either the NSA has been suckered by quantum computing hype like Microsoft and Google, or it wants to discourage elliptic curve cryptography because it is too secure.