Saturday, June 28, 2014

Free will claimed to be random

I have argued that the free will experiments do not tell us whether we have free will, and that people use the word "random" as just a euphemism for data that is not understood. Here is an example:
The concept of free will could be little more than the result of background noise in the brain, according to a recent study.

It has previously been suggested that our perceived ability to make autonomous choices is an illusion – and now scientists from the Center for Mind and Brain at the University of California, Davis, have found that free will may actually be the result of electrical activity in the brain.

According to the research, published in the Journal of Cognitive Neuroscience, decisions could be predicted based on the pattern of brain activity immediately before a choice was made.

Volunteers in the study were asked to sit in front of a screen and focus on its central point while their brains’ electrical activity was recorded. They were then asked to make a decision to look either left or right when a cue symbol appeared on the screen, and then to report their decision.

The cue to look left or right appeared at random intervals, so the volunteers could not consciously or unconsciously prepare for it.

The brain has a normal level of so-called background noise; the researchers found that the pattern of activity in the brain in the seconds before the cue symbol appeared - before the volunteers knew they were going to make a choice - could predict the likely outcome of the decision. ...

“This random firing, or noise, may even be the carrier upon which our consciousness rides, in the same way that radio static is used to carry a radio station.”

This latest experiment is an extension of psychologist Benjamin Libet’s 1970s research into the brain’s electrical activity immediately before a decision.
The article makes it sound as if something profound has been discovered. Not true. It is saying that in certain experiments, free will is attributable to random processes in the brain, meaning that human decision making is based on processes that are not understood. This is a null result, as it leaves us with what has been thought for millennia.

There is a similar confusion in quantum mechanics, with physicists always saying that something is random. The use of the word "random" doesn't mean anything except that it is not known to be predictable with the available data.

It seems possible that free will is connected with quantum randomness. That is, when reductionist brain research attempts to isolate human brain decisions, some sort of quantum uncertainty could block a completely deterministic model of the brain.
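The point that "random" is just a label for data we cannot predict can be illustrated with a pseudo-random generator. This is a minimal sketch (the seed values are arbitrary): the stream is fully deterministic to anyone holding the seed, yet looks statistically random to anyone without it.

```python
import random

# A deterministic pseudo-random generator: anyone holding the seed can
# reproduce every "random" bit exactly, yet to an observer without the
# seed the stream is unpredictable -- which is all "random" means here.
def bit_stream(seed, n):
    rng = random.Random(seed)
    return [rng.randrange(2) for _ in range(n)]

# Same seed, same "noise": the process is fully determined.
assert bit_stream(42, 20) == bit_stream(42, 20)

# Yet the stream passes casual statistical tests (roughly half ones),
# even though nothing here is physically random.
bits = bit_stream(7, 10000)
print(sum(bits) / len(bits))  # close to 0.5
```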

Thursday, June 26, 2014

Carroll defends lousy philosophy

Physicist Sean M. Carroll is trying to defend philosophy in the current Physics-Philosophy wars:
“Philosophy is completely useless to the everyday job of a working physicist.”

Nobody denies that the vast majority of physics gets by perfectly well without any input from philosophy at all.
Pretty weak. He has no reply to this comment:
Could you elaborate a bit more and name one or more examples where “philosophical input is useful” to science, or better yet, physics?
Lumo trashes him:
In particular, the phrase "shut up and calculate" has been used in the context of the foundations of quantum mechanics. Virtually all the philosophers misunderstand the quantum revolution – the very fact that the quantum revolution has forced us to describe Nature in a way that is fundamentally incompatible with the most general assumptions of classical physics and "common sense". They talk a lot and almost everything they say about these problems is pure junk.

Quantum mechanics brought us a new framework for physics and science and it is rather simple to summarize it. Only the results of observations – perceptions by observers – are facts that may be talked about; and the laws of Nature can only calculate probabilities of the individual outcomes from squared absolute values of the probability amplitudes – using the mathematical formulae that apply in every single context of the real world and that are easy to summarize.

The previous sentence summarizes everything that is scientifically meaningful according to the new (well, 90 years old) framework of physics. Every question about Nature that doesn't respect this general template is scientifically meaningless. Every claim about Nature's inner workings that disagrees with the general postulates of quantum mechanics – postulates that render most classical assumptions about the world incorrect and most classical questions about "reality" meaningless – is wrong. ...

What is particularly ironic – and hypocritical – about Carroll's attitude is that he is one of the loudest critics of religions and everything associated with them. Nevertheless, he frantically defends "philosophy" as a way to learn about Nature. It's insane because philosophy is exactly as unscientific as religion. Every religion may be called just another philosophy and pretty much all philosophies and religions are equally deficient when it comes to their basic flaw – the violation of the rules of the scientific method.
I agree with Lumo. If the philosophy of science were so valuable, then the proponents would be able to give some examples. But philosophy's defenders are almost entirely people who deny the scientific method, deny the progress of science, deny that science discovers truths, deny the Copenhagen interpretation of quantum mechanics, and promote silly ideas without evidence.

The philosophers have been at war with science for 60+ years. I am glad to see some physicists point out how worthless their criticisms are.

Here is a new article on the philosophy of quantum mechanics: Circumveiloped by Obscuritads:
Maybe calling a Hilbert-vector (or Weyl-ray, or ...) the mathematical representative of the physical state of a physical system is a mistake: a Hilbert-vector should remain a physically uninterpreted and purely mathematical concept in QM, an auxiliary device to calculate probability distributions of measurement outcomes. There is no ‘physical state’ of the unmeasured cat in purgatory: we are led to believe that the cat has, or is in, a physical state by mistakenly trying to attribute physical meaning to a Hilbertvector that is a superposition of two vectors, which according to the standard property postulate we associate with a cat having the property of being dead and one having the property of being alive, respectively. We believe the unmeasured cat is some particular physical state but perhaps it isn’t. QM associates a Hilbert-vector to the cat, which is devoid of physical meaning, but enables the computation of probability measures over measurement-outcomes, which are full of physical meaning. Thus we have physical meaningfulness out of physical meaninglessness. Sheer magic. Magic does however not help us to understand physical reality.

The wilful jump to meaninglessness seems however a cheap way out. I don’t like it. We believe that the unmeasured cat is either stone dead or breathing, because tertium non possibilium, and we want QM to be logically compatible with this belief, at the very least, and preferrably to imply one or the other belief. After all, QM also predicts that as soon as we peek at (i.e. measure) the cat, through a pinhole, unbeknownst to the cat, it is either dead or alive. Rather than to withhold physical significance from the Hilbert-vector, we should try to assign physical significance to it (or to a Weyl-ray, or ...). For how else could it determine physically meaningful probability measures over measurement-outcomes?
I don't know if this article is serious or not. He seems to understand quantum mechanics, and makes some valid points, but he also seems to be mocking the whole subject.

Here is another:
Philosophical roots of the "eternal" questions in the XX-century theoretical physics

... If we compare XX century physics to a building, we have to say that not only its two bases are not linked into a single foundation, but each of them is full of contradictions (paradoxes). The number of “thunderstorm clouds” in the physical theory is ten times more today than in 1900. After decades of search and debates, even the direction of further research is not defined. Moreover, unlike the XVII-XIX century physics, it has no clear boundaries between the known and the unknown, the reliable and doubtful, true and false. ...

A. Einstein wrote in 1938: “...Nowadays, the subjective and positivist view prevails. Proponents of this view claim that the consideration of nature as an objective reality is an outdated prejudice. That's what theorists involved in Quantum mechanics put to their merit” [20]. ...

As long as the idea of denying objective reality, borrowed from the positivist philosophy, lies in the foundations of Theory of relativity and Quantum mechanics, it is impossible to overcome the crisis of physics by some partial improvements in these theories. This is convincingly demonstrated by the history of physics of the twentieth century. It will only be possible to find a way out of the crisis if the physical theory is based on the principles of classical physics.
This is wacky stuff. He thinks that 20th (XX) century physics is in crisis because of some philosophical disagreement with positivism.

Tuesday, June 24, 2014

Microsoft bets on quantum computing

The NY Times reports:
Modern computers are not unlike the looms of the industrial revolution: They follow programmed instructions to weave intricate patterns. With a loom, you see the result in a cloth or carpet. With a computer, you see it on an electronic display.

Now a group of physicists and computer scientists who are funded by Microsoft are trying to take the analogy of interwoven threads to what some believe will be the next great leap in computing, so-called quantum computing.

If they are right, their research could lead to the design of computers that are far more powerful than today’s supercomputers and could solve problems in fields as diverse as chemistry, material science, artificial intelligence and code-breaking.
If they are right. Too bad they are not right.
The proposed Microsoft computer is mind-bending even by the standards of the mostly hypothetical world of quantum computing.

Conventional computing is based on a bit that can be either a 1 or a 0, representing a single value in a computation. But quantum computing is based on qubits, which simultaneously represent both zero and one values. If they are placed in an “entangled” state — physically separated but acting as though they are connected — with many other qubits, they can represent a vast number of values simultaneously.

And the existing limitations of computing power are thrown out the window.

In the approach that Microsoft is pursuing, which is described as “topological quantum computing,” precisely controlling the motions of pairs of subatomic particles as they wind around one another would manipulate entangled quantum bits. Although the process of braiding particles takes place at subatomic scales, it is evocative of the motions of a weaver overlapping threads to create a pattern.

By weaving the particles around one another, topological quantum computers would generate imaginary threads whose knots and twists would create a powerful computing system. Most important, the mathematics of their motions would correct errors that have so far proved to be the most daunting challenge facing quantum computer designers. ...

On Thursday, however, an independent group of scientists reported in the journal Science that they had so far found no evidence of the kind of speedup that is expected from a quantum computer in tests of a 503 qubit D-Wave computer. The company said through a spokesman that the kinds of problems the scientists evaluated would not benefit from the D-Wave design.

Microsoft’s topological approach is generally perceived as the most high-risk by scientists, because the type of exotic anyon particle needed to generate qubits has not been definitively proved to exist.
No progress. No speedup. All based on hypothetical ideas that have never been observed.

How many years can the press write these articles without any concrete results to point to?
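The "vast number of values simultaneously" claim quoted above can be made concrete with a toy state-vector sketch (the `measure` helper here is illustrative, not from the article): an n-qubit state is a vector of 2^n complex amplitudes, but reading it out still yields only one n-bit result.

```python
import numpy as np

# An n-qubit state is a vector of 2^n complex amplitudes. The "vast number
# of values simultaneously" in press accounts refers to these amplitudes;
# a measurement still returns just one n-bit outcome, sampled from |amp|^2.
def measure(state, rng):
    probs = np.abs(state) ** 2
    return int(rng.choice(len(state), p=probs))

n = 3
state = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)  # uniform superposition
rng = np.random.default_rng(0)
outcome = measure(state, rng)
print(f"{2 ** n} amplitudes, one outcome: {outcome:03b}")
```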
For some time, many thought quantum computers were useful only for factoring huge numbers — good for N.S.A. code breakers but few others. But new algorithms for quantum machines have begun to emerge in areas as varied as searching large amounts of data or modeling drugs. Now many scientists believe that quantum computers could tackle new kinds of problems that have yet to be defined.

Indeed, when Mr. Mundie asked Dr. Freedman what he might do with a working quantum computer, he responded that the first thing he would program it to do would be to model an improved version of itself.
Freedman was joking. He is a genius in a different field, but this is not going to happen.

A lot of smart people are betting on quantum computing, and you are not going to take my word over them. But this is not like string theory, where smart people can just go on forever with untestable bogus theories. The quantum computer folks are claiming to build a machine that can outperform conventional computers. These claims have been going on for 20 years or so, and are likely to go on for another 20 years without any such machine having any demonstrable advantage. How long are you going to believe the hype without results?

I say it will not happen, for reasons explained on this blog.

Thursday, June 19, 2014

Aaronson on quantum randomness

MIT computer science professor Scott Aaronson has posted the sequel to his earlier article on quantum randomness.

My earlier criticisms apply. He explains correctly how quantum mechanics is contrary to local hidden variable theories. This has been conventional wisdom since about 1930. He also explains that you can never truly prove randomness. His main purpose is to describe some recent results on how you can use some random numbers to generate other random numbers, under some quantum mechanical assumptions.

These results are of some interest in his field of abstract quantum computer complexity theory, but they are of no practical importance, and do not really shed any light on either quantum mechanics or randomness. He suggests utility, such as:
Pironio’s group has already done a small, proof-of-concept demonstration of their randomness expansion protocol, using it to generate about 40 “guaranteed random bits.” Making this technology useful will require, above all, improving the bit rate (that is, the number of random bits generated per second). But the difficulties seem surmountable, and researchers at the National Institute of Standards and Technology are currently working on them. So it’s likely that before too long, we will be able to have copious bits whose randomness is guaranteed by the causal structure of spacetime itself, should we want that. And as mentioned before, for cryptographic purposes it can matter a lot whether your randomness is really random.
No, those bits are no more random than coin tosses, and have no guarantees that are useful to cryptography.
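The classical analogue of a "randomness expansion protocol" is von Neumann's coin-debiasing trick, sketched below (the 80% bias and seed are arbitrary). Like the quantum protocols, it reshapes randomness it is already given; it cannot manufacture unpredictability from nothing.

```python
import random

# Von Neumann's classical extractor: read a biased bit stream in pairs,
# output 0 for "01" and 1 for "10", discard "00" and "11". The pair
# (0,1) is exactly as likely as (1,0), so the output is unbiased --
# but only as unpredictable as the input stream was to begin with.
def von_neumann_extract(bits):
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

rng = random.Random(1)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(10000)]  # ~80% ones
unbiased = von_neumann_extract(biased)
print(sum(unbiased) / len(unbiased))  # close to 0.5 despite the heavy bias
```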

He explains:
Before going further, it’s worth clarifying two crucial points. First, entanglement is often described in popular books as “spooky action at a distance”: If you measure an electron on Earth and find that it’s spinning left, then somehow, the counterpart electron in the Andromeda galaxy “knows” to be spinning left as well! However, a theorem in quantum mechanics—appropriately called the No-Communication Theorem—says that there’s no way for Alice to use this effect to send a message to Bob faster than light. Intuitively, this is because Alice doesn’t get to choose whether her electron will be spinning left or right when she measures it. As an analogy, by picking up a copy of American Scientist in New York, you can “instantaneously learn” the contents of a different copy of American Scientist in San Francisco, but it would be strange to call that faster-than-light communication! (Although this might come as a letdown to some science fiction fans, it’s really a relief: If you could use entanglement to communicate faster than light, then quantum mechanics would flat-out contradict Einstein’s special theory of relativity.)

However, the analogy with classical correlation raises an obvious question. If entangled particles are really no “spookier” than a pair of identical magazines, then what’s the big deal about entanglement anyway? Why can’t we suppose that, just before the two electrons separated, they said to each other “hey, if anyone asks, let’s both be spinning left”? This, indeed, is essentially the question Einstein posed—and the Bell inequality provides the answer. Namely, if the two electrons had simply “agreed in advance” on how to spin, then Alice and Bob could not have used them to boost their success in the CHSH game. That Alice and Bob could do this shows that entanglement must be more than just correlation between two random variables.

To summarize, the Bell inequality paints a picture of our universe as weirdly intermediate between local and nonlocal. Using entangled particles, Alice and Bob can do something that would have required faster-than-light communication, had you tried to simulate what was going on using classical physics. But once you accept quantum mechanics, you can describe what’s going on without any recourse to faster-than-light influences.
Those first two paragraphs are fine, but then the summary is nonsense. There is no "intermediate between local and nonlocal". It is true that you need quantum mechanics to explain electrons, and you cannot effectively simulate them with classical mechanics. Einstein, Bell, and their followers refuse to accept what the experts taught in 1930. That's all.

The electron does not really spin left in quantum mechanics. Measurements may find that, but the act of measuring forces that. It is possible that, just before the two electrons separated, they said to each other “hey, if anyone measures left spin, let’s both say yes.” Measuring spin in other directions gives other results. The Bell paradoxes only occur if the observer is allowed to choose the direction of the spin measurement, and you cling to some classical (non-quantum) model of spin.
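What Aaronson means by Alice and Bob "boosting their success" in the CHSH game can be checked by brute force. This sketch (standard textbook material, not code from his article) enumerates every deterministic "agreed in advance" strategy and compares the best of them with the quantum limit.

```python
import itertools
import math

# CHSH game: Alice and Bob get random bits x, y and output bits a, b;
# they win iff a XOR b equals x AND y.
def win(a, b, x, y):
    return (a ^ b) == (x & y)

# A deterministic classical strategy is an answer sheet: an output bit
# for each possible input bit. There are 4 per player, 16 pairs total.
strategies = list(itertools.product([0, 1], repeat=2))
best = max(
    sum(win(alice[x], bob[y], x, y) for x in (0, 1) for y in (0, 1)) / 4
    for alice in strategies for bob in strategies
)
print(best)                        # 0.75: the local hidden variable limit
print(math.cos(math.pi / 8) ** 2)  # ~0.8536: the quantum (Tsirelson) limit
```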

Wednesday, June 18, 2014

Einstein agreed with the Lorentz theory

I have argued that Einstein never claimed that his special relativity theory was any different from Lorentz's, such as this:
The truth is that there was very little difference between the views of Lorentz and Einstein.
However I did not consider a Schwarz paper:
The only difference between Lorentz's approach and Einstein's is that Lorentz derived the transformation after an experiment forced him to, whereas Einstein starts with the generalized principle of relativity and derives the transformation from a necessary consequence of applying the second postulate.
That paper translates a 1907 Einstein paper as:
It required only the recognition that the auxiliary quantity introduced by H. A. Lorentz, and called by him "local time", can be defined as simply "time." ...

In what follows it is endeavored to present an integrated survey of the investigations which have arisen to date from combining the theory of H. A. Lorentz and the theory of relativity.
Taken literally, that reads as Lorentz having his own theory of local time, and Einstein claiming to have another theory called "the theory of relativity." But that is a mistranslation, as no one used the term "theory of relativity" in 1907. A better translation says:
Quote by Einstein 1907:
Im folgenden ist nun der Versuch gemacht, die Arbeiten zu einem Ganzen zusammenzufassen, welche bisher aus der Vereinigung von H.A. Lorentzscher Theorie und Relativitätsprinzip hervorgegangen sind.
In den ersten beiden Teilen der Arbeit sind die kinematischen Grundlagen sowie deren Anwendung auf die Grundgleichungen der Maxwell-Lorentzschen Theorie behandelt; dabei hielt ich mich an die Arbeiten von H.A. Lorentz (...1904) und A. Einstein (...1905).

Translated:
In what follows, the attempt is made to summarize into a whole the works hitherto emerged from the unification of H.A. Lorentz's theory and the principle of relativity.
In the first two parts of the work, the kinematic foundations as well as their application upon the fundamental equations of the Maxwell-Lorentz theory are dealt with; on that occasion, I relied on works of H.A. Lorentz (...1904) and A. Einstein (...1905).
That famous Einstein 1905 paper says the same thing:
we have the proof that, on the basis of our kinematical principles, the electrodynamic foundation of Lorentz's theory of the electrodynamics of moving bodies is in agreement with the principle of relativity.
That Lorentz theory refers to what we now call Maxwell's equations, the Lorentz force law, and the 1895 Lorentz paper giving a relativistic explanation of Michelson-Morley. Poincare criticized that theory in 1900 for not being fully compatible with the principle of relativity. Lorentz answered with a 1904 paper extending his theory to comply with the principle of relativity.

The technical content of Einstein's 1905 paper was essentially the same as Lorentz's 1904 paper -- responding to Poincare's conjecture that the theory comply with the principle of relativity. The main difference, as Schwarz explains, is that Lorentz shows how the theory is a consequence of experiment, while Einstein takes a shortcut by using postulates instead. There were also minor terminological differences, such as saying "local time" or "time in the local reference frame".
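The shared technical core of both papers is the Lorentz transformation itself, which in modern notation reads:

```latex
x' = \gamma\,(x - vt), \qquad
t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

The same formulas appear in Lorentz's 1904 paper and Einstein's 1905 paper; only the route to them differs.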

No one else saw any difference between Lorentz's and Einstein's views, as the theory was often called the Lorentz-Einstein theory. Only decades later were they considered different, and now they have different Wikipedia articles: Lorentz aether theory and Special relativity.

Einstein spent much of his life bragging about his relativity originality, and in 1954 he gave this confusing explanation of how his theory was different:
By careful examination of the experimental facts, Lorentz found out that one has to think of the aether as rigid and acceleration-free (contrary to H. Hertz). Newton's space was "materialized" in this way. Though time didn't appear as a problem at first. Yet it became a problem, because it enters as an independent variable (besides the space coordinates) into Maxwell's equations of "empty space", upon which all electromagnetic processes were founded by Lorentz. Now, all would have been satisfying, when it would have been possible to demonstrate the state of motion of the aether ("absolute rest"). The systematic treatment of this problem by Lorentz led very closely to special relativity, because this problem forced Lorentz to transform spatial coordinates and time collectively. That he didn't make that step to special relativity, simply lied in the circumstance, that it was psychologically impossible for him to dispense with the reality of the aether as a material thing (carrier of the electromagnetic field). Those who witnessed this time will understand it.
Einstein believed in the aether after about 1916, but back in that 1907 paper, he had much more faith in electromagnetic fields:
Only the idea of a luminiferous ether as the carrier of electric and magnetic forces does not fit in in with the theory presented here; for electromagnetic fields do not appear here as states of some kind of matter, but rather as independently existing objects, on a par with matter, and sharing with the latter the characteristic of inertia. [Schwarz translation]
Einstein is taking a strange philosophical view here. When people back then talked about the aether, they often just meant the possibility of electromagnetic fields.

Lorentz does acknowledge that he had not abandoned the concept of a preferred frame. He wrote in 1910:
Provided that there would exist an aether: then one of all systems x, y, z, t, would be preferred ... Now, if the relativity principle had general validity in nature, however, one would consequently be unable to find out whether the reference system momentarily employed is that preferred one. Thus one arrives at the same results, ... To which of both ways of thinking one adheres to, we can leave to the judgment of each individual.
In other words, you can believe or not believe in an unobservable preferred frame. He wrote in 1914:
Einstein ... arrives at the abolishment of the aether. The latter is, by the way, to some extent a quarrel about words: it makes no great difference, whether one speaks of vacuum or aether. Anyway, according to Einstein it has no meaning to speak about a motion relative to the aether. He also denies the existence of absolute simultaneity.
Most cosmological models do have a preferred frame for time and stationary objects. That is why you can read stories about the age of the universe, and how some stars are older than other stars. So there is (probably) a preferred frame, and it does not have much to do with special relativity.

Those who say that special relativity somehow depends on there being no preferred frame are just wrong. There may or may not be a preferred frame, and it does not make any difference to the Lorentz transformations or any other part of special relativity.

Einstein does not mention Poincare, who introduced the spacetime geometry that is central to what everyone has called special relativity for the last century.

Monday, June 16, 2014

Defending philosophy again

I mentioned Massimo Pigliucci defending philosophy below, and now he has an audio/video discussion of the same topic.

Pigliucci mainly defends the value of philosophical thinking, and accuses physicists of being ignorant for thinking that philosophers are all post-modernists. He also says that string theorists are upset with philosophers saying that their untestable ideas are unscientific, and that the theorists would like to re-define science without any interference from philosophers.

He says physicists should respect philosophy because Bohr and Einstein had philosophical debates. Okay, but what philosopher has said anything worthwhile about physics in the last 50 years?

He says that some physicists have conceded value to ethics and moral philosophy, but have strong objections to the philosophy of science. I have tried to read some supposedly important moral philosophy, and I have found it to be nearly 100% worthless.

He misses the point of just how anti-science philosophers have become. Scientists believe that they are searching for truth and philosophers deny it. And not just the postmodernists.

Friday, June 13, 2014

Evolutionists and teachers confused about randomness

Here is an evolution acceptance paper:
Is Oklahoma really OK? A regional study of the prevalence of biological evolution-related misconceptions held by introductory biology teachers
by Tony B Yates and Edmund A Marek

Biological evolutionary explanations pervade all biological fields and bring them together under one theoretical umbrella. Whereas the scientific community embraces the theory of biological evolution, the general public largely lacks an understanding, with many adhering to misconceptions. Because teachers are functioning components of the general public and most teachers experience the same levels of science education as does the general public, teachers too are likely to hold biological evolution misconceptions. The focus of this study was to identify the types and prevalence of biological evolution misconceptions held by Oklahoma high school introductory biology teachers and to correlate those findings with demographic variables. ...

Such a high misconception rate in teachers concerning the mechanism of randomness in evolution is disconcerting because there is probably no other misconception which better indicates a lack of understanding of evolution than the belief that evolution proceeds by random chance (Isaak 2003). With the environment selecting specific variations within populations, evolution in totality is a nonrandom process. However, randomness does play a role in pivotal evolutionary mechanisms including the origination of variations via both mutations and gene recombination (Smith and Sullivan 2007). As Dawkins puts it, ‘ … evolution is the nonrandom survival of randomly varying coded information’ (Dawkins 2009, p. W2).
Non-mathematicians tend to have funny views about randomness. I mentioned before the Causalist-Statisticalist Debate about how some scientists emphasize randomness and some do not.

Interpretations of quantum mechanics have similar confusions, with some arguing that randomness is essential to the theory, and others not.

Random chance is not a physical thing. It is just a way of describing our inability to predict something. Dawkins is saying that he can predict the survival but not the DNA changes. But some DNA changes in a population are predictable, and some extinction events are not.
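Dawkins's slogan about "nonrandom survival of randomly varying coded information" can be sketched in a few lines. Everything here (the bit-string genome, the fitness function, the loop length) is an illustrative toy, not a model from the paper: mutation is the unpredictable part, selection is a deterministic rule.

```python
import random

# "Nonrandom survival of randomly varying coded information": mutation is
# an unpredictable bit flip; selection is a deterministic rule that keeps
# whichever genome scores at least as high. Toy example, not biology.
def mutate(genome, rng):
    i = rng.randrange(len(genome))                     # random variation
    return genome[:i] + (1 - genome[i],) + genome[i + 1:]

def fitness(genome):
    return sum(genome)                                  # deterministic criterion

rng = random.Random(0)
pop = (0,) * 10
for _ in range(200):
    child = mutate(pop, rng)
    if fitness(child) >= fitness(pop):                  # survival is not random
        pop = child
print(fitness(pop))  # climbs toward 10 despite the random mutations
```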

If this is the biggest misconception about evolution, then the teachers probably understand it pretty well.

Wednesday, June 11, 2014

BICEP2 skepticism

For BICEP2 skepticism, see Big Bang Blunder Bursts the Multiverse Bubble and What is direct evidence and does the BICEP2 measurement prove that gravity must be quantized?.

I wonder why the professional skeptics are not more skeptical about big announcements like BICEP2 being proof of big bang inflation causing quantized gravity waves and the multiverse. Instead they are always writing about how there is no good evidence for homeopathy or astrology.

Monday, June 9, 2014

Mermin has pedantic quibble about QBism

Cornell physicist David Mermin was quoted in NewScientist magazine, and then wrote a hair-splitting letter attempting to somehow correct his view of quantum mechanics:
I am delighted that you take the quantum Bayesianist view of science seriously enough to feature it on your cover (10 May, p 32). But you overemphasize the subjectivity of the scientist almost as much as conventional physics underemphasizes it.

Your article attributes to QBism the view that "measurements do not cause things to happen in the real world, they cause things to happen in our heads". The actual QBist position is that a measurement is any action that a particular person (Alice) takes in her external world, and the outcome of that measurement is the experience this world prompts in Alice through its response to her action.

Other consequences of Alice's action are part of her external world and potentially accessible to others. Alice has her own private subjective experience, but she can attempt to describe this to others through the imperfect medium of language, which helps account for the common features of the different external worlds that each of us individually infers from our own private experience.

It is of course hard to convey all this in three pages, let alone a short letter. After all, it has escaped the awareness of almost all physicists for nearly 90 years. For a more nuanced view of QBism I recommend the paper cited in your article, bit.ly/qbism.
Ithaca, New York, US
No, QBism is not some view that has escaped physicists for 90 years. It is nearly the same as Bohr's view, as I have noted previously.

Recent Mermin explanations of his view of quantum mechanics are here and here, and he gripes about how the above letter was edited.
The first omission — of the New Scientist’s own words — diminishes the degree to which their article misrepresents QBism as antirealist. The second omission — from both my versions — eliminates the heart of my explanation of QBist realism. Their combined effect is to turn my correction of the New Scientist’s gross misrepresentation of realism in QBism into what sounds like a pedantic quibble.
His letter is a stupid pedantic quibble because he cannot explain how his view is any different from what Bohr and others said in 1930.

He seems unhappy to be put in the antirealist camp. That is a confusing term. It is not that quantum mechanics is contrary to realism, but some physicists, like Einstein, Bell, and other quantum-haters, believed that realism requires measurement outcomes to be determined by hidden variables. Quantum mechanics rejects that view, and for that it is called antirealist.

Mermin complains about the shortness of his letter, but go ahead and read his longer articles to see how empty they are. He thinks that he has discovered something, because he used to pitch a different interpretation of quantum mechanics, and he seems to have finally come around to the view that Bohr and Heisenberg and Schroedinger had it right all along. But there is no need to read his papers; just read an old textbook and ignore the modern physics gurus who babble nonsense about quantum mechanics being wrong.

Friday, June 6, 2014

Taking quantum mechanics seriously

A new physics paper starts:
Many advocates of the Everettian interpretation consider that their approach is the only one taking quantum mechanics really seriously, and that by doing so a fantastic scenario for our reality can be deduced, made of infinitely many and continuously branching parallel worlds.
I guess it's true that those advocates claim that they are the ones taking quantum mechanics seriously, but the truth is more nearly the opposite. The many-worlds interpretation has no scientific merit, and is just a way of mocking quantum mechanics.

Taking quantum mechanics seriously means only believing in observables, and not in imaginary worlds that can never be observed.

Tuesday, June 3, 2014

Counterfactuals: Quantum Mechanics

Some interpretations of quantum mechanics rely very heavily on a certain style of counterfactual reasoning. Quantum mechanics teaches that an electron has wave and particle properties. It is observed as a point particle, but otherwise it behaves like a wave. The uncertainty principle says that you cannot measure the precise position and momentum at the same time.

J.J. Thomson won the 1906 Nobel Prize for discovering that electrons are particles (his acceptance lecture called them "corpuscles"). His son won the 1937 Nobel Prize for showing that electrons are waves. The subject has confused people ever since.

The interpretations differ in their view of the electron before it is observed. Some say that it is a wave, with no precise location. Others say that it is a particle, but a funny kind of particle that can be many places at once. It thus has multiple histories, unlike anything in our ordinary experience.

The multiple histories get confusing when we consider counterfactuals. Suppose you ask, "where would the electron be if it were in a particular location at a particular time?" This question makes no sense: by the uncertainty principle, the electron could not have been at a particular location at a particular time (unless a measurement disturbed the system). Admitting that the electron could have been in other places as well at the same time only makes the counterfactual meaningless.

To see how this dilemma plays out, consider the Double-slit experiment. An electron beam is fired thru a double-slit, and wave properties cause an interference pattern at the detector. The confusion occurs when you introduce counterfactuals about which slit the electrons pass thru. You can ask "if the electron goes thru the top slit, then where is it detected?" The question has no clear answer. Attempts to answer it involve saying that the electron has some sort of spooky action-at-a-distance effect on electrons going thru the other slit.

The best answer is to refuse to answer the question as a meaningless counterfactual. The question presupposes that the electron is a particle, and it is not a particle. It is a wave that goes thru both slits at once. If you put detectors in the slits then you find the electron in one slit or the other, but without the detector the electron cannot be localized to one slit.
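A toy calculation makes this concrete. The sketch below uses made-up wavelength and geometry numbers, not a real experiment: adding amplitudes (the wave goes thru both slits) gives a different prediction from adding probabilities (the counterfactual "it went thru one slit or the other"), and the difference is the interference pattern.

```python
import cmath

def amplitude(x, slit_y, k=2.0, L=10.0):
    """Complex amplitude for a wave from one slit to detector point x.
    k and L are illustrative numbers, not measured values."""
    path = (L**2 + (x - slit_y)**2) ** 0.5   # path length from slit to detector
    return cmath.exp(1j * k * path) / path   # phase accumulates along the path

def both_slits(x):
    """Wave goes thru both slits: amplitudes add, then we square."""
    a = amplitude(x, +1.0) + amplitude(x, -1.0)
    return abs(a) ** 2

def one_slit_each(x):
    """Counterfactual 'it went thru one slit or the other':
    probabilities add, so the interference cross term is lost."""
    return abs(amplitude(x, +1.0)) ** 2 + abs(amplitude(x, -1.0)) ** 2

# At the symmetric center point the amplitudes add constructively,
# so the both-slits intensity is exactly double the no-interference sum.
```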

As Asher Peres explained, "unperformed experiments have no results." The electron only looks like a point particle when certain measurement experiments are done. If no such experiment is performed, then there are no point particle consequences.

At this point you are probably wondering why we even talk about the electron as a particle if it is really a wave and if treating it as a particle causes so much trouble. The answer is that treating it as a particle is incredibly useful. Every electron appears identical, with the same mass, charge, spin, and zero radius. We can even see the particle tracks in bubble chamber pictures. R.P. Feynman worked out a diagram calculus for electrons where predictions are based on sums over their many possible histories. Under his interpretation, each diagram is a counterfactual, but they all contribute to the real measurement.

Feynman's quantum mechanics textbook has his version of Peres's slogan:
Another thing that people have emphasized since quantum mechanics was developed is the idea that we should not speak about those things which we cannot measure. (Actually relativity theory also said this.)
The particle interpretation of electrons and photons works quite well as long as you avoid something called counterfactual definiteness. That is, you must not assume that the particles have particular properties that could be obtained by measurement, unless you actually do the measurement.

Bell's 1964 theorem and subsequent experiments have shown the paradox of quantum mechanics more strikingly than the double-slit. The theorem has been called “the most profound discovery of science.” It is usually explained in terms of correlations between measurements on two equal and opposite electrons.

The correlations are hard to understand, and have two common explanations. Either one measurement has a spooky non-causal effect on the other, or we have to reject counterfactual definiteness.

Many physicists prefer the spooky explanation, and say that this proves that quantum mechanics allows non-locality, meaning one measurement can have an action-at-a-distance effect on another. As no experiment has ever demonstrated such an effect and we have no mathematical theory for how that would happen, this is indeed a striking conclusion.
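The arithmetic behind the dilemma is short. A minimal sketch, taking as given the standard quantum prediction E(a,b) = -cos(a-b) for the correlation of singlet-state spins measured along angles a and b; the angles below are the usual illustrative CHSH settings, not anything derived here.

```python
import math

def E(a, b):
    """Quantum correlation for two equal-and-opposite (singlet) electrons
    measured along angles a and b. Standard textbook formula, assumed."""
    return -math.cos(a - b)

# CHSH combination: any theory with locality plus counterfactual
# definiteness keeps |S| <= 2, but quantum mechanics reaches 2*sqrt(2).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))   # about 2.828, beyond the classical bound of 2
```

Experiments match the quantum value, so one of the two assumptions behind the bound of 2 has to go.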

The better and more mundane conclusion is that quantum mechanics is not amenable to certain counterfactuals. The disagreement is about whether this conclusion is profound. To some physicists, this proves that scientific realism is impossible. To others, it just shows that some ill-posed questions are unanswerable, just like many philosophical and theological questions, such as "how many angels can dance on the head of a pin?"

There has been no Nobel Prize given for work related to Bell's theorem. While there is talk of such a prize every year, others shrug it off as just a metaphysical confusion about counterfactuals. All of the Bell test experiments have confirmed the 1930 understanding of quantum mechanics.

Nature is not just the conjunction of quantum particle scenarios. An electron can be measured in a lot of different places, but the electron is not the union of those possibilities because that would not capture its wave properties.

In a mystery novel, Sherlock Holmes might list five possible explanations for a crime, eliminate four, and conclude that the fifth was correct. You cannot do that with quantum particle interpretations because electrons are not really particles.

It does make perfect sense in quantum mechanics to treat electrons and photons as particles with particle histories, as long as you admit that a large number of such histories are possible. The histories are like counterfactuals except that they cannot be said to be definitely true or false. They are just Feynman paths. They allow a particle interpretation of nature and have an extremely powerful predictive theory.

Feynman paths and diagrams are used to explain the simple transmission of light. According to the theory, the simplest interactions involve infinitely many diagrams showing paths and collisions of electrons and photons. The diagrams might involve energy being created or destroyed, particles going backwards in time, and causality violations. Consideration of all of these ridiculous scenarios is essential to calculating an answer that agrees with experiment.

You can choose to believe that all of these scenarios are real if you wish, but it is easier to think of them as counterfactuals. They are wildly implausible assertions that are somehow used to deduce a valid conclusion. You might think that agreement with experiment is strong evidence for the truth of the hypotheses used to make the numerical prediction. But in this case, it is not. The scenarios involve simultaneous precise positions and momenta for photons and electrons. We know that photons and electrons are not really particles, and they do not have specific positions and momenta. We also know that energy is conserved, and that cause precedes effect. We think we know those things, anyway.

Thus it is convenient to think of quantum particles as having multiple counterfactual histories as Feynman paths, and the theory makes predictions as sums over all those paths. One cannot say that some paths are true and some are false, as all must be used to get correct numerical predictions and none can be taken literally. This interpretation is popular but not required, as it is also possible to treat the electrons and photons as fields instead of particles.
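The sum-over-histories idea can be sketched in a few lines. In this toy (the geometry and wavenumber are illustrative assumptions), the particle passes thru any one of several intermediate points on the way to the detector; each history contributes only a phase, no history is true or false by itself, and only the squared magnitude of the sum is observable.

```python
import cmath

k = 5.0                                    # illustrative wavenumber
source, detector = (0.0, 0.0), (0.0, 20.0)
# each intermediate point defines one counterfactual history
midpoints = [(y, 10.0) for y in (-2.0, -1.0, 0.0, 1.0, 2.0)]

def length(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# one complex amplitude (a pure phase) per history
amps = [cmath.exp(1j * k * (length(source, m) + length(m, detector)))
        for m in midpoints]

total = sum(amps)                 # histories interfere with each other
probability = abs(total) ** 2     # only the sum is observable
```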

For those who argue that time is an illusion, the possibility of multiple histories is very unsettling. They refuse to accept that the past is any different from the future, so they say that the multiple counterfactuals in the past imply that we also have multiple counterfactuals in the future. So they prefer the many-worlds interpretation, where all possible scenarios happen in alternate universes. There can be no physical evidence for alternate universes, but they find it more compatible with their prejudices against time. In my opinion, belief in many-worlds is nothing but a severe confusion about the nature of time and counterfactuals.

The uncertainty principle says that measuring position and then momentum can give a different result from measuring momentum and then position. In quantum jargon, the observable operators do not commute, so the operator order matters. This is hard to understand if you believe in some sort of counterfactual simultaneous observation of position and momentum. But if you follow the Feynman-Peres advice against counterfactual definiteness, then it is plausible that the order of observation can make a difference.
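A finite-dimensional illustration of non-commuting observables: spin components along different axes, represented by the Pauli matrices, already show the order dependence. (Position and momentum behave the same way but need infinite-dimensional operators, so spin is used here as a stand-in.)

```python
# Pauli matrices for spin along the x and z axes, as plain 2x2 lists.
sigma_x = [[0, 1], [1, 0]]
sigma_z = [[1, 0], [0, -1]]

def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

xz = matmul(sigma_x, sigma_z)   # z applied first, then x
zx = matmul(sigma_z, sigma_x)   # x applied first, then z

# The products differ (in fact xz == -zx), so the order of
# observation matters: these observables do not commute.
```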

You need to reject counterfactual definiteness in quantum mechanics because the theory predicts observations that disturb the system. A system with an observation is different from one without. You cannot just assume that the system behaves as if everything were observed. If you did, then the electrons and photons would all seem like particles, and you would miss out on the wave properties.

Counterfactuals are understandable by children but troublesome for philosophers. I have posted below about counterfactuals outside the context of quantum mechanics, in order to clarify them without the added quantum confusion.

My conclusion is that the counterfactual is the most confusing thing about quantum mechanics. Understand counterfactuals and you will not be tempted to subscribe to quantum spooky interpretations, to believe in many-worlds, or to deny locality, causality, or realism. All of those mysteries are rooted in misunderstandings about counterfactuals.