Sunday, July 24, 2016

Musser tries to explain nonlocality

George Musser writes on the Lumo blog about nonlocality:
The situation is nonlocal inasmuch as we are speaking of joint properties of spatiotemporally separated objects. We know the singlet electrons have a total spin of zero, but we cannot ascribe either particle a definite spin in advance of measurement. If you object to the word “nonlocal” in this context, fine. I would also be happy with “nonseparable,” “delocalized,” or “global.” ...

The real issue is how to explain the phenomenology of correlations. I know that Luboš does not think highly of the EPR paper (neither did Einstein), but it is the usual starting point for this discussion, so let us focus on the most solid part of that paper: the dilemma it presents us with. Given certain assumptions, to explain correlated outcomes, we must either assign some preexisting values to the properties of entangled particles or we must imagine action at a distance. Einstein recoiled from the latter possibility — he was committed to (classical) field theory. The former possibility was later ruled out by Bell experiments. So, presumably we need to question one of the assumptions going into the argument, and that’s where we go down the interpretive rabbit hole of superdeterminism, Everettian views, and so forth, none of which is entirely satisfactory, either. We seem to be stuck. ...

If you disagree, fine. Tell me what is going on. Give me a step-by-step explanation of how particle spins show the observed correlations even though neither has a determinate value in advance of being measured.
He is saying that if you want an intuitive understanding of "what is going on", then you have to either accept action at a distance or contradictions with experiment. Both of those are unacceptable.

The way out of this conundrum, as the textbooks have explained for 85 years, is to reject the idea that particle spin can be modeled by classical (pre-quantum) objects. By "what is going on", he means something he can relate to by personal experience. In other words, he wants a classical interpretation.

The classical interpretations are made impossible by the noncommuting observables, or by Bell's theorem, or by several other arguments that make the same point.

When you observe a particle's spin, you change its state into something that has a classical interpretation. But just temporarily. If you then measure spin in a different direction, you are back to non-classical behavior.
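To see why no classical model works, here is a minimal sketch (in Python with NumPy; my own illustration, not Musser's) that computes the singlet correlations from the usual QM formalism. Any classical hidden-variable model must satisfy the CHSH bound |S| ≤ 2, while the quantum prediction reaches 2√2:

```python
import numpy as np

# Pauli matrices for spin measurements
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Singlet state (|01> - |10>)/sqrt(2), total spin zero
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    """Expected product of the two measurement outcomes (+1 or -1)."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# CHSH combination: any local hidden-variable model gives |S| <= 2
a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))  # both ~2.828: the quantum violation
```

No assignment of preexisting values to the two spins reproduces that number. That is Bell's theorem, and it is a statement about classical models, not about locality.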

The supposed nonlocality is just an illusion. The experiments only seem nonlocal if you try to match them to a classical model.

I don't know why this is so difficult. I am just saying what the textbooks have said for decades.

Thursday, July 21, 2016

Einstein’s process of discovery

Einstein idolizer John D. Norton writes a lot of nonsense about Einstein, with the latest being How Einstein Did Not Discover. It includes:
11. The Power of a Single Experiment

The example of Einstein’s discovery of the light quantum will illustrate another popular myth about what powered Einstein’s discoveries. There is, in each case, a single, perplexing, powerful, decisive, crucial experiment. Only Einstein, it is said, was able to interpret the experiment correctly and arrive at a great discovery.

This myth is best known through the example of the Michelson-Morley experiment. Contrary to many reports, Einstein did not formulate the theory as a direct response to its null outcome. The mistake is an easy one to make. It was long standard for pedagogic accounts of special relativity to begin with an account of the experiment and jump directly to special relativity. The pattern starts with Einstein’s (1907) early review. It introduces the Michelson-Morley experiment and no others in its opening pages. Holton’s (1969) analysis of the myth is standard and includes numerous examples. To it, we should add that the null result of the Michelson-Morley experiment was unhelpful and possibly counter-productive in Einstein’s investigations of an emission theory of light, for the null result is predicted by an emission theory.
As Norton notes, Einstein's 1907 article and most modern textbooks introduce the Michelson-Morley experiment (MMX) as crucial to the development of special relativity. The papers that announced the discovery of the FitzGerald contraction, the Lorentz transformation, local time, and spacetime geometry all explained these concepts as consequences of MMX.

The null result could be explained by a light emission theory, or by a stationary Earth, or by an aether drag. So MMX alone did not prove special relativity. Other experiments caused the experts to reject those possibilities.

Einstein did not cite MMX or those other theories and experiments because his work was derivative. He was just giving a reformulation of Lorentz's theory, but not recapitulating Lorentz's theoretical and experimental arguments.

Einstein historians have to do a funny dance around this subject, because the relativity textbooks don't make any sense. They say that the MMX was crucial, and they say that Einstein invented special relativity, but Einstein denied that MMX had anything to do with his work.

There is a larger reason for denying the importance of MMX. Philosophers and science historians today are anti-positivists, and they deny that scientific breakthroughs are based on crucial experiments. Relativity was considered the first great breakthrough of the 20th century, so the positivist-haters need some way of saying that it was not driven by experiment.

It seems possible that someone could have predicted special relativity from abstract principles of causality, or from mathematical analysis of electromagnetism. But that is not how it happened. It was positivist analysis of experiments.

Monday, July 18, 2016

Air Force funds quantum computers

NextGov reports:
The Air Force wants white papers that describe new ways quantum computing could help achieve its mission, according to an amended Broad Agency Announcement posted Friday. Eventually, the government could provide a test-bed where a contractor might install, develop and test a quantum computing system, according to the announcement.

Last year, the Air Force announced it had about $40 million available to fund research into, and the eventual maintenance and installation of a quantum system -- a branch of emerging computing technology that relies on the mechanics of atomic particles to process complex equations. ...

The Air Force is among several other federal groups interested in quantum.

Last year, for instance, the Intelligence Advanced Research Projects Activity, which focuses on research and development, said it planned to award a multiyear grant to IBM to build out a component of a quantum computer. A true quantum computer might be useful, IARPA program manager David Moehring told Nextgov then, because it might be applied to complex questions like the "Traveling Salesman Problem" -- what's the best way for a salesman to visit several different locations?
$40M is not much money to the Air Force, but it shows how money is pouring into the field.

Most quantum computing projects are not even very expensive, by the standards of modern physics experiments.

I liked this comment:
String theory, multiple universes, complexity, quantum teleportation... these are to Physics what Division I football is to college, which is to say, it sells tickets and opens purse strings. No one is going to buy a book on Newtonian physics and relive their junior year in high school. But let Brian Greene write something crazy and out there about a "Holographic Universe" or somesuch and the peeps will scoop it up, and maybe even decide to become physics and math majors, and there are lots of worse results than that. So let the alumni donate for the football team, and let the googley-eyed high schoolers all plan on high-paying and fulfilling careers as Quantum Mechanics. It puts butts in the seats...
So do most physicists realize that 90% of the public image of Physics is garbage? But they quietly go along with it because it keeps the funding dollars coming in?

Sometimes I think that I am just posting the obvious on this blog. Maybe everyone knows it, but cannot say. I can say it because I am not part of the Physics money machine.

Tuesday, July 12, 2016

Google wants us to worry about quantum computing

Google is trying to make us nervous about quantum computing:
Google is working on safeguarding Chrome against the potential threat of quantum computers, the company announced today.
At least they admit that quantum computers may be impossible:
Quantum computers exist today but, for the moment, they are small and experimental, containing only a handful of quantum bits. It's not even certain that large machines will ever be built, although Google, IBM, Microsoft, Intel and others are working on it. ... quantum computers could undermine the security of the entire internet.
Jonah Lehrer is back with a new book, after a spectacular rise and fall as a science writer. Before his fall, I accused him of fabricating an Einstein quote, but no one cared about that.

Thursday, July 7, 2016

When evidence is too good to be true

Phys.org reported this in January:
Under ancient Jewish law, if a suspect on trial was unanimously found guilty by all judges, then the suspect was acquitted. This reasoning sounds counterintuitive, but the legislators of the time had noticed that unanimous agreement often indicates the presence of systemic error in the judicial process, even if the exact nature of the error is yet to be discovered. They intuitively reasoned that when something seems too good to be true, most likely a mistake was made.

In a new paper to be published in The Proceedings of The Royal Society A, a team of researchers, Lachlan J. Gunn, et al., from Australia and France has further investigated this idea, which they call the "paradox of unanimity."

"If many independent witnesses unanimously testify to the identity of a suspect of a crime, we assume they cannot all be wrong," coauthor Derek Abbott, a physicist and electronic engineer at The University of Adelaide, Australia, told Phys.org. "Unanimity is often assumed to be reliable. However, it turns out that the probability of a large number of people all agreeing is small, so our confidence in unanimity is ill-founded. This 'paradox of unanimity' shows that often we are far less certain than we think."

The researchers demonstrated the paradox in the case of a modern-day police line-up, in which witnesses try to identify the suspect out of a line-up of several people. The researchers showed that, as the group of unanimously agreeing witnesses increases, the chance of them being correct decreases until it is no better than a random guess.

This is an important point, and a paradox. If someone tells you that scientists are unanimous about global warming, vaccine policy, cosmic inflation, or Donald Trump, you should be suspicious.

Of course the textbooks are unanimous on many things, such as energy conservation. So we should not reject all that textbook knowledge. But most of those things only got into the textbooks after some healthy debate about the pros and cons.
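Here is a minimal sketch of the paradox in a toy Bayesian model (Python; the 90% witness accuracy and 1% systemic-error rate are made-up numbers for illustration):

```python
p = 0.9     # chance an individual witness identifies correctly (assumed)
eps = 0.01  # chance the whole process is systemically biased (assumed)

def prob_correct_given_unanimous(n):
    # Honest process: all n independent witnesses happen to be right.
    honest = (1 - eps) * p**n
    # Biased process: everyone agrees automatically, on the wrong person.
    biased = eps
    return honest / (honest + biased)

for n in (1, 5, 10, 20, 50):
    print(n, round(prob_correct_given_unanimous(n), 3))
```

As unanimity grows, the biased-process explanation dominates: in this model, fifty agreeing witnesses are far less convincing than five.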

Wednesday, July 6, 2016

Comparing science to poetry

Philosophers sometimes complain that they get no respect from scientists.

The NY Times has a running series of essays, and the latest one denies that there is a scientific method, because the author says science is like poetry:
In 1970, I had the chance to attend a lecture by Stephen Spender. He described in some detail the stages through which he would pass in crafting a poem. He jotted on a blackboard some lines of verse from successive drafts of one of his poems, asking whether these lines (a) expressed what he wanted to express and (b) did so in the desired form. He then amended the lines to bring them closer either to the meaning he wanted to communicate or to the poetic form of that communication.

I was immediately struck by the similarities between his editing process and those associated with scientific investigation and began to wonder whether there was such a thing as a scientific method. Maybe the method on which science relies exists wherever we find systematic investigation. In saying there is no scientific method, what I mean, more precisely, is that there is no distinctly scientific method.

There is meaning, which we can grasp and anchor in a short phrase, and then there is the expression of that meaning that accounts for it, whether in a literal explanation or in poetry or in some other way. Our knowledge separates into layers: Experience provides a base for a higher layer of more conceptual understanding. This is as true for poetry as for science. ...

James Blachowicz is a professor emeritus of philosophy at Loyola University Chicago
Science finds objective truths about the world. Poetry just expresses thoughts in an entertaining way. If he cannot see the difference, he deserves no respect.

Saturday, July 2, 2016

The crisis in Physics

NPR Radio reports:
Of course, there are many scientists who continue to see great promise in string theory and the multiverse. But, as Marcelo and I wrote in The New York Times last year, it all adds up to muddied waters and something some researchers see as a "crisis in physics."

Smolin and Unger believe this crisis is real — and it's acute. They pull no punches in their sense that the lack of empirical data has led the field astray.
The "crisis" here is that we have good physical theories that explain nearly everything that is observed. Theoretical physicists like to speculate about unobservable parallel universes, but then they have no data to test their ideas.

Wednesday, June 29, 2016

Trying to test Many-Worlds

David Deutsch is a big proponent of the Many Worlds Interpretation of quantum mechanics, and writes The Logic of Experimental Tests, Particularly of Everettian Quantum Theory:
By adopting a conception – based on Popper’s – of scientific theories as conjectural and explanatory and rooted in problems (rather than being positivistic, instrumentalist and rooted in evidence), and a scientific methodology not involving induction, confirmation, probability or degrees of credence, and bearing in mind the decision-theoretic argument for betting-type decisions, we can eliminate the perceived problems about testing Everettian quantum theory and arrive at several simplifications of methodological issues in general.
This is what you get when you reject positivism. You can decide to believe in parallel universes with no evidence, because only the positivists insist on being rooted in evidence. Deutsch even claims that the parallel universe theory is testable, because of some philosophical misdirection.
An explanation is bad (or worse than a rival or variant explanation) to the extent that…

(i) it seems not to account for its explicanda; or
(ii) it seems to conflict with explanations that are otherwise good; or
(iii) it could easily be adapted to account for anything (so it explains nothing).

It follows that sometimes a good explanation may be less true than a bad one (i.e. its true assertions about reality may be a subset of the latter’s, and its false ones a superset).
This sort of reasoning allows him to accept explanations that are not really true.
Scientific methodology, in turn, does not (nor could it validly) provide criteria for accepting a theory. Conjecture, and the correction of apparent errors and deficiencies, are the only processes at work. And just as the objective of science isn’t to find evidence that justifies theories as true or probable, so the objective of the methodology of science isn’t to find rules which, if followed, are guaranteed, or likely, to identify true theories as true.
This is anti-positivism.

I take the positivist view that science has established the truth of theories like Newtonian gravity for celestial mechanics, within a suitable domain of applicability. The theory lets you make observations, fit a model, and make predictions of orbits with error estimates.

Yes, general relativity makes more precise predictions in some extreme cases, and gives a more satisfactory causal explanation, but the original Newtonian theory is still valid within its limits.
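To illustrate what "fit a model and make predictions with error estimates" means in the Newtonian case, here is a minimal sketch (Python, using standard textbook orbital data) recovering Kepler's third law, T² ∝ a³, which Newtonian gravity predicts:

```python
import numpy as np

# Semi-major axis (AU) and orbital period (years) for six planets
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.447])

# Newtonian gravity predicts T^2 proportional to a^3,
# i.e. a slope of exactly 1.5 on a log-log plot.
slope, intercept = np.polyfit(np.log(a), np.log(T), 1)
residuals = np.log(T) - (slope * np.log(a) + intercept)

print(round(slope, 4))   # ~1.5, the Newtonian prediction
print(residuals.std())   # tiny scatter = the error estimate on the fit
```

The fitted exponent comes out at 1.5 with tiny residuals. That is what "valid within its limits" means in practice.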
But one thing that cannot be modelled by random numbers is ignorance itself (as in the ‘ignorance interpretation of probability’).
Deutsch and other many-worlds advocates are very unhappy with probability, because they have no idea how to say that some of the parallel worlds are more likely than others, and hence cannot make sense out of probabilistic experiments.

So Deutsch says that the answer is to reject positivism and be very broad-minded about what is accepted as an explanation.

Monday, June 27, 2016

History of the quantum leap

German professor Herbert Capellmann writes on The Development of Elementary Quantum Theory from 1900 to 1927:
The basic laws of classical physics relied upon the principle ”Natura non facit saltus” (nature does not make jumps), transmitted from ancient philosophy. The underlying assumption was the existence of a space-time continuum and all changes in nature should occur continuously within this space-time continuum. Starting towards the end of the 17’th century, the classical laws governing these changes were expressed in form of differential equations or variational principles, where infinitesimally small changes of various physical variables are related to each other. Typically these differential equations of classical physics possessed exact solutions for given initial and boundary conditions, at least in principle. This led to the general conclusion that nature is deterministic; the state of nature at any given time was believed to be related in a unique way to its state at any past or future time. Even if the development of statistical thermodynamics related probabilities to thermodynamic variables, these probabilities were meant to describe insufficient knowledge of details due to the large numbers of microscopic particles involved, but deterministic behavior of all individual processes was not questioned. ...

The basic principles, as formulated by Max Born, Werner Heisenberg and Pascual Jordan in 1925, are contained in:

The basic principles of Quantum physics:
- On the microscopic level all elementary changes in nature are discontinuous,
consisting of quantized steps: ”quantum transitions”.
- The occurrence of these quantum transitions is not deterministic, but governed by probability laws.
No, this is a very strange statement of the QM basic principles.

(He means atomic level, not microscopic level.)

Position, momentum, time, frequency, energy, etc. are not necessarily discrete. The wave function for an electron is typically a continuous function of space and time, and progresses smoothly according to a differential equation. So it is just wrong to say everything is discontinuous.
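That differential equation is the time-dependent Schrödinger equation, which evolves the wave function as smoothly as any classical field equation:

$$ i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi + V(x)\,\psi $$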

The emphasis on probabilities is also misleading, as he concedes that other physics theories also hinge on probabilities. He distinguishes QM by whether "insufficient knowledge" is involved, but that is just a philosophical issue.
The path towards the future Quantum Theory is defined as

”the systematic transformation of classical mechanics into a discontinuous atomic mechanics..... the new mechanics replaces the continuous manifold of (classical) states by discrete manifold, which is described by ”quantum numbers”....
quantum transitions between different states are determined by probabilities...
the theoretical determination of these probabilities is one of the profound tasks of Quantum Theory....”.
Some problems have a discrete spectrum, but all this talk about a discrete manifold is misleading.

The essence of QM is that the observables are represented by non-commuting linear operators.
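A minimal check of that non-commutativity, using the Pauli spin matrices (a Python sketch):

```python
import numpy as np

# Pauli matrices: the observables for spin along x, y, z
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# The commutator [sx, sy] = sx sy - sy sx equals 2i sz, not zero:
# measuring spin along x then y differs from the reverse order.
print(sx @ sy - sy @ sx)
print(2j * sz)
```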

The core of the theory is continuous, with continuous spacetime, continuous wave functions, etc. For all we know, light is continuous, and only shows properties at discrete frequencies because some continuous problem has a discrete spectrum. Even then, that spectrum can often be perturbed by applying an electric or magnetic field.
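And here is a minimal sketch (Python) of a discrete spectrum emerging from a continuous problem: discretize the operator -d²/dx² on the interval [0, 1], and the eigenvalues come out at the isolated values (nπ)², even though nothing in the problem jumps:

```python
import numpy as np

# Finite-difference version of -d^2/dx^2 on [0, 1],
# with the function pinned to zero at both endpoints.
N = 1000
h = 1.0 / (N + 1)
H = (np.diag(2.0 * np.ones(N)) +
     np.diag(-np.ones(N - 1), 1) +
     np.diag(-np.ones(N - 1), -1)) / h**2

evals = np.linalg.eigvalsh(H)
print(evals[:3])                            # ~9.87, ~39.5, ~88.8
print([(n * np.pi)**2 for n in (1, 2, 3)])  # 9.87, 39.5, 88.8
```

The operator and the functions it acts on are continuous; only the spectrum is discrete.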

Sunday, June 26, 2016

Old monkeys lose interest in toys

The NY Times Science section is pretty good, but it misses a lot of pure science stories because it prefers stories that somehow illuminate what it means to be human, especially if there is a leftist slant.

Here is the latest example:
Humans spend less time monkeying around as they get older, and according to a study published Thursday, so do monkeys.

As anyone who has ever hung out with a grandparent, observed a retiring parent, or grown old themselves may know, many people get pickier with age. ...

The researchers found that the monkeys’ interest in toys waned when they became reproductive. And around 20, (their “retirement age”) monkeys, like humans, had fewer social contacts and approached others less frequently. ...

How human behavior changes as we age could therefore have some biological origins. ...

Dr. Freund said she sees the same behavior patterns in humans.
So the lesson here is that we are just the same as the monkeys we split from 25M years ago. Leftists love the idea that we are just monkeys.

But there is no reason to believe that our relation to monkeys has anything to do with the result.
Perhaps monkeys and humans just lose stamina with age, and maybe the monkeys are too tired to deal with relationships that are ambivalent or negative, she added. Or maybe, as the researchers are now trying to investigate, aging monkeys are less socially interactive because they tend to take fewer risks, which is what appears to happen in humans according to some research.

Whatever the reason behind the behavior of these distantly-related species is, there’s a take-home message for humans: “Our behaviors that seem very much the result of our deliberation and choice,” said Dr. Freund, “might be more similar to our primate ancestors than we might think.”
So it is not in our genes at all. These are just choices that any thinking species might make.

So is anything new or surprising?

Here is some more obvious research. A medical journal reports that men are more willing to risk unsafe sex if the woman is extremely attractive.

Wednesday, June 22, 2016

Lobbying for the exascale computer

The NY Times reports:
Last year, the Obama administration began a new effort to develop a so-called “exascale” supercomputer that would be 10 times faster than today’s fastest supercomputers. (An exaflop is a quintillion — one million trillion — mathematical instructions a second.) Computer scientists have argued that such machines will allow more definitive answers on crucial questions such as the danger posed by climate change.
It is funny what scientists will say to get funding. If saying that anthropogenic global warming is a proven fact gets them funding, they say that. And if saying that an exascale supercomputer is needed to prove it gets them funding for those exaflops, they say that too.

Monday, June 20, 2016

Philosopher errors about free will

I listened to some dopey philosophers discuss free will, and they agreed on several absurd points.
Science has proved that the world is deterministic.
No, our leading scientific theories are not deterministic. Quantum mechanics is the most fundamental, and it is not deterministic at all. The most deterministic theory is supposed to be Newtonian mechanics, but it is not deterministic as it is usually applied.
Regardless of empirical knowledge, a rational materialistic view requires determinism.
I don't know how anyone can believe anything so silly. Nearly all of science, from astrophysics to social science, uses models that are partially deterministic and partially stochastic. Pure determinism is not used or believed anywhere.
Random means all possibilities are equally likely.
No. Coin tosses are supposed to have this property, but anything more complicated usually does not. If you allow the possibility of the coin landing on its edge, the edge is not equally likely.
Randomness means no one has any control over outcomes.
No, it means nothing of the kind. Often the randomness in a scientific paper is controlled by a pseudorandom number generator. While those numbers look random compared to the other data, they are determined by a formula.
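A minimal sketch of that point, using Python's standard library generator:

```python
import random

# A pseudorandom generator is a deterministic formula: reseeding it
# reproduces the exact same "random" sequence every time.
random.seed(42)
first = [random.random() for _ in range(3)]

random.seed(42)
second = [random.random() for _ in range(3)]

print(first == second)  # True: whoever picks the seed controls the outcomes
```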
The world is either deterministic or random, so we have no free will.
I wrote an essay explaining why this is wrong.

This is the sort of thing that only a foolish philosopher would say. Free will is self-evident. And yet they claim that it does not exist based on some supposed dichotomy between two other concepts that they never adequately define.

They might define randomness as anything that is not determined. Then they try to draw some grand philosophical consequence from that dichotomy. No, you cannot prove something nontrivial just by giving a definition.

Saturday, June 18, 2016

Physicists with faith in the multiverse

Peter Woit reports:
So, from the Bayesians we now have the following for multiverse probability estimates:

Carroll: “About 50%”
Polchinski: “94%”
Rees: “Kill my dog if it’s not true”
Linde: “Kill me if it’s not true”
Weinberg: “Kill Linde and Rees’s dog if it’s not true”

Not quite sure how one explains this when arguing with people convinced that science is just opinion.
Neil comments:
When a weather forecaster tells me the probability of rain tomorrow is 50%, I translate it as “I don’t know.” With a greater than 50%, I hear “There is more reason to think it will rain than it won’t” and vice versa with less than 50%.
No, this is badly confused.

If you really don't know anything, then you can apply the Principle of indifference to say that both possibilities have a 50% prior. But that is certainly not what the weather forecaster means. He is saying that when historical conditions have matched the current conditions, it has rained 50% of the time. That is very useful, as a typical day will usually have a much lower chance of rain (in most places).
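A minimal sketch of that frequentist reading (Python, with made-up records for illustration):

```python
# Hypothetical records: (conditions matched today's, it rained that day)
history = [
    (True, True), (True, False), (False, False), (True, True),
    (False, True), (True, False), (False, False), (True, True),
]

# The forecast is the rain frequency among historically similar days
similar = [rained for matched, rained in history if matched]
print(sum(similar) / len(similar))  # 0.6 here; "60% chance of rain"
```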

A multiverse probability estimate does not refer to other instances that may or may not have been multiverses. So all the probability can mean is a measure of the speaker's belief. This is not any evidence for the multiverse, so it is like a measure of one's faith in God.

Thursday, June 16, 2016

New black hole collision announced

A reader alerts me to this:
Why does Ligo's reported second detection of gravitational waves and a black hole merger look absolutely nothing like the first detection announced in February?
There are more details at the LIGO Skeptic blog.

I don't know. I am also wondering why they are just now announcing a collision that was supposedly observed 5 months ago, and whether LIGO still has its policy of only 3 people knowing whether or not the result has been faked.

And why do they sit on the data so long? The LIGO folks could tell us whenever they have a coincident event. As it is, they are not telling us the full truth, because they make big announcements while concealing the data that would allow us to make comparisons.