Sunday, June 16, 2019

Bell only proved a null result

Peter Woit has closed comments on Bell locality without responding to a lot of sharp criticism, so I am not sure where he stands.

Marty Tysanner wrote:
While I may understand their rationale, it still dismays me that so many people have thrown locality “under the bus” (so to speak). Locality isn’t like the 19th century ether that was invented to explain certain physical phenomena; it’s at the core of relativity, electromagnetism, classical mechanics, and even QM evolution up to the time of measurement. I can’t think of anywhere else in all of fundamental physics – outside QM entanglement – where locality is in question. Moreover, there are obvious conceptual issues with non-local influences/communication between entangled particles at space-like separation: How do the particles “find” each other within the entire universe, so they can “communicate” (other than tautologically or some other version of “it just happens; don’t ask how”); and why don’t we see nonlocality in other contexts if it’s a truly fundamental aspect of nature? ...

Given the centrality of locality in physics, I think we should “fight to the death” to preserve it in a fundamental theory.
That part of Tysanner's comment is correct. Locality is at the core of nearly all of physics, including quantum field theory (QFT). If it were false, we would expect to see some convincing evidence, and then do some radical rethinking of our theories.

We do see the Bell test experiments, but they only negate the combination of locality and counterfactual definiteness. You can preserve locality by rejecting counterfactual definiteness. That is what mainstream physicists have done for 90 years.
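The point can be made concrete with the CHSH form of Bell's inequality. Here is a minimal sketch (my own illustrative code, not from the post): counterfactual definiteness means each particle carries predetermined answers for both possible settings, and a brute-force check over all such assignments shows the CHSH quantity can never exceed 2, while quantum mechanics predicts 2√2.

```python
import itertools
import math

# Counterfactual definiteness: each run has predetermined outcomes
# a, a' (Alice's two settings) and b, b' (Bob's), each +1 or -1.
# Enumerate all 16 deterministic assignments and find the largest
# possible CHSH value S = ab + ab' + a'b - a'b'.
best = max(a*b + a*bp + ap*b - ap*bp
           for a, ap, b, bp in itertools.product((1, -1), repeat=4))

print(best)              # local hidden-variable bound: 2
print(2 * math.sqrt(2))  # quantum prediction (Tsirelson bound): ~2.828
```

Experiments see values above 2, so at least one assumption (locality or counterfactual definiteness) must go; rejecting the predetermined values is the mainstream choice.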

Tim Maudlin disagrees with Woit and writes:
You do experiments in two (or three: GHZ) labs. You get results of those experiments. Those results display correlations that no local theory (in the precise sense defined by Bell) can predict. Ergo, no local theory can be the correct theory of the actual universe. I.e. actual physics is not local (in Bell’s sense).
Bell defined a local theory as a classical theory of local hidden variables. In later papers, he called them beables, and gave some philosophical arguments for believing in them, but the definition of locality is the same.

Quantum mechanics uses non-commuting observables. By the uncertainty principle, they cannot have simultaneous values. That remains true if you call them beables instead of observables. If you create a model in which they do have simultaneous, but maybe unknown values, then you get predictions that contradict experiment. That is what Bell and his followers have shown.
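The non-commutativity is elementary to verify. A short sketch (my own example, using the standard Pauli spin matrices):

```python
import numpy as np

# Pauli matrices: observables for spin along the x and z axes.
sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

# A nonzero commutator means the two observables have no common
# eigenbasis, hence no simultaneous sharp values.
commutator = sx @ sz - sz @ sx
print(commutator)
```

The commutator is nonzero, so no assignment of simultaneous definite values to both spin components can reproduce the quantum predictions, whatever name you give the values.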

The only conclusion is the null result: No reason to reject quantum mechanics. Any other conclusion is an error.

Saturday, June 15, 2019

Woit dives into Bell non-locality

Peter Woit writes:
what’s all this nonsense about Bell’s theorem and supposed non-locality?

If I go to the Scholarpedia entry for Bell’s theorem, I’m told that:
Bell’s theorem asserts that if certain predictions of quantum theory are correct then our world is non-local.
but I don’t see this at all. As far as I can tell, for all the experiments that come up in discussions of Bell’s theorem, if you do a local measurement you get a local result, and only if you do a non-local measurement can you get a non-local result. Yes, Bell’s theorem tells you that if you try and replace the extremely simple quantum mechanical description of a spin 1/2 degree of freedom by a vastly more complicated and ugly description, it’s going to have to be non-local. But why would you want to do that anyway?
He is right, and he cites a Gell-Mann video in agreement, but most of the comments don't.

In short, Bell proved in 1964 a nice theorem showing the difference between quantum mechanics and classical theories. He showed that assuming local hidden variables leads to predictions that differ from those of quantum mechanics.

In later years, Bell convinced many people that the assumption of hidden variables was not necessary for his theorem. They are wrong, and most respectable physicists, textbooks, and Wikipedia articles say so. Some say that he would have deserved a Nobel Prize if he had been right, but he got no such prize.

Lee Smolin responds:
I have made what I hope is a strong case for taking seriously what we might call the “John Bell point of view” in my recent book Einstein’s Unfinished Revolution, and find myself mostly in agreement with Eric Dennis and Marko. I would very briefly underline a few points:

-This is not a debate resting on confusions of words. Nor is there, to my knowledge, confusion among experts about the direct implications of the experimental results that test the Bell inequalities. The main non-trivial assumption leading to those inequalities is a statement that is usually labeled “Bell-locality”. Roughly this says (given the usual set up) that the “choice of device setting at A cannot influence the same-time output at B”.

Nothing from either quantum mechanics nor classical mechanics is assumed. The experiments test “Bell-locality” and the experimental results are (after careful examination of loop-holes etc.) that the inequality is cleanly and convincingly violated in nature. Therefore “Bell-locality” is false in nature.

-The conclusion that “Bell-locality” is false in nature is an objective fact. It does not depend on what view you may hold on the ultimate correctness, completeness or incompleteness of QM. Bohmians, Copenhagenists, Everettians, etc. all come to the same conclusion.
This is so wrong that I am tempted to cite Lubos Motl's opinion of Smolin.

There is no experiment where the choice of device setting at A influences the same-time output at B. The Bell experiments only show that the choice of device setting at A influences how we go about making predictions at B, but it never affects the output at B.

There is a big difference. One is spooky action-at-a-distance, and one is not.

It is amazing that Smolin could write a book on this subject, and get this simple point wrong.

I added this comment, but it was deleted:
As Wikipedia says: Cornell solid-state physicist David Mermin has described the appraisals of the importance of Bell's theorem in the physics community as ranging from "indifference" to "wild extravagance".

Bell assumed local hidden variables. If you believe that the theory of everything should be based on hidden variables, then Bell's theorem is a big deal. If you believe in QM as a perfectly good non-hidden-variable theory, then Bell's theorem has no importance.

As Peter says, "systems don’t have simultaneous well-defined values for non-commuting observables." That is another way of saying that Bell's assumption of hidden variables is violated.
Motl posted his explanation.
You would need non-locality in a classical theory to fake the quantum results. But that doesn't mean that our Universe is non-local because your classical theory – and any classical theory – is just wrong. In particular, the correct theory – quantum mechanics – says that the wave function is not an actual observable. It means that its collapse isn't an "objective phenomenon" that could cause some non-local influences. Instead, it may always be interpreted as the observer's learning of some new information.
One comment cited Travis Norsen to support Bell. I explained his errors in this 2015 post.

Peter Shor comments:
First: quantum mechanics doesn’t just break classical probability theory (as Bell demonstrated); it breaks classical computational complexity theory and classical information theory as well. This is why there are a number of computer scientists who are convinced that quantum computers can’t possibly work.
Those computer scientists might be right.

The Bell test experiments were done by physicists with a belief that they would disprove quantum mechanics, and overthrow 50 years of conventional wisdom. All they did was to confirm what the textbooks said.

For most of the 20th century, there was no firm opinion that quantum mechanics breaks classical computational complexity theory and classical information theory. That opinion developed in the last 30 years, just as experiments were being done. While experiments have confirmed aspects of quantum mechanics that Bell doubted, the jury is still out on quantum computing.

Wednesday, June 12, 2019

Constructing Bell paradoxes

An economics professor and puzzle author posted this variant of a Bell Theorem paradox:
The Brain Teaser: Today we both chose Tails and both saw green. What colors were on our Heads sides? ...

Possible Resolution I. There are no such coins. You’re right. There are no such coins. But there are subatomic particles that behave exactly like these coins. ...

Possible Resolution II: The coins are very very sneaky and they like to screw around with our minds, so they change their own colors depending on the choices we make, just to fool us. ...

Possible Resolution III: Neither side of either coin has a color until we decide to examine it, so that on a day when I examine my tails side, it makes no sense to ask about the color of the heads side in the first place.

Therefore I am not allowed to pose this brain teaser in the first place. ...

So where does this leave us?

It leaves us with quantum mechanics, which correctly predicts the result of this experiment, among kajillions of others, and which is incompatible with classical intuitions (as the puzzle in the post is intended to illustrate), but is perfectly consistent with different intuitions which take a while to cultivate, but are not impossible to get used to.
I do not think that these Bell puzzles are helpful in understanding quantum mechanics.

Quantum mechanics was created to deal with a fundamental observed fact: Electricity and light are sometimes seen as waves, and sometimes particles.

If you model them as waves, then the discrete energy values are puzzling. If you model them as particles, then the interference patterns are puzzling.

Heisenberg got around this difficulty by saying that you could think of an electron as a particle, but one where you cannot measure the position and momentum at the same time, as you could if it were really a particle.

All these Bell paradoxes are based on pretending that the electron is a particle where conflicting measurements can be made at the same time.

If you think that an electron or photon is a particle, and you want to see something puzzling, just look at the double-slit experiment.
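What makes the double-slit experiment puzzling is that wave amplitudes from the two slits add before the intensity is taken. A minimal far-field sketch (the slit separation, wavelength, and normalization here are illustrative assumptions of mine, not from the post):

```python
import math

def intensity(theta, d=5.0, lam=1.0):
    """Normalized two-slit interference term at screen angle theta,
    for slit separation d and wavelength lam (far-field approximation)."""
    phase = math.pi * d * math.sin(theta) / lam
    return math.cos(phase) ** 2

# Bright fringe straight ahead; dark fringe where d*sin(theta) = lam/2.
print(intensity(0.0))                        # 1.0
print(intensity(math.asin(0.5 * 1.0 / 5.0)))  # ~0.0
```

A stream of particles obeying classical probability would give the sum of two single-slit intensities, with no dark fringes; the observed pattern follows the amplitude rule instead.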

Subatomic particles are not like coins in the above puzzle. Coins have properties like position and momentum that are independent of observation. Subatomic particles have wave-like properties, and the particle-like properties depend on how they are observed. These are simple facts of nature that have been understood since the 1920s.

I often hear of physicists who hope that someone will find a new interpretation of quantum mechanics that will make these wave-particle puzzles go away. Forget it. They are just refusing to accept facts of nature.

Monday, June 10, 2019

Smolin attacks Einstein myth

Lee Smolin writes in a 2015 paper:
There is a myth that Einstein's discovery of general relativity was due to his following beautiful mathematics to discover new insights about nature. I argue that this is an incorrect reading of the history and that what Einstein did was to follow physical insights which arose from asking that the story we tell of how nature works be coherent.
Smolin assumes that Einstein discovered general relativity, but if you study the history of general relativity, you find that most of the key ideas were developed by others.

Poincare and Minkowski developed four-dimensional spacetime; Poincare, the finite propagation speed of gravity and the first relativistic field theory for gravity; Minkowski and Laue, the stress-energy tensor; Ricci, Levi-Civita, and Grossmann, the Ricci tensor; Poincare, a partial explanation for the Mercury anomaly; Hilbert, the Lagrangian approach; and Schwarzschild, the black hole metric.

The big equation of general relativity that was used to solve the Mercury anomaly was just Ricci = 0.
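In standard modern notation (a textbook form, not taken from the post), this is Einstein's equation with the stress-energy tensor set to zero, which reduces to the Ricci-flat condition:

```latex
R_{\mu\nu} - \frac{1}{2} R \, g_{\mu\nu} = 8\pi G \, T_{\mu\nu},
\qquad T_{\mu\nu} = 0 \;\Longrightarrow\; R_{\mu\nu} = 0 .
```

Schwarzschild's metric is the spherically symmetric solution of this Ricci-flat equation, and the Mercury perihelion calculation follows from it.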
But how exactly did Einstein perform the seemingly miraculous feat of inventing a theory that correctly describes phenomena that had not yet even been observed? There is a myth which is usually trotted out to answer this query, which is that Einstein was a lone genius who followed beautiful mathematics to discover his great theory. Genius, inspired by aesthetics. Mathematics as a tool of prophecy.

No one was more responsible for spreading this myth than Einstein himself, who described in several essays and popular talks in the 1920's and later how he followed a trail of mathematical beauty to his discovery of general relativity. As Einstein wrote in his autobiographical notes,

``I have learned something else from the theory of gravitation: no collection of empirical facts, no matter how comprehensive, can ever lead to the formulation of such complicated equations ... [they] can only be found by the discovery of a logically simple mathematical condition that completely, or almost completely, determines the equations. Once one has those sufficiently strong formal conditions one requires only little knowledge of facts to set up a theory[3].

In the last twenty years historians have been doing a careful job of studying what Einstein actually did during the eight years of hard, often frustrating work ... And it was very different from the myth.
Einstein constructed several myths about himself.
In the absence of ideas and insights about nature, Einstein fell back on mathematics as his guide. He constructed a myth about how mathematical beauty had been prophetic for his invention of general relativity and he attempted to use it to justify his forays into unified field theory.

Einstein's search for a unified field theory failed, and the roots of this failure are his embrace of mathematical beauty as a guiding principle. Over the thirty-five years between 1920 and his death in 1955 Einstein attempted many versions of a unified field theory. He tried higher, hidden dimensions, they failed. He tried more general versions of curved geometries beyond the geometry used in general relativity. They all failed to produce a useful unification.
I wrote a book on this subject: How Einstein Ruined Physics.
According to popular accounts of the scientific method, such as Thomas Kuhn's The Structure of Scientific Revolutions[1], theories are invented to describe phenomena which experimentalists have previously discovered. ...

This simple schema does not apply to general relativity. All the characteristic phenomena that general relativity describes were unknown in 1915 when Einstein published his theory. These include the expanding universe, black holes, light bending in gravitational fields, gravitational lenses, time slowing down in gravitational fields, gravitational waves, dark energy. Not only were these phenomena not yet observed in 1915, most of them had not even been thought about. The fact that a century later, all of these are well confirmed is a triumph unmatched by any other theory in the history of science.
It would be amazing if Einstein predicted all of these things out of pure theory in 1915. But Einstein did not even believe in most of them himself.

The expanding universe has little to do with general relativity. Gravity is an attractive force. If the universe were not expanding, it would have collapsed already, under either Newtonian gravity or general relativity.

Black holes and gravitational lenses are consequences of strong gravitational fields, regardless of relativity. So is bending light, if light has mass. Dark energy was not predicted or discovered until about 20 years ago.

Einstein did discover time slowing down in gravitational fields, but as a consequence of special relativity, in 1908.

Poincare discovered a relativistic theory of gravity in 1905 that predicted gravitational waves.

So Einstein's 1915 theory is not responsible for any of these things.

Special relativity was invented to explain experiments, like the Michelson-Morley experiment. General relativity was invented to reconcile special relativity with gravity, and to explain gravitational causality and the Mercury anomaly.

I wrote a series of posts on how relativity might have been invented independently of experiment. But historically, it did not happen that way. All of the worthwhile scientific theories were invented to describe phenomena which experimentalists have previously discovered.

Theoretical physicists today like to ignore experiments and try to find the great unified theory based on pure thought, so they like to promote the myth that such an approach has succeeded in the past.

Friday, June 7, 2019

My blog motto is validated

This blog motto is:
Natura non facit saltus (Latin for "nature does not make jumps") has been an important principle of natural philosophy. ...

The principle expresses the idea that natural things and properties change gradually, rather than suddenly. In a mathematical context, this allows one to assume that the solutions of the governing equations are continuous, and also does not preclude their being differentiable (differentiability implies continuity). Modern day quantum mechanics is sometimes seen as violating the principle, with its idea of a quantum leap.[5] Erwin Schrödinger in his objections to quantum jumps supported the principle, and initially developed his wave mechanics in order to remove these jumps.
Quanta mag reports:
When quantum mechanics was first developed a century ago as a theory for understanding the atomic-scale world, one of its key concepts was so radical, bold and counter-intuitive that it passed into popular language: the “quantum leap.” Purists might object that the common habit of applying this term to a big change misses the point that jumps between two quantum states are typically tiny, which is precisely why they weren’t noticed sooner. But the real point is that they’re sudden. So sudden, in fact, that many of the pioneers of quantum mechanics assumed they were instantaneous.

A new experiment shows that they aren’t. By making a kind of high-speed movie of a quantum leap, the work reveals that the process is as gradual as the melting of a snowman in the sun. “If we can measure a quantum jump fast and efficiently enough,” said Michel Devoret of Yale University, “it is actually a continuous process.”

In a 1952 paper called “Are there quantum jumps?,” Schrödinger answered with a firm “no,” his irritation all too evident in the way he called them “quantum jerks.” ...

The techniques developed by the Yale team reveal the changing mindset of a system during a quantum jump.
The article goes on to discuss randomness, which is really another issue.

The slogan applies to everything from electrons to grand ideas. Things change continuously.

Wednesday, June 5, 2019

Paper says Hawking faked his illness

A Discover mag blog reports:
A paper in a peer-reviewed medical journal makes the suggestion that physicist Stephen Hawking’s disability, which famously confined him to a wheelchair and robbed him of his speech, was psychosomatic in nature.
If you think that is impossible, the author notes that a 2014 biographical movie about Hawking featured an actor who faked Hawking's illness very convincingly. If an actor could do it, then so could Hawking!

This seems a little wacky, but there are a lot of others who believe that Hawking was misdiagnosed.

Tuesday, June 4, 2019

Reality and substantiality of the luminiferous ether

Science Friday has a segment making fun of wrong theories in the history of science, and last Friday the target was the aether. They mock this 1884 Lord Kelvin essay:
This thing we call the luminiferous ether. That is the only substance we are confident of in dynamics. One thing we are sure of, and that is the reality and substantiality of the luminiferous ether. ...

I move through this “luminiferous ether” as if it were nothing. ... What can this luminiferous ether be? It is something that the planets move through with the greatest ease. It permeates our air; it is nearly in the same condition, so far as our means of judging are concerned, in our air and in the inter-planetary space. The air disturbs it but little; you may reduce air by air-pumps to the hundred thousandth of its density, and you make little effect in the transmission of light through it. The luminiferous ether is an elastic solid, for which the nearest analogy I can give you is this jelly which you see, and the nearest analogy to the waves of light is the motion, which you can imagine, of this elastic jelly, with a ball of wood floating in the middle of it. ...

What we know of the luminiferous ether is that it has the rigidity of a solid and gradually yields. Whether or not it is brittle and cracks we cannot yet tell, but I believe the discoveries in electricity and the motions of comets and the marvellous spurts of light from them, tend to show cracks in the luminiferous ether — show a correspondence between the electric flash and the aurora borealis and cracks in the luminiferous ether. Do not take this as an assertion, it is hardly more than a vague scientific dream: but you may regard the existence of the luminiferous ether as a reality of science; that is, we have an all-pervading medium, an elastic solid, with a great degree of rigidity — a rigidity so prodigious in proportion to its density that the vibrations of light in it have the frequencies I have mentioned, with the wave-lengths I have mentioned. The fundamental question as to whether or not luminiferous ether has gravity has not been answered. We have no knowledge that the luminiferous ether is attracted by gravity; it is sometimes called imponderable because some people vainly imagine that it has no weight; I call it matter with the same kind of rigidity that this elastic jelly has.
Okay, some of this stuff about cracks and jelly seems ridiculous today, but most of it is broadly correct. The discovery of dark energy could be said to be an answer to his question about "whether or not luminiferous ether has gravity".

Special relativity taught nothing about whether the aether exists. It only taught that any such aether must be Lorentz invariant. As the program conceded at the end, modern quantum field theory does have the concept of a pervasive medium like the aether, as that is how the Higgs field is explained.

The program explained about the Michelson-Morley experiment, and how it failed to detect aether motion. While it was decisive in the discovery and acceptance of special relativity, Einstein seemed to have only vague third-hand knowledge of it.

Yes, it is a historical fact that the MM experiment was crucial for special relativity, but not for Einstein's thinking. Einstein's famous paper was to give an exposition of Lorentz's theory, but it did not bother to explain Lorentz's MM-based reasoning.

Monday, June 3, 2019

Einstein's biggest mistake

Gary J. Ferland has a new letter asking what was Einstein's biggest mistake:
What, if any, was Einstein's biggest mistake, the one most affecting our physics today? There is a perhaps apocryphal story, recounted by George Gamow, that he counted his cosmological constant as his biggest blunder. We now know his hypothesized cosmological constant to be correct. His lifelong rejection of quantum mechanics, an interesting side-story in the evolution of 20th-century physics, is a candidate. None of these introduced difficulties in how our physics is done today. It can be argued that his biggest actual mistake, one that affects many subfields of physics and chemistry and bewilders students today, occurred in his naming of his A and B coefficients.
Yes, his lifelong rejection of quantum mechanics is a candidate for his biggest mistake. So is his pursuit of unified field theory.

I think his biggest mistake was his failure to accept relativity as a geometric theory. Students today learn relativity as a geometric theory, and they are taught that Einstein discovered it all, but Einstein refused to accept the geometric view in both special and general relativity. I explain this point on this blog, such as here.

Lee Smolin wrote:
Einstein’s search for a unified field theory failed, and the roots of this failure are his embrace of mathematical beauty as a guiding principle. Over the thirty-five years between 1920 and his death in 1955 Einstein attempted many versions of a unified field theory. ...

Einstein already understood by 1922 that the hypothesis that there are extra, hidden dimensions could not give a unification of the forces. ...

As Einstein wrote to his friend Paul Ehrenfest, “It is anomalous to replace the four dimensional continuum by a five dimensional one and then to subsequently tie up artificially one of those five dimensions in order to account for the fact that it does not manifest itself.”[5].

[5] Quoted in Abraham Pais, Subtle is the lord (New York: Oxford Univ. Press, 1982) p.334
Einstein had a similar objection to extending from three-dimensional space to four-dimensional spacetime, with time having a different geometry.

The problem here is that Einstein did not truly recognize the significance of geometry. He had this blind spot all of his life.

Sunday, June 2, 2019

Wormholes are not entangled black holes

Lubos Motl has posted this rant against a prominent complexity theorist:
Let me simplify it. He is extremely skeptical of the ER-EPR correspondence because the wormholes belong to general relativity, entanglement belong to quantum mechanics, these are different theories, so the objects must be different, too! Ingenious. ER=EPR was disproven. End of story. Or is it? ...

In quantum gravity, there may be Einstein-Rosen bridges (non-traversable wormholes) and when they're there, they have the exact same effect on all observers and couples of observers as entangled black hole pairs. When they walk like a duck (an entangled black hole pair), quack like one, and so on, they are a duck. ...

Many laymen were trained to parrot and indefinitely repeat some completely wrong slogans such as "string theory is not even wrong because it's not falsifiable". ...

When I was an undergrad, instructors were generally teaching us that when a famous scientist associates himself with some extremely wrong statements, he loses his name, image, credibility, and often his job, too. I think that the times have changed and the expectations of the institutionalized science have dramatically weakened. ...
I agree with his last point. Today, famous scientists associate themselves with crackpot causes all the time, with no noticeable loss to their status.

String theory does not make any testable predictions. Wormholes do not exist. Entangled black holes do not exist. These nonexistent things are not even related.

By "quantum gravity", he does not mean the perfectly good theories we have today, and which predict every known situation where gravity might be quantized. He means "consistent" quantum gravity, which is some hypothetical theory that no one has ever found and could never be tested, but would explain the center singularity of a black hole and the first nanosecond of the big bang.

Lumo doesn't argue that there is any empirical merit to any of these ideas. Instead, he cites super-smart big-shot physicists who have published many 1000s of papers citing these ideas, and implies that critics are not smart enough to understand them.

Much of theoretical physics has become like comic books in the Marvel universe. Wild fantasies that have no bearing on reality.

Saturday, June 1, 2019

Google is researching Cold Fusion

According to a peer-reviewed paper revealed this week, Google is continuing its experiments into the controversial science of cold fusion -- the theory that nuclear fusion, the process that powers the Sun, can produce energy in a table-top experiment at room temperature. While Google's recent project found no evidence that cold fusion is possible, it did make some advances in measurement and materials-science techniques that the researchers say could benefit energy research. "The team also hopes that its work will inspire others to revisit cold-fusion experiments, even if the phenomenon still fails to materialize," reports Nature.
The story got this comment:
Google tossed $10 million into research.
If it pans out, energy is trillions of dollars each and every year.

Someone at Google probably figured the odds of this working at one in ten thousand. A 1 in 10,000 chance of making your money multiply by a million X is a good bet.

Yeah, their quantum computing research may be a bit similar. Spend a few million to potentially make forty billion. If Google tries 1,000 long-shot projects at $10 million each, and only one succeeds to the extent that it makes twenty billion, they've doubled their money.

It's a bit like spending a few minutes applying for a job making twice as much money. We probably won't get it, but it's worth the five minutes, because if we do that 50 times we might well double our salary (and spend 250 minutes applying for jobs). Google can afford to try out $10 million here, $10 million there, and see what ends up working.
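The commenter's portfolio arithmetic is easy to check (all figures here are the commenter's guesses, not data):

```python
# 1,000 long-shot bets at $10 million each, with a single
# payoff of $20 billion, per the comment's hypothetical numbers.
cost = 1_000 * 10_000_000        # total outlay: $10 billion
payoff = 20_000_000_000          # one success: $20 billion

print(payoff / cost)             # 2.0 -> "doubled their money"
```

So the comment's numbers are internally consistent, whatever one thinks of the odds it assumes.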
Google is also working on electric flying cars, and other long-shots.

The actual Nature paper says:
The 1989 claim of ‘cold fusion’ was publicly heralded as the future of clean energy generation. However, subsequent failures to reproduce the effect heightened scepticism of this claim in the academic community, ... we embarked on a multi-institution programme to re-evaluate cold fusion to a high standard of scientific rigour. Here we describe our efforts, which have yet to yield any evidence of such an effect. Nonetheless, a by-product of our investigations has been to provide new insights ...
Google may soon be spinning its quantum computer research the same way.

The claim of quantum computing was heralded as the future of high-performance computing. Subsequent failures to achieve a quantum advantage have led to skepticism. A rigorous evaluation was needed. While they did not succeed in making a quantum computer, they provided some new insights.

Thursday, May 30, 2019

QBism is similar to Copenhagen

QBism is a variant of the Copenhagen interpretation of quantum mechanics, which has dominated textbooks for 90 years.

The proponents of QBism sometimes argue that quantum mechanics does not need an interpretation. They see QBism as a sort of minimalist interpretation. A new paper, Is Quantum Mechanics Self-Interpreting?, criticizes this claim.

Another new paper on QBism is "B" is for Bohr. It argues that Bohr would have agreed with QBism. The Copenhagen interpretation is supposed to be Bohr's view, so that should be the same also, but Bohr is widely misunderstood. The paper says:
Today, Bohr seems obscure to most physicists. Catherine Chevalley17 has identified three reasons why. The first is that Bohr's views have come to be equated with one variant or another of the Copenhagen interpretation, which only emerged in the mid-1950's, in response to David Bohm's hidden-variables theory and the Marxist critique of Bohr's alleged idealism, which had inspired Bohm.
The leading opponent of Bohr/Copenhagen was David Bohm, a Communist. Einstein also opposed it, and he was also a Communist sympathizer.

Is there some connection between Communist-Marxist political theory, and anti-positivist hidden-variables theory? They are both contrary to what I consider good rational thinking, but is there some more direct reason why an anti-positivist would go Communist, or a Communist would go anti-positivist? Maybe someone can explain it to me.

I am also trying to understand this NY Post story:
City Department of Education brass are targeting a “white-supremacy culture” among school administrators — by disparaging ideas like “individualism,” “objectivity” and “worship of the written word,” The Post has learned.
For details of this anti-White ideology, see this paper.

I always thought science should be objective, but is that a White supremacist thing? Would anti-White political activists seek to undermine objectivity in science? I dunno, but it appears that political ideology drives a lot of bad ideas in science.

Monday, May 27, 2019

Metaphor for bad science

What is the best metaphor for bad science? Usually, when someone wants to mock some idea as bad science, they compare it to flat Earth theory, geocentrism, or aether theory.

The trouble is that none of these comparisons work. The ancient Greeks understood that the Earth is round, and the story that Columbus was needed to disprove the flat Earthers is a myth.

Geocentrism is not really wrong, as relativity teaches that any frame of reference can be used. Aether theory is not really wrong either, as there are several physics concepts that can be legitimately presented as a modern aether.

Ancients said that everything was composed of earth, water, air, and fire. This is a gross oversimplification, but not necessarily an error. Phlogiston, the substance supposedly released in combustion, was likewise just an oversimplification.

The alchemist desire to turn lead into gold was fruitless, but it was not an error to speculate that lead and gold were made of the same ingredients. They are. We can't blame them for not knowing how large the nuclear forces are.

Aristotle is mocked for saying that heavier objects fall faster than lighter objects, but he didn't really say that. Also, denser objects do fall faster if air resistance is significant.

Celestial epicycles are cited as a methodological error, but it is very hard to justify that view.
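One reason that criticism is hard to justify: a deferent-plus-epicycle model is mathematically a sum of uniformly rotating circles, which is a truncated Fourier-type series for the planet's apparent position. A minimal sketch (the notation is mine, not from any source cited here):

```latex
% Apparent position in the ecliptic plane as a sum of rotating circles:
% the k = 0 term is the deferent, the remaining terms are epicycles.
z(t) = \sum_{k=0}^{N} a_k \, e^{i(\omega_k t + \phi_k)}
```

Adding epicycles refines the approximation, much as adding terms refines any series expansion, and approximating observed motion by such series is standard practice today.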

So what example should we be using for bad or erroneous science? Any suggestions?

Friday, May 24, 2019

Doctor Quark dies

From the NY Times obituary of Murray Gell-Mann:
As with strangeness, the Eightfold Way and quarks were independently discovered by other theorists, but the breadth of Dr. Gell-Mann’s accomplishments and his flair for nomenclature ensured that his would be the name most remembered.

His instincts weren’t infallible. At first he dismissed quarks as mathematical abstractions — an accounting device with no real correlate in the physical world. There was good reason for his skepticism: Quarks would have to have electrical charges measured in thirds, something that was never observed.

After quarks were confirmed indirectly in an experiment at the Stanford Linear Accelerator Center, in Menlo Park, Calif., Dr. Gell-Mann denied that he had ever doubted their existence. He went on to help explain how the tiny particles are permanently stuck together, keeping their fractional charges hidden from view.
So Gell-Mann wasn't sure if the quarks were real, but he gets the credit because he gave them a good name!

This isn't as ridiculous as it sounds. Coincidentally, Dr. Bee has an essay:
What do scientists mean when they say that something exists? Every time I give a public lecture, someone will come and inform me that black holes don’t exist, or quarks don’t exist, or time doesn’t exist. Last time someone asked me “Do you really believe that gravitational waves exist?” ...

When we say that these experiments measured “gravitational waves emitted in a black hole merger”, we really mean that specific equations led to correct predictions.

It is a similar story for the Higgs-boson and for quarks. The Higgs-boson and quarks are names that we have given to mathematical structures. In this case the structures are part of what is called the standard model of particle physics. We use this mathematics to make predictions. The predictions agree with measurements. That is what we mean when we say “quarks exist”: We mean that the predictions obtained with the hypothesis agree with observations.
That's right. A quark is just a useful name for part of a mathematical model; it is not clear that it even makes sense to ask whether it is any more real than that.

The obituary does partially explain the hyphen in his name: His father was born Isidore Gellmann in Eastern Europe, and changed his name in New York to Arthur Gell-Mann.

Thursday, May 23, 2019

Why science cannot prove or disprove free will

Leftist-atheist-evolutionist Jerry Coyne says that he was persuaded that he has no free will by this PNAS article:
The beauty of the mind of man has nothing to do with free will or any unique hold that biology has on select laws of physics or chemistry. This beauty lies in the complexity of the chemistry and cell biology of the brain, which enables a select few of us to compose like Mozart and Verdi, and the rest of us to appreciate listening to these compositions. The reality is, not only do we have no more free will than a fly or a bacterium, in actuality we have no more free will than a bowl of sugar. The laws of nature are uniform throughout, and these laws do not accommodate the concept of free will.
I mention Coyne's views here. That article yielded this rebuttal:
In his reply (1) to the letter by Anckarsäter (2) commenting on his original article (3), Anthony Cashmore expresses the view that a belief in free will would require at least a molecular model as a justification. However, such a model cannot exist, as I will explain in the following.

The behavior of an agent possessing free will is by definition unpredictable. In contrast to stochastic phenomena, it is not even possible to predict all observable statistical properties of the behavior of such an agent. A molecular model for free will, or in fact any scientific model for free will, would thus have to contain some property labeled as unpredictable.

However, the scientific method that we apply today, which is based on the formulation of hypotheses that are then tested by observation and experiment, cannot accommodate unpredictability. The statement that "property X is unpredictable" cannot be tested by observation and is thus not a scientific hypothesis. Moreover, even if property X itself is observable, its supposed unpredictability makes it impossible to formulate scientific hypotheses about it. As a consequence, free will cannot be integrated into any scientific model.

The only way in which the scientific method could resolve the question of the existence of free will is by showing its nonexistence. This would require a scientific model that permits a complete prediction of human behavior, or at least of all its observable statistical properties. However, as Anckarsäter pointed out (2), we are very far from having such a model.

Cashmore goes on to claim that in the absence of a good reason to believe in free will, we should believe in its nonexistence. A pragmatically minded person would counter that, in the absence of solid evidence to the contrary, we should trust our perception, which tells us that we do have free will. However, neither point of view can claim science as its justification. For a believer in the scientific method, the only coherent point of view is agnosticism about the existence of free will.
I agree that free will cannot be scientifically proved or disproved.

I also agree that our perception gives us good reason to believe in free will, even if it cannot be proved.

He says: "A molecular model for free will ... would thus have to contain some property labeled as unpredictable." Yes, that is right, and our best molecular models do indeed have properties labeled as unpredictable. All quantum systems do. Whether that unpredictability has anything to do with free will is an open question. The question may never be resolved, for the reasons he gives. Maybe molecules have a tiny bit of free will, and human free will is derived from the collective free will of molecules.

Monday, May 20, 2019

Free will is indispensable for explaining our world

A podcast defends free will:
Unlike those who defend free will by giving up the idea that it requires alternative possibilities to choose from, Christian List retains this idea as central, resisting the tendency to defend free will by watering it down. He concedes that free will and its prerequisites — intentional agency, alternative possibilities, and causal control over our actions—cannot be found among the fundamental physical features of the natural world. But, he argues, that’s not where we should be looking. Free will is a “higher-level” phenomenon found at the level of psychology. It is like other phenomena that emerge from physical processes but are autonomous from them and not best understood in fundamental physical terms — like an ecosystem or the economy. When we discover it in its proper context, acknowledging that free will is real is not just scientifically respectable; it is indispensable for explaining our world.
List is correct about those last couple of points. Free will is not best understood in fundamental physical terms; acknowledging that it is real is not just scientifically respectable; and it is indispensable for explaining our world.

Even if you don't believe in free will, you still need to explain why some non-respectable concept is indispensable for explaining our world.

Jerry Coyne calls this incoherent:
In other words, some form of true libertarian free will arises mysteriously between the molecules that make up our brain and the behaviors that emanate from that brain.

Sadly, I cannot find anywhere in List’s spiel where he says how this emergent free will arises, or how it manages to defy the laws of physics. He uses weak analogies, like saying that while the “weather” arises from motions of molecules in the atmosphere, meteorologists use models that abstract from the microphysical to the macrophysical and are indeterministic, giving probabilities of weather events. But that’s a bogus analogy, for the macro-“weather” is certainly consistent with, and arises from, lower-level phenomena. True, “rain” is an emergent property, but it is absolutely consistent with the laws of physics. List’s free will isn’t.
How is it that weather is any more consistent with the laws of physics than free will? There is no theory or experiment to substantiate this claim.

There is a long list of things that we cannot reduce to the motions of molecules. Weather is one, though many believe it could be so reduced, given sufficient information and computing resources. Consciousness is another, and no one knows how a computer would even start on that problem.

Coyne is right that no one can say how this emergent free will arises. I am not sure we can truly say how turbulence or phase transitions arise.

He is wrong to say free will (if it exists) must manage to defy the laws of physics. What laws are those? This is like people who say that the flying bumblebee defies the laws of physics.

Coyne elaborates on his opposition to libertarian free will, which he defines:
Libertarian free will means a form of free will independent of physical causality: a kind of free will that says, at any given moment when you face making a decision, you are not constrained to make a single decision. You could have done otherwise. ...
At a higher level of description, your decisions can be truly open. When you walk into a store and choose between Android and Apple, the outcome is not preordained. It really is on you.
If the outcome is not preordained, then it is a form of libertarian free will and not determinism. Period.
I agree with this, except that I am not sure what he means by "independent of physical causality". I believe in free will, I believe that I can choose between Android and Apple, and I also believe in physical causality. When I make that choice, it is a physical process, and my decision physically causes consequences.

Physical causality means that events are affected only by what is inside the backward light cone. I include my mental processes as contributing to causality.
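To make the light-cone condition explicit (the notation is mine): an event at time $t$ and position $\mathbf{x}$ can be causally influenced only by events $(t', \mathbf{x}')$ satisfying

```latex
% Backward light cone of the event (t, x):
% earlier in time, and close enough that a signal at speed c could arrive.
t' \le t
\quad\text{and}\quad
|\mathbf{x} - \mathbf{x}'| \le c\,(t - t')
```

Nothing in this condition rules out mental processes as causes; it only constrains where in spacetime the causes can sit.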

List argues:
One can have long debates about whether current AI systems are sufficiently advanced, but there is no conceptual reason why sophisticated AI systems could not qualify as bearers of free will. Much like corporate agents, which we also think should be held responsible for their actions, AI systems ideally should display a certain form of moral agency and not just rigid goal-seeking behavior in the interest of profit or whatever else their objective function might be. As we employ more and more AI systems in high-stakes settings, we would like those systems to make ethically acceptable decisions.
I may differ with him here. Those AI systems are nearly always deterministic. They may have ethical decision-making programmed in, but that is not free will.

There are those who think that the human mind is just a programmed Turing machine. There are others who believe that the mind is a window into the soul, separate from the body, and not constrained by physical law.

I think that there could be a middle ground, where the mind just functions according to material properties of the brain, but it is not a pre-programmed Turing machine either. It has consciousness, and the ability to make libertarian free choices.

Update: Coyne has a third rant against List, with a link to a technical 2014 List paper on the subject.