Monday, September 30, 2019

Classical and quantum theories are similarly indeterministic

Nearly everyone accepts the proposition that classical mechanics is deterministic, while quantum mechanics is probabilistic. For example, a recent Quanta mag essay starts:
In A Philosophical Essay on Probabilities, published in 1814, Pierre-Simon Laplace introduced a notorious hypothetical creature: a “vast intelligence” that knew the complete physical state of the present universe. For such an entity, dubbed “Laplace’s demon” by subsequent commentators, there would be no mystery about what had happened in the past or what would happen at any time in the future. According to the clockwork universe described by Isaac Newton, the past and future are exactly determined by the present. ...

A century later, quantum mechanics changed everything.
I believe that this view is mistaken.

I don't just mean that some classical theories use probability, like statistical mechanics. Or that quantum mechanics sometimes predicts a sure result.

I mean that determinism is not a genuine difference between classical and quantum mechanics.

A couple of recent papers by Flavio Del Santo and Nicolas Gisin make this point.

One says:
Classical physics is generally regarded as deterministic, as opposed to quantum mechanics that is considered the first theory to have introduced genuine indeterminism into physics. We challenge this view by arguing that the alleged determinism of classical physics relies on the tacit, metaphysical assumption that there exists an actual value of every physical quantity, with its infinite predetermined digits (which we name "principle of infinite precision"). Building on recent information-theoretic arguments showing that the principle of infinite precision (which translates into the attribution of a physical meaning to mathematical real numbers) leads to unphysical consequences, we consider possible alternative indeterministic interpretations of classical physics. We also link those to well-known interpretations of quantum mechanics. In particular, we propose a model of classical indeterminism based on "finite information quantities" (FIQs). Moreover, we discuss the perspectives that an indeterministic physics could open (such as strong emergence), as well as some potential problematic issues. Finally, we make evident that any indeterministic interpretation of physics would have to deal with the problem of explaining how the indeterminate values become determinate, a problem known in the context of quantum mechanics as (part of) the ``quantum measurement problem''. We discuss some similarities between the classical and the quantum measurement problems, and propose ideas for possible solutions (e.g., ``collapse models'' and ``top-down causation'').
Another:
Do scientific theories limit human knowledge? In other words, are there physical variables hidden by essence forever? We argue for negative answers and illustrate our point on chaotic classical dynamical systems. We emphasize parallels with quantum theory and conclude that the common real numbers are, de facto, the hidden variables of classical physics. Consequently, real numbers should not be considered as "physically real" and classical mechanics, like quantum physics, is indeterministic.
The point here is that any deterministic theory involving real numbers becomes indeterministic if you use finitary measurements and representations of those reals. In practice, all those theories are indeterministic.
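To make this concrete, here is a toy sketch of my own (not from the papers): a chaotic map in which two states that agree to the measured precision quickly diverge, so the digits beyond any finite measurement end up acting as the hidden variables.

```python
# Toy illustration: in a chaotic system, the digits beyond any finite
# measurement precision end up deciding the coarse-grained future, so a
# finitely-specified state does not determine later observations.
def logistic(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)      # fully chaotic logistic map
    return x

x_true = 0.123456789012345           # the "actual" state, with many more digits
x_meas = round(x_true, 6)            # what a finite-precision measurement records

for n in (10, 20, 30, 40):
    print(n, abs(logistic(x_true, n) - logistic(x_meas, n)))
# The gap grows roughly like 2^n times the initial error, so after a few dozen
# iterations the two trajectories are completely uncorrelated, even though they
# agreed to six decimal places at the start.
```

Whichever way you describe it, the unmeasured tail of digits is doing the same job that hidden variables do in quantum mechanics.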

Also, any indeterministic theory can be made deterministic by including the future observables in the present state. Quantum mechanical states are usually unknowable, and people accept that, so one could just as well fold the future (perhaps unknowable) observables into the present state.

Thus whether a physical theory is deterministic is just an artifact of how the theory is presented. It has no more meaning than that.

Tuesday, September 24, 2019

Did Google achieve Quantum Supremacy?

I have readers turning to my blog to see if I have shut it down out of humiliation at being proven wrong.

I refer to papers announcing that Google has achieved quantum supremacy. You can find links to the two papers in the comments on Scott Aaronson's blog.

I am not conceding defeat yet. First, Google has withdrawn the papers, and refuses to say whether it has achieved a breakthru or not. Second, outside experts like Aaronson have apparently been briefed on the work, but refuse to comment on it. And those who do comment are not positive:
However, the significance of Google’s announcement was disputed by at least one competitor. Speaking to the FT, IBM’s head of research Dario Gil said that Google’s claim to have achieved quantum supremacy is “just plain wrong.” Gil said that Google’s system is a specialized piece of hardware designed to solve a single problem, and falls short of being a general-purpose computer, unlike IBM’s own work.
Gil Kalai says that the Google and IBM results are impressive, but he still believes that quantum supremacy is impossible.

So it may not be what it appears to be.

Aaronson had been sworn to secrecy, and now considers the Google work a vindication of his ideas. He stops short of saying that it proves quantum supremacy, but he implies that the quantum supremacy skeptics have been checkmated.

Probably Google is eager to make a big splash about this, but it is getting the paper published in Science or Nature, and those journals do not like to be scooped. The secrecy also helps suppress criticism, because the critics usually don't know enuf about the work when the reporters call.

The paper claims quantum supremacy on the basis of doing a computation that would have been prohibitive on a classical supercomputer.

That sounds great, but since the computation was not replicated, how do we know that it was done correctly?

Wikipedia says:
A universal quantum simulator is a quantum computer proposed by Yuri Manin in 1980[4] and Richard Feynman in 1982.[5] Feynman showed that a classical Turing machine would experience an exponential slowdown when simulating quantum phenomena, while his hypothetical universal quantum simulator would not. David Deutsch in 1985, took the ideas further and described a universal quantum computer.
So we have known since 1982 that simulating a quantum experiment on a classical computer can take exponential time.
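As a back-of-the-envelope reminder of why, here is a sketch under the usual brute-force assumption that the simulator stores the full state vector (cleverer tensor-network methods do better on some circuits, but still blow up):

```python
# Brute-force statevector simulation stores 2^n complex amplitudes, so memory
# (and the time to apply each gate) grows exponentially with the qubit count n.
BYTES_PER_AMPLITUDE = 16              # one double-precision complex amplitude

for n in (20, 30, 40, 50, 53):
    nbytes = (2 ** n) * BYTES_PER_AMPLITUDE
    print(f"{n} qubits: 2^{n} amplitudes, {nbytes / 2**30:,.3f} GiB")
# 20 qubits needs ~16 MiB; 53 qubits (the size reportedly on Google's chip)
# needs on the order of a hundred petabytes just to hold the state once.
```

That is just the Feynman observation expressed in numbers.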

At first glance, it appears that Google has only verified that. It did some silly quantum experiment, and then showed that the obvious classical simulation of it would take exponential time.

Is that all Google has done? I haven't read the paper yet, so I don't know. It is hard to believe that Google would claim quantum supremacy if that is all it is. And Google has not officially claimed it yet.

The paper says:
The benchmark task we demonstrate has an immediate application in generating certifiable random numbers [9];
Really? Is that all? It would be more impressive if they actually computed something.
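For context, sampling experiments of this kind are scored with a cross-entropy statistic comparing the observed bitstrings to the ideal output probabilities, and the randomness-certification idea piggybacks on that score. A toy sketch of the linear version (my own illustration; the made-up probability table stands in for a classical simulation of the circuit):

```python
import random

# Linear cross-entropy fidelity: F = 2^n * mean(P_ideal(observed bitstring)) - 1.
# Samples drawn from the ideal distribution of a random circuit score F near 1;
# a uniform random-number generator scores F near 0.
def linear_xeb(samples, ideal_prob, n_qubits):
    mean_p = sum(ideal_prob(x) for x in samples) / len(samples)
    return (2 ** n_qubits) * mean_p - 1

# Made-up "ideal" distribution over 4 qubits, standing in for a real simulation.
n = 4
weights = [random.random() for _ in range(2 ** n)]
probs = [w / sum(weights) for w in weights]

uniform_samples = [random.randrange(2 ** n) for _ in range(100_000)]
print(linear_xeb(uniform_samples, probs.__getitem__, n))   # close to 0
```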

Monday, September 23, 2019

Debunking Libet's free will experiment

The anti-free-will folks often cite a famous experiment by Libet. It doesn't really disprove free will, but it seemed to show that decisions had an unconscious element.

Now I learn that the experiment has been debunked anyway. The Atlantic mag reports:
Twenty years later, the American physiologist Benjamin Libet used the Bereitschaftspotential to make the case not only that the brain shows signs of a decision before a person acts, but that, incredibly, the brain’s wheels start turning before the person even consciously intends to do something. Suddenly, people’s choices—even a basic finger tap—appeared to be determined by something outside of their own perceived volition. ...

This would not imply, as Libet had thought, that people’s brains “decide” to move their fingers before they know it. Hardly. Rather, it would mean that the noisy activity in people’s brains sometimes happens to tip the scale if there’s nothing else to base a choice on, saving us from endless indecision when faced with an arbitrary task. The Bereitschaftspotential would be the rising part of the brain fluctuations that tend to coincide with the decisions. This is a highly specific situation, not a general case for all, or even many, choices. ...

When Schurger first proposed the neural-noise explanation, in 2012, the paper didn’t get much outside attention, but it did create a buzz in neuroscience. Schurger received awards for overturning a long-standing idea.
This does not resolve the issue of free will, but it does destroy one of the arguments against free will.

It also throws into doubt the idea that we subconsciously make decisions.

Saturday, September 21, 2019

On the verge of quantum supremacy again

July news:
Google expected to achieve quantum supremacy in 2019: Here’s what that means

Google's reportedly on the verge of demonstrating a quantum computer capable of feats no ordinary classical computer could perform. The term for this is quantum supremacy, and experts believe the Mountain View company could be mere months from achieving it. This may be the biggest scientific breakthrough for humanity since we figured out how to harness the power of fire. ...

Experts predict the advent of quantum supremacy – useful quantum computers – will herald revolutionary advances in nearly every scientific field. We’re talking breakthroughs in chemistry, astrophysics, medicine, security, communications and more. It may sound like a lot of hype, but these are the grounded predictions. Others think quantum computers will help scientists unlock some of the greater mysteries of the cosmos such as how the universe came to be and whether life exists outside of our own planet.
It seems as if I post these stories every year. Okay, here we go again.

I am betting Google will fail again. Check back on Dec. 31, 2019.

If Google delivers as promised, I will admit to being wrong. Otherwise, another year of phony promises will have passed.

Maybe already. The Financial Times is reporting:
Google claims to have reached quantum supremacy
The article is behind a paywall, so that's all I know. If true, you can be sure Google will be bragging in a major way. (Update: Read the FT article here.)

Update: LuMo tentatively believes it:
Google's quantum computing chip Bristlecone – that was introduced in March 2018 – has arguably done a calculation that took 3 minutes but it would take 10,000 years on the IBM's Summit, the top classical supercomputer as of today. I know nothing about the details of this calculation. I don't even know what amount of quantum error correction, if any, is used or has to be used for these first demonstrations of quantum supremacy.

If you have a qualified guess, let us know – because while I have taught quantum computing (in one or two of the lectures of QM) at Harvard, I don't really have practical experience with the implementation of the paradigm.

If true, and I tend to think it's true even though the claim is remarkable, we are entering the quantum computing epoch.
I look forward to the details being published. Commenter MD Cory suggests that I have been tricked.

Friday, September 20, 2019

Physicists confusing religion and science

Sabine Hossenfelder writes in a Nautilus essay:
And finally, if you are really asking whether our universe has been programmed by a superior intelligence, that’s just a badly concealed form of religion. Since this hypothesis is untestable inside the supposed simulation, it’s not scientific. This is not to say it is in conflict with science. You can believe it, if you want to. But believing in an omnipotent Programmer is not science—it’s tech-bro monotheism. And without that Programmer, the simulation hypothesis is just a modern-day version of the 18th century clockwork universe, a sign of our limited imagination more than anything else.

It’s a similar story with all those copies of yourself in parallel worlds. You can believe that they exist, all right. This belief is not in conflict with science and it is surely an entertaining speculation. But there is no way you can ever test whether your copies exist, therefore their existence is not a scientific hypothesis.

Most worryingly, this confusion of religion and science does not come from science journalists; it comes directly from the practitioners in my field. Many of my colleagues have become careless in separating belief from fact. They speak of existence without stopping to ask what it means for something to exist in the first place. They confuse postulates with conclusions and mathematics with reality. They don’t know what it means to explain something in scientific terms, and they no longer shy away from hypotheses that are untestable even in principle.
She is right, but with this attitude, she is not going to get tenure anywhere good.

Deepak Chopra wrote a letter to the NY Times in response to Sean M. Carroll's op-ed. He mixes quantum mechanics and consciousness in a way that drives physicists nuts. They regard him as a mystic crackpot whose ideas should be classified as religion. But he is not really as bad as Carroll. It would be easier to test Chopra's ideas than Carroll's many-worlds nonsense.

Carroll is an example of a physicist confusing religion and science.

Wednesday, September 18, 2019

The politics of quantum mechanics

Lubos Motl writes:
You know, for years, many people who were discussing this blog were asking: What do axioms of quantum mechanics have to do with Motl's being right-wing? And the answer was "virtually nothing", of course. Those things really were assumed to be uncorrelated and it was largely the case and it surely should be the case. But it is no longer the case. The whole political machinery of raw power – at least one side of it – is now being abused to push physics in a certain direction.
Maybe Motl is on to something.

Sean M. Carroll has written a preposterous book advocating the many-worlds version of quantum mechanics. It is being widely promoted in the left-wing news media, while right-wing sources either ignore or trash it. Is that a coincidence?

There is something about left-wingers that makes them want to believe in parallel universes. Carroll also says:
If the universe is infinitely big, and it looks the same everywhere, that guarantees that infinite copies of something exactly like you exist out there. Does that bother me? No.
I think that this is a left-wing fantasy. Do right-wingers fantasize about such unobservable egalitarianism? I doubt it.

Sunday, September 8, 2019

Carroll promotes his dopey new quantum book

Physicist Sean M. Carroll has a NY Times op-ed today promoting his stupid new book.
“I think I can safely say that nobody really understands quantum mechanics,” observed the physicist and Nobel laureate Richard Feynman. ...

What’s surprising is that physicists seem to be O.K. with not understanding the most important theory they have.
No, that is ridiculous.

I assume that Feynman meant that it is hard to relate quantum objects to classical objects with a more intuitive understanding. Physicists grappled with the theory in the 1920s, and by 1935 everyone had a good understanding of it.
The reality is exactly backward. Few modern physics departments have researchers working to understand the foundations of quantum theory.
That is because the foundations were well-understood 90 years ago.
In the 1950s the physicist David Bohm, egged on by Einstein, proposed an ingenious way of augmenting traditional quantum theory in order to solve the measurement problem. ... Around the same time, a graduate student named Hugh Everett invented the “many-worlds” theory, another attempt to solve the measurement problem, only to be ridiculed by Bohr’s defenders.
They deserved to be ridiculed, but their theories did nothing towards solving the measurement problem, are philosophically absurd, and have no empirical support.
The current generation of philosophers of physics takes quantum mechanics very seriously, and they have done crucially important work in bringing conceptual clarity to the field.
Who? I do not think that there is any living philosopher who has shed any light on the subject.
It’s hard to make progress when the data just keep confirming the theories we have, rather than pointing toward new ones.

The problem is that, despite the success of our current theories at fitting the data, they can’t be the final answer, because they are internally inconsistent.
This must sound crazy to an outsider. Physicists have perfectly good theories that explain all the data well, and yet Carroll writes books on why the theories are no good.

The theories are just fine. Carroll's philosophical prejudices are what is wrong.

Carroll does not say what would discredit him -- he is a big believer in the many-worlds theory. If he wrote an op-ed explaining exactly what he believes about quantum mechanics, everyone would deduce that he is a crackpot.

Philosopher Tim Maudlin also has a popular new essay on quantum mechanics. He is not so disturbed by the measurement problem, or indeterminism, or Schroedinger's cat, but he is tripped up by causality:
What Bell showed is that if A and B are governed by local physics — no spooky-action-at-a-distance — then certain sorts of correlations between the behaviours of the systems cannot be predicted or explained by any local physics.
This is only true if "local physics" means a classical theory of local hidden variables. Bell did show that quantum mechanics can be distinguished from those classical theories, but there is still no action-at-a-distance.
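To be concrete about what is actually being compared, here is the standard textbook CHSH calculation for the singlet state, sketched in a few lines; nothing in it requires a signal to pass between the two sides.

```python
import math

# Quantum prediction for the spin singlet: correlation E(a, b) = -cos(a - b)
# between measurements along directions a and b on the two sides.
def E(a, b):
    return -math.cos(a - b)

# CHSH combination: any local hidden-variable model must satisfy |S| <= 2.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * math.sqrt(2))   # both ~2.828: the quantum value exceeds 2
```

The quantum correlations exceed the bound Bell derived for local hidden variables, but exceeding that bound is not the same thing as sending a signal from A to B.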

Update: Lumo trashes Carroll's article. Woit traces an extremely misleading claim about a physics journal's editorial policy. The journal just said that it is a physics journal, and articles have to have some physics in them. Philosophy articles could be published elsewhere.

Saturday, September 7, 2019

Deriving free will from singularities

William Tomos Edwards writes in Quillette to defend free will:
[Biologist Jerry] Coyne dismisses the relevance of quantum phenomena here. While it’s true that there is no conclusive evidence for non-trivial quantum effects in the brain, it is an area of ongoing research with promising avenues, and the observer effect heavily implies a connection. Coyne correctly points out that the fundamental randomness at the quantum level does not grant libertarian free will. Libertarian free will implies that humans produce output from a process that is neither random nor deterministic. What process could fit the bill?
No, they are both wrong. Libertarian free will certainly does imply that humans produce output that is not predictable by others, and hence random. That is the definition of randomness.

Quantum randomness is not some other kind of randomness. There is only one kind of randomness.

Then Edwards goes off the rails:
Well, if the human decision-making process recruits one or more irremovable singularities, and achieves fundamentally unpredictable output from those, I would consider that a sufficient approximation to libertarian free will. Furthermore, a singularity could be a good approximation to an “agent.” Singularities do occur in nature, at the center of every black hole, and quite possibly at the beginning of the universe, and quantum phenomena leave plenty of room open for them. ...

The concept of a singularity becomes important once again here because if you can access some kind of instantaneous infinity and your options are fundamentally, non-trivially infinite, then it would seem you have escaped compatibilism and achieved a more profound freedom.
Now he is just trolling us. There are no singularities or infinities in nature. You can think of the center of a black hole that way, but it is not observable, so no one will ever know. There certainly aren't any black holes in your brain.

Coyne replies here, but quantum mechanics is outside his expertise.

Thursday, September 5, 2019

Universal grammar and other pseudosciences

Everyone agrees that astrology is pseudoscience, but this new paper takes on some respected academic subjects:
After considering a set of demarcation criteria, four pseudosciences are examined: psychoanalysis, speculative evolutionary psychology, universal grammar, and string theory. It is concluded that these theoretical frameworks do not meet the requirements to be considered genuinely scientific. ...

To discriminate between two different types of activities some kind of criteria or standards are necessary. It is argued that the following four demarcation criteria are suitable to distinguish science from pseudoscience:
1. Testability. ...
2. Evidence. ...
3. Reproducibility. ...
4. The 50-year criterion.
By these criteria, string theory fails to be science. The paper mentions that a couple of philosophers try to defend string theory, but only by inventing some new category. I guess they don't want to call it "pseudoscience" if respectable professors promote it.

Respectable professors also have a long history of supporting Freudian psychoanalysis.

This claim about universal grammar struck me:
Chomsky (1975, p. 4) argues that children learn language easily since they do it without formal instruction or conscious awareness.
Not only is Chomsky well-respected for these opinions, but Steve Pinker and many others have said similar things.

This puzzles me. I taught my kids to talk, and I would not describe it as easy. I had to give them formal instruction, and they seemed to be consciously aware of it.

The process takes about a year, and it is a long series of incremental steps: teaching the child to understand simple commands, such as "stop"; articulating sounds like "hi"; responding to sounds, like saying "hi" in response to "hi"; learning simple nouns, like saying "ball" while pointing to a ball; developing a vocabulary of 20 nouns or so; learning simple verbs like "go"; putting together subject-verb; putting together subject-verb-object; and so on.

All of these steps are difficult for a two-year-old, and require a great deal of individual instruction and practice.

Sure, two-year-olds might learn a lot by observing, but you could say the same about other skills. Some are taught to dress themselves, while others learn by mimicking others. No one would say that children learn to dress themselves without instruction.
Steven Pinker was the first to popularize the hypothesis that language is an instinct. In his influential book The Language Instinct, Pinker asserts that “people know how to talk in more or less the sense that spiders know how to spin webs” (Pinker 1995, p. 18). Pinker’s analogy is striking, since it is obviously incorrect. A spider will spin webs even if it remains isolated since birth. On the other hand, a child who has been isolated since birth will not learn language. In other words, while web-spinning does not require previous experience and it is innate, language does require experience and it is learned.
Chomsky and Pinker are two of our most respected intellectuals today.

Googling indicates that Chomsky had one daughter and no grandkids. Pinker has no kids. I am not sure that is relevant, as many others have similarly claimed that children learn language naturally.

Monday, September 2, 2019

Psychology is in crisis

I often criticize the science of Physics here, but some other sciences are doing much worse. Such as:
Psychology is declared to be in crisis. The reliability of thousands of studies have been called into question by failures to replicate their results. ...

The replication crisis, if nothing else, has shown that productivity is not intrinsically valuable. Much of what psychology has produced has been shown, empirically, to be a waste of time, effort, and money. As Gibson put it: our gains are puny, our science ill-founded.
This is pretty harsh, but it doesn't even mention how many leaders in the field have turned out to be frauds, or how some sub-fields are extremely politicized, or how much damage their psychotherapies do to people.