Tuesday, February 20, 2018

Interpreting QM to doubt quantum computing

A comment said:
Scott is being a good sport here and telling the truth about quantum computers. They can't work without MWI! RIGHT!
In a later posting, Aaronson says:
Which interpretation of QM you espouse (e.g., MWI, Copenhagen, or Bohm) has no effect—none, zero—on what you should predict about the scalability of quantum computation, because by explicit design, all interpretations make exactly the same predictions for any experiment you can do on any system external to yourself.
This is contrary to the opinion of others like David Deutsch, who say that the many-worlds interpretation is what justifies quantum computing.

So Aaronson would presumably deny that he jumped on the many-worlds (MWI) bandwagon in order to justify quantum computing.

This comment gives an example of a very famous and respected theoretical physicist not believing in quantum computing:
As another approach, there’s a somewhat weird book by Gerard ‘t Hooft that you probably know about (warning, 3MB download):

https://link.springer.com/content/pdf/10.1007%2F978-3-319-41285-6.pdf

It explicitly (p. 87) says it’s incompatible with large-scale QC and that if such QC happens then the book’s proposed theory is falsified. So at least it says something concrete :).
't Hooft was one of the main masterminds behind the Standard Model of particle physics, but he also has some funny ideas about super-determinism. So I think he is probably right about large-scale QC being impossible, but I am not endorsing his reasoning.

It is funny to warn about a 3MB download, as that is the average size of a web page today.

Monday, February 19, 2018

Trump and quantum computing

Scott Aaronson gripes about an article gushing about the coming quantum computers:
Would you agree that the Washington Post has been a leader in investigative journalism exposing Trump’s malfeasance? Do you, like me, consider them one of the most important venues on earth for people to be able to trust right now? How does it happen that the Washington Post publishes a quantum computing piece filled with errors that would embarrass a high-school student doing a term project (and we won’t even count the reference to Stephen “Hawkings” — that’s a freebie)?
No, the coverage of President Trump has been much more biased and misleading.

The author commented on Scott's blog, giving reputable sources for all the wild quantum computing claims. Scott has quit talking to the press. What do you expect from journalists, if all the experts are talking nonsense?

It could be worse, and NewScientist reported:
Quantum computer could have predicted Trump’s surprise election
Predicting the outcome of a general election is a challenge. But combining quantum computing with neural network technology could improve forecasts, according to a new study that used just such a network to model the 2016 US presidential elections.
The article is paywalled, so I don't know how bad it is.

The press promoted string theory, when all the big-shot professors said that it was the secret to the fundamental workings of the universe. Now they have moved on to quantum computing, and other topics.

A comment refers to this:
Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the Newspaper to an article on some subject you know well. In Murray’s case (Murray Gell-Mann), physics. In mine, show business. You read the article and see the Journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
Yes, and complaining about "fake news" inevitably leads to discussions of bogus stories in the Wash. Post, NY Times, and CNN.

Saturday, February 17, 2018

No real conflict between quantum and relativity

In a PBS TV Nova physics blog, a Harvard post-doc writes:
Sometimes the biggest puzzle in physics seems like the worst relationship in the universe. Quantum mechanics and general relativity are the two best theories in physics, but they have never been able to get along.

Quantum mechanics successfully describes the world of the very small, where nothing is predictable and objects don’t have precise positions until they are observed.

General relativity does well with describing massive objects. It says that the world behaves in a precise, predictable way, whether or not it’s observed.

Neither one has ever failed an experimental test. But so far no experiment has been able to show which — if either — of the two theories will hold up in the places where the two converge, such as the beginning of the universe and the center of a black hole.
That's right, no experiment can re-create the beginning of the Big Bang, or the center of a black hole. And that is why there is no conflict between quantum mechanics and relativity.

Thursday, February 15, 2018

How Kepler answered Tycho's arguments

Modern philosophers and historians like to make fun of early astronomers who believed in geocentrism (Sun going around the Earth), as if that were the epitome of unscientific thinking.

Here is a new paper:
In his 1606 De Stella Nova, Johannes Kepler attempted to answer Tycho Brahe's argument that the Copernican heliocentric hypothesis required all the fixed stars to dwarf the Sun, something Brahe found to be a great drawback of that hypothesis. This paper includes a translation into English of Chapter 16 of De Stella Nova, in which Kepler discusses this argument, along with brief outlines of both Tycho's argument and Kepler's answer (which references snakes, mites, men, and divine power, among other things). ...

Answers such as these to Brahe’s star size objection to Copernicus would endure. In 1651 Giovanni Battista Riccioli in his Almagestum Novum analyzed one hundred and twenty-six pro- and anti-Copernican arguments, concluding that the vast majority in either direction were indecisive. As he saw it, there were two decisive arguments, both in favor of the anti-Copernicans: one was the absence of any detectable Coriolis Effect (as it would be called today); the other was Brahe’s star size objection.
Here is the star size objection. Under a Copernican heliocentric model, the stars must be very far away, much farther than any known distances. Furthermore, the optics of the day made the stars appear to have a noticeable diameter that would make them unimaginably huge if they were really so far away.
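The arithmetic behind the objection is worth sketching. The 1-arcminute figures below are illustrative round numbers for the observational limits of Tycho's instruments, not exact historical values:

```python
import math

ARCMIN = math.radians(1 / 60)    # one arcminute in radians
SUN_DIAMETER_AU = 0.0093         # Sun's diameter in astronomical units

# Assumed limits of pre-telescopic astronomy (illustrative round numbers):
parallax_limit = ARCMIN          # no annual parallax seen down to ~1 arcmin
apparent_diameter = ARCMIN       # bright stars seemed ~1 arcmin across

# Heliocentrism plus no visible parallax forces a minimum distance (in AU):
min_distance_au = 1 / math.tan(parallax_limit)            # ~3438 AU

# At that distance, the apparent disk implies a huge physical diameter:
star_diameter_au = min_distance_au * math.tan(apparent_diameter)  # ~1 AU

print(f"minimum distance: {min_distance_au:.0f} AU")
print(f"implied star diameter: {star_diameter_au:.2f} AU, "
      f"about {star_diameter_au / SUN_DIAMETER_AU:.0f} Suns across")
```

Under these assumptions a Copernican star would have to be roughly the size of the Earth's entire orbit, which is the absurdity Brahe was pointing at.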

The apparent diameters were later understood to be spurious artifacts of diffraction. The stars are not really so large.

Kepler's arguments are interesting, but do not resolve the matter. Nothing truly resolves the matter, because choosing a frame of reference is not a scientific question. You can say that a Sun-centered frame is closer to being inertial, or that the large size of the Sun compared to the Earth makes it more reasonable to think of the Earth as moving, but that's about all.

Monday, February 12, 2018

Physicist Lawrence Krauss on a podcast

The Sam Harris podcast #115 interviews a physicist:
In this episode of the Waking Up podcast, Sam Harris speaks with Lawrence Krauss and Matt Dillahunty about the threat of nuclear war, science and a universal conception of morality, the role of intuition in science, the primacy of consciousness, the nature of time, free will, the self, meditation, and other topics. This conversation was recorded at New York City Center on January 13, 2018.
You can also get it on YouTube.

It is amusing to hear these guys ramble on about their beliefs about consciousness, free will, etc., while all the time claiming to be such super-rational scientists that the word "believe" does not even apply to them.

I heard a story (from Ben Shapiro) that Sam Harris was in another public discussion on free will, and a questioner from the audience said, "You convince me that we have no free will, but I have a 5-year-old son. What should I tell him?" Harris was flustered, and then said to lie to the kid.

There are probably some things that he thinks that his audience is too stupid to understand. What else is he lying about?

Harris referred to how he spent years meditating and taking hallucinogenic drugs, after dropping out of college. Krauss was noticeably skeptical that he learned so much from taking drugs, but Harris made an analogy to Krauss studying mathematics. That is, just as Harris's LSD hallucinations might seem like nonsense to others, Krauss looking at a page of mathematical symbols also looks like nonsense to most people.

No, it is a stupid analogy. Advanced mathematics is demonstrably useful for all sorts of purposes. Nobody ever accomplished anything while on LSD. This sort of reasoning gives legitimacy to the nuttiest ideas, and it is surprising to get it from someone who made most of his reputation by badmouthing religion.

One quibble I have with Krauss is that at about 1:00:00, he says:
[questioning that] classical reason and logic should guide our notions.

The point is that classical reason and logic, when it comes to the world, are often wrong because our notions of classical reason and logic are based on our experience
Someone else says he is wrong, because the logic of Venn diagrams is not based on experience. Krauss sticks to his point, and says Venn diagrams can be wrong because an electron can be in two places at once.

I think that the problem here is that physicists, like Krauss, have a flawed view of mathematics. To the mathematician like myself, classical reason and logic are never wrong.

Electrons are not really in two places at once. But even if they were, they would not achieve some sort of logical impossibility. Nothing ever achieves a logical impossibility. It might be that you have a theory that is ambiguous about the electron location, or that there are 2 electrons, or something else. In my view, the electron is a wave that is not localized to a point, and separate places could have a possibility of observing it. Your exact view depends on your interpretation, but you are still going to use classical mathematics in any case.

I am still trying to get over Aaronson believing in many-worlds. It is hard for me to see how these smart professors can believe in such silly things.

Philosopher of Physics Tim Maudlin defends his favorite interpretations on Aaronson's site, and also has a new paper on The Labyrinth of Quantum Logic.

Quantum logic is a clever way to try to explain puzzles like the double slit, where light goes thru the double slit like a wave, but attempts to understand the experiment in terms of an individual photon going thru one slit or the other are confusing. Quantum logic declares that it can be true that "the photon goes thru slit 1 or the photon goes thru slit 2", but the rules of logic need to be changed so that this does not imply that either "the photon goes thru slit 1" or "the photon goes thru slit 2" is true. In math jargon, they deny the distributive law of classical logic.
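The lattice-theoretic point can be sketched numerically. In quantum logic, propositions are subspaces of a state space, "and" is intersection, and "or" is the span of the union. A minimal sketch in a toy 2-dimensional space (the slit and state vectors here are illustrative, not a real model of the double slit):

```python
import numpy as np

def proj(*vecs):
    """Orthogonal projector onto the span of the given vectors."""
    m = np.column_stack(vecs)
    q, _ = np.linalg.qr(m)
    q = q[:, : np.linalg.matrix_rank(m)]
    return q @ q.conj().T

def meet_dim(p, q):
    """Dimension of the intersection of the subspaces behind projectors p, q:
    a vector lies in both subspaces iff it is fixed by both projectors."""
    n = p.shape[0]
    return n - np.linalg.matrix_rank(np.vstack([np.eye(n) - p, np.eye(n) - q]))

# Toy propositions as rays in a 2-dim space:
s1  = np.array([1.0, 0.0])                   # "photon went thru slit 1"
s2  = np.array([0.0, 1.0])                   # "photon went thru slit 2"
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)    # the superposed beam

p1, p2, ppsi = proj(s1), proj(s2), proj(psi)
p_either = proj(s1, s2)   # "slit 1 OR slit 2" spans the whole space

# Left side: psi AND (slit1 OR slit2) is the full psi ray (dimension 1)...
assert meet_dim(ppsi, p_either) == 1
# ...but each term on the right side is the zero subspace (dimension 0),
assert meet_dim(ppsi, p1) == 0
assert meet_dim(ppsi, p2) == 0
# so (psi AND slit1) OR (psi AND slit2) is zero: distributivity fails.
```

The disjunction is "true" (it spans everything) even though neither disjunct holds of the state, which is the move quantum logic makes about the slits.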

As Maudlin explains, there has been some historical interest in quantum logic, but it has never proved useful, or even made much sense. No physical experiment can possibly affect our laws of mathematics. You can tell yourself that quantum logic explains the double slit experiment, but that's all you can do. It doesn't lead to anything else.

Quantum probability is another topic where people try to over-interpret quantum mechanics to try to tell us something about mathematics. Some say that quantum mechanics discovered that probabilities can be negative, or some such nonsense. Again, you can choose to think about things that way, but it has no bearing on mathematical probability theory, and probabilities are never negative.
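For instance, the "negative probabilities" talk usually refers to the Wigner quasi-probability distribution, which can dip below zero even though every actual measurement probability computed from it is an ordinary number between 0 and 1. A small sketch using the standard closed form for Fock states, W_n(x,p) = ((-1)^n/π) e^{-(x²+p²)} L_n(2(x²+p²)), in units with ħ = 1:

```python
import math

def laguerre(n, z):
    """Laguerre polynomial L_n(z) via the standard three-term recurrence."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - z
    for k in range(2, n + 1):
        prev, cur = cur, ((2 * k - 1 - z) * cur - (k - 1) * prev) / k
    return cur

def wigner_fock(n, x, p):
    """Wigner quasi-probability of the n-photon Fock state at phase-space
    point (x, p), in units with hbar = 1."""
    s = x * x + p * p
    return ((-1) ** n / math.pi) * math.exp(-s) * laguerre(n, 2.0 * s)

# The vacuum state is an ordinary positive Gaussian...
print(wigner_fock(0, 0.0, 0.0))   # 1/pi, about 0.318
# ...but the one-photon state is negative at the origin:
print(wigner_fock(1, 0.0, 0.0))   # -1/pi, about -0.318
```

The negativity is a bookkeeping feature of one particular phase-space representation, not a probability anyone could ever measure.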

The Harris and Krauss show was repeated in Chicago, with this summary:
I asked Sam at dinner if he was going to talk about free will, but he said that they’d covered that topic in a previous event, which was archived on his podcast. Nevertheless, one guy asked the speakers how, given the absence of free will, they could advise him how to cure his addiction to alcohol. That was a good question, because Sam and Lawrence are hard determinists (Matt is a compatibilist but still a determinist.) Answering that question without getting balled up in an infinite regress is quite difficult. If, for instance, you tell someone that they can choose to put themselves in a milieu where there is no alcohol and also surround themselves with supportive people (yes, that’s how it could be done), you risk making people think that you can make such a choice freely, instantiating dualism. I suppose a good answer is that one’s brain is a computer that weighs various inputs before giving the output (a decision), and that the advice Sam gave — which could of course influence the actions of the addict — was also adaptive, in the sense that he was giving strategies that his brain calculated had a higher probability of being useful. Further, we all try to be helpful to cement relationships and get a good reputation—that’s part of the evolved and learned program of our brains. But of course Sam had no “free” choice about his advice, and this shows the difficulty of discussing free will with those who haven’t thought about it. ...

The final remark came from Lawrence, who said that every time he stays in a hotel, his own gesture to diminish faith was to take the Gideon Bible, wrap it in a piece of paper, and throw it in the trash. Sam remarked dryly, “And that’s why atheists have such a good public image.”
Alcoholics Anonymous teaches that alcoholics have little or no free will, and that they are doomed to remain alcoholics no matter what they do. They are advised to accept what they do not have the free will to change.

It is weird to worry about instantiating dualism.

Krauss is one of the better and more level-headed physicists in the public eye, but he thinks that he is helping people by telling them that they have no free will, and trashing their Bibles. He is good when he sticks to the physics.

Wednesday, February 7, 2018

Essays on What Is Fundamental?

I submitted an essay to the annual FQXI essay contest.
What Is “Fundamental”
October 28, 2017 to January 22, 2018

Interesting physical systems can be described in a variety of languages. A cell, for example, might be understood in terms of quantum or classical mechanics, of computation or information processing, of biochemistry, of evolution and genetics, or of behavior and function. We often consider some of these descriptions “more fundamental” than other more “emergent” ones, and many physicists pride themselves on pursuing the most fundamental sets of rules. But what exactly does it mean?

Are “more fundamental” constituents physically smaller? Not always: if inflation is correct, quanta of the inflaton field are as large as the observable universe.

Are “less fundamental” things made out of “more fundamental” ones? Perhaps – but while a cell is indeed "made of" atoms, it is perhaps more so “made of" structural and genetic information that is part of a long historical and evolutionary process. Is that process more fundamental than the cell?

Does a “more fundamental” description uniquely specify a “less fundamental” one? Not in many cases: consider string theory, with its landscape of 10^500 or more low-energy limits. And the same laws of statistical mechanics can apply to many types of statistically described constituents.

Is “more fundamental” more economical or elegant in terms of concepts or entities? Only sometimes: a computational description of a circuit may be much more elegant than a wavefunction one. And there are hints that even gravity, a paragon of elegance, may be revealed as a statistical description of something else.
I couldn't get too excited about this essay, because "fundamental" is a subjective and ill-defined term. Nevertheless, I did post about five times in recent years on this subject, primarily in response to philosophers posting ridiculous opinions about what is fundamental in physics.

On the FQXi site, you can comment on my essay, and rate it on a scale from 1 to 10.

Monday, February 5, 2018

Aaronson gets on the Many-Worlds bus

Quantum computer complexity theorist Scott Aaronson has come out of the closet as a Many-Worlds supporter:
I will say this, however, in favor of Many-Worlds: it’s clearly and unequivocally the best interpretation of QM, as long as we leave ourselves out of the picture! I.e., as long as we say that the goal of physics is to give the simplest, cleanest possible mathematical description of the world that somewhere contains something that seems to correspond to observation, and we’re willing to shunt as much metaphysical weirdness as needed to those who worry themselves about details like “wait, so are we postulating the physical existence of a continuum of slightly different variants of me, or just an astronomically large finite number?”
This surprises me. He is the one who always insists on describing quantum mechanics in terms of probabilities. But in MWI, there is no Born rule, and there are no probabilities. All possible outcomes occur in different universes, and there is no known way to say that some universes are more probable than others.

Tim Maudlin has criticisms in the comments. He prefers the de Broglie Bohm interpretation, aka pilot wave theory, aka nonlocal hidden variable theory, aka (as he prefers) something with Bell's nonlocal beables.

Scott tries to answer why so many physicists have signed onto something so ridiculous:
Pascal #67:
There was a time when MWI was considered completely outlandish, but now it seems to be taken much more seriously.
What do you think caused this change in perspective?
Interesting question! Here are the first seven answers that spring to mind for me:

1. The founding generation of QM, the generation that had been directly influenced by Bohr and Heisenberg, died off. A new generation of physics students, less under their influence, decided that MWI made more sense to them. (You may want to read Max Tegmark’s personal account of his “conversion” to MWI, as a grad student in Berkeley, in his book. I suspect hundreds of similar stories played out.)

2. The quantum cosmologists mostly signed on to MWI, because Copenhagen didn’t seem to them to provide a sensible framework for the questions they were now asking. (Did the quantum fluctuations in the early universe acquire definite properties only when we, billions of years later, decided to measure the imprint of those properties on the CMB?)

3. David Deutsch, the most famous contemporary MWI proponent, was inspired by MWI to invent quantum computing; he later famously asked, “to anyone who still denies MWI, how does Shor’s algorithm work? if not parallel universes, then where was the number factored?” Anyone who understands Shor’s algorithm can give sophisticated answers to that question (mine are here). But in any case, what’s true is that quantum computing forced everyone’s attention onto the exponentiality of the wavefunction—something that was of course known since the 1920s, but (to my mind) shockingly underemphasized compared to other aspects of QM.

4. The development of decoherence theory, which fleshed out a lot of what had been implicit in Everett’s original treatment, and forced people to think more about under what conditions a measurement could be reversed.

5. The computer revolution. No, I’m serious. If you imagine it’s a computer making the measurement rather than a human observer, it somehow seems more natural to think about the computer’s memory becoming entangled with the system, but that then leads you in MWI-like directions (“but what if WE’RE THE COMPUTERS?”). Indeed, Everett explicitly took that tack in his 1957 paper. Also, if you approach physics from the standpoint of “how would I most easily simulate the whole universe on a computer?,” MWI is going to seem much more sensible to you than Copenhagen. It’s probably no coincidence that, after leaving physics, Everett spent the rest of his life doing CS and operations research stuff for the US defense department (mostly simulating nuclear wars, actually).

6. The rise of New Atheism, Richard Dawkins, Daniel Dennett, eliminativism about consciousness, and a subculture of self-confident Internet rationalists. Again, I’m serious. Once you’ve trained yourself to wield Occam’s Razor as combatively as people did for those other disputes, I think you’re like 90% of the way to MWI. (Again it’s probably no coincidence that, from what I know, Everett himself would’ve been perfectly at home in the worldview of the modern New Atheists and Internet rationalists.)

7. The leadership, in particular, of Eliezer Yudkowsky in modern online rationalism. Yudkowsky, more so even than Deutsch (if that’s possible), thinks it’s outlandish and insane to believe anything other than MWI — that all the sophisticated arguments against MWI have no more merit than the sophisticated arguments of 400 years ago against heliocentrism. (E.g., “If we could be moving at enormous speed, despite feeling exactly like we’re standing still, then radical skepticism would be justified about absolutely anything!”) Eliezer, and others like him, created a new phenomenon, of people needing to defensively justify why they weren’t Many-Worlders.
Wow. I think that Scott is mostly correct about these reasons, but it is surprising to see an MWI-advocate admit to them.

It is bizarre that radical rationalist atheist skeptics have somehow bullied mainstream physicists into believing in parallel unobservable universes. How does that happen? Yes, I know Bohr is dead, but can't anyone else fill in for him?

400 years ago, heliocentrism had the same predictions as the alternatives, with the heliocentrists starting to find some technical advantages. MWI does not make any quantitative predictions, and does not have any technical computational advantages. Some argue that MWI has a philosophical advantage in that it eliminates the observer, but that hasn't really given us any new physics.

MWI is still completely outlandish. It is amazing how many otherwise intelligent men have been sucked in by it.

Update: There is some technical discussion on Aaronson's blog about whether Born's rule can co-exist with Bohm's mechanics and with MWI. A couple of comments do a good job of explaining why failing to predict probabilities really is a fatal blow to MWI.

This issue goes right to the heart of what science is all about. Copenhagen quantum mechanics does a great job of predicting experiments, and became very widely accepted. Then comes MWI, which doesn't predict anything in our universe, but predicts all sorts of wild fantasies in unobservable parallel universes. So many of the smart professors jump ship, and endorse MWI? Weird.

Update: Lubos Motl piles on Aaronson, as usual.

Motl credits Aaronson with being about 80% correct, especially when he (Aaronson) thinks for himself. Aaronson correctly explains what's wrong with the pilot wave and transactional interpretations. The big remaining issue is Copenhagen versus MWI.

That big issue goes right to the heart of what science is all about, and Motl explains it well. He creates an analogy of MWI to creationism (as Copenhagen to biological evolution). These creationism analogies get tiresome, but he makes a good point. MWI adds a huge belief structure way beyond what there can ever be evidence.

Update: In a later posting, Aaronson says:
Which interpretation of QM you espouse (e.g., MWI, Copenhagen, or Bohm) has no effect—none, zero—on what you should predict about the scalability of quantum computation, because by explicit design, all interpretations make exactly the same predictions for any experiment you can do on any system external to yourself.
This is contrary to the opinion of others like David Deutsch, who say that the many-worlds interpretation is what justifies quantum computing.

Wednesday, January 31, 2018

Horgan v Deutsch on consciousness

I posted on consciousness, without noticing a couple of other recent opinions.

SciAm's John Horgan writes:
Is science infinite? Can it keep giving us profound insights into the world forever? Or is it already bumping into limits, as I argued in The End of Science? In his 2011 book The Beginning of Infinity physicist David Deutsch made the case for boundlessness. When I asked him about consciousness in a recent Q&A he replied: “I think nothing worth understanding will always remain a mystery. And consciousness (qualia, creativity, free will etc.) seems eminently worth understanding.”

At a meeting I just attended in Switzerland, “The Enigma of Human Consciousness,” another eminent British physicist, Martin Rees, challenged Deutsch’s optimism. At the meeting scientists, philosophers and journalists (including me) chatted about animal consciousness, machine consciousness, psychedelics, Buddhism, meditation and other mind-body puzzles.

Rees, speaking via Skype from Cambridge, reiterated points he made last month in “Is There a Limit to Scientific Understanding?” In that essay Rees calls Beginning of Infinity “provocative and excellent” but disputes Deutsch’s central claim that science is boundless. Science “will hit the buffers at some point,” Rees warns. He continues: “There are two reasons why this might happen. The optimistic one is that we clean up and codify certain areas (such as atomic physics) to the point that there’s no more to say. A second, more worrying possibility is that we’ll reach the limits of what our brains can grasp. There might be concepts, crucial to a full understanding of physical reality, that we aren’t aware of, any more than a monkey comprehends Darwinism or meteorology… Efforts to understand very complex systems, such as our own brains, might well be the first to hit such limits. Perhaps complex aggregates of atoms, whether brains or electronic machines, can never know all there is to know about themselves.”

Rees’s view resembles mine. In The End of Science I asserted that scientists are running into cognitive and physical limits and will never solve the deepest mysteries of nature, notably why there is something rather than nothing. I predicted that if we create super-intelligent machines, they too will be baffled by the enigma of their own existence.
It seems possible to me that we will never understand consciousness any better than we do today.

I have a lot of confidence in the power of science, but that is mainly for questions that have scientific formulations. These questions about consciousness do not necessarily have any answer.

Monday, January 29, 2018

Electrons may be conscious

From a Quartz essay:
Consciousness permeates reality. Rather than being just a unique feature of human subjective experience, it’s the foundation of the universe, present in every particle and all physical matter.

This sounds like easily-dismissible bunkum, but as traditional attempts to explain consciousness continue to fail, the “panpsychist” view is increasingly being taken seriously by credible philosophers, neuroscientists, and physicists, including figures such as neuroscientist Christof Koch and physicist Roger Penrose.

“Why should we think common sense is a good guide to what the universe is like?” says Philip Goff, a philosophy professor at Central European University in Budapest, Hungary. “Einstein tells us weird things about the nature of time that counters common sense; quantum mechanics runs counter to common sense. Our intuitive reaction isn’t necessarily a good guide to the nature of reality.”
I am not sure if this is nutty or not. We do not have a scientific definition of consciousness, so there is no way to test the ideas in this essay.

Nevertheless, there appears to be such a thing as consciousness, even if we cannot give a good definition of it.

Assuming you are a materialist, and not a dualist, the human brain is the sum of its constituent parts. Do those parts have a little bit of consciousness, or does consciousness only emerge after a certain cognitive capacity is reached? Both seem possible to me.

If consciousness is emergent, then we can expect AI computers to be conscious some day. Or maybe those computers will never be conscious until they are made of partially conscious parts.

There is an argument that decoherence times in a living brain environment are sufficiently fast that quantum mechanics cannot possibly play any part in consciousness. I do not accept that. The argument shows that you do not have Schroedinger cats in your head, or at least not for very long, but quantum mechanics could still have a vital role in decision making. We don't understand the brain well enough to say.

It may also turn out that consciousness will never be defined precisely enough for these questions to make sense.

Wednesday, January 24, 2018

Intellectuals are afraid of free will

It is funny to see scientists expressing a quasi-religious belief in determinism, and rejecting free will.

The leftist-atheist-evolutionist Jerry Coyne writes:
One thing that’s struck me while interacting with various Scholars of Repute is how uncomfortable many get when they have to discuss free will. ...

No, I’m talking about other prominent thinkers, and I’ll use Richard Dawkins as an example. When I told him in Washington D.C. that, in our onstage conversation, that I would ask him about free will, he became visibly uncomfortable. ...

Why this avoidance of determinism? I’ve thought about it a lot, and the only conclusion I can arrive at is this: espousing the notion of determinism, and emphasizing its consequences, makes people uncomfortable, and they take that out on the determinist. For instance, suppose someone said — discussing the recent case of David Allen Turpin and Louise Anna Turpin, who held their 13 children captive under horrendous circumstances in their California home (chaining them to beds, starving them, etc.) — ”Yes, the Turpins did a bad thing, but they had no choice. They were simply acting on the behavioral imperatives dictated by their genes and environment, and they couldn’t have done otherwise.”

If you said that, most people would think you a monster—a person without morals who was intent on excusing their behavior. But that statement about the Turpins is true! ...

But grasping determinism, as I, Sam [Harris], and people like Robert Sapolsky believe, would lead to recommending a complete overhaul of our justice system. ...

I assume that most readers here accept determinism of human behavior, with the possible exception of truly indeterminate quantum-mechanical phenomena that may affect our behavior but still don’t give us agency. What I want to know is why many intellectuals avoid discussing determinism, which I see as one of the most important issues of our time.
A comment disagreed, saying that quantum uncertainty in the brain implies that Turpin could have done otherwise. Coyne first accused the commenter of denying the laws of physics, and then qualified his post to say that he "could not CONSCIOUSLY have done otherwise. ... Randomness does not give us any “freedom”, ...".

There are several problems here. First, the laws of physics are not all deterministic. So one can say that Turpin had some free choice without denying any laws of physics.

Second, it is very difficult to say what is conscious behavior, and what is not, as we have no good scientific definition of consciousness. Does a dog make a conscious decision to chew on a bone? Can a computer possibly make a conscious decision? There is no consensus on how to answer questions like these.

Third, Coyne's put-down of randomness is nonsense. If you do have freedom to make arbitrary choices, then such choices look exactly like randomness. If you complain that my decisions are unpredictable, then you are complaining about my freedom to make decisions. There is no observable difference.

A week before, Coyne attacked E.O. Wilson:
Note in the second paragraph that Wilson cites “chance” in support of free will. If by “chance” he means “things that are determined but we can’t predict”, then that’s no support for the classic notion of free will: the “you could have chosen otherwise” sort. If he’s referring instead to pure quantum indeterminacy, well, that just confers unpredictability on our decisions, not agency. We don’t choose to make an electron jump in our brain.

From what I make of the third paragraph, his message is that because we are a long way from figuring out how we make behavioral decisions, we might as well act as if we have free will, especially because “confidence in free will is biologically adaptive.”
Wilson did not even express an opinion on free will, but merely expressed skepticism about understanding the brain.

Coyne also comments:
I’ve explained it many times; if you don’t understand the difference between somebody committing a good or bad act that was predetermined, and somebody freely choosing to perform a good or bad act for which they are praised or damned for supposedly making a good or bad choice, I can’t help you. They are different and the former isn’t empty.
But Coyne believes that the latter is empty, because no one can really choose anything.

My guess is that Dawkins is a believer in free will, but doesn't like to talk about it because he doesn't know how to square it with his widely-professed atheist beliefs. That could be true about other intellectuals as well.

While it may be baffling that some intellectuals are afraid to endorse determinism, I think that it is even more baffling that Coyne and Sam Harris are so eager to try to convince people that no one has any choice about their thought processes.

I agree with this criticism of Sam Harris:
If there is no free will, why write books or try to convince anyone of anything? People will believe whatever they believe. They have no choice! Your position on free will is, therefore, self-refuting. The fact that you are trying to convince people of the truth of your argument proves that you think they have the very freedom that you deny them.
And yet many intellectuals deny free will, including physicists from Einstein to Max Tegmark.

Thursday, January 18, 2018

S. M. Carroll goes beyond falsifiability

Peter Woit writes:
Sean Carroll has a new paper out defending the Multiverse and attacking the naive Popperazi, entitled Beyond Falsifiability: Normal Science in a Multiverse. He also has a Beyond Falsifiability blog post here.

Much of the problem with the paper and blog post is that Carroll is arguing against a straw man, while ignoring the serious arguments about the problems with multiverse research.
Here is Carroll's argument that the multiverse is better than the Freudian-Marxist-crap that Popper was criticizing:
Popper was offering an alternative to the intuitive idea that we garner support for ideas by verifying or confirming them. In particular, he was concerned that theories such as the psychoanalysis of Freud and Adler, or Marxist historical analysis, made no definite predictions; no matter what evidence was obtained from patients or from history, one could come up with a story within the appropriate theory that seemed to fit all of the evidence. Falsifiability was meant as a corrective to the claims of such theories to scientific status.

On the face of it, the case of the multiverse seems quite different than the theories Popper was directly concerned with. There is no doubt that any particular multiverse scenario makes very definite claims about what is true. Such claims could conceivably be falsified, if we allow ourselves to count as "conceivable" observations made outside our light cone. (We can't actually make such observations in practice, but we can conceive of them.) So whatever one's stance toward the multiverse, its potential problems are of a different sort than those raised (in Popper's view) by psychoanalysis or Marxist history.

More broadly, falsifiability doesn't actually work as a solution to the demarcation problem, for reasons that have been discussed at great length by philosophers of science.
Got that? Just redefine "conceivable" to include observations that could never be done!

While Woit rejects string and multiverse theory, he is not sure about quantum computers:
I am no expert on quantum computing, but I do have quite a bit of experience with recognizing hype, and the Friedman piece appears to be well-loaded with it.
I'll give a hint here -- scientists don't need all the crazy hype if they have real results to brag about.

Monday, January 15, 2018

Gender fairness, rather than gender bias

I have quoted SciAm's John Horgan a few times, as he has some contrarian views about science and he is willing to express skepticism about big science fads. But he also has some conventional leftist blinders.

A couple of women posted a rebuttal to him on SciAm:
They found that the biggest barrier for women in STEM jobs was not sexism but their desire to form families. Overall, Ceci and Williams found that STEM careers were characterised by “gender fairness, rather than gender bias.” And, they stated, women across the sciences were more likely to receive hiring offers than men, their grants and articles were accepted at the same rate, they were cited at the same rate, and they were tenured and promoted at the same rate.

A year later, Ceci and Williams published the results of five national hiring experiments in which they sent hypothetical female and male applicants to STEM faculty members. They found that men and women faculty members from all four fields preferred female applicants 2:1 over identically qualified males.
This seems accurate to me. It is hard to find any women in academia with stories about how they have been mistreated.

Nevertheless, men get into trouble if they just say that there are personality differences between men and women. If you are a typical leftist man, you are expected to complain about sexism and the patriarchy, and defer to women on the subject.

Thursday, January 11, 2018

Intel claims 49-qubit computer

Here is news from the big Consumer Electronics Show:
Intel announced it has built a 49-qubit processor, suggesting it is on par with the quantum computing efforts at IBM and Google.

The announcement of the chip, code-named “Tangle Lake,” came during a pre-show keynote address by Intel CEO Brian Krzanich at this year’s Consumer Electronics Show (CES) in Las Vegas. “This 49-qubit chip pushes beyond our ability to simulate and is a step toward quantum supremacy, a point at which quantum computers far and away surpass the world’s best supercomputers,” said Krzanich. The chief exec went on to say that he expects quantum computing will have a profound impact in areas like material science and pharmaceuticals, among others. ...

In November 2017, IBM did announce it had constructed a 50-qubit prototype in the lab, while Google’s prediction of delivering a 49-qubit processor before the end of last year apparently did not pan out. As we’ve noted before, the mere presence of lots of qubits says little about the quality of the device. Attributes like coherence times and fault tolerance are at least as critical as size when it comes to quantum fiddling.

Details like that have not been made public for Tangle Lake, which Intel has characterized as a “test chip.” Nevertheless, Intel’s ability to advance its technology so quickly seems to indicate the company will be able to compete with quantum computers being developed by Google, IBM, and a handful of quantum computing startups that have entered the space.
Until recently, the physics professors were saying that we needed 50 qubits to get quantum supremacy. Now these companies are claiming 49 qubits or barely 50 qubits, but they are not claiming quantum supremacy.
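The 50-qubit figure is roughly where brute-force classical simulation gives out: a state of n qubits takes 2^n complex amplitudes to store, so memory doubles with every added qubit. A back-of-envelope sketch (my own illustration, not any company's benchmark):

```python
# Memory needed to hold the full state vector of an n-qubit
# machine on a classical computer: 2**n complex amplitudes,
# 16 bytes each (complex128).

def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 49, 50):
    gib = state_vector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
```

At 49 qubits that is about 8 million GiB (8 PiB), beyond any existing supercomputer's memory, which is why roughly 50 qubits became the nominal supremacy line. The same doubling is why a jump to a million qubits is a different proposition entirely.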

They don't really have 49 qubits. They are just saying that because it is the strongest claim they can make without someone calling their bluff and demanding proof of quantum supremacy.
“In the quest to deliver a commercially viable quantum computing system, it’s anyone’s game,” said Mike Mayberry, corporate vice president and managing director of Intel Labs. “We expect it will be five to seven years before the industry gets to tackling engineering-scale problems, and it will likely require 1 million or more qubits to achieve commercial relevance.”
A million qubits? Each one has to be put in a Schrodinger cat state where it is 0 and 1 at the same time, pending an observation, and all million qubits have to be simultaneously entangled with each other.

This cannot happen in 5-7 years. This will never achieve commercial relevance.

Monday, January 8, 2018

The confidence interval fallacy

Statisticians have a concept called the p-value that is crucial to most papers in science and medicine, but is widely misunderstood. I just learned of another similarly-misunderstood concept.

Statisticians also have the confidence interval. But it does not mean what you think.

The Higgs boson has mass 125.09±0.21 GeV. You might see a statement that a 95% confidence interval for the mass is [124.88,125.30], and figure that physicists are 95% sure that the mass is within that interval. Or that 95% of the observations were within that interval.

Nope. The actual definition is more roundabout. It does not directly give you confidence that the mass is within the interval.

Statistician A. Gelman recently admitted getting this wrong in his textbook, and you can learn more at The Fallacy of Placing Confidence in Confidence Intervals.

Some commenters at Gelman's blog say that the term was misnamed, and maybe should have been called "best guess interval" or something like that.
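The correct reading is frequentist: the 95% describes the long-run behavior of the interval-constructing procedure over repeated experiments, not any single computed interval. A simulation sketch makes the point (the numbers are illustrative; I am only borrowing the Higgs central value as a stand-in "true" mean):

```python
# What "95% confidence" means: across many repetitions of the
# experiment, about 95% of the intervals the procedure produces
# cover the true value. Any single interval either covers it or
# not; no 95% probability attaches to it after the fact.
import random

random.seed(0)
TRUE_MEAN = 125.09   # stand-in "true" value (illustrative only)
SIGMA, N, TRIALS = 1.0, 100, 10_000
Z = 1.96             # normal quantile for 95%, sigma known

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    mean = sum(sample) / N
    half = Z * SIGMA / N ** 0.5
    covered += mean - half <= TRUE_MEAN <= mean + half

print(f"coverage: {covered / TRIALS:.3f}")  # typically close to 0.95
```

The "confidence" is a property of the procedure, which is why a name like "best guess interval" would mislead fewer people.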

Saturday, January 6, 2018

Science perpetuating unequal social orders

A reader sends this 2017 paper on The careless use of language in quantum information:
An imperative aspect of modern science is that scientific institutions act for the benefit of a common scientific enterprise, rather than for the personal gain of individuals within them. This implies that science should not perpetuate existing or historical unequal social orders. Some scientific terminology, though, gives a very different impression. I will give two examples of terminology invented recently for the field of quantum information which use language associated with subordination, slavery, and racial segregation: 'ancilla qubit' and 'quantum supremacy'.
I first heard of this sort of objection in connection with Master/slave (technology):
Master/slave is a model of communication where one device or process has unidirectional control over one or more other devices. In some systems a master is selected from a group of eligible devices, with the other devices acting in the role of slaves.[1][2][3] ...

Appropriateness of terminology

In 2003, the County of Los Angeles in California asked that manufacturers, suppliers and contractors stop using "master" and "slave" terminology on products; the county made this request "based on the cultural diversity and sensitivity of Los Angeles County".[5][6] Following outcries about the request, the County of Los Angeles issued a statement saying that the decision was "nothing more than a request".[5] Due to the controversy,[citation needed] Global Language Monitor selected the term "master/slave" as the most politically incorrect word of 2004.[7]

In September 2016, MediaWiki deprecated instances of the term "slave" in favor of "replica".[8][9]

In December 2017, the Internet Systems Consortium, maintainers of BIND, decided to allow the words primary and secondary as a substitute for the well-known master/slave terminology. [10]
I am not even sure that people associate "white supremacy" with South Africa anymore. It appears to be becoming one of those meaningless name-calling epithets, like "nazi". E.g., if you oppose illegal immigration, you might be called a white supremacist.

Until everyone settled on "quantum supremacy", I used other terms on this blog, such as super-Turing. That is, the big goal is to make a computer that can do computations with a complexity that exceeds the capability of a Turing machine.

Meanwhile, the inventor of the quantum supremacy term has cooked up a new term for the coming Google-IBM overhyped results:
Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away --- we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing. ...

We shouldn’t expect NISQ to change the world by itself; instead it should be regarded as a step toward more powerful quantum technologies we’ll develop in the future. I do think that quantum computers will have transformative effects on society eventually, but these may still be decades away. We’re just not sure how long it’s going to take.
Will Google and IBM be happy claiming NISQ and admitting that quantum supremacy and transformative effects are decades away? I doubt it, but if they cannot achieve quantum supremacy, they will surely want to claim something.
A few years ago I spoke enthusiastically about quantum supremacy as an impending milestone for human civilization [20]. I suggested this term as a way to characterize computational tasks performable by quantum devices, where one could argue persuasively that no existing (or easily foreseeable) classical device could perform the same task, disregarding whether the task is useful in any other respect. I was trying to emphasize that now is a very privileged time in the coarse-grained history of technology on our planet, and I don’t regret doing so. ...

I’ve already emphasized repeatedly that it will probably be a long time before we have fault-tolerant quantum computers solving hard problems.
He sounds like Carl Sagan telling us about communication with intelligent life on other planets.