Wednesday, January 31, 2018

Horgan v Deutsch on consciousness

I posted on consciousness without noticing a couple of other recent opinions.

SciAm's John Horgan writes:
Is science infinite? Can it keep giving us profound insights into the world forever? Or is it already bumping into limits, as I argued in The End of Science? In his 2011 book The Beginning of Infinity physicist David Deutsch made the case for boundlessness. When I asked him about consciousness in a recent Q&A he replied: “I think nothing worth understanding will always remain a mystery. And consciousness (qualia, creativity, free will etc.) seems eminently worth understanding.”

At a meeting I just attended in Switzerland, “The Enigma of Human Consciousness,” another eminent British physicist, Martin Rees, challenged Deutsch’s optimism. At the meeting scientists, philosophers and journalists (including me) chatted about animal consciousness, machine consciousness, psychedelics, Buddhism, meditation and other mind-body puzzles.

Rees, speaking via Skype from Cambridge, reiterated points he made last month in “Is There a Limit to Scientific Understanding?” In that essay Rees calls Beginning of Infinity “provocative and excellent” but disputes Deutsch’s central claim that science is boundless. Science “will hit the buffers at some point,” Rees warns. He continues:
There are two reasons why this might happen. The optimistic one is that we clean up and codify certain areas (such as atomic physics) to the point that there’s no more to say. A second, more worrying possibility is that we’ll reach the limits of what our brains can grasp. There might be concepts, crucial to a full understanding of physical reality, that we aren’t aware of, any more than a monkey comprehends Darwinism or meteorology… Efforts to understand very complex systems, such as our own brains, might well be the first to hit such limits. Perhaps complex aggregates of atoms, whether brains or electronic machines, can never know all there is to know about themselves.

Rees’s view resembles mine. In The End of Science I asserted that scientists are running into cognitive and physical limits and will never solve the deepest mysteries of nature, notably why there is something rather than nothing. I predicted that if we create super-intelligent machines, they too will be baffled by the enigma of their own existence.
It seems possible to me that we will never understand consciousness any better than we do today.

I have a lot of confidence in the power of science, but that is mainly for questions that have scientific formulations. These questions about consciousness do not necessarily have any answer.

Monday, January 29, 2018

Electrons may be conscious

From a Quartz essay:
Consciousness permeates reality. Rather than being just a unique feature of human subjective experience, it’s the foundation of the universe, present in every particle and all physical matter.

This sounds like easily-dismissible bunkum, but as traditional attempts to explain consciousness continue to fail, the “panpsychist” view is increasingly being taken seriously by credible philosophers, neuroscientists, and physicists, including figures such as neuroscientist Christof Koch and physicist Roger Penrose.

“Why should we think common sense is a good guide to what the universe is like?” says Philip Goff, a philosophy professor at Central European University in Budapest, Hungary. “Einstein tells us weird things about the nature of time that counters common sense; quantum mechanics runs counter to common sense. Our intuitive reaction isn’t necessarily a good guide to the nature of reality.”
I am not sure if this is nutty or not. We do not have a scientific definition of consciousness, so there is no way to test the ideas in this essay.

Nevertheless, there appears to be such a thing as consciousness, even if we cannot give a good definition of it.

Assuming you are a materialist, and not a dualist, the human brain is the sum of its constituent parts. Do those parts have a little bit of consciousness, or does consciousness only emerge after a certain cognitive capacity is reached? Both seem possible to me.

If consciousness is emergent, then we can expect AI computers to be conscious some day. Or maybe those computers will never be conscious until they are made of partially conscious parts.

There is an argument that decoherence times in a living brain environment are so fast that quantum mechanics cannot possibly play any part in consciousness. I do not accept that. The argument shows that you do not have Schroedinger cats in your head, or at least not for very long, but quantum mechanics could still have a vital role in decision making. We don't understand the brain well enough to say.

It may also turn out that consciousness will never be defined precisely enough for these questions to make sense.

Wednesday, January 24, 2018

Intellectuals are afraid of free will

It is funny to see scientists expressing a quasi-religious belief in determinism, and rejecting free will.

The leftist-atheist-evolutionist Jerry Coyne writes:
One thing that’s struck me while interacting with various Scholars of Repute is how uncomfortable many get when they have to discuss free will. ...

No, I’m talking about other prominent thinkers, and I’ll use Richard Dawkins as an example. When I told him in Washington D.C. that, in our onstage conversation, that I would ask him about free will, he became visibly uncomfortable. ...

Why this avoidance of determinism? I’ve thought about it a lot, and the only conclusion I can arrive at is this: espousing the notion of determinism, and emphasizing its consequences, makes people uncomfortable, and they take that out on the determinist. For instance, suppose someone said — discussing the recent case of David Allen Turpin and Louise Anna Turpin, who held their 13 children captive under horrendous circumstances in their California home (chaining them to beds, starving them, etc.) — “Yes, the Turpins did a bad thing, but they had no choice. They were simply acting on the behavioral imperatives dictated by their genes and environment, and they couldn’t have done otherwise.”

If you said that, most people would think you a monster—a person without morals who was intent on excusing their behavior. But that statement about the Turpins is true! ...

But grasping determinism, as I, Sam [Harris], and people like Robert Sapolsky believe, would lead to recommending a complete overhaul of our justice system. ...

I assume that most readers here accept determinism of human behavior, with the possible exception of truly indeterminate quantum-mechanical phenomena that may affect our behavior but still don’t give us agency. What I want to know is why many intellectuals avoid discussing determinism, which I see as one of the most important issues of our time.
A commenter disagreed, saying that quantum uncertainty in the brain implies that Turpin could have done otherwise. Coyne first accused the commenter of denying the laws of physics, and then qualified his post to say that Turpin "could not CONSCIOUSLY have done otherwise. ... Randomness does not give us any “freedom”, ...".

There are several problems here. First, the laws of physics are not all deterministic. So one can say that Turpin had some free choice without denying any laws of physics.

Second, it is very difficult to say what is conscious behavior, and what is not, as we have no good scientific definition of consciousness. Does a dog make a conscious decision to chew on a bone? Can a computer possibly make a conscious decision? There is no consensus on how to answer questions like these.

Third, Coyne's put-down of randomness is nonsense. If you do have freedom to make arbitrary choices, then such choices look exactly like randomness. If you complain that my decisions are unpredictable, then you are complaining about my freedom to make decisions. There is no observable difference.

A week before, Coyne attacked E.O. Wilson:
Note in the second paragraph that Wilson cites “chance” in support of free will. If by “chance” he means “things that are determined but we can’t predict”, then that’s no support for the classic notion of free will: the “you could have chosen otherwise” sort. If he’s referring instead to pure quantum indeterminacy, well, that just confers unpredictability on our decisions, not agency. We don’t choose to make an electron jump in our brain.

From what I make of the third paragraph, his message is that because we are a long way from figuring out how we make behavioral decisions, we might as well act as if we have free will, especially because “confidence in free will is biologically adaptive.”
Wilson did not even express an opinion on free will, but merely expressed skepticism about understanding the brain.

Coyne also comments:
I’ve explained it many times; if you don’t understand the difference between somebody committing a good or bad act that was predetermined, and somebody freely choosing to perform a good or bad act for which they are praised or damned for supposedly making a good or bad choice, I can’t help you. They are different and the former isn’t empty.
But Coyne believes that the latter is empty, because no one can really choose anything.

My guess is that Dawkins is a believer in free will, but doesn't like to talk about it because he doesn't know how to square it with his widely-professed atheist beliefs. That could be true about other intellectuals as well.

While it may be baffling that some intellectuals are afraid to endorse determinism, I think it is even more baffling that Coyne and Sam Harris are so eager to convince people that no one has any choice about their thought processes.

I agree with this criticism of Sam Harris:
If there is no free will, why write books or try to convince anyone of anything? People will believe whatever they believe. They have no choice! Your position on free will is, therefore, self-refuting. The fact that you are trying to convince people of the truth of your argument proves that you think they have the very freedom that you deny them.
And yet many intellectuals deny free will, including physicists from Einstein to Max Tegmark.

Thursday, January 18, 2018

S. M. Carroll goes beyond falsifiability

Peter Woit writes:
Sean Carroll has a new paper out defending the Multiverse and attacking the naive Popperazi, entitled Beyond Falsifiability: Normal Science in a Multiverse. He also has a Beyond Falsifiability blog post here.

Much of the problem with the paper and blog post is that Carroll is arguing against a straw man, while ignoring the serious arguments about the problems with multiverse research.
Here is Carroll's argument that the multiverse is better than the Freudian-Marxist crap that Popper was criticizing:
Popper was offering an alternative to the intuitive idea that we garner support for ideas by verifying or confirming them. In particular, he was concerned that theories such as the psychoanalysis of Freud and Adler, or Marxist historical analysis, made no definite predictions; no matter what evidence was obtained from patients or from history, one could come up with a story within the appropriate theory that seemed to fit all of the evidence. Falsifiability was meant as a corrective to the claims of such theories to scientific status.

On the face of it, the case of the multiverse seems quite different than the theories Popper was directly concerned with. There is no doubt that any particular multiverse scenario makes very definite claims about what is true. Such claims could conceivably be falsified, if we allow ourselves to count as "conceivable" observations made outside our light cone. (We can't actually make such observations in practice, but we can conceive of them.) So whatever one's stance toward the multiverse, its potential problems are of a different sort than those raised (in Popper's view) by psychoanalysis or Marxist history.

More broadly, falsifiability doesn't actually work as a solution to the demarcation problem, for reasons that have been discussed at great length by philosophers of science.
Got that? Just redefine "conceivable" to include observations that could never be done!

While Woit rejects string and multiverse theory, he is not sure about quantum computers:
I am no expert on quantum computing, but I do have quite a bit of experience with recognizing hype, and the Friedman piece appears to be well-loaded with it.
I'll give a hint here -- scientists don't need all the crazy hype if they have real results to brag about.

Monday, January 15, 2018

Gender fairness, rather than gender bias

I have quoted SciAm's John Horgan a few times, as he has some contrarian views about science and he is willing to express skepticism about big science fads. But he also has some conventional leftist blinders.

A couple of women posted a rebuttal to him on SciAm:
They found that the biggest barrier for women in STEM jobs was not sexism but their desire to form families. Overall, Ceci and Williams found that STEM careers were characterised by “gender fairness, rather than gender bias.” And, they stated, women across the sciences were more likely to receive hiring offers than men, their grants and articles were accepted at the same rate, they were cited at the same rate, and they were tenured and promoted at the same rate.

A year later, Ceci and Williams published the results of five national hiring experiments in which they sent hypothetical female and male applicants to STEM faculty members. They found that men and women faculty members from all four fields preferred female applicants 2:1 over identically qualified males.
This seems accurate to me. It is hard to find any women in academia with stories about how they have been mistreated.

Nevertheless, men get into trouble if they just say that there are personality differences between men and women. If you are a typical leftist man, you are expected to complain about sexism and the patriarchy, and defer to women on the subject.

Thursday, January 11, 2018

Intel claims 49-qubit computer

Here is news from the big Consumer Electronics Show:
Intel announced it has built a 49-qubit processor, suggesting it is on par with the quantum computing efforts at IBM and Google.

The announcement of the chip, code-named “Tangle Lake,” came during a pre-show keynote address by Intel CEO Brian Krzanich at this year’s Consumer Electronics Show (CES) in Las Vegas. “This 49-qubit chip pushes beyond our ability to simulate and is a step toward quantum supremacy, a point at which quantum computers far and away surpass the world’s best supercomputers,” said Krzanich. The chief exec went on to say that he expects quantum computing will have a profound impact in areas like material science and pharmaceuticals, among others. ...

In November 2017, IBM did announce it had constructed a 50-qubit prototype in the lab, while Google’s prediction of delivering a 49-qubit processor before the end of last year apparently did not pan out. As we’ve noted before, the mere presence of lots of qubits says little about the quality of the device. Attributes like coherence times and fault tolerance are at least as critical as size when it comes to quantum fiddling.

Details like that have not been made public for Tangle Lake, which Intel has characterized a “test chip.” Nevertheless, Intel’s ability to advance its technology so quickly seems to indicate the company will be able to compete with quantum computers being developed by Google, IBM, and a handful of quantum computing startups that have entered the space.
Until recently, the physics professors were saying that we needed 50 qubits to get quantum supremacy. Now these companies are claiming 49 qubits or barely 50 qubits, but they are not claiming quantum supremacy.

They don't really have 49 qubits. They are just saying that because it is the strongest claim they can make without someone calling their bluff and demanding proof of quantum supremacy.
“In the quest to deliver a commercially viable quantum computing system, it’s anyone’s game,” said Mike Mayberry, corporate vice president and managing director of Intel Labs. “We expect it will be five to seven years before the industry gets to tackling engineering-scale problems, and it will likely require 1 million or more qubits to achieve commercial relevance.”
A million qubits? Each one has to be put in a Schrodinger cat state where it is 0 and 1 at the same time, pending an observation, and all million qubits have to be simultaneously entangled with each other.

This cannot happen in 5-7 years. This will never achieve commercial relevance.
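For a rough sense of why 50 or so qubits keeps coming up as the threshold, consider that a brute-force classical simulation of n qubits must store 2^n complex amplitudes. The little Python sketch below is my own back-of-the-envelope illustration, not anything Intel, IBM, or Google has published, and cleverer simulation methods can do somewhat better, but it shows how quickly the memory requirement outruns any supercomputer:

# Memory needed to store a full n-qubit state vector:
# 2**n complex amplitudes at 16 bytes (complex128) each.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 49, 50, 60):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n:2d} qubits: {gib:,.0f} GiB")

# 30 qubits fits in 16 GiB, but 49 qubits already needs about
# 8 million GiB (8 PiB) just for the raw state vector, which is
# why a faithful brute-force simulation gives out around 50 qubits.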

Monday, January 8, 2018

The confidence interval fallacy

Statisticians have a concept called the p-value that is crucial to most papers in science and medicine, but is widely misunderstood. I just learned of another similarly-misunderstood concept.

Statisticians also have the confidence interval. But it does not mean what you think.

The Higgs boson has mass 125.09±0.21 GeV. You might see a statement that a 95% confidence interval for the mass is [124.88,125.30], and figure that physicists are 95% sure that the mass is within that interval. Or that 95% of the observations were within that interval.

Nope. The actual definition is more roundabout: it is a statement about the procedure. If the experiment were repeated many times and a 95% interval computed each time, about 95% of those intervals would contain the true mass. Any one published interval either contains the true value or it does not; the 95% does not directly give you confidence that the mass is within that particular interval.
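Here is a toy simulation of that distinction (the numbers are invented for illustration and have nothing to do with the actual Higgs analysis):

# Toy demonstration of the frequentist meaning of a 95% confidence interval.
import random, statistics

random.seed(0)
TRUE_MASS = 125.0   # pretend this is the unknown true value
SIGMA = 0.5         # known measurement spread
N = 25              # measurements per simulated experiment
Z95 = 1.96          # normal quantile for a 95% interval

trials, covered = 10_000, 0
for _ in range(trials):
    data = [random.gauss(TRUE_MASS, SIGMA) for _ in range(N)]
    mean = statistics.mean(data)
    half = Z95 * SIGMA / N ** 0.5
    if mean - half <= TRUE_MASS <= mean + half:
        covered += 1

print(f"Fraction of intervals covering the true value: {covered / trials:.3f}")
# Prints roughly 0.95: the 95% describes the long-run behavior of the
# procedure, not the probability that any single interval is correct.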

Statistician A. Gelman recently admitted getting this wrong in his textbook, and you can learn more at The Fallacy of Placing Confidence in Confidence Intervals.

Some commenters at Gelman's blog say that the term was misnamed, and maybe should have been called "best guess interval" or something like that.

Saturday, January 6, 2018

Science perpetuating unequal social orders

A reader sends this 2017 paper on The careless use of language in quantum information:
An imperative aspect of modern science is that scientific institutions act for the benefit of a common scientific enterprise, rather than for the personal gain of individuals within them. This implies that science should not perpetuate existing or historical unequal social orders. Some scientific terminology, though, gives a very different impression. I will give two examples of terminology invented recently for the field of quantum information which use language associated with subordination, slavery, and racial segregation: 'ancilla qubit' and 'quantum supremacy'.
I first heard of this sort of objection in connection with the Wikipedia article on Master/slave (technology):
Master/slave is a model of communication where one device or process has unidirectional control over one or more other devices. In some systems a master is selected from a group of eligible devices, with the other devices acting in the role of slaves.[1][2][3] ...

Appropriateness of terminology

In 2003, the County of Los Angeles in California asked that manufacturers, suppliers and contractors stop using "master" and "slave" terminology on products; the county made this request "based on the cultural diversity and sensitivity of Los Angeles County".[5][6] Following outcries about the request, the County of Los Angeles issued a statement saying that the decision was "nothing more than a request".[5] Due to the controversy,[citation needed] Global Language Monitor selected the term "master/slave" as the most politically incorrect word of 2004.[7]

In September 2016, MediaWiki deprecated instances of the term "slave" in favor of "replica".[8][9]

In December 2017, the Internet Systems Consortium, maintainers of BIND, decided to allow the words primary and secondary as a substitute for the well-known master/slave terminology. [10]
I am not even sure that people associate "white supremacy" with South Africa anymore. It appears to be becoming one of those meaningless name-calling epithets, like "nazi". E.g., if you oppose illegal immigration, you might be called a white supremacist.

Until everyone settled on "quantum supremacy", I used other terms on this blog, such as super-Turing. That is, the big goal is to make a computer that can do computations with a complexity that exceeds the capability of a Turing machine.

Meanwhile, John Preskill, the physicist who coined the quantum supremacy term, has cooked up a new term for the coming Google-IBM overhyped results:
Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away --- we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing. ...

We shouldn’t expect NISQ to change the world by itself; instead it should be regarded as a step toward more powerful quantum technologies we’ll develop in the future. I do think that quantum computers will have transformative effects on society eventually, but these may still be decades away. We’re just not sure how long it’s going to take.
Will Google and IBM be happy claiming NISQ and admitting that quantum supremacy and transformative effects are decades away? I doubt it, but if they cannot achieve quantum supremacy, they will surely want to claim something. Preskill also writes:
A few years ago I spoke enthusiastically about quantum supremacy as an impending milestone for human civilization [20]. I suggested this term as a way to characterize computational tasks performable by quantum devices, where one could argue persuasively that no existing (or easily foreseeable) classical device could perform the same task, disregarding whether the task is useful in any other respect. I was trying to emphasize that now is a very privileged time in the coarse-grained history of technology on our planet, and I don’t regret doing so. ...

I’ve already emphasized repeatedly that it will probably be a long time before we have fault-tolerant quantum computers solving hard problems.
He sounds like Carl Sagan telling us about communication with intelligent life on other planets.

Thursday, January 4, 2018

Google promises quantum supremacy in 2018

NewScientist reports:
If all goes to plan in 2018, Google will unveil a device capable of performing calculations that no other computer on the planet can tackle. The quantum computing era is upon us.

Well, sort of. Google is set to achieve quantum supremacy, the long-awaited first demonstration of quantum computers’ ability to outperform ordinary machines at certain tasks. Regular computing bits can be in one of two states: 0 or 1. Their quantum cousins, qubits, get a performance boost by storing a mixture of both states at the same time.

Google’s planned device has just 49 qubits – hardly enough to threaten the world’s high-speed supercomputers. But the tech giant has stacked the deck heavily in its favour, choosing to attack a problem involving simulating the behaviour of random quantum objects – a significant home advantage for a quantum machine.

This task is useless. Solving it won’t build better AI, ...
Google promised quantum supremacy in 2017. Now it is 2018.

If we hear this every year for the next five years, will anyone finally agree that I am right to be skeptical?
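As an aside on the quoted description of qubits: the sketch below, a toy illustration of my own and not a model of Google's hardware, shows what "storing a mixture of both states" amounts to. A qubit is described by two complex amplitudes, one for 0 and one for 1, and a measurement returns 0 or 1 with probabilities given by the squared magnitudes of those amplitudes.

# Toy single-qubit simulation: amplitudes (a, b) for the states |0> and |1>.
import random

def hadamard(a, b):
    # The Hadamard gate turns |0> into an equal superposition of |0> and |1>.
    s = 2 ** -0.5
    return s * (a + b), s * (a - b)

def measure(a, b):
    # Sample a classical bit with probabilities |a|^2 and |b|^2.
    return 0 if random.random() < abs(a) ** 2 else 1

a, b = 1.0, 0.0            # start in the state |0>
a, b = hadamard(a, b)      # amplitudes are now (1/sqrt(2), 1/sqrt(2))

counts = [0, 0]
for _ in range(10_000):
    counts[measure(a, b)] += 1
print(counts)              # roughly [5000, 5000]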

Tuesday, January 2, 2018

Scientists censoring non-leftist views

Scott Aaronson considered joining a group supporting a diversity of views in academia, but backed out because he believes that if someone like Donald Trump were elected, "I’d hope that American academia would speak with one voice".

Okay, he obviously does not favor a diversity of views, and does not even want representation of the electoral majority that voted for Trump.

SciAm blogger John Horgan writes:
In principle, evolutionary psychology, which seeks to understand our behavior in light of the fact that we are products of natural selection, can give us deep insights into ourselves. In practice, the field often reinforces insidious prejudices. That was the theme of my recent column “Darwin Was Sexist, and So Are Many Modern Scientists.”

The column provoked such intense pushback that I decided to write this follow-up post. ...

Political scientist Charles Murray complained that Scientific American “has been adamantly PC since before PC was a thing,” which as someone who began writing for the magazine in 1986 I take as a compliment. ...

War seems to have emerged not millions of years ago but about 12,000 years ago when our ancestors started abandoning their nomadic ways and settling down. ... War and patriarchy, in other words, are relatively recent cultural developments. ...

Proponents of biological theories of sexual inequality accuse their critics of being “blank slaters,” who deny any innate psychological tendencies between the sexes. This is a straw man. I am not a blank-slater, nor do I know any critic of evolutionary psychology who is. But I fear that biological theorizing about these tendencies, in our still-sexist world, does more harm than good. It empowers the social injustice warriors, and that is the last thing our world needs.
Our world will always be sexist. It is human nature. Only in academia can you find people striving for a non-sexist world.

It is odd to hear a science magazine writer complain that "biological theorizing ... does more harm than good." When we only allow certain theorizing that supports certain political views, then we get bogus theories. In this case, he only wants anti-sexism and anti-patriarchy theories.

It is amusing to read Scott's comments, where he agrees with the academic leftists 98%. But Ken Miller jumps on him for disagreeing with white genocide. That is, Scott says that a leftist professor deserves to be criticized if he advocates white genocide.