Wednesday, June 22, 2016

Lobbying for the exascale computer

The NY Times reports:
Last year, the Obama administration began a new effort to develop a so-called “exascale” supercomputer that would be 10 times faster than today’s fastest supercomputers. (An exaflop is a quintillion — one million trillion — mathematical instructions a second.) Computer scientists have argued that such machines will allow more definitive answers on crucial questions such as the danger posed by climate change.
It is funny what scientists will say to get funding. If saying that anthropogenic global warming is a proven fact gets them funding, they say that. But if saying that we need an exascale supercomputer to prove it will get them funding, they say that too, to get those exaflops.

Monday, June 20, 2016

Philosopher errors about free will

I listened to some dopey philosophers discuss free will, and they agreed on several absurd points.
Science has proved that the world is deterministic.
No, our leading scientific theories are not deterministic. Quantum mechanics is the most fundamental, and it is not deterministic at all. The most deterministic theory is supposed to be Newtonian mechanics, but it is not deterministic as it is usually applied.
Regardless of empirical knowledge, a rational materialistic view requires determinism.
I don't know how anyone can believe anything so silly. Nearly all of science, from astrophysics to social science, uses models that are partially deterministic and partially stochastic. Pure determinism is not used or believed anywhere.
Random means all possibilities are equally likely.
No. Coin tosses are supposed to have this property, but anything more complicated usually does not. If you allow the possibility of the coin landing on its edge, the edge is not equally likely.
Randomness means no one has any control over outcomes.
No, it means nothing of the kind. Often the randomness in a scientific paper is controlled by a pseudorandom number generator. While those numbers look random compared to the other data, they are determined by a formula.
The world is either deterministic or random, so we have no free will.
I wrote an essay explaining why this is wrong.
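The pseudorandom point above is easy to demonstrate: seed a generator twice with the same value and it reproduces the identical "random" sequence, because the numbers come from a deterministic formula. A minimal Python sketch:

```python
import random

# Two generators seeded with the same value produce the same "random"
# stream, because the output is computed by a deterministic formula
# (CPython's generator is the Mersenne Twister).
g1 = random.Random(42)
g2 = random.Random(42)

seq1 = [g1.random() for _ in range(5)]
seq2 = [g2.random() for _ in range(5)]

print(seq1 == seq2)  # True: the "randomness" is fully determined by the seed
```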

This is the sort of thing that only a foolish philosopher would say. Free will is self-evident. And yet they claim that it does not exist based on some supposed dichotomy between two other concepts that they never adequately define.

They might define randomness as anything that is not determined. Then they try to draw some grand philosophical consequence from that dichotomy. No, you cannot prove something nontrivial just by giving a definition.

Saturday, June 18, 2016

Physicists with faith in the multiverse

Peter Woit reports:
So, from the Bayesians we now have the following for multiverse probability estimates:

Carroll: “About 50%”
Polchinski: “94%”
Rees: “Kill my dog if it’s not true”
Linde: “Kill me if it’s not true”
Weinberg: “Kill Linde and Rees’s dog if it’s not true”

Not quite sure how one explains this when arguing with people convinced that science is just opinion.
Neil comments:
When a weather forecaster tells me the probability of rain tomorrow is 50%, I translate it as “I don’t know.” With a greater than 50%, I hear “There is more reason to think it will rain than it won’t” and vice versa with less than 50%.
No, this is badly confused.

If you really don't know anything, then you can apply the Principle of indifference and give both possibilities a 50% prior. But that is certainly not what the weather forecaster means. He is saying that when historical conditions have matched the current conditions, it has rained 50% of the time. That is very useful, as a typical day will usually have a much lower chance of rain (in most places).
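That frequentist reading can be sketched in a few lines of Python; the weather records below are made-up for illustration:

```python
# Frequentist reading of "50% chance of rain": among past days whose
# conditions matched today's, count how often it actually rained.
# These records are hypothetical, purely for illustration.
records = [
    {"pressure": "low", "humidity": "high", "rained": True},
    {"pressure": "low", "humidity": "high", "rained": False},
    {"pressure": "low", "humidity": "high", "rained": True},
    {"pressure": "high", "humidity": "low", "rained": False},
    {"pressure": "low", "humidity": "high", "rained": False},
]

today = {"pressure": "low", "humidity": "high"}
matches = [r for r in records
           if r["pressure"] == today["pressure"]
           and r["humidity"] == today["humidity"]]
chance = sum(r["rained"] for r in matches) / len(matches)
print(chance)  # 0.5: it rained on 2 of the 4 matching days
```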

A multiverse probability estimate does not refer to other instances that may or may not have been multiverses. So all the probability can mean is a measure of the speaker's belief. It is not any evidence for the multiverse; it is more like a measure of one's faith in God.

Thursday, June 16, 2016

New black hole collision announced

A reader alerts me to this:
Why does LIGO's reported second detection of gravitational waves and a black hole merger look absolutely nothing like the first detection announced in February?
There are more details at the LIGO Skeptic blog.

I don't know. I am also wondering why they are just now announcing a collision that was supposedly observed 5 months ago, and whether LIGO still has its policy of only 3 people knowing whether or not the result has been faked.

And why do they sit on the data so long? The LIGO folks could tell us whenever they have a coincident event. As it is, they are not telling us the full truth, because they make big announcements while concealing the data that would allow us to make comparisons.

Monday, June 13, 2016

Dawkins against First Cause argument

Leftist-atheist-evolutionist Jerry Coyne summarizes a Dawkins speech:
As you may recall, in The God Delusion Richard argued that the “First Cause” argument is intellectually bankrupt, for it doesn’t explain the “cause” of a complex God who could create everything. In response, the dimmer or more mendacious theologians say that God isn’t complex at all, but simple. (Some theologians, however, just punt and say that God is the One Thing that Doesn’t Need a Cause.) It is this “simple god” argument that Dawkins takes apart halfway through the video. And, of course, even if God were simple, his appearance in the cosmos still needs an explanation. It just won’t do to say that God is the one thing, among all other things, that doesn’t need a cause. If he was hanging around forever, pray tell us, O theologians, what he was doing before he created the Universe.

Finally, physics has dispensed with the idea of causation; the discipline has no such thing as “a law of cause and effect,” and some physical phenomena are simply uncaused.
If he just said that theologians cannot explain the cause of God, then I would agree.

But then the argument becomes that God requires a causal explanation, while nothing else does!

How could Dawkins and Coyne think that "physics has dispensed with the idea of causation"? Does some textbook say that?

All my textbooks say that events are caused by events in the backward light cone, with causation propagated by differential equations.
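That textbook picture can be made concrete with a toy light-cone check (a hypothetical helper, in units where c = 1): event A can influence event B only if their separation is timelike or lightlike and B is later in time.

```python
# Toy light-cone check in units where c = 1: event a can causally
# influence event b only if b lies in a's forward light cone, i.e. the
# Minkowski interval is timelike or lightlike and b is later in time.
def can_cause(a, b):
    """Events are (t, x, y, z) tuples."""
    dt = b[0] - a[0]
    dx, dy, dz = b[1] - a[1], b[2] - a[2], b[3] - a[3]
    interval = dt ** 2 - (dx ** 2 + dy ** 2 + dz ** 2)
    return dt > 0 and interval >= 0

# One second later, half a light-second away: causally reachable.
print(can_cause((0, 0, 0, 0), (1, 0.5, 0, 0)))  # True
# One second later, two light-seconds away: outside the light cone.
print(can_cause((0, 0, 0, 0), (1, 2.0, 0, 0)))  # False
```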

All of our modern science theories are based on causality. From what we know, everything has causes that are earlier in time.

If you ask what caused the big bang, then no one has a good answer for that. You can say that there is no singularity. You can say that the singularity had no cause. You can say that the big bang is just an energy release in some larger multiverse. You can say God caused it. You can say turtles caused it. You can say a lot of things, but the subject is so far removed from the domain of established science that it is all just wild speculation.

Dawkins' speech was at something called the "2016 Reason Rally". Those guys must have a funny idea about what "reason" is.

Neil deGrasse Tyson now says that he does not call himself an "atheist" because that term is now understood to include various other ideologies. He has a point. If you called yourself a Baptist, then people would expect you to have views in line with the big Baptist organizations, rather than expect you to conform to some dictionary definition. Likewise, an atheist is not just someone who denies God, according to many today.

Meanwhile, Scott Aaronson posts a Trump-hating rant:
From my standpoint, Hillary does have four serious drawbacks, any one of which might cause me to favor an alternative, were such an alternative available: ...

Again, though: I regard Trump as an existential threat to American democracy and rule of law, of a kind I’ve never seen in my lifetime and never expected to see.
One of his arguments is that Trump proposed a temporary ban on Muslim immigration.

No, Islam is the existential threat, and Trump recognizes that. Aaronson posted that before the recent gay bar shooting.

Here is more Trump-hating:
Ms. Whitman, according to one of the people present, did not stop at comparing Mr. Trump to Hitler and Mussolini.
Okay, maybe Islam and the Trump-haters are the existential threats.

Friday, June 10, 2016

Google seeking universal quantum computer

Nature mag reports:
Google moves closer to a universal quantum computer

Combining the best of analog and digital approaches could yield a full-scale multipurpose quantum computer.

For 30 years, researchers have pursued the universal quantum computer, a device that could solve any computational problem, with varying degrees of success. ...

This new approach should enable a computer with quantum error correction, says Lidar. Although the researchers did not demonstrate that here, the team has previously shown how that might be achieved on its nine-qubit device.

“With error correction, our approach becomes a general-purpose algorithm that is, in principle, scalable to an arbitrarily large quantum computer,” says Alireza Shabani, another member of the Google team.

The Google device is still very much a prototype. But Lidar says that in a couple of years, devices with more than 40 qubits could become a reality.

“At that point,” he says, “it will become possible to simulate quantum dynamics that is inaccessible on classical hardware, which will mark the advent of ‘quantum supremacy’.”
I post this for the benefit of those who think that quantum computers have already been built. As you can see, Google is really itching to claim the first quantum computer, and it hopes to do so in the next couple of years.

I doubt it, but we shall see. Google has done better with artificial intelligence than I expected, but worse in robotics. I am predicting that it does not demonstrate quantum supremacy.

Tuesday, June 7, 2016

Hawking pushes soft hair on black holes

A NY Times article brags about the latest research paper from Stephen Hawking and coauthors:
But there was a hitch. By Dr. Hawking’s estimation, the radiation coming out of the black hole as it fell apart would be random. As a result, most of the “information” about what had fallen in — all of the attributes and properties of the things sucked in, whether elephants or donkeys, Volkswagens or Cadillacs — would be erased.

In a riposte to Einstein’s famous remark that God does not play dice, Dr. Hawking said in 1976, “God not only plays dice with the universe, but sometimes throws them where they can’t be seen.”

But his calculation violated a tenet of modern physics: that it is always possible in theory to reverse time, run the proverbial film backward and reconstruct what happened in, say, the collision of two cars or the collapse of a dead star into a black hole.

The universe, like a kind of supercomputer, is supposed to be able to keep track of whether one car was a green pickup truck and the other was a red Porsche, or whether one was made of matter and the other antimatter. These things may be destroyed, but their “information” — their essential physical attributes — should live forever.
No, this is nonsense. There is no accepted principle that information should live forever.
At least in principle, then, he agreed, information is always preserved — even in the smoke and ashes when you, say, burn a book. With the right calculations, you should be able to reconstruct the patterns of ink, the text. ...

But neither Dr. Hawking nor anybody else was able to come up with a convincing explanation for how that happens and how all this “information” escapes from the deadly erasing clutches of a black hole.

Indeed, a group of physicists four years ago tried to figure it out and suggested controversially that there might be a firewall of energy just inside a black hole that stops anything from getting out or even into a black hole.

The new results do not address that issue. But they do undermine the famous notion that black holes have “no hair” — that they are shorn of the essential properties of the things they have consumed. ...

“When I wrote my paper 40 years ago, I thought the information would pass into another universe,” he told me. Now he thinks the information is stored on the black hole’s horizon.
The general public must think that Hawking and other physicists are trolling us.

No, you cannot reconstruct a book after burning it, and you cannot recover info after dropping it into a black hole. Saying that the info stays on the horizon, or whatever Hawking is saying now, has no observable implications. The firewall idea is just more lunacy.

The idea that info lasts forever is promoted primarily by the Everett many-worlds cult. They believe that irreversible experiments, like burning a book, are really just some sort of illusion, with all the info passing into a parallel universe. The info looks as if it is lost to our universe, but a trace of it is quantum entangled, so there is some infinitesimal probability it can be reinstated.

If that seems like gibberish to you, it is gibberish.

Monday, June 6, 2016

Musk says we live in a simulation

Elon Musk is today's most daring entrepreneur, and he says:
At Recode's annual Code Conference, Elon Musk explained how we are almost certainly living in a more advanced civilization's video game. He said: "The strongest argument for us being in a simulation probably is the following. Forty years ago we had pong. Like, two rectangles and a dot. That was what games were. Now, 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously, and it's getting better every year. Soon we'll have virtual reality, augmented reality. If you assume any rate of improvement at all, then the games will become indistinguishable from reality, even if that rate of advancement drops by a thousand from what it is now. Then you just say, okay, let's imagine it's 10,000 years in the future, which is nothing on the evolutionary scale. So given that we're clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds that we're in base reality is one in billions. Tell me what's wrong with that argument. Is there a flaw in that argument?"
Maybe this is why Musk thinks that colonizing Mars is a good idea.

This is also an argument against spending another billion dollars on quantum computer research.

By doing a lot of quantum experiments, we stress the ability of the simulator to keep up. These experiments put qubits into Schroedinger cat states, and there is no known efficient way to simulate cat states on a classical computer. So such simulations will slow down the frame rate, and effectively put us all in a state of suspended animation while the computations catch up.
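To see why cat states strain a classical simulator: an n-qubit state generally requires 2^n complex amplitudes, so a brute-force classical simulation grows exponentially. A toy Python sketch (the byte count assumes a typical dense simulator at 16 bytes per complex amplitude):

```python
import math

def cat_state(n):
    """Amplitudes of the n-qubit cat (GHZ) state |00...0> + |11...1>,
    stored the brute-force way: one amplitude per basis state."""
    dim = 2 ** n
    state = [0.0] * dim
    state[0] = 1 / math.sqrt(2)        # |00...0>
    state[dim - 1] = 1 / math.sqrt(2)  # |11...1>
    return state

# Memory for a dense classical simulation, at 16 bytes per complex amplitude:
for n in (10, 20, 30, 40):
    print(n, "qubits:", 16 * 2 ** n, "bytes")
# 40 qubits already needs about 17.6 terabytes.
```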

Possibly the simulation would take short-cuts, and violate the laws of quantum mechanics. No one has seen that yet. Another possibility is that the game hardware of the future will have massively powerful quantum computers in it. But that is not part of Musk's argument, as there has been no progress in making a quantum computer more efficient than a classical computer.

If the quantum computer enthusiasts are right, Google will have a practical quantum computer in a couple of years. If so, then Google could be putting us all in a state of suspended animation, or maybe even killing us by making possible the quantum simulators that will make it likely that we are all only living in a simulation.

As the above summary says, "Tell me what's wrong with that argument. Is there a flaw in that argument?"

Scott Aaronson challenges the famous mathematical physicist Roger Penrose on the subject of computer consciousness, and says:
One thing I hadn’t fully appreciated before meeting Penrose is just how wholeheartedly he agrees with Everett that quantum mechanics, as it currently stands, implies Many Worlds. Penrose differs from Everett only in what conclusion he draws from that. He says it follows that quantum mechanics has to be modified or completed, since Many Worlds is such an obvious reductio ad absurdum.
Penrose has his own theories, but they are not widely accepted.

Saying that quantum mechanics, as it currently stands, implies Many Worlds, is not much different from saying that it implies quantum computers. It makes almost as much sense to me to say that living in a quantum simulator is an obvious reductio ad absurdum.

For us to be programs in a simulator, we would have to believe that computer consciousness is possible, and there is no consensus for that. There is not even any way to define consciousness or to detect it if you find it.

Penrose says consciousness hinges on quantum mechanics being modified to accommodate gravity, and such quantum gravity operations taking place in the brain.

Aaronson says that consciousness could hinge on living in an anti-de Sitter space, so that the boundary of a black hole somewhere could enforce unitarity of the wave function's time evolution. But he concedes that we seem to live in a de Sitter-like space, with photons escaping irreversibly.

Lumo comments on Penrose and Aaronson, as usual.

In his MIT Commencement Speech on June 3, 2016, Matt Damon tells of prominent physicists who say that we might be living in a simulation. He also tells plenty of left-wing nonsense. I would think that this ought to be a symptom of schizophrenia. I mean, thinking that we live in a simulation is crazy. Maybe some of the leftist nonsense is also.

Monday, May 30, 2016

The medieval politics of infinitesimals

I would not have thought that infinitesimals would be so political, but a book last year says so. It is titled, Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World.

Russian-born MIT historian Slava Gerovitch reviews the book.
The Jesuits were largely responsible for raising the status of mathematics in Italy from a lowly discipline to a paragon of truth and a model for social and political order. The Gregorian reform of the calendar of 1582, widely accepted in Europe across the religious divide, had very favorable political ramifications for the Pope, and this project endeared mathematics to the hearts of Catholics. In an age of religious strife and political disputes, the Jesuits hailed mathematics in general, and Euclidean geometry in particular, as an exemplar of resolving arguments with unassailable certainty through clear definitions and careful logical reasoning. They lifted mathematics from its subservient role well below philosophy and theology in the medieval tree of knowledge and made it the centerpiece of their college curriculum as an indispensable tool for training the mind to think in an orderly and correct way.

The new, enviable position of mathematics in the Jesuits’ epistemological hierarchy came with a set of strings attached. Mathematics now had a new responsibility to publicly symbolize the ideals of certainty and order. Various dubious innovations, such as the method of indivisibles, with their inexplicable paradoxes, undermined this image. The Jesuits therefore viewed the notion of infinitesimals as a dangerous idea and wanted to expunge it from mathematics. In their view, infinitesimals not only tainted mathematics but also opened the door to subversive ideas in other areas, undermining the established social and political order. The Jesuits never aspired to mathematical originality. Their education was oriented toward an unquestioning study of established truths, and it discouraged open-ended intellectual explorations. In the first decades of the seventeenth century the Revisors General in Rome issued a series of injunctions against infinitesimals, forbidding their use in Jesuit colleges. Jesuit mathematicians called the indivisibles “hallucinations” and argued that “[t]hings that do not exist, nor could they exist, cannot be compared” (pp. 154, 159). ...

The battle over the method of indivisibles played out differently in England, where the Royal Society proved capable of sustaining an open intellectual debate. One of the most prominent critics of infinitesimals in England was philosopher and amateur mathematician Thomas Hobbes. A sworn enemy of the Catholic Church, he nevertheless shared with the Jesuits a fundamental commitment to hierarchical order in society. He believed that only a single-purpose organic unity of a nation, symbolized by the image of Leviathan, could save England from the chaos and strife sowed by the civil war. In the 1650s–70s his famously acrimonious dispute with John Wallis, the Savilian Professor of Geometry at Oxford and a leading proponent of the method of indivisibles, again pitted a champion of social order against an advocate of intellectual freedom. ...

In the 1960s, three hundred years after the Jesuits’ ban, infinitesimals eventually earned a rightful place in mathematics by acquiring a rigorous foundation in Abraham Robinson’s work on nonstandard analysis. They had played their most important role, however, back in the days when the method of indivisibles lacked rigor and was fraught with paradoxes. Perhaps it should not come as a surprise that today’s mathematics also borrows extremely fruitful ideas from nonrigorous fields, such as supersymmetric quantum field theory and string theory. ...

If, as in the case of the Jesuits, maintaining the appearance of infallibility becomes more important than exploration of new ideas, mathematics loses its creative spirit and turns into a storage of theorems.
I do not accept this. He argues that mathematics must accept nonrigorous work because someone might give it a rigorous foundation 3 centuries later.

It was only a few years later that Isaac Newton (and Leibniz and others) developed a coherent theory of infinitesimals. The subject was made much more rigorous again by Cauchy, Weierstrass, and others in the 1800s.
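To make the contrast concrete, here is the derivative of x² computed two ways: the old infinitesimal style that the Jesuits found paradoxical, and the later limit definition that made the subject rigorous:

```latex
% Infinitesimal style (Newton/Leibniz era): compute, then discard the
% infinitesimal dx at the end -- the step that drew the charge of paradox.
\frac{d}{dx}\,x^2 \;=\; \frac{(x+dx)^2 - x^2}{dx} \;=\; 2x + dx \;\approx\; 2x

% Cauchy--Weierstrass style (1800s): a limit, with no infinitesimals at all.
f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
```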

Thursday, May 26, 2016

Cranks denying local causality

I criticize mainstream professors a lot, but I am the one defending mainstream textbook science.

Philosopher Massimo Pigliucci claims to be an expert on pseudoscience, and is writing a book bragging about how much progress philosophers have made. Much of it has appeared in a series of blog posts.

One example of progress is that he says that philosophers have discovered that causality plays no role in fundamental physics:
Moreover, some critics (e.g., Chakravartty 2003) argue that ontic structural realism cannot account for causality, which notoriously plays little or no role in fundamental physics, and yet is crucial in every other science. For supporters like Ladyman causality is a concept pragmatically deployed by the “special” sciences (i.e., everything but fundamental physics), yet not ontologically fundamental.
I do not know how he could be so clueless. Causality plays a crucial role in every physics book I have.

A Quanta mag article explains:
New Support for Alternative Quantum View

An experiment claims to have invalidated a decades-old criticism against pilot-wave theory, an alternative formulation of quantum mechanics that avoids the most baffling features of the subatomic universe.

Of the many counterintuitive features of quantum mechanics, perhaps the most challenging to our notions of common sense is that particles do not have locations until they are observed. This is exactly what the standard view of quantum mechanics, often called the Copenhagen interpretation, asks us to believe. Instead of the clear-cut positions and movements of Newtonian physics, we have a cloud of probabilities described by a mathematical structure known as a wave function. The wave function, meanwhile, evolves over time, its evolution governed by precise rules codified in something called the Schrödinger equation. The mathematics are clear enough; the actual whereabouts of particles, less so. ...

For some theorists, the Bohmian interpretation holds an irresistible appeal. “All you have to do to make sense of quantum mechanics is to say to yourself: When we talk about particles, we really mean particles. Then all the problems go away,” said Goldstein. “Things have positions. They are somewhere. If you take that idea seriously, you’re led almost immediately to Bohm. It’s a far simpler version of quantum mechanics than what you find in the textbooks.” Howard Wiseman, a physicist at Griffith University in Brisbane, Australia, said that the Bohmian view “gives you a pretty straightforward account of how the world is…. You don’t have to tie yourself into any sort of philosophical knots to say how things really are.”
This is foolish.

The double-slit experiment shows that electrons and photons are not particles. Not classical particles, anyway.

Bohm lets you pretend that they are really particles, but you have to believe that they are attached to ghostly pilot waves that interact nonlocally with the particles and the rest of the universe.

Bohm also lets you believe in determinism, although it is a very odd sort of determinism because there is no local causality.

Just what is appealing about that?

Yes, you can say that the electrons have positions, but those positions are inextricably tied up with unobservable pilot waves, so what is satisfying about that?

Contrary to what the philosophers say, local causality is essential to physics and to our understanding of the world. If some experiment proves it wrong, then I would have to revise my opinion. But that has never been done, and there is no hope of doing it.

Belief in action-at-a-distance is just a mystical pipe dream.

So Bohm is much more contrary to intuition than ordinary Copenhagen quantum mechanics. And Bohm is only known to work in simple cases, as far as I know, and no one has ever used it to do anything new.

Wednesday, May 25, 2016

Left denies progress towards genetic truths

NPR radio interviews a Pulitzer Prize-winning physician plugging his new book on genetics:
As researchers work to understand the human genome, many questions remain, including, perhaps, the most fundamental: Just how much of the human experience is determined before we are even born, by our genes, and how much is dependent upon external environmental factors?

Oncologist Siddhartha Mukherjee tells Fresh Air's Terry Gross the answer to that question is complicated. "Biology is not destiny," Mukherjee explains. "But some aspects of biology — and in fact some aspects of destiny — are commanded very strongly by genes."

The degree to which biology governs our lives is the subject of Mukherjee's new book, The Gene. In it, he recounts the history of genetics and examines the roles genes play in such factors as identity, temperament, sexual orientation and disease risk.
Based on this, he has surely had his own genome sequenced, right? Nope.

GROSS: ... I want to ask about your own genes. Have you decided whether to or not to get genetically tested yourself? And I should mention here that there is a history of schizophrenia in your family. You had two uncles and a cousin with schizophrenia. You know, what scientists are learning about schizophrenia is that there is a genetic component to it or genetic predisposition. So do you want to get tested for that or other illnesses?

MUKHERJEE: I've chosen not to be tested. And I will probably choose not to be tested for a long time, until I start getting information back from genetic testing that's very deterministic. Again, remember that idea of penetrance that we talked about. Some genetic variations are very strongly predictive of certain forms of illness or certain forms of anatomical traits and so forth. I think that right now, for diseases like schizophrenia, we're nowhere close to that place. The most that we know is that there are multiple genes in familial schizophrenia, the kind that our family has. Essentially, we don't know how to map, as it were. There's no one-to-one correspondence between a genome and the chances of developing schizophrenia.

And until we can create that map - and whether we can create that map ever is a question - but until I - we can create that map, I will certainly not be tested because it - that idea - I mean, that's, again, the center of the book. That confines you. It becomes predictive. You become - it's a chilling word that I use in the book - you become a previvor (ph). A previvor is someone who's survived an illness that they haven't even had yet. You live in the shadow of an illness that you haven't had yet. It's a very Orwellian idea. And I think we should resist it as much as possible.

GROSS: Would you feel that way if you were a woman and there was a history of breast cancer in your family?

MUKHERJEE: Very tough question - if I was a woman and I had a history of breast cancer in my family - if the history was striking enough - and, you know, here's a - it's a place where a genetic counselor helps. If the history was striking enough, I would probably sequence at least the genes that have been implicated in breast cancer, no doubt about it.

I post this to prove that even the experts in genetics have the dopiest ideas about it. He wants to inform the public about genetics, but he is willfully ignorant of the personal practical implications.

I also criticized his New Yorker article on epigenetics.

Bad as he is, his reviewers are even worse. Atlantic mag reviews his book to argue that genes are overrated:
The antidote to such Whig history is a Darwinian approach. Darwin’s great insight was that while species do change, they do not progress toward a predetermined goal: Organisms adapt to local conditions, using the tools available at the time. So too with science. What counts as an interesting or soluble scientific problem varies with time and place; today’s truth is tomorrow’s null hypothesis — and next year’s error.

... The point is not that this [a complex view of how genes work; see below] is the correct way to understand the genome. The point is that science is not a march toward truth. Rather, as the author John McPhee wrote in 1967, “science erases what was previously true.” Every generation of scientists mulches under yesterday’s facts to fertilize those of tomorrow.

“There is grandeur in this view of life,” insisted Darwin, despite its allowing no purpose, no goal, no chance of perfection. There is grandeur in a Darwinian view of science, too. The gene is not a Platonic ideal. It is a human idea, ever changing and always rooted in time and place. To echo Darwin himself, while this planet has gone cycling on according to the laws laid down by Copernicus, Kepler, and Newton, endless interpretations of heredity have been, and are being, evolved.
I do not recall Darwin ever saying that evolution does not make progress, or that it has a purpose. Whether he did or not, many modern evolutionists, such as the late Stephen Jay Gould, say things like that a lot.

They not only deny progress and purpose in the history of life, they deny that science makes progress. They say that "today’s truth is tomorrow’s null hypothesis".

There are political undertones to this. Leftists and Marxists hate the idea of scientific truths, and they really despise truths about human nature.

As you can see from my motto, I reject all of this. Science makes progress towards truth, and genuine truths are not erased or mulched. My positivism is in a minority among philosophers and science popularizers.

Speaking of academic leftists citing Darwin for foolish ideas, the current Atlantic magazine has an article by a philosopher saying:
The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties — which some people have to a greater degree than others — to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance. ...

Many scientists say that the American physiologist Benjamin Libet demonstrated in the 1980s that we have no free will. It was already known that electrical activity builds up in a person’s brain before she, for example, moves her hand; Libet showed that this buildup occurs before the person consciously makes a decision to move. The conscious experience of deciding to act, which we usually associate with free will, appears to be an add-on, a post hoc reconstruction of events that occurs after the brain has already set the act in motion. ...

This research and its implications are not new. What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. ...

The list goes on: Believing that free will is an illusion has been shown to make people less creative, more likely to conform, less willing to learn from their mistakes, and less grateful toward one another. In every regard, it seems, when we embrace determinism, we indulge our dark side.
This is mostly nonsense, of course. Intelligence has been shown to be heritable, as would be expected from Darwinian evolution. But I don't think that Darwin believed in such extreme genetic determinism, as he did not understand genes.

It is possible that people who believe that free will is an illusion have some mild form of schizophrenia.

This is yet another example of philosophers thinking that they know better than everyone else. Philosophers and schizophrenics can hold beliefs that no normal person would.

Here is a summary of the Atlantic article:
Libertarian free will [the “we could have chosen otherwise” form] is dead, or at least dying among intellectuals. That means that determinism reigns (Cave doesn’t mention quantum mechanics), and that at any one time we can make only one choice.

But if we really realized we don’t have free will of that sort, we’d behave badly. Cave cites the study of Vohs and Schooler (not noting that that study wasn’t repeatable), but also other studies showing that individuals who believe in free will are better workers than those who don’t. I haven’t read those studies, and thus don’t know if they’re flawed, but of course there may be unexamined variables that explain this correlation.

Therefore, we need to maintain the illusion that we have libertarian free will, or at least some kind of free will. Otherwise society will crumble.
I hate to be anti-intellectual, but what am I to think when all the intellectuals are trying to convince me to give up my belief in free will? Or that they are such superior beings that they can operate without free will, but lesser beings like myself need to maintain that (supposedly false) belief?

Speaking of overrated intellectuals, I see that physicist Sean M. Carroll's new book is on the NY Times best-seller list.

Tuesday, May 24, 2016

Skeptics stick to soft targets

SciAm blogger John Horgan has infuriated the "skeptics" by saying they only go after soft targets, and not harder questions like bogus modern physics and the necessity of war.

He has a point. I am always amazed when these over-educated academic skeptics endorse some totally goofy theory like many-worlds quantum mechanics.

Steve Pinker rebuts the war argument:
John Horgan says that he “hates” the deep roots theory of war, and that it “drives him nuts,” because “it encourages fatalism toward war.” ...

Gat shows how the evidence has been steadily forcing the “anthropologists of peace” to retreat from denying that pre-state peoples engaged in lethal violence, to denying that they engage in “war,” to denying that they engage in it very often. Thus in a recent book Ferguson writes, “If there are people out there who believe that violence and war did not exist until after the advent of Western colonialism, or of the state, or agriculture, this volume proves them wrong.” ...

And speaking of false dichotomies, the question of whether we should blame “Muslim fanaticism” or the United States as “the greatest threat to peace” is hardly a sophisticated way for skeptical scientists to analyze war, as Horgan exhorts them to do. Certainly the reckless American invasions of Afghanistan and Iraq led to incompetent governments, failed states, or outright anarchy that allowed Sunni-vs-Shiite and other internecine violence to explode — but this is true only because these regions harbored fanatical hatreds which nothing short of a brutal dictatorship could repress. According to the Uppsala Conflict Data Project, out of the 11 ongoing wars in 2014, 8 (73%) involved radical Muslim forces as one of the combatants, another 2 involved Putin-backed militias against Ukraine, and the 11th was the tribal war in South Sudan. (Results for 2015 will be similar.) To blame all these wars, together with ISIS atrocities, on the United States, may be cathartic to those with certain political sensibilities, but it’s hardly the way for scientists to understand the complex causes of war and peace in the world today.
I would not blame the USA for all those wars, but it has been the Clinton-Bush-Obama policy to destabilize Mideast governments, aid the radical Muslim forces of our choosing, and to provoke Putin. This has been a disaster in Libya, Syria, Egypt, and now also Kosovo, Iraq, and Afghanistan. Sometimes I think that Barack Obama and Hillary Clinton are seeking a World War III between Islam and Christendom. Or maybe just to flood the West with Moslem migrants and refugees.

Pinker has his own dubious theories about war.

I do agree with Horgan that these supposed skeptics are not really very skeptical about genuine science issues.

Another problem with them is that they are dominated by leftist politics. They will ignore any facts which conflict with their leftist worldview, and even purge anyone who says the wrong thing.

The conference where Horgan spoke had disinvited Richard Dawkins because he retweeted a video that had some obscure cultural references that did not pass some leftist ideological purity test. They did not like Horgan either and denounced him on the stage. It is fair to assume that he will not be invited back.

Monday, May 23, 2016

IBM claims quantum computer cloud service

IBM announced:
IBM scientists have built a quantum processor that users can access through a first-of-a-kind quantum computing platform delivered via the IBM Cloud onto any desktop or mobile device. IBM believes quantum computing is the future of computing and has the potential to solve certain problems that are impossible to solve on today’s supercomputers.

The cloud-enabled quantum computing platform, called IBM Quantum Experience, will allow users to run algorithms and experiments on IBM’s quantum processor, work with the individual quantum bits (qubits), and explore tutorials and simulations around what might be possible with quantum computing.

The quantum processor is composed of five superconducting qubits and is housed at the IBM T.J. Watson Research Center in New York. The five-qubit processor represents the latest advancement in IBM’s quantum architecture that can scale to larger quantum systems. It is the leading approach towards building a universal quantum computer.

A universal quantum computer can be programmed to perform any computing task and will be exponentially faster than classical computers for a number of important applications for science and business.
There is more hype here.

As Scott Aaronson likes to point out, quantum computers would not be exponentially faster for any common application. Here is a list of quantum algorithms, with their suspected speedups.
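To make the distinction concrete, consider unstructured search, the kind of task behind many "common applications." Grover's algorithm improves the query count from about N to about the square root of N, which is a quadratic speedup, not an exponential one. Here is a minimal sketch of the arithmetic (the function names are mine, just for illustration):

```python
import math

def classical_queries(n):
    # Worst case for unstructured search: check every one of n items.
    return n

def grover_queries(n):
    # Grover's algorithm needs on the order of sqrt(n) oracle queries.
    return math.isqrt(n)

# A quadratic speedup grows far more slowly than an exponential one:
for n in [10**6, 10**12]:
    print(n, classical_queries(n), grover_queries(n))
# 10**6 items: 10**6 classical queries vs. 10**3 quantum queries.
# 10**12 items: 10**12 classical queries vs. 10**6 quantum queries.
```

Only special-structure problems like factoring (Shor's algorithm) get the exponential-looking speedups, and those are not everyday computing tasks.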

According to this timeline, we have had 5-qubit quantum computers since the year 2000. I first expressed skepticism on this blog in 2005.

At least IBM admits:
A universal quantum computer does not exist today, but IBM envisions medium-sized quantum processors of 50-100 qubits to be possible in the next decade. With a quantum computer built of just 50 qubits, none of today’s TOP500 supercomputers could successfully emulate it, reflecting the tremendous potential of this technology. The community of quantum computer scientists and theorists is working to harness this power, and applications in optimization and chemistry will likely be the first to demonstrate quantum speed-up.
So there is no true quantum computer today, and no one has demonstrated any quantum speed-up.
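A back-of-the-envelope calculation shows why roughly 50 qubits is the threshold IBM cites for classical emulation: a brute-force simulation must store all 2^n complex amplitudes of the quantum state. Assuming double-precision complex numbers at 16 bytes each (my assumption for the sketch):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    # Memory to hold the full 2**n complex amplitude vector,
    # assuming double-precision complex numbers (16 bytes each).
    return (2 ** n_qubits) * bytes_per_amplitude

print(statevector_bytes(5))   # 512 bytes -- IBM's 5-qubit chip is trivial to simulate
print(statevector_bytes(50))  # about 1.8e16 bytes, i.e. ~16 pebibytes
```

So a 5-qubit device fits in a few hundred bytes of classical memory, while 50 qubits would need petabytes just to store the state, beyond the RAM of any supercomputer.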

Don't plan on connecting this to any real-time service. IBM is running batch jobs. Your results will come to you in the mail. That's email, I hope, and not some punch-cards in an envelope.

Here is one way you can think about quantum computing, and my skepticism of it. Quantum mechanics teaches that you cannot be definite about an electron's position when it is not being observed. One way of interpreting this is that the electron can be in two or more places at once. If you believe that, then it seems plausible that the electron could be contributing to two or more different computations at the same time.

Another view is that an electron is only in one place at a time, and that uncertainty in its position is just our lack of knowledge. With this view, it seems implausible that our uncertainty could be the backbone of a super-Turing computation.

Some people with this latter view deny quantum mechanics and believe in hidden variables. But I am talking about followers of quantum mechanics, not them. See for example this recent Lubos Motl post, saying "Ignorance and uncertainty are de facto synonyms." He accepts (Copenhagen) quantum mechanics, and the idea that an electron can be understood as having multiple histories as distinct paths, but he never says that an electron is in two places at the same time, or that a cat is alive and dead at the same time.

So which view is better? I don't think that either view is quite correct, but both views are common. The differences can explain why some people are big believers in quantum computing, and some are not.

Suppose you really believe that a cat is alive and dead at the same time. That the live cat and the dead cat exist as distinct but entangled entities, not just possibilities in someone's head. Then it is not too much more of a stretch to believe that the live cat can interact with the dead cat to do a computation.

If you do not take that view, then where is the computation taking place?

I would say that an electron is not really a particle, but a wave-like entity that is often measured like a particle. So does that mean it can do two different computations at once? I doubt it.