Wednesday, May 25, 2016

Left denies progress towards genetic truths

NPR radio interviews a Pulitzer Prize-winning physician plugging his new book on genetics:
As researchers work to understand the human genome, many questions remain, including, perhaps, the most fundamental: Just how much of the human experience is determined before we are born, by our genes, and how much is dependent upon external environmental factors?

Oncologist Siddhartha Mukherjee tells Fresh Air's Terry Gross the answer to that question is complicated. "Biology is not destiny," Mukherjee explains. "But some aspects of biology — and in fact some aspects of destiny — are commanded very strongly by genes."

The degree to which biology governs our lives is the subject of Mukherjee's new book, The Gene. In it, he recounts the history of genetics and examines the roles genes play in such factors as identity, temperament, sexual orientation and disease risk.
Based on this, he has surely had his own genome sequenced, right? Nope.

GROSS: ... I want to ask about your own genes. Have you decided whether to or not to get genetically tested yourself? And I should mention here that there is a history of schizophrenia in your family. You had two uncles and a cousin with schizophrenia. You know, what scientists are learning about schizophrenia is that there is a genetic component to it or genetic predisposition. So do you want to get tested for that or other illnesses?

MUKHERJEE: I've chosen not to be tested. And I will probably choose not to be tested for a long time, until I start getting information back from genetic testing that's very deterministic. Again, remember that idea of penetrance that we talked about. Some genetic variations are very strongly predictive of certain forms of illness or certain forms of anatomical traits and so forth. I think that right now, for diseases like schizophrenia, we're nowhere close to that place. The most that we know is that there are multiple genes in familial schizophrenia, the kind that our family has. Essentially, we don't know how to map, as it were. There's no one-to-one correspondence between a genome and the chances of developing schizophrenia.

And until we can create that map - and whether we can create that map ever is a question - but until I - we can create that map, I will certainly not be tested because it - that idea - I mean, that's, again, the center of the book. That confines you. It becomes predictive. You become - it's a chilling word that I use in the book - you become a previvor (ph). A previvor is someone who's survived an illness that they haven't even had yet. You live in the shadow of an illness that you haven't had yet. It's a very Orwellian idea. And I think we should resist it as much as possible.

GROSS: Would you feel that way if you were a woman and there was a history of breast cancer in your family?

MUKHERJEE: Very tough question - if I was a woman and I had a history of breast cancer in my family - if the history was striking enough - and, you know, here's a - it's a place where a genetic counselor helps. If the history was striking enough, I would probably sequence at least the genes that have been implicated in breast cancer, no doubt about it.

GROSS: OK.
I post this to prove that even the experts in genetics have the dopiest ideas about it. He wants to inform the public about genetics, but he is willfully ignorant of the personal practical implications.

I also criticized his New Yorker article on epigenetics.

Bad as he is, his reviewers are even worse. Atlantic mag reviews his book to argue that genes are overrated:
The antidote to such Whig history is a Darwinian approach. Darwin’s great insight was that while species do change, they do not progress toward a predetermined goal: Organisms adapt to local conditions, using the tools available at the time. So too with science. What counts as an interesting or soluble scientific problem varies with time and place; today’s truth is tomorrow’s null hypothesis — and next year’s error.

... The point is not that this [a complex view of how genes work; see below] is the correct way to understand the genome. The point is that science is not a march toward truth. Rather, as the author John McPhee wrote in 1967, “science erases what was previously true.” Every generation of scientists mulches under yesterday’s facts to fertilize those of tomorrow.

“There is grandeur in this view of life,” insisted Darwin, despite its allowing no purpose, no goal, no chance of perfection. There is grandeur in a Darwinian view of science, too. The gene is not a Platonic ideal. It is a human idea, ever changing and always rooted in time and place. To echo Darwin himself, while this planet has gone cycling on according to the laws laid down by Copernicus, Kepler, and Newton, endless interpretations of heredity have been, and are being, evolved.
I do not recall Darwin ever saying that evolution does not make progress, or does not have a purpose. Whether he did or not, many modern evolutionists, such as the late Stephen Jay Gould, say things like that a lot.

They not only deny progress and purpose in the history of life, they deny that science makes progress. They say that "today’s truth is tomorrow’s null hypothesis".

There are political undertones to this. Leftists and Marxists hate the idea of scientific truths, and they really despise truths about human nature.

As you can see from my motto, I reject all of this. Science makes progress towards truth, and genuine truths are not erased or mulched. My positivism is in a minority among philosophers and science popularizers.

Speaking of academic leftists citing Darwin for foolish ideas, the current Atlantic mag has an article by a philosopher saying:
The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties — which some people have to a greater degree than others — to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance. ...

Many scientists say that the American physiologist Benjamin Libet demonstrated in the 1980s that we have no free will. It was already known that electrical activity builds up in a person’s brain before she, for example, moves her hand; Libet showed that this buildup occurs before the person consciously makes a decision to move. The conscious experience of deciding to act, which we usually associate with free will, appears to be an add-on, a post hoc reconstruction of events that occurs after the brain has already set the act in motion. ...

This research and its implications are not new. What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. ...

The list goes on: Believing that free will is an illusion has been shown to make people less creative, more likely to conform, less willing to learn from their mistakes, and less grateful toward one another. In every regard, it seems, when we embrace determinism, we indulge our dark side.
This is mostly nonsense, of course. Intelligence has been shown to be heritable, as would be expected from Darwinian evolution. But I don't think that Darwin believed in such extreme genetic determinism, as he did not understand genes.

It is possible that people who believe that free will is an illusion have some mild form of schizophrenia.

This is yet another example of philosophers thinking that they know better than everyone else. Philosophers and schizophrenics can hold beliefs that no normal person would.

Here is a summary of the Atlantic article:
Libertarian free will [the “we could have chosen otherwise” form] is dead, or at least dying among intellectuals. That means that determinism reigns (Cave doesn’t mention quantum mechanics), and that at any one time we can make only one choice.

But if we really realized we don’t have free will of that sort, we’d behave badly. Cave cites the study of Vohs and Schooler (not noting that that study wasn’t repeatable), but also other studies showing that individuals who believe in free will are better workers than those who don’t. I haven’t read those studies, and thus don’t know if they’re flawed, but of course there may be unexamined variables that explain this correlation.

Therefore, we need to maintain the illusion that we have libertarian free will, or at least some kind of free will. Otherwise society will crumble.
I hate to be anti-intellectual, but what am I to think when all the intellectuals are trying to convince me to give up my belief in free will? Or that they are such superior beings that they can operate without free will, but lesser beings like myself need to maintain that (supposedly false) belief?

Speaking of overrated intellectuals, I see that physicist Sean M. Carroll's new book is on the NY Times best-seller list.

Tuesday, May 24, 2016

Skeptics stick to soft targets

SciAm blogger John Horgan has infuriated the "skeptics" by saying they only go after soft targets, and not harder questions like bogus modern physics and the necessity of war.

He has a point. I am always amazed when these over-educated academic skeptics endorse some totally goofy theory like many-worlds quantum mechanics.

Steve Pinker rebuts the war argument:
John Horgan says that he “hates” the deep roots theory of war, and that it “drives him nuts,” because “it encourages fatalism toward war.” ...

Gat shows how the evidence has been steadily forcing the “anthropologists of peace” to retreat from denying that pre-state peoples engaged in lethal violence, to denying that they engage in “war,” to denying that they engage in it very often. Thus in a recent book Ferguson writes, “If there are people out there who believe that violence and war did not exist until after the advent of Western colonialism, or of the state, or agriculture, this volume proves them wrong.” ...

And speaking of false dichotomies, the question of whether we should blame “Muslim fanaticism” or the United States as “the greatest threat to peace” is hardly a sophisticated way for skeptical scientists to analyze war, as Horgan exhorts them to do. Certainly the reckless American invasions of Afghanistan and Iraq led to incompetent governments, failed states, or outright anarchy that allowed Sunni-vs-Shiite and other internecine violence to explode — but this is true only because these regions harbored fanatical hatreds which nothing short of a brutal dictatorship could repress. According to the Uppsala Conflict Data Project, out of the 11 ongoing wars in 2014, 8 (73%) involved radical Muslim forces as one of the combatants, another 2 involved Putin-backed militias against Ukraine, and the 11th was the tribal war in South Sudan. (Results for 2015 will be similar.) To blame all these wars, together with ISIS atrocities, on the United States, may be cathartic to those with certain political sensibilities, but it’s hardly the way for scientists to understand the complex causes of war and peace in the world today.
I would not blame the USA for all those wars, but it has been the Clinton-Bush-Obama policy to destabilize Mideast governments, aid the radical Muslim forces of our choosing, and to provoke Putin. This has been a disaster in Libya, Syria, Egypt, and now also Kosovo, Iraq, and Afghanistan. Sometimes I think that Barack Obama and Hillary Clinton are seeking a World War III between Islam and Christendom. Or maybe just to flood the West with Moslem migrants and refugees.

Pinker has his own dubious theories about war.

I do agree with Horgan that these supposed skeptics are not really very skeptical about genuine science issues.

Another problem with them is that they are dominated by leftist politics. They will ignore any facts which conflict with their leftist worldview, and even purge anyone who says the wrong thing.

The conference where Horgan spoke had disinvited Richard Dawkins because he retweeted a video that had some obscure cultural references that did not pass some leftist ideological purity test. They did not like Horgan either and denounced him on the stage. It is fair to assume that he will not be invited back.

Monday, May 23, 2016

IBM claims quantum computer cloud service

IBM announced:
IBM scientists have built a quantum processor that users can access through a first-of-a-kind quantum computing platform delivered via the IBM Cloud onto any desktop or mobile device. IBM believes quantum computing is the future of computing and has the potential to solve certain problems that are impossible to solve on today’s supercomputers.

The cloud-enabled quantum computing platform, called IBM Quantum Experience, will allow users to run algorithms and experiments on IBM’s quantum processor, work with the individual quantum bits (qubits), and explore tutorials and simulations around what might be possible with quantum computing.

The quantum processor is composed of five superconducting qubits and is housed at the IBM T.J. Watson Research Center in New York. The five-qubit processor represents the latest advancement in IBM’s quantum architecture that can scale to larger quantum systems. It is the leading approach towards building a universal quantum computer.

A universal quantum computer can be programmed to perform any computing task and will be exponentially faster than classical computers for a number of important applications for science and business.
There is more hype here.

As Scott Aaronson likes to point out, quantum computers would not be exponentially faster for any common application. Here is a list of quantum algorithms, with their suspected speedups.
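
To make that concrete, here is a minimal Python sketch (my own illustration, using rough textbook formulas rather than anything from that list) comparing the asymptotic costs usually quoted for quantum algorithms against their classical counterparts:

```python
import math

# Rough asymptotic operation counts as usually quoted in the literature.
# These are illustrative formulas, not benchmarks of any real machine.

def classical_search(n):
    """Unstructured search over n items: check each one."""
    return n

def grover_search(n):
    """Grover's algorithm: only a quadratic speedup, ~sqrt(n) queries."""
    return math.isqrt(n)

def classical_factoring(b):
    """Number field sieve on a b-bit integer: sub-exponential."""
    ln_n = b * math.log(2)
    return math.exp(1.92 * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def shor_factoring(b):
    """Shor's algorithm: polynomial, roughly b^3 operations."""
    return b ** 3

n = 10 ** 12
print(f"search {n} items: classical ~{classical_search(n):.1e}, "
      f"Grover ~{grover_search(n):.1e}")

b = 2048
print(f"factor {b}-bit number: classical ~{classical_factoring(b):.1e}, "
      f"Shor ~{shor_factoring(b):.1e}")
```

The exponential gap shows up only for specially structured problems like factoring; for generic tasks like search, the best known speedup is the modest quadratic one, which is the point Aaronson keeps making.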

According to this timeline, we have had 5-qubit quantum computers since the year 2000. I first expressed skepticism on this blog in 2005.

At least IBM admits:
A universal quantum computer does not exist today, but IBM envisions medium-sized quantum processors of 50-100 qubits to be possible in the next decade. With a quantum computer built of just 50 qubits, none of today’s TOP500 supercomputers could successfully emulate it, reflecting the tremendous potential of this technology. The community of quantum computer scientists and theorists is working to harness this power, and applications in optimization and chemistry will likely be the first to demonstrate quantum speed-up.
So there is no true quantum computer today, and no one has demonstrated any quantum speed-up.

Don't plan on connecting this to any real-time service. IBM is running batch jobs. Your results will come to you in the mail. That's email, I hope, and not some punch-cards in an envelope.

Here is one way you can think about quantum computing, and my skepticism of it. Quantum mechanics teaches that you cannot be definite about an electron's position when it is not being observed. One way of interpreting this is that the electron can be in two or more places at once. If you believe that, then it seems plausible that the electron could be contributing to two or more different computations at the same time.

Another view is that an electron is only in one place at a time, and that uncertainty in its position is just our lack of knowledge. With this view, it seems implausible that our uncertainty could be the backbone of a super-Turing computation.

Some people with this latter view deny quantum mechanics and believe in hidden variables. But I am talking about followers of quantum mechanics, not them. See for example this recent Lubos Motl post, saying "Ignorance and uncertainty are de facto synonyms." He accepts (Copenhagen) quantum mechanics, and the idea that an electron can be understood as having multiple histories as distinct paths, but he never says that an electron is in two places at the same time, or that a cat is alive and dead at the same time.

So which view is better? I don't think that either view is quite correct, but both views are common. The differences can explain why some people are big believers in quantum computing, and some are not.

Suppose you really believe that a cat is alive and dead at the same time. That the live cat and the dead cat exist as distinct but entangled entities, not just possibilities in someone's head. Then it is not too much more of a stretch to believe that the live cat can interact with the dead cat to do a computation.

If you do not take that view, then where is the computation taking place?

I would say that an electron is not really a particle, but a wave-like entity that is often measured like a particle. So does that mean it can do two different computations at once? I doubt it.
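
For concreteness, here is the simplest version of the claimed parallelism, Deutsch's algorithm, as a toy numpy simulation (my own sketch of the standard textbook example, not anyone's hardware). One run of the circuit decides a property of f that classically requires evaluating f twice; whether that really counts as doing two computations at once is exactly the interpretive question above.

```python
import numpy as np

# Deutsch's algorithm, the simplest "two computations at once" example.
# One run of the circuit decides whether f is constant or balanced,
# which classically requires evaluating f at both inputs.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def oracle(f):
    """Unitary U|x,y> = |x, y XOR f(x)> on two qubits."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron(H, H) @ np.array([0, 1, 0, 0])  # |0>|1> -> superposition
    state = oracle(f) @ state                        # one oracle call
    state = np.kron(H, np.eye(2)) @ state            # interfere the branches
    p_first_qubit_0 = state[0] ** 2 + state[1] ** 2  # Born rule
    return "constant" if p_first_qubit_0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```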

Saturday, May 21, 2016

More on the European Quantum Manifesto scam

I trashed the European Quantum Manifesto, and a reader points me to this cryptologist rant against it:
There is a significant risk that all of the benefits of quantum computing during the next 20 years will be outweighed by the security devastation caused by quantum computing during the same period. ...

This brings me to what really bugs me about the Quantum Manifesto. Instead of highlighting the security threat of quantum technology and recommending funding for a scientifically justified response, the Manifesto makes the thoroughly deceptive claim that quantum technology improves security.
He then goes on to explain why quantum cryptography is never going to improve anyone's security.
A company named ID Quantique has been selling quantum-cryptography hardware, specifically hardware for BB84-type protocols, since 2004. ID Quantique claims that quantum cryptography provides "absolute security, guaranteed by the fundamental laws of physics." However, Vadim Makarov and his collaborators have shown that the ID Quantique devices are vulnerable to control by attackers, that various subsequent countermeasures are still vulnerable, and that analogous vulnerabilities in another quantum-key-distribution system are completely exploitable at low cost. The most reasonable extrapolation from the literature is that all of ID Quantique's devices are exploitable.

How can a product be broken if it provides "absolute security, guaranteed by the fundamental laws of physics"?
He explains how quantum cryptography misses the point of what cryptography is all about, and fails to address essential security issues that non-quantum techniques have already solved.
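
For readers unfamiliar with BB84: the sender transmits qubits in randomly chosen bases, the receiver measures in randomly chosen bases, and they publicly keep only the positions where the bases matched. Here is a toy classical simulation of that bookkeeping (my own sketch, modeling ideal devices, which is precisely the idealization that Makarov's attacks exploit):

```python
import secrets

def bb84_sift(n=32):
    """Toy BB84: Alice sends n idealized qubits, Bob measures, they sift."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.randbelow(2) for _ in range(n)]  # 0=rectilinear, 1=diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(n)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            bob_bits.append(bit)                   # same basis: ideal qubit agrees
        else:
            bob_bits.append(secrets.randbelow(2))  # wrong basis: random outcome

    # Publicly compare bases (not bits); keep only the matching positions.
    key_a = [b for b, x, y in zip(alice_bits, alice_bases, bob_bases) if x == y]
    key_b = [b for b, x, y in zip(bob_bits,   alice_bases, bob_bases) if x == y]
    return key_a, key_b

ka, kb = bb84_sift()
assert ka == kb  # holds only for the idealized, noiseless channel
print(f"shared key bits: {len(ka)} of 32 sent")
```

Note what the protocol does not do: authenticate the public discussion. BB84 still needs a conventionally authenticated classical channel, which is part of why it misses the point of what cryptography is all about.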

He is right about all this, and I have made similar points on this blog for years.

I also agree, and have said so here, that if quantum computing is successful in the next 20 years, then the social and economic impact will be overwhelmingly negative. The main application is to destroy the computer security that our society depends on.

Where I differ from him is that I say quantum computing and quantum cryptography are both scams, for different reasons. Quantum computing is technologically impossible, and almost entirely destructive if it were possible. Quantum cryptography is possible, but expensive, impractical, and insecure.

Actually, they are both possible in the sense that the laws of quantum mechanics allow experiments whose confirmation of the 1930 theory can be interpreted as favoring quantum computing or quantum cryptography. But the excitement is about the possibility of super-Turing computing and more secure key exchanges, and those have never been achieved.

Wednesday, May 18, 2016

Aristotle, Einstein, and the nature of force

Vienna physics professor Herbert Pietschmann writes:
But Newton also removed Aristotle's division of the motions into “natural motions” and “enforced motions”. For Newton, Aristotle's “natural motions” also became “enforced” by the gravitational force. In this way, he unified our understanding of dynamics in the most general way. ...

In 1915, Albert Einstein had found the basic equation in his theory of general relativity. He published a complete version of his thoughts in 1916. According to this theory, the gravitational interaction was not caused by a force but by a curvature of spacetime. In this basic publication Einstein writes: “Carrying out the general theory of relativity must lead to a theory of gravitation: for one can generate a gravitational field by merely changing the coordinate system.” ...

To summarize, let us state that motion caused by gravitation is not caused by a force; in that sense it differs from all other motions. Einstein made this clear in his quoted paper from 1916. He writes: “According to the theory of General Relativity gravitation has an exceptional role with respect to all other forces, especially electromagnetism.” This is what in a sense we might call “return of Aristotelian physics” since it clearly distinguishes between “natural motion” and “enforced motion”, constituting the basic problem of modern physics.

Acceleration is either caused by the geometry of spacetime (gravitation) or by an external force in Euclidean spacetime (all other forces). Mathematically, these two different views are represented either by the Theory of General Relativity (gravitation) or by Quantum Field Theory (all other forces).
This is partially correct. Aristotle's concept of force seems wrong by Newtonian standards, but is actually reasonable in the light of relativity, as I previously argued.

But Einstein did not believe that gravitational acceleration was caused by the geometry of spacetime, as most physicists do today.

Einstein is also making a false distinction between gravity and electromagnetism. The preferred relativistic view of electromagnetism, as developed by Poincare, Minkowski, and maybe Weyl, is that the field is a curvature tensor and the force is a geometrical artifact. In this view, electromagnetism and gravity are formally quite similar.
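
A compact way to see the formal similarity is to put the two equations of motion side by side (standard textbook forms, my notation):

```latex
% Gravity: geodesic motion; the "force" is absorbed into the connection
\frac{d^2 x^\mu}{d\tau^2}
  + \Gamma^\mu_{\alpha\beta} \frac{dx^\alpha}{d\tau} \frac{dx^\beta}{d\tau} = 0

% Electromagnetism: F is the curvature of the potential A,
% and the Lorentz force law has the same index structure
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu, \qquad
m \frac{d^2 x^\mu}{d\tau^2} = q \, F^\mu{}_\nu \frac{dx^\nu}{d\tau}
```

In both cases the acceleration comes from a geometric field contracted with the velocity; the Christoffel symbols play the role for gravity that the field tensor plays for electromagnetism.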

In his paper on General Relativity from 1916 he writes: “the law of causality makes a sensible statement on the empirical world only when cause and effect are observable.” Since a gravitational “force” was not observable, Einstein had eliminated it from his theory of gravitation and replaced it by the curvature of spacetime. ...
There are people who deduce that the law of causality is invalid because the theory of relativity made it obsolete. Einstein himself seems to have gone back and forth on the issue.

The idea is mistaken, whether it is Einstein's fault or not. Causality is the essence of relativity.
In this connexion it is historically interesting that only ten years later Einstein converted from these ideas which had led him to his most fundamental contributions to physics. Werner Heisenberg who had used the same philosophy for the derivation of his uncertainty relation, recalls a conversation with Einstein in 1926; Einstein: “You don’t seriously believe that a physical theory can only contain observables.” Heisenberg: “I thought you were the one who made this idea the foundation of your theory of relativity?” Einstein: “May be I have used this kind of philosophy, nevertheless it is nonsense.”
Here is where Einstein rejects positivist philosophy for which he is widely credited.

The main reason Einstein is credited for special relativity over Lorentz is for the greater emphasis on observables. But as you can see, Einstein disavowed that view.

For the skeptical view of causality in physics, see this review:
The role of causality in physics presents a problem. Although physics is widely understood to aim at describing the causes of observable phenomena and the interactions of systems in experimental set-ups, the picture of the world given by fundamental physical theories is largely acausal: e.g. complete data on timeslices of the universe related by temporally bidirectional dynamical laws. The idea that physics is acausal in nature, or worse, incompatible with the notion of causality, has attracted many adherents. Causal scepticism in physics is most associated with Russell’s (1913) arguments that a principle of causality is incompatible with actual physical theories. For causal sceptics, insofar as causal reasoning is used in physics, it is at best extraneous and at worst distorts the interpretation of a theory’s content.
No, causal reasoning is essential to physics. The arguments of Bertrand Russell and other philosophers rejecting causality are nonsense.

Thursday, May 12, 2016

Google seeks quantum supremacy

Ilyas Khan writes:
I travelled over to the west coast and spent some time with the Artificial Intelligence team within Google at their headquarters just off Venice Beach in LA. Like all who visit that facility, I am constrained by an NDA in talking about what is going on. However in their bid to establish "Quantum Supremacy" the team, led by Hartmut Neven, talks not in terms of decades but in a timetable that is the technology equivalent of tomorrow. For the avoidance of doubt, the "tomorrow" that I refer to is the timeline for building and operating a universal quantum computer.
I interpret "the technology equivalent of tomorrow" as being within two years. Check back here at that time.

No, Google is not going to succeed. This is not like self-driving cars, where it is clear that the technology is coming, as prototypes have proved feasibility. For that, computers just have to mimic what humans do, and they have several advantages, such as better sensors, faster reactions, and real-time access to maps.

Despite hundreds of millions of dollars in investment, there is still no convincing demonstration of quantum supremacy, or any proof that any method will scale.

Google is all about scale, so I am sure that its researchers have a story to tell their senior management. But it is covered by a non-disclosure agreement, so we do not know what it is.

You can bet that if Google ever achieves a universal quantum computer, or even just quantum supremacy, it will brag to everyone and attempt to collect a Nobel prize. If you do not hear anything in a couple of years, then they are not delivering on their promises.

Monday, May 9, 2016

Darwin did not discredit Lamarck

Genomicist Razib Khan writes about a New Yorker mag mistake:
But there’s a major factual problem which I mentioned when it came out, and, which some friends on Facebook have been griping about. I’ll quote the section where the error is clearest:
…Conceptually, a key element of classical Darwinian evolution is that genes do not retain an organism’s experiences in a permanently heritable manner. Jean-Baptiste Lamarck, in the early nineteenth century, had supposed that when an antelope strained its neck to reach a tree its efforts were somehow passed down and its progeny evolved into giraffes. Darwin discredited that model….
It is true that in Neo-Darwinian evolution, the modern synthesis, which crystallized in the second quarter of the 20th century, genes do not retain an organism’s experiences in a permanently heritable manner. But this is not true for Charles Darwin’s theories, which most people would term a “classical Darwinian” evolutionary theory.
Not just the New Yorker.

For some reason, the Darwin idolizers frequently stress that he proved Lamarck wrong about heredity. This is misguided for a couple of reasons.

First, they are usually eager to convince you of some leftist-atheist-evolutionist-naturalist-humanist agenda, but they could make essentially the same points if Lamarckianism were true.

Second, Darwin was a Lamarckian, as a comment explains:
Darwin’s On the Origin of Species proposed natural selection as the main mechanism for development of species, but did not rule out a variant of Lamarckism as a supplementary mechanism.[11] Darwin called his Lamarckian hypothesis pangenesis, and explained it in the final chapter of his book The Variation of Animals and Plants under Domestication (1868), after describing numerous examples to demonstrate what he considered to be the inheritance of acquired characteristics. Pangenesis, which he emphasised was a hypothesis, was based on the idea that somatic cells would, in response to environmental stimulation (use and disuse), throw off ‘gemmules’ or ‘pangenes’ which travelled around the body (though not necessarily in the bloodstream). These pangenes were microscopic particles that supposedly contained information about the characteristics of their parent cell, and Darwin believed that they eventually accumulated in the germ cells where they could pass on to the next generation the newly acquired characteristics of the parents. Darwin’s half-cousin, Francis Galton, carried out experiments on rabbits, with Darwin’s cooperation, in which he transfused the blood of one variety of rabbit into another variety in the expectation that its offspring would show some characteristics of the first. They did not, and Galton declared that he had disproved Darwin’s hypothesis of pangenesis, but Darwin objected, in a letter to the scientific journal Nature, that he had done nothing of the sort, since he had never mentioned blood in his writings. He pointed out that he regarded pangenesis as occurring in Protozoa and plants, which have no blood. (wiki-Lamarckism)
Here is more criticism of the New Yorker and part 2.

I have heard people say that the New Yorker employs rigorous fact checkers, but I don't think that they try to check that the science is right. It is a literary magazine, and they check literary matters.

Denying relativity credit to Poincare

I stumbled across this 1971 book review:
In a third article Stanley Goldberg gives a remarkably clear picture of Einstein's special relativity theory and the response of the British, French, and Germans to the theory. Starting with two simple postulates, videlicet [= as follows] the constancy of the velocity of light and the impossibility of determining an absolute motion of any kind, Einstein was able to derive the Lorentz transformation with ease as well as many other relations of a kinematical nature. The "ether" was dismissed in a short sentence. The German physicists understood the theory, but not all agreed with it. The British stuck with the ether and didn't even try to understand special relativity. The French were not much interested in the theory either; even Poincaré failed to mention it in his writings on electrodynamics.
Poincare did not fail to mention it; he created the theory. Poincare is mainly responsible for the spacetime geometry and electromagnetic covariance of special relativity, along with elaborations by Minkowski. I don't know how physicists could be so ignorant of one of the great advances of physics.

I do not know of anything like it in the history of science. Every discussion of relativity goes out of its way to attribute the theory solely to Einstein, and to give some history of how it happened. And they get the story wrong every time. I explain more in my book.

Friday, May 6, 2016

Stop asking whether quantum computing is possible

The current SciAm mag has a paywalled article describing three competing research programs trying to build the first quantum computer, and concludes:
The time has come to stop asking whether quantum computing is possible and to start focusing on what it will be able to do. The truth is that we do not know how quantum computing will change the world.
Huhh?

No, quantum computing is probably impossible, and would not change the world even if it were possible.

One supposed application is a "quantum internet", in which quantum computers are used as routers to transmit qubits from one user to another. The only known use for that is for so-called quantum cryptography, but that has no advantages over conventional cryptography. It would cost a million times as much, and be hopelessly insecure by today's standards. It cannot authenticate messages, and all implementations have been broken, as far as I know.

The article also mentions quantum clocks. I do not know what that is all about, but we already have extremely cheap clocks that are far more accurate than what is needed by anyone.

Meanwhile, IBM claims to have 5 qubits:
IBM said on Wednesday that it's giving everyone access to one of its quantum computing processors, which can be used to crunch large amounts of data. Anyone can apply through IBM Research's website to test the processor, however, IBM will determine how much access people will have to the processor depending on their technology background -- specifically how knowledgeable they are about quantum technology.
If IBM really had a revolutionary computer, it would be able to figure out something to do with it. No, it cannot "be used to crunch large amounts of data."

Wednesday, May 4, 2016

Wired explains entanglement

Famous physicist Frank Wilczek explains entanglement in a Wired/Quanta mag article:
An aura of glamorous mystery attaches to the concept of quantum entanglement, and also to the (somehow) related claim that quantum theory requires “many worlds.” Yet in the end those are, or should be, scientific ideas, with down-to-earth meanings and concrete implications. Here I’d like to explain the concepts of entanglement and many worlds as simply and clearly as I know how. ...

So: Is the quantity of evil even or odd? Both possibilities are realized, with certainty, in different sorts of measurements. We are forced to reject the question. It makes no sense to speak of the quantity of evil in our system, independent of how it is measured. Indeed, it leads to contradictions.

The GHZ effect is, in the physicist Sidney Coleman’s words, “quantum mechanics in your face.” It demolishes a deeply embedded prejudice, rooted in everyday experience, that physical systems have definite properties, independent of whether those properties are measured. For if they did, then the balance between good and evil would be unaffected by measurement choices. Once internalized, the message of the GHZ effect is unforgettable and mind-expanding.
To get to this conclusion, you have to equate "definite properties" with measurement outcomes.

An electron has definite properties, but it is not really a particle and does not have a definite position. If you measure the electron, using a method that puts it in a definite position, then it is in that position for the instant of the measurement. A nanosecond later, it is back to its wave-like state with indeterminate position.
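
The GHZ argument that Wilczek describes can be checked in a few lines of numpy. A minimal sketch (my own, using the standard three-qubit GHZ state): the XXX product is certainly +1, the three XYY-type products are certainly -1, yet pre-assigned values could never satisfy all four at once.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# GHZ state (|000> + |111>)/sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

for label, op in [("XXX", kron3(X, X, X)), ("XYY", kron3(X, Y, Y)),
                  ("YXY", kron3(Y, X, Y)), ("YYX", kron3(Y, Y, X))]:
    val = np.real(ghz.conj() @ op @ ghz)  # eigenvalue: a certain outcome
    print(label, round(val))              # XXX -> +1; the others -> -1

# If each qubit carried pre-assigned values x_i, y_i = +/-1, then
# (x1 y2 y3)(y1 x2 y3)(y1 y2 x3) = x1 x2 x3, since each y_i**2 = 1.
# Three -1 outcomes would force XXX = -1, contradicting the +1 above.
```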

For more on Wilczek, see this Edge interview.

Monday, May 2, 2016

New book on spooky action

I previously trashed George Musser's new book (without reading it), and now he was on Science Friday radio promoting it:
Could the space we live in—our everyday reality—just be a projection of some underlying quantum structure? Might black holes be like the Big Bang in reverse, where space reverts to spacelessness? Those are the sorts of far-out questions science writer George Musser ponders in his book Spooky Action at a Distance: The Phenomenon that Reimagines Space and Time—And What it Means for Black Holes, the Big Bang, and Theories of Everything. In this segment, Musser and quantum physicist Shohini Ghose talk about the weird quantum world, and the unpredictable nature of particles.
Here is an excerpt:
The world we experience possesses all the qualities of locality. We have a strong sense of place and of the relations among places. We feel the pain of separation from those we love and the impotence of being too far away from something we want to affect. And yet quantum mechanics and other branches of physics now suggest that, at a deeper level, there may be no such thing as place and no such thing as distance. Physics experiments can bind the fate of two particles together, so that they behave like a pair of magic coins: if you flip them, each will land on heads or tails—but always on the same side as its partner. They act in a coordinated way even though no force passes through the space between them. Those particles might zip off to opposite sides of the universe, and still they act in unison. These particles violate locality. They transcend space.

Evidently nature has struck a peculiar and delicate balance: under most circumstances it obeys locality, and it must obey locality if we are to exist, yet it drops hints of being nonlocal at its foundations. That tension is what I’ll explore in this book. For those who study it, nonlocality is the mother of all physics riddles, implicated in a broad cross section of the mysteries that physicists confront these days: not just the weirdness of quantum particles, but also the fate of black holes, the origin of the cosmos, and the essential unity of nature.
Everything in the universe obeys locality, as far as we know.
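
In fact, the "magic coins" as Musser describes them (always landing the same way) are trivially reproduced by a purely local model, a shared value fixed when the pair is created. A toy sketch of that point (my own illustration):

```python
import random

def make_coin_pair():
    """Local hidden variable: both coins carry the same preset answer."""
    shared = random.choice(["heads", "tails"])  # set when the pair is created
    return (lambda: shared), (lambda: shared)

coin_a, coin_b = make_coin_pair()
# Ship coin_a and coin_b to opposite sides of the universe...
print(coin_a(), coin_b())  # always agree, with no signal between them
```

Perfect correlation by itself is not nonlocality; the genuinely puzzling quantum correlations only appear in Bell-type experiments where the measurement settings vary.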

Musser's previous book was The Complete Idiot’s Guide to String Theory, and that does not require spooky action, so presumably he understands that the spookiness is just goofiness to sell books. He may understand that string theory is all a big scam also.

Wednesday, April 27, 2016

Questioning quantum supremacy

KWRegan writes:
Gil Kalai is a popularizer of mathematics as well as a great researcher. His blog has some entries on Polymath projects going back to the start of this year. He has just contributed an article to the May AMS Notices titled, “The Quantum Computer Puzzle.” ...

Quantum supremacy has a stronger meaning than saying that nature is fundamentally quantum: it means that nature operates in concrete ways that cannot be emulated by non-quantum models. If factoring is not in BPP — let alone randomized classical quadratic time — then nature can do something that our classical complexity models need incomparably longer computations to achieve.

Kalai says:
Quantum computers are hypothetical devices, based on quantum physics, which would enable us to perform certain computations hundreds of orders of magnitude faster than digital computers. This feature is coined “quantum supremacy”, and one aspect or another of such quantum computational supremacy might be seen by experiments in the near future: by implementing quantum error-correction or by systems of noninteracting bosons or by exotic new phases of matter called anyons or by quantum annealing, or in various other ways. ...

A main reason for concern regarding the feasibility of quantum computers is that quantum systems are inherently noisy. We will describe an optimistic hypothesis regarding quantum noise that will allow quantum computing and a pessimistic hypothesis that won’t. ...

Are quantum computers feasible? Is quantum supremacy possible? My expectation is that the pessimistic hypothesis will prevail, leading to a negative answer.
I agree with Kalai that the quantum computer is the perpetual motion machine of the 21st century.

Monday, April 25, 2016

European Quantum Manifesto

SciAm reports:
The European Commission has quietly announced plans to launch a €1-billion (US$1.13 billion) project to boost a raft of quantum technologies—from secure communication networks to ultra-precise gravity sensors and clocks.

The initiative, to launch in 2018, will be similar in size, timescale and ambition to two existing European flagships, the decade-long Graphene Flagship and the Human Brain Project, although the exact format has yet to be decided, Nathalie Vandystadt, a commission spokesperson, told Nature. Funding will come from a mixture of sources, including the commission, as well as other European and national funders, she added.

The commission is likely to have a “substantial role” in funding the flagship, says Tommaso Calarco, who leads the Integrated Quantum Science and Technology centre at the Universities of Ulm and Stuttgart in Germany. He co-authored a blueprint behind the initiative, which was published in March, called the Quantum Manifesto. Countries around the world are investing in these technologies, says Calarco. Without such an initiative, Europe risks becoming a second-tier player, he says. “The time is really now or never.” ...

High-profile US companies are already investing in quantum computing, and Chinese scientists are nearing the completion of a 2,000-kilometre long quantum-communication link — the longest in the world — to send information securely between Beijing and Shanghai.

In Europe, the flagship is expected to fuel the development of such technologies, which the commission calls part of a “second quantum revolution” (the first being the unearthing of the rules of the quantum realm, which led to the invention of equipment such as lasers and transistors).
No, there is no practical utility to quantum communication or quantum computing.

Scott Aaronson and others comment on Canadian PM Trudeau's description of quantum computing.

Here is my own attempt at a brief 35-second explanation:
Quantum mechanics is a system for tracking the potential observations of atoms, and other phenomena on that scale. In particular it allows calculating probabilities of multiple possibilities, even tho single outcomes are observed.

A quantum computer is a conjectural machine for doing computations by treating those multiple possibilities as separate realities that can each contribute to what appears to be an almost-magic parallel computation. Despite almost a billion dollars in investment, no such speedups have been achieved.
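
A minimal numpy sketch of the first paragraph (my own toy example): the formalism tracks the probabilities of both possibilities, while any one run yields a single outcome.

```python
import numpy as np

rng = np.random.default_rng()

# A qubit state: amplitudes for |0> and |1>, here an equal superposition.
state = np.array([1, 1]) / np.sqrt(2)

probs = np.abs(state) ** 2        # Born rule: probabilities of the outcomes
print(probs)                      # [0.5 0.5] -- both possibilities tracked

outcome = rng.choice([0, 1], p=probs)
print(outcome)                    # ...but any one observation is a single 0 or 1
```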

Thursday, April 21, 2016

Two Dogmas of Empiricism

Modern philosophy of science is mainly characterized by its denial of truth. Philosophers are always arguing that everything is subjective, or that math has no foundation, or that the scientific method is invalid, or that philosophical musings are just as good as experimental results. They just had a conference devoted to non-empirical physics, whatever that is.

A core anti-truth document is the 1951 essay, Two Dogmas of Empiricism, by Harvard philosopher and logician W.V. Quine.

The essay is widely praised, and even "regarded as the most important in all of twentieth-century philosophy".

The title says "two dogmas", but the essay says that they are the same. The dogma is that empirical knowledge can be distinguished from other kinds.

He concedes that some knowledge is true by simple logic, without recourse to empirical investigation, like:
(1) No unmarried man is married.
But then he has a giant brain freeze trying to classify this:
(2) No bachelor is married.
This seems to be true by definition and logic, because "bachelor" is synonymous with "unmarried man". But then he complains:
But it is not quite true that the synonyms 'bachelor' and 'unmarried man' are everywhere interchangeable salva veritate. Truths which become false under substitution of 'unmarried man' for 'bachelor' are easily constructed with help of 'bachelor of arts' or 'bachelor's buttons.' Also with help of quotation, thus:
'Bachelor' has less than ten letters.
Therefore he argues that it is impossible to distinguish logical and empirical truths, and also that the whole program of scientific reductionism is invalid.

That's it. The argument is that stupid. Of course there are distinctions between logical and empirical truths, even tho the issues drive philosophers nuts. See my defense of logical positivism.

The English language is not as precise as mathematical logic. There are lots of synonyms in English, but that does not mean that they are substitutable in all contexts. Sometimes you need a little context to understand which meaning of a term is being used. This is a trivial observation. Those who praise this essay are morons. Or they are leftists who are ideologically opposed to objective truths.
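
Quine's quotation counterexample is just the use-mention distinction, and it is trivial to state mechanically. A toy illustration (mine, not Quine's):

```python
# Substituting synonyms preserves truth when words are USED,
# not when they are MENTIONED (i.e., quoted as strings).

is_married = {"Alice": True, "Bob": False}

def bachelor(x):       # "bachelor" used: a predicate about the world
    return not is_married[x]

def unmarried_man(x):  # synonymous predicate; interchangeable when used
    return not is_married[x]

assert bachelor("Bob") == unmarried_man("Bob")  # substitution is safe here

# Inside quotation the word is mentioned, a fact about the string itself:
assert len("bachelor") < 10              # true
assert not (len("unmarried man") < 10)   # substitution turned truth into falsehood
```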

Tuesday, April 19, 2016

Politician on quantum computing

A reader pointed out that Canada's dopey leader tried to explain quantum computing:
Justin Trudeau’s appeal may have just found a new dimension.

The Canadian Prime Minister was asked about the deployment of airstrikes against ISIS at a press conference on April 15, but the reporter had inserted a joke about quantum computing beforehand.

Trudeau took the bait, and launched into an explanation of the quantum computer.

“Normal computers work by …” Trudeau said, before he was greeted by an outburst of laughter. “Don’t interrupt me. When you walk out of here, most of you will know more about quantum computing.”

Then he proceeded to give a brief summary of what makes quantum computing different from normal computing.

“Normal computers work…either there’s power going through a wire or not. It’s one or a zero. They’re binary systems,” Trudeau said. “What quantum states allow for is much more complex information to be encoded into a single bit.”

“A regular computer bit is either a one or a zero — on or off — a quantum state can be much more complex than that because, as we know, things can be both particle and wave at the same time, and the uncertainty around quantum states allows us to encode more information into a much smaller computer.”

“Don’t get me going on this or we’ll be here all day,” Trudeau said at the end of his explanation, to cheers from the crowd.
He began a Master's degree in environmental geography before entering politics, so that gives him the ability to talk science, I guess.

No, a quantum computer will never be a smaller computer, even if one is ever built.

It is wishful thinking to say that uncertainty means more information.
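
For what it is worth, there is a precise theorem behind that last point. The Holevo bound says that n qubits yield at most n classical bits of retrievable information, so the "more complex" quantum states do not translate into more extractable information:

```latex
% Holevo bound: accessible information from measuring a quantum ensemble
I(X{:}Y) \;\le\; \chi = S(\rho) - \sum_i p_i S(\rho_i)
\;\le\; S(\rho) \;\le\; \log_2 \dim\mathcal{H} = n \text{ bits for } n \text{ qubits}
```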