Monday, May 30, 2016

The medieval politics of infinitesimals

I would not have thought that infinitesimals would be so political, but a book published last year argues that they were. It is titled Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World.

Russian-born MIT historian Slava Gerovitch reviews the book:
The Jesuits were largely responsible for raising the status of mathematics in Italy from a lowly discipline to a paragon of truth and a model for social and political order. The Gregorian reform of the calendar of 1582, widely accepted in Europe across the religious divide, had very favorable political ramifications for the Pope, and this project endeared mathematics to the hearts of Catholics. In an age of religious strife and political disputes, the Jesuits hailed mathematics in general, and Euclidean geometry in particular, as an exemplar of resolving arguments with unassailable certainty through clear definitions and careful logical reasoning. They lifted mathematics from its subservient role well below philosophy and theology in the medieval tree of knowledge and made it the centerpiece of their college curriculum as an indispensable tool for training the mind to think in an orderly and correct way.

The new, enviable position of mathematics in the Jesuits’ epistemological hierarchy came with a set of strings attached. Mathematics now had a new responsibility to publicly symbolize the ideals of certainty and order. Various dubious innovations, such as the method of indivisibles, with their inexplicable paradoxes, undermined this image. The Jesuits therefore viewed the notion of infinitesimals as a dangerous idea and wanted to expunge it from mathematics. In their view, infinitesimals not only tainted mathematics but also opened the door to subversive ideas in other areas, undermining the established social and political order. The Jesuits never aspired to mathematical originality. Their education was oriented toward an unquestioning study of established truths, and it discouraged open-ended intellectual explorations. In the first decades of the seventeenth century the Revisors General in Rome issued a series of injunctions against infinitesimals, forbidding their use in Jesuit colleges. Jesuit mathematicians called the indivisibles “hallucinations” and argued that “[t]hings that do not exist, nor could they exist, cannot be compared” (pp. 154, 159). ...

The battle over the method of indivisibles played out differently in England, where the Royal Society proved capable of sustaining an open intellectual debate. One of the most prominent critics of infinitesimals in England was philosopher and amateur mathematician Thomas Hobbes. A sworn enemy of the Catholic Church, he nevertheless shared with the Jesuits a fundamental commitment to hierarchical order in society. He believed that only a single-purpose organic unity of a nation, symbolized by the image of Leviathan, could save England from the chaos and strife sowed by the civil war. In the 1650s–70s his famously acrimonious dispute with John Wallis, the Savilian Professor of Geometry at Oxford and a leading proponent of the method of indivisibles, again pitted a champion of social order against an advocate of intellectual freedom. ...

In the 1960s, three hundred years after the Jesuits’ ban, infinitesimals eventually earned a rightful place in mathematics by acquiring a rigorous foundation in Abraham Robinson’s work on nonstandard analysis. They had played their most important role, however, back in the days when the method of indivisibles lacked rigor and was fraught with paradoxes. Perhaps it should not come as a surprise that today’s mathematics also borrows extremely fruitful ideas from nonrigorous fields, such as supersymmetric quantum field theory and string theory. ...

If, as in the case of the Jesuits, maintaining the appearance of infallibility becomes more important than exploration of new ideas, mathematics loses its creative spirit and turns into a storage of theorems.
I do not accept this. He argues that mathematics must accept nonrigorous work because someone might give it a rigorous foundation 3 centuries later.

It was only a few years later that Isaac Newton (and Leibniz and others) developed a coherent theory of infinitesimals. The subject was made much more rigorous by Cauchy, Weierstrass, and others in the 1800s.
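To see what was at stake mathematically, here is a rough sketch (mine, not the book's) of the same derivative computed in the old infinitesimal style and then in the later Weierstrass style; Robinson's nonstandard analysis eventually made the first version legitimate:

\[
\frac{d}{dx}\,x^2 = \frac{(x+dx)^2 - x^2}{dx} = 2x + dx \approx 2x
\quad\text{(discard the infinitesimal remainder } dx\text{)},
\]
\[
\frac{d}{dx}\,x^2 = \lim_{h\to 0}\frac{(x+h)^2 - x^2}{h} = \lim_{h\to 0}\,(2x+h) = 2x,
\]
where the limit means: for every \(\varepsilon > 0\) there is a \(\delta > 0\) such that \(0 < |h| < \delta\) implies \(|(2x+h) - 2x| < \varepsilon\).

The first calculation is the kind of "paradoxical" reasoning the Jesuits objected to; the second is the kind of rigor Cauchy and Weierstrass supplied.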

Thursday, May 26, 2016

Cranks denying local causality

I criticize mainstream professors a lot, but I am the one defending mainstream textbook science.

Philosopher Massimo Pigliucci claims to be an expert on pseudoscience, and is writing a book bragging about how much progress philosophers have made. Much of it has appeared in a series of blog posts.

One example of progress is that he says that philosophers have discovered that causality plays no role in fundamental physics:
Moreover, some critics (e.g., Chakravartty 2003) argue that ontic structural realism cannot account for causality, which notoriously plays little or no role in fundamental physics, and yet is crucial in every other science. For supporters like Ladyman causality is a concept pragmatically deployed by the “special” sciences (i.e., everything but fundamental physics), yet not ontologically fundamental.
I do not know how he could be so clueless. Causality plays a crucial role in every physics book I have.

A Quanta mag article explains:
New Support for Alternative Quantum View

An experiment claims to have invalidated a decades-old criticism against pilot-wave theory, an alternative formulation of quantum mechanics that avoids the most baffling features of the subatomic universe.

Of the many counterintuitive features of quantum mechanics, perhaps the most challenging to our notions of common sense is that particles do not have locations until they are observed. This is exactly what the standard view of quantum mechanics, often called the Copenhagen interpretation, asks us to believe. Instead of the clear-cut positions and movements of Newtonian physics, we have a cloud of probabilities described by a mathematical structure known as a wave function. The wave function, meanwhile, evolves over time, its evolution governed by precise rules codified in something called the Schrödinger equation. The mathematics are clear enough; the actual whereabouts of particles, less so. ...

For some theorists, the Bohmian interpretation holds an irresistible appeal. “All you have to do to make sense of quantum mechanics is to say to yourself: When we talk about particles, we really mean particles. Then all the problems go away,” said Goldstein. “Things have positions. They are somewhere. If you take that idea seriously, you’re led almost immediately to Bohm. It’s a far simpler version of quantum mechanics than what you find in the textbooks.” Howard Wiseman, a physicist at Griffith University in Brisbane, Australia, said that the Bohmian view “gives you a pretty straightforward account of how the world is…. You don’t have to tie yourself into any sort of philosophical knots to say how things really are.”
This is foolish.

The double-slit experiment shows that electrons and photons are not particles. Not classical particles, anyway.

Bohm lets you pretend that they are really particles, but you have to believe that they are attached to ghostly pilot waves that interact nonlocally with the particles and the rest of the universe.

Bohm also lets you believe in determinism, altho it is a very odd sort of determinism because there is no local causality.
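To be concrete about both points, here is the standard guidance equation of Bohmian mechanics, in one common form (a sketch; the wave function \(\Psi\) still obeys the ordinary Schrödinger equation \(i\hbar\,\partial_t\Psi = H\Psi\)):

\[
\frac{d\mathbf{Q}_k}{dt} = \frac{\hbar}{m_k}\,
\operatorname{Im}\!\left[\frac{\nabla_k \Psi}{\Psi}\right]
(\mathbf{Q}_1,\dots,\mathbf{Q}_N,t).
\]

It is deterministic, since the positions obey a first-order differential equation, but the velocity of particle \(k\) depends instantaneously on the simultaneous positions of all the other particles through \(\Psi\), no matter how far away they are. That is the nonlocality.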

Just what is appealing about that?

Yes, you can say that the electrons have positions, but those positions are inextricably tied up with unobservable pilot waves, so what is satisfying about that?

Contrary to what the philosophers say, local causality is essential to physics and to our understanding of the world. If some experiment proves it wrong, then I would have to revise my opinion. But that has never been done, and there is no hope of doing it.

Belief in action-at-a-distance is just a mystical pipe dream.

So Bohm is much more contrary to intuition than ordinary Copenhagen quantum mechanics. And Bohm is only known to work in simple cases, as far as I know, and no one has ever used it to do anything new.

Wednesday, May 25, 2016

Left denies progress towards genetic truths

NPR radio interviews a Pulitzer Prize-winning physician plugging his new book on genetics:
As researchers work to understand the human genome, many questions remain, including, perhaps, the most fundamental: Just how much of the human experience is determined before we are even born, by our genes, and how much is dependent upon external environmental factors?

Oncologist Siddhartha Mukherjee tells Fresh Air's Terry Gross the answer to that question is complicated. "Biology is not destiny," Mukherjee explains. "But some aspects of biology — and in fact some aspects of destiny — are commanded very strongly by genes."

The degree to which biology governs our lives is the subject of Mukherjee's new book, The Gene. In it, he recounts the history of genetics and examines the roles genes play in such factors as identity, temperament, sexual orientation and disease risk.
Based on this, he has surely had his own genome sequenced, right? Nope.

GROSS: ... I want to ask about your own genes. Have you decided whether to or not to get genetically tested yourself? And I should mention here that there is a history of schizophrenia in your family. You had two uncles and a cousin with schizophrenia. You know, what scientists are learning about schizophrenia is that there is a genetic component to it or genetic predisposition. So do you want to get tested for that or other illnesses?

MUKHERJEE: I've chosen not to be tested. And I will probably choose not to be tested for a long time, until I start getting information back from genetic testing that's very deterministic. Again, remember that idea of penetrance that we talked about. Some genetic variations are very strongly predictive of certain forms of illness or certain forms of anatomical traits and so forth. I think that right now, for diseases like schizophrenia, we're nowhere close to that place. The most that we know is that there are multiple genes in familial schizophrenia, the kind that our family has. Essentially, we don't know how to map, as it were. There's no one-to-one correspondence between a genome and the chances of developing schizophrenia.

And until we can create that map - and whether we can create that map ever is a question - but until I - we can create that map, I will certainly not be tested because it - that idea - I mean, that's, again, the center of the book. That confines you. It becomes predictive. You become - it's a chilling word that I use in the book - you become a previvor (ph). A previvor is someone who's survived an illness that they haven't even had yet. You live in the shadow of an illness that you haven't had yet. It's a very Orwellian idea. And I think we should resist it as much as possible.

GROSS: Would you feel that way if you were a woman and there was a history of breast cancer in your family?

MUKHERJEE: Very tough question - if I was a woman and I had a history of breast cancer in my family - if the history was striking enough - and, you know, here's a - it's a place where a genetic counselor helps. If the history was striking enough, I would probably sequence at least the genes that have been implicated in breast cancer, no doubt about it.

GROSS: OK.
I post this to prove that even the experts in genetics have the dopiest ideas about it. He wants to inform the public about genetics, but he is willfully ignorant of the personal practical implications.

I also criticized his New Yorker article on epigenetics.

Bad as he is, his reviewers are even worse. Atlantic mag reviews his book to argue that genes are overrated:
The antidote to such Whig history is a Darwinian approach. Darwin’s great insight was that while species do change, they do not progress toward a predetermined goal: Organisms adapt to local conditions, using the tools available at the time. So too with science. What counts as an interesting or soluble scientific problem varies with time and place; today’s truth is tomorrow’s null hypothesis — and next year’s error.

... The point is not that this [a complex view of how genes work; see below] is the correct way to understand the genome. The point is that science is not a march toward truth. Rather, as the author John McPhee wrote in 1967, “science erases what was previously true.” Every generation of scientists mulches under yesterday’s facts to fertilize those of tomorrow.

“There is grandeur in this view of life,” insisted Darwin, despite its allowing no purpose, no goal, no chance of perfection. There is grandeur in a Darwinian view of science, too. The gene is not a Platonic ideal. It is a human idea, ever changing and always rooted in time and place. To echo Darwin himself, while this planet has gone cycling on according to the laws laid down by Copernicus, Kepler, and Newton, endless interpretations of heredity have been, and are being, evolved.
I do not recall Darwin ever saying that evolution does not make progress, or have a purpose. Whether he did or not, many modern evolutionists, such as the late Stephen Jay Gould, say things like that a lot.

They not only deny progress and purpose in the history of life, they deny that science makes progress. They say that "today’s truth is tomorrow’s null hypothesis".

There are political undertones to this. Leftists and Marxists hate the idea of scientific truths, and they really despise truths about human nature.

As you can see from my motto, I reject all of this. Science makes progress towards truth, and genuine truths are not erased or mulched. My positivism is in a minority among philosophers and science popularizers.

Speaking of academic leftists citing Darwin for foolish ideas, the current Atlantic mag has an article by a philosopher saying:
The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties — which some people have to a greater degree than others — to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance. ...

Many scientists say that the American physiologist Benjamin Libet demonstrated in the 1980s that we have no free will. It was already known that electrical activity builds up in a person’s brain before she, for example, moves her hand; Libet showed that this buildup occurs before the person consciously makes a decision to move. The conscious experience of deciding to act, which we usually associate with free will, appears to be an add-on, a post hoc reconstruction of events that occurs after the brain has already set the act in motion. ...

This research and its implications are not new. What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. ...

The list goes on: Believing that free will is an illusion has been shown to make people less creative, more likely to conform, less willing to learn from their mistakes, and less grateful toward one another. In every regard, it seems, when we embrace determinism, we indulge our dark side.
This is mostly nonsense, of course. Intelligence has been shown to be heritable, as would be expected from Darwinian evolution. But I don't think that Darwin believed in such extreme genetic determinism, as he did not understand genes.

It is possible that people who believe that free will is an illusion have some mild form of schizophrenia.

This is yet another example of philosophers thinking that they know better than everyone else. Philosophers and schizophrenics can hold beliefs that no normal person would.

Here is a summary of the Atlantic article:
Libertarian free will [the “we could have chosen otherwise” form] is dead, or at least dying among intellectuals. That means that determinism reigns (Cave doesn’t mention quantum mechanics), and that at any one time we can make only one choice.

But if we really realized we don’t have free will of that sort, we’d behave badly. Cave cites the study of Vohs and Schooler (not noting that that study wasn’t repeatable), but also other studies showing that individuals who believe in free will are better workers than those who don’t. I haven’t read those studies, and thus don’t know if they’re flawed, but of course there may be unexamined variables that explain this correlation.

Therefore, we need to maintain the illusion that we have libertarian free will, or at least some kind of free will. Otherwise society will crumble.
I hate to be anti-intellectual, but what am I to think when all the intellectuals are trying to convince me to give up my belief in free will? Or that they are such superior beings that they can operate without free will, but lesser beings like myself need to maintain that (supposedly false) belief?

Speaking of overrated intellectuals, I see that physicist Sean M. Carroll's new book is on the NY Times best-seller list.

Tuesday, May 24, 2016

Skeptics stick to soft targets

SciAm blogger John Horgan has infuriated the "skeptics" by saying they only go after soft targets, and not harder questions like bogus modern physics and the necessity of war.

He has a point. I am always amazed when these over-educated academic skeptics endorse some totally goofy theory like many-worlds quantum mechanics.

Steve Pinker rebuts the war argument:
John Horgan says that he “hates” the deep roots theory of war, and that it “drives him nuts,” because “it encourages fatalism toward war.” ...

Gat shows how the evidence has been steadily forcing the “anthropologists of peace” to retreat from denying that pre-state peoples engaged in lethal violence, to denying that they engage in “war,” to denying that they engage in it very often. Thus in a recent book Ferguson writes, “If there are people out there who believe that violence and war did not exist until after the advent of Western colonialism, or of the state, or agriculture, this volume proves them wrong.” ...

And speaking of false dichotomies, the question of whether we should blame “Muslim fanaticism” or the United States as “the greatest threat to peace” is hardly a sophisticated way for skeptical scientists to analyze war, as Horgan exhorts them to do. Certainly the reckless American invasions of Afghanistan and Iraq led to incompetent governments, failed states, or outright anarchy that allowed Sunni-vs-Shiite and other internecine violence to explode — but this is true only because these regions harbored fanatical hatreds which nothing short of a brutal dictatorship could repress. According to the Uppsala Conflict Data Project, out of the 11 ongoing wars in 2014, 8 (73%) involved radical Muslim forces as one of the combatants, another 2 involved Putin-backed militias against Ukraine, and the 11th was the tribal war in South Sudan. (Results for 2015 will be similar.) To blame all these wars, together with ISIS atrocities, on the United States, may be cathartic to those with certain political sensibilities, but it’s hardly the way for scientists to understand the complex causes of war and peace in the world today.
I would not blame the USA for all those wars, but it has been the Clinton-Bush-Obama policy to destabilize Mideast governments, aid the radical Muslim forces of our choosing, and to provoke Putin. This has been a disaster in Libya, Syria, Egypt, and now also Kosovo, Iraq, and Afghanistan. Sometimes I think that Barack Obama and Hillary Clinton are seeking a World War III between Islam and Christendom. Or maybe just to flood the West with Moslem migrants and refugees.

Pinker has his own dubious theories about war.

I do agree with Horgan that these supposed skeptics are not really very skeptical about genuine science issues.

Another problem with them is that they are dominated by leftist politics. They will ignore any facts which conflict with their leftist worldview, and even purge anyone who says the wrong thing.

The conference where Horgan spoke had disinvited Richard Dawkins because he retweeted a video that had some obscure cultural references that did not pass some leftist ideological purity test. They did not like Horgan either and denounced him on the stage. It is fair to assume that he will not be invited back.

Monday, May 23, 2016

IBM claims quantum computer cloud service

IBM announced:
IBM scientists have built a quantum processor that users can access through a first-of-a-kind quantum computing platform delivered via the IBM Cloud onto any desktop or mobile device. IBM believes quantum computing is the future of computing and has the potential to solve certain problems that are impossible to solve on today’s supercomputers.

The cloud-enabled quantum computing platform, called IBM Quantum Experience, will allow users to run algorithms and experiments on IBM’s quantum processor, work with the individual quantum bits (qubits), and explore tutorials and simulations around what might be possible with quantum computing.

The quantum processor is composed of five superconducting qubits and is housed at the IBM T.J. Watson Research Center in New York. The five-qubit processor represents the latest advancement in IBM’s quantum architecture that can scale to larger quantum systems. It is the leading approach towards building a universal quantum computer.

A universal quantum computer can be programmed to perform any computing task and will be exponentially faster than classical computers for a number of important applications for science and business.
There is more hype here.

As Scott Aaronson likes to point out, quantum computers would not be exponentially faster for any common application. Here is a list of quantum algorithms, with their suspected speedups.
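For reference, and roughly from memory, the two headline entries on such lists illustrate the point:

\[
\text{Grover search: } O(\sqrt{N}) \text{ quantum queries vs. } O(N) \text{ classical — a quadratic, not exponential, speedup;}
\]
\[
\text{Shor factoring: roughly } O\!\big((\log N)^3\big) \text{ vs. } \exp\!\Big(O\big((\log N)^{1/3}(\log\log N)^{2/3}\big)\Big) \text{ for the best known classical method.}
\]

Even the Shor speedup is only conjectural, in the sense that nobody has proved factoring cannot be done efficiently on a classical computer.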

According to this timeline, we have had 5-qubit quantum computers since the year 2000. I first expressed skepticism on this blog in 2005.

At least IBM admits:
A universal quantum computer does not exist today, but IBM envisions medium-sized quantum processors of 50-100 qubits to be possible in the next decade. With a quantum computer built of just 50 qubits, none of today’s TOP500 supercomputers could successfully emulate it, reflecting the tremendous potential of this technology. The community of quantum computer scientists and theorists is working to harness this power, and applications in optimization and chemistry will likely be the first to demonstrate quantum speed-up.
So there is no true quantum computer today, and no one has demonstrated any quantum speed-up.

Don't plan on connecting this to any real-time service. IBM is running batch jobs. Your results will come to you in the mail. That's email, I hope, and not some punch-cards in an envelope.

Here is one way you can think about quantum computing, and my skepticism of it. Quantum mechanics teaches that you cannot be definite about an electron's position when it is not being observed. One way of interpreting this is that the electron can be in two or more places at once. If you believe that, then it seems plausible that the electron could be contributing to two or more different computations at the same time.

Another view is that an electron is only in one place at a time, and that uncertainty in its position is just our lack of knowledge. With this view, it seems implausible that our uncertainty could be the backbone of a super-Turing computation.

Some people with this latter view deny quantum mechanics and believe in hidden variables. But I am talking about followers of quantum mechanics, not them. See for example this recent Lubos Motl post, saying "Ignorance and uncertainty are de facto synonyms." He accepts (Copenhagen) quantum mechanics, and the idea that an electron can be understood as having multiple histories as distinct paths, but he never says that an electron is in two places at the same time, or that a cat is alive and dead at the same time.

So which view is better? I don't think that either view is quite correct, but both views are common. The differences can explain why some people are big believers in quantum computing, and some are not.

Suppose you really believe that a cat is alive and dead at the same time. That the live cat and the dead cat exist as distinct but entangled entities, not just possibilities in someone's head. Then it is not too much more of a stretch to believe that the live cat can interact with the dead cat to do a computation.

If you do not take that view, then where is the computation taking place?

I would say that an electron is not really a particle, but a wave-like entity that is often measured like a particle. So does that mean it can do two different computations at once? I doubt it.
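The amplitude-versus-ignorance distinction above can be made concrete in a few lines of numpy. This is only a toy sketch, not a quantum computer: it shows that amplitudes can cancel, while probabilities that merely encode our ignorance cannot.

    import numpy as np

    # A qubit is described by a 2-vector of complex amplitudes, not a probability table.
    H = (1 / np.sqrt(2)) * np.array([[1.0, 1.0],
                                     [1.0, -1.0]])   # Hadamard gate
    zero = np.array([1.0, 0.0])                      # definite state |0>

    # Quantum: apply H twice; the two paths interfere and |0> comes back with certainty.
    quantum = H @ (H @ zero)
    print(np.abs(quantum) ** 2)      # [1. 0.]  (up to rounding)

    # Ignorance model: replace H with a fair-coin stochastic matrix acting on probabilities.
    M = np.array([[0.5, 0.5],
                  [0.5, 0.5]])
    classical = M @ (M @ np.array([1.0, 0.0]))
    print(classical)                 # [0.5 0.5]  -- no interference, still just ignorance

Two coin flips leave you just as ignorant as one, but two Hadamards give a definite answer, because the amplitudes for the two intermediate possibilities cancel. Whether that cancellation can be scaled up into a super-Turing computer is the question.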

Saturday, May 21, 2016

More on the European Quantum Manifesto scam

I trashed the European Quantum Manifesto, and a reader points me to this cryptologist rant against it:
There is a significant risk that all of the benefits of quantum computing during the next 20 years will be outweighed by the security devastation caused by quantum computing during the same period. ...

This brings me to what really bugs me about the Quantum Manifesto. Instead of highlighting the security threat of quantum technology and recommending funding for a scientifically justified response, the Manifesto makes the thoroughly deceptive claim that quantum technology improves security.
He then goes on to explain why quantum cryptography is never going to improve anyone's security:
A company named ID Quantique has been selling quantum-cryptography hardware, specifically hardware for BB84-type protocols, since 2004. ID Quantique claims that quantum cryptography provides "absolute security, guaranteed by the fundamental laws of physics." However, Vadim Makarov and his collaborators have shown that the ID Quantique devices are vulnerable to control by attackers, that various subsequent countermeasures are still vulnerable, and that analogous vulnerabilities in another quantum-key-distribution system are completely exploitable at low cost. The most reasonable extrapolation from the literature is that all of ID Quantique's devices are exploitable.

How can a product be broken if it provides "absolute security, guaranteed by the fundamental laws of physics"?
He explains how quantum cryptography misses the point of what cryptography is all about, and fails to address essential security issues that non-quantum techniques have already solved.

He is right about all this, and I have made similar points on this blog for years.

I also agree, and have said so here, that if quantum computing is successful in the next 20 years, then the social and economic impact will be overwhelmingly negative. The main application is to destroy the computer security that our society depends on.

Where I differ from him is that I say quantum computing and quantum cryptography are both scams, for different reasons. Quantum computing is technologically impossible, and almost entirely destructive if it were possible. Quantum cryptography is possible, but expensive, impractical, and insecure.

Actually, they are both possible in the sense that the laws of quantum mechanics allow experiments whose confirmation of the 1930 theory can be interpreted as favoring quantum computing or quantum cryptography. But the excitement is about the possibility of super-Turing computing and more secure key exchanges, and those have never been achieved.
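For readers wondering what a BB84-type protocol even delivers, here is a classical toy simulation of its sifting step (my sketch, with no physics in it). Note what it presupposes: an authenticated classical channel, which ordinary cryptography has to provide anyway.

    import secrets

    n = 32
    alice_bits  = [secrets.randbelow(2) for _ in range(n)]   # Alice's raw key bits
    alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(n)]   # Bob guesses bases independently

    # Ideal, attack-free case: Bob's result matches Alice's bit when the bases agree,
    # and is random noise when they do not.
    bob_bits = [a if ab == bb else secrets.randbelow(2)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Sifting: over an authenticated classical channel they announce bases (not bits)
    # and keep only the positions where the bases matched.
    key_alice = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_bob   = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

    assert key_alice == key_bob      # a shared random key, about n/2 bits long
    print(len(key_alice), key_alice)

All the quantum part buys you is the shared random bits; the authentication, and everything else a real security system needs, still comes from conventional cryptography, which is the point the critics keep making.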

Wednesday, May 18, 2016

Aristotle, Einstein, and the nature of force

Vienna physics professor Herbert Pietschmann writes:
But Newton also removed Aristotles division of the motions into “natural motions” and “enforced motions”. For Newton, also Aristotles “natural motions” became “enforced” by the gravitational force. In this way, he unified our understanding of dynamics in the most general way. ...

In 1915, Albert Einstein had found the basic equation in his theory of general relativity. He published a complete version of his thoughts in 1916. According to this theory, the gravitational interaction was not caused by a force but by a curvature of spacetime. In this basic publication Einstein writes: “Carrying out the general theory of relativity must lead to a theory of gravitation: for one can generate a gravitational field by merely changing the coordinate system.” ...

To summarize, let us state that motion caused by gravitation is not caused by a force; in that sense it differs from all other motions. Einstein made this clear in his quoted paper from 1916. He writes: “According to the theory of General Relativity gravitation has an exceptional role with respect to all other forces, especially electromagnetism.” This is what in a sense we might call “return of Aristotelian physics” since it clearly distinguishes between “natural motion” and “enforced motion”, constituting the basic problem of modern physics.

Acceleration is either caused by the geometry of spacetime (gravitation) or by an external force in Euclidian spacetime (all other forces). Mathematically, these two different views are represented either by the Theory of General Relativity (gravitation) or by Quantum Field Theory (all other forces).
This is partially correct. Aristotle's concept of force seems wrong by Newtonian standards, but is actually reasonable in the light of relativity, as I previously argued.

But Einstein did not believe that gravitational acceleration was caused by the geometry of spacetime, as most physicists do today.

Einstein is also making a false distinction between gravity and electromagnetism. The preferred relativistic view of electromagnetism, as developed by Poincare, Minkowski, and maybe Weyl, is that the field is a curvature tensor and the force is a geometrical artifact. In this view, electromagnetism and gravity are formally quite similar.
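A sketch of that formal similarity, in the usual notation: the electromagnetic field strength is the curvature of the potential, and the two equations of motion line up side by side,

\[
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
\quad\text{(curvature of the connection } A_\mu\text{)},
\]
\[
\frac{d^2 x^\mu}{d\tau^2} + \Gamma^{\mu}_{\ \alpha\beta}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0
\qquad\text{vs.}\qquad
m\,\frac{d^2 x^\mu}{d\tau^2} = q\,F^{\mu}_{\ \nu}\,\frac{dx^\nu}{d\tau},
\]

with the Christoffel symbols \(\Gamma^{\mu}_{\ \alpha\beta}\) playing a role in the geodesic equation analogous to the field tensor \(F^{\mu}_{\ \nu}\) in the Lorentz force law.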

In his paper on General Relativity from 1916 he writes: “the law of causality makes a sensible statement on the empirical world only when cause and effect are observable.” Since a gravitational “force” was not observable, Einstein had eliminated it from his theory of gravitation and replaced it by the curvature of spacetime. ...
There are people who deduce that the law of causality is invalid because the theory of relativity made it obsolete. Einstein himself seems to have gone back and forth on the issue.

The idea is mistaken, whether it is Einstein's fault or not. Causality is the essence of relativity.
In this connexion it is historically interesting that only ten years later Einstein converted from these ideas which had led him to his most fundamental contributions to physics. Werner Heisenberg who had used the same philosophy for the derivation of his uncertainty relation, recalls a conversation with Einstein in 1926; Einstein: “You don’t seriously believe that a physical theory can only contain observables.” Heisenberg: “I thought you were the one who made this idea the foundation of your theory of relativity?” Einstein: “May be I have used this kind of philosophy, nevertheless it is nonsense.”
Here is where Einstein rejects the positivist philosophy for which he is widely credited.

The main reason Einstein is credited for special relativity over Lorentz is his greater emphasis on observables. But as you can see, Einstein disavowed that view.

For the skeptical view of causality in physics, see this review:
The role of causality in physics presents a problem. Although physics is widely understood to aim at describing the causes of observable phenomena and the interactions of systems in experimental set-ups, the picture of the world given by fundamental physical theories is largely acausal: e.g. complete data on timeslices of the universe related by temporally bidirectional dynamical laws. The idea that physics is acausal in nature, or worse, incompatible with the notion of causality, has attracted many adherents. Causal scepticism in physics is most associated with Russell’s (1913) arguments that a principle of causality is incompatible with actual physical theories. For causal sceptics, insofar as causal reasoning is used in physics, it is at best extraneous and at worst distorts the interpretation of a theory’s content.
No, causal reasoning is essential to physics. The arguments of Bertrand Russell and other philosophers rejecting causality are nonsense.

Thursday, May 12, 2016

Google seeks quantum supremacy

Ilyas Khan writes:
I travelled over to the west coast and spent some time with the Artificial Intelligence team within Google at their headquarters just off Venice Beach in LA. Like all who visit that facility, I am constrained by an NDA in talking about what is going on. However in their bid to establish "Quantum Supremacy" the team, led by Hartmut Neven, talks not in terms of decades but in a timetable that is the technology equivalent of tomorrow. For the avoidance of doubt, the "tomorrow" that I refer to is the timeline for building and operating a universal quantum computer.
I interpret "the technology equivalent of tomorrow" as being within two years. Check back here at that time.

No, Google is not going to succeed. This is not like self-driving cars, where it is clear that the technology is coming, as prototypes have proved feasibility. For that, computers just have to mimic what humans do, and they have several advantages, such as better sensors, faster reactions, and real-time access to maps.

Despite hundreds of millions of dollars in investment, there is still no convincing demonstration of quantum supremacy, or any proof that any method will scale.

Google is all about scale, so I am sure that its researchers have a story to tell their senior management. But it is covered by a non-disclosure agreement, so we do not know what it is.

You can bet that if Google ever achieves a universal quantum computer, or even just quantum supremacy, it will brag to everyone and attempt to collect a Nobel prize. If you do not hear anything in a couple of years, then they are not delivering on their promises.

Monday, May 9, 2016

Darwin did not discredit Lamarck

Genomicist Razib Khan writes about a New Yorker mag mistake:
But there’s a major factual problem which I mentioned when it came out, and, which some friends on Facebook have been griping about. I’ll quote the section where the error is clearest:
…Conceptually, a key element of classical Darwinian evolution is that genes do not retain an organism’s experiences in a permanently heritable manner. Jean-Baptiste Lamarck, in the early nineteenth century, had supposed that when an antelope strained its neck to reach a tree its efforts were somehow passed down and its progeny evolved into giraffes. Darwin discredited that model….
It is true that in Neo-Darwinian evolution, the modern synthesis, which crystallized in the second quarter of the 20th century, genes do not retain an organism’s experiences in a permanently heritable manner. But this is not true for Charles Darwin’s theories, which most people would term a “classical Darwinian” evolutionary theory.
Not just the New Yorker.

For some reason, the Darwin idolizers frequently stress that he proved Lamarck wrong about heredity. This is misguided for a couple of reasons.

First, they are usually eager to convince you of some leftist-atheist-evolutionist-naturalist-humanist agenda, but they could make essentially the same points if Lamarckianism were true.

Second, Darwin was a Lamarckian, as a comment explains:
Darwin’s On the Origin of Species proposed natural selection as the main mechanism for development of species, but did not rule out a variant of Lamarckism as a supplementary mechanism.[11] Darwin called his Lamarckian hypothesis pangenesis, and explained it in the final chapter of his book The Variation of Animals and Plants under Domestication (1868), after describing numerous examples to demonstrate what he considered to be the inheritance of acquired characteristics. Pangenesis, which he emphasised was a hypothesis, was based on the idea that somatic cells would, in response to environmental stimulation (use and disuse), throw off ‘gemmules’ or ‘pangenes’ which travelled around the body (though not necessarily in the bloodstream). These pangenes were microscopic particles that supposedly contained information about the characteristics of their parent cell, and Darwin believed that they eventually accumulated in the germ cells where they could pass on to the next generation the newly acquired characteristics of the parents. Darwin’s half-cousin, Francis Galton, carried out experiments on rabbits, with Darwin’s cooperation, in which he transfused the blood of one variety of rabbit into another variety in the expectation that its offspring would show some characteristics of the first. They did not, and Galton declared that he had disproved Darwin’s hypothesis of pangenesis, but Darwin objected, in a letter to the scientific journal Nature, that he had done nothing of the sort, since he had never mentioned blood in his writings. He pointed out that he regarded pangenesis as occurring in Protozoa and plants, which have no blood. (wiki-Lamarckism)
Here is more criticism of the New Yorker and part 2.

I have heard people say that the New Yorker employs rigorous fact checkers, but I don't think that they try to check that the science is right. It is a literary magazine, and they check literary matters.

Denying relativity credit to Poincare

I stumbled across this 1971 book review:
In a third article Stanley Goldberg gives a remarkably clear picture of Einstein's special relativity theory and the response of the British, French, and Germans to the theory. Starting with two simple postulates, videlicet [= as follows] the constancy of the velocity of light and the impossibility of determining an absolute motion of any kind, Einstein was able to derive the Lorentz transformation with ease as well as many other relations of a kinematical nature. The "ether" was dismissed in a short sentence. The German physicists understood the theory, but not all agreed with it. The British stuck with the ether and didn't even try to understand special relativity. The French were not much interested in the theory either; even Poincaré failed to mention it in his writings on electrodynamics.
Poincare did not fail to mention it; he created the theory. Poincare is mainly responsible for the spacetime geometry and electromagnetic covariance of special relativity, along with elaborations by Minkowski. I don't know how physicists could be so ignorant of one of the great advances of physics.

I do not know of anything like it in the history of science. Every discussion of relativity goes out of its way to attribute the theory solely to Einstein, and to give some history of how it happened. And they get the story wrong every time. I explain more in my book.

Friday, May 6, 2016

Stop asking whether quantum computing is possible

The current SciAm mag has a paywalled article describing three competing research programs trying to build the first quantum computer, and concludes:
The time has come to stop asking whether quantum computing is possible and to start focusing on what it will be able to do. The truth is that we do not know how quantum computing will change the world.
Huhh?

No, quantum computing is probably impossible, and would not change the world even if it were possible.

One supposed application is a "quantum internet", in which quantum computers are used as routers to transmit qubits from one user to another. The only known use for that is so-called quantum cryptography, but that has no advantages over conventional cryptography. It would cost a million times as much, and be hopelessly insecure by today's standards. It cannot authenticate messages, and all implementations have been broken, as far as I know.

The article also mentions quantum clocks. I do not know what that is all about, but we already have extremely cheap clocks that are far more accurate than what is needed by anyone.

Meanwhile, IBM claims to have 5 qubits:
IBM said on Wednesday that it's giving everyone access to one of its quantum computing processors, which can be used to crunch large amounts of data. Anyone can apply through IBM Research's website to test the processor, however, IBM will determine how much access people will have to the processor depending on their technology background -- specifically how knowledgeable they are about quantum technology.
If IBM really had a revolutionary computer, it would be able to figure out something to do with it. No, it cannot "be used to crunch large amounts of data."

Wednesday, May 4, 2016

Wired explains entanglement

Famous physicist Frank Wilczek explains entanglement in a Wired/Quanta mag article:
An aura of glamorous mystery attaches to the concept of quantum entanglement, and also to the (somehow) related claim that quantum theory requires “many worlds.” Yet in the end those are, or should be, scientific ideas, with down-to-earth meanings and concrete implications. Here I’d like to explain the concepts of entanglement and many worlds as simply and clearly as I know how. ...

So: Is the quantity of evil even or odd? Both possibilities are realized, with certainty, in different sorts of measurements. We are forced to reject the question. It makes no sense to speak of the quantity of evil in our system, independent of how it is measured. Indeed, it leads to contradictions.

The GHZ effect is, in the physicist Sidney Coleman’s words, “quantum mechanics in your face.” It demolishes a deeply embedded prejudice, rooted in everyday experience, that physical systems have definite properties, independent of whether those properties are measured. For if they did, then the balance between good and evil would be unaffected by measurement choices. Once internalized, the message of the GHZ effect is unforgettable and mind-expanding.
To get to this conclusion, you have to equate "definite properties" with measurement outcomes.

An electron has definite properties, but it is not really a particle and does not have a definite position. If you measure the electron, using a method that puts it in a definite position, then it is in that position for the instant of the measurement. A nanosecond later, it is back to its wave-like state with indeterminate position.
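For reference, the GHZ argument behind Wilczek's good-and-evil parable boils down to a short calculation (my sketch, not his notation):

\[
|\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}\big(|000\rangle + |111\rangle\big),
\qquad
X\otimes X\otimes X\,|\mathrm{GHZ}\rangle = +|\mathrm{GHZ}\rangle,
\]
\[
X\otimes Y\otimes Y\,|\mathrm{GHZ}\rangle =
Y\otimes X\otimes Y\,|\mathrm{GHZ}\rangle =
Y\otimes Y\otimes X\,|\mathrm{GHZ}\rangle = -|\mathrm{GHZ}\rangle .
\]

If each spin carried pre-existing values \(x_k, y_k = \pm 1\) independent of how it is measured, the product of the three mixed measurements would equal \(x_1 x_2 x_3\) (each \(y_k\) appears twice and squares away), and so would have to agree with the all-\(X\) outcome of \(+1\); quantum mechanics instead predicts \((-1)^3 = -1\). That is the contradiction, and it only bites if you insist the measurement outcomes were there before the measurements.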

For more on Wilczek, see this Edge interview.

Monday, May 2, 2016

New book on spooky action

I previously trashed George Musser's new book (without reading it), and now he was on Science Friday radio promoting it:
Could the space we live in—our everyday reality—just be a projection of some underlying quantum structure? Might black holes be like the Big Bang in reverse, where space reverts to spacelessness? Those are the sorts of far-out questions science writer George Musser ponders in his book Spooky Action at a Distance: The Phenomenon that Reimagines Space and Time—And What it Means for Black Holes, the Big Bang, and Theories of Everything. In this segment, Musser and quantum physicist Shohini Ghose talk about the weird quantum world, and the unpredictable nature of particles.
Here is an excerpt:
The world we experience possesses all the qualities of locality. We have a strong sense of place and of the relations among places. We feel the pain of separation from those we love and the impotence of being too far away from something we want to affect. And yet quantum mechanics and other branches of physics now suggest that, at a deeper level, there may be no such thing as place and no such thing as distance. Physics experiments can bind the fate of two particles together, so that they behave like a pair of magic coins: if you flip them, each will land on heads or tails—but always on the same side as its partner. They act in a coordinated way even though no force passes through the space between them. Those particles might zip off to opposite sides of the universe, and still they act in unison. These particles violate locality. They transcend space.

Evidently nature has struck a peculiar and delicate balance: under most circumstances it obeys locality, and it must obey locality if we are to exist, yet it drops hints of being nonlocal at its foundations. That tension is what I’ll explore in this book. For those who study it, nonlocality is the mother of all physics riddles, implicated in a broad cross section of the mysteries that physicists confront these days: not just the weirdness of quantum particles, but also the fate of black holes, the origin of the cosmos, and the essential unity of nature.
Everything in the universe obeys locality, as far as we know.

Musser's previous book was The Complete Idiot’s Guide to String Theory, and that does not require spooky action, so presumably he understands that the spookiness is just goofiness to sell books. He may understand that string theory is all a big scam also.