Sunday, June 30, 2013

Pope did not ban the telescope

Nature magazine is the most prestigious science magazine in the world, and it just published this interview:
On NeuroPod this month, a super-high resolution human brain atlas, how to build a brain from a tiny pool of cells, and the scientist who thinks drug laws are the worst case of scientific censorship 'since the Catholic Church banned the telescope'.
No, the Catholic Church never banned the telescope. I guess he is referring to the trial of Galileo, but there was never any disapproval of the telescope or any other scientific instrument. While Galileo made some discoveries with a telescope that he developed and improved, that had almost nothing to do with his dispute with the Church.

Galileo's main argument was that the tides prove the motion of the Earth. He was wrong and the Church correctly said that he was wrong. The Church's position was that he was free to teach and publish alternative astronomical theories, but he could not say that the tides prove the motion of the Earth.

Update: The 2011 movie A Dangerous Method has a similarly false analogy to Galileo. It says:
Freud: All I’m doing is pointing out what experience indicates to me must be the truth. And I can assure you that in 100 years time our work will still be rejected. Columbus, you know, had no idea what country he’d discovered. Like him I’m in the dark. All I know is that I have set foot on the shore and the country exists.

Jung: I think of you more as Galileo and your opponents as those who condemned him by refusing even to put their eye to his telescope.

Freud: In any event I have simply opened a door. It’s for young men like yourself to walk through it.

Jung: I’m sure you have many more doors to open for us.
This is nonsense. No one ever condemned Galileo by refusing to look through a telescope. No one ever rejected any legitimate scientific work by Freud or Jung either. For the most part, Galileo, Freud, and Jung were criticized for failing to give scientific support for their claims.

Galileo did a lot of good work. Freud and Jung were quacks.

Friday, June 28, 2013

Origin of the genetic code

I am curious about the origin of the genetic code, as distinct from the discovery of the chemical structure of DNA. The UK Telegraph reports:
That enigma was resolved in 1953, in two scientific articles by James Watson and Francis Crick of the University of Cambridge. First, they suggested that the DNA molecule was composed of two parallel spirals that were mirror images of each other, with the sequence of bases on one spiral being matched by the sequence on the other – the double helix. Then, five weeks later, they boldly stated that “the precise sequence of the bases is the code which carries the genetical information”. They argued that the DNA molecule contained a code that told the cell what protein to make.

Immediately, the physicist George Gamow suggested that the code must use sequences of three “letters” or bases. Given that there were 20 amino acids, a two-base code would not work (there are only 16 possible two-letter combinations of the four bases). A three-base code would produce 64 possible combinations – easily enough to encode the 20 amino acids.
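Gamow's counting argument above is easy to check directly. A minimal Python sketch (the arithmetic, not the biology, is the point):

```python
from itertools import product

bases = "ACGT"       # the four DNA bases
amino_acids = 20     # number of amino acids that need encoding

# Count the possible "words" for codes of length 2 and 3.
two_letter = len(list(product(bases, repeat=2)))
three_letter = len(list(product(bases, repeat=3)))

print(two_letter)    # 16: too few to encode 20 amino acids
print(three_letter)  # 64: easily enough
assert two_letter < amino_acids <= three_letter
```

A two-base code falls short by four, while a three-base code leaves 44 combinations to spare, which is why the actual code turned out to be redundant.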
Matthew Cobb writes that the genetic code just turned 60 years old:
It had been previously suggested by researchers such as Dounce (1952) and Caldwell and Hinshelwood (1950) that the order of the bases might in some way enable a gene to synthesise proteins, probably by acting as a physical template (Dounce even argued that three bases would correspond to a particular amino acid, which turned out to be right, but not for the physico-chemical reasons that Dounce argued). In 1950, Erwin Chargaff had suggested that a single change in a base could lead to a mutation, but again he viewed this in terms of a physical change to the DNA molecule.

The great step forward made by Watson and Crick in their second paper was to take these pre-existing ideas and reshape them in a less literal form. The sequence of bases was no longer seen in terms of a physical template for protein synthesis, but as something far more abstract – a code carrying genetical information.

Watson and Crick with their model of the DNA molecule. Crick is pointing with a slide-rule. Note the sketch of the double helix, by Crick’s wife Odile, pinned to the wall.

What is intriguing is where this novel interpretation came from. The first person who explicitly suggested that genes contained a ‘code-script’ was the physicist Erwin Schrödinger, in 1943. Although his ideas were widely-read, there were few attempts to explore the idea of a ‘code’, because the physical nature of the gene was unknown.
Jerry Coyne interviewed Watson:
Watson was here as an undergraduate, and was first interested in ornithology. He said his interests changed when he read Erwin Schrödinger’s 1944 book What is Life?, which inspired many biologists to work on the molecular basis of inheritance.
So maybe Watson got it from Schrödinger. Here is what that 1944 book said:
“It is these chromosomes, or probably only an axial skeleton fibre of what we actually see under the microscope as the chromosome, that contain in some kind of code-script the entire pattern of the individual's future development and of its functioning in the mature state. Every complete set of chromosomes contains the full code; so there are, as a rule, two copies of the latter in the fertilized egg cell, which forms the earliest stage of the future individual.”
This seems to be a pretty clear explanation of the concept of chromosomes carrying a genetic code.

I have noted before that Watson is still bragging about stealing credit from R. Franklin for the chemical structure of DNA, while Franklin is also accused of failing to make the inductive leap. Watson and Crick did figure out that the DNA base pairs are matched, but the real leap was to say that the DNA contained the genetic code. That leap appears to have been made by Schrödinger in 1944.

Wednesday, June 26, 2013

Wilson and effective field theory

The death of Kenneth G. Wilson has brought attention to effective field theory. Sean M. Carroll explains:
But it might be fun to just do a general discussion of the idea of “effective field theory,” which is crucial to modern physics and owes a lot of its present form to Wilson’s work.
Quantum gravity gets a lot of research attention based on the supposed contradiction between general relativity and quantum mechanics, and the failure to find a fully renormalizable theory. But there is a perfectly good effective field theory, and no problems at any observable energy scale. A lot of big-shot physicists tell us that there must be a better theory somewhere, but there is no scientific reason for believing that any theory will be better than what we have, or that experiment will ever validate such a theory.

Monday, June 24, 2013

Quantum poll likes Bohm over Bohr

I mentioned previous surveys of foundational attitudes, and now there is another poll of physicists on quantum mechanics.

This poll has 63% saying their favorite interpretation of quantum mechanics is de Broglie-Bohm, and 70% saying that Bohr was wrong. It appears that physics has regressed over the last 80 years. The de Broglie-Bohm interpretation doesn't even make much sense, and has not been extended to enough situations to be useful. It is much stranger than conventional interpretations. It is misguided for reasons that Bohr correctly explained decades ago.

Physics is in a sorry state. It used to be led by people who could reliably tell us what is correct. Okay, they can tell us whether there is a Higgs boson. But why can't they give us better answers to these questions?

Friday, June 21, 2013

Einstein rejected relativity geometrization

I have posted before, and in my book, that the textbook geometrical description of special relativity cannot be attributed to Albert Einstein. My reasons are (1) Einstein's 1905 special relativity paper was no more geometrical than the previous Lorentz theory; (2) Poincare's 1905 paper pre-dated Einstein's and had the spacetime geometry; (3) Minkowski built on Poincare's theory and popularized the spacetime geometry in 1908; and (4) Einstein did not understand the Poincare-Minkowski geometrical 4D approach and badmouthed it.

Einstein did later embrace Minkowski's approach, and used it in his general relativity work. So I assumed that he subscribed to geometrization after about 1910. Besides Minkowski, Einstein got the geometrical view from Grossmann, Levi-Civita, and Hilbert. Those were mathematicians who fully appreciated differential geometry.

However, a new paper, Why Einstein did not believe that General Relativity geometrizes gravity, by Dennis Lehmkuhl, claims:
I argue that, contrary to folklore, Einstein never really cared for geometrizing the gravitational or (subsequently) the electromagnetic field; indeed, he thought that the very statement that General Relativity geometrizes gravity "is not saying anything at all". Instead, I shall show that Einstein saw the "unification" of inertia and gravity as one of the major achievements of General Relativity.
The paper quotes Steven Weinberg:
I found that in most textbooks geometric ideas were given a starring role, so that a student who asked why the gravitational field is represented by a metric tensor, or why freely falling particles move on geodesics, or why the field equations are generally covariant would come away with an impression that this had something to do with the fact that space-time is a Riemannian manifold.

Of course, this was Einstein's point of view, and his preeminent genius necessarily shapes our understanding of the theory he created.
Lehmkuhl says that Weinberg was wrong, and that geometry was not Einstein's view.

Weinberg has also been widely criticized for saying that geometry is not necessary for general relativity, and he has refused to retract that view. Today the geometric view is universal. Even more strangely, his 1972 textbook goes on to say:
But now the passage of time has taught us not to expect that the strong, weak, and electromagnetic interactions can be understood in geometrical terms, and that too great an emphasis on geometry can only obscure the deep connections between gravitation and the rest of physics.
Weinberg got the 1979 Nobel Prize for an obscure 1967 paper that was later reinterpreted as giving a geometric model for the weak and electromagnetic interactions. Now all three interactions are best understood in terms of geometrical gauge theory. So Weinberg is famous for a 1967 paper that supposedly geometrized the weak force, but he denied in 1972 that such a thing was possible.

Hermann Weyl is responsible for geometrizing electromagnetism in 1918, along with gravitation. The term "gauge theory" is from Weyl. Einstein was not happy with that view. In 1925, Einstein attacked a Meyerson relativity book for using geometry:
Meyerson sees another essential correspondence between Descartes' theory of physical events and the theory of relativity, namely the reduction of all concepts of the theory to spatial, or rather geometrical, concepts; in relativity theory, however, this is supposed to hold completely only after the subsumption of the electric field in the manner of Weyl's or Eddington's theory. I would like to deal more closely with this last point because I have an entirely different opinion on the matter. I cannot, namely, admit that the assertion that the theory of relativity traces physics back to geometry has a clear meaning.
Einstein's famous 1916 general relativity paper had a lot of differential geometry explanations along with the physics, and had no references, so many people assumed that he invented it all. Historians know that is not true, but until now they at least thought that Einstein believed in the geometrical approach.

Lehmkuhl makes a convincing argument. It should convince Einstein historians that he did not accept the geometrical view, or even admit that it had meaning.

In another new paper, Brown and Lehmkuhl argue:
(But it is worth stressing here that Einstein did not view GR as furnishing a geometric explanation of gravitational phenomena; he continued to reject the notion of space, or space-time, as providing the cause of inertia.89)

89 For details of Einstein’s arguments against seeing GR as a ‘geometrization of gravity’, see Lehmkuhl [n.d.]; for related arguments, see Anderson [1999] and Brown [2009].
They also quote Einstein as writing in a letter:
You consider the transition to special relativity as the most essential thought of relativity, not the transition to general relativity. I consider the reverse to be correct. I see the most essential thing in the overcoming of the inertial system, a thing which acts upon all processes, but undergoes no reaction. The concept is in principle no better than that of the centre of the universe in Aristotelian physics.
To me, the big idea of relativity is the 4-dimensional geometry, with special relativity being the flat spacetime (with curved circle bundle) and general relativity being curved spacetime. The special relativity has been more important because it affected most of physics, while general relativity just affects cosmology.

It seems weird for historians to change their minds about Einstein long after his death. He frequently talked and wrote about his discoveries, so how could anyone get it wrong? It was decades after his death that historians decided that he did not follow Michelson-Morley. And they still cannot explain how he avoided discussing Poincare.

To a lot of physicists today, general relativity is a great theory because it geometrizes gravity. Now we know that was not Einstein's view.

In my view, the geometrization of all four fundamental forces is probably the greatest achievement of 20th century theoretical physics. Perhaps more credit should be given to those responsible for these discoveries, and less to those who did not even believe in them afterwards.

Tuesday, June 18, 2013

Accusing Higgs of adhocness

A new philosophy paper, Philosophical perspectives on ad hoc-hypotheses and the Higgs mechanism, argues:
The ad hoc-charge against the SMHM [“Standard Model Higgs Mechanism”] is interesting from a philosophical point of view on several grounds. First, the claim that our currently best theory of fundamental particle physics is based on an ad hoc-hypothesis sounds alarming and is certainly worthy of consideration in itself. Second, there exists a longstanding philosophical debate about the notion of adhocness, to which eminent philosophers of science such as Popper, Lakatos, Schaffner, Grünbaum, Leplin and others have made important contributions. This gives rise to the question of whether the SMHM qualifies as “ad hoc” according to any of these philosophers’ accounts of adhocness, and what the possible ramifications would be. Third, it seems natural to ask what impact the recent experimental discovery of a Higgs-like particle at the LHC has on the status of the ad hoc-charge against the SMHM.
This concept of adhocness may seem stupid, but it is the main reason those philosophers of science credit Einstein for special relativity. See, for example, Brush, Singh, Hawking, Zahar, Kacser, Kragh, and Rigden.

Einstein made his reputation on his most famous paper, his 1905 special relativity paper. But he had trouble explaining how it was an advance over previous work. Lorentz had the FitzGerald contraction, local time, constant speed of light, rejection of aether motion, relativistic mass, transformation of Maxwell's equations, and explanation of Michelson-Morley. Besides all that, Poincare had the clock synchronization, relativity principle, viewing the aether as a convention, E=mc2, Lorentz group, spacetime, non-Euclidean geometry, electromagnetic covariance, and gravity waves. Others at the time described Einstein's paper as just a presentation of Lorentz's theory, and no one thought that it was as advanced as Poincare's work.

Einstein's response was that Lorentz's theory was "ad hoc", and that he did not understand Poincare's 4D theory. Philosophers gradually picked up on this concept, and now a majority of Einstein scholars agree. They say that Lorentz was ad hoc in the sense that he paid close attention to experiments like Michelson-Morley. Einstein's innovation, they say, was to present relativity as a paradigm shift that ignores experiment. They blame Lorentz and Poincare for failing to be true believers in the new relativity religion, because they both said that experimental evidence could prove them wrong.

Philosophers went on to say that the great scientific revolutions are reworkings of previous theories that have no measurable advantages. Einstein's genius was to detach himself from the physical world. Theoretical physicists have largely adopted this view, and get most excited about untestable ideas like string theory, multiverse, many-worlds, etc.

All of this is horribly misguided. Special relativity comes to us from Lorentz, Poincare, and Minkowski. Einstein had almost zero influence on the creation or acceptance of the theory. Lorentz and Poincare were very much concerned with explaining the physical world when they developed the theory. Those who use Einstein as an example of the merits of incommensurable paradigm shifts are just wrong.

(Einstein was influential in acceptance of general relativity, but that was because he pointed to empirical evidence. With hard evidence, it was not one of those Kuhnian paradigm shifts that the philosophers love.)

All of this is detailed in my book, How Einstein Ruined Physics, and on this blog.

So the invention of the Higgs mechanism in the 1960s was ad hoc in the sense that it was an attempt to explain how a gauge field could be short-range, and the weak and strong forces were observed to be short-range. How is this a criticism? Of course physicists respond to data when formulating their theories. That is how it has always been done, until Einstein wasted away his post-1925 life on unified field theories, and string theorists took over in 1980 or so.

Monday, June 17, 2013

Physics has gone too far

Science writer Jim Baggott is plugging a new book, Farewell to Reality: How Modern Physics Has Betrayed the Search for Scientific Truth. I liked his 2010 book, The Quantum Story: A History in 40 Moments. He debates string theorist Mike Duff, and there is commentary by Lumo and Woit. Baggott argues:
There are no clues in the available scientific data about how these problems might be solved, and theorists have been obliged to speculate. But, in Farewell to Reality, I argue that in their ambition to develop a "theory of everything", some theorists have crossed a line. The resulting theories, such as superstring theory (or M-theory), are not grounded in empirical data and produce no real predictions, so they can't be tested. Albert Einstein once warned: "Time and again the passion for understanding has led to the illusion that man is able to comprehend the objective world rationally by pure thought without any empirical foundations – in short, by metaphysics." Now, metaphysics is not science. Yet a string of recent bestselling popular science books, supported by press articles, radio and television documentaries, have helped to create the impression that this is all accepted scientific fact. Physics has gone too far.
Duff replies with phony history lessons:
Quantum theory, for example, was largely driven by empirical results, whereas Einstein's general theory of relativity was more a product of speculation and thought experiments (contrary to what your quote implies). Speculation, then, is a vital part of the scientific process. When Paul Dirac wrote down his equation describing how quantum particles behave he wasn't just explaining the electron, whose properties had been well established in experiments. His equation also predicted the hitherto undreamed-of positron, and hence the whole concept of antimatter.
Quantum theory was driven by data, but so was relativity and Dirac's theory. Poincare first conceived relativistic gravity in order to understand the finite propagation of gravity and the precession of Mercury's orbit.

Duff also says:
It is a common fallacy that physics is only about what has already been confirmed in experiments. The Higgs boson had no foundation in empirical reality when it was predicted in 1964. ...

Theories rarely spring fully formed from the minds of their discoverers. Chapter 2 of your book reminds us that it took 30 years of quantum entanglement (Einstein's "spooky action at a distance", proposed in 1935) before John Bell made a falsifiable prediction and another 20 before Alain Aspect tested it experimentally. Was all the entanglement research done in the meantime, including Einstein's, unscientific metaphysics? I don't think so.
The Higgs mechanism was proposed as a way of explaining the short-range nature of nuclear forces. Yes, the particle was only found 50 years later.

Einstein's quantum work was unscientific metaphysics. Einstein and Bell were trying to prove quantum mechanics wrong by showing some supposedly counter-intuitive aspects of it. If they had turned out to be right, and had inspired someone to disprove quantum mechanics, then I guess I would have had to admit that they were onto something. But it is clear now that they were barking up the wrong tree. That was also the opinion of the leading physicists of their day.

Baggott is right that theoretical physics has become Fairy Tale Physics, as explained in this podcast or my book.

Sunday, June 16, 2013

Crediting Einstein for ignoring data

Sydney Finkelstein writes on the UK BBC:
What would big data think of Einstein?

Albert Einstein, author of the theory of relativity, was awarded the Nobel Prize for Physics in 1921. Would his genius make it past the big data test today? ...

But big data should not be confused with big ideas. It is in those ideas — the ones that make us conjure up the image of Albert Einstein — that lead to breakthroughs. ...

Companies, like civilisations, advance by leaps and bounds when genius is let loose, not when genius is locked away and deemed too out of the mainstream of data-driven knowledge.

What if Albert Einstein lived today and not 100 years ago? What would big data say about the general theory of relativity, about quantum theory? There was no empirical support for his ideas at the time — that’s why we call them breakthroughs.

Today, Einstein might be looked at as a curiosity, an “interesting” man whose ideas were so out of the mainstream that a blogger would barely pay attention. Come back when you've got some data to support your point.
No, this popular view of Einstein and of scientific breakthroughs is entirely wrong. Relativity and quantum theory were driven by empirical support. Special relativity was driven by Michelson-Morley and electron beams. General relativity by Mercury's precession and starlight deflection. Quantum theory by atomic spectra. We called them breakthroughs because they did explain the data, not because they did not.

Einstein is commonly praised for ignoring data and thinking deep thoughts. However, this praise is misguided, and there are very few examples of science progressing that way.

5-sigma does not prove Higgs

Economics professor Stephen T. Ziliak writes:
I want to believe as much as the next person that particle physicists have discovered a Higgs boson, the so-called “God particle,” one with a mass of 125 gigaelectronic volts (GeV). But so far I do not buy the statistical claims being made about the discovery. Since the claims about the evidence are based on “statistical significance” – that is, on the number of standard deviations by which the observed signal departs from a null hypothesis of “no difference” – the physicists’ claims are not believable. Statistical significance is junk science, and its big piles of nonsense are spoiling the research of more than particle physicists.

I’m an economist. So don’t trust me with newfangled junk bonds or the fate of the world financial system. But here is something you can believe, and will want to: Statistical significance stinks. In statistical sciences from economics to medicine, including some parts of physics and chemistry, the ubiquitous “test” for “statistical significance” cannot, and will not, prove that a Higgs boson exists, any more than it can prove the reality of God, the existence of a good pain pill, or the validity of loose monetary policy. ...

I show in a book I wrote on the subject with Deirdre N. McCloskey, The Cult of Statistical Significance (2008), that the null hypothesis test procedure – another name for statistical significance testing – produces many such errors, with tragic results for real world economies, law, medicine, and even human life. ...

Consider again the illogic of the physicists’ procedure. The signal in the data which has been observed over and above background noise (denoted as being at 5 sigma) is possibly a Higgs boson – that is true. But in sober moments – when flash-bulbs fade and fizzy drinks fall flat – those same particle physicists admit that the jury is still out – that the statistically significant bump could be “consistent with” other plausible hypotheses, not specified by their models – just like Mrs. Smith could have died of something other than cramp, and probably did.
He has a point, if the 5-sigma significance were the main evidence for the Higgs boson. There is danger in relying on p-values too much.
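For reference, "5 sigma" corresponds to a tiny one-sided tail probability under the null hypothesis. A quick Python check of the conventional Gaussian tail calculation (nothing specific to the CERN analysis is assumed here):

```python
import math

def one_sided_p(sigma):
    """One-sided tail probability of a standard normal beyond `sigma`."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p = one_sided_p(5)
print(p)  # about 2.87e-7, i.e. roughly 1 in 3.5 million
```

Ziliak's complaint survives the small number: a p-value of this size quantifies how surprising the bump would be if there were no signal at all; it does not, by itself, say how probable the Higgs hypothesis is, or rule out the other "consistent with" hypotheses he mentions.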

Friday, June 14, 2013

Lightman says impossible to know everything

This Radiolab podcast says:
Inspired by an essay written by physicist and novelist Alan Lightman, Robert pays a visit to Brian Greene to ask if the latest developments in theoretical physics spell a crisis for science -- where we find we've reached the limit of what we can see and test, and are left with mathematical equations that can't be verified by experiments or observation.
That essay says
It is important to point out that neither eternal inflation nor string theory has anywhere near the experimental support of many previous theories in physics, such as special relativity or quantum electrodynamics, mentioned earlier. Eternal inflation or string theory, or both, could turn out to be wrong. However, some of the world’s leading physicists have devoted their careers to the study of these two theories. ...

“We had a lot more confidence in our intuition before the discovery of dark energy and the multiverse idea,” says Guth. “There will still be a lot for us to understand, but we will miss out on the fun of figuring everything out from first principles.”
Lightman is expanding this essay into a book. I guess it will explain how the world's leading physicists have wasted their careers on untestable theories that were supposed to explain the universe from first principles, but they have now all decided that we live in a multiverse that can never be explained. Greene apparently still believes in some sort of mathematical theory of everything.

Wednesday, June 12, 2013

SciAm pushes Quantum Bayesianism

SciAm has a new article on an alternative to quantum mechanics:
Can Quantum Bayesianism Fix the Paradoxes of Quantum Mechanics?
A new version of quantum theory sweeps away the bizarre paradoxes of the microscopic world. The cost? Quantum information exists only in your imagination

Flawlessly accounting for the behavior of matter on scales from the subatomic to the astronomical, quantum mechanics is the most successful theory in all the physical sciences. It is also the weirdest.

In the quantum realm, particles seem to be in two places at once, information appears to travel faster than the speed of light, and cats can be dead and alive at the same time. Physicists have grappled with the quantum world's apparent paradoxes for nine decades, with little to show for their struggles. Unlike evolution and cosmology, whose truths have been incorporated into the general intellectual landscape, quantum theory is still considered (even by many physicists) to be a bizarre anomaly, a powerful recipe book for building gadgets but good for little else. The deep confusion about the meaning of quantum theory will continue to add fuel to the perception that the deep things it is so urgently trying to tell us about our world are irrelevant to everyday life and too weird to matter.

This article was originally published with the title Quantum Weirdness? It's All in Your Mind.

In Brief

Quantum mechanics is an incredibly successful theory but one full of strange paradoxes. A recently developed model called Quantum Bayesianism (or QBism) combines quantum theory with probability theory in an effort to eliminate the paradoxes or put them in a less troubling form.

QBism reimagines the entity at the heart of quantum paradoxes — the wave function. Scientists use wave functions to calculate the probability that a particle will have a certain property, such as being in one place and not another. But paradoxes arise when physicists assume that a wave function is real.

QBism maintains that the wave function is solely a mathematical tool that an observer uses to assign his or her personal belief that a quantum system will have a specific property. In this conception, the wave function does not exist in the world — rather it merely reflects an individual's subjective mental state.
Here is a 2010 research article with more info:
This article summarizes the Quantum Bayesian point of view of quantum mechanics, with special emphasis on the view's outer edges --- dubbed QBism. QBism has its roots in personalist Bayesian probability theory, is crucially dependent upon the tools of quantum information theory, and most recently, has set out to investigate whether the physical world might be of a type sketched by some false-started philosophies of 100 years ago (pragmatism, pluralism, nonreductionism, and meliorism). ...

There is something about quantum theory that is different in character from any physical theory posed before. To put a finger on it, the issue is this: The basic statement of the theory — the one we have all learned from our textbooks — seems to rely on terms our intuitions balk at as having any place in a fundamental description of reality. The notions of “observer” and “measurement” are taken as primitive, the very starting point of the theory. This is an unsettling situation! Shouldn’t physics be talking about what is before it starts talking about what will be seen and who will see it?
This is foolishness. Previous scientific theories, such as thermodynamics and relativity, have also emphasized what is seen by observers.

The article explains that Bayesian statistics is all about probability not being a physical entity. So why all the emphasis on it? This whole approach seems confused. The purpose is supposed to be to "fix the paradoxes of quantum mechanics", but these alternatives are all more confusing and counter-intuitive than quantum mechanics. There is no sign that QBism would be any better.
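The Bayesian view the article leans on treats probability as a degree of belief updated by Bayes' rule, not as a physical quantity. A toy Python illustration of such a subjective update (the numbers are invented purely for illustration):

```python
def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior belief in hypothesis H after seeing evidence E.

    prior: P(H); likelihood: P(E|H); likelihood_alt: P(E|not H).
    """
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# An observer starts at 50/50 and sees evidence twice as likely under H.
posterior = bayes_update(0.5, 0.8, 0.4)
print(posterior)  # about 0.667: the belief changes, though nothing physical did
```

On the QBist reading, a wave-function "collapse" is just this kind of belief revision in an observer's head, which is exactly what makes the emphasis on it puzzling as a fix for quantum mechanics.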

Quantum mechanics does not say that particles can be in two places at once, or that information travels faster than the speed of light. QBism is one of several misguided approaches that try to deny quantum mechanics by attributing misconceptions to the theory.

Monday, June 10, 2013

Still no quantum computer

Scott Aaronson continues to deny the existence of quantum computers:
D-Wave founder Geordie Rose claims that D-Wave has now accomplished its goal of building a quantum computer that, in his words, is “better at something than any other option available.” This claim has been widely and uncritically repeated in the press, so that much of the nerd world now accepts it as fact. However, the claim is not supported by the evidence currently available. It appears that, while the D-Wave machine does outperform certain off-the-shelf solvers, simulated annealing codes have been written that outperform the D-Wave machine on its own native problem when run on a standard laptop. More research is needed to clarify the issue, but in the meantime, it seems worth knowing that this is where things currently stand.
Aaronson is a big quantum computing enthusiast, and he is doing a public service by throwing cold water on exaggerated claims.
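The comparison Aaronson cites is easy to reproduce in miniature. The D-Wave machine natively minimizes an Ising energy function, and classical simulated annealing attacks the same problem on ordinary hardware. Here is a minimal sketch of simulated annealing on a toy Ising instance; the function name, cooling schedule, and parameters are my own illustration, not D-Wave's problem format or the benchmark code referred to above.

```python
import math
import random

def simulated_annealing(J, h, steps=20000, t_start=2.0, t_end=0.01):
    """Minimize the Ising energy
        E(s) = -sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i
    over spins s_i in {-1,+1}.  J is an upper-triangular n x n matrix
    of couplings, h a length-n list of local fields."""
    n = len(h)
    s = [random.choice([-1, 1]) for _ in range(n)]

    def energy(conf):
        e = -sum(h[i] * conf[i] for i in range(n))
        e -= sum(J[i][j] * conf[i] * conf[j]
                 for i in range(n) for j in range(i + 1, n))
        return e

    e = energy(s)
    best_s, best_e = s[:], e
    for k in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (k / steps)
        i = random.randrange(n)
        # Energy change from flipping spin i.
        de = 2 * s[i] * (h[i] + sum(J[min(i, j)][max(i, j)] * s[j]
                                    for j in range(n) if j != i))
        # Metropolis rule: always accept downhill moves, sometimes uphill.
        if de <= 0 or random.random() < math.exp(-de / t):
            s[i] = -s[i]
            e += de
            if e < best_e:
                best_s, best_e = s[:], e
    return best_s, best_e
```

On a small ferromagnetic chain this finds the aligned ground state almost instantly; the laptop codes mentioned in the quote are heavily optimized versions of the same basic idea, run on much larger instances.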

I think that it is unlikely that a useful quantum computer will ever be built. There are a lot of smart people with million dollar research grants trying to prove me wrong. Aaronson makes his mistake with this argument:
This talk will assume what David Deutsch calls the “momentous dichotomy”:

Either a given technology is possible, or else there’s some principled reason why it’s not possible.

Example application: Quantum computing
This sort of reasoning can lead you to all sorts of science fiction ideas.

I never understood a principled reason why Maxwell's demon is impossible. And yet it seems to be impossible.

Meanwhile Aaronson has released a long new paper on The Ghost in the Quantum Turing Machine. This discusses free will and other topics, and he seems to say some sensible things. I hope to post more about it later.

Update: This paper is an elaboration of his previous views. To read those, see his lecture (that was a draft for his book), or this summary of another lecture:
This backward causation, or retrocausality, was the “loony” aspect of Aaronson’s talk. Except there’s nothing loony about it. It is a concept that Einstein’s special theory of relativity made a live possibility. Relativity convinced most physicists that we live in a “block universe” in which past, present, and future are equally real. In that case, there’s no reason to suppose the past influences the future, but not vice-versa. Although their theories shout retrocausality, physicists haven’t fully grappled with the implications yet. It might, for one thing, explain many of the mysteries of quantum mechanics.

In a follow-up email, Aaronson told me that the connection between free will and cosmic initial state was also explored by philosopher Carl Hoefer in a 2002 paper. What Aaronson has done is apply the insights of quantum mechanics. If you can’t clone a quantum state perfectly, you can’t clone yourself perfectly, and if you can’t clone yourself perfectly, you can’t ever be fully simulated on a computer. Each decision you take is yours and yours alone. It is the unique record of some far-flung collection of particles in the early universe. Aaronson wrote, “What quantum mechanics lets you do here, basically, is ensure that the aspects of the initial microstate that are getting resolved with each decision are ‘fresh’ aspects, which haven’t been measured or recorded by anyone else.”
He has a quantum-inspired view of free will, and is not squarely in the pro or anti camps.
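The no-cloning step in that argument is a short linearity fact; a standard sketch, in my notation:

```latex
\text{Suppose a unitary } U \text{ clones arbitrary states: }
U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle .
\\
\text{Unitarity preserves inner products, so for any } |\psi\rangle, |\phi\rangle :
\quad
\langle\phi|\psi\rangle = \langle\phi|\psi\rangle^{2} ,
\\
\text{hence } \langle\phi|\psi\rangle \in \{0, 1\} :
\text{ only identical or orthogonal states can be copied, never an unknown state.}
```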

Update: There are comments on Aaronson's free will paper here, and possibly here.

Saturday, June 8, 2013

Radio interview has relativity history

The UK BBC just broadcast this program on the history of relativity:
Melvyn Bragg and his guests discuss Einstein's theories of relativity. Between 1905 and 1917 Albert Einstein formulated a theoretical framework which transformed our understanding of the Universe. The twin theories of Special and General Relativity offered insights into the nature of space, time and gravitation which changed the face of modern science. Relativity resolved apparent contradictions in physics and also predicted several new phenomena, including black holes. It's regarded today as one of the greatest intellectual achievements of the twentieth century, and had an impact far beyond the world of science.

With:
Ruth Gregory, Professor of Mathematics and Physics at Durham University
Martin Rees, Astronomer Royal and Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge
Roger Penrose, Emeritus Rouse Ball Professor of Mathematics at the University of Oxford.
They do a lot of Einstein worship, and say, paraphrasing:
Einstein's greatest paper was his 1905 special relativity. His next big contribution was general relativity.

Special relativity revolutionized mechanics because it denied absolute time.

Einstein's relativity explained Michelson-Morley, altho they admit that he is credited for using abstract reasoning without looking at experiments, and may not have even known about Michelson-Morley.

The speed of light is constant and Einstein completed Maxwell's theory.

Relativity is best expressed as spacetime geometry. Penrose admits that Einstein had nothing to do with this advance, and did not even accept it at first.

Einstein got internationally famous with the 1919 solar eclipse experiment.

Einstein explained why different masses fall at the same rate.

Einstein misjudged the consequences of general relativity, such as black holes and the expansion of the universe.

Einstein mainly advanced physics with general relativity. Lorentz, Poincare, and Minkowski may have had the whole of special relativity without Einstein.
They started out saying they would explain Einstein's contribution, but what they said does not hold water. Lorentz and Poincare were famous thru-out Europe for denying absolute time, long before Einstein. Einstein added nothing in 1905.

Einstein got the constant speed of light from Lorentz, and added nothing to Maxwell's theory.

It may be true that Einstein helped advance general relativity. He got Grossmann, Levi-Civita, and Hilbert interested in the subject, and they figured out the field equations and geometrical properties. I don't know who figured out that small objects follow geodesics in spacetime, regardless of mass. Einstein did say it, but he did not credit his sources, so it could have been someone else. We know that Grossmann and Levi-Civita were responsible for covariance, because Einstein published papers saying that covariance was wrong after they proposed it.

Thursday, June 6, 2013

Theoretical physicist seeks revolutions

Here is an interview of a leading physicist:
In the early 1970s, David J. Gross exposed the hidden structure of the atomic nucleus. He helped to reinvent string theory in the 1980s. In 2004, he shared the Nobel Prize in Physics. And today he struggles mightily to describe the basic forces of nature at the Planck scale (billions of times smaller than a proton), where, string theorists hope, the equations of gravity and quantum mechanics mesh. ...

Gross characterizes theoretical physics as rife with esoteric speculations, a strange superposition of practical robustness and theoretical confusion. He has problems with the popularizing of “multiverses” and “landscapes” of infinite worlds, which are held up as emblematic of physical reality. Sometimes, he says, science is just plain stuck until new data, or a revolutionary idea, busts the status quo. But he is optimistic: Experience tells him that objects that once could not be directly observed, such as quarks and gluons, can be proven to exist. Someday, perhaps the same will be true for the ideas of strings and branes and the holographic boundaries that foreshadow the future of physics. ...

Gross: Quantum mechanics remains our latest revolution. That said, some scientists are waiting for an irreducible final theory. Reductionism has proven to be an extraordinarily successful method of investigation. The Standard Model is a very precise, reductionist theory. But it wasn’t radical.

Simons Science: When you chaired the 25th Solvay Conference in 2011, you observed, in your opening remarks, that there is “confusion at the frontiers of physics.” Why?

Gross: A scientific “frontier” is defined as a state of confusion. Nonetheless, we have a big problem: Physics explains the world around us with incredible precision and breadth. But further explanation is highly constrained by what we already know. Theories of quantum gravity, for instance, represent serious challenges to our current theoretical framework.

Simons Science: String theory strives to unite all four fundamental forces: electromagnetism, radiation (weak), nuclear force (strong) and gravity.

Gross: First of all, string theory is not a theory. The Standard Model is a theory. String theory is a model, a framework, part of quantum field theory. It’s a set of rules and tricks for constructing consistent quantum states, a lot of them.

Simons Science: At Solvay, you said the hope that string theory would produce a unique dynamical description of reality appears to be a “mirage.”

Gross: String theory is not as revolutionary as we once hoped. Its principles are not new: They are the principles of quantum mechanics. String theory is part and parcel of quantum field theory.

The theoretical structure of modern physics is a lot bigger and richer than we thought, because it’s a theory of dynamical space-time, which must incorporate gravity, a force that is not yet integrated into the Standard Model.

There are frustrating theoretical problems in quantum field theory that demand solutions, but the string theory “landscape” of 10^500 solutions does not make sense to me. Neither does the multiverse concept or the anthropic principle, which purport to explain why our particular universe has certain physical parameters. These models presume that we are stuck, conceptually.
Gross is hung up on the idea that physics needs revolutions. He sounds like a Marxist saying stuff like this.

At least he admits that string theory, “landscape”, multiverse, anthropic principle, etc. have flopped. But his reason is that they are not revolutionary enough!
Simons Science: Is it possible to falsify string theory/quantum field theory? Or is that a purely philosophical question?

Gross: The question of how we decide whether our theories are correct or wrong or falsifiable has a philosophical aspect. But in the absence of empirical data, can we really judge the validity of a theory? Perhaps. Can philosophy by itself resolve such an ontological quandary? I doubt it. Philosophers who contribute to making physics are, thereby, physicists!

Now, in the last century, great physicists such as Ernst Mach, Bohr and Einstein were also philosophers who were concerned with developing theories of knowledge. Einstein famously criticized Heisenberg for focusing only on observable entities, when there can be indirect evidence for entities that cannot be seen. It may be the same with string theory.

Simons Science: Is the revolution at hand?

Gross: Those of us in this game believe that it is possible to go pretty far out on a limb, if one is careful to be logically consistent within an existing theoretical framework. How far that method will succeed is an open question.
It sounds as if he is clinging to the silly idea that some hidden variable theory is going to revolutionize physics.

I should credit him with saying that physics already explains the world, that further explanation is constrained by present knowledge, and that all the current revolution attempts are failures.

Monday, June 3, 2013

Led by the beauty of the mathematics

NewScientist reports:
Physicists have a problem, and they will be the first to admit it. The two mathematical frameworks that govern modern physics, quantum mechanics and general relativity, just don't play nicely together despite decades of attempts at unification. Eric Weinstein, a consultant at a New York City hedge fund with a background in mathematics and physics, says the solution is to find beauty before seeking truth. ...

Weinstein says his approach follows in the footsteps of Albert Einstein, Paul Dirac and Chen Ning Yang, the physicists whose equations he is attempting to unify. "The principal authors of all three of our most basic equations subscribe to the aesthetic school, while the rest of the profession had chased the consequence of beauty with adherence to data," he says.

For example, Dirac predicted the existence of the positron based on the symmetries of his equation describing the electron. He was led by the beauty of the mathematics, not the data at the time, which said such a thing did not exist, says Weinstein. ...

In addition, any modification to the central equations of physics would have to give results that are only a slight correction to existing theories – just as Einstein's equations offer very similar answers to the approximations of Newton's equations, says John March-Russell. Right now, equations and experiments are agreeing to 1 part in 10 billion, so Weinstein's theory would have to be a very small tweak indeed, and he has yet to reveal its size.
This illustrates what is wrong with physics. When the theory works to ten decimal places, it is silly to say that physics has a big problem. Second, Einstein, Dirac, and Yang never accomplished anything worthwhile after they jumped to the "aesthetic school" and attempted grand unifications while ignoring data.

As I have explained in my book, it is a big myth that Einstein discovered relativity by ignoring data.

Dirac was trying to understand the electron and the proton with his equations. Only after his theory did not match the data, did he suggest that maybe there is another particle. He was not attempting some sort of unified theory for the sake of beauty.

I don't know too much of Chen Ning Yang. His Nobel bio says:
Dr. Yang is a prolific author, his numerous articles appearing in the Bulletin of the American Mathematical Society, The Physical Review, Reviews of Modern Physics, and the Chinese Journal of Physics. ...

Dr. Yang is a quiet, modest, and affable physicist;
From what little I know, he was not modest. I was surprised that the site would brag first about him publishing in a math journal, so I looked it up, and I could only find this paper. The paper is trivial. He must have earned his reputation in some other way.

At least NewScientist is skeptical, but there is no need to look at his equations. If someone proposes a grand theory that claims to unify and replace two theories that are accurate to 10 decimal places, but he cannot tell you whether or not his theory is accurate to any decimal places, then you can be pretty sure that he has not solved a physics problem.

Here is another example of someone with crackpot ideas. I just listened to the podcast, Rupert Sheldrake on "Science Set Free". I don't have to read his papers. Just listen to his justification, and note how little it has to do with reality. He suggests a million dollar prize for a perpetual motion machine, to be verified by neutral experts, not scientists who are prejudiced against such a thing. But the inventor of a source of useful free energy could probably make a trillion dollars from it, so such a prize would be of no consequence.