Wednesday, August 26, 2015

Hawking has new black hole info theory

I posted below about the Bekenstein-Hawking black hole entropy nonsense, and now Hawking continues to babble nonsense about the black hole information paradox:
Stephen Hawking, who once stunned the scientific community by saying that black holes emit radiation, expounded on another groundbreaking theory on Tuesday.

"The message of this lecture is that black holes ain't as black as they are painted. They are not the eternal prisons they were once thought," Hawking told a meeting of experts, according to the New Scientist. "Things can get out of a black hole both on the outside and possibly come out in another universe." ...

"Quantum mechanics — a highly successful theory that describes physical phenomena at the scale of atoms and subatomic particles — says that information can never be lost, even when it falls into a black hole. It is widely believed to be an inviolable law of nature. ..."

During his talk on Tuesday at the KTH Royal Institute of Technology in Stockholm, Hawking proposed that the information of the particles sucked into a black hole eventually makes it out in the radiation that is emitted by a black hole.

The information emitted, however, is not usable. ...

"At Monday's public lecture, he explained this jumbled return of information was like burning an encyclopedia: You wouldn't technically lose any information if you kept all of the ashes in one place, but you'd have a hard time looking up the capital of Minnesota."
I hate to pick on someone with a degenerative neurological disorder, but Hawking lost it decades ago.

First, nothing in quantum mechanics says that info can never be lost. The unitary evolution of the Schrodinger equation is time reversible, so you could say that nothing is lost in those processes. But measurement is not like that, there is nothing special about information, and quantum mechanics is not a time reversible theory.

Second, there is no "inviolable law of nature" that says that info can never be lost. Just ask yourself: Who got the Nobel Prize for that? What is the experiment demonstrating it? What is even the theoretical basis for it? What useful consequence does it have? The answers are no one and nothing, because there is no such law.

Third, if there is some definition of information that allows an encyclopedia to be burned without losing any of it, then it is unrelated to every definition of information that I know. It is crazy to argue that burning an encyclopedia conserves information.

Fourth, it takes a trillion trillion years for a black hole to evaporate, so this is completely disconnected from any observational science.
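
For scale, here is a back-of-envelope check (a sketch only, assuming the standard evaporation formula t = 5120*pi*G^2*M^3/(hbar*c^4) and a solar-mass black hole); it gives something like 10^67 years, which only strengthens the point:

```python
# Back-of-envelope Hawking evaporation time for a solar-mass black hole.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

t_sec = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)
print(f"~{t_sec / 3.156e7:.0e} years")   # ~1e67 years
```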

People make fun of medieval scholars for supposedly debating how many angels can dance on the head of a pin. This is the modern equivalent. Someday people will make fun of 20th- and 21st-century physics for arguing about this stupid issue.

Update: Lubos Motl weighs in on this issue, and says that you have to be a string theorist to understand the finer points.

Monday, August 24, 2015

Autism discoveries not independent

The history of science is filled with examples of people independently discovering some major principle. In my experience, tho, such claims of independence do not hold up under scrutiny.

For example, I have doubted claims that the Pythagorean theorem was proved independently. I have many posts doubting that Einstein re-discovered relativity independently.

Here is an example from psychology:
In one of the uncanny synchronicities of science, autism was first recognized on two continents nearly simultaneously. In 1943, a child psychiatrist named Leo Kanner published a monograph outlining a curious set of behaviors he noticed in 11 children at the Johns Hopkins Hospital in Baltimore. A year later, a pediatrician in Vienna named Hans Asperger, who had never seen Kanner's work, published a paper describing four children who shared many of the same traits. Both Kanner and Asperger gave the condition the same name: autism — from the Greek word for self, autòs — because the children in their care seemed to withdraw into iron-walled universes of their own.
The article makes a good case that both of them stole the idea from Georg Frankl, who worked directly for each of them over many years. See also History of Asperger syndrome.

A NY Times book review does not know about the connection:
The history of science is studded with stories of simultaneous discovery, in which two imaginative souls (or more!) turn out to have been digging tunnels to the same unspoiled destination. The most fabled example is calculus, developed independently in two different countries by Isaac Newton and Gottfried Wilhelm von Leibniz, but the list stretches back centuries and unfurls right into the present. One can add to it sunspots, evolution, platinum, chloroform ... and now autism, as the science journalist Steve Silberman informs us, identified separately by Leo Kanner and Hans Asperger. The crucial difference is that Kanner had the fortune to publish his work in Baltimore, while Asperger had the misfortune to publish his in Nazi-controlled Vienna, and this accident of geopolitics lies at the tragic core of Silberman’s ambitious, meticulous and largehearted (if occasionally long-winded) history, “NeuroTribes: The Legacy of Autism and the Future of Neurodiversity.”
No, autism was not independently identified by Kanner and Asperger, and maybe I should doubt some of those other stories. It is my understanding that Newton and Leibniz were not as independent as it appeared, as they saw unpublished manuscripts from each other. I guess some people say that Darwin and Wallace independently discovered evolution by natural selection, but I am not convinced that any of that was independent. I don't know about sunspots, platinum, and chloroform.

Sunday, August 23, 2015

Bekenstein black hole area and entropy

The NY Times obituary of Jacob Bekenstein says:
Jacob Bekenstein, a physicist who prevailed in an argument with Stephen Hawking that revolutionized the study of black holes, and indeed the nature of space-time itself, died on Sunday in Helsinki, Finland, where he was to give a physics lecture. He was 68. ...

Black holes are the prima donnas of Einstein’s general theory of relativity, which predicts that space wraps itself completely around some object, causing it to disappear as a black hole. Dr. Bekenstein suggested in his Ph.D. thesis that the black hole’s entropy, a measure of the disorder or wasted energy in a system, was proportional to the area of a black hole’s event horizon, the spherical surface in space from which there is no return. ...

Lee Smolin, a theorist at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, said, “No result in theoretical physics has been more fundamental or influential than his discovery that black holes have entropy proportional to their surface area.”

Dr. Bousso called Dr. Bekenstein “one of the very few giants in the field of quantum gravity.”
Really? Is this the best theoretical physics has to offer?

This formula is just a definition, with no observable consequences. The concept of entropy helps us understand thermodynamic reactions, but black holes don't have any observable thermodynamics. Hawking says that they will evaporate over the next trillion trillion years, but there is no known way to verify that.
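
For what it is worth, the formula is simple enough to evaluate in a few lines. A minimal sketch, assuming SI constants and a solar-mass Schwarzschild black hole:

```python
# Bekenstein-Hawking entropy S = k*c^3*A/(4*G*hbar), with A the horizon area.
import math

G, hbar, c, k = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23  # SI constants
M = 1.989e30                      # kg, one solar mass

r_s = 2 * G * M / c**2            # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s**2          # horizon area, m^2
S = k * c**3 * A / (4 * G * hbar)
print(f"S ~ {S:.1e} J/K, about {S/k:.0e} in units of k")   # ~1e77 k
```

The number comes out enormous, but nothing in it connects to any measurement.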

45 years ago, some grad student noticed that the area and the entropy of a black hole both tend to increase, so he speculated that there might be some relation. For that he became a giant in the field of quantum gravity? The obituary suggests that he might have won a Nobel Prize, if he had lived long enuf. Maybe if he had lived a trillion trillion years while we all watch a black hole evaporate.

I don't want to badmouth Bekenstein, but it shows the sorry state of theoretical physics and quantum gravity that a trivial definition with no observable consequences is hailed as the greatest achievement in the field.

The string theorists rave about this formula because they say that it is backed up by some string-theoretic calculations. The calculation does not even have much to do with string theory, but the string theorists brag about it as their greatest accomplishment, and as their strongest experimental validation. Again, this only shows the sorry state of string theory.

Peter Woit exposes multiverse nonsense:
Susskind deals straightforwardly with the lack of scientific evidence problem by simply saying things that aren’t true:
This idea of a multiverse is not gratuitous speculation. No, it really comes out of both experiment or observational physics about the universe and the current theories as best we understand them.
He doesn’t explain what the experimental evidence for the multiverse is.
and points to this book review:
Physicists have a nerve. I know one (I’ll call him Mark) who berates every religious person he meets, yet honestly thinks there exist parallel universes, exactly like our own, in which we all have two noses. He refuses to give any credit to Old Testament creation myths and of course sneers at the idea of transubstantiation. But, without any sense of shame, he insists in the same breath that humans are made from the fallout of exploded stars; that it is theoretically possible for a person to decompose on one side of a black hole and recompose on the other, and that there are diamonds in the sky the size of the moon.
Physics is supposed to be the hardest of the hard sciences, and physicists the most level-headed. But physics has lost its way, and promotes stuff more outlandish than Biblical creation myths.

Saturday, August 22, 2015

Relativity forbids rigid objects

Physicist Lubos Motl gives a relativity lesson:
It is clearly a totally rudimentary problem in special relativity. It has its own name and if you search for Ehrenfest paradox, you quickly find out that there's been a lot of debates in the history of physics – relatively to what one would expect for such a basic high school problem in classical physics. Born, Ehrenfest, Kaluza, von Laue, Langevin, Rosen, E ...

A rod can't be "unbendable" or "unsqueezable" or "unstretchable" because it would mean that there is something in the rod that guarantees its prescribed proper length at all times. ...

This non-existence of perfectly rigid rods in relativity should be totally obvious for rods. But it holds for disks, too. ...

At any rate, the non-existence of perfectly rigid bodies is undoubtedly a characteristic, almost defining, implication of relativity.

I am pretty amazed that even in 2015, 110 years after Einstein presented his relativity, this very simple point remains controversial. Well, I am convinced that at least since 1911, almost all good physicists have agreed what the correct answer basically is.
He is right. Part of the problem is that Einstein's famous 1905 relativity paper declared:
The theory to be developed is based — like all electrodynamics — on the kinematics of the rigid body, since the assertions of any such theory have to do with the relationships between rigid bodies (systems of co-ordinates), clocks, and electromagnetic processes. Insufficient consideration of this circumstance lies at the root of the difficulties which the electrodynamics of moving bodies at present encounters. ...

If a material point is at rest relatively to this system of co-ordinates, its position can be defined relatively thereto by the employment of rigid standards of measurement and the methods of Euclidean geometry, and can be expressed in Cartesian co-ordinates. ...

Let there be given a stationary rigid rod; and let its length be l as measured by a measuring-rod which is also stationary.
Einstein's whole presentation is in terms of rigid bodies. If there is no such thing as a rigid body, then it is hard to make any sense of that paper.

Motl is right, here. The whole discovery of special relativity was based on the insight by FitzGerald and Lorentz that the Michelson-Morley apparatus was not really rigid, but can contract as motion deforms the electromagnetic fields that hold the molecules together. Then Poincare and Minkowski had the insight that space and time were being deformed.

Poincare's 1905 paper defined distance in terms of how far light goes in a specified time. Minkowski made the non-Euclidean metric the fundamental entity. Einstein's use of rigid measuring rods does not make much sense, and apparently is still causing confusion today.

Update: I meant to also say this. The most important point about relativity is that it rejects action-at-a-distance. If you had a rigid object, you could push it at one end, and have an instantaneous effect at the other end. That is completely contrary to the whole spirit of relativity.
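
To make that concrete, here is the arithmetic for an ordinary rod (the steel sound speed is a textbook value, used for illustration):

```python
# A push travels down a rod at the material's sound speed, not instantaneously.
L = 1.0            # rod length, m
v_sound = 5960.0   # approximate speed of sound in steel, m/s
c = 2.998e8        # speed of light, m/s

print(f"push reaches the far end in ~{L / v_sound * 1e6:.0f} microseconds")
print(f"light crosses the rod in ~{L / c * 1e9:.1f} nanoseconds")
# A perfectly rigid rod would need an infinite sound speed, i.e.
# instantaneous action at a distance, which relativity forbids.
```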

Wednesday, August 19, 2015

What quantum feature killed the classical picture?

David Jennings and Matthew Leifer just updated their paper No Return to Classical Reality:
At a fundamental level, the classical picture of the world is dead, and has been dead now for almost a century. Pinning down exactly which quantum phenomena are responsible for this has proved to be a tricky and controversial question, but a lot of progress has been made in the past few decades. We now have a range of precise statements showing that whatever the ultimate laws of Nature are, they cannot be classical. In this article, we review results on the fundamental phenomena of quantum theory that cannot be understood in classical terms. We proceed by first granting quite a broad notion of classicality, describe a range of quantum phenomena (such as randomness, discreteness, the indistinguishability of states, measurement-uncertainty, measurement-disturbance, complementarity, noncommutativity, interference, the no-cloning theorem, and the collapse of the wave-packet) that do fall under its liberal scope, and then finally describe some aspects of quantum physics that can never admit a classical understanding -- the intrinsically quantum mechanical aspects of Nature. The most famous of these is Bell's theorem, but we also review two more recent results in this area.
I agree with them that those other things are not so radically different from classical mechanics, but then they go nuts with the profundity of Bell's Theorem.
The departure of quantum mechanics from classicality was put into a very sharp and powerful form by John Bell [2, 20], who showed that some aspects of quantum entanglement can never fit into a model in which systems possess objective properties prior to measurement and that also obeys a principle of locality. Since the result only depends on certain empirically observed predictions of quantum theory, rather than the structure of the theory itself, any future theory beyond quantum theory will be subject to the same argument, so there can be no going back to a conception of the world that is both classical and local. ...

In the literature, this is often referred to by saying that either “locality” or “realism” must be given up. However you wish to parse the dilemma, it is clear that Bell inequality violations imply a radical departure from classical physics. ...

To sum up, we have shown that many phenomena that are traditionally viewed as intrinsically quantum-mechanical; such as randomness, discreteness, the indistinguishability of states, measurement-uncertainty, measurement-disturbance, complementarity, non-commutativity, interference, the no-cloning theorem, and the collapse of the wave-packet; all appear within classical statistical mechanics under reversible dynamics. These serve to map out classical fragments of quantum physics, in a search for the genuinely strange aspects of the theory. In addition to Bell’s theorem on the failure of local causality at a fundamental level, we have described two less well-known results that reveal further deep and subtle insights into the quantum realm.
Bell's theorem is a consequence of non-commutativity and those other principles. It does not contradict local causality, unless you are using contrived definitions (as the paper does).

If two observables do not commute, then measuring one leaves some uncertainty in the other one. That is the quantum behavior at the core of Bell's theorem. Measuring the position of an electron has the effect of localizing it, and that creates uncertainty in momentum.
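
This is easy to check directly. A minimal numpy sketch, showing that the Pauli spin observables fail to commute, and that the Robertson uncertainty relation then puts a floor under the product of uncertainties:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli sigma_x
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)  # Pauli sigma_y

comm = sx @ sy - sy @ sx       # equals 2i*sigma_z, not zero
print(comm)

psi = np.array([1, 0], dtype=complex)        # spin-up: sigma_z is sharp
bound = abs(psi.conj() @ comm @ psi) / 2     # Robertson lower bound
print(bound)   # 1.0: delta(sigma_x)*delta(sigma_y) >= 1 in this state
```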

Bell wanted to believe that the act of measuring an electron did not necessarily disturb it, and gave the value of some hidden variable that was determined all along. If he were right, then quantum mechanics would be proved wrong. So all he really had was an argument that some alternative theory of hidden variables is wrong.

Lubos Motl's latest rant is about how some Christian videos explain quantum mechanics better than the atheist videos do. Along the way, he says:
quantum mechanics teaches us that (especially before a measurement) there is no "objective truth about the state of Nature" from which all the knowledge of all observers would be derived as projections or a subset.
Yes, that is right. I am not sure about the religious implications, but that is a core view of quantum mechanics that goes back to Heisenberg, von Neumann, and Dirac. It is not something that Bell discovered decades later.

Saying that Bell's Theorem rules out "realism" seems very profound, until you learn that realism is just defined as some particular hidden variable theory.

Monday, August 17, 2015

John Conway's Life

There is a new book out about a mathematician, and as usual he is portrayed as a mentally ill misfit. Here is the WSJ review:
Even Mr. Conway’s darkest points somehow take a sharp veer into whimsy. In 1993, suffering from heart disease and somehow flat broke on a Princeton salary, he attempted suicide by pills. Upon recovering, he wore a T-shirt around campus that read “SUICIDE” in large block letters, apparently with the intent of diffusing rather than generating awkwardness. (“I wore it for 2 or 3 days until it got too sweaty,” he recalls.) ...

Mr. Conway typifies a popular stereotype of the mathematician: prone to wild enthusiasms, sweaty and wild-bearded, inattentive to the mundanities. Ms. Roberts, to her credit, reminds us that he is as much a social outlier among his colleagues as he would be in the general public; that when he forgets to show up to deliver a lecture, it’s annoying, not charming; that the sincere and profound admiration Mr. Conway enjoys is often tinted with exasperation. This is most notable in the only slightly touched-on subject of his romantic life. “I think John is the most selfish, childlike person I have ever met,” one of his three ex-wives tells Ms. Roberts. “One of the reasons I find that so intolerable is that I know damn well he can be human if he cares enough to bother.”
Hollywood usually treats mathematicians as mentally ill also.

A new physics biographical essay writes:
In the early 1970s, Yuri Golfand was among the discoverers of theoretical supersymmetry, a concept which completely changed mathematical physics in the 21st century. After his discovery, his research institution in Moscow fired him. He knew the humiliations of the Brezhnev regime firsthand, blacklisted and unemployed for the rest of the decade due to his desire to emigrate to Israel.
It calls supersymmetry a "revolutionary concept in theoretical physics". Supersymmetry certainly caused a lot of excitement, but it has been a gigantic dead end. Nothing has come out of that work that has any bearing on the real world. No Nobel Prizes have been given for any work related to supersymmetry. The world is not supersymmetric.

I don't want to minimize his hardships under Communism, but no one else got exit visas either.

Wednesday, August 12, 2015

Deutsch defends many-worlds philosophy

David Deutsch is one of the chief gurus of the many-worlds interpretation (MWI) of quantum mechanics, and of quantum computing. He says that quantum computing will work because of the efficiency of parallel computation being done in alternate universes.

I suspect that he was considered a crackpot at first, but now that we have 100s of millions of dollars being spent on these dead-ends of physics, he is revered as an insightful genius. He has been mentioned as a candidate for a Nobel Prize, if anyone ever finds any evidence for anything he says.

The MWI has two fatal flaws. First, there is no empirical evidence for it, and there can never be any such evidence. Second, it destroys the probabilistic predictions that are at the heart of quantum mechanics and every other science.

Deutsch has posted a new paper addressing these issues. There is no physics in the article; just philosophical hand-waving:
Claims that the standard methodology of scientific testing is inapplicable to Everettian quantum theory, and hence that the theory is untestable, are due to misconceptions about probability and about the logic of experimental testing. Refuting those claims by correcting those misconceptions leads to various simplifications, notably the elimination of everything probabilistic from fundamental physics (stochastic processes) and from the methodology of testing ('Bayesian' credences).
By "Everettian", he means MWI, and he shortens it to just "quantum theory", as if that were the most sensible interpretation. Copenhagen and other textbook interpretation are called "collapse" variants. The collapse is the idea that you refuse to consider the alternative (unobservable) universes.

Deutsch flips the arguments with a philosophical sleight-of-hand. Following Karl Popper's rejection of positivism, he holds that a good scientific explanation is much more important than a crucial experiment, and he agrees with the philosophers who say that there is no such thing as a crucial experiment. MWI doesn't explain any experiments, but it does give what he considers a good explanation, so he says that it is philosophically superior to the collapse variants.

He goes further and denies that any probabilistic theories are truly testable, and only something like MWI, which says that anything can happen without any probability estimates, should be considered testable. He concludes:
By adopting Popper’s explanatory, conjectural conception of science, and his objective, problem-based methodology of scientific testing (instead of ones that are subjective, inductivist, positivist, ‘Bayesian’ etc.), and bearing in mind the decision-theoretic argument, we can eliminate the perceived problems about testing Everettian quantum theory and arrive at several simplifications of methodological issues in general.

In particular, I have shown that the claim that the standard methods of testing are invalid for Everettian quantum theory depends on adopting a positivist or instrumentalist view of what the theory is about. The claim evaporates, given that science is about explaining the physical world.

Even ‘everything-possible-happens’ theories can be testable. But Everettian quantum theory is more than an everything-possible-happens theory. Because of its explanatory structure (exploited by, for instance, the decision-theoretic argument) it is testable in all the standard ways. It is the predictions of its ‘collapse’ variants (and any theory predicting literally stochastic processes in nature) that are not genuinely testable: their ‘tests’ depend on scientists conforming to a rule of behaviour, and not solely on reality conforming to explanations.
I cannot make any sense of this. All of science involves some sort of comparison of theory with experiment. The measurements never match up exactly, so we are always left with the problem of deciding whether the observations are within the margins of what the theory said was likely. There is no other way to do science, as far as I know.

If a theory gives probabilities and error estimates, as all good scientific theories do, then Deutsch says that it is not testable. If a theory says that everything possible happens, as MWI does, then Deutsch says that it is testable.

There is no merit to anything Deutsch says. Popper was wrong in his rejection of positivism. Duhem-Quine were wrong in their rejection of the crucial experiment. MWI is incoherent. Quantum computing is a pipe dream. You can reverse almost everything he says, and get closer to the truth.

Monday, August 10, 2015

Where are the extraterrestrials?

Dennis Overbye writes in the NY Times that not everyone is excited by the possibility of primitive life on Mars or elsewhere:
In an article published in Technology Review in 2008, Professor Bostrom declared that it would be a really bad sign for the future of humanity if we found even a microbe clinging to a rock on Mars. “Dead rocks and lifeless sands would lift my spirit,” he wrote.

Why?

It goes back to a lunch in 1950 in Los Alamos, N.M., the birthplace of the atomic bomb. The subject was flying saucers and interstellar travel. The physicist Enrico Fermi blurted out a question that has become famous among astronomers: “Where is everybody?”

The fact that there was no evidence outside supermarket tabloids that aliens had ever visited Earth convinced Fermi that interstellar travel was impossible. It would simply take too long to get anywhere.

The argument was expanded by scientists like Michael Hart and Frank Tipler, who concluded that extraterrestrial technological civilizations simply didn’t exist.

The logic is simple. Imagine that one million years from now Earthlings launch a robot to Alpha Centauri, the closest star system to our own. It gets there in a few years, and a million years later sends off probes to two other star systems. A million years after that, each of those sends off two more probes. Even allowing for generous travel times, in 100 million years roughly a nonillion stars (10^30) could be visited. The galaxy contains maybe 200 billion stars, so each could be visited more than a trillion times in this robot crisscrossing.
I think that this is correct. If Earth-like planets are common, then it would only take 100M years for an advanced civilization to colonize the galaxy.
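
The doubling arithmetic in the quoted passage checks out:

```python
# Probes double every million years, so 100 million years gives ~100 doublings.
probes = 2 ** 100
stars = 2e11                     # rough star count of the galaxy
print(f"2^100 = {probes:.1e}")   # ~1.3e30, a nonillion
print(f"visits per star: {probes / stars:.0e}")   # far more than a trillion
```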

It seems reasonable to assume that primitive life has evolved on 100s of other planets in our galaxy. But it is doubtful that any of them evolved into an advanced civilization.

Earth has many strange features that have been essential to human life, and are unlikely elsewhere. We have a single sun, a single large moon to cause tides and stabilize the Earth's spin axis, a Jupiter to clear out other junk, water covering 2/3 of the Earth so that both sea and land life are possible, etc. We do not know where the water came from.

Thursday, August 6, 2015

Newton studied alchemy

Alchemy is frequently cited as an example of pre-modern pseudo-science, like astrology. This always seemed unfair to me, as alchemists presumably spent most of their time studying the properties of materials, and alchemy was hence legitimate early chemistry.

Even transmutation, such as trying to turn lead into gold, is not inherently crazy. As we now know, all matter is made of the same quarks and electrons, and there is no law of nature to prevent converting one kind of atom into another. It is just extremely difficult, and only possible today in very tiny quantities in giant particle accelerators.

In a new collection of essays on The Unknown Newton, William R. Newman writes on Newton and alchemy:
For the tercentenary celebration of Newton’s birth, Keynes famously wrote in an address that:
Newton was not the first of the age of reason. He was the last of the magicians, the last of the Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years ago.
The thrust of Keynes’s address was that the conventional view of Newton as a “rationalist, one who had taught us to think on the lines of cold and untinctured reason,” was not quite right and that the truth was more complicated: one of the greatest scientists of all time spent a large part of his most creative years on various unscientific quests, including a search for that most elusive of alchemical substances, the philosophers’ stone. ...

Newton’s alchemy fits neither the Keynesian picture of the English natural philosopher as “the last of the magicians” nor the Dobbsian view of his alchemy as a religious quest. Instead, Newton’s alchemical studies reveal an early modern scholar and experimenter hard at work in deciphering extraordinarily difficult texts and a natural philosopher attempting to integrate the fruits of this research into his overall reform of scientific knowledge. Although this view of Newton’s alchemical scholarship and experimentation may be less evocative than Keynes’s or Dobbs’s, it conforms more closely to the depiction of Newton familiar to scholars of his physics, mathematics, and biblical studies. Throughout his divergent activities, Newton remained wedded to techniques of analysis and understanding that would be familiar to most of us today. The apparent incongruity between Newton the scientist and Newton the alchemist dissolves when we acquire a deeper understanding of alchemy and of the man himself.
The other essays examine Newton's religious investigations.

Wednesday, August 5, 2015

Theories of Everything, Mapped

Quanta magazine reports:
“Ever since the dawn of civilization,” Stephen Hawking wrote in his international bestseller A Brief History of Time, “people have not been content to see events as unconnected and inexplicable. They have craved an understanding of the underlying order in the world.”

In the quest for a unified, coherent description of all of nature — a “theory of everything” — physicists have unearthed the taproots linking ever more disparate phenomena.
The vast majority of these efforts are foolish and misguided, in my opinion, but please click on the link and click "START" for a snazzy map of all these theories on your screen. It looks great. You can wave your mouse over it, and feel as if you are touching some grand idea, all tied in together.

It is just a stupid collection of failed buzzwords, of course, but it looks great.

Monday, August 3, 2015

Wave function can be just our knowledge

Interpretations of quantum mechanics can disagree about whether the wave function Psi is a direct reflection of reality (ontology, ontic) or just a representation of our knowledge (epistemology, epistemic).

Some would also say to "shut up and calculate", and would be dismissive of such philosophical distinctions. Bohr would say that anytime you write formulas on paper, you are just trying to express our knowledge about a system.

A new paper comments on the PBR theorem:
Building upon the Harrigan-Spekkens analysis, the PBR paper (Pusey, Barrett and Rudolph 2012) raises the question of whether a Ψ-epistemic interpretation of the Ψ-function is consistent with QM. According to the theorem proved in the paper, it is not, namely, if the epistemic interpretation is accepted and an overlap of the supports of two distinct probability distributions (corresponding to two distinct quantum states) is allowed, a violation of the predictions of QM follows. PBR conclude that QM is not amenable to the epistemic interpretation. This surprising result has immediately attracted a great deal of attention. Most readers have taken the theorem at face value: The Ψ-epistemic interpretation is indeed ruled out by the theorem and consequently, the remaining option is the Ψ-ontic interpretations. In other words, the PBR theorem has been advertised as supporting a realist interpretation rather than an epistemic interpretation of Ψ. ...

What if we go radically epistemic and deny the assumption of definite physical states? In that case we take QM to be mute about the physical state of the system, interpreting it instead, along the lines Schrodinger, Pitowsky, Bub, Fuchs and others have suggested, as a maximal catalog of possible measurement results, a betting algorithm, a book-keeping device. This option is left untouched by the PBR theorem. Not only is it not undermined by it, to the contrary, in ruling out a more classical probabilistic interpretation, which presupposes the existence of the ‘real’ state of the system, the PBR theorem in fact strengthens the radical epistemic interpretation.
So this paper made a big splash because the authors claimed to be disproving an epistemic interpretation, but it does nothing of the kind. It only gives an argument against a hidden variable theory that no one believed in anyway.

The original title to the PBR paper was The quantum state cannot be interpreted statistically, and was accepted for publication in Nature, a very high status journal. This raised eyebrows as the quantum state wave function has been interpreted statistically for 80+ years, and nothing short of a startling Nobel-Prize-winning discovery can change that.

But the title was an incorrect use of terminology, and all they had was an argument against replacing quantum mechanics with a hidden variable theory of the type that had been considered and rejected 80 years ago. The argument had nothing to do with statistical interpretations. The title had to be changed, and the paper was published in a lesser journal.

Wednesday, July 29, 2015

Quantum computing compared to Goddard rockets

Slashdot reports:
If quantum computing is at the Goddard level that would be a good thing for quantum computing. This means that the major fundamental breakthrough that would put them over the top was in hand and merely a lot of investment, engineering and scaling was needed. The goal of being able to solve NP-hard or NP-Complete problems with quantum computers is similar to being able to travel to the moon, mars or deeper into space with rockets. Conventional flight could not achieve those goals because of the lack of atmosphere in space. Current computing seems like they are very limited in being able to tackle NP-hard and NP Complete problems. Although clever work in advanced mathematics and approximations can give answers that are close on a case by case basis.
Dream on.

Three comments were actually sensible:
Quantum computers cannot solve NP-Hard or NP-Complete problems -- at least, no faster than a classical computer. This is one of the most basic results in the field, and the author keeps on making hash of it. This article should not be taken seriously if it's rife with such basic errors.

[Goddard's rockets were] Designed on totally incorrect physics. The true revolutionaries of rocket propulsion all have German last names.

Quantum computing is about where teleportation, strong AI, a perfect cure for cancer, etc. is, namely it is completely unclear whether it will ever work. All this bullshit about Quantum Computing is just that: Bullshit. We do not even know whether the physics allows it, all we know is that the current theory (which we know is incomplete and inaccurate) would allow it if it was accurate.
Goddard was the famous American rocket pioneer whose physics was mocked by the NY Times:
[1920 editorial] That Professor Goddard, with his "chair" in Clark College and the countenancing of the Smithsonian Institution, does not know the relation of action and reaction, and of the need to have something better than a vacuum against which to react — to say that would be absurd. Of course he only seems to lack the knowledge ladled out daily in high schools.

[1969 correction] Further investigation and experimentation have confirmed the findings of Isaac Newton in the 17th Century and it is now definitely established that a rocket can function in a vacuum as well as in an atmosphere. The Times regrets the error.
Goddard really did get his Newtonian physics wrong, but the NY Times editorial did not correctly state the error. He needed something better than a vacuum to stabilize the rocket.

As far as I know, no publication has similarly denounced quantum computing. Dark Buzz fills the gap.

Monday, July 27, 2015

Wilczek's new book on beauty in nature

Here is an endorsement for a new book, A Beautiful Question: Finding Nature's Deep Design:
Deepak Chopra, M.D.: “For a century, science has invalidated ‘soft’ questions about truth, beauty, and transcendence. It took considerable courage therefore for Frank Wilczek to declare that such questions are within the framework of ‘hard’ science. Anyone who wants to see how science and transcendence can be compatible must read this book. Wilczek has caught the winds of change, and his thinking breaks through some sacred boundaries with curiosity, insight, and intellectual power.”
There is a fine line between the frontiers of hard physics and crackpot babble, I guess.

Separately Wilczek claims in Nature magazine:
Particle physics: A weighty mass difference

The neutron–proton mass difference, one of the most consequential parameters of physics, has now been calculated from fundamental theories. This landmark calculation portends revolutionary progress in nuclear physics.
The article is behind a paywall, so I cannot assess how revolutionary it is. My guess is that it uses the masses of the protons and neutrons to estimate the masses of the up and down quarks, and then uses the quark masses to calculate the proton and neutron masses. It does not sound revolutionary to me.

Peter Woit also endorses the book, and a comment says:
If Ptolemy’s epicycles worked, would we consider them to be beautiful?
Ptolemy’s epicycles did work. I scratch my head at how scientists can get this so wrong.

In Ptolemy's Almagest, the principal epicycles were just his way of representing the orbit of the Earth. The orbits of Earth and Mars could be approximated by circles, and the view of Mars from Earth can be represented by the vector difference of those two circles. The Earth circle was called an epicycle. There were also minor epicycles to correct for the orbits not being exactly circular.

So yes, epicycles did work to approximate the orbits, and the same main idea is used today whenever anyone describes a planetary orbit, as viewed from the Earth.
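
The point is easy to verify numerically. A sketch with round textbook values for the two orbits (circular orbits assumed for simplicity): the geocentric path of Mars, retrograde loops and all, is just the difference of two circles.

```python
# Geocentric Mars as the difference of two circles (deferent plus epicycle).
import numpy as np

t = np.linspace(0, 15, 2000)                   # years
earth = 1.00 * np.exp(2j * np.pi * t / 1.00)   # Earth's orbit, 1 AU, 1 yr
mars  = 1.52 * np.exp(2j * np.pi * t / 1.88)   # Mars's orbit, 1.52 AU, 1.88 yr

geo_mars = mars - earth            # what an Earth-bound observer sees
angle = np.unwrap(np.angle(geo_mars))
# Retrograde motion falls out automatically: the apparent angular rate
# reverses sign during the loops.
print("any retrograde intervals?", np.any(np.diff(angle) < 0))   # True
```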

Saturday, July 25, 2015

No Nobel Prize for mathematicians

The NY Times has a profile of a mathematician, so of course it has to explain whether he has won a Nobel Prize and whether he is crazy like all the other mathematicians:
He has since won many other prizes, including a MacArthur ‘‘genius’’ grant and the Fields Medal, considered the Nobel Prize for mathematicians. Today, many regard Tao as the finest mathematician of his generation. ...

Possibly the greatest mathematician since antiquity was Carl Friedrich Gauss, a dour German born in the late 18th century. He did not get along with his own children and kept important results to himself, seeing them as unsuitable for public view. They were discovered among his papers after his death. Before and since, the annals of the field have teemed with variations on this misfit theme, from Isaac Newton, the loner with a savage temper; to John Nash, the ‘‘beautiful mind’’ whose work shaped economics and even political science, but who was racked by paranoid delusions; to, more recently, Grigory Perelman, the Russian who conquered the Poincaré conjecture alone, then refused the Fields Medal, and who also allowed his fingernails to grow until they curled.
Sergiu Klainerman explains that the Fields is nothing like the Nobel at all:
Concerning the first issue, the differences between the Fields Medal and the Nobel Prize can hardly be exaggerated. Whatever the original intentions, the Fields Medal is given only to young mathematicians below the age of forty. To have a chance at the medal a mathematician must not only make a major contribution early on, he/she must also be lucky enough to have its importance broadly recognized before the arbitrary fortieth mark. This means that, if an area of mathematics is not represented in the composition of the Fields committee at a given International Congress, truly original and important contributions in that area have very little chance.

In contrast, the Nobel Prize has no age limits. The role of a Nobel committee (in natural sciences) is, at least in principle, to identify those breakthroughs deemed most important by a broad segment of the scientific community and then decide who are the most deserving contributors to it. In contrast with the Fields Medal, which is given strictly to an individual, independent of whether other people might have contributed important ideas to the cited works, the Nobel Prize can be shared by up to three individuals. Thus, in theory, a Nobel Prize is awarded primarily for supreme achievements, and only secondarily to specific individuals. ...

In fact mathematics does not have any prize comparable with the Nobel Prize. The other major prizes — Abel, Shaw, and Wolf — don’t have any age limitation but are almost always given to individuals, based on works done throughout their careers, rather than for specific achievements. Even when the prize is shared there is, in most cases, no identifiable connection between the recipients.
The Abel Prize is maybe the closest to being a Nobel Prize for Math.

There is a wide perception that all the good math is done by young math prodigies. The most famous big math problems of the last 25 years were Fermat's Last Theorem and the Poincare-Thurston conjecture. Both were settled by mathematicians around age 40, and that is probably the age of highest productivity.

Thursday, July 23, 2015

Comparing special and general relativity

This year is celebrated as the centenary of general relativity, as ten years ago was the centenary of special relativity. What is the difference? Special relativity is the theory of flat spacetime, including Lorentz transformations and electromagnetism. General relativity is the theory of curved spacetime, and gravity.

Einstein fans disagree over which is the greater accomplishment. Special relativity changed thinking about space and time in a way that permeates twentieth-century physics. General relativity is always lauded as a great theory, but its effects are barely measurable and it has had almost no influence on other branches of physics. It has some influence on cosmology, but not much.

So special relativity is the more influential theory, by far. But some Einstein fans prefer to praise general relativity, because that was a conceptually much more difficult accomplishment. Special relativity can be easily explained with some undergraduate linear algebra, but general relativity requires tensors and differential geometry.
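
The linear algebra in question amounts to a single 2x2 matrix. For a boost with velocity v along the x-axis, in standard notation:

```latex
% Lorentz boost along x, with beta = v/c:
\begin{pmatrix} ct' \\ x' \end{pmatrix}
  = \begin{pmatrix} \gamma & -\gamma\beta \\ -\gamma\beta & \gamma \end{pmatrix}
    \begin{pmatrix} ct \\ x \end{pmatrix},
\qquad \gamma = \frac{1}{\sqrt{1-\beta^2}}
```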

Einstein's role was also different. His 1905 special relativity paper was written on his own, building on published papers. His 1915 general relativity was a collaboration with mathematicians. Some people see one as more credit-worthy than the other.

People often say that GPS requires special and general relativity clock corrections, but it is really just special relativity corrections. There is an effect due to satellite speed and special relativity, and an effect due to gravity that is often called general relativity. But the necessary gravity formula was actually derived by Einstein in 1907 from special relativity, using what he called "the happiest thought of my life". This was before he understood relativity as a spacetime theory, and many years before he knew anything about tensors or curvature.
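
The two effects are easy to estimate (a rough sketch; the orbital radius below is a round textbook value for GPS):

```python
# Rough estimate of the two GPS clock effects, in microseconds per day.
GM = 3.986e14         # Earth's gravitational parameter, m^3/s^2
c = 2.998e8           # speed of light, m/s
r_earth = 6.371e6     # Earth's radius, m
r_sat = 2.656e7       # approximate GPS orbital radius, m
v = (GM / r_sat) ** 0.5       # ~3.9 km/s orbital speed
day = 86400.0

speed_effect = -v**2 / (2 * c**2) * day * 1e6                    # ~ -7
gravity_effect = GM * (1/r_earth - 1/r_sat) / c**2 * day * 1e6   # ~ +46
print(f"{speed_effect:+.0f} us/day from speed, "
      f"{gravity_effect:+.0f} us/day from gravity, "
      f"net {speed_effect + gravity_effect:+.0f} us/day")
```

The speed effect makes the satellite clock run slow by about 7 microseconds per day, the gravity effect makes it run fast by about 46, and the net is the familiar +38 microseconds per day.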

Sometimes people say that special relativity is just about constant velocity inertial motion, but that is not how it was viewed in the early days, say 1895-1910. It was often applied to accelerating electrons and other particles. Gravitational time dilation can be calculated by comparing to acceleration in a flat spacetime. Viewed this way, the only truly measurable general relativity effects are things like precession of Mercury's orbit, and that is a very tiny effect that took centuries to notice.

Even if relativity had never been discovered, we would probably still have GPS. Nobody would understand why the satellite clocks had to be re-synchronized so often, but they could have figured out some heuristics for resetting the clocks.