Monday, August 31, 2015

NSA is cautious about quantum computers

The well-known cryptology author Bruce Schneier writes:
NSA Plans for a Post-Quantum World

The NSA is worried enough about advances in the technology to start transitioning away from algorithms that are vulnerable to a quantum computer. Does this mean that the agency is close to a working prototype in their own classified labs? Unlikely. Does this mean that they envision practical quantum computers sooner than my 30-to-40-year estimate? Certainly.
It is tricky to interpret comments from a govt spy agency. Some people assume that it has vast security knowledge far beyond the general public, while others assume that it is tricking us into using insecure products so it can spy on us.

Here the National Security Agency is pretty clearly saying that it is worthwhile to spend money to re-engineer cryptographic systems to protect against a quantum computer running Shor's algorithm to crack RSA.
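To see why Shor's algorithm matters, here is a toy RSA sketch (tiny illustrative primes, nothing like real key sizes): whoever can factor the public modulus can recompute the private key, and Shor's algorithm is an efficient quantum factoring method.

```python
# Toy RSA with deliberately tiny primes (illustrative numbers only).
# Factoring the public modulus n recovers the private key, which is
# why an efficient quantum factoring algorithm would break RSA.
p, q = 233, 241             # secret primes; only n = p*q is public
n = p * q
phi = (p - 1) * (q - 1)     # Euler's totient, computable only from the factors
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent: inverse of e mod phi (Python 3.8+)
msg = 4242
cipher = pow(msg, e, n)     # encrypt with the public key
assert pow(cipher, d, n) == msg  # knowing the factors let us decrypt
```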

This seems crazy to me. Maybe the NSA wants to sabotage confidence in RSA. Maybe it wants to justify spending on new systems. Maybe it is just spreading FUD - fear, uncertainty, and doubt.

Proponents of quantum computing are always bragging about how they have factored 15, 21, or even 143. I just noticed that a paper last year claimed Quantum factorization of 56153 with only 4 qubits:
The largest number factored on a quantum device reported until now was 143. That quantum computation, which used only 4 qubits at 300K, actually also factored much larger numbers such as 3599, 11663, and 56153, without the awareness of the authors of that work. Furthermore, unlike the implementations of Shor's algorithm performed thus far, these 4-qubit factorizations do not need to use prior knowledge of the answer. However, because they only use 4 qubits, these factorizations can also be performed trivially on classical computers. We discover a class of numbers for which the power of quantum information actually comes into play. We then demonstrate a 3-qubit factorization of 175, which would be the first quantum factorization of a triprime.
So someone factored 56153 = 233 x 241 without realizing it!
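As the paper itself admits, numbers of this size are classically trivial. A few lines of brute-force trial division factor every one of them instantly:

```python
# Brute-force trial division: these "quantum-factored" numbers are tiny
# enough that a classical computer factors them in microseconds.
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

for n in (143, 3599, 11663, 56153):
    print("%d = %d x %d" % ((n,) + factor(n)))
# last line printed: 56153 = 233 x 241
```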

I am not sure that this clever trickery has much to do with quantum computing power. You want to look for someone claiming to achieve some sort of quantum speedup, where the quantum computer is somehow doing something more efficiently than a classical computer. That has not been achieved. The above paper also says that no number has ever even been truly factored with Shor's algorithm, as the published successes took shortcuts that assumed the answer in advance.

So no one has ever made a true qubit, and no one has truly used quantum computing to factor anything.

Bruce Schneier has some discussion of cryptographic consequences. He was predicting that there would be no significant quantum computers in his lifetime, but then he backed off and deferred to others. I say that quantum computers will not break commercial crypto for 1000 lifetimes.

Quantum complexity expert Scott Aaronson posts this reaction to the latest quantum computer:
A bunch of people have asked me to comment on D-Wave’s release of its 1000-qubit processor, ...

More importantly, I’d say it remains unclear whether any of the machine’s performance on the instances tested here can be attributed to quantum tunneling effects. ...

But, I dunno, I’m just not feeling the urge to analyze this in more detail. Part of the reason is that I think the press might be getting less hyper-excitable these days, thereby reducing the need for a Chief D-Wave Skeptic. ...

The realization has set in, I think, that both D-Wave and the others are in this for the long haul, with D-Wave currently having lots of qubits, but with very short coherence times and unclear prospects for any quantum speedup, and Martinis and some others having qubits of far higher quality, but not yet able to couple enough of them.
This could continue for 20 years. Lots of serious research, but no useful results.

Saturday, August 29, 2015

Latest quantum spookiness experiment

Nature magazine announces:
Quantum ‘spookiness’ passes toughest test yet

Experiment plugs loopholes in previous demonstrations of 'action at a distance', against Einstein's objections — and could make data encryption safer.

It’s a bad day both for Albert Einstein and for hackers. The most rigorous test of quantum theory ever carried out has confirmed that the ‘spooky action at a distance’ that the German physicist famously hated — in which manipulating one object instantaneously seems to affect another, far away one — is an inherent part of the quantum world.

The experiment, performed in the Netherlands, could be the final nail in the coffin for models of the atomic world that are more intuitive than standard quantum mechanics, say some physicists. It could also enable quantum engineers to develop a new suite of ultrasecure cryptographic devices.

“From a fundamental point of view, this is truly history-making,” says Nicolas Gisin, a quantum physicist at the University of Geneva in Switzerland.
This is just the latest of the Bell test experiments. These were very exciting about 50 years ago, because they had the potential to disprove quantum mechanics. All the tests, including this one, have confirmed the quantum mechanics of 1930.

This does appear to be an improved experiment, because previous ones had to make some mild assumptions about undetected photons. This cleverly uses electrons that can nearly always be detected.

However, it does not demonstrate action-at-a-distance, and it will not make data encryption any safer. It just gives more evidence against the hidden variable theories that everyone rejected in 1930.
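For reference, the arithmetic behind a Bell (CHSH) test is simple to check. Using the singlet-state correlation E(a,b) = -cos(a-b) and the standard angle choices, the quantum prediction reaches 2√2, above the bound of 2 that any local hidden variable model must obey:

```python
import numpy as np

# CHSH correlator for the singlet state: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# Standard measurement angles that maximize the quantum value.
a, a2 = 0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2.828... = 2*sqrt(2), exceeding the hidden-variable bound of 2
```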

These experiments are often suggested as candidates for the Nobel Prize in Physics. Maybe so, as these are nontrivial tests of good physics theories. But really, prizes for the theory were given in 1932 and 1933.

Sweden cannot give prizes for string theory, unified field theory, quantum gravity, black hole information, multiverse, supersymmetry, quantum computers, or any of the other topics that seem to preoccupy our finest Physics minds, because none of those have any experimental validation.

Update: Scott Aaronson describes this work, and concludes:
At a more fundamental level, will this new experiment finally convince everyone that local realism is dead, and that quantum mechanics might indeed be the operating system of reality? Alas, I predict that those who confidently predicted that a loophole-free Bell test could never be done, will simply find some new way to wiggle out, without admitting the slightest problem for their previous view. This prediction, you might say, is based on a different kind of realism.
By "local realism", what he really means is Local hidden variable theory, a foolish effort to disprove quantum mechanics.

Yes, I am convinced that local hidden variable theory is dead. It has been dead since 1930. This experiment is just another nail in the coffin.

I just don't agree with this use of the term "local realism". Quantum mechanics is the local realistic theory, not hidden variable theory.

Update: Several comments now criticize Aaronson's use of the term "local realism" as wrong and misleading. He insists on using it:

As I use the term, “local realism” is not a “physics definition,” it’s a math definition. And Bell’s theorem is not a “physics theorem” (whatever that means), it’s a math theorem that’s been proved and will stay proved until the end of time. If you want to argue about the theorem’s relevance to physics, you can do that, but you don’t get to negotiate the definition of “local realism,” because it’s now part of math.
He doubles down here. By a math term, he means associated with some stupid disproved mathematical hidden variable model. I think that he likes the term because it sounds profound to say that quantum mechanics has been proved contrary to local realism.

Wednesday, August 26, 2015

Hawking has new black hole info theory

I have already posted about the Bekenstein-Hawking black hole entropy nonsense, and now Hawking continues to babble nonsense about the black hole information paradox:
Stephen Hawking, who once stunned the scientific community by saying that black holes emit radiation, expounded on another groundbreaking theory on Tuesday.

"The message of this lecture is that black holes ain't as black as they are painted. They are not the eternal prisons they were once thought," Hawking told a meeting of experts, according to the New Scientist. "Things can get out of a black hole both on the outside and possibly come out in another universe." ...

"Quantum mechanics — a highly successful theory that describes physical phenomena at the scale of atoms and subatomic particles — says that information can never be lost, even when it falls into a black hole. It is widely believed to be an inviolable law of nature. ..."

During his talk on Tuesday at the KTH Royal Institute of Technology in Stockholm, Hawking proposed that the information of the particles sucked into a black hole eventually makes it out in the radiation that is emitted by a black hole.

The information emitted, however, is not usable. ...

"At Monday's public lecture, he explained this jumbled return of information was like burning an encyclopedia: You wouldn't technically lose any information if you kept all of the ashes in one place, but you'd have a hard time looking up the capital of Minnesota."
I hate to pick on someone with a degenerative neurological disorder, but Hawking lost it decades ago.

First, nothing in quantum mechanics says that info can never be lost. Some of the processes are time reversible, so you could say that nothing is lost in those processes. But there is nothing special about information, and quantum mechanics is not a time reversible theory.

Second, there is no "inviolable law of nature" that says that info can never be lost. Just ask yourself: Who got the Nobel Prize for that? What is the experiment demonstrating it? What is even the theoretical basis for it? What useful consequence does it have? The answers are no one and nothing, because there is no such law.

Third, if there is some definition of information that allows an encyclopedia to be burned without losing any of it, then it is unrelated to every definition of information that I know. It is crazy to argue that burning an encyclopedia conserves information.

Fourth, it takes a trillion trillion years for a black hole to evaporate, so this is completely disconnected from any observational science.
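If anything, "trillion trillion years" is an understatement for stellar-mass black holes. The standard Hawking estimate, t = 5120 π G² M³ / (ħ c⁴), is easy to evaluate for one solar mass:

```python
import math

# Hawking evaporation time t = 5120 * pi * G^2 * M^3 / (hbar * c^4),
# evaluated for a solar-mass black hole in SI units.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
hbar = 1.055e-34     # J s
M_sun = 1.989e30     # kg

t_sec = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)
t_years = t_sec / 3.156e7
print(f"{t_years:.1e} years")   # roughly 2e67 years
```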

People make fun of medieval scholars for supposedly debating about How many angels can dance on the head of a pin? This is the modern equivalent. Someday people will make fun of XXc and 21c physics for arguing about this stupid issue.

Update: Lubos Motl weighs in on this issue, and says that you have to be a string theorist to understand the finer points.

Monday, August 24, 2015

Autism discoveries not independent

The history of science is filled with examples of people independently discovering some major principle. In my experience, tho, such claims of independence do not hold up under scrutiny.

For example, I have doubted claims that the Pythagorean theorem was proved independently. I have many posts doubting that Einstein re-discovered relativity independently.

Here is an example from psychology:
In one of the uncanny synchronicities of science, autism was first recognized on two continents nearly simultaneously. In 1943, a child psychiatrist named Leo Kanner published a monograph outlining a curious set of behaviors he noticed in 11 children at the Johns Hopkins Hospital in Baltimore. A year later, a pediatrician in Vienna named Hans Asperger, who had never seen Kanner's work, published a paper describing four children who shared many of the same traits. Both Kanner and Asperger gave the condition the same name: autism — from the Greek word for self, autòs — because the children in their care seemed to withdraw into iron-walled universes of their own.
The article makes a good case that both of them stole the idea from Georg Frankl. He directly worked many years for both of them. See also History of Asperger syndrome.

A NY Times book review does not know about the connection:
The history of science is studded with stories of simultaneous discovery, in which two imaginative souls (or more!) turn out to have been digging tunnels to the same unspoiled destination. The most fabled example is calculus, developed independently in two different countries by Isaac Newton and Gottfried Wilhelm von Leibniz, but the list stretches back centuries and unfurls right into the present. One can add to it sunspots, evolution, platinum, chloroform ... and now autism, as the science journalist Steve Silberman informs us, identified separately by Leo Kanner and Hans Asperger. The crucial difference is that Kanner had the fortune to publish his work in Baltimore, while Asperger had the misfortune to publish his in Nazi-controlled Vienna, and this accident of geopolitics lies at the tragic core of Silberman’s ambitious, meticulous and largehearted (if occasionally long-winded) history, “NeuroTribes: The Legacy of Autism and the Future of ­Neurodiversity.”
No, autism was not independently identified by Kanner and Asperger, and maybe I should doubt some of those other stories. It is my understanding that Newton and Leibniz were not as independent as it appeared, as they saw unpublished manuscripts from each other. I guess some people say that Darwin and Wallace independently discovered evolution by natural selection, but I am not convinced that any of that was independent. I don't know about sunspots, platinum, and chloroform.

Sunday, August 23, 2015

Bekenstein black hole area and entropy

The NY Times obituary of Jacob Bekenstein says:
Jacob Bekenstein, a physicist who prevailed in an argument with Stephen Hawking that revolutionized the study of black holes, and indeed the nature of space-time itself, died on Sunday in Helsinki, Finland, where he was to give a physics lecture. He was 68. ...

Black holes are the prima donnas of Einstein’s general theory of relativity, which predicts that space wraps itself completely around some object, causing it to disappear as a black hole. Dr. Bekenstein suggested in his Ph.D. thesis that the black hole’s entropy, a measure of the disorder or wasted energy in a system, was proportional to the area of a black hole’s event horizon, the spherical surface in space from which there is no return. ...

Lee Smolin, a theorist at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, said, “No result in theoretical physics has been more fundamental or influential than his discovery that black holes have entropy proportional to their surface area.”

Dr. Bousso called Dr. Bekenstein “one of the very few giants in the field of quantum gravity.”
Really? Is this the best theoretical physics has to offer?

This formula is just a definition, with no observable consequences. The concept of entropy helps understand thermodynamic reactions, but black holes don't have any observable thermodynamics. Hawking says that they will evaporate over the next trillion trillion years, but there is no known way to verify that.

45 years ago some grad student notices that area and entropy of a black hole can increase, so he speculates that there might be some relation. For that he became a giant in the field of quantum gravity? The obituary suggests that he might have won a Nobel Prize, if he had lived long enuf. Maybe if he had lived a trillion trillion years while we all watch a black hole evaporate.

I don't want to badmouth Bekenstein, but it shows the sorry state of theoretical physics and quantum gravity that a trivial definition with no observable consequences is hailed as the greatest achievement in the field.
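For the record, the Bekenstein-Hawking formula itself is just S = k A / (4 l_p²), where A is the horizon area and l_p the Planck length. Evaluating it for a solar-mass black hole takes a few lines:

```python
import math

# Bekenstein-Hawking entropy S = k * A / (4 * l_p^2), evaluated for
# one solar mass; the result is dimensionless entropy in units of k.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI constants
M = 1.989e30                       # kg, one solar mass
r_s = 2 * G * M / c**2             # Schwarzschild radius, about 3 km
A = 4 * math.pi * r_s**2           # horizon area
lp2 = hbar * G / c**3              # Planck length squared
S_over_k = A / (4 * lp2)
print(f"S/k = {S_over_k:.1e}")     # about 1e77
```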

The string theorists rave about this formula because they say that it is backed up by some calculations by some string theorists. The calculation does not even have much to do with string theory, but the string theorists brag about this as their greatest accomplishment, and their strongest experimental validation. Again, this only shows the sorry state of string theory.

Peter Woit exposes multiverse nonsense:
Susskind deals straightforwardly with the lack of scientific evidence problem by simply saying things that aren’t true:
This idea of a multiverse is not gratuitous speculation. No, it really comes out of both experiment or observational physics about the universe and the current theories as best we understand them.
He doesn’t explain what the experimental evidence for the multiverse is.
and points to this book review:
Physicists have a nerve. I know one (I’ll call him Mark) who berates every religious person he meets, yet honestly thinks there exist parallel universes, exactly like our own, in which we all have two noses. He refuses to give any credit to Old Testament creation myths and of course sneers at the idea of transubstantiation. But, without any sense of shame, he insists in the same breath that humans are made from the fallout of exploded stars; that it is theoretically possible for a person to decompose on one side of a black hole and recompose on the other, and that there are diamonds in the sky the size of the moon.
Physics is supposed to be the hardest of the hard sciences, and physicists the most level-headed. But physics has lost its way, and promotes stuff more outlandish than Biblical creation myths.

Saturday, August 22, 2015

Relativity forbids rigid objects

Physicist Lubos Motl gives a relativity lesson:
It is clearly a totally rudimentary problem in special relativity. It has its own name and if you search for Ehrenfest paradox, you quickly find out that there's been a lot of debates in the history of physics – relatively to what one would expect for such a basic high school problem in classical physics. Born, Ehrenfest, Kaluza, von Laue, Langevin, Rosen, E ...

A rod can't be "unbendable" or "unsqueezable" or "unstretchable" because it would mean that there is something in the rod that guarantees its prescribed proper length at all times. ...

This non-existence of perfectly rigid rods in relativity should be totally obvious for rods. But it holds for disks, too. ...

At any rate, the non-existence of perfectly rigid bodies is undoubtedly a characteristic, almost defining, implication of relativity.

I am pretty amazed that even in 2015, 110 years after Einstein presented his relativity, this very simple point remains controversial. Well, I am convinced that at least since 1911, almost all good physicists have agreed what the correct answer basically is.
He is right. Part of the problem is that Einstein's famous 1905 relativity paper declared:
The theory to be developed is based — like all electrodynamics — on the kinematics of the rigid body, since the assertions of any such theory have to do with the relationships between rigid bodies (systems of co-ordinates), clocks, and electromagnetic processes. Insufficient consideration of this circumstance lies at the root of the difficulties which the electrodynamics of moving bodies at present encounters. ...

If a material point is at rest relatively to this system of co-ordinates, its position can be defined relatively thereto by the employment of rigid standards of measurement and the methods of Euclidean geometry, and can be expressed in Cartesian co-ordinates. ...

Let there be given a stationary rigid rod; and let its length be l as measured by a measuring-rod which is also stationary.
Einstein's whole presentation is in terms of rigid bodies. If there is no such thing as a rigid body, then it is hard to make any sense of that paper.

Motl is right, here. The whole discovery of special relativity was based on the insight by FitzGerald and Lorentz that the Michelson-Morley apparatus was not really rigid, but can contract as motion deforms the electromagnetic fields that hold the molecules together. Then Poincare and Minkowski had the insight that space and time were being deformed.

Poincare's 1905 paper defined distance in terms of how far light goes in a specified time. Minkowski made the non-Euclidean metric the fundamental entity. Einstein's use of rigid measuring rods does not make much sense, and apparently is still causing confusion today.

Update: I meant to also say this. The most important point about relativity is that it rejects action-at-a-distance. If you had a rigid object, you could push it at one end, and have an instantaneous effect at the other end. That is completely contrary to the whole spirit of relativity.
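The point is quantitative, too: in a real rod, a push propagates at the material's speed of sound, which is tiny compared to c. Here is a sketch with illustrative material constants roughly those of steel:

```python
# A push on one end of a rod propagates at the speed of sound
# v = sqrt(E / rho); a truly rigid rod would transmit it instantly,
# which relativity forbids. Constants below are roughly those of steel.
E = 2.0e11      # Young's modulus, Pa
rho = 7.85e3    # density, kg/m^3
c = 2.998e8     # speed of light, m/s

v_sound = (E / rho) ** 0.5       # about 5 km/s
length = 1.0                     # a 1-meter rod
print(f"push arrives after {length / v_sound * 1e6:.0f} microseconds")
print(f"speed of sound is c / {c / v_sound:.0f}")
```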

Wednesday, August 19, 2015

What quantum feature killed the classical picture?

David Jennings and Matthew Leifer just updated their paper No Return to Classical Reality:
At a fundamental level, the classical picture of the world is dead, and has been dead now for almost a century. Pinning down exactly which quantum phenomena are responsible for this has proved to be a tricky and controversial question, but a lot of progress has been made in the past few decades. We now have a range of precise statements showing that whatever the ultimate laws of Nature are, they cannot be classical. In this article, we review results on the fundamental phenomena of quantum theory that cannot be understood in classical terms. We proceed by first granting quite a broad notion of classicality, describe a range of quantum phenomena (such as randomness, discreteness, the indistinguishability of states, measurement-uncertainty, measurement-disturbance, complementarity, noncommutativity, interference, the no-cloning theorem, and the collapse of the wave-packet) that do fall under its liberal scope, and then finally describe some aspects of quantum physics that can never admit a classical understanding -- the intrinsically quantum mechanical aspects of Nature. The most famous of these is Bell's theorem, but we also review two more recent results in this area.
I agree with them that those other things are not so radically different from classical mechanics, but then they go nuts with the profundity of Bell's Theorem.
The departure of quantum mechanics from classicality was put into a very sharp and powerful form by John Bell [2, 20], who showed that some aspects of quantum entanglement can never fit into a model in which systems possess objective properties prior to measurement and that also obeys a principle of locality. Since the result only depends on certain empirically observed predictions of quantum theory, rather than the structure of the theory itself, any future theory beyond quantum theory will be subject to the same argument, so there can be no going back to a conception of the world that is both classical and local. ...

In the literature, this is often referred to by saying that either “locality” or “realism” must be given up. However you wish to parse the dilemma, it is clear that Bell inequality violations imply a radical departure from classical physics. ...

To sum up, we have shown that many phenomena that are traditionally viewed as intrinsically quantum-mechanical; such as randomness, discreteness, the indistinguishability of states, measurement-uncertainty, measurement-disturbance, complementarity, non-commutativity, interference, the no-cloning theorem, and the collapse of the wave-packet; all appear within classical statistical mechanics under reversible dynamics. These serve to map out classical fragments of quantum physics, in a search for the genuinely strange aspects of the theory. In addition to Bell’s theorem on the failure of local causality at a fundamental level, we have described two less well-known results that reveal further deep and subtle insights into the quantum realm.
Bell's theorem is a consequence of non-commutativity and those other principles. It does not contradict local causality, unless you are using contrived definitions (as the paper does).

If two observables do not commute, then measuring one leaves some uncertainty in the other one. That is the quantum behavior at the core of Bell's theorem. Measuring the position of an electron has the effect of localizing it, and that creates uncertainty in momentum.

Bell wanted to believe that the act of measuring an electron did not necessarily disturb it, and gave the value of some hidden variable that was determined all along. If he were right, then quantum mechanics would be proved wrong. So all he really had was an argument that some alternative theory of hidden variables is wrong.
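The non-commutativity at issue is elementary to exhibit. The Pauli matrices are the simplest pair of non-commuting observables, and their nonzero commutator is the algebraic root of measurement uncertainty:

```python
import numpy as np

# Pauli matrices X and Z: the simplest non-commuting observables.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

commutator = X @ Z - Z @ X
print(commutator)   # nonzero, so X and Z cannot both be measured sharply
```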

Lubos Motl's latest rant is about how some Christian videos explain quantum mechanics better than the atheist videos. Along the way, he says:
quantum mechanics teaches us that (especially before a measurement) there is no "objective truth about the state of Nature" from which all the knowledge of all observers would be derived as projections or a subset.
Yes, that is right. I am not sure about the religious implications, but that is a core view of quantum mechanics that goes back to Heisenberg, von Neumann, and Dirac. It is not something that Bell discovered decades later.

Saying that Bell's Theorem rules out "realism" seems very profound, until you learn that realism is just defined as some particular hidden variable theory.

Monday, August 17, 2015

John Conway's Life

There is a new book out about a mathematician, and as usual he is portrayed as a mentally ill misfit. Here is the WSJ review:
Even Mr. Conway’s darkest points somehow take a sharp veer into whimsy. In 1993, suffering from heart disease and somehow flat broke on a Princeton salary, he attempted suicide by pills. Upon recovering, he wore a T-shirt around campus that read “SUICIDE” in large block letters, apparently with the intent of diffusing rather than generating awkwardness. (“I wore it for 2 or 3 days until it got too sweaty,” he recalls.) ...

Mr. Conway typifies a popular stereotype of the mathematician: prone to wild enthusiasms, sweaty and wild-bearded, inattentive to the mundanities. Ms. Roberts, to her credit, reminds us that he is as much a social outlier among his colleagues as he would be in the general public; that when he forgets to show up to deliver a lecture, it’s annoying, not charming; that the sincere and profound admiration Mr. Conway enjoys is often tinted with exasperation. This is most notable in the only slightly touched-on subject of his romantic life. “I think John is the most selfish, childlike person I have ever met,” one of his three ex-wives tells Ms. Roberts. “One of the reasons I find that so intolerable is that I know damn well he can be human if he cares enough to bother.”
Hollywood usually treats mathematicians as mentally ill also.

A new physics biographical essay writes:
In the early 1970s, Yuri Golfand was among the discoverers of theoretical supersymmetry, a concept which completely changed mathematical physics in the 21st century. After his discovery, his research institution in Moscow fired him. He knew the humiliations of the Brezhnev regime firsthand, blacklisted and unemployed for the rest of the decade due to his desire to emigrate to Israel.
It calls supersymmetry a "revolutionary concept in theoretical physics". Supersymmetry certainly caused a lot of excitement, but it has been a gigantic dead end. Nothing has come out of that work that has any bearing on the real world. No Nobel Prizes have been given for any work related to supersymmetry. The world is not supersymmetric.

I don't want to minimize his hardships under Communism, but no one else got exit visas either.

Wednesday, August 12, 2015

Deutsch defends many-worlds philosophy

David Deutsch is one of the chief gurus of the many-worlds interpretation (MWI) of quantum mechanics, and of quantum computing. He says that quantum computing will work because of the efficiency of parallel computation being done in alternate universes.

I suspect that he was considered a crackpot at first, but now that we have 100s of millions of dollars being spent on these dead-ends of physics, he is revered as an insightful genius. He has been mentioned as a candidate for a Nobel Prize, if anyone ever finds any evidence for anything he says.

The MWI has two fatal flaws. First, there is no empirical evidence for it, and there can never be any such evidence. Second, it destroys the probabilistic predictions that are at the heart of quantum mechanics and every other science.

Deutsch has posted a new paper addressing these issues. There is no physics in the article; just philosophical hand-waving:
Claims that the standard methodology of scientific testing is inapplicable to Everettian quantum theory, and hence that the theory is untestable, are due to misconceptions about probability and about the logic of experimental testing. Refuting those claims by correcting those misconceptions leads to various simplifications, notably the elimination of everything probabilistic from fundamental physics (stochastic processes) and from the methodology of testing ('Bayesian' credences).
By "Everettian", he means MWI, and he shortens it to just "quantum theory", as if that were the most sensible interpretation. Copenhagen and other textbook interpretations are called "collapse" variants. The collapse is the idea that you refuse to consider the alternative (unobservable) universes.

Deutsch flips the arguments with a philosophical sleight-of-hand. Crediting Karl Popper's rejection of positivism, he holds that a good scientific explanation is much more important than a crucial experiment. He agrees with the philosophers who say that there is no such thing as a crucial experiment. MWI doesn't explain any experiments, but it does give a good explanation, so he says that it is philosophically superior to the collapse variants.

He goes further and denies that any probabilistic theories are truly testable, and only something like MWI, which says that anything can happen without any probability estimates, should be considered testable. He concludes:
By adopting Popper’s explanatory, conjectural conception of science, and his objective, problem-based methodology of scientific testing (instead of ones that are subjective, inductivist, positivist, ‘Bayesian’ etc.), and bearing in mind the decision-theoretic argument, we can eliminate the perceived problems about testing Everettian quantum theory and arrive at several simplifications of methodological issues in general.

In particular, I have shown that the claim that the standard methods of testing are invalid for Everettian quantum theory depends on adopting a positivist or instrumentalist view of what the theory is about. The claim evaporates, given that science is about explaining the physical world.

Even ‘everything-possible-happens’ theories can be testable. But Everettian quantum theory is more than an everything-possible-happens theory. Because of its explanatory structure (exploited by, for instance, the decision-theoretic argument) it is testable in all the standard ways. It is the predictions of its ‘collapse’ variants (and any theory predicting literally stochastic processes in nature) that are not genuinely testable: their ‘tests’ depend on scientists conforming to a rule of behaviour, and not solely on reality conforming to explanations.
I cannot make any sense of this. All of science involves some sort of comparison of theory with experiment. The measurements never match up exactly, so we are always left with the problem of deciding whether the observations are within the margins of what the theory said was likely. There is no other way to do science, as far as I know.

If a theory gives probabilities and error estimates, as all good scientific theories do, then Deutsch says that it is not testable. If a theory says that everything possible happens, as MWI does, then Deutsch says that it is testable.

There is no merit to anything Deutsch says. Popper was wrong in his rejection of positivism. Duhem-Quine were wrong in their rejection of the crucial experiment. MWI is incoherent. Quantum computing is a pipe dream. You can reverse almost everything he says, and get closer to the truth.

Monday, August 10, 2015

Where are the extraterrestrials?

Dennis Overbye writes in the NY Times that not everyone is excited by the possibility of primitive life on Mars or elsewhere:
In an article published in Technology Review in 2008, Professor Bostrom declared that it would be a really bad sign for the future of humanity if we found even a microbe clinging to a rock on Mars. “Dead rocks and lifeless sands would lift my spirit,” he wrote.

Why?

It goes back to a lunch in 1950 in Los Alamos, N.M., the birthplace of the atomic bomb. The subject was flying saucers and interstellar travel. The physicist Enrico Fermi blurted out a question that has become famous among astronomers: “Where is everybody?”

The fact that there was no evidence outside supermarket tabloids that aliens had ever visited Earth convinced Fermi that interstellar travel was impossible. It would simply take too long to get anywhere.

The argument was expanded by scientists like Michael Hart and Frank Tipler, who concluded that extraterrestrial technological civilizations simply didn’t exist.

The logic is simple. Imagine that one million years from now Earthlings launch a robot to Alpha Centauri, the closest star system to our own. It gets there in a few years, and a million years later sends off probes to two other star systems. A million years after that, each of those sends off two more probes. Even allowing for generous travel times, in 100 million years roughly a nonillion stars (10^30) could be visited. The galaxy contains maybe 200 billion stars, so each could be visited more than a trillion times in this robot crisscrossing.
I think that this is correct. If Earth-like planets are common, then it would only take about 100 million years for an advanced civilization to colonize the galaxy.
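The doubling arithmetic in the quoted passage is easy to check: probes that double in number every million years give 100 doublings in 100 million years.

```python
# Back-of-the-envelope check of the quoted numbers: self-replicating probes
# that double every million years yield 2^100 reachable systems in 100 Myr.
probes = 2**100           # about 1.27e30, i.e. roughly a nonillion
stars_in_galaxy = 200e9   # roughly 200 billion stars

print(f"systems reachable: {probes:.2e}")
print(f"visits per star:   {probes / stars_in_galaxy:.2e}")  # far more than a trillion
```

The exponential growth swamps the size of the galaxy so thoroughly that even sloppy assumptions about travel times do not change the conclusion.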

It seems reasonable to assume that primitive life has evolved on hundreds of other planets in our galaxy. But it is doubtful that life on any of them evolved into an advanced civilization.

Earth has many strange features that have been essential to human life, and that are unlikely elsewhere. We have a single sun, a single large moon to cause tides and stabilize the axial tilt, a Jupiter to clear out other junk, water covering two-thirds of the Earth so that both sea and land life are possible, etc. We do not even know where the water came from.

Thursday, August 6, 2015

Newton studied alchemy

Alchemy is frequently cited as an example of pre-modern pseudo-science, like astrology. This always seemed unfair to me, as alchemists presumably spent most of their time studying the properties of materials, and their work was hence legitimate early chemistry.

Even transmutation, such as trying to turn lead into gold, is not inherently crazy. As we now know, all matter is made of the same quarks and electrons, and there is no law of nature preventing the conversion of one kind of atom into another. It is just extremely difficult, and possible today only in very tiny quantities in giant particle accelerators.

In a new collection of essays on The Unknown Newton, William R. Newman writes on Newton and alchemy:
For the tercentenary celebration of Newton’s birth, Keynes famously wrote in an address that:
Newton was not the first of the age of reason. He was the last of the magicians, the last of the Babylonians and Sumerians, the last great mind which looked out on the visible and intellectual world with the same eyes as those who began to build our intellectual inheritance rather less than 10,000 years ago.
The thrust of Keynes’s address was that the conventional view of Newton as a “rationalist, one who had taught us to think on the lines of cold and untinctured reason,” was not quite right and that the truth was more complicated: one of the greatest scientists of all time spent a large part of his most creative years on various unscientific quests, including a search for that most elusive of alchemical substances, the philosophers’ stone. ...

Newton’s alchemy fits neither the Keynesian picture of the English natural philosopher as “the last of the magicians” nor the Dobbsian view of his alchemy as a religious quest. Instead, Newton’s alchemical studies reveal an early modern scholar and experimenter hard at work in deciphering extraordinarily difficult texts and a natural philosopher attempting to integrate the fruits of this research into his overall reform of scientific knowledge. Although this view of Newton’s alchemical scholarship and experimentation may be less evocative than Keynes’s or Dobbs’s, it conforms more closely to the depiction of Newton familiar to scholars of his physics, mathematics, and biblical studies. Throughout his divergent activities, Newton remained wedded to techniques of analysis and understanding that would be familiar to most of us today. The apparent incongruity between Newton the scientist and Newton the alchemist dissolves when we acquire a deeper understanding of alchemy and of the man himself.
The other essays examine Newton's religious investigations.

Wednesday, August 5, 2015

Theories of Everything, Mapped

Quanta magazine reports:
“Ever since the dawn of civilization,” Stephen Hawking wrote in his international bestseller A Brief History of Time, “people have not been content to see events as unconnected and inexplicable. They have craved an understanding of the underlying order in the world.”

In the quest for a unified, coherent description of all of nature — a “theory of everything” — physicists have unearthed the taproots linking ever more disparate phenomena.
The vast majority of these efforts are foolish and misguided, in my opinion, but please click on the link and click "START" for a snazzy map of all these theories on your screen. It looks great. You can wave your mouse over it, and feel as if you are touching some grand idea, all tied together.

It is just a stupid collection of failed buzzwords, of course, but it looks great.

Monday, August 3, 2015

Wave function can be just our knowledge

Interpretations of quantum mechanics can disagree about whether the wave function Psi is a direct reflection of reality (ontology, ontic) or just a representation of our knowledge (epistemology, epistemic).

Some would also say to "shut up and calculate", and would be dismissive of such philosophical distinctions. Bohr would say that any time you write formulas on paper, you are just trying to express your knowledge about a system.

A new paper comments on the PBR theorem:
Building upon the Harrigan‐Spekkens analysis, the PBR paper (Pusey, Barrett and Rudolph 2012) raises the question of whether a Ψ‐epistemic interpretation of the Ψ‐function is consistent with QM. According to the theorem proved in the paper, it is not, namely, if the epistemic interpretation is accepted and an overlap of the supports of two distinct probability distributions (corresponding to two distinct quantum states) is allowed, a violation of the predictions QM follows. PBR conclude that QM is not amenable to the epistemic interpretation. This surprising result has immediately attracted a great deal of attention. Most readers have taken the theorem at face value: The Ψ‐epistemic interpretation is indeed ruled out by the theorem and consequently, the remaining option is the Ψ‐ontic interpretations. In other words, the PBR theorem has been advertised as supporting a realist interpretation rather than an epistemic interpretation of Ψ. ...

What if we go radically epistemic and deny the assumption of definite physical states? In that case we take QM to be mute about the physical state of the system, interpreting it instead along the lines of Schrodinger, Pitowsky, Bub, Fuchs and others have suggested, as a maximal catalog of possible measurement results, a betting algorithm, a book‐keeping device. This option is left untouched by the PBR theorem. Not only is it not undermined by it, to the contrary, in ruling out a more classical probabilistic interpretation, which presupposes the existence of the ‘real’ state of the system, the PBR theorem in fact strengthens the radical epistemic interpretation.
So this paper made a big splash because the authors claimed to be disproving an epistemic interpretation, but it does nothing of the kind. It only gives an argument against a hidden variable theory that no one believed in anyway.

The original title of the PBR paper was The quantum state cannot be interpreted statistically, and it was accepted for publication in Nature, a very high status journal. This raised eyebrows, as the quantum state wave function has been interpreted statistically for more than 80 years, and nothing short of a startling Nobel-Prize-winning discovery could change that.

But the title was an incorrect use of terminology, and all they had was an argument against replacing quantum mechanics with a hidden variable theory of a type that had been considered and rejected 80 years ago. The argument had nothing to do with statistical interpretations. The title had to be changed, and the paper was published in a lesser journal.