Tuesday, February 28, 2012

IBM joins quest to build a qubit

A reader writes:
While it's refreshing to find someone as skeptical of scientific hype as you, I think you have gone too far with the claim that "no one has a qubit yet".
So I report progress in efforts to prove me wrong. Today's NY Times reports:
I.B.M. is jumping into an area of computing that has, until now, been primarily the province of academia: the quest to build a quantum computer.

A computer that took advantage of the oddities of quantum physics could solve in seconds certain problems that would occupy present-day computers for billions of years. But for now, it is impossible to build such a computer because the bits of information it would need for the calculations fall apart before a calculation can be completed. The problem is, in essence, like trying to knit a sweater with yarn that unravels before the first purl.

On Tuesday, I.B.M. researchers will present experimental results that they say put them close to solving this problem, both by lengthening the lifetime of the quantum bits of information and by quickening the pace of computation. The presentation will take place at a meeting of the American Physical Society in Boston.

“In the past, people have said, maybe it’s 50 years away, it’s a dream, maybe it’ll happen sometime,” said Mark B. Ketchen, manager of the physics of information group at I.B.M.’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. “I used to think it was 50. Now I’m thinking like it’s 15 or a little more. It’s within reach. It’s within our lifetime. It’s going to happen.”
This acknowledges that they cannot build a quantum computer because no one can make a qubit that is stable enough to be used in a computation.

The IBM method is commercially impractical because it requires a circuit that is a whole lot colder than liquid helium:
When cooled to a hundredth of a degree above absolute zero, the circuits act as qubits.

The problem is that a qubit becomes scrambled in short order, and the information it carries turns into gibberish. ...

Even though that is still not long enough for perfect calculations, it is almost good enough for error correction algorithms to detect and fix any mistakes. “We’re just crossing this threshold,” Dr. Ketchen said, “which is a big morale booster that says, gee, this is becoming doable.”

Below the threshold, generating reliable answers is impossible. “No matter how many qubits you had, you couldn’t even get one effectively good one because of the error rates being too high,” he said.
I read this as saying that IBM is claiming to make progress towards a qubit that is suitable for computation, but that no one has made such a qubit yet.
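To see why there is a threshold at all, here is a minimal sketch of the idea, using the classical 3-bit repetition code (my own illustration, not IBM's scheme; real quantum fault-tolerance thresholds are far stricter, but the shape of the argument is the same):

```python
# Why error correction has a threshold: the classical 3-bit repetition code.
# Encode one bit as three copies and take a majority vote. The vote fails
# only if 2 or 3 copies are flipped, so p_logical = 3p^2(1-p) + p^3.
for p in [0.01, 0.1, 0.3, 0.45, 0.6]:
    p_logical = 3 * p**2 * (1 - p) + p**3
    verdict = "helps" if p_logical < p else "hurts"
    print(f"p = {p:.2f}  ->  p_logical = {p_logical:.4f}  (encoding {verdict})")
```

Below the threshold (here p = 1/2), adding redundancy drives the error rate down; above it, redundancy makes things worse. That is the sense in which error rates that are too high leave you unable to get even one effectively good qubit.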

I get a lot of flack for being a quantum computer skeptic. But when I deny that anyone has made a true qubit, I am just saying the same thing that this NY Times article says, and what the experts quoted in it say. Where I disagree with them is where they express confidence in doing something that has never been done before.

To me, it seems much more likely that quantum computing is impossible. I say this because (1) it is contrary to conventional notions of computability, (2) it is not implied by quantum mechanics, (3) it is relentlessly overhyped, and (4) it has been pursued by a lot of smart people, and a lot of money, for decades, without much to show for the work.

Of course you have no more reason to believe my predictions than those of a big-shot MIT professor. Just ask yourself why you keep reading stories about quantum computers doing complex computations with dozens of qubits, and yet no one has a convincing demonstration that anyone has even computed anything with a single qubit.

Update: Lubos Motl summarizes quantum computers and adds:
Errors in the calculations and decoherence make all existing prototypes of a quantum computer unusable. So far it is the case. Maybe one of you will help to solve the technological obstacles.
The obstacles may be more than technological; as a comment below points out, the laws of physics may require that calculating with qubits requires exponentially large amounts of energy.

Monday, February 27, 2012

Bohmian mechanics makes no sense

A. S. Sanz writes in a new paper:
At present, there is no doubt that quantum mechanics can be considered the most successful theory ever devised to explain the physical world. Its applicability ranges from very fundamental physical problems to the high-technology applications that are nowadays an important part of our daily life. This theory, though, still constitutes a veiled mystery at a deeper level of understanding, for there is a lack of a clear interpretation of the physics underlying quantum systems. This is somehow connected to its widely accepted interpretation, namely the Copenhagen interpretation [1], which not only does not allow us to think of quantum systems as we do of classical ones, but it just forbids such a thing.

A feasible way to surmount this drawback (although surely not the final one) comes through Bohmian mechanics [2–5].
There are two problems with this. First, Bohmian mechanics has not been able to reproduce the successes of quantum mechanics. Second, Bohmian mechanics is much more bizarre and harder to interpret physically than Copenhagen.

As Wikipedia explains:
The de Broglie–Bohm theory is explicitly non-local: The velocity of any one particle depends on the value of the wavefunction, which depends on the whole configuration of the universe. Because the known laws of physics are all local, and because non-local interactions combined with relativity lead to causal paradoxes, many physicists find this unacceptable.
Bohm ironicly called it a "causal interpretation" even tho it violates causality. I think that it is a little strange that Bohmian advocates act as if they are clarifying some quantum mystery, when they replace ordinary quantum mechanics with something that makes no sense.
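For reference, the non-locality shows up directly in the theory's guidance equation. In one standard form (my notation; Ψ is the wavefunction of all N particles, Q_k and m_k the position and mass of particle k), each particle's velocity is set by the wavefunction evaluated at the instantaneous configuration of every particle:

```latex
\frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \Psi}{\Psi}\right)(Q_1, \dots, Q_N, t)
```

Move a distant particle and the configuration point changes, so the velocity of every other particle changes at the same instant. That is the non-locality the Wikipedia passage describes.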

Friday, February 24, 2012

Universe not so interconnected

Sean Carroll writes on his Cosmic Variance blog:
More recently, though, another excerpt from this lecture has been passed around, this one about ramifications of the Pauli Exclusion Principle. (Headline at io9: “Brian Cox explains the interconnectedness of the universe, explodes your brain.”)

The problem is that, in this video, the proffered mind-bending consequences of quantum mechanics aren’t actually correct.
Lubos Motl agrees:
It's very refreshing to agree with Sean Carroll (and a guy named Tom Swanson) on something.

Brian Cox wanted to use quantum mechanics to defend the Gaia religious proposition that "everything is connected with everything else" (the last sentence of the video above). I am convinced that this main "punch line" he wanted to prove was predetermined by ideological prejudices and goals.

The vague misconception that "everything is connected with everything else" is a pillar of the broader environmentalist movement into which Cox indisputably belongs.
I explained Cox's errors here back in Dec. 2011:
He is talking about a theoretical effect that is not measurable unless the electrons are very close. I think that it is a bizarre and unscientific point. It is like saying that my finger has a gravitational effect on Venus. If so, it is negligible and unobservable. ... The idea that everything is connected to everything else seems contrary to causality. If that is what Cox means to say, then he ought to say whether it is contrary to causality.
I was complaining because the Bad Astronomer Phil Plait endorsed this video, in spite of its faults.

A theme of my blog is to attack real scientists who present unscientific conclusions. That is, I attack pseudoscience, but I am really not interested in astrology and other such matters. I prefer to point out pseudoscience in academically respectable channels. There is a lot of it in quantum mechanics.

I just noticed that Carroll and Motl were responding to Cox's defense of his video in the WSJ:
I recently gave a lecture, screened on the BBC, about quantum theory, in which I pointed out that “everything is connected to everything else”. This is literally true if quantum theory as currently understood is not augmented by new physics. This means that the subatomic constituents of your body are constantly shifting, albeit absolutely imperceptibly, in response to events happening an arbitrarily large distance away; for the sake of argument, let’s say on the other side of the Universe.

This statement received some criticism in scientific circles. Not because it’s wrong, because it isn’t; without this behavior, we wouldn’t be able to explain the bonds that hold molecules together. The problem is that it sounds like woo woo, and quantum theory attracts woo-woo merde-merchants like the pronouncements of New Age mystics attract flies – metaphorically speaking.
I guess that I am one of those that Cox is responding to. It is "woo woo" when he claims a scientific truth and then qualifies it with "albeit absolutely imperceptibly".

Wednesday, February 22, 2012

Double-slit needs no probabilities

I posted below that no probabilities are needed in quantum mechanics. A reader asks whether the Double-slit experiment proves randomness in quantum mechanics.

I don't see how any physical experiment could prove randomness, or directly observe probabilities. The double-slit experiment was performed and understood by Young in 1803, and no one thought then that it implied anything about probability or randomness.

I think that the relation is as follows. If you assume that light is composed of classical (non-quantum) photons, then the photons go thru one slit or the other, with some probability for each. The diffraction pattern can also be interpreted as a probability distribution.
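As a rough illustration of that last sentence, here is a minimal sketch (idealized point slits, small-angle approximation, arbitrary numbers) that computes the two-slit interference pattern and normalizes it so it reads as a probability distribution over detector positions:

```python
import numpy as np

# Two-slit interference in the far-field, small-angle approximation.
# Idealized point slits; all numbers are arbitrary illustration values.
wavelength = 500e-9                 # 500 nm light
d = 50e-6                           # slit separation (m)
L = 1.0                             # distance to the detection screen (m)
x = np.linspace(-0.02, 0.02, 2001)  # positions on the screen (m)

phase = np.pi * d * x / (wavelength * L)  # phase difference, sin(theta) ~ x/L
intensity = np.cos(phase) ** 2            # the familiar fringe pattern

prob = intensity / intensity.sum()  # normalized: reads as a distribution
print(prob.sum(), prob.max())       # total probability 1.0, and the peak bin
```

Nothing in the arithmetic says whether any individual photon behaves randomly; the pattern itself is perfectly deterministic.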

You get similar results if you collect data on coin tosses. It does not say anything about whether the coin tosses are deterministic or probabilistic.

To get those probabilities, we had to assume classical particles, which is clearly an invalid assumption. Quantum mechanics is emphatic that light does not consist of classical particles. Sometimes quantum mechanics textbooks say that light is composed of particles, but then they are funny particles that can be in two places at once. So we cannot say that the light particle goes thru one slit or the other.

Monday, February 20, 2012

One atom transistor

ScienceDaily reports:
The smallest transistor ever built -- in fact, the smallest transistor that can be built -- has been created using a single phosphorus atom by an international team of researchers at the University of New South Wales, Purdue University and the University of Melbourne. ...

"This is a beautiful demonstration of controlling matter at the atomic scale to make a real device," Simmons says. "Fifty years ago when the first transistor was developed, no one could have predicted the role that computers would play in our society today. As we transition to atomic-scale devices, we are now entering a new paradigm where quantum mechanics promises a similar technological disruption. It is the promise of this future technology that makes this present development so exciting." ...

The single-atom transistor does have one serious limitation: It must be kept very cold, at least as cold as liquid nitrogen, or minus 321 degrees Fahrenheit (minus 196 Celsius). ...
This is not a new paradigm. Quantum mechanics has always been used to make transistors. We have no other understanding of them. But the real hype here is not smaller or faster computers, but the holy grail of quantum computers.
The single-atom transistor could lead the way to building a quantum computer that works by controlling the electrons and thereby the quantum information, or qubits. Some scientists, however, have doubts that such a device can ever be built.

"Whilst this result is a major milestone in scalable silicon quantum computing, it does not answer the question of whether quantum computing is possible or not," Simmons says. "The answer to this lies in whether quantum coherence can be controlled over large numbers of qubits. The technique we have developed is potentially scalable, using the same materials as the silicon industry, but more time is needed to realize this goal."
This is where quantum computing research has been stuck for 30 years. Physicists can find more and cleverer ways of demonstrating quantum phenomena, but they cannot show that scalable qubits are possible. I am betting that scalable qubits are impossible.

Wired explains:
This means a quantum computer could do things a classical machine never could, letting you, say, crack today’s most complex encryption algorithms in relatively short order. But physicists still can’t agree on whether a quantum computer can actually be built. The rub is that if you interact with a quantum system, it “decoheres,” collapsing into a single state. In other words, the qubit turns into an ordinary bit. If you want to build a quantum computer, you have to — among other things — isolate its qubits from their surrounding environment.

The new research from Purdue and New South Wales does not demonstrate a quantum bit. But in placing a single-atom transistor on a silicon crystal — and carefully isolating it from the surrounding substrate — it provides a clearer path to a working quantum computer.
Note the admission that no one has a qubit yet, in spite of all the hype.
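The decoherence that the Wired excerpt describes can be illustrated with a toy model (my own sketch, not anything from the article): a qubit in an equal superposition whose density-matrix off-diagonal terms decay exponentially with a coherence time T2, leaving an ordinary 50/50 classical bit.

```python
import numpy as np

# Toy decoherence: a qubit starts in the superposition (|0> + |1>)/sqrt(2).
# Coupling to the environment damps the off-diagonal (coherence) terms of
# the density matrix by exp(-t/T2); what remains is a plain classical mixture.
rho0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
T2 = 1.0  # coherence time, arbitrary units

for t in [0.0, 0.5, 2.0, 10.0]:
    rho = rho0.copy()
    rho[0, 1] *= np.exp(-t / T2)
    rho[1, 0] *= np.exp(-t / T2)
    print(f"t = {t:4.1f}  coherence = {abs(rho[0, 1]):.4f}")
```

Once the coherence is gone, the qubit carries no more information than an ordinary bit, which is why lengthening the coherence time is the whole game.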

Sunday, February 19, 2012

Quantum computer hype in Vancouver

The Vancouver Sun shows a picture of computer scientist Scott Aaronson and brags about the vast computational advantages of quantum computers:
“If this all works well, this is five years off. ...

“But who knows what will happen in the future. That's how people used to think about regular computers.”
The field has a dramatic mix of views. One company says that it is in commercial production of 100-qubit computers. A qubit is the basic building block of a quantum computer. If we could make qubits that could be chained together, then we could make quantum computers. We cannot make quantum computers, and predictions vary from saying that it will be possible in 5 years, to saying that it is forbidden by the laws of physics.
Aaronson thinks some problems in mathematics will remain beyond the limits of quantum computing and indeed, he thinks a breakthrough in development of a large-scale quantum computer is many decades, rather than a few years, away.

“Even quantum computers would have significant limitations. If you're just looking for a needle in a haystack ... then not even a quantum computer will give you an exponential speedup for this problem,” Aaronson said.

He said that view is contrary to most popular representations of quantum computers as infinitely capable.

Nonetheless, he rejects skeptics who doubt they will get built – he's offering a $100,000 prize to anyone who can demonstrate that it's impossible to build a large scale quantum computer.
Aaronson says that I was one of the skeptics that pushed him over the edge to offer his prize.

Czech string theorist Lubos Motl has another view:
Many people have said many stupid things about quantum computation. Many but not all anti-quantum zealots of course believe that quantum computation is impossible because the world must be fundamentally classical and a classical computer of a given size can't solve certain problems after a certain number of steps.
That is not my view. I believe that the world is fundamentally quantized and not classical.
Other people believe in quantum computers but they have used them to "prove" bizarre "interpretations" of quantum mechanics. In particular, David Deutsch has said that the likely existence of quantum computers proves the many-worlds interpretation of quantum mechanics.
At least Deutsch does recognize that a true quantum computer would demonstrate a phenomenon that has never been seen before in any experiment. If it had been seen, then that would be a proof of many-worlds, and everyone agrees that there is no proof of many-worlds. And I do not believe that there will ever be any proof of many-worlds, as it is a silly and unscientific idea.

Motl goes on with a little rant about the nature of science:
The only way to rule out a hypothesis is to actually find a wrong prediction that the hypothesis makes, one that disagrees with observations. Various anti-quantum zealots love to use some "alternative", non-scientific ways to falsify theories. A theory doesn't agree with their medieval prejudices so they just decide it means that the theory has to be abandoned. But science hasn't worked in this way for 500 years. What a prejudiced bigot a priori thinks is totally irrelevant in science.
That's right. A-priori opinions about how the world works are irrelevant, whether they come from Einstein or anyone else.

My main reason for being skeptical about scalable quantum computing is that decades of research by really smart people and hundreds of millions of dollars have gone into this, and no one has figured out a way to show that it is even possible. If the dramatic and implausible consequences were really possible, then I would expect to see some evidence of it by now. At this point, it seems more likely that some law of physics is preventing it, such as the laws of thermodynamics that prevent perpetual motion machines.

Motl concludes:
Quantum computers demonstrate the power of quantum mechanics, the fundamental framework underlying the laws of physics in our Universe, very crisply and if and when they will be produced, the world may substantially change, at least at some level. Let's hope the change won't be catastrophic.
Okay, but that is wishful thinking. If he follows his own advice, then he has to admit that the world may not follow his a-priori prejudices.

Update: Aaronson says he visited D-Wave Systems near Vancouver, and he reiterated his skepticism about their quantum computer:
It remains true, as I’ve reiterated here for years, that we have no direct evidence that quantum coherence is playing a role in the observed speedup, or indeed that entanglement between qubits is ever present in the system. ...

So I hereby retire my notorious comment from 2007, about the 16-qubit machine that D-Wave used for its Sudoku demonstration being no more computationally-useful than a roast-beef sandwich. ...

... the fundraising pressure is always for more qubits and more dramatic announcements, not for clearer understanding of its systems. So, let me try to get a message out to the pointy-haired bosses of the world: a single qubit that you understand is better than a thousand qubits that you don’t. There’s a reason why academic quantum computing groups focus on pushing down decoherence and demonstrating entanglement in 2, 3, or 4 qubits: because that way, at least you know that the qubits are qubits! Once you’ve shown that the foundation is solid, then you try to scale up. ...

For the first time, I find myself really, genuinely hoping — with all my heart — that D-Wave will succeed in proving that it can do some (not necessarily universal) form of scalable quantum computation. For, if nothing else, such a success would prove to the world that my $100,000 is safe, and decisively refute the QC skeptics who, right now, are getting even further under my skin than the D-Wave boosters ever did.
In other words, he accused them of being charlatans 5 years ago for over-hyping quantum computing when they could not even prove that they had one true qubit. But now he hopes that they will make a quantum computer so that he can prove me and the other quantum skeptics wrong! This is pathetic. Aaronson seems to realize that the field is going nowhere.

Friday, February 17, 2012

Sprung from Einstein’s head

Science writer Jeremy Bernstein writes in a new paper:
Periodically I prepare an imaginary lecture the purpose of which is to remind philosophers of physics, and indeed some physicists, that the quantum theory had its origins in experiments. Unlike general relativity which seems to have sprung from Einstein’s head, the quantum theory was a response to experiment. Even de Broglie’s conjecture that particles had also a wave-like nature was influenced by how this notion could be used to explain the quantization of the radii of the Bohr orbits. The experiment driven theoretical developments of the quantum theory began with Planck and have continued ever since.
General relativity sprung from Einstein’s head? One of the persistent myths that I debunk in my book is that Einstein created relativity out of pure thought, without any input from experiment. Einstein himself promoted this myth in his later years, and paradigm shift philosophers have made it the centerpiece of bogus theories about how science works.

Usually the myth is told about the special relativity of 1905, as Einstein's biographers acknowledge that he relied heavily on others for general relativity.

The term general relativity means (special, spacetime, electromagnetic) relativity applied to gravity. It is called general because it applies in general (accelerating) coordinate frames. The first breakthru was in 1905, when Poincare discovered the space-time metric, proposed a Lorentz-invariant theory of gravity, and explained how gravity could propagate at the speed of light and still be consistent with solar system observations. A couple of years later he announced that he had figured out how to use relativity to partially explain an anomaly in Mercury's orbit.

Einstein's biggest breakthru was to deduce in 1907 that gravity had an apparent effect on clocks.

We know from Einstein's letters that he spent several years trying (off and on) to extend Poincare's work on Mercury. Others helped him on the problem. He also wanted to explain the deflection of starlight. His next big breakthru was in 1915 when he showed how Grossmann's 1913 relativity equations could be used to explain the Mercury anomaly. When Hilbert gave another derivation of Grossmann's equations in 1915, Einstein became convinced that those equations must be right. Einstein's famous general relativity paper was published in 1916, with an acknowledgement to Grossmann, but no mention of Poincare or Hilbert.

Thus relativity had its origin in experiments, just like quantum mechanics.

Update: A new paper, Einstein the Stubborn: Correspondence between Einstein and Levi-Civita, examines Einstein's correspondence with Levi-Civita and Hilbert in 1914-16. The letters mostly consist of Levi-Civita and Hilbert trying to convince Einstein of errors in his general relativity papers. Grossmann had introduced covariant equations in 1913, but Einstein did not accept them, and gave fallacious arguments for non-covariant equations. Einstein finally admitted in 1916 to Hilbert, "The error you found in my paper of 1914 has now become completely clear to me".

Wednesday, February 15, 2012

No quantum probabilities needed

Here is how probabilities arise in quantum mechanics.

The core of the theory is an algebra of observables. These include position coordinates, momentum, energy, spin, electric charge, and anything else that is measurable as a real variable.

The key fact is that the observables do not commute. That is, the position X and the momentum P have the property that XP is not equal to PX. They differ by a fixed imaginary amount: XP - PX = i h-bar, where h-bar is the reduced Planck's constant, a very small quantity.
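As a concrete check, here is a minimal numerical sketch (my own illustration, in a truncated harmonic-oscillator basis with h-bar = m = ω = 1; the truncation means only the low-lying matrix elements come out exact):

```python
import numpy as np

hbar = 1.0
N = 50  # basis truncation; the commutator is exact away from the last state

# Ladder operators in the harmonic-oscillator number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation
adag = a.conj().T                            # creation

X = np.sqrt(hbar / 2) * (a + adag)           # position
P = 1j * np.sqrt(hbar / 2) * (adag - a)      # momentum

comm = X @ P - P @ X
print(np.round(comm[:3, :3], 12))            # i * hbar times the identity
```

The top corner of the matrix is i times h-bar on the diagonal, exactly as the commutation relation says.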

This is not so radical, as many everyday observables also have this property that observations depend on the order in which they are performed. For example, poll questions:
Sometimes the very order of the questions can have an impact on the results. Often that impact is intentional; sometimes it is not. The impact of order can often be subtle.

During troubled economic times, for example, if people are asked what they think of the economy before they are asked their opinion of the president, the presidential popularity rating will probably be lower than if you had reversed the order of the questions. And in good economic times, the opposite is true.
For more, see Why Question Order Changes Poll Results.

To observe a system, we need a representation of the observables on a Hilbert space of possible system states. That means that a vector ψ represents the state of the system, that an observable A acts on ψ to give a new state Aψ, and that two vectors ψ1 and ψ2 can be combined to get a number <ψ1|ψ2>. The latter is like an ordinary dot product and gives 0 when the vectors are orthogonal.

If an observable A is measured on a system state ψ, the expected value is <ψ|Aψ>, also written <ψ|A|ψ>. It is a real number.

An actual value measured in the lab may not match the expected value exactly. Real numbers never match exactly in the lab, with quantum mechanics or any other scientific theory. The standard deviation, or sigma, is also computed from expected values of observables. Thus, the theory might say that a particle will be observed at a distance of 5.24 ± .03 meters. Then a measurement is likely to be between 5.21 and 5.27 meters.
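Here is a minimal numerical sketch of those computations (a made-up two-level system, not any particular experiment): the expected value <ψ|A|ψ>, and sigma obtained from the expected value of A².

```python
import numpy as np

# A made-up two-level system: normalized state psi, Hermitian observable A.
psi = np.array([0.6, 0.8])              # |0.6|^2 + |0.8|^2 = 1
A = np.array([[5.0, 1.0],
              [1.0, 3.0]])              # some observable, say a distance in meters

exp_A = psi.conj() @ A @ psi            # <psi|A|psi>, the expected value
exp_A2 = psi.conj() @ (A @ A) @ psi     # <psi|A^2|psi>
sigma = np.sqrt(exp_A2 - exp_A ** 2)    # standard deviation

print(f"expected value = {exp_A:.2f} +/- {sigma:.2f}")
```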

So quantum mechanics makes probabilistic predictions in the sense that it gives a range of likely outcomes for measurements. But every other branch of science does something similar, and this is not why quantum mechanics is said to be probabilistic.

Quantum mechanics is said to be probabilistic because it predicts probabilities. Here is how. Suppose that the observable A is a yes-no (boolean) observable, such as asking whether an electron is in a particular region of space. Yes means 1, no means 0, and no other values are observed. Then the expected value <ψ|A|ψ> will be in the range [0,1]. If the value is 1, then you can be sure of a yes, and if the value is 0, then you can be sure of a no. If the value is in between, then it can be interpreted as the probability of a yes. This interpretation is called the Born rule. Max Born suggested it as one possibility in a 1926 paper footnote, and got a Nobel prize for it in 1954.

Probabilities do not play an essential role here. Testing the Born rule is just a special case of testing an expected value of an observable, where the observable is a yes-no variable. An experiment does not really say whether there is any genuine randomness. It just says that if the expected value of a yes-no observable is 0.65, and you do 100 experiments, then you should get about 65 yes outcomes.
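A minimal simulation of that claim (illustrative numbers only; the state is chosen so that the expected value is 0.65):

```python
import numpy as np

rng = np.random.default_rng(0)

psi = np.array([np.sqrt(0.65), np.sqrt(0.35)])  # state with <psi|A|psi> = 0.65
A = np.diag([1.0, 0.0])                         # yes-no observable: yes = 1, no = 0

p_yes = float(psi.conj() @ A @ psi)             # the Born-rule probability
outcomes = rng.random(100) < p_yes              # 100 repeated experiments
print(f"expected value = {p_yes:.2f}, yes outcomes: {outcomes.sum()} of 100")
```

The test only ever checks the frequency of yes outcomes against the expected value; nothing in it distinguishes genuine randomness from mere variation.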

It is better to just say that quantum mechanics predicts the expected values of observables. That is what the formulas really do, and that is how the theory is tested. The Born rule adds an interpretation in the case of a yes-no observable. But that interpretation is just metaphysical fluff. There is no experimental test for it. The tests are just for the expected values, and not for the probabilities.

Thus I do not believe that it is either necessary or very useful to talk about probabilities in quantum mechanics. I guess you could say that the probability gives a way of understanding that the same experiment does not give the same outcome every time, but it does not give any more quantitatively useful info. This understanding is nothing special because every other branch of science also has variation in experimental outcomes.

My view here is a minority view. I have not seen anyone else express it this way. The textbooks usually say that ψ is a probability density or amplitude. But ψ is complex-valued or maybe even spinor-valued, and it requires some computation to get a probability. It is not a probability. That computation is precisely the expectation value described above. Sometimes the textbooks admit that the quantum probabilities require special interpretation because they can be negative. I say that negative probabilities are not probabilities and that the probabilities are no more essential to quantum mechanics than to any other physical theory that does real number computations.
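To make that concrete, here is a minimal sketch (a made-up discretized wavefunction on 6 grid points) showing that the computation which turns ψ into a probability is exactly the expected value of a yes-no observable, namely a projector onto a region:

```python
import numpy as np

# A made-up discretized complex wavefunction on 6 grid points.
psi = np.array([0.1+0.2j, 0.3-0.1j, 0.5+0.0j, 0.4+0.3j, 0.2-0.2j, 0.1+0.1j])
psi = psi / np.linalg.norm(psi)             # normalize the state

# Yes-no observable: "is the particle in the left half of the grid?"
# As a matrix it is a projector: 1s on the region, 0s elsewhere.
A = np.diag([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

prob = np.real(psi.conj() @ A @ psi)        # the expectation value <psi|A|psi>
print(prob, np.sum(np.abs(psi[:3]) ** 2))   # same number as summing |psi|^2
```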

A reader asks what quantum interpretation this is. It is similar to the ensemble interpretation, without the probabilities.

Monday, February 13, 2012

Quantum mysteries disentangled

I just stumbled across a video, The Quantum Conspiracy: What Popularizers of QM Don't Want You to Know. It is a Google Tech Talk presented by Ron Garret on January 6, 2011. His paper is Quantum Mysteries Disentangled.

It is an explanation of some of the mysteries of quantum mechanics, by a non-physicist. He has a slightly unconventional view that I did not find totally persuasive. But I do agree with him that physicists present the subject as more mysterious than it really is, as if there is a conspiracy to confuse you.

A reader sends the 1997 paper Quantum Mechanics of Measurement, by N. J. Cerf and C. Adami, for more details on Garret's view.

Friday, February 10, 2012

Pushed over the quantum edge

Scott Aaronson offered $100,000 (and then reneged) for disproving scalable quantum computing, and added:
Besides Gil and Robert Alicki, other notable QC skeptics include Leonid Levin (of Cook-Levin Theorem fame), Oded Goldreich, Gerard ‘t Hooft (the Nobel physicist), Stephen Wolfram, and Ed Fredkin (well, he actually believes P=BQP). Besides them, my experience has been that there’s also a significantly larger group of physicists, chemists, and computer scientists who agree with the anti-QC sentiments but haven’t articulated them in print (there were even a few who angrily accosted me after department colloquia to accuse me of peddling lies!) If you read the comment threads on Gil’s blog, you’ll see lots of contributions from two more skeptics, both of whom played large roles in “pushing me over the edge” to make this bet: Roger Schlafly (author of a book called “How Einstein Ruined Physics”) and Craig Feinstein (author of numerous wrong P!=NP proofs).
So a lot of important scientists are skeptical about quantum computing (see also here), and they do not necessarily say so on the record. But a few negative blog comments, and he goes ballistic!

Craig criticized it in comments here, and I did on this blog and here.

Quantum computing is one of those subjects that academic scientists are not supposed to criticize because it gets a lot of govt grant funding. It frequently makes extravagant promises about a new computing paradigm, with computers that will outperform all current computers. None of its promises have ever been realized, and there is no likelihood that they ever will be. I am glad that I have provoked Scott into defending his position, but his offer is meaningless because he has defined it in a way so that he will never have to pay.

Scott is entitled to his opinion, of course. But we ought to understand that he is doing abstract analyses of hypothetical computers that do not exist in the real world, and are probably contrary to the laws of physics.

Scott argues in IEEE Spectrum:
I study quantum computing at MIT. Recently, on my blog, I offered a $100 000 reward for a demonstration, convincing to me, that scalable quantum computing is impossible in the physical world. The award is entirely at my discretion; ...

Most of the skeptics say that they have no problem with quantum mechanics itself (it is, after all, the best-confirmed physical theory of all time); it's only scalable quantum computers that they object to. To date, though, no one really knows how you can have quantum mechanics without the possibility of quantum fault-tolerance. So as I see it, the burden falls on the skeptics to give an alternative account of what's going on that would predict the impossibility of scalable QC.
I think that is a strange view. Quantum mechanics is the best explanation we have of what is going on. It just does not imply scalable QC, and all attempts at scalable QC have failed.

Monday, February 6, 2012

Quantum computing not like Martian trip

Computer scientist Scott Aaronson argues for quantum computing:
I became interested in quantum computing because of a simple trilemma: either (i) the Extended Church-Turing Thesis is false, (ii) quantum mechanics is false, or (iii) factoring is in classical polynomial time. As I put it in my dissertation, all three possibilities seem like wild, crackpot speculations, but at least one of them is true! The question of which will remain until it’s answered. ...

Yeah, alright. So I ought to amend (ii) from “quantum mechanics is false” to “current low-energy physical theories are wrong.”
He adds:
No, if you accept quantum mechanics, the burden is on you to explain why a computer couldn’t be built that takes advantage of the phenomena of superposition, interference, and entanglement that have been the entire core of quantum mechanics, verified over and over, since 1926.

Believing quantum mechanics but not accepting the possibility of QC is somewhat like believing Newtonian physics but not accepting the possibility of humans traveling to Mars.

I accept quantum mechanics, but not quantum computing. It is just not true that quantum computing follows from the quantum mechanics of 1926. I guess Aaronson tried to show that in his papers cited below and failed. So he claims that somehow the burden is on someone else to show the opposite.

The Mars analogy is ridiculous. The feasibility of a Mars trip is an easy extrapolation from the Moon trip. But there is no demonstrated feasibility of quantum computing.

Scott Aaronson replies:
Sure, it’s easy to understand the impossibility of quantum computing, in exactly the same way it’s easy to understand how the earth can be resting on a giant turtle. The key is not to ask what the turtle’s standing on, and likewise, not to ask what the flaw is in our current understanding of quantum mechanics that makes QC impossible. All sorts of scientific problems can be quickly cleared up this way, once we learn to stop asking annoying followup questions and embrace doofosity!
The problem with quantum computing is that it does not follow from any established scientific theory, there is no observational evidence for it, and it leads to implausible outcomes. There are a lot of such speculative concepts in physics, and we ordinarily reject them. Examples are time travel, Maxwell's demon, tachyons, and wormholes. Quantum computing is like the giant turtle, and the burden on its proponents should be to show some good reason for believing in such a far-fetched concept.

Friday, February 3, 2012

Space is not digital

The current SciAm asks is space digital?
Craig Hogan believes that the world is fuzzy. This is not a metaphor. Hogan, a physicist at the University of Chicago and director of the Fermilab Particle Astrophysics Center near Batavia, Ill., thinks that if we were to peer down at the tiniest subdivisions of space and time, we would find a universe filled with an intrinsic jitter, the busy hum of static. This hum comes not from particles bouncing in and out of being or other kinds of quantum froth that physicists have argued about in the past. Rather Hogan’s noise would come about if space was not, as we have long assumed, smooth and continuous, a glassy backdrop to the dance of fields and particles. Hogan’s noise arises if space is made of chunks. Blocks. Bits. Hogan’s noise would imply that the universe is digital.
The experiment is a variant of the Michelson–Morley experiment that was so crucial for the discovery of special relativity. Another variant is LIGO, which tries to detect gravitational waves from other galaxies.

Philosophers commonly say that it was silly for physicists to repeat the Michelson-Morley experiment so many times in so many different ways in the early 20th century, because Einstein deduced relativity from pure thought. I guess they think of relativity as a belief system that is independent of empirical evidence. They are wrong about the origin of relativity, as I explain in my book.

I do think that the belief in digital space is strange and unwarranted. These new experiments are very unlikely to find any evidence for it. Contrary to some beliefs, quantum mechanics does not require digital space.

Wednesday, February 1, 2012

The impossibility of quantum computers

A blog is debating whether quantum computers are the perpetual motion machines of the 21st century:
Gil Kalai and Aram Harrow are world experts on mathematical frameworks for quantum computation. They hold opposing opinions on whether or not quantum computers are possible. ...

Are quantum computers feasible? Or are their underlying models defeated by some fundamental physical laws? ...

As an aside, let me briefly say why I tend to regard universal quantum computers as unrealistic. An explanation for why universal quantum computers are unrealistic may require some change in physics theory of quantum decoherence. On the other hand, universal quantum computers will be physical devices that are able to simulate arbitrary quantum evolutions, where the word “simulate” is understood in the strong sense that the computer will actually create an identical quantum state to the state created by the evolution it simulates, and the word “arbitrary” is understood in the strong sense that it applies to every quantum evolution we can imagine as long as it obeys the rules of quantum mechanics. As such, quantum computers propose a major change in physical reality.
The perpetual motion analogy is a good one. Quantum computer researchers keep claiming progress, but it is like the progress of free energy researchers. Yes, they may find some slight gain in efficiency, but the research does nothing to show that the goal is attainable.

It is just not true that the skeptics of quantum computing are really skeptics of quantum mechanics. All of quantum mechanics can be true without quantum computers.

Update: Computer scientist Scott Aaronson is betting that quantum computing is possible:
I hereby offer $100,000 for a demonstration, convincing to me, that scalable QC is impossible in the physical world.

... Whether Bigfoot exists is a question about the contingent history of evolution on Earth. By contrast, whether scalable QC is possible is a question about the laws of physics. It’s entirely conceivable that future developments in physics would conflict with scalable QC in the same way relativity conflicts with faster-than-light communication and the Second Law conflicts with perpetuum mobiles. It’s such a development in physics that I’m offering $100k for.
I am not sure about this distinction. The post has four conjectures that are consistent with quantum mechanics and all of the experimental evidence, but make quantum computers impossible. The second law of thermodynamics was established by the repeated failures of people trying to build perpetual motion machines. We do have other arguments for it, but I doubt that they would cause someone to pay off a $100k bet.

Update: Aaronson now argues "that the burden is on the QC skeptics to answer" how quantum computing is impossible, based on his 2003 paper and his PhD thesis. This opinion seems bizarre to me, considering that quantum computing is contrary to every experiment that has ever been done, but I guess that I will have to study his paper. Unfortunately, he has already reneged on his $100k offer.