
Friday, April 25, 2014

Aaronson back to quantum over-hype

Scott Aaronson has a new article on randomness, and promises:
Determining whether numbers truly can have no pattern has implications for quantum mechanics, not to mention the stock market and data security. ...

In my next column, I’ll discuss quantum mechanics and a famous theorem called the Bell inequality, and how they let us generate random numbers under the sole assumption that it’s impossible to send signals faster than light. Notably, the latter approach won’t suffer from the problem of uncomputability — so unlike Kolmogorov complexity, it will actually lead to practical methods for generating guaranteed random numbers. ...

Part II, to appear in the next issue, will be all about quantum entanglement and Bell’s Theorem, and their very recent use in striking protocols for generating so-called “Einstein-certified random numbers”—something of much more immediate practical interest.
If you cannot wait for his sequel, you can read the technical details in "A quantum random-number generator for encryption, security" and "Infinite Randomness Expansion and Amplification with a Constant Number of Devices". The latter starts:
Bell’s Theorem states that the outcomes of local measurements on spatially separated systems cannot be predetermined, due to the phenomenon of quantum entanglement [Bel64]. This is one of the most important “no-go” results in physics because it rules out the possibility of a local hidden variable theory that reproduces the predictions of quantum mechanics. However, Bell’s Theorem has also found application in quantum information as a positive result, in that it gives a way to certify the generation of genuine randomness: if measurement outcomes of separated systems exhibit non-local correlations (e.g. correlations that violate so-called Bell Inequalities), then the outcomes cannot be deterministic.

While Bell’s Theorem does give a method to certify randomness, there is a caveat. The measurement settings used on the separated systems have to be chosen at random! Nevertheless, it is possible to choose the measurement settings in a randomness-efficient manner such that the measurement outcomes certifiably contain more randomness (as measured by, say, min-entropy) than the amount of randomness used as input. This is the idea behind randomness expansion protocols, ...
As usual, Aaronson is overselling quantum mysticism. Randomness is a metaphysical concept, and quantum mechanics cannot say whether anything is truly random. I thought he understood this, as I have previously cited him for explaining that quantum mechanics can be interpreted either deterministically or non-deterministically.

Bell's theorem says that there are observable differences between quantum mechanics and local hidden variable theories. It was very exciting to those who hoped to disprove quantum mechanics, but the experiments all confirmed what everyone had expected for decades, so it was no big deal.

The theory looks at two entangled particles emitted from the same process. If you measure both the same way, you get equal and opposite results. If you measure the second one differently, then quantum mechanics gives formulas for the probabilities, conditioned on the measurement of the first.
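
The standard textbook case is the spin singlet: when the two analyzer directions differ by an angle theta, quantum mechanics predicts opposite outcomes with probability cos(theta/2)^2. A minimal sketch of those formulas (illustrative code, not from the article):

    import math

    def p_opposite(theta):
        """Singlet state: probability that the two spin measurements disagree
        when the analyzer directions differ by angle theta (radians)."""
        return math.cos(theta / 2) ** 2

    def correlation(theta):
        """Expectation of the product of the two +/-1 outcomes: -cos(theta)."""
        return -math.cos(theta)

    print(p_opposite(0.0))          # 1.0: same setting gives equal and opposite results
    print(p_opposite(math.pi / 2))  # 0.5: a different setting looks like a coin toss
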

Thus the measurement of the second particle seems determined if it is measured like the first, and random otherwise. So you can randomly decide how to do the measurement, and then get a random result.

Impressed by that result? No, of course not. It is trivial. Random just means that it is not easily predicted by a correlation with something else. All this gives is a second particle measurement that seems random relative to the first. But it is not random in any absolute sense, and it might be predicted with knowledge of how the pair was produced.

If you want a random bit, just toss a coin. That will be random in the sense of being uncorrelated with whatever else you are doing. What this research does is toss two correlated coins, do some fancy manipulations so that they are effectively not correlated, and then conclude that one coin is random because it cannot be predicted from the other.
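
The classical analogue of those manipulations is a randomness extractor. Von Neumann's old debiasing trick is the simplest example: it turns a biased but independent coin into unbiased bits. A minimal sketch, offered as an analogy rather than as the protocol in the paper:

    import random

    def von_neumann_extract(flips):
        """Von Neumann debiasing: take flips in pairs, discard equal pairs,
        and output the first flip of each unequal pair. If the flips are
        independent and identically biased, the output bits are unbiased."""
        out = []
        for a, b in zip(flips[::2], flips[1::2]):
            if a != b:
                out.append(a)
        return out

    biased = [1 if random.random() < 0.8 else 0 for _ in range(1000)]
    print(von_neumann_extract(biased)[:10])  # roughly unbiased bits
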

The second coin gives you a reference point for the analysis, but it is not adding to the randomness. The whole idea that the second coin is a source of randomness is fallacious.

Aaronson claims that he can get certified truly random bits from the sole assumption that there is no faster-than-light communication from one coin to the other. But that is nonsense. That assumption can only get you independence from the first coin, and no more.

All of this Bell hocus-pocus has no practical application to cryptography, as much easier methods are in common use, such as pseudorandom generators built on SHA-256.
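
For illustration, here is a minimal sketch of that kind of hash-based generator (a toy for exposition, not a vetted CSPRNG; real deployments use standardized designs such as NIST's Hash_DRBG, seeded from the operating system):

    import hashlib

    def sha256_stream(seed: bytes, n_blocks: int):
        """Toy counter-mode generator: hash seed || counter with SHA-256.
        Anyone without the seed cannot predict the output blocks."""
        for counter in range(n_blocks):
            yield hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()

    for block in sha256_stream(b"some high-entropy seed", 2):
        print(block.hex())
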

I thought that Aaronson was cleaning up his act.

1 comment:

  1. The common variable in 'climate change research' and 'quantum computing' is $.
    This is what happens to science when the money spigot flows only for a particular conclusion or outcome: you finally get to see which characteristic truly guides a scientist or researcher: 1. the need for scientific objectivity and honesty, or 2. the want of more funding. If the decision between the two causes any hesitation, the party involved is highly suspect of fraud.
