Thursday, February 14, 2019

An orbit is a perpetual fall, said in 1614

Christopher M. Graney writes:
In 1614 Johann Georg Locher, a student of the Jesuit astronomer Christoph Scheiner, proposed a physical mechanism to explain how the Earth could orbit the sun. An orbit, Locher said, is a perpetual fall. He proposed this despite the fact that he rejected the Copernican system, citing problems with falling bodies and the sizes of stars under that system. In 1651 and again in 1680, Jesuit writers Giovanni Battista Riccioli and Athanasius Kircher, respectively, considered and rejected outright Locher's idea of an orbit as a perpetual fall.
This is interesting because it is widely assumed that medieval geocentrists suffered from too much religion or a lack of imagination or a refusal to consider scientific arguments.

In fact, someone in 1614 had a model of Earth's orbit that was conceptually similar to Newton's: the Earth is in perpetual free fall toward the Sun.
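Locher's idea translates directly into the modern picture. Here is a minimal numerical sketch, using standard values for the Sun's gravitational parameter and Earth's distance and speed: the Earth accelerates toward the Sun at every step, yet after a year of hourly steps it is back where it started.

```python
# Locher's idea in modern form: an orbit is a perpetual fall.
# Standard values for the Sun's gravitational parameter and Earth's
# distance and speed; leapfrog (velocity Verlet) integration.
GM = 1.32712440018e20   # GM of the Sun, m^3/s^2
AU = 1.495978707e11     # astronomical unit, m

x, y = AU, 0.0          # start 1 AU from the Sun
vx, vy = 0.0, 29780.0   # Earth's orbital speed, m/s

def accel(x, y):
    """Gravitational acceleration toward the Sun at (x, y)."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

dt = 3600.0                         # one-hour steps
ax, ay = accel(x, y)
for _ in range(int(365.25 * 24)):   # integrate for one year
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    ax2, ay2 = accel(x, y)
    vx += 0.5 * (ax + ax2) * dt
    vy += 0.5 * (ay + ay2) * dt
    ax, ay = ax2, ay2

# Falling toward the Sun the whole time, the Earth never gets
# closer: after a year it is back near its starting point.
print(abs(x - AU) / AU < 0.01 and abs(y) / AU < 0.01)  # True
```

The fall never ends because the sideways velocity keeps carrying the Earth past the Sun, which is exactly the picture Newton later made precise.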

Like Tycho Brahe, Locher accepted that the other planets revolved around the Sun. He rejected the idea that the Earth moves partly because the Coriolis force had never been observed, among other reasons.

The Coriolis force was not demonstrated until a couple of centuries later, most famously by Foucault's pendulum in 1851.
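It is easy to see why no one in Locher's day could observe the effect. A back-of-envelope sketch (the tower height and latitude are arbitrary illustrative choices): the eastward Coriolis deflection of a body dropped from 100 m is roughly a centimeter and a half, far below 17th-century measurement precision.

```python
import math

# Eastward Coriolis deflection of a dropped body:
#   d = (1/3) * omega * g * t**3 * cos(latitude)
# The tower height and latitude are illustrative choices.
omega = 7.292e-5            # Earth's rotation rate, rad/s
g = 9.81                    # m/s^2
h = 100.0                   # drop height, m
lat = math.radians(45.0)

t = math.sqrt(2 * h / g)                       # fall time, ~4.5 s
d = omega * g * t**3 * math.cos(lat) / 3.0     # deflection, m

print(d)  # about 0.015 m: a centimeter and a half
```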

Occasionally someone says Copernicus or Galileo created modern science, on the grounds that the previous geocentrism was completely unscientific. This is nonsense. In 1600, there were legitimate scientific arguments both for and against geocentrism.

Tuesday, February 12, 2019

The big tech firms can be wrong

You might think that if well-respected technology companies, like IBM, Google, and Microsoft, are solidly pursuing quantum computing, then it must have some commercial viability.

I thought so too, until I remembered the Intel Itanium chip. Just look at this chart. Everyone in the industry was convinced that the chip would take over the whole CPU market. It is still hard to understand how everyone could be so wrong.

Saturday, February 9, 2019

Building Quantum Computers Is Hard

ExtremeTech reports:
You may have read that quantum computers one day could break most current cryptography systems. They will be able to do that because there are some very clever algorithms designed to run on quantum computers that can solve a hard math problem, which in turn can be used to factor very large numbers. One of the most famous is Shor’s Factoring Algorithm. The difficulty of factoring large numbers is essential to the security of all public-private key systems — which are the most commonly used today. Current quantum computers don’t have nearly enough qubits to attempt the task, but various experts predict they will within the next 3-8 years. That leads to some potentially dangerous situations, such as if only governments and the super-rich had access to the ultra-secure encryption provided by quantum computers.

Why Building Quantum Computers Is Hard

There are plenty of reasons quantum computers are taking a long time to develop. For starters, you need to find a way to isolate and control a physical object that implements a qubit. That also requires cooling it down to essentially zero (as in .015 degrees Kelvin, in the case of IBM‘s Quantum One). Even at such a low temperature, qubits are only stable (retaining coherence) for a very short time. That greatly limits the flexibility of programmers in how many operations they can perform before needing to read out a result.

Not only do programs need to be constrained, but they need to be run many times, as current qubit implementations have a high error rate. Additionally, entanglement isn’t easy to implement in hardware either. In many designs, only some of the qubits are entangled, so the compiler needs to be smart enough to swap bits around as needed to help simulate a system where all the bits can potentially be entangled.
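
The two constraints in the quoted passage, short coherence times and per-gate errors, can be put into rough numbers. This is a sketch with illustrative figures; the coherence time, gate time, and error rate below are assumptions, not any vendor's published specs.

```python
# Rough budget for a near-term quantum processor.
# All numbers here are illustrative assumptions, not vendor specs.
T2 = 100e-6          # qubit coherence time: ~100 microseconds
gate_time = 100e-9   # time per gate: ~100 nanoseconds
p_err = 0.005        # error probability per gate: 0.5%

# Coherence caps how many gates fit into a single run.
max_depth = round(T2 / gate_time)

# Even within that cap, most runs are corrupted, so programs
# must be repeated many times: (1 - p)**d is the clean-run chance.
depth = 200
clean = (1 - p_err) ** depth

print(max_depth)        # ~1000 gates before coherence is lost
print(round(clean, 3))  # ~0.37: only about a third of runs are error-free
```
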
So quantum computers are extraordinarily difficult to build, yet supposedly they will break most of our computer security systems, and experts predict it within the next 3-8 years.
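
The quoted factoring threat rests on a classical reduction that Shor's algorithm exploits: factoring N reduces to finding the multiplicative order of some a mod N. A quantum computer finds the order quickly; the sketch below finds it by brute force, which only works for toy numbers like 15.

```python
from math import gcd

# The classical half of Shor's algorithm: factoring N reduces to
# finding the multiplicative order r of a mod N. A quantum computer
# finds r fast; here we brute-force it (toy numbers only).
def order(a, N):
    """Smallest r > 0 with a**r = 1 (mod N)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    assert gcd(a, N) == 1
    r = order(a, N)
    if r % 2:
        return None                     # need an even order
    y = pow(a, r // 2, N)
    p = gcd(y - 1, N)
    if 1 < p < N:
        return p, N // p
    return None

print(shor_classical(15, 7))  # (3, 5)
```

The quantum speedup lives entirely in the order-finding step; everything else is classical number theory that has been known for decades.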

Thursday, February 7, 2019

Gell-Mann agrees with me about Bell

I have occasionally argued that Bell's Theorem has been wildly misinterpreted, and that it doesn't prove nonlocality or anything interesting like that.

Readers have supplied references saying that I am wrong.

Now I find a short Murray Gell-Mann interview video agreeing with me.

The Bell test experiments do show that quantum mechanics differs from certain classical theories, but not by spookiness, entanglement, or nonlocality. You could say that the particles are entangled, but classical theories show similar effects.

He says:
It is a matter of giving a dog a bad name and hanging him. (laughs) When the quantum mechanical predictions for this experiment were fully verified, I would have thought everybody would say "great!" and go home. ...

When two variables at the same time don't commute, any measurement of both of them would have to be carried out with one measurement on one branch of history, and the other measurement on the other branch of history. That's all there is to it. ... People are still mesmerized by this confusing language of nonlocality.
That's right. The Bell paradoxes are based on comparing one branch of history to another, as if there were counterfactual definiteness. Quantum mechanics forbids this, if you are comparing noncommuting observables.
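Gell-Mann's point about noncommuting variables is easy to check in the smallest case. A minimal sketch with the Pauli matrices sigma_x and sigma_z, which represent spin measurements along two perpendicular axes:

```python
# The simplest pair of noncommuting observables: Pauli matrices
# sigma_x and sigma_z, i.e. spin along two perpendicular axes.
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

XZ = matmul(X, Z)
ZX = matmul(Z, X)

# XZ = -ZX, so the commutator XZ - ZX is nonzero: no single
# branch of history assigns sharp values to both at once.
print(XZ == ZX)  # False
```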

The Bell theorem and experiments did not really tell us anything that had not already been conventional wisdom for decades.

The bizarre thing about Bell's Theorem is that some physicists say that it is the most profound discovery in centuries, while others just shrug it off as a triviality. I do not know of any difference of opinion this wide in the whole history of science. After years of reading papers about it, I have moved to the latter camp. The theorem encapsulates why some people have conceptual troubles with quantum mechanics, but if they accept the conventional wisdom of 1930, then it says nothing interesting.
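What the experiments actually test can be stated concretely. A deterministic local hidden-variable model assigns definite ±1 outcomes to all four CHSH measurement settings; enumerating every such assignment shows the CHSH quantity S never exceeds 2, while quantum mechanics predicts up to 2√2 ≈ 2.83.

```python
import itertools
import math

# A local hidden-variable model assigns definite outcomes (+1/-1)
# to both settings on each side. Enumerate all 16 deterministic
# strategies and find the largest CHSH value S any of them reaches.
best = 0
for a0, a1, b0, b1 in itertools.product([-1, 1], repeat=4):
    S = a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    best = max(best, abs(S))

print(best)                        # 2: the classical bound
print(round(2 * math.sqrt(2), 3))  # 2.828: the quantum prediction
```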

Tuesday, February 5, 2019

Quantum repeaters are worthless

"University of Toronto Engineering professor Hoi-Kwong Lo and his collaborators have developed a prototype for a key element for all-photonic quantum repeaters, a critical step in long-distance quantum communication," reports Phys.Org. This proof-of-principle device could serve as the backbone of a future quantum internet. From the report:

In light of [the security issues with today's internet], researchers have proposed other ways of transmitting data that would leverage key features of quantum physics to provide virtually unbreakable encryption. One of the most promising technologies involves a technique known as quantum key distribution (QKD). QKD exploits the fact that the simple act of sensing or measuring the state of a quantum system disturbs that system. Because of this, any third-party eavesdropping would leave behind a clearly detectable trace, and the communication can be aborted before any sensitive information is lost. Until now, this type of quantum security has been demonstrated in small-scale systems.
So if this technology becomes commercially available, you can set up a network that would have to be shut down if anyone tries to spy on it.
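The detection mechanism in the quoted passage can be sketched in a few lines. This is a toy model of BB84 with an intercept-resend eavesdropper, not real QKD code: Eve measures each photon in a random basis and resends it, which disturbs about a quarter of the bits Alice and Bob later compare.

```python
import random

# Toy intercept-resend attack on BB84 (a sketch, not real QKD code).
# Eve measures each photon in a random basis and resends it; this
# disturbs about 25% of the bits Alice and Bob later compare.
random.seed(0)
n = 100_000
matches = errors = 0
for _ in range(n):
    bit = random.randint(0, 1)            # Alice's raw key bit
    alice_basis = random.randint(0, 1)    # rectilinear or diagonal
    eve_basis = random.randint(0, 1)
    # Wrong basis leaves Eve with a random bit, and her resent
    # photon no longer matches Alice's preparation.
    eve_bit = bit if eve_basis == alice_basis else random.randint(0, 1)
    bob_basis = random.randint(0, 1)
    bob_bit = eve_bit if bob_basis == eve_basis else random.randint(0, 1)
    if bob_basis == alice_basis:          # only matching bases are kept
        matches += 1
        errors += bob_bit != bit
print(errors / matches)  # close to 0.25, versus 0 with no eavesdropper
```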

Or you can use conventional cryptography that has been in common use for 30 years, and continue to communicate securely regardless of how many people might be trying to spy on you.
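For contrast, here is the flavor of conventional cryptography in question, using only the Python standard library. Confidentiality would need a cipher from outside the stdlib, but the authentication half fits in a few lines with HMAC; the key and message below are placeholders.

```python
import hashlib
import hmac

# Message authentication with a shared key (key and message are
# placeholders). Eavesdroppers can read the tag but cannot forge
# a valid one without the key, no matter how long they listen.
key = b"shared-secret-key"
msg = b"meet at noon"

tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# Receiver recomputes the tag; compare_digest avoids timing leaks.
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())
print(ok)  # True
```

Unlike QKD, there is nothing to abort: the security does not depend on detecting eavesdroppers, only on the secrecy of the key.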

Monday, February 4, 2019

There is no epochal revolution coming

Scott Aaronson acknowledges that the CERN LHC failed to find the SUSY particles it was supposed to find, and then makes a comment that appears targeted at me:
The situation reminds me a little of the quantum computing skeptics who say: scalable QC can never work, in practice and probably even in principle; the mainstream physics community only thinks it can work because of groupthink and hype; therefore, we shouldn’t waste more funds trying to make it work. With the sole, very interesting exception of Gil Kalai, none of the skeptics ever seem to draw what strikes me as an equally logical conclusion: whoa, let’s go full speed ahead with trying to build a scalable QC, because there’s an epochal revolution in physics to be had here — once the experimenters finally see that I was right and the mainstream was wrong, and they start to unravel the reasons why!
I don't draw that conclusion because I don't believe in that "epochal revolution" either.

When the LHC failed to find supersymmetry particles, did we have an epochal revolution telling us why naturalness and SUSY and unified field theories failed?

No. For the most part, physicists went on believing in string theories and all their other nutty ideas, and just claimed that maybe bigger and better accelerators would vindicate them somehow. That will go on until the money for big projects runs out.

The cost of accelerators is increasing exponentially, and the money will run out. They would need an accelerator the size of the solar system to get what they really want.

Quantum computers have already had 20 years of hype, billions of research dollars, and some of the world's smartest people working on them. So far, no quantum supremacy. IBM and Google have been promising it for over a year, but they have not even explained how their plans went wrong. This could go on indefinitely, without ever being taken as proof of the folly of their thinking.

Sunday, February 3, 2019

Inference mag blamed in leftist attack

I defended Sheldon Glashow's negative review in Inference of a popular quantum mechanics book by Adam Becker.

Most, if not all, popular accounts of quantum mechanics are filled with mystical nonsense about Schroedinger cats, entanglement, non-locality, etc. Instead of just explaining the theory, they do everything to convince you that the theory is incomprehensible. Becker's book was in that category, and Glashow's sensible review throws cold water on the nonsense.

Now Becker has struck back, posting an article in Undark that attempts to trash the online journal Inference as unscientific and aligned with evil Trump supporters. Inference responds, and so does Peter Woit.

I guess I'll read some more of those Inference articles to see if the journal really has a right-wing bias. If so, that would be refreshing, as Scientific American and all the other science journals have a left-wing bias. But I doubt it. In Becker's dispute, Glashow simply defends orthodox quantum mechanics while Becker's book is grossly misleading.

Becker says that Inference offered him a "fine fee" to write a response to the negative book review, and he declined. Most authors would be eager to defend themselves against a negative review.

Among other things, Becker attacks Inference for this essay arguing that Copernican astronomy in the 16th century was somewhat more accurate than Ptolemaic astronomy. It appears to be a very good analysis. The gripe is that the essay's author, Tipler, also has some unusual theological views, which do not appear to be relevant to the essay.

Philosophers of science are always talking about the grounds for accepting or rejecting Copernicanism. And yet they hardly ever address the quantitative accuracy.

It has become typical for left-wingers to (1) cling to weirdo unscientific beliefs; (2) get very upset when an organization publishes views contrary to their ideology; and (3) launch guilt-by-association and character assassinations against those who permit contrary views.