Pages

Sunday, June 30, 2019

Xanadu quantum computing startup gets $32M

A reader sends this blog post:
In a previous blog post, I commented about Xanadu AI:

Xanadu.ai, a quantum computing Theranos, will never make a profit

The latest news about the Xanadu Ponzi scheme is that they just got an extra CAN$32 million (this raises their total funding so far to CAN$41M, according to TechCrunch). This company has practically zero chance of succeeding. Their quantum computing technology, using squeezed light, is **far inferior** to the ones (ion traps, SQUIDs, optics, anyons, quantum dots) being pursued by a crowded field of well-funded startups (Rigetti, IonQ, PsiQ, and many others) and giant monopolies (IBM, Google, Microsoft, Intel, Alibaba, Huawei, …).

The latest 32 million was reported in the Globe and Mail:

Toronto startup Xanadu raises $32-million to help build ‘world’s most powerful computer’

Excerpts:

But Xanadu needs to take “three giant steps,” before it can fully commercialize its technology, said Massachusetts Institute of Technology mechanical engineering and physics professor Seth Lloyd, a leading expert in quantum computing who advises the startup:

“They need to improve the squeezing by a significant amount and show they can get many pulses of this squeezed light into their device, and then control these pulses … [then] show you can actually do something that’s useful to people with the device. Given what they’re trying to do, they’re on schedule. Any one of those things could fail, which is the nature of science and life and being a startup.”
I don't know anything about this company, but they all seem like scams to me. As long as there is investor money for these projects, they will continue, no matter how many giant steps are needed for commercialization.

Thursday, June 27, 2019

The quantum entanglement paradox

The Wikipedia article on quantum entanglement explains the "spooky" paradox:
The paradox is that a measurement made on either of the particles apparently collapses the state of the entire entangled system — and does so instantaneously, before any information about the measurement result could have been communicated to the other particle (assuming that information cannot travel faster than light) and hence assured the "proper" outcome of the measurement of the other part of the entangled pair. In the Copenhagen interpretation, the result of a spin measurement on one of the particles is a collapse into a state ...

(In fact similar paradoxes can arise even without entanglement: the position of a single particle is spread out over space, and two widely separated detectors attempting to detect the particle in two different places must instantaneously attain appropriate correlation, so that they do not both detect the particle.)
Similar paradoxes arise in classical mechanics. If you are trying to predict whether an asteroid will strike the Earth using Newtonian mechanics, then you will represent the asteroid's state as an ever-expanding ellipsoid of probable locations. If you later observe the asteroid in one part of the ellipsoid, then you immediately know that it is not in another part, and you instantaneously collapse your ellipsoid.
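A toy calculation makes the point. Here is a minimal sketch, assuming a one-dimensional grid of possible asteroid positions (the grid and numbers are illustrative, nothing more):

```python
import numpy as np

# Classical analogue of "collapse": Bayesian update of an asteroid's
# predicted position, spread uniformly over 10 regions of the ellipsoid.
prior = np.full(10, 0.1)

# A telescope surveys regions 5-9 and fails to find the asteroid there.
likelihood = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=float)
posterior = prior * likelihood
posterior /= posterior.sum()

print(posterior)  # [0.2 0.2 0.2 0.2 0.2 0.  0.  0.  0.  0. ]
# The probability in regions 5-9 dropped to zero "instantaneously" --
# an update of our knowledge, not a physical signal sent anywhere.
```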

If the quantum paradox disturbs you as being spooky or nonlocal, then the asteroid paradox should disturb you the same way.

To be fair, the quantum situation is more subtle because (1) measuring a particle's position or spin necessarily disturbs it; and (2) when two particles are entangled we only know how to represent the state of the combination.

These subtleties may or may not disturb you, but they are really separate issues. The spooky action-at-a-distance paradox is fully present in the classical asteroid problem.

Books and articles on quantum mechanics seem to try to confuse more than enlighten. So to describe entanglement, they will throw in complications like the fact that an electron is not just a particle, but has wave properties.

An example of unnecessary complications is the Delayed-choice quantum eraser experiment. But this is just to confuse you, as you can see from this recent paper:
The "Delayed Choice Quantum Eraser" Neither Erases Nor Delays

It is demonstrated that ‘quantum eraser’ (QE) experiments do not erase any information. Nor do they demonstrate retrocausation or ‘temporal nonlocality’ in their ‘delayed choice’ form, beyond standard EPR correlations. It is shown that the erroneous erasure claims arise from assuming that the improper mixed state of the signal photon physically prefers either the ‘which way’ or ‘both ways’ basis, when no such preference is warranted. The latter point is illustrated through comparison of the QE spatial state space with the spin-1/2 space of particles in the EPR-spin experiment.
In other words, nothing to see here.

Quantum mechanics is still funny because electrons and photons have both wave and particle properties. You can think of them as particles, as long as you respect the Heisenberg uncertainty principle, and do not try to assign simultaneous values to noncommuting observables. And the wave function of an electron is not really the same as the electron itself.

But many of the quantum paradoxes trick you by combining different things to obscure the quantum principle involved.

By the way, it appears that the public wants NASA to focus more on those asteroid ellipsoids. Nobody really wants to go to Mars. Inhabiting Mars is just a science fiction fantasy.

Tuesday, June 25, 2019

PBS TV hypes quantum technologies

PBS TV had a news segment on the billions of dollars being spent on the emerging quantum technologies of quantum information and computing, and how we are falling behind the Chinese. We need to spend many billions more on a new generation of quantum engineers, or we will fall ten years behind. Already the Chinese have sent an ambiguous photon to a satellite!

There are those who argue that quantum-related technologies are already 30% of our economy, but that is not what PBS is talking about. Computer chips, disc drives, lasers, displays, etc. are all developed with quantum mechanics, so yes, quantum mechanics is essential to our economy. But PBS means the quantum spookiness of electrons being in two places at the same time to achieve a parallel computing breakthrough, or some such nonsense.

Sunday, June 23, 2019

Comparing Bohm to a medieval astrologer

Some scientists like to think that they are fighting the good fight of finding truth against modern inquisitors. It is usually nonsense. I don't know of any examples of good science being suppressed.

A Nautilus essay:
The Spirit of the Inquisition Lives in Science
What a 16th-century scientist can tell us about the fate of a physicist like David Bohm. ...

I’ve been schooled in quantum physics and trained to think rationally, dissecting facts and ideas dispassionately. And here I am constantly carrying on imaginary conversations with a 16th-century astrologer. ...

Those communist associations, coupled with the national security implications of his Ph.D. work, made him a target for Senator Joe McCarthy’s crusade against un-American activities.

Bohm refused to answer questions, and refused to name anyone that the McCarthyists should investigate. He was arrested. By the time he was acquitted, he had been suspended from Princeton. In 1951, unemployable in the United States, Bohm took a job in Brazil. ...

Bohm’s idea of an invisible, undetectable pilot wave was roundly criticized, but a man who had survived the McCarthy witch hunts was not easily put off. Having overcome the most heinous character assassination of the era, he could take a little heat. ...

But there are two problems. The first is that, in order to get the predictions right about the interference effect and the ultimate distribution of the photons at the detector, you have to work backward from the final result.

The second problem is that Bohm’s pilot wave is odd—in a way that physicists call “nonlocal.” This means that the properties and future state of our photon are not determined solely by the conditions and actions in its immediate vicinity. ...

The de Broglie-Bohm interpretation of quantum physics, as it is now known, is not popular. Only one venerated physicist has ever really championed it: John Bell, the Irishman who came up with the first definitive test for the existence of entanglement. ...

In the end, we can be reasonably confident that none of our current interpretations of quantum theory are right.
This is ridiculous. Bohm was a Communist who refused to testify about his Communist activities. That is not the behavior of a truth-seeker, or of a loyal American.

His physics work was junk. It did inspire Bell, and Bell inspired a generation of crackpots, but his interpretation of quantum mechanics is very much inferior to Copenhagen.

Say you want to understand light going thru a double-slit and creating an interference pattern. If you think of light as a wave, then all waves cause interference patterns, and it is obvious. Or you can think of light as photons with wave-like properties. Some like to think of light as photons, but funny particles that can be in two places at once and go thru both slits.

Bohm says the photons really are particles, and a photon only goes thru one slit. That is supposed to make his interpretation intuitive. But the photon is guided by some distant ghostly pilot wave that tells it what to do. You could trap the photon in a box, but you cannot predict the photon's behavior based on what is in the box, because it is controlled by a pilot wave that is not even in the box.

Why would anyone prefer that explanation?

It is somewhat interesting to see what can be done with a goofy interpretation of quantum mechanics, but that's all. As far as I know, Bohm's interpretation only works in contrived test cases, and it is not useful for anything.

After writing this, I see that Lubos Motl posted a rant against the same article:
Meanwhile, Brooks is fighting for "progress". Which means to ban the uncertainty principle and return us to classical physics where the position and momentum existed simultaneously and without any measurement. And his progress involves the talking to the ultimate authorities – the ghosts of astrologers. ...

And it's completely wrong to arbitrarily identify 20th century scientists such as Niels Bohr with the Inquisition etc. Bohr has had clearly nothing to do whatsoever with the Inquisition. Also, it's absolutely wrong to defend scientific theories according to the subjective likability of their political views. Whether David Bohm was a commie – while Pascual Jordan was a staunch believer in the ideas of NSDAP – is completely irrelevant for the evaluation of the validity of their scientific propositions. I would have trouble with the political views of both of these men but their physics must clearly be considered as an entity that is independent from the men's politics.
I don't know whether there is a relation between being a Commie and pushing a spooky action-at-a-distance theory, or believing that Bohr led an inquisition against Einstein.

Wanting to give a precise position for the electron at all times is an attempt to ban the uncertainty principle, and deny the basic wave theory of matter that was established a century ago.

Friday, June 21, 2019

Quantum supremacy will just be a random number generator

Now that Google is once again bragging that it is about to unveil a Nobel-Prize-worthy quantum computer, guess what its first application will be? Completely worthless, of course.

Quanta mag reports:
But now that Google’s quantum processor is rumored to be close to reaching this goal, imminent quantum supremacy may turn out to have an important application after all: generating pure randomness.

Randomness is crucial for almost everything ...

Genuine, verifiable randomness ... is extremely hard to come by.

That could change once quantum computers demonstrate their superiority. Those first tasks, initially intended to simply show off the technology’s prowess, could also produce true, certified randomness. “We are really excited about it,” said John Martinis, a physicist at the University of California, Santa Barbara, who heads Google’s quantum computing efforts. “We are hoping that this is the first application of a quantum computer.”
This is just crazy. There are 50 cent chips that generate quantum randomness. Why would anyone use a $100M quantum computer?
Measuring the qubits is akin to reaching blindfolded into the box and randomly sampling one string from the distribution.

How does this get us to random numbers? Crucially, the 50-bit string sampled by the quantum computer will have a lot of entropy, a measure of disorder or unpredictability, and hence randomness. “This might actually be kind of a big deal,” said Scott Aaronson, a computer scientist at the University of Texas, Austin, who came up with the new protocol. “Not because it’s the most important application of quantum computers — I think it’s far from that — rather, because it looks like probably the first application of quantum computers that will be technologically feasible to implement.”

Aaronson’s protocol to generate randomness is fairly straightforward. A classical computer first gathers a few bits of randomness from some trusted source and uses this “seed randomness” to generate the description of a quantum circuit. The random bits determine the types of quantum gates ...

Aaronson, for now, is waiting for Google’s system. “Whether the thing they are going to roll out is going to be actually good enough to achieve quantum supremacy is a big question,” he said.

If it is, then verifiable quantum randomness from a single quantum device is around the corner. “We think it’s useful and a potential market, and that’s something we want to think about offering to people,” Martinis said.
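The first step of that protocol is nothing exotic. Here is a hypothetical sketch; the gate set, qubit count, and circuit depth are my illustrative assumptions, not Google's actual design:

```python
import random

# "Seed randomness" expands into a description of a random quantum circuit.
rng = random.Random(0xC0FFEE)  # stands in for a few trusted seed bits

GATES = ["H", "T", "X", "CZ"]  # illustrative gate set
N_QUBITS, DEPTH = 50, 20       # illustrative sizes

circuit = [(rng.choice(GATES), rng.randrange(N_QUBITS))
           for _ in range(DEPTH)]
print(circuit[:4])  # e.g. [('T', 33), ('H', 7), ...]
# The hard part -- running this circuit and certifying the entropy of
# the output samples -- is what requires the quantum computer.
```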
I am beginning to think that these guys are trolling us.

If you want some random numbers, just close your eyes and type some junk into your keyboard. Then get the code for the SHA-1 hash function, and apply it repeatedly. Free implementations have been available for 25 years.

SHA-1 has not been broken, but it has some theoretical weaknesses that might be an issue at some future date. If that bothers you, then you can switch to SHA-512. That will have no problem for the foreseeable future.
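Here is a minimal sketch of that recipe using Python's standard library (SHA-512, per the paragraph above; this illustrates the suggestion, it is not cryptographic advice):

```python
import hashlib

# Seed: close your eyes and type some junk.
seed = input("Type some junk: ").encode()

# Stretch the seed by applying the hash function repeatedly.
state = hashlib.sha512(seed).digest()
for _ in range(5):
    state = hashlib.sha512(state).digest()
    print(state.hex()[:32])  # 128 bits of output per line
```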

With all the hype for quantum computing, they admit that (1) quantum supremacy has not been achieved, and (2) even when it is, its most useful application might be random numbers.

Update: Peter Shor weighs in with his views:
Horgan: The National Academy of Sciences reports that “it is still too early to be able to predict the time horizon for a practical quantum computer,” and IEEE Spectrum claims we won’t see “useful” quantum computers “in the foreseeable future.” Are these critiques fair? Will quantum computers ever live up to their hype?

Shor: The NAS report was prepared by a committee of experts who spent a great deal of time thinking about possible different ways of achieving quantum computation, the various roadblocks for these methods, and about the difficulties of making a working quantum computer, and I think they give a fair appraisal of the difficulties of the task. 

The IEEE Spectrum article gives a much more pessimistic assessment. It was written by one physicist who has had a negative view of quantum computers since the very beginning of the field. Briefly, he believes that making quantum computers fault-tolerant is a much more difficult task than it is generally believed to be. (Let me note that it is generally believed to be extremely difficult, but that it is theoretically possible if you have accurate enough quantum gates and a large but not impossible amount of overhead.) I don't believe that his arguments are justified.

Horgan: Scott Aaronson said on this blog that “ideas from quantum information and computation will be helpful and possibly even essential for continued progress on the conceptual puzzles of quantum gravity.” Do you agree?

Shor: I want to partially agree.
He also has other opinions about theoretical physics that I may address separately.

Wednesday, June 19, 2019

Doubly exponential growth in quantum computing

Quanta mag reports:
That rapid improvement has led to what’s being called “Neven’s law,” a new kind of rule to describe how quickly quantum computers are gaining on classical ones. The rule began as an in-house observation before Neven mentioned it in May at the Google Quantum Spring Symposium. There, he said that quantum computers are gaining computational power relative to classical ones at a “doubly exponential” rate — a staggeringly fast clip. ...

Doubly exponential growth is so singular that it’s hard to find examples of it in the real world. The rate of progress in quantum computing may be the first.

The doubly exponential rate at which, according to Neven, quantum computers are gaining on classical ones is a result of two exponential factors combined with each other. The first is that quantum computers have an intrinsic exponential advantage over classical ones ... The second exponential factor comes from the rapid improvement of quantum processors.
Something this remarkable would surely be documented by a peer-reviewed scientific paper, right? I see no mention of one.
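For scale, "doubly exponential" means growth like 2^(2^t). A toy illustration (not Neven's actual data):

```python
# Doubly exponential growth: each step squares the previous value.
for t in range(1, 7):
    print(t, 2**(2**t))
# 1 4
# 2 16
# 3 256
# 4 65536
# 5 4294967296
# 6 18446744073709551616
```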

Our usual expert weighs in:
“I think the undeniable reality of this progress puts the ball firmly in the court of those who believe scalable quantum computing can’t work,” wrote Scott Aaronson, a computer scientist at the University of Texas, Austin, in an email. “They’re the ones who need to articulate where and why the progress will stop.”
The answer comes next: the supposed progress may be all an illusion unless quantum supremacy is reached.
Google has been particularly vocal about its pursuit of this milestone, known as “quantum supremacy.”

So far, quantum supremacy has proved elusive — sometimes seemingly around the corner, but never yet at hand. But if Neven’s law holds, it can’t be far away. Neven wouldn’t say exactly when he anticipates the Google team will achieve quantum supremacy, but he allowed that it could happen soon.

“We often say we think we will achieve it in 2019,” Neven said. “The writing is on the wall.”
Neven works for Google. They often said 2016, then 2017, then 2018, and now 2019.

For how many years can we have double exponential progress, and still no convincing demo that scalable quantum computing can work? Not many. We should have a verdict soon.

Sunday, June 16, 2019

Bell only proved a null result

Peter Woit has closed comments on Bell locality without responding to a lot of sharp criticism, so I am not sure where he stands.

Marty Tysanner wrote:
While I may understand their rationale, it still dismays me that so many people have thrown locality “under the bus” (so to speak). Locality isn’t like the 19th century ether that was invented to explain certain physical phenomena; it’s at the core of relativity, electromagnetism, classical mechanics, and even QM evolution up to the time of measurement. I can’t think of anywhere else in all of fundamental physics – outside QM entanglement – where locality is in question. Moreover, there are obvious conceptual issues with non-local influences/communication between entangled particles at space-like separation: How do the particles “find” each other within the entire universe, so they can “communicate” (other than tautologically or some other version of “it just happens; don’t ask how”); and why don’t we see nonlocality in other contexts if it’s a truly fundamental aspect of nature? ...

Given the centrality of locality in physics, I think we should “fight to the death” to preserve it in a fundamental theory.
That part of Tysanner's comment is correct. Locality is at the core of nearly all of physics, including quantum field theory (QFT). If it were false, we would expect to see some convincing evidence, and then do some radical rethinking of our theories.

We do see the Bell test experiments, but they only negate the combination of locality and counterfactual definiteness. You can preserve locality by rejecting counterfactual definiteness. That is what mainstream physicists have done for 90 years.
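For concreteness, here is the arithmetic behind those experiments, a minimal sketch assuming the textbook singlet correlation E(a,b) = -cos(a-b). Any local, counterfactually definite model obeys the CHSH bound |S| <= 2, while quantum mechanics reaches 2*sqrt(2):

```python
import numpy as np

# Singlet-state correlation between spin measurements at angles a and b.
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH angles (radians) that maximize the quantum value.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2.828... = 2*sqrt(2), violating the classical bound of 2
```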

Tim Maudlin disagrees with Woit and writes:
You do experiments in two (or three: GHZ) labs. You get results of those experiments. Those results display correlations that no local theory (in the precise sense defined by Bell) can predict. Ergo, no local theory can be the correct theory of the actual universe. I.e. actual physics is not local (in Bell’s sense).
Bell defined a local theory as a classical theory of local hidden variables. In later papers, he called them beables, and gave some philosophical arguments for believing in them, but the definition of locality is the same.

Quantum mechanics uses non-commuting observables. By the uncertainty principle, they cannot have simultaneous values. That remains true if you call them beables instead of observables. If you create a model in which they do have simultaneous, but possibly unknown, values, then you get predictions that contradict experiment. That is what Bell and his followers have shown.
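A two-line check of that non-commutation, using the standard Pauli spin matrices:

```python
import numpy as np

# Spin observables along x and z do not commute,
# so they cannot have simultaneous definite values.
sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

print(sx @ sz - sz @ sx)  # nonzero commutator: [[0, -2], [2, 0]]
```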

The only conclusion is the null result: No reason to reject quantum mechanics. Any other conclusion is an error.

Update: Lubos Motl completely agrees with me about Bell. Bell proved a nice little theorem in 1964 that supports locally causal quantum mechanics. Then he wrote some later papers that did nothing but confuse people who refuse to accept quantum mechanics:
In this later paper, Bell coined the new terms "local beables" and "local causality" which have turned his writing into complete mess combining outright wrong statements with totally illogical definitions. He also tried to retroactively rewrite what he did in 1964. While in 1964, as I said, he was rather clear that he made two main assumptions about the theory, namely that it is local and it is classical (although he wasn't a sufficiently clearly thinking physicist to actually use the word "classical"), in the 1976 paper, he already tried to claim that he had only made one assumption, "local causality". ...

There only exists one physically meaningful notion of locality or local causality – and it's what holds whenever the special theory of relativity (or its Lorentz invariance) is correct. The probabilities of measurements done purely in region A are determined by the prehistory of A – and don't depend on further data in the region B although it may contain objects previously in contact with the objects in A. There may exist correlations between measurements in A and B but all of those are calculable from the conditions in the past when A,B were a part of a single system AB. What's important is that people's decisions (e.g. what to measure), nuclear explosions, and other random events in region B don't cause changes to the system A. ...

The claim by Bell, Eric Cavalcanti, and this whole stupid cult that quantum mechanical theories cannot be "Bell locally causal" is wrong ...

People who talk about "beables" in quantum mechanics are doing an equally silly mistake as a molecular geneticist who builds his science on the seven days of creation.
This is correct. The term "beables" is just a goofy term for local hidden variables in a classical theory. Talking about them is just expressing a religious objection to quantum mechanics.

Saturday, June 15, 2019

Woit dives into Bell non-locality

Peter Woit writes:
what’s all this nonsense about Bell’s theorem and supposed non-locality?

If I go to the Scholarpedia entry for Bell’s theorem, I’m told that:
Bell’s theorem asserts that if certain predictions of quantum theory are correct then our world is non-local.
but I don’t see this at all. As far as I can tell, for all the experiments that come up in discussions of Bell’s theorem, if you do a local measurement you get a local result, and only if you do a non-local measurement can you get a non-local result. Yes, Bell’s theorem tells you that if you try and replace the extremely simple quantum mechanical description of a spin 1/2 degree of freedom by a vastly more complicated and ugly description, it’s going to have to be non-local. But why would you want to do that anyway?
He is right, and he cites a Gell-Mann video in agreement, but most of the comments don't.

In short, Bell proved a nice theorem in 1964 showing the difference between quantum mechanics and classical theories. He showed that assuming local hidden variables leads to predictions that differ from those of quantum mechanics.

In later years, Bell convinced many people that the assumption of hidden variables was not necessary. They are wrong, and most respectable physicists, textbooks, and Wikipedia articles say so. Some say that he would have deserved a Nobel Prize if he had been right, but he got no such prize.

Lee Smolin responds:
I have made what I hope is a strong case for taking seriously what we might call the “John Bell point of view” in my recent book Einstein’s Unfinished Revolution, and find myself mostly in agreement with Eric Dennis and Marko. I would very briefly underline a few points:

-This is not a debate resting on confusions of words. Nor is there, to my knowledge, confusion among experts about the direct implications of the experimental results that test the Bell inequalities. The main non-trivial assumption leading to those inequalities is a statement that is usually labeled “Bell-locality”. Roughly this says (given the usual set up) that the “choice of device setting at A cannot influence the same-time output at B”.

Nothing from either quantum mechanics nor classical mechanics is assumed. The experiments test “Bell-locality” and the experimental results are (after careful examination of loop-holes etc.) that the inequality is cleanly and convincingly violated in nature. Therefore “Bell-locality” is false in nature.

-The conclusion that “Bell-locality” is false in nature is an objective fact. It does not depend on what view you may hold on the ultimate correctness, completeness or incompleteness of QM. Bohmians, Copenhagenists, Everettians etc. all come to the same conclusion.
This is so wrong that I am tempted to cite Lubos Motl's opinion of Smolin.

There is no experiment where the choice of device setting at A influences the same-time output at B. The Bell experiments only show that the choice of device setting at A influences how we go about making predictions at B, but it never affects the output at B.

There is a big difference. One is spooky action-at-a-distance, and one is not.

It is amazing that Smolin could write a book on this subject, and get this simple point wrong.

I added this comment, but it was deleted:
As Wikipedia says: Cornell solid-state physicist David Mermin has described the appraisals of the importance of Bell's theorem in the physics community as ranging from "indifference" to "wild extravagance".

Bell assumed local hidden variables. If you believe that the theory of everything should be based on hidden variables, then Bell's theorem is a big deal. If you believe in QM as a perfectly good non-hidden-variable theory, then Bell's theorem has no importance.

As Peter says, "systems don’t have simultaneous well-defined values for non-commuting observables." That is another way of saying that Bell's assumption of hidden variables is violated.
Motl posted his explanation.
You would need non-locality in a classical theory to fake the quantum results. But that doesn't mean that our Universe is non-local because your classical theory – and any classical theory – is just wrong. In particular, the correct theory – quantum mechanics – says that the wave function is not an actual observable. It means that its collapse isn't an "objective phenomenon" that could cause some non-local influences. Instead, it may always be interpreted as the observer's learning of some new information.
One comment cited Travis Norsen to support Bell. I explained his errors in this 2015 post.

Peter Shor comments:
First: quantum mechanics doesn’t just break classical probability theory (as Bell demonstrated); it breaks classical computational complexity theory and classical information theory as well. This is why there are a number of computer scientists who are convinced that quantum computers can’t possibly work.
Those computer scientists might be right.

The Bell test experiments were done by physicists with a belief that they would disprove quantum mechanics, and overthrow 50 years of conventional wisdom. All they did was to confirm what the textbooks said.

Most of the 20th century passed without any firm opinion that quantum mechanics breaks classical computational complexity theory and classical information theory. That opinion developed in the last 30 years, just as the experiments were being done. While experiments have confirmed aspects of quantum mechanics that Bell doubted, the jury is still out on quantum computing.

Wednesday, June 12, 2019

Constructing Bell paradoxes

An economics professor and puzzle author posted this variant of a Bell Theorem paradox:
The Brain Teaser: Today we both chose Tails and both saw green. What colors were on our Heads sides? ...

Possible Resolution I. There are no such coins. You’re right. There are no such coins. But there are subatomic particles that behave exactly like these coins. ...

Possible Resolution II: The coins are very very sneaky and they like to screw around with our minds, so they change their own colors depending on the choices we make, just to fool us. ...

Possible Resolution III: Neither side of either coin has a color until we decide to examine it, so that on a day when I examine my tails side, it makes no sense to ask about the color of the heads side in the first place.

Therefore I am not allowed to pose this brain teaser in the first place. ...

So where does this leave us?

It leaves us with quantum mechanics, which correctly predicts the result of this experiment, among kajillions of others, and which is incompatible with classical intuitions (as the puzzle in the post is intended to illustrate), but is perfectly consistent with different intuitions which take a while to cultivate, but are not impossible to get used to.
I do not think that these Bell puzzles are helpful in understanding quantum mechanics.

Quantum mechanics was created to deal with a fundamental observed fact: Electricity and light are sometimes seen as waves, and sometimes particles.

If you model them as waves, then the discrete energy values are puzzling. If you model them as particles, then the interference patterns are puzzling.

Heisenberg got around this difficulty by saying that you could think of an electron as a particle, but one where you cannot measure the position and momentum at the same time, as you could if it were really a particle.
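In symbols, Heisenberg's trade-off is

$$\Delta x \, \Delta p \ge \frac{\hbar}{2},$$

so sharpening the position necessarily blurs the momentum, and vice versa.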

All these Bell paradoxes are based on pretending that the electron is a particle on which conflicting measurements can be made at the same time.

If you think that an electron or photon is a particle, and you want to see something puzzling, just look at the double-slit experiment.

Subatomic particles are not like coins in the above puzzle. Coins have properties like position and momentum that are independent of observation. Subatomic particles have wave-like properties, and the particle-like properties depend on how they are observed. These are simple facts of nature that have been understood since the 1920s.

I often hear of physicists who hope that someone will find a new interpretation of quantum mechanics that will make these wave-particle puzzles go away. Forget it. They are just refusing to accept facts of nature.

Monday, June 10, 2019

Smolin attacks Einstein myth

Lee Smolin writes in a 2015 paper:
There is a myth that Einstein's discovery of general relativity was due to his following beautiful mathematics to discover new insights about nature. I argue that this is an incorrect reading of the history and that what Einstein did was to follow physical insights which arose from asking that the story we tell of how nature works be coherent.
Smolin assumes that Einstein discovered general relativity, but if you study the history of general relativity, you find that most of the key ideas were developed by others.

Poincare and Minkowski developed four-dimensional spacetime; Poincare, the finite propagation speed of gravity and the first relativistic field theory for gravity; Minkowski and Laue, the stress-energy tensor; Ricci, Levi-Civita, and Grossmann, the Ricci tensor; Poincare, a partial explanation for the Mercury anomaly; Hilbert, the Lagrangian approach; and Schwarzschild, the black hole metric.

The big equation of general relativity that was used to solve the Mercury anomaly was just Ricci = 0.
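Spelled out, "Ricci = 0" is the vacuum case of the full field equations:

$$R_{\mu\nu} = 0, \qquad \text{the } T_{\mu\nu} = 0 \text{ case of } \quad R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}.$$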
But how exactly did Einstein perform the seemingly miraculous feat of inventing a theory that correctly describes phenomena that had not yet even been observed? There is a myth which is usually trotted out to answer this query, which is that Einstein was a lone genius who followed beautiful mathematics to discover his great theory. Genius, inspired by aesthetics. Mathematics as a tool of prophecy.

No one was more responsible for spreading this myth than Einstein himself, who described in several essays and popular talks in the 1920's and later how he followed a trail of mathematical beauty to his discovery of general relativity. As Einstein wrote in his autobiographical notes,

“I have learned something else from the theory of gravitation: no collection of empirical facts, no matter how comprehensive, can ever lead to the formulation of such complicated equations ... [they] can only be found by the discovery of a logically simple mathematical condition that completely, or almost completely, determines the equations. Once one has those sufficiently strong formal conditions one requires only little knowledge of facts to set up a theory.”[3]

In the last twenty years historians have been doing a careful job of studying what Einstein actually did during the eight years of hard, often frustrating work ... And it was very different from the myth.
Einstein constructed several myths about himself.
In the absence of ideas and insights about nature, Einstein fell back on mathematics as his guide. He constructed a myth about how mathematical beauty had been prophetic for his invention of general relativity and he attempted to use it to justify his forays into unified field theory.

Einstein's search for a unified field theory failed, and the roots of this failure are his embrace of mathematical beauty as a guiding principle. Over the thirty-five years between 1920 and his death in 1955 Einstein attempted many versions of a unified field theory. He tried higher, hidden dimensions, they failed. He tried more general versions of curved geometries beyond the geometry used in general relativity. They all failed to produce a useful unification.
I wrote a book on this subject: How Einstein Ruined Physics.
According to popular accounts of the scientific method, such as Thomas Kuhn's The Structure of Scientific Revolutions[1], theories are invented to describe phenomena which experimentalists have previously discovered. ...

This simple schema does not apply to general relativity. All the characteristic phenomena that general relativity describes were unknown in 1915 when Einstein published his theory. These include the expanding universe, black holes, light bending in gravitational fields, gravitational lenses, time slowing down in gravitational fields, gravitational waves, dark energy. Not only were these phenomena not yet observed in 1915, most of them had not even been thought about. The fact that a century later, all of these are well confirmed is a triumph unmatched by any other theory in the history of science.
It would be amazing if Einstein predicted all of these things out of pure theory in 1915. But Einstein did not even believe in most of them himself.

The expanding universe has little to do with general relativity. Gravity is an attractive force. If the universe were not expanding, it would have collapsed already, under either Newtonian gravity or general relativity.

Black holes and gravitational lenses are consequences of strong gravitational fields, regardless of relativity. So is bending light, if light has mass. Dark energy was not predicted or discovered until about 20 years ago.

Einstein did discover time slowing down in gravitational fields, but as a consequence of special relativity, in 1908.

Poincare discovered a relativistic theory of gravity in 1905 that predicted gravitational waves.

So Einstein's 1915 theory is not responsible for any of these things.

Special relativity was invented to explain experiments, like the Michelson-Morley experiment. General relativity was invented to reconcile special relativity with gravity, and to explain gravitational causality and the Mercury anomaly.

I wrote a series of posts on how relativity might have been invented independently of experiment. But historically, it did not happen that way. All of the worthwhile scientific theories were invented to describe phenomena which experimentalists have previously discovered.

Theoretical physicists today like to ignore experiments and try to find the great unified theory based on pure thought, so they like to promote the myth that such an approach has succeeded in the past.

Friday, June 7, 2019

My blog motto is validated

This blog motto is:
Natura non facit saltus (Latin for "nature does not make jumps") has been an important principle of natural philosophy. ...

The principle expresses the idea that natural things and properties change gradually, rather than suddenly. In a mathematical context, this allows one to assume that the solutions of the governing equations are continuous, and also does not preclude their being differentiable (differentiability implies continuity). Modern day quantum mechanics is sometimes seen as violating the principle, with its idea of a quantum leap.[5] Erwin Schrödinger in his objections to quantum jumps supported the principle, and initially developed his wave mechanics in order to remove these jumps.
Quanta mag reports:
When quantum mechanics was first developed a century ago as a theory for understanding the atomic-scale world, one of its key concepts was so radical, bold and counter-intuitive that it passed into popular language: the “quantum leap.” Purists might object that the common habit of applying this term to a big change misses the point that jumps between two quantum states are typically tiny, which is precisely why they weren’t noticed sooner. But the real point is that they’re sudden. So sudden, in fact, that many of the pioneers of quantum mechanics assumed they were instantaneous.

A new experiment shows that they aren’t. By making a kind of high-speed movie of a quantum leap, the work reveals that the process is as gradual as the melting of a snowman in the sun. “If we can measure a quantum jump fast and efficiently enough,” said Michel Devoret of Yale University, “it is actually a continuous process.”

In a 1952 paper called “Are there quantum jumps?,” Schrödinger answered with a firm “no,” his irritation all too evident in the way he called them “quantum jerks.” ...

The techniques developed by the Yale team reveal the changing mindset of a system during a quantum jump.
The article goes on to discuss randomness, which is really another issue.

The slogan applies to everything from electrons to grand ideas. Things change continuously.

Wednesday, June 5, 2019

Paper says Hawking faked his illness

A Discover mag blog reports:
A paper in a peer-reviewed medical journal makes the suggestion that physicist Stephen Hawking’s disability, which famously confined him to a wheelchair and robbed him of his speech, was psychosomatic in nature.
If you think that is impossible, the author notes that a 2014 biographical movie about Hawking featured an actor who faked Hawking's illness very convincingly. If an actor could do it, then so could Hawking!

This seems a little wacky, but there are a lot of others who believe that Hawking was misdiagnosed.

Tuesday, June 4, 2019

Reality and substantiality of the luminiferous ether

Science Friday has a segment making fun of wrong theories in the history of science, and last Friday the target was the aether. They mock this 1884 Lord Kelvin essay:
This thing we call the luminiferous ether. That is the only substance we are confident of in dynamics. One thing we are sure of, and that is the reality and substantiality of the luminiferous ether. ...

I move through this “luminiferous ether” as if it were nothing. ... What can this luminiferous ether be? It is something that the planets move through with the greatest ease. It permeates our air; it is nearly in the same condition, so far as our means of judging are concerned, in our air and in the inter-planetary space. The air disturbs it but little; you may reduce air by air-pumps to the hundred thousandth of its density, and you make little effect in the transmission of light through it. The luminiferous ether is an elastic solid, for which the nearest analogy I can give you is this jelly which you see, and the nearest analogy to the waves of light is the motion, which you can imagine, of this elastic jelly, with a ball of wood floating in the middle of it. ...

What we know of the luminiferous ether is that it has the rigidity of a solid and gradually yields. Whether or not it is brittle and cracks we cannot yet tell, but I believe the discoveries in electricity and the motions of comets and the marvellous spurts of light from them, tend to show cracks in the luminiferous ether — show a correspondence between the electric flash and the aurora borealis and cracks in the luminiferous ether. Do not take this as an assertion, it is hardly more than a vague scientific dream: but you may regard the existence of the luminiferous ether as a reality of science; that is, we have an all-pervading medium, an elastic solid, with a great degree of rigidity — a rigidity so prodigious in proportion to its density that the vibrations of light in it have the frequencies I have mentioned, with the wave-lengths I have mentioned. The fundamental question as to whether or not luminiferous ether has gravity has not been answered. We have no knowledge that the luminiferous ether is attracted by gravity; it is sometimes called imponderable because some people vainly imagine that it has no weight; I call it matter with the same kind of rigidity that this elastic jelly has.
Okay, some of this stuff about cracks and jelly seems ridiculous today, but most of it is mostly correct. The discovery of dark energy could be said to be an answer to his question about "whether or not luminiferous ether has gravity".

Special relativity taught nothing about whether the aether exists. It only taught that any such aether must be Lorentz invariant. As the program conceded at the end, modern quantum field theory does have the concept of a pervasive medium like the aether, as that is how the Higgs field is explained.

The program explained the Michelson-Morley experiment, and how it failed to detect aether motion. The experiment was decisive in the discovery and acceptance of special relativity, but Einstein seems to have had only vague third-hand knowledge of it.

Yes, it is a historical fact that the MM experiment was crucial for special relativity, but not for Einstein's thinking. Einstein's famous paper gave an exposition of Lorentz's theory, but it did not bother to explain Lorentz's MM-based reasoning.

Monday, June 3, 2019

Einstein's biggest mistake

Gary J. Ferland has a new letter on Einstein's biggest mistake:
What, if any, was Einstein's biggest mistake, the one most affecting our physics today? There is a perhaps apocryphal story, recounted by George Gamow, that he counted his cosmological constant as his biggest blunder. We now know his hypothesized cosmological constant to be correct. His lifelong rejection of quantum mechanics, an interesting side-story in the evolution of 20th-century physics, is a candidate. None of these introduced difficulties in how our physics is done today. It can be argued that his biggest actual mistake, one that affects many subfields of physics and chemistry and bewilders students today, occurred in his naming of his A and B coefficients.
Yes, his lifelong rejection of quantum mechanics is a candidate for his biggest mistake. So is his pursuit of unified field theory.

I think his biggest mistake was his failure to accept relativity as a geometric theory. Students today learn relativity as a geometric theory, and they are taught that Einstein discovered it all, but Einstein refused to accept the geometric view in both special and general relativity. I explain this point on this blog, such as here.

Lee Smolin wrote:
Einstein’s search for a unified field theory failed, and the roots of this failure are his embrace of mathematical beauty as a guiding principle. Over the thirty-five years between 1920 and his death in 1955 Einstein attempted many versions of a unified field theory. ...

Einstein already understood by 1922 that the hypothesis that there are extra, hidden dimensions could not give a unification of the forces. ...

As Einstein wrote to his friend Paul Ehrenfest, “It is anomalous to replace the four dimensional continuum by a five dimensional one and then to subsequently tie up artificially one of those five dimensions in order to account for the fact that it does not manifest itself.”[5].

[5] Quoted in Abraham Pais, Subtle is the lord (New York: Oxford Univ. Press, 1982) p.334
Einstein had a similar objection to extending from three-dimensional space to four-dimensional spacetime, with time having a different geometry.

The problem here is that Einstein did not truly recognize the significance of geometry. He had this blind spot all of his life.

Sunday, June 2, 2019

Wormholes are not entangled black holes

Lubos Motl has posted this rant against a prominent complexity theorist:
Let me simplify it. He is extremely skeptical of the ER-EPR correspondence because the wormholes belong to general relativity, entanglement belongs to quantum mechanics, these are different theories, so the objects must be different, too! Ingenious. ER=EPR was disproven. End of story. Or is it? ...

In quantum gravity, there may be Einstein-Rosen bridges (non-traversable wormholes) and when they're there, they have the exact same effect on all observers and couples of observers as entangled black hole pairs. When they walk like a duck (an entangled black hole pair), quack like one, and so on, they are a duck. ...

Many laymen were trained to parrot and indefinitely repeat some completely wrong slogans such as "string theory is not even wrong because it's not falsifiable". ...

When I was an undergrad, instructors were generally teaching us that when a famous scientist associates himself with some extremely wrong statements, he loses his name, image, credibility, and often his job, too. I think that the times have changed and the expectations of the institutionalized science have dramatically weakened. ...
I agree with his last point. Today, famous scientists associate themselves with crackpot causes all the time, with no noticeable loss to their status.

String theory does not make any testable predictions. Wormholes do not exist. Entangled black holes do not exist. These nonexistent things are not even related.

By "quantum gravity", he does not mean the perfectly good theories we have today, and which predict every known situation where gravity might be quantized. He means "consistent" quantum gravity, which is some hypothetical theory that no one has ever found and could never be tested, but would explain the center singularity of a black hole and the first nanosecond of the big bang.

Lumo doesn't argue that there is any empirical merit to any of these ideas. Instead, he cites super-smart big-shot physicists who have published many 1000s of papers citing these ideas, and implies that critics are not smart enough to understand them.

Much of theoretical physics has become like comic books in the Marvel universe. Wild fantasies that have no bearing on reality.

Saturday, June 1, 2019

Google is researching Cold Fusion

News:
According to a peer-reviewed paper revealed this week, Google is continuing its experiments into the controversial science of cold fusion -- the theory that nuclear fusion, the process that powers the Sun, can produce energy in a table-top experiment at room temperature. While Google's recent project found no evidence that cold fusion is possible, it did make some advances in measurement and materials-science techniques that the researchers say could benefit energy research. "The team also hopes that its work will inspire others to revisit cold-fusion experiments, even if the phenomenon still fails to materialize," reports Nature.
The story got this comment:
Google tossed $10 million into research.
If it pans out, energy is trillions of dollars each and every year.

Someone at Google probably figured the odds of this working at one in ten thousand. A 1 in 10,000 chance of making your money multiply by a million X is a good bet.

Yeah, their quantum computing research may be a bit similar. Spend a few million to potentially make forty billion. If Google tries 1,000 long-shot projects at $10 million each, and only one succeeds to the extent that it makes twenty billion, they've doubled their money.

It's a bit like you spending a few minutes applying for a job making twice as much money. We probably won't get it, but it's worth the five minutes because if we do that 50 times we might well double our salary (and spend 250 minutes applying for jobs). Google can afford to try out $10 million here, $10 million there, and see what ends up working.
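The commenter's arithmetic is easy to check; a one-liner with the odds and payoff taken from the comment (the trillion-dollar figure is the comment's assumption, not mine):

```python
# Expected value of the $10M cold-fusion bet, per the comment's numbers.
p, payoff, cost = 1 / 10_000, 1e12, 10e6
print(p * payoff / cost)  # 10.0 -- expected return of 10x the stake
```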
Google is also working on electric flying cars, and other long-shots.

The actual Nature paper says:
The 1989 claim of ‘cold fusion’ was publicly heralded as the future of clean energy generation. However, subsequent failures to reproduce the effect heightened scepticism of this claim in the academic community, ... we embarked on a multi-institution programme to re-evaluate cold fusion to a high standard of scientific rigour. Here we describe our efforts, which have yet to yield any evidence of such an effect. Nonetheless, a by-product of our investigations has been to provide new insights ...
Google may soon be spinning its quantum computer research the same way.

The claim of quantum computing was heralded as the future of high-performance computing. Subsequent failures to achieve a quantum advantage have led to skepticism. A rigorous evaluation was needed. While they did not succeed in making a quantum computer, they provided some new insights.