Sunday, September 8, 2019

Carroll promotes his dopey new quantum book

Physicist Sean M. Carroll has a NY Times op-ed today promoting his stupid new book.
“I think I can safely say that nobody really understands quantum mechanics,” observed the physicist and Nobel laureate Richard Feynman. ...

What’s surprising is that physicists seem to be O.K. with not understanding the most important theory they have.
No, that is ridiculous.

I assume that Feynman meant that it is hard to relate quantum objects to classical objects with a more intuitive understanding. Physicists grappled with the theory in the 1920s, and by 1935 everyone had a good understanding of it.
The reality is exactly backward. Few modern physics departments have researchers working to understand the foundations of quantum theory.
That is because the foundations were well-understood 90 years ago.
In the 1950s the physicist David Bohm, egged on by Einstein, proposed an ingenious way of augmenting traditional quantum theory in order to solve the measurement problem. ... Around the same time, a graduate student named Hugh Everett invented the “many-worlds” theory, another attempt to solve the measurement problem, only to be ridiculed by Bohr’s defenders.
They deserved to be ridiculed, but their theories did nothing towards solving the measurement problem, are philosophically absurd, and have no empirical support.
The current generation of philosophers of physics takes quantum mechanics very seriously, and they have done crucially important work in bringing conceptual clarity to the field.
Who? I do not think that there is any living philosopher who has shed any light on the subject.
It’s hard to make progress when the data just keep confirming the theories we have, rather than pointing toward new ones.

The problem is that, despite the success of our current theories at fitting the data, they can’t be the final answer, because they are internally inconsistent.
This must sound crazy to an outsider. Physicists have perfectly good theories that explain all the data well, and yet Carroll writes books on why the theories are no good.

The theories are just fine. Carroll's philosophical prejudices are what is wrong.

Carroll does not say what would discredit him -- he is a big believer in the many-worlds theory. If he wrote an op-ed explaining exactly what he believes about quantum mechanics, everyone would deduce that he is a crackpot.

Philosopher Tim Maudlin also has a popular new essay on quantum mechanics. He is not so disturbed by the measurement problem, or indeterminism, or Schroedinger's cat, but he is tripped up by causality:
What Bell showed is that if A and B are governed by local physics — no spooky-action-at-a-distance — then certain sorts of correlations between the behaviours of the systems cannot be predicted or explained by any local physics.
This is only true if "local physics" means a classical theory of local hidden variables. Bell did show that quantum mechanics can be distinguished from those classical theories, but there is still no action-at-a-distance.
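For concreteness, here is the standard CHSH form of Bell's result. In any local hidden-variable model, where each system carries pre-set instructions for every measurement setting, the correlations \(E(a,b)\) for settings \(a, a'\) on one side and \(b, b'\) on the other must satisfy

\[ S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2, \]

while quantum mechanics predicts, and experiments confirm, values up to \(2\sqrt{2} \approx 2.83\). That is enough to rule out the classical local hidden-variable theories, without any need for action-at-a-distance.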

Update: Lumo trashes Carroll's article. Woit traces an extremely misleading claim about a physics journal's editorial policy. The journal just said that it is a physics journal, and articles have to have some physics in them. Philosophy articles could be published elsewhere.

Saturday, September 7, 2019

Deriving free will from singularities

William Tomos Edwards writes in Quillette to defend free will:
[Biologist Jerry] Coyne dismisses the relevance of quantum phenomena here. While it’s true that there is no conclusive evidence for non-trivial quantum effects in the brain, it is an area of ongoing research with promising avenues, and the observer effect heavily implies a connection. Coyne correctly points out that the fundamental randomness at the quantum level does not grant libertarian free will. Libertarian free will implies that humans produce output from a process that is neither random nor deterministic. What process could fit the bill?
No, they are both wrong. Libertarian free will certainly does imply that humans produce output that is not predictable by others, and hence random. That is the definition of randomness.

Quantum randomness is not some other kind of randomness. There is only one kind of randomness.

Then Edwards goes off the rails:
Well, if the human decision-making process recruits one or more irremovable singularities, and achieves fundamentally unpredictable output from those, I would consider that a sufficient approximation to libertarian free will. Furthermore, a singularity could be a good approximation to an “agent.” Singularities do occur in nature, at the center of every black hole, and quite possibly at the beginning of the universe, and quantum phenomena leave plenty of room open for them. ...

The concept of a singularity becomes important once again here because if you can access some kind of instantaneous infinity and your options are fundamentally, non-trivially infinite, then it would seem you have escaped compatibilism and achieved a more profound freedom.
Now he is just trolling us. There are no singularities or infinities in nature. You can think of the center of a black hole that way, but it is not observable, so no one will ever know. There certainly aren't any black holes in your brain.

Coyne replies here, but quantum mechanics is outside his expertise.

Thursday, September 5, 2019

Universal grammar and other pseudosciences

Everyone agrees that astrology is pseudoscience, but this new paper takes on some respected academic subjects:
After considering a set of demarcation criteria, four pseudosciences are examined: psychoanalysis, speculative evolutionary psychology, universal grammar, and string theory. It is concluded that these theoretical frameworks do not meet the requirements to be considered genuinely scientific. ...

To discriminate between two different types of activities some kind of criteria or standards are necessary. It is argued that the following four demarcation criteria are suitable to distinguish science from pseudoscience:
1. Testability. ...
2. Evidence. ...
3. Reproducibility. ...
4. The 50-year criterion.
By these criteria, string theory fails to be science. The paper mentions that a couple of philosophers try to defend string theory, but only by inventing some new category. I guess they don't want to call it "pseudoscience" if respectable professors promote it.

Respectable professors also have a long history of supporting Freudian psychoanalysis.

This claim about universal grammar struck me:
Chomsky (1975, p. 4) argues that children learn language easily since they do it without formal instruction or conscious awareness.
Not only is Chomsky well-respected for these opinions, but Steve Pinker and many others have said similar things.

This puzzles me. I taught my kids to talk, and I would not describe it as easy. I had to give them formal instruction, and they seemed to be consciously aware of it.

The process takes about a year. It is a long series of incremental steps. Steps are: teaching the child to understand simple commands, such as "stop", articulating sounds like "hi", responding to sounds, like saying "hi" in response to "hi", learning simple nouns, like saying "ball" while pointing to a ball, developing a vocabulary of 20 nouns or so, learning simple verbs like "go", putting together subject-verb, putting together subject-verb-object, etc.

All of these steps are difficult for a two-year-old, and require a great deal of individual instruction and practice.

Sure, two-year-olds might learn a lot by observing, but you could say the same about other skills. Some are taught to dress themselves, while others learn by mimicking others. No one would say that children learn to dress themselves without instruction.
Steven Pinker was the first to popularize the hypothesis that language is an instinct. In his influential book The Language Instinct, Pinker asserts that “people know how to talk in more or less the sense that spiders know how to spin webs” (Pinker 1995, p. 18). Pinker’s analogy is striking, since it is obviously incorrect. A spider will spin webs even if it remains isolated since birth. On the other hand, a child who has been isolated since birth will not learn language. In other words, while web-spinning does not require previous experience and it is innate, language does require experience and it is learned.
Chomsky and Pinker are two of our most respected intellectuals today.

Googling indicates that Chomsky had one daughter and no grandkids. Pinker has no kids. I am not sure that is relevant, as many others have similarly claimed that children learn language naturally.

Monday, September 2, 2019

Psychology is in crisis

I often criticize the science of Physics here, but some other sciences are doing much worse. Such as:
Psychology is declared to be in crisis. The reliability of thousands of studies have been called into question by failures to replicate their results. ...

The replication crisis, if nothing else, has shown that productivity is not intrinsically valuable. Much of what psychology has produced has been shown, empirically, to be a waste of time, effort, and money. As Gibson put it: our gains are puny, our science ill-founded.
This is pretty harsh, but it doesn't even mention how many leaders in the field have turned out to be frauds, or how some sub-fields are extremely politicized, or how much damage they do to people through psychotherapy.

Friday, August 30, 2019

Einstein did not get relativity from Hume

An Aeon essay starts:
In 1915, Albert Einstein wrote a letter to the philosopher and physicist Moritz Schlick, who had recently composed an article on the theory of relativity. Einstein praised it: ‘From the philosophical perspective, nothing nearly as clear seems to have been written on the topic.’ Then he went on to express his intellectual debt to ‘Hume, whose Treatise of Human Nature I had studied avidly and with admiration shortly before discovering the theory of relativity. It is very possible that without these philosophical studies I would not have arrived at the solution.’

More than 30 years later, his opinion hadn’t changed, as he recounted in a letter to his friend, the engineer Michele Besso: ‘In so far as I can be aware, the immediate influence of D Hume on me was greater. I read him with Konrad Habicht and Solovine in Bern.’ We know that Einstein studied Hume’s Treatise (1738-40) in a reading circle with the mathematician Conrad Habicht and the philosophy student Maurice Solovine around 1902-03. This was in the process of devising the special theory of relativity, which Einstein eventually published in 1905. It is not clear, however, what it was in Hume’s philosophy that Einstein found useful to his physics.
It is amazing that anyone takes Einstein's egomania seriously.

Einstein got relativity from Lorentz and Poincare, and spent his whole life lying about it. Saying that he got his ideas from some philosopher is just a way of denying credit to Lorentz and Poincare.

Sunday, August 25, 2019

Worrying about testing the simulation hypothesis

Philosophy professor Preston Greene argues in the NY Times:
But what if computers one day were to become so powerful, and these simulations so sophisticated, that each simulated “person” in the computer code were as complicated an individual as you or me, to such a degree that these people believed they were actually alive? And what if this has already happened?

In 2003, the philosopher Nick Bostrom made an ingenious argument that we might be living in a computer simulation created by a more advanced civilization. He argued that if you believe that our civilization will one day run many sophisticated simulations concerning its ancestors, then you should believe that we’re probably in an ancestor simulation right now. ...

In recent years, scientists have become interested in testing the theory. ...

So far, none of these experiments has been conducted, and I hope they never will be. Indeed, I am writing to warn that conducting these experiments could be a catastrophically bad idea — one that could cause the annihilation of our universe. ...

if our universe has been created by an advanced civilization for research purposes, then it is reasonable to assume that it is crucial to the researchers that we don’t find out that we’re in a simulation. If we were to prove that we live inside a simulation, this could cause our creators to terminate the simulation — to destroy our world. ...

As far as I am aware, no physicist proposing simulation experiments has considered the potential hazards of this work.
Isn't it great that we have philosophers to worry about stuff like this?

Extending this reasoning further, we should shut down the LHC particle collider, and all the quantum computer research. These are exceptionally difficult (ie, computationally intensive) to simulate. If we overwhelm the demands on the simulator, then the system could crash or get shut down.

We probably should not look for extraterrestrials either.

A psychiatrist wonders about our simulator overlords reading NY Times stories worrying about simulation.

I think these guys are serious, but I can't be sure. It is not any wackier than Many-Worlds.

Another philosopher, Richard Dawid, has a new paper on the philosophy of string theory:
String theory is a very different kind of conceptual scheme than any earlier physical theory. It is the first serious contender for a universal final theory. It is a theory for which previous expectations regarding the time horizon for completion are entirely inapplicable. It is a theory that generates a high degree of trust among its exponents for reasons that remain, at the present stage, entirely decoupled from empirical confirmation. Conceptually, the theory provides substantially new perspectives ...
Wow, a "universal final theory" that is entirely "decoupled" from experiment, and with no hope of "completion" in the foreseeable future. But it is conceptually interesting!

I am sure Dawid thinks that he is doing string theorists a favor by justifying their work, but he has to admit that the theory has no merit in any sense that anyone has ever recognized before.

Thursday, August 22, 2019

Quantum physics is not in crisis

The latest Lumo rant starts:
Critics of quantum mechanics are wrong about everything that is related to foundations of physics and quite often, they please their readers with the following:

Physics has been in a crisis since 1927. ...
You may see that they 1) resemble fanatical religious believers or their postmodern, climate alarmist imitators or the typical propaganda tricksters in totalitarian regimes. They tell you that there is a crisis so you should throw away the last pieces of your brain and behave as a madman – that will surely help. ...

In reality, the years 1925-1927 brought vastly more true, vastly more solid, vastly more elegant, and vastly more accurate foundations to physics, foundations that are perfectly consistent and that produce valid predictions whose relative accuracy may be \(10^{-15}\) (magnetic moment of the electron).

On the new postulates of quantum mechanics, people have built atomic and molecular physics, quantum chemistry, modern optics, lasers, condensed matter physics, superconductors, semiconductors, graphene and lots of new materials, transistors, diodes of many kind, LED and OLED and QLED panels, giant magnetoresistance, ...
He is attacking Sean M. Carroll's book, and other similar modern gripes about quantum mechanics.

I mostly agree with him. Quantum mechanics is the most successful theory we have, and we have professors saying it is in crisis, or it doesn't make sense, or the foundations are wrong, or some such nonsense.

If quantum mechanics does not obey your idea of what a theory should be, then it is time to re-examine your prejudices about what a theory should be. Quantum mechanics has succeeded beyond all expectations in every possible way.

Dr. Bee says:
Now, it seems that black holes can entirely vanish [over trillions of years] by emitting this radiation. Problem is, the radiation itself is entirely random and does not carry any information. So when a black hole is entirely gone and all you have left is the radiation, you do not know what formed the black hole. Such a process is fundamentally irreversible and therefore incompatible with quantum theory. It just does not fit together.
I am baffled how obviously intelligent physicists can say this nonsense. Everything in quantum mechanics is irreversible. I don't even know of any reversible quantum experiments.

Quantum computers are supposed to do reversible operations on qubits, but they have never gotten it to work for more than a few microseconds, as far as I know. And Bee is worried that a trillion-year black hole decay might be irreversible? This is craziness.
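For scale, the usual textbook estimate for the evaporation time of a Schwarzschild black hole of mass \(M\) is

\[ t_{\rm evap} \approx \frac{5120\,\pi\,G^2 M^3}{\hbar c^4} \sim 10^{67}\ \text{years} \times \left(\frac{M}{M_\odot}\right)^3, \]

so for anything of stellar mass, even "trillions of years" is a vast understatement. Nobody will ever watch such a decay, let alone check whether it is reversible.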

Thierry Batard argues in a new paper:
In glaring contrast to its indisputable century-old experimental success, the ultimate objects and meaning of quantum physics remain a matter of vigorous debate among physicists and philosophers of science. ...

In the eyes of the Fields medalist René Thom (2016), this makes quantum physics “… far and away the intellectual scandal…” of the twentieth century. ...

quantum physics has “… been accused of being unreasonable and unacceptable, even inconsistent, by world-class physicists (for example, Newman…)” (Rovelli 1996)
How can something work flawlessly and be so unacceptable?

This is a bit like someone going around telling everyone that cell phones cannot possibly work. What are you going to believe -- your own eyes or some philosophy professor?

Wednesday, August 21, 2019

Carroll writes new book on Many Worlds

Peter Woit asks What’s the difference between Copenhagen and Everett?
What strikes me when thinking about these two supposedly very different points of view on quantum mechanics is that I’m having trouble seeing why they are actually any different at all.
To the extent that they are just interpretations, there is no substantive difference. Once the definitions themselves are disputed, this is not so clear.

Here are a couple of the better comments:
The difference is in the part that you don’t want to discuss, which is that Everettians postulate that the other worlds are real, while Copenhagenists refuse to say anything about what cannot be observed.

Good old books inform that the same issue had been fiercely debated around 1926, when Schroedinger/Einstein wanted to describe everything via a deterministic local equation, getting rid of quantum jumps. Heisenberg/Bohr explained that it’s not possible because we see particles as events. Decoherence and all modern stuff allow to understand better but don’t change the key point: we need probabilities. So the Schroedinger equation is just a tool for computing probabilities in configuration space.
Woit goes on to review Sean M. Carroll's new book, which is a 368-page argument for the Many Worlds theory of quantum behavior.

Woit says Carroll is a good writer and explainer, but the Many Worlds stuff is the babbling of a crackpot. The theory is so silly that it is hard to take anyone seriously who pushes Many Worlds.

Thursday, August 15, 2019

The Quantum Computing Party may never start

SciAm reports:
The Quantum Computing Party Hasn’t Even Started Yet

But your company may already be too late ...

For example, at IonQ, the company I co-founded to build quantum computer hardware, we used our first-generation machine to simulate a key measure of the energy of a water molecule. Why get excited when ordinary computers can handle the same calculation without breaking a sweat? ...

If you pay even a little attention to technology news, you've undoubtedly heard about the amazing potential of quantum computers, which exploit the unusual physics of the smallest particles in the universe. While many have heard the buzz surrounding quantum computing, most don't understand that you can't actually buy a quantum computer today, and the ones that do exist can't yet do more than your average laptop. ...

It will take a few more years of engineering for us to build capacity in the hundreds of qubits, but I am confident we will, and that those computers will deliver on the amazing potential of quantum technology.

The choice facing technology leaders in many industries is whether to start working today on the quantum software that will use the next generation of computers or whether to wait and watch the breakthroughs be made by more agile competitors.
Or wait to watch all the quantum computer companies fail.

He is right that you cannot buy a quantum computer, and the research models are so primitive as to be useless.

The party may never start.

Tuesday, August 13, 2019

Quantum Cryptography is still useless

IEEE Spectrum reports:
Quantum Cryptography Needs a Reboot

Quantum technologies—including quantum computing, ultra-sensitive quantum detectors, and quantum random number generators—are at the vanguard of many engineering fields today. Yet one of the earliest quantum applications, which dates back to the 1980s, still appears very far indeed from any kind of widespread, commercial rollout.

Despite decades of research, there’s no viable roadmap for how to scale quantum cryptography to secure real-world data and communications for the masses.

That’s not to say that quantum cryptography lacks commercial applications. ...

From a practical standpoint, then, it doesn’t appear that quantum cryptography will be anything more than a physically elaborate and costly—and, for many applications, largely ignorable—method of securely delivering cryptographic keys anytime soon.
So it does lack commercial applications. The technology does not do anything useful, as I have explained here many times.
“The same technologies that will allow you to do [quantum crypto] will also allow you to build networked quantum computers,” Bassett says. “Or allow you to have modular quantum computers that have different small quantum processors that all talk to each other. The way they talk to each other is through a quantum network, and that uses the same hardware that a quantum cryptography system would use.”

So ironically, the innards of quantum “cryptography” may one day help string smaller quantum computers together to make the kind of large-scale quantum information processor that could defeat… you guessed it… classical cryptography.
So all these folks think that classical cryptography is doomed. Someone will first have to invent a quantum processor before we can try to network such processors.

Friday, August 9, 2019

$3M prize for dead-end physics idea

Dr. Bee reports:
The Breakthrough Prize is an initiative founded by billionaire Yuri Milner, now funded by a group of rich people which includes, next to Milner himself, Sergey Brin, Anne Wojcicki, and Mark Zuckerberg. The Prize is awarded in three different categories, Mathematics, Fundamental Physics, and Life Sciences. Today, a Special Breakthrough Prize in Fundamental Physics has been awarded to Sergio Ferrara, Dan Freedman, and Peter van Nieuwenhuizen for the invention of supergravity in 1976. The Prize of 3 million US$ will be split among the winners.
What, you never heard of this work? That is because it was a dead end, and never led to anything.

For a couple of years in the 1970s, supergravity was an exciting idea, because it was thought that it would make quantum gravity renormalizable. However, that turned out to be false, and the theory is worthless.

Like string theory, it has no connection to any observational science. But even worse, it doesn't even make sense as a physical theory.

Update: Lumo writes:
Nature, Prospect Magazine, and Physics World wrote something completely different. The relevant pages of these media have been hijacked by vitriolic, one-dimensional, repetitive, scientifically clueless, deceitful, and self-serving anti-science activists and they tried to sling as much mud on theoretical physics as possible – which seems to be the primary job description of many of these writers and the society seems to enthusiastically fund this harmful parasitism.
Check them yourself. The Nature article says:
A lack of evidence should also not detract from supergravity’s achievements, argues Strominger, because the theory has already been used to solve mysteries about gravity. For instance, general relativity apparently allows particles to have negative masses and energies, in theory.
No, that is a big lie. Supergravity has nothing to do with positive mass. For details, see the comments on Woit's blog. Briefly, Witten published an outline for a proposed spinor proof of the Schoen-Yau positive mass theorem, and the paper ended with a short section starting with "a few speculative remarks will be made about the not altogether clear relation between the previous argument and supergravity." That's all.

Friday, August 2, 2019

Science journals must be politically correct

Indian-born British writer Angela Saini has found the formula, with articles in Scientific American:
The “race realists,” as they call themselves online, join the growing ranks of climate change deniers, anti-vaxxers and flat-earthers in insisting that science is under the yoke of some grand master plan designed to pull the wool over everyone’s eyes. In their case, a left-wing plot to promote racial equality when, as far as they’re concerned, racial equality is impossible for biological reasons. ...

Populism, ethnic nationalism and neo-Nazism are on the rise worldwide. If we are to prevent the mistakes of the past from happening again, we need to be more vigilant.
And Nature:
Racist ‘science’ must be seen for what it is: a way of rationalizing long-standing prejudices, to prop up a particular vision of society as racists would like it to be. It is about power. ... A world in thrall to far-right politics and ethnic nationalism demands vigilance. We must guard science against abuse and reinforce the essential unity of the human species.
She argues that there is no such thing as human races, and that genetics has nothing to do with the observed differences in athletic performance.

She is from India, which is not really competitive with the rest of the world in sports. So perhaps she does not realize how obvious the biological differences in sports are. But what excuse do Nature and SciAm have for publishing her nonsense?

If these journals can lie to us about human races, then they can also lie about climate change and a lot of other subjects.

Wednesday, July 31, 2019

Newton would accept modern physics

A reader sends me this Nautilus interview from last year:
Kuhn’s popular because of his phrase, “the paradigm shift.” The idea, roughly, is that Einstein came along and displaced Newton. He superseded the old view about the universe and now Newtonians couldn’t talk with Einstein’s people because they had two fundamentally different versions of reality.

And this is nonsense because of course scientists talk to each other all the time. We are endlessly changing the nature of science without losing our ability to communicate with each other about it. It’s inconceivable to me that Newton and Einstein, if they had the opportunity to get together and carry on a conversation, would have stared at each other in kind of mute incomprehension. ...

So Kuhn’s idea, correct me if I’m wrong, is that to some degree we’re always trapped inside of our own biases, our own theories. We can’t see beyond the paradigm. And this stays on until a new paradigm comes along and then our view becomes outdated.
If Isaac Newton could somehow be brought from the past and educated in XX century physics, he would certainly reject Kuhnian ideas that the newer physics was revolutionary or incommensurable.

I think that Newton would conclude:

1. Newtonian physics is still considered valid on scales far beyond any he proposed or contemplated.

2. Relativity solves the problem of how gravity is transmitted at finite speed. (Poincare solved this in 1905 based on Lorentz's ideas; Einstein had nothing to do with it.)

3. The only planetary orbit requiring a post-Newtonian correction requires centuries of observations to get a very slight effect.
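For the record, the orbit in question is Mercury's, and the general relativistic correction to its perihelion precession, per orbit, is

\[ \Delta\varphi = \frac{6\pi G M_\odot}{c^2 a (1-e^2)} \approx 5 \times 10^{-7}\ \text{radians}, \]

where \(a\) and \(e\) are the orbit's semi-major axis and eccentricity. That adds up to only about 43 arc-seconds per century -- exactly the kind of tiny, slowly accumulating effect that Newton could never have noticed.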

The modern philosophical ideas about scientific revolutions are complete nonsense. Physics has advanced a lot since Newton, but not so much that Newton would think that he had been proved wrong, or that he would find the new physics unrecognizable.

Monday, July 29, 2019

Dr. Bee endorses superdeterminism

Sabine Hossenfelder is usually fairly sensible, but now she has gone off the deep end:
A phenomenologist myself, I am agnostic about different interpretations of what is indeed the same math, such as QBism vs Copenhagen or the Many Worlds. ...

I find superdeterminism interesting ...

The stakes are high, for if quantum mechanics is not a fundamental theory, but can be derived from an underlying deterministic theory, this opens the door to new applications. That’s why I remain perplexed that what I think is the obvious route to progress is one most physicists have never even heard of. Maybe it’s just a reality they don’t want to wake up to. ...

Really, think about this for a moment. A superdeterministic theory reproduces quantum mechanics. It therefore makes the same predictions as quantum mechanics. (Or, well, if it doesn't, it's wrong, so forget about it.) Difference is that it makes *more* predictions besides that. (Because it's not probabilistic.)
I don't know how anyone can say that Copenhagen, Many Worlds, and superdeterminism all make the same predictions.

Not only is that false, but Many Worlds and superdeterminism are so absurd that there is nothing scientific about either one. They don't make any predictions. They are amusing philosophical thought experiments, but they have no "same math" as anything with any practical utility. They are like saying that we all live in a simulation, or as a figment of someone's imagination. Not really a scientifically meaningful idea.

I really wonder what Dr. Bee's conception of probability is, that she says these things. There is no way to make sense out of probability, consistent with her statements above. Maybe physics books never teach what probability is. I don't know how anyone can get it this wrong.

Wednesday, July 24, 2019

We have passed our peak

Everyone celebrated the 50th anniversary of the Moon landing, leaving many to wonder if we will ever do anything so great again. It is like the Egyptian pyramids -- a symbol of a once-great civilization.

Bruce Charlton claims:
I suspect that human capability reached its peak or plateau around 1965-75 – at the time of the Apollo moon landings – and has been declining ever since.
The Woodley effect claims that intelligence has been declining for a century.

Another guy claims science is dead:
Briefly, the argument of this book is that real science is dead, and the main reason is that professional researchers are not even trying to seek the truth and speak the truth; and the reason for this is that professional ‘scientists’ no longer believe in the truth - no longer believe that there is an eternal unchanging reality beyond human wishes and organization which they have a duty to seek and proclaim to the best of their (naturally limited) abilities. Hence the vast structures of personnel and resources that constitute modern ‘science’ are not real science but instead merely a professional research bureaucracy, thus fake or pseudo-science; regulated by peer review (that is, committee opinion) rather than the search-for and service-to reality. Among the consequences are that modern publications in the research literature must be assumed to be worthless or misleading and should always be ignored. In practice, this means that nearly all ‘science’ needs to be demolished (or allowed to collapse) and real science carefully rebuilt outside the professional research structure, from the ground up, by real scientists who regard truth-seeking as an imperative and truthfulness as an iron law.

Monday, July 22, 2019

Free will is like magnetism and lightning

Leftist-atheist-evolutionist professor Jerry Coyne writes in Quillette:
... his argument is discursive, confusing, contradictory, and sometimes misleading. ...

And you needn’t believe in pure physical determinism to reject free will. Much of the physical world, and what we deal with in everyday life, does follow the deterministic laws of classical mechanics, but there’s also true indeterminism in quantum mechanics. Yet even if there were quantum effects affecting our actions — and we have no evidence this is the case — that still doesn’t give us the kind of agency we want for free will. We can’t use our will to move electrons. Physical determinism is better described as “naturalism”: the view that the cosmos is completely governed by natural laws, including probabilistic ones like quantum mechanics.
So how does he lift a finger if he cannot use his will to move electrons?

There could be naturalism as well as free will. Perhaps consciousness and free will are governed by natural laws, just like everything else.

Saying that "the cosmos is completely governed by natural laws, including probabilistic ones" is just nonsense. If your laws are probabilistic, then they are not completely governing what happens. A probability is, by definition, an incomplete and indefinite statement about events.
As the physicist Sean Carroll has pointed out, ditching the laws of physics in the face of mystery is both unparsimonious and unproductive ...

Contracausal free will is the modern equivalent of black plague, magnetism and lightning — enigmatic phenomena that were once thought to defy natural explanation but don’t.
Remember that Carroll believes in the totally unscientific many-worlds interpretation. It would be better to listen to an astrologer on what is science.

I agree with his comparison of free will to magnetism. They seem mysterious only when they are not better understood.

"Contracausal free will" is just a term Coyne likes to make it sound self-contradictory. I say that he has libertarian free will to lift his finger. I would not call it contracausal, because his will causes his finger to rise, via blood, nerves, chemistry, and other natural processes.
For many reasons, belief in free will resembles belief in gods, including an emotional commitment in the face of no evidence, and the claim that subverting belief in either gods or free will endangers society by promoting nihilism and immorality. But a commitment to truth compels us to examine the evidence for our beliefs, and to avoid accepting illusions simply because they’re beneficial.
These atheists act as if they are making a compelling argument when they say "no evidence".

There is plenty of evidence for free will, just as primitive people had plenty of evidence for lightning.

There is also plenty of evidence for benefits to belief in free will. Just look at the difference between Christianity and Islam.

Friday, July 19, 2019

Yang 2020 wants quantum crypto

Presidential candidate Andrew Yang has this policy position:
However, quantum computers, using qubits, will theoretically be able to perform the calculations necessary to break our current encryptions standards in under a day. When that happens, all of our encrypted data will be vulnerable. That means our businesses, communications channels, and banking and national security systems may be accessible. ...

Second, we must heavily invest in quantum computing technology so that we develop our own systems ahead of our geopolitical rivals.
We must invest in the technology that will destroy our communications security infrastructure!

But don't worry, robots will take all our jobs and we can just collect $1k per month free and play video games all day.

It is nice to see a presidential candidate try to anticipate future trends, but I don't see this getting him any votes.

Wednesday, July 17, 2019

Early work on curved cosmological space

A new paper, Historical and Philosophical Aspects of the Einstein World, explains work on cosmological non-Euclidean geometry:
Pioneering work on non-Euclidean geometries in the late 19th century led some theoreticians to consider the possibility of a universe of non-Euclidean geometry. For example, Nikolai Lobachevsky considered the case of a universe of hyperbolic (negative) spatial curvature and noted that the lack of astronomical observations of stellar parallax set a minimum value of 4.5 light-years for the radius of curvature of such a universe (Lobachevsky 2010). On the other hand, Carl Friedrich Zöllner noted that a cosmos of spherical curvature might offer a solution to Olbers' paradox and even suggested that the laws of nature might be derived from the dynamical properties of curved space (Zöllner 1872). In the United States, astronomers such as Simon Newcomb and Charles Sanders Peirce took an interest in the concept of a universe of non-Euclidean geometry (Newcomb 1906; Peirce 1891 pp 174-175), while in Ireland, the astronomer Robert Stawall Ball initiated a program of observations of stellar parallax with the aim of determining the curvature of space (Ball 1881 pp 92-93; Kragh 2012a). An intriguing theoretical study of universes of non-Euclidean geometry was provided in this period by the German astronomer and theoretician Karl Schwarzschild, who calculated that astronomical observations set a lower bound of 60 and 1500 light-years for the radius of a cosmos of spherical and elliptical geometry respectively (Schwarzschild 1900). This model was developed further by the German astronomer Paul Harzer, who considered the distribution of stars and the absorption of starlight in a universe of closed geometry (Harzer 1908 pp 266-267).
It cites a 2012 Helge Kragh paper, Geometry and Astronomy: Pre-Einstein Speculations of Non-Euclidean Space, for more details.
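A rough sketch of the reasoning behind such bounds (the numbers here are illustrative, not Lobachevsky's own): in a hyperbolic space of curvature radius \(R\), the parallax of a star measured across a baseline \(a\) (the radius of the Earth's orbit) cannot fall below a minimum value of roughly

\[ p_{\min} \approx \frac{a}{R} \qquad (a \ll R), \]

no matter how distant the star is. So if stars are observed with parallaxes smaller than some \(\varepsilon\), then \(R \gtrsim a/\varepsilon\). With \(\varepsilon\) of the order of an arc-second and \(a\) equal to one astronomical unit, that already pushes \(R\) up to a few light-years, the scale of the bound quoted above.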

The finiteness of the speed of light was first detected by astronomers, and that turned out to be more-or-less equivalent to spacetime being non-Euclidean. Poincare and Minkowski showed this in their relativity papers.

Efforts to find a large-scale cosmological curvature of space have failed. Gravity can be interpreted as spacetime curvature, but the universe seems flat on a large scale.

General relativity is commonly interpreted as gravity being spacetime curvature, but Einstein did not view it that way. He did not really buy into non-Euclidean geometry explanations as we do today.

Charles S. Peirce wrote in 1891:
The discovery that space has a curvature would be more than a striking one; it would be epoch-making. It would do more than anything to break up the belief in the immutable character of mechanical law, and would thus lead to a conception of the universe in which mechanical law should not be the head and centre of the whole. It would contribute to the improving respect paid to American science, were this made out here. In my mind, this is part of a general theory of the universe, of which I have traced many consequences, - some true and others undiscovered, - and of which many more can be deduced; and with one striking success, I trust there would be little difficulty in getting other deductions tested. It is certain that the theory if true is of great moment.
This seems to be a clear anticipation of the possibility of curved space for cosmology.

Kragh writes:
While the possibility of space being non-Euclidean does not seem to have aroused interest among French astronomers, their colleagues in mathematics did occasionally consider the question, if in an abstract way only. As mentioned, many scientists were of the opinion that the geometry of space could be determined empirically, at least in principle. However, not all agreed, and especially not in France. On the basis of his conventionalist conception of science, Henri Poincaré argued that observations were of no value when it came to a determination of the structure of space. He first published his idea of physical geometry being a matter of convention in a paper of 1891 entitled “Les géométries non-euclidiennes,” and later elaborated it on several occasions.
I do think that this is a misunderstanding of Poincare's conventionalism.

Euclidean geometry is axiomatized mathematics. No observation can have any bearing on the mathematical truth of a theorem of Euclidean geometry. If Euclidean geometry turns out not to match the real world, then there are multiple ways to explain it. One could use Euclidean geometry as the foundation, or some other geometry.

That is surely what Poincare was saying. After all, Poincare was a pioneer in non-Euclidean geometry, and was the first to discover the non-Euclidean structure of spacetime. I have seen philosophers claim that Poincare was somehow opposed to geometrical interpretations of space, but I don't see how that could be true. He was simply distinguishing between mathematical and physical truths. Mathematicians consider the distinction to be very important, but physicists sometimes deny that there is a distinction.

Monday, July 15, 2019

No respect for MWI physicists

Lubos Motl has another explanation of what is wrong with the Many Worlds Interpretation:
The implicit assumption is that MWI can do "at least as well as Copenhagen, everyone can". Except that this statement is completely and totally wrong. ...

The MWI fairy-tales are among the top reasons why I lost my respect for many physicists who have done some nontrivial technical things. But they're just lousy thinkers if they can't figure out the lethal problems with the MWI above – in fact, they seem unable to figure out even 5% of those things. How much smarter the founding fathers of quantum mechanics were. They were able not only to understand them but to discover them in the first place – which is an achievement greater than a mere understanding, by many orders of magnitude.
I used to think that MWI was an interpretation, reproducing Copenhagen predictions. Then one's belief in it is a matter of metaphysical preference.

But it is not. MWI predicts nothing, and has no merits at all.

I agree with Lumo here. There are seemingly-competent physicists who endorse MWI, and I have lost all respect for them. MWI is such complete foolishness that anyone who endorses it should not be taken seriously on any scientific matter.

His arguments are somewhat different from the ones I have given here, but the end result is the same. There is no way to turn MWI into a useful theory. It is just a weirdo unscientific fantasy.

Friday, July 12, 2019

Most physicists deny determinism

Quillette has an essay on determinism:
Albert Einstein disagreed. He believed everything in the universe to be pre-determined, including the result of a coin toss, and the roll of a die. Einstein and his contemporary Niels Bohr engaged in a public scholarly rivalry over their differing interpretations of quantum mechanics. ...

Today, most professional physicists believe that processes at the sub-atomic scale don’t always occur in a definite, linked sequence of cause and effect events. The future cannot be precisely known or determined from the present. Nevertheless, some intellectuals remain loyalists to Einstein’s view. ...

The quarrel over biology comes down to something very simple; determinists hope to obtain the clearest possible picture of what is currently happening and what will happen next. ... Many—if not the majority of—intellectuals do indeed believe that there’s something wrong with this, because they understand the profundity of the philosophical and cultural revolution that has occurred. ...

It shouldn’t come as a surprise that many intellectuals dislike the idea that biology plays a determinative role in human affairs. With a DNA-driven view of the social world, we risk resigning ourselves to fatalism. Our future is no longer written in the stars. Now it’s written in our DNA. ...

Sam Harris has adamantly argued against the existence of free will. ... This view is actually very close to the majority of philosophers and scientists who think about such things. ...
So this is really what most intellectuals think about determinism and free will?

All scientific theories are partially deterministic. The past allows us to make predictions about the future, but never with perfect certainty. Most physicists believe that quantum mechanics precludes predictions with perfect certainty.

It appears that DNA determines a lot more than most people are willing to admit. But it does not determine everything. Identical twins are not identical.

Jerry Coyne promises to write a rebuttal. He is especially perturbed by the idea that people are better off if they believe in free will.

Isn't that obvious? The main people who do not believe in free will are schizophrenics, Moslems, Commies, and philosophers.

Tuesday, July 9, 2019

The Multiverse is a religion

Dr. Bee writes:
But believing in the multiverse is logically equivalent to believing in god, therefore it’s religion, not science.

To see why, let me pull together what I laid out in my previous videos. Scientists say that something exists if it is useful to describe observations. By “useful” I mean it is simpler than just collecting data. You can postulate the existence of things that are not useful to describe observations, such as gods, but this is no longer science.

Universes besides our own are logically equivalent to gods. They are unobservable by assumption, hence they can exist only in a religious sense. You can believe in them if you want to, but they are not part of science. ...

Fourth [common misunderstanding]. But then you are saying that discussing what’s inside a black hole is also not science

That’s equally wrong. Other universes are not science because you cannot observe them. But you can totally observe what’s inside a black hole. You just cannot come back and tell us about it. Besides, no one really thinks that the inside of a black hole will remain inaccessible forever. For these reasons, the situation is entirely different for black holes. If it was correct that the inside of black holes cannot be observed, this would indeed mean that postulating its existence is not scientific.
I agree that the multiverse is not science, for the reasons she gives, but the same is true about the black hole interior, inside the event horizon.

A commenter responds:
That seems like a very weak argument; the equivalent in religion to claiming God is observable because, by their postulates, you will observe Him when you die, and unexplainable near-death experiences prove the plausibility of that.
I agree with that also. Many religious believers say that we can observe God, heaven, angels, etc. after we die, and we just cannot come back to tell anyone.

That argument is similar to the argument that we can observe a black hole interior by falling into it.

I have no idea why Bee says that everyone believes that the black hole interior will be accessible. There is a misconception that LIGO observed black hole interiors when it detected black hole collisions. But that is not true. Its observations are explained entirely from outside the event horizons.

There is also a crazy belief that black holes will leak info as they evaporate via Hawking radiation over the next trillion years. It is similar to the theory that if you send a rocket into the Sun with some paper encyclopedias, all that info will eventually be radiated back out to the solar system. Nobody thinks that is observable, so that is just another religion.

Also, the Hawking radiation takes place entirely in the vicinity of the event horizon, and does not depend on the interior.

Just to be clear, one can observe the mass, charge, angular momentum, and maybe a couple of other external values, but these are all observed based on what is outside the event horizon. If we could predict the interior, we would say that there would be very high energies near the center, and we do not have good physical theories for such energies.

Monday, July 8, 2019

Philip Ball attacks scientific mysteries

Science journalist Philip Ball writes:
Imagine if all our scientific theories and models told us only about averages: if the best weather forecasts could only give you the average ...
In a way, that is correct. When the weather forecaster says that there is an 80% chance of rain, he is giving us a statistical average of how often it rains under current conditions.
In the early days of quantum mechanics, that seemed to be its inevitable limitation: It was a probabilistic theory, telling us only what we will observe on average if we collect records for many events or particles. To Erwin Schrödinger, whose eponymous equation prescribes how quantum objects behave, it was utterly meaningless to think about specific atoms or electrons doing things in real time. ...

But there’s another way to formulate quantum mechanics so that it can speak about single events happening in individual quantum systems. It is called quantum trajectory theory (QTT), and it’s perfectly compatible with the standard formalism of quantum mechanics — it’s really just a more detailed view of quantum behavior. The standard description is recovered over long timescales after the average of many events is computed.

In a direct challenge to Schrödinger’s pessimistic view, “QTT deals precisely with single particles and with events right as they are happening,”
This is probably just an interpretation of quantum mechanics with no real advantages over Copenhagen, but I don't know anything about it.
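For readers wondering what a "quantum trajectory" looks like in practice, here is a minimal sketch (mine, not taken from Ball's article) of the Monte Carlo wave-function, or quantum-jump, recipe that this kind of theory builds on, applied to a driven two-level atom with spontaneous decay. The drive strength and decay rate are made-up illustrative numbers.

import numpy as np

# Minimal quantum-jump (Monte Carlo wave-function) sketch for a driven
# two-level atom with spontaneous decay. All parameters are illustrative.
omega = 1.0       # Rabi frequency of the drive
gamma = 0.2       # spontaneous-emission rate
dt = 0.001        # time step
steps = 20000
rng = np.random.default_rng(0)

# Basis: |0> = ground state, |1> = excited state.
H = 0.5 * omega * np.array([[0, 1], [1, 0]], dtype=complex)    # drive Hamiltonian
L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex) # jump (decay) operator
H_eff = H - 0.5j * (L.conj().T @ L)                            # non-Hermitian effective Hamiltonian

psi = np.array([1, 0], dtype=complex)  # start in the ground state
excited_pop = []

for _ in range(steps):
    # Deterministic evolution under H_eff (first-order Euler step).
    psi = psi - 1j * dt * (H_eff @ psi)
    norm2 = np.vdot(psi, psi).real
    # The lost norm is the probability that a photon was emitted this step.
    if rng.random() > norm2:
        psi = L @ psi                  # a jump: the atom drops to the ground state
    psi = psi / np.sqrt(np.vdot(psi, psi).real)  # renormalize either way
    excited_pop.append(abs(psi[1])**2)

print("time-averaged excited-state population (one trajectory):",
      np.mean(excited_pop))

Each run of the loop is one "trajectory": a definite, jumpy history for a single atom. Averaging many such runs reproduces the usual ensemble predictions, which is why this is a reformulation of quantum mechanics rather than new physics.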

A previous Ball essay argued that no one knows what quantum mechanics means, because it has not been axiomatized, to his knowledge.

Ball writes a lot about physics and quantum mechanics, but he just ventured out into another area of science, with a London Guardian book review:
The concept of ‘race’ persists, even though it is biologically meaningless. This important book considers why

More than 90% of the top 20 performances in middle- and long-distance running are by black people of African heritage. Are they simply biologically better at it? That is precisely the kind of casual assumption that, as science writer Angela Saini shows in Superior, has kept “scientific racism” alive for centuries. In fact, more than half of those performances are by Kenyans, coming mostly from eight small tribes. One theory is that, having lived at high altitude for millennia, they have adapted to make more efficient use of oxygen when running. But studies have found no physiological advantage, and it’s possible that the answer is instead sociological. One thing is sure: having dark skin pigmentation is as irrelevant here as speaking a Kenyan language. The idea of “race” has nothing to contribute to the debate.

If you’re a typical Guardian reader, you might feel fine about, or flattered by, the notion that black people are better runners – it sounds positively antiracist, right? Yet this is the sort of reasoning that feeds racism: that there are meaningful biological distinctions between groups of humans (often on the basis of visible, literally superficial characteristics) that allow them to be categorised into distinct “races”, from which we can meaningfully predict traits.

The idea is so deeply ingrained that it is hard even to talk about race without seeming to accept its tenets.
Really? Race has nothing to do with why Kenyans win all the long-distance races? Science has proved that the Kenyans have no physiological advantage?

I marvel at how he can just jump from one scientific topic to another. He can just casually say that the evidence in front of your eyes is wrong, because of some weirdo ideological belief.
mail-order DNA analysis companies promote a genetic identity politics ...

Sometimes this racial agenda is subconscious. ... “It takes some mental acrobatics to be an intellectual racist in the light of the scientific information we have today,” says Saini, “but those who want to do it, will.” ...

The problem with scientists, Saini says, is that they too often assume they are above racism and so fail to engage with the history, politics and lived experience of race.

... United States has a racist president ...
Here is an unfavorable review of the same book. Saini wrote a previous book denying sex differences.

If there is no such thing as race, then how can anyone be a racist?

How can I make sense out of Democrat candidates who talk about race-based policies all the time?

Saying there is no such thing as race is just wishful thinking.

Ball and the book reject all the DNA evidence, and assume that scientists are corrupted by prejudice in everything they do on this subject.

If so, is the same true about quantum mechanics? It does indeed take a lot of mental acrobatics to believe in many-worlds or a lot of other variations on textbook quantum mechanics.

Wednesday, July 3, 2019

Aaronson's provability is hard to grasp

I mocked Scott Aaronson's quantum computer random number generator, and he replies:
3. The entire point of my recent work, on certified randomness generation (see for example here or here), is that sampling random bits with a NISQ-era device could have a practical application. That application is … I hope you’re sitting down for this … sampling random bits! And then, more importantly and nontrivially, proving to a faraway skeptic that the bits really were randomly generated. ...

As I explicitly said in the post, the whole point of my scheme is to prove to a faraway skeptic — one who doesn’t trust your hardware — that the bits you generated are really random. If you don’t have that requirement, then generating random bits is obviously trivial with existing technology. If you do have the requirement, on the other hand, then you’ll have to do something interesting — and as far as I know, as long as it’s rooted in physics, it will either involve Bell inequality violation or quantum computation.

The weird thing is, I’ve given hour-long talks where I’ve hammered home the above idea at least 20 times (“the entire point of this scheme is to prove the bits are random to a faraway skeptic…”), and then gotten questions afterward that showed that people completely missed it anyway (“why not just use my local hardware RNG? isn’t that random enough?”). Is there something about the requirement of provability that’s particularly hard to grasp??
It is funny to see Scott complain about being misunderstood. It is a common theme on his blog. His motto on the top of his blog is just to clarify one of those misunderstandings.

And whenever he expresses a political opinion, he always has to issue a number of clarifications.

To answer his question, mathematical provability is not hard to grasp, but randomness is impossible to prove. At best he is proving something relative to the random oracle model, some assumptions about the hardware, some assumptions about entanglement, etc.
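To illustrate the Bell-inequality route he mentions (this is not Aaronson's sampling-based scheme, just a sketch of the older device-independent idea): the skeptic takes the recorded settings and the ±1 outcomes from two separated devices and computes the CHSH score. Any devices whose outputs were fixed in advance by a local deterministic program are stuck at a score of at most 2, so a score reliably above 2 means the outputs could not have been pre-programmed -- given the assumptions of no signaling between the devices and independently chosen settings.

import numpy as np

# Sketch of the bookkeeping in Bell-test randomness certification.
# x, y are the measurement settings (0 or 1) chosen for the two devices on
# each trial; a, b are the recorded outcomes (+1 or -1). All hypothetical data.
def chsh_score(x, y, a, b):
    x, y, a, b = map(np.asarray, (x, y, a, b))
    S = 0.0
    for sx in (0, 1):
        for sy in (0, 1):
            trials = (x == sx) & (y == sy)
            E = np.mean(a[trials] * b[trials])   # correlation at this setting pair
            S += -E if (sx == 1 and sy == 1) else E
    return S

# Any local deterministic strategy (pre-programmed outputs) gives |S| <= 2.
# Ideal quantum devices measuring a singlet reach about 2.83, and a
# statistically solid score above 2 is what "certifies" fresh randomness.

Whether you call that a proof of randomness depends entirely on how much you trust those auxiliary assumptions, which is the point.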

I am sure his audience keeps waiting for him to claim something useful. Instead he only claims that the useful application is to refute the QC skeptics!

He really hates to admit that QC might just be a house of cards, so he has to pretend that it is all the fault of the QC skeptics that it is a house of cards.

This is a bit like saying that Enron and Theranos were completely valid companies because no one had proved that they were frauds. Until they did.

Sorry, but we QC skeptics are not going to be refuted by some hokey random number generator that is a trillion times worse than the ones in common use.

Tuesday, July 2, 2019

Dopey philosopher attacks Feynman

Massimo Pigliucci is a philosophy professor specializing in science and pseudo-science. He writes a lot. I used to follow his blogs and sometimes post comments there, but he deleted too many of my comments, so I gave up.

He is knowledgeable about biology and evolution, but he writes nonsense on the subject of the hard sciences. I have criticized him on this blog, such as here.

In particular, he has this funny idea that philosophers know more than physicists, and hates it when physicists ignore philosophers.

So I agree with Lubos Motl trashing Pigliucci's attack on Feynman. I won't repeat Lumo's points, but I address this:
To begin with, the history of physics (alas, seldom studied by physicists) clearly shows that many simple theories have had to be abandoned in favour of more complex and ‘ugly’ ones. The notion that the Universe is in a steady state is simpler than one requiring an ongoing expansion; and yet scientists do now think that the Universe has been expanding for almost 14 billion years. In the 17th century Johannes Kepler realised that Copernicus’ theory was too beautiful to be true, since, as it turns out, planets don’t go around the Sun in perfect (according to human aesthetics!) circles, but rather following somewhat uglier ellipses.
Pigliucci is wrong at every level.

The steady state model of the universe is not simpler. It was plagued with paradoxes, such as Olbers' paradox and the problem of why gravity does not collapse the galaxies. The expanding universe is the simplest solution to those puzzles.

The Copernican theory was not simpler or more beautiful than Kepler's. Pigliucci probably does not realize that Copernicus used epicycles. Ellipses are more beautiful than Copernicus's constructions.

The real problem with modern philosophers of science is not just that they are ignorant, arrogant, and irrelevant. The problem is that the field is dominated by philosophers who are anti-science. They deny the scientific method, and much of what scientists believe. They have become enemies of science.

Lumo writes:
Mr Pigliucci mentions some times when physicists such as Einstein respected philosophers. ... Instead, those philosophers – especially the positivists – actually found some new ways of thinking that were used in the relativistic and quantum revolutions in 20th century physics. ... But nothing like that has taken place for quite some time – approximately for one century. And even Mach and the positivists were probably just lucky – even a broken calendar that shows the last two digits is correct once a century.
That is right. When Einstein was senile, there were philosophers of science who said sensible things. Maybe they were just lucky, I don't know. But that was almost a century ago. Philosophers of science have not said anything worthwhile in decades, and most of what they say is counter to science.

Update: Pigliucci ends with:
But philosophy has made much progress since Plato, and so has science. It is therefore a good idea for scientists and philosophers alike to check with each other before uttering notions that might be hard to defend, especially when it comes to figures who are influential with the public. To quote another philosopher, Ludwig Wittgenstein, in a different context: ‘Whereof one cannot speak, thereof one must be silent.’
There are many examples of prominent physicists who educated themselves on philosophy before speaking about it. Feynman was one. A more recent example is Steven Weinberg, who wrote essays on the failure of modern philosophy to address what modern physics is all about.

We do not see philosophers similarly educated about modern physics. Instead, the dominant views among them are anti-science.

Monday, July 1, 2019

Special relativity more important than general relativity

Einstein historian Tilman Sauer writes:
The completion of the general theory of relativity in late 1915 is considered Einstein’s greatest and most lasting achievement.
Some say that, but my guess is that more say that the 1905 special relativity theory was the greatest.

Special relativity profoundly changes all of modern physics. No one can be a physicist today without knowing special relativity. The most basic variables of physics are space and time coordinates, and special relativity told us how they were related. Suddenly things made sense that never made sense before. It is hard to imagine a more essential shift in thinking.

On the other hand, general relativity has not been very consequential. Without it, XX century physics would not have been much different. Contrary to what you may read elsewhere, it had no effect on the GPS satellite system, as GPS only uses what was previously known about special relativity. It solved some theoretical puzzles, but it would have been eventually seen as the logical result of combining gravity with special relativity, even if Einstein never worked on it.

Constructing a relativistic gravity theory was mainly a mathematics problem, and it is well known that Einstein relied on mathematicians for most of the crucial ideas. The main equation is that the Ricci tensor is zero (in the vacuum case), which is not too surprising once you figure out the relevance of the Ricci tensor. General relativity is a nice theory, but it is just not that physically important.

Among those who see special relativity as the big breakthrough, there is also a wide divergence of opinion on how to credit Einstein. Some say that the whole theory begins and ends with Einstein's 1905 paper, as an individual stroke of genius. But everything in that paper was done much better in papers by others published earlier.

Sauer's paper discusses Einstein's notes, but they are mostly worthless nonsense.

Sunday, June 30, 2019

Xanadu quantum computing startup gets $32M

A reader sends this blog post:
In a previous blog post, I commented about Xanadu AI:

Xanadu.ai, a quantum computing Theranos, will never make a profit

The latest news about the Xanadu Ponzi scheme is that they just got an extra CAN$32 Million (This raises their total funding so far to CAN$41M, according to TechCrunch). This company has practically zero chance of succeeding. Their quantum computing technology, using squeezed light, is **far inferior** to the ones (ion trap, squids, optical, anyons, quantum dots) being pursued by a crowded field of well funded startups (Rigetti, IonQ, PsiQ, and many others) and giant monopolies (IBM, Google, Microsoft, Intel, Alibaba, Huawei, …).

The latest 32 million was reported in the Globe and Mail:

Toronto startup Xanadu raises $32-million to help build ‘world’s most powerful computer’

Excerpts in boldface:

But Xanadu needs to take “three giant steps,” before it can fully commercialize its technology, said Massachusetts Institute of Technology mechanical engineering and physics professor Seth Lloyd, a leading expert in quantum computing who advises the startup:

“They need to improve the squeezing by a significant amount and show they can get many pulses of this squeezed light into their device, and then control these pulses … [then] show you can actually do something that’s useful to people with the device. Given what they’re trying to do, they’re on schedule. Any one of those things could fail, which is the nature of science and life and being a startup.”
I don't know anything about this company, but they all seem like scams to me. As long as there is investor money for these projects, they will continue, no matter how many giant steps are needed for commercialization.

Thursday, June 27, 2019

The quantum entanglement paradox

The Wikipedia article on quantum entanglement explains the "spooky" paradox:
The paradox is that a measurement made on either of the particles apparently collapses the state of the entire entangled system — and does so instantaneously, before any information about the measurement result could have been communicated to the other particle (assuming that information cannot travel faster than light) and hence assured the "proper" outcome of the measurement of the other part of the entangled pair. In the Copenhagen interpretation, the result of a spin measurement on one of the particles is a collapse into a state ...

(In fact similar paradoxes can arise even without entanglement: the position of a single particle is spread out over space, and two widely separated detectors attempting to detect the particle in two different places must instantaneously attain appropriate correlation, so that they do not both detect the particle.)
And similar paradoxes arise in classical mechanics. If you are trying to predict whether an asteroid will strike the Earth using Newtonian mechanics, then you will represent the asteroid's state as an ever-expanding ellipsoid of probable locations. If you later observe the asteroid in one part of the ellipsoid, then you immediately know that it is not in any other part, and you instantaneously collapse your ellipsoid.

If the quantum paradox disturbs you as being spooky or nonlocal, then the asteroid paradox should disturb you the same way.
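
To make that concrete, here is a toy Monte Carlo sketch of the classical case. The Gaussian cloud, the cutoff at x > 2, and all of the numbers are made up for illustration; the only point is that conditioning on an observation instantly discards distant possibilities without transmitting anything.

    import numpy as np

    rng = np.random.default_rng(1)

    # Classical "ellipsoid of uncertainty": the best Newtonian prediction of the
    # asteroid's position is a broad cloud of possible locations (2-D Gaussian here).
    cloud = rng.multivariate_normal(mean=[0.0, 0.0],
                                    cov=[[4.0, 0.0], [0.0, 1.0]],
                                    size=100_000)

    # A telescope then spots the asteroid somewhere in the region x > 2.
    # Conditioning on that observation instantly discards every point with x <= 2,
    # however far away those points are -- nothing physical is transmitted.
    collapsed = cloud[cloud[:, 0] > 2.0]

    print("points before observation:", len(cloud))
    print("points after observation: ", len(collapsed))
    print("mean position shifts from", cloud.mean(axis=0).round(2),
          "to", collapsed.mean(axis=0).round(2))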

To be fair, the quantum situation is more subtle because (1) measuring a particle's position or spin necessarily disturbs it; and (2) when two particles are entangled we only know how to represent the state of the combination.

These subtleties may or may not disturb you, but they are really separate issues. The spooky action-at-a-distance paradox is fully present in the classical asteroid problem.

Books and articles on quantum mechanics seem to try to confuse more than enlighten. So to describe entanglement, they will throw in complications like the fact that an electron is not just a particle, but has wave properties.

An example of unnecessary complications is the Delayed-choice quantum eraser experiment. But this is just to confuse you, as you can see from this recent paper:
The "Delayed Choice Quantum Eraser" Neither Erases Nor Delays

It is demonstrated that ‘quantum eraser’ (QE) experiments do not erase any information. Nor do they demonstrate retrocausation or ‘temporal nonlocality’ in their ‘delayed choice’ form, beyond standard EPR correlations. It is shown that the erroneous erasure claims arise from assuming that the improper mixed state of the signal photon physically prefers either the ‘which way’ or ‘both ways’ basis, when no such preference is warranted. The latter point is illustrated through comparison of the QE spatial state space with the spin-1/2 space of particles in the EPR-spin experiment.
In other words, nothing to see here.

Quantum mechanics is still funny because electrons and photons have both wave and particle properties. You can think of them as particles, as long as you respect the Heisenberg uncertainty principle and do not try to assign simultaneous values to noncommuting observables. And the wave function of an electron is not really the same as the electron itself.

But many of the quantum paradoxes trick you by combining different things to obscure the quantum principle involved.

By the way, it appears that the public wants NASA to focus more on those asteroid ellipsoids. Nobody really wants to go to Mars. Inhabiting Mars is just a science fiction fantasy.

Tuesday, June 25, 2019

PBS TV hypes quantum technologies

PBS TV had a news segment on the billions of dollars being spent on the emerging quantum technology of quantum info and computing, but we are falling behind the Chinese. We need to spend many billions more on a new generation of quantum engineers, or we will fall ten years behind. Already the Chinese have sent an ambiguous photon to a satellite!

There are those who argue that quantum-related technologies are already 30% of our economy, but that is not what PBS is talking about. Computer chips, disc drives, lasers, displays, etc. are all developed with quantum mechanics, so yes, quantum mechanics is essential to our economy. But PBS means the quantum spookiness of electrons being in two places at the same time to achieve a parallel computing breakthrough, or some such nonsense.

Sunday, June 23, 2019

Comparing Bohm to a medieval astrologer

Some scientists like to think that they are fighting the good fight of finding truth against modern inquisitors. It is usually nonsense. I don't know of any examples of good science being suppressed.

A Nautilus essay:
The Spirit of the Inquisition Lives in Science
What a 16th-century scientist can tell us about the fate of a physicist like David Bohm. ...

I’ve been schooled in quantum physics and trained to think rationally, dissecting facts and ideas dispassionately. And here I am constantly carrying on imaginary conversations with a 16th-century astrologer. ...

Those communist associations, coupled with the national security implications of his Ph.D. work, made him a target for Senator Joe McCarthy’s crusade against un-American activities.

Bohm refused to answer questions, and refused to name anyone that the McCarthyists should investigate. He was arrested. By the time he was acquitted, he had been suspended from Princeton. In 1951, unemployable in the United States, Bohm took a job in Brazil. ...

Bohm’s idea of an invisible, undetectable pilot wave was roundly criticized, but a man who had survived the McCarthy witch hunts was not easily put off. Having overcome the most heinous character assassination of the era, he could take a little heat. ...

But there are two problems. The first is that, in order to get the predictions right about the interference effect and the ultimate distribution of the photons at the detector, you have to work backward from the final result.

The second problem is that Bohm’s pilot wave is odd—in a way that physicists call “nonlocal.” This means that the properties and future state of our photon are not determined solely by the conditions and actions in its immediate vicinity. ...

The de Broglie-Bohm interpretation of quantum physics, as it is now known, is not popular. Only one venerated physicist has ever really championed it: John Bell, the Irishman who came up with the first definitive test for the existence of entanglement. ...

In the end, we can be reasonably confident that none of our current interpretations of quantum theory are right.
This is ridiculous. Bohm was a Communist who refused to testify about his Communist activities. That is not the behavior of a truth-seeker, or of a loyal American.

His physics work was junk. It did inspire Bell, and Bell inspired a generation of crackpots, but his interpretation of quantum mechanics is very much inferior to Copenhagen.

Say you want to understand light going thru a double-slit and creating an interference pattern. If you think of light as a wave, then all waves cause interference patterns, and it is obvious. Or you can think of light as photons with wave-like properties. Some like to think of light as photons, but as funny particles that can be in two places at once and go thru both slits.
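
To see how little is needed, here is a small numerical sketch of the wave picture: superposing the contributions from the two slits already gives the fringes. The slit separation, wavelength, and screen distance are made-up illustrative values.

    import numpy as np

    # Two-slit fringes from plain wave superposition -- no quantum input needed.
    # The slit separation d, wavelength wl, and screen distance L are made-up numbers.
    d, wl, L = 1e-4, 5e-7, 1.0                        # metres
    x = np.linspace(-0.01, 0.01, 9)                   # positions on the screen (metres)
    path_diff = d * x / L                             # small-angle path difference
    intensity = np.cos(np.pi * path_diff / wl) ** 2   # standard two-slit fringe pattern
    for xi, fringe in zip(x, intensity):
        print(f"x = {xi:+.4f} m   relative intensity = {fringe:.2f}")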

Bohm says the photons really are particles, and a photon only goes thru one slit. That is supposed to make his interpretation intuitive. But the photon is guided by some distant ghostly pilot wave that tells it what to do. You could trap the photon in a box, but you cannot predict the photon's behavior based on what is in the box, because it is controlled by a pilot wave that is not even in the box.

Why would anyone prefer that explanation?

It is somewhat interesting to see what can be done with a goofy interpretation of quantum mechanics, but that's all. As far as I know, Bohm's interpretation only works in contrived test cases, and it is not useful for anything.

After writing this, I see that Lubos Motl posted a rant against the same article:
Meanwhile, Brooks is fighting for "progress". Which means to ban the uncertainty principle and return us to classical physics where the position and momentum existed simultaneously and without any measurement. And his progress involves the talking to the ultimate authorities – the ghosts of astrologers. ...

And it's completely wrong to arbitrarily identify 20th century scientists such as Niels Bohr with the Inquisition etc. Bohr has had clearly nothing to do whatsoever with the Inquisition. Also, it's absolutely wrong to defend scientific theories according to the subjective likability of their political views. Whether David Bohm was a commie – while Pascual Jordan was a staunch believer in the ideas of NSDAP – is completely irrelevant for the evaluation of the validity of their scientific propositions. I would have trouble with the political views of both of these men but their physics must clearly be considered as an entity that is independent from the men's politics.
I don't know whether there is a relation between being a Commie and pushing a spooky action-at-a-distance theory, or believing that Bohr led an inquisition against Einstein.

Wanting to give a precise position for the electron at all times is an attempt to ban the uncertainty principle, and deny the basic wave theory of matter that was established a century ago.

Friday, June 21, 2019

Quantum supremacy will just be a random number generator

Now that Google is once again bragging that it is about to unveil a Nobel-Prize-worthy quantum computer, guess what its first application will be? Completely worthless, of course.

Quanta mag reports:
But now that Google’s quantum processor is rumored to be close to reaching this goal, imminent quantum supremacy may turn out to have an important application after all: generating pure randomness.

Randomness is crucial for almost everything ...

Genuine, verifiable randomness ... is extremely hard to come by.

That could change once quantum computers demonstrate their superiority. Those first tasks, initially intended to simply show off the technology’s prowess, could also produce true, certified randomness. “We are really excited about it,” said John Martinis, a physicist at the University of California, Santa Barbara, who heads Google’s quantum computing efforts. “We are hoping that this is the first application of a quantum computer.”
This is just crazy. There are 50 cent chips that generate quantum randomness. Why would anyone use a $100M quantum computer?
Measuring the qubits is akin to reaching blindfolded into the box and randomly sampling one string from the distribution.

How does this get us to random numbers? Crucially, the 50-bit string sampled by the quantum computer will have a lot of entropy, a measure of disorder or unpredictability, and hence randomness. “This might actually be kind of a big deal,” said Scott Aaronson, a computer scientist at the University of Texas, Austin, who came up with the new protocol. “Not because it’s the most important application of quantum computers — I think it’s far from that — rather, because it looks like probably the first application of quantum computers that will be technologically feasible to implement.”

Aaronson’s protocol to generate randomness is fairly straightforward. A classical computer first gathers a few bits of randomness from some trusted source and uses this “seed randomness” to generate the description of a quantum circuit. The random bits determine the types of quantum gates ...

Aaronson, for now, is waiting for Google’s system. “Whether the thing they are going to roll out is going to be actually good enough to achieve quantum supremacy is a big question,” he said.

If it is, then verifiable quantum randomness from a single quantum device is around the corner. “We think it’s useful and a potential market, and that’s something we want to think about offering to people,” Martinis said.
I am beginning to think that these guys are trolling us.

If you want some random numbers, just close your eyes and type some junk into your keyboard. Then get the code for the SHA-1 hash function, and apply it repeatedly. Free implementations have been available for 25 years.

SHA-1's collision resistance has been broken, but collisions are not what matters here, and its other properties have only theoretical weaknesses. If that bothers you, then you can switch to SHA-512, which will be fine for the foreseeable future.
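
To be concrete, here is a minimal sketch of the hash-chain generator I am describing, using Python's standard hashlib. The seed string is just a stand-in for the keyboard junk, and a careful implementation would use a vetted construction such as HMAC-DRBG rather than bare iteration, but it conveys the idea.

    import hashlib

    def hash_chain(seed: bytes, blocks: int):
        """Yield pseudorandom 64-byte blocks by repeatedly applying SHA-512."""
        state = hashlib.sha512(seed).digest()
        for _ in range(blocks):
            state = hashlib.sha512(state).digest()
            yield state

    seed = b"fjkd2398!!asdlkjqpw"   # stand-in for the junk typed with eyes closed
    for block in hash_chain(seed, 3):
        print(block.hex())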

With all the hype for quantum computing, they admit that (1) quantum supremacy has not been achieved, and (2) even when it is, its most useful application might be random numbers.

Update: Peter Shor weighs in with his views:
Horgan: The National Academy of Sciences reports that “it is still too early to be able to predict the time horizon for a practical quantum computer,” and IEEE Spectrum claims we won’t see “useful” quantum computers “in the foreseeable future.” Are these critiques fair? Will quantum computers ever live up to their hype?

Shor: The NAS report was prepared by a committee of experts who spent a great deal of time thinking about possible different ways of achieving quantum computation, the various roadblocks for these methods, and about the difficulties of making a working quantum computer, and I think they give a fair appraisal of the difficulties of the task. 

The IEEE Spectrum article gives a much more pessimistic assessment. It was written by one physicist who has had a negative view of quantum computers since the very beginning of the field. Briefly, he believes that making quantum computers fault-tolerant is a much more difficult task than it is generally believed to be. (Let me note that it is generally believed to be extremely difficult, but that it is theoretically possible if you have accurate enough quantum gates and a large but not impossible amount of overhead.) I don't believe that his arguments are justified.

Horgan: Scott Aaronson said on this blog that “ideas from quantum information and computation will be helpful and possibly even essential for continued progress on the conceptual puzzles of quantum gravity.” Do you agree?

Shor: I want to partially agree.
He also has other opinions about theoretical physics that I may address separately.

Wednesday, June 19, 2019

Doubly exponential growth in quantum computing

Quanta mag reports:
That rapid improvement has led to what’s being called “Neven’s law,” a new kind of rule to describe how quickly quantum computers are gaining on classical ones. The rule began as an in-house observation before Neven mentioned it in May at the Google Quantum Spring Symposium. There, he said that quantum computers are gaining computational power relative to classical ones at a “doubly exponential” rate — a staggeringly fast clip. ...

Doubly exponential growth is so singular that it’s hard to find examples of it in the real world. The rate of progress in quantum computing may be the first.

The doubly exponential rate at which, according to Neven, quantum computers are gaining on classical ones is a result of two exponential factors combined with each other. The first is that quantum computers have an intrinsic exponential advantage over classical ones ... The second exponential factor comes from the rapid improvement of quantum processors.
Something this remarkable would surely be documented by a peer-reviewed scientific paper, right? I see no mention of one.
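
For what it is worth, here is the toy arithmetic behind the "doubly exponential" claim, granting the two assumptions in the quoted explanation; the numbers are purely illustrative.

    # Toy arithmetic behind "doubly exponential" growth, under the two assumptions
    # in the quoted explanation: (1) qubit counts grow exponentially with time, and
    # (2) the classical cost of simulating n qubits grows like 2**n.
    for t in range(6):
        n_qubits = 2 ** t                  # assumption (1): illustrative numbers only
        classical_cost = 2 ** n_qubits     # assumption (2)
        print(f"t={t}: qubits={n_qubits}, classical cost ~ {classical_cost}")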

Our usual expert weighs in:
“I think the undeniable reality of this progress puts the ball firmly in the court of those who believe scalable quantum computing can’t work,” wrote Scott Aaronson, a computer scientist at the University of Texas, Austin, in an email. “They’re the ones who need to articulate where and why the progress will stop.”
The answer comes next: the supposed progress may be all an illusion unless quantum supremacy is reached.
Google has been particularly vocal about its pursuit of this milestone, known as “quantum supremacy.”

So far, quantum supremacy has proved elusive — sometimes seemingly around the corner, but never yet at hand. But if Neven’s law holds, it can’t be far away. Neven wouldn’t say exactly when he anticipates the Google team will achieve quantum supremacy, but he allowed that it could happen soon.

“We often say we think we will achieve it in 2019,” Neven said. “The writing is on the wall.”
Neven works for Google. They often said 2016, then 2017, then 2018, and now 2019.

For how many years can we have doubly exponential progress, and still no convincing demonstration that scalable quantum computing can work? Not many. We should have a verdict soon.

Sunday, June 16, 2019

Bell only proved a null result

Peter Woit has closed comments on Bell locality without responding to a lot of sharp criticism, so I am not sure where he stands.

Marty Tysanner wrote:
While I may understand their rationale, it still dismays me that so many people have thrown locality “under the bus” (so to speak). Locality isn’t like the 19th century ether that was invented to explain certain physical phenomena; it’s at the core of relativity, electromagnetism, classical mechanics, and even QM evolution up to the time of measurement. I can’t think of anywhere else in all of fundamental physics – outside QM entanglement – where locality is in question. Moreover, there are obvious conceptual issues with non-local influences/communication between entangled particles at space-like separation: How do the particles “find” each other within the entire universe, so they can “communicate” (other than tautalogically or some other version of “it just happens; don’t ask how”); and why don’t we see nonlocality in other contexts if it’s a truly fundamental aspect of nature? ...

Given the centrality of locality in physics, I think we should “fight to the death” to preserve it in a fundamental theory.
That part of Tysanner's comment is correct. Locality is at the core of nearly all of physics, including quantum field theory (QFT). If it were false, we would expect to see some convincing evidence, and then do some radical rethinking of our theories.

We do see the Bell test experiments, but they only negate the combination of locality and counterfactual definiteness. You can preserve locality by rejecting counterfactual definiteness. That is what mainstream physicists have done for 90 years.

Tim Maudlin disagrees with Woit and writes:
You do experiments in two (or three: GHZ) labs. You get results of those experiments. Those results display correlations that no local theory (in the precise sense defined by Bell) can predict. Ergo, no local theory can be the correct theory of the actual universe. I.e. actual physics is not local (in Bell’s sense).
Bell defined a local theory as a classical theory of local hidden variables. In later papers, he called them beables, and gave some philosophical arguments for believing in them, but the definition of locality is the same.

Quantum mechanics uses non-commuting observables. By the uncertainty principle, they cannot have simultaneous values. That remains true if you call them beables instead of observables. If you create a model in which they do have simultaneous but possibly unknown values, then you get predictions that contradict experiment. That is what Bell and his followers have shown.
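
As a concrete check, here is a toy CHSH calculation. The particular local hidden variable model and detector angles are only illustrative; any such local model obeys |S| <= 2 (the sample estimate here lands at about 2, the bound itself), while the quantum prediction reaches 2*sqrt(2), about 2.83.

    import numpy as np

    rng = np.random.default_rng(0)

    # Standard CHSH detector angles (radians) -- illustrative choices.
    a1, a2 = 0.0, np.pi / 2
    b1, b2 = np.pi / 4, 3 * np.pi / 4

    def correlation_lhv(t1, t2, n=200_000):
        """Correlation E(t1, t2) for a simple local hidden variable model: each pair
        carries a random direction lam, and each side computes its +/-1 outcome from
        lam and its own detector setting only (Bell's locality assumption)."""
        lam = rng.uniform(0, 2 * np.pi, n)      # shared hidden variable
        out_a = np.sign(np.cos(lam - t1))       # depends only on the local setting
        out_b = -np.sign(np.cos(lam - t2))
        return np.mean(out_a * out_b)

    def correlation_qm(t1, t2):
        """Quantum prediction for spin-singlet correlations."""
        return -np.cos(t1 - t2)

    def chsh(E):
        return abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

    print("local hidden variable model: S =", round(chsh(correlation_lhv), 3))  # about 2.0
    print("quantum mechanics:           S =", round(chsh(correlation_qm), 3))   # about 2.828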

The only conclusion is the null result: No reason to reject quantum mechanics. Any other conclusion is an error.

Update: Lubos Motl completely agrees with me about Bell. Bell proved a nice little theorem in 1964 that supports locally causal quantum mechanics. Then he wrote some later papers that did nothing but confuse people who refuse to accept quantum mechanics:
In this later paper, Bell coined the new terms "local beables" and "local causality" which have turned his writing into complete mess combining outright wrong statements with totally illogical definitions. He also tried to retroactively rewrite what he did in 1964. While in 1964, as I said, he was rather clear that he made two main assumptions about the theory, namely that it is local and it is classical (although he wasn't a sufficiently clearly thinking physicist to actually use the word "classical"), in the 1976 paper, he already tried to claim that he had only made one assumption, "local causality". ...

There only exists one physically meaningful notion of locality or local causality – and it's what holds whenever the special theory of relativity (or its Lorentz invariance) is correct. The probabilities of measurements done purely in region A are determined by the prehistory of A – and don't depend on further data in the region B although it may contain objects previously in contact with the objects in A. There may exist correlations between measurements in A and B but all of those are calculable from the conditions in the past when A,B were a part of a single system AB. What's important is that people's decisions (e.g. what to measure), nuclear explosions, and other random events in region B don't cause changes to the system A. ...

The claim by Bell, Eric Cavalcanti, and this whole stupid cult that quantum mechanical theories cannot be "Bell locally causal" is wrong ...

People who talk about "beables" in quantum mechanics are doing an equally silly mistake as a molecular geneticist who builds his science on the seven days of creation.
This is correct. The term "beables" is just a goofy term for local hidden variables in a classical theory. Talking about them is just expressing a religious objection to quantum mechanics.

Saturday, June 15, 2019

Woit dives into Bell non-locality

Peter Woit writes:
what’s all this nonsense about Bell’s theorem and supposed non-locality?

If I go to the Scholarpedia entry for Bell’s theorem, I’m told that:
Bell’s theorem asserts that if certain predictions of quantum theory are correct then our world is non-local.
but I don’t see this at all. As far as I can tell, for all the experiments that come up in discussions of Bell’s theorem, if you do a local measurement you get a local result, and only if you do a non-local measurement can you get a non-local result. Yes, Bell’s theorem tells you that if you try and replace the extremely simple quantum mechanical description of a spin 1/2 degree of freedom by a vastly more complicated and ugly description, it’s going to have to be non-local. But why would you want to do that anyway?
He is right, and he cites a Gell-Mann video in agreement, but most of the commenters disagree.

In short, Bell proved a nice theorem in about 1965 showing the difference between quantum mechanics and classical theories. He showed that assuming local hidden variables leads to predictions that differ from those of quantum mechanics.

In later years, Bell convinced many people that the assumption of hidden variables was not necessary. They are wrong, and most respectable physicists, textbooks, and Wikipedia articles say so. Some say that he would have deserved a Nobel Prize if he had been right, but he got no such prize.

Lee Smolin responds:
I have made what I hope is a strong case for taking seriously what we might call the “John Bell point of view” in my recent book Einstein’s Unfinished Revolution, and find myself mostly in agreement with Eric Dennis and Marko. I would very briefly underline a few points:

-This is not a debate resting on confusions of words. Nor is there, to my knowledge, confusion among experts about the direct implications of the experimental results that test the Bell inequalities. The main non-trivial assumption leading to those inequalities is a statement that is usually labeled “Bell-locality”. Roughly this says (given the usual set up) that the “choice of device setting at A cannot influence the same-time output at B”.

Nothing from either quantum mechanics nor classical mechanics is assumed. The experiments test “Bell-locality” and the experimental results are (after careful examination of loop-holes etc.) that the inequality is cleanly and convincingly violated in nature. Therefor “Bell-locality” is false in nature.

-The conclusion that “Bell-locality” is false in nature is an objective fact. It does not depend on what view you may hold on the ultimate correctness, completeness or incompleteness of QM. Bohmians, Copenhagenists,Everetians etc all come to the same conclusion.
This is so wrong that I am tempted to cite Lubos Motl's opinion of Smolin.

There is no experiment where the choice of device setting at A influences the same-time output at B. The Bell experiments only show that the choice of device setting at A influences how we go about making predictions at B, but it never affects the output at B.

There is a big difference. One is spooky action-at-a-distance, and one is not.

It is amazing that Smolin could write a book on this subject, and get this simple point wrong.

I added this comment, but it was deleted:
As Wikipedia says: Cornell solid-state physicist David Mermin has described the appraisals of the importance of Bell's theorem in the physics community as ranging from "indifference" to "wild extravagance".

Bell assumed local hidden variables. If you believe that the theory of everything should be based on hidden variables, then Bell's theorem is a big deal. If you believe in QM as a perfectly good non-hidden-variable theory, then Bell's theorem has no importance.

As Peter says, "systems don’t have simultaneous well-defined values for non-commuting observables." That is another way of saying that Bell's assumption of hidden variables is violated.
Motl posted his explanation:
You would need non-locality in a classical theory to fake the quantum results. But that doesn't mean that our Universe is non-local because your classical theory – and any classical theory – is just wrong. In particular, the correct theory – quantum mechanics – says that the wave function is not an actual observable. It means that its collapse isn't an "objective phenomenon" that could cause some non-local influences. Instead, it may always be interpreted as the observer's learning of some new information.
One comment cited Travis Norsen to support Bell. I explained his errors in this 2015 post.

Peter Shor comments:
First: quantum mechanics doesn’t just break classical probability theory (as Bell demonstrated); it breaks classical computational complexity theory and classical information theory as well. This is why there are a number of computer scientists who are convinced that quantum computers can’t possibly work.
Those computer scientists might be right.

The Bell test experiments were done by physicists with a belief that they would disprove quantum mechanics, and overthrow 50 years of conventional wisdom. All they did was to confirm what the textbooks said.

For most of the XX century, there was no firm opinion that quantum mechanics breaks classical computational complexity theory or classical information theory. That opinion developed in the last 30 years, just as the experiments were being done. While experiments have confirmed aspects of quantum mechanics that Bell doubted, the jury is still out on quantum computing.