Thursday, February 14, 2019

An orbit is a perpetual fall, said in 1614

Christopher M. Graney writes:
In 1614 Johann Georg Locher, a student of the Jesuit astronomer Christoph Scheiner, proposed a physical mechanism to explain how the Earth could orbit the sun. An orbit, Locher said, is a perpetual fall. He proposed this despite the fact that he rejected the Copernican system, citing problems with falling bodies and the sizes of stars under that system. In 1651 and again in 1680, Jesuit writers Giovanni Battista Riccioli and Athanasius Kircher, respectively, considered and rejected outright Locher's idea of an orbit as a perpetual fall.
This is interesting because it is widely assumed that medieval geocentrists suffered from too much religion or a lack of imagination or a refusal to consider scientific arguments.

In fact, someone had a model of Earth's orbit that was conceptually similar to Newton's. Earth is in free fall towards the Sun.

Like Tycho Brahe, Locher accepted that the other planets revolved around the Sun. He denied that the Earth moved, partly because no one had observed the Coriolis force, among other reasons.

The Coriolis force was demonstrated a couple of centuries later.

Occasionally someone says Copernicus or Galileo created modern science, as the previous geocentrism was completely unscientific. This is nonsense. In 1600, there were legitimate scientific arguments for and against geocentrism.

Tuesday, February 12, 2019

The big tech firms can be wrong

You might think that if well-respected technology companies, like IBM, Google, and Microsoft, are solidly pursuing quantum computing, then it must have some commercial viability.

I thought so too, until I remembered the Intel Itanium chip. Just look at this chart. Everyone in the industry was convinced that the chip would take over the whole CPU market. It is still hard to understand how everyone could be so wrong.

Saturday, February 9, 2019

Building Quantum Computers Is Hard

ExtremeTech reports:
You may have read that quantum computers one day could break most current cryptography systems. They will be able to do that because there are some very clever algorithms designed to run on quantum computers that can solve a hard math problem, which in turn can be used to factor very large numbers. One of the most famous is Shor’s Factoring Algorithm. The difficulty of factoring large numbers is essential to the security of all public-private key systems — which are the most commonly used today. Current quantum computers don’t have nearly enough qubits to attempt the task, but various experts predict they will within the next 3-8 years. That leads to some potentially dangerous situations, such as if only governments and the super-rich had access to the ultra-secure encryption provided by quantum computers.
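To make the quoted hardness claim concrete, here is a toy sketch of my own (not from the article): a classical trial-division factorizer. Its running time grows roughly with the square root of n, which is exponential in the number of digits; that exponential wall is what protects RSA-style keys classically, and what Shor's polynomial-time quantum algorithm would bypass.

```python
def trial_division(n):
    """Factor n by classical trial division.

    The loop runs up to sqrt(n) times, so the cost grows exponentially
    in the number of digits of n -- fine for small numbers, hopeless
    for a 2048-bit RSA modulus. Shor's algorithm would factor such a
    modulus in polynomial time, given a large enough quantum computer.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(3233))  # [53, 61] -- a toy "RSA modulus"
```

Doubling the digit count of n roughly squares the work here, which is why classical factoring records advance so slowly.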

Why Building Quantum Computers Is Hard

There are plenty of reasons quantum computers are taking a long time to develop. For starters, you need to find a way to isolate and control a physical object that implements a qubit. That also requires cooling it down to essentially absolute zero (0.015 kelvin, in the case of IBM's Q System One). Even at such a low temperature, qubits are only stable (retaining coherence) for a very short time. That greatly limits the flexibility of programmers in how many operations they can perform before needing to read out a result.

Not only do programs need to be constrained, but they need to be run many times, as current qubit implementations have a high error rate. Additionally, entanglement isn’t easy to implement in hardware either. In many designs, only some of the qubits are entangled, so the compiler needs to be smart enough to swap bits around as needed to help simulate a system where all the bits can potentially be entangled.
So quantum computers are extraordinarily difficult to build, but they will supposedly break most of our computer security systems, and experts predict that happening within the next 3-8 years.

Thursday, February 7, 2019

Gell-Mann agrees with me about Bell

I have occasionally argued that Bell's Theorem has been wildly misinterpreted, and that it doesn't prove nonlocality or anything interesting like that.

Readers have supplied references saying that I am wrong.

Now I find a short Murray Gell-Mann interview video agreeing with me.

The Bell test experiments do show that quantum mechanics differs from certain classical theories, but not because of spookiness, entanglement, or nonlocality. You could say that the particles are entangled, but classical theories show similar effects.

He says:
It is a matter of giving a dog a bad name and hanging him. (laughs) When the quantum mechanical predictions for this experiment were fully verified, I would have thought everybody would say "great!" and go home. ...

When two variables at the same time don't commute, any measurement of both of them would have to be carried out with one measurement on one branch of history, and the other measurement on the other branch of history. That's all there is to it. ... People are still mesmerized by this confusing language of nonlocality.
That's right. The Bell paradoxes are based on comparing one branch of history to another, as if there were counterfactual definiteness. Quantum mechanics forbids this, if you are comparing noncommuting observables.

The Bell theorem and experiments did not really tell us anything that had not already been conventional wisdom for decades.

The bizarre thing about Bell's Theorem is that some physicists say that it is the most profound discovery in centuries, and others just shrug it off as a triviality. I do not know of any difference of opinion this wide in the whole history of science. After years of reading papers about it, I have moved to the latter camp. The theorem encapsulates why some people have conceptual troubles with quantum mechanics, but if they accept the conventional wisdom of 1930, then it says nothing interesting.

Tuesday, February 5, 2019

Quantum repeaters are worthless

"University of Toronto Engineering professor Hoi-Kwong Lo and his collaborators have developed a prototype for a key element for all-photonic quantum repeaters, a critical step in long-distance quantum communication," reports Phys.Org. This proof-of-principle device could serve as the backbone of a future quantum internet. From the report:

In light of [the security issues with today's internet], researchers have proposed other ways of transmitting data that would leverage key features of quantum physics to provide virtually unbreakable encryption. One of the most promising technologies involves a technique known as quantum key distribution (QKD). QKD exploits the fact that the simple act of sensing or measuring the state of a quantum system disturbs that system. Because of this, any third-party eavesdropping would leave behind a clearly detectable trace, and the communication can be aborted before any sensitive information is lost. Until now, this type of quantum security has been demonstrated in small-scale systems.
So if this technology becomes commercially available, you can set up a network that would have to be shut down if anyone tries to spy on it.

Or you can use conventional cryptography that has been in common use for 30 years, and continue to communicate securely regardless of how many people might be trying to spy on you.
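For what the quoted eavesdropping-detection claim amounts to, here is a toy classical simulation of BB84-style key distribution (my sketch, not the Toronto group's device). An intercept-resend eavesdropper must guess each measurement basis, and wrong guesses corrupt about a quarter of the sifted key bits; that elevated error rate is the "clearly detectable trace."

```python
import random

def bb84_error_rate(n=4000, eavesdrop=False, seed=1):
    """Estimate the sifted-key error rate in a toy BB84 run.

    Measuring a qubit in the wrong basis randomizes the outcome, so an
    intercept-resend eavesdropper, who must guess the basis, corrupts
    about 25% of the bits that Alice and Bob keep after sifting.
    """
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n):
        bit = rng.randint(0, 1)           # Alice's raw key bit
        a_basis = rng.randint(0, 1)       # Alice's encoding basis
        send_bit, send_basis = bit, a_basis
        if eavesdrop:                     # intercept-resend attack
            e_basis = rng.randint(0, 1)
            if e_basis != send_basis:     # wrong basis: outcome randomized
                send_bit = rng.randint(0, 1)
            send_basis = e_basis          # Eve re-encodes in her basis
        b_basis = rng.randint(0, 1)       # Bob's measurement basis
        result = send_bit if b_basis == send_basis else rng.randint(0, 1)
        if b_basis == a_basis:            # sifting: keep matching-basis rounds
            kept += 1
            errors += (result != bit)
    return errors / kept

print(bb84_error_rate(eavesdrop=False))   # 0.0
print(bb84_error_rate(eavesdrop=True))    # roughly 0.25
```

Note this simulates only the statistics of the protocol, not the quantum hardware; the physics is entirely in the rule that a wrong-basis measurement yields a random bit.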

Monday, February 4, 2019

There is no epochal revolution coming

Scott Aaronson acknowledges that the CERN LHC failed to find the SUSY particles it was supposed to find, and then makes a comment that appears targeted at me:
The situation reminds me a little of the quantum computing skeptics who say: scalable QC can never work, in practice and probably even in principle; the mainstream physics community only thinks it can work because of groupthink and hype; therefore, we shouldn’t waste more funds trying to make it work. With the sole, very interesting exception of Gil Kalai, none of the skeptics ever seem to draw what strikes me as an equally logical conclusion: whoa, let’s go full speed ahead with trying to build a scalable QC, because there’s an epochal revolution in physics to be had here — once the experimenters finally see that I was right and the mainstream was wrong, and they start to unravel the reasons why!
I don't draw that conclusion because I don't believe in that "epochal revolution" either.

When the LHC failed to find supersymmetry particles, did we have an epochal revolution telling us why naturalness and SUSY and unified field theories failed?

No. For the most part, physicists went on believing in string theories and all their other nutty ideas, and just claimed that maybe bigger and better accelerators would vindicate them somehow. That will go on until the money for big projects runs out.

The cost of accelerators is exponentially increasing, and the money will run out. They need an accelerator the size of the solar system to get what they really want.

Quantum computers have already had 20 years of hype, billions of research dollars, and some of the world's smartest people working on them. So far, no quantum supremacy. IBM and Google have been promising that for over a year, but they are not even explaining how their plans went wrong. This could go on indefinitely, without the failures ever being accepted as proof of the folly of their thinking.

Sunday, February 3, 2019

Inference mag blamed in leftist attack

I defended Sheldon Glashow's negative review in Inference of a popular quantum mechanics book by Adam Becker.

Most, if not all, popular accounts of quantum mechanics are filled with mystical nonsense about Schroedinger cats, entanglement, non-locality, etc. Instead of just explaining the theory, they do everything to convince you that the theory is incomprehensible. Becker's book was in that category, and Glashow's sensible review throws cold water on the nonsense.

Now Becker has struck back, posting an article in Undark that attempts to trash the online journal Inference as unscientific and aligned with evil Trump supporters. Inference responds, and so does Peter Woit.

I guess I'll read some more of those Inference articles to see if the journal really has a right-wing bias. If so, that would be refreshing, as Scientific American and all the other science journals have a left-wing bias. But I doubt it. In Becker's dispute, Glashow simply defends orthodox quantum mechanics while Becker's book is grossly misleading.

Becker says that Inference offered him a "fine fee" to write a response to the negative book review, and he declined. Most authors are eager to defend themselves against a negative review.

Among other things, Becker attacks Inference for this essay arguing that Copernican astronomy in the 16th century was somewhat more accurate than Ptolemaic astronomy. It appears to be a very good analysis. The gripe is that the essay's author, Tipler, also has some unusual theological views that do not appear to be relevant to his essay.

Philosophers of science are always talking about the grounds for accepting or rejecting Copernicanism. And yet they hardly ever address the quantitative accuracy.

It has become typical for left-wingers to (1) cling to weirdo unscientific beliefs; (2) get very upset when an organization publishes views contrary to their ideology; and (3) launch guilt-by-association and character assassinations against those who permit contrary views.

Wednesday, January 30, 2019

Is some research too politically dangerous?

Gerhard Meisenberg argues:
Some authors have proposed that research on cognitive differences is too dangerous to be allowed to proceed unchecked. ...

1. The alternative to knowledge about human intelligence differences is not ignorance, but false beliefs that people create to explain real-world phenomena.
2. In most cases, true knowledge is more likely than false beliefs to lead to beneficial outcomes.
3. The proper question to ask is not whether intelligence research is dangerous, but whether people in modern societies possess the moral values and intellectual abilities required to make good use of the knowledge.
4. If moral values are found to be lagging behind factual knowledge in modern societies, the appropriate response is not the restriction of “dangerous” knowledge, but the development of moral values capable of putting the knowledge to good use.
You might think that this is obvious, but it is not. The big European Physics center, CERN, has a policy of sponsoring lectures on how women are disadvantaged, and firing anyone who presents data and evidence to the contrary.

Friday, January 25, 2019

Belief in infinite doppelgangers

I mentioned that Brian Greene is sympathetic to many-worlds theory without endorsing it, but he definitely believes in a multiverse that is almost as goofy:
Physicists Brian Greene and Max Tegmark both make variants of the claim that if the universe is infinite and matter is roughly uniformly distributed that there are infinitely many “people with the same appearance, name and memories as you, who play out every possible permutation of your life choices.”
Greene's 2011 book said:
“[I]f the universe is infinite there’s a breathtaking conclusion that has received relatively scant attention. In the far reaches of an infinite cosmos, there’s a galaxy that looks just like the Milky Way, with a solar system that’s the spitting image of ours, with a planet that’s a dead ringer for earth, with a house that’s indistinguishable from yours, inhabited by someone who looks just like you, who is right now reading this very book and imagining you, in a distant galaxy, just reaching the end of this sentence. And there’s not just one such copy. In an infinite universe, there are infinitely many. In some, your doppelgänger is now reading this sentence, along with you. In others, he or she has skipped ahead, or feels in need of a snack and has put the book down. In others still, he or she has, well, a less than felicitous disposition and is someone you’d rather not meet in a dark alley.”
I am not sure which is crazier, this, nonlocality, denial of free will, or many-worlds theory.

When a non-physicist like Deepak Chopra says stuff like this, he gets mocked by physicists.

Greene would say that there is someone just like him in a distant universe blogging against quantum computing, while he himself regularly blogs in favor of it.

Why do physics professors like Greene and Tegmark get a free pass?

The above paper addresses several problems with the Greene-Tegmark view.

I think the problem is more basic. Once you start talking about infinities like that, you have left the realm of science. You might as well be talking about angels dancing on the head of a pin.

You might be surprised that a mathematician like myself would be so hostile to infinities. After all, mathematicians use infinities all the time, and know how to deal with all the paradoxes. But the infinities are shorthand for logical arguments that make perfect sense.

What do you do with beliefs that you have no free will, and that you have infinitely many copies of yourself in distant universes leading parallel lives? Or that what you think is a personal decision is really you splitting off from an identical copy right here in this universe, but invisible? In some of these universes, bizarre things happen, like a tiger giving birth to a goat. But why don't we ever see such nonsense? You can say that those events are unlikely, but there are universes where they happen all the time. How do you know that we are not in one?

I am just scratching the surface of what a nonsensical world view this is.

Historians of science wonder how Galileo and Newton had such coldly rational views towards analyzing the mechanics of simple experiments or celestial observations, and yet completely accepted all sorts of biblical religious beliefs that most scientists today would dismiss as stupid mysticism. How is that possible?

Someday, Greene, Tegmark, and many other leading physicists will be seen similarly. They wrote some good scientific papers, but they also believed in total nonsense that a child could see was ridiculous.

Here are some mathematicians talking about infinities, from a recent podcast:
Dr. Garibaldi decided to talk about a theorem he calls the unknowability of irrational numbers. Many math enthusiasts are familiar with the idea of countable versus uncountable infinities. ...

The set of all real numbers—all points on the number line—is uncountable, as Georg Cantor proved using a beautiful argument called diagonalization. The basic idea is that any list of real numbers will be incomplete: if someone tells you they’ve listed the real numbers, you can cook up a number their list omits. ...

The end result is, in Dr. Garibaldi’s words, sort of hideous. Any classes of numbers you can describe explicitly end up being merely countably infinite. Even with heaping helpings of logarithms, trigonometry, and gumption, the number line is more unknown than known.
So all the real numbers we know anything about are countable. All our knowledge is countable. The reals are uncountable, so almost all real numbers are unknowable in some sense.
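Cantor's diagonal trick from the quoted passage can be sketched concretely. In this little illustration of mine, finite digit lists stand in for infinite decimal expansions; the construction changes the n-th digit of the n-th number, so the result disagrees with every entry on the supposedly complete list:

```python
def diagonal_escape(listed):
    """Return a digit sequence differing from listed[n] in position n,
    so it cannot equal any sequence on the list (Cantor's diagonalization).
    Finite prefixes stand in here for infinite decimal expansions."""
    return [(row[n] + 1) % 10 for n, row in enumerate(listed)]

claimed_complete_list = [
    [1, 4, 1, 5, 9],
    [7, 1, 8, 2, 8],
    [4, 1, 4, 2, 1],
    [7, 3, 2, 0, 5],
    [6, 1, 8, 0, 3],
]
escape = diagonal_escape(claimed_complete_list)
print(escape)  # [2, 2, 5, 1, 4] -- differs from row n at digit n, for every n
```

The same recipe defeats any countable list of reals, which is the whole proof that the reals are uncountable.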

Mathematicians all understand this, and it is important in some mathematical arguments, but it doesn't really have any grand philosophical implications. Knowledge is countable because of the way knowledge is defined and accepted. Actual things are finite.

You could say that there is some real number that perfectly encodes your doppelganger, or records all your memories, or predicts all your future behavior, or any other weirdo fantasy you have. We cannot construct that real number, or say anything interesting about it.

You can fantasize all you want about alternate realities, but the physics and the math don't really add anything.

Wednesday, January 23, 2019

No need for new collider

Bee Hossenfelder writes:
Since the late 1960s, when physicists hit on the “particle zoo” at nuclear energies, they always had a good reason to build a larger collider. That’s because their theories of elementary matter were incomplete. But now, with the Higgs-boson found in 2012, their theory – the “standard model of particle physics” – is complete. It’s done. There’s nothing missing. All Pokemon caught.

The Higgs was the last good prediction that particle physicists had. This prediction dates back to the 1960s and it was based on sound mathematics. In contrast to this, the current predictions for new particles at a larger collider – eg supersymmetric partner particles or dark matter particles – are not based on sound mathematics. These predictions are based on what is called an “argument from naturalness” and those arguments are little more than wishful thinking dressed in equations. ...

This situation is unprecedented in particle physics. The only reliable prediction we currently have for physics beyond the standard model is that we should eventually see effects of quantum gravity. But for that we would have to reach energies 13 orders of magnitude higher than what even the next collider would deliver. It’s way out of reach.
I agree with this. Bee is going to be an outcast, because high-energy physicists are not going to like someone throwing cold water on their $10B funding proposals.

If money were budgeted for a new collider and then canceled, would the money be spent on anything better? Maybe not.

I like Physics, and I like money being spent on it. But physicists need to tell the truth about what they are doing. A bigger new collider is unlikely to tell us much.

Update: Bee also has an op-ed in the NY Times:
I used to be a particle physicist. ...

The stories about new particles, dark matter and additional dimensions were repeated in countless media outlets from before the launch of the L.H.C. until a few years ago. What happened to those predictions? The simple answer is this: Those predictions were wrong — that much is now clear. ...

To date, particle physicists have no reliable prediction that there should be anything new to find until about 15 orders of magnitude above the currently accessible energies. And the only reliable prediction they had for the L.H.C. was that of the Higgs boson. Unfortunately, particle physicists have not been very forthcoming with this information.

Monday, January 21, 2019

The Black Hole Singularity

Einstein famously did not believe in black holes, because a black hole has a singularity, and singularities do not occur in nature.

But which black hole singularity?

A black hole actually has two singularities, one at the center and one at the event horizon. Both singularities show up in common parameterizations of the Schwarzschild solution. Each could have been troublesome for Einstein.

The singularity at the center is based on the idea that all the mass collapses to a single point, so that the density and spacetime curvature at that point become infinite. Physically, the idea is that if the mass is sufficiently concentrated, the gravity force will overwhelm all other forces, including the Pauli exclusion force. Then nothing can stop all the mass disappearing into the hole in spacetime.

Wikipedia says:
The first modern solution of general relativity that would characterize a black hole was found by Karl Schwarzschild in 1916, although its interpretation as a region of space from which nothing can escape was first published by David Finkelstein in 1958. Black holes were long considered a mathematical curiosity; it was during the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars in the late 1960s sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality.
I don't get this. What did they think that a black hole was, before 1958?

This singularity is indeed a troublesome concept, but it is not clear that it has any physical significance. We don't know that gravity will really overwhelm all other forces. That belief is based on an extrapolation that can never be tested. Nothing inside the black hole can have any causal effect on anything outside the black hole.

The other singularity is at the event horizon. This was not well understood until decades after Schwarzschild. Now it is common for textbooks to say that it is a removable singularity, or maybe not a real singularity, because someone falling into the black hole may not notice anything strange at the event horizon. I say "may not", because some argue that someone would see a firewall.

The event horizon does appear to be a singularity in some coordinate systems, and therefore to some observers. Someone watching the black hole might think it very strange that a falling object seems to take an infinite amount of time to cross the event horizon. That infinity is a singularity.
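Both singularities can be read directly off the Schwarzschild line element, written here in the standard Schwarzschild coordinates:

```latex
ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2\,dt^2
       + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2
       + r^2\,d\Omega^2
```

The coefficient of $dr^2$ diverges at $r = 2GM/c^2$, the event horizon; that divergence disappears in better-adapted coordinates (Eddington-Finkelstein, Kruskal), which is why it is called a coordinate singularity. Curvature invariants such as the Kretschmann scalar, $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} = 48\,G^2 M^2 / (c^4 r^6)$, diverge only at $r = 0$, the genuine singularity.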

You don't really need to believe in singularities to study black hole physics. The singularities are not observable.

Friday, January 18, 2019

Psychiatrist blogger tries Kuhnian paradigms

Slate Star Codex is a very popular anonymous psychiatrist blog, and he takes a deep dive:
When I hear scientists talk about Thomas Kuhn, he sounds very reasonable. Scientists have theories that guide their work. Sometimes they run into things their theories can’t explain. Then some genius develops a new theory, and scientists are guided by that one. So the cycle repeats, knowledge gained with every step.

When I hear philosophers talk about Thomas Kuhn, he sounds like a madman. There is no such thing as ground-level truth! Only theory! No objective sense-data! Only theory! No basis for accepting or rejecting any theory over any other! Only theory! No scientists! Only theories, wearing lab coats and fake beards, hoping nobody will notice the charade!

I decided to read Kuhn’s The Structure Of Scientific Revolutions in order to understand this better. Having finished, I have come to a conclusion: yup, I can see why this book causes so much confusion. ...

But one of my big complaints about this book is that, for a purported description of How Science Everywhere Is Always Practiced, it really just gives five examples. Ptolemy/Copernicus on astronomy. Alchemy/Dalton on chemistry. Phlogiston/Lavoisier on combustion. Aristotle/Galileo/Newton/Einstein on motion. And ???/Franklin/Coulomb on electricity.

It doesn’t explain any of the examples. If you don’t already know what Coulomb’s contribution to electricity is and what previous ideas he overturned, you’re out of luck. And don’t try looking it up in a book either. Kuhn says that all the books have been written by people so engrossed in the current paradigm that they unconsciously jam past scientists into it, removing all evidence of paradigm shift. This made parts of the book a little beyond my level, since my knowledge of Coulomb begins and ends with “one amp times one second”.

Even saying Kuhn has five examples is giving him too much credit. He usually brings in one of his five per point he’s trying to make, meaning that you never get a really full view of how any of the five examples exactly fit into his system.

And all five examples are from physics. Kuhn says at the beginning that he wished he had time to talk about how his system fits biology, but he doesn’t. He’s unsure whether any of the social sciences are sciences at all, and nothing else even gets mentioned.
Kuhn is popular because he allows philosophers, social scientists, and crackpots to deny truth, and to complain that their ideas are just being ignored because they do not fit the paradigm.

SSC is right that Kuhn's points are only about poorly reasoned interpretations of centuries-old physics. Kuhn does not attempt to apply his silly paradigm theory to twentieth-century science at all.

Even tho Kuhn wrote a whole book on quantum mechanics, he could never figure out whether it was a paradigm shift, because the supposed revolution did not match his theory about scientific revolutions.

Kuhn's best example is Ptolemy/Copernicus, but that was 500 years ago, and even that is very misleading.

SSC is right that the scientists who say Kuhn is reasonable will explain by saying something like "knowledge gained with every step." But Kuhn did not really see science that way. Switching from Ptolemy to Copernicus was not an increase in knowledge, he would say, but just a different point of view.

Scientists like to think that they are part of a vast program to establish facts and develop theories that converge on truth. Kuhn firmly rejected that view.

Kuhn has fallen somewhat out of favor among philosophers today, but only because they have moved on to even nuttier ideas of science.

Wednesday, January 16, 2019

Atomic laws are not deterministic

Evolutionist Jerry Coyne is on a free will rant again:
[Scott] Aaronson thinks there’s a real and important question in the free-will debates, but argues that that question is not whether physical determinism of our thoughts and actions be true, but whether they are predictable. ...

What I meant was “physical determinism” in the sense of “our behavior obeys the laws of physics”, not that it is always PREDICTABLY determined in advance. ...

As he says at 4:15, “My view is that I don’t care about determinism if it can’t be cashed out into actual predictability.”

This seems to me misguided, conflating predictability with the question of determinism. ...

What I care about is whether determinism be true. And I think it is, though of course I can’t prove it. All I can say is that the laws of physics don’t ever seem to be violated, and, as Sean Carroll emphasizes, the physics of everyday life is completely known. ...

I’m doing my best to explain what seems obvious to me: we are material creatures made of atoms; our behaviors and actions stem from the arrangement of those atoms in our brains, and those atoms must obey the laws of physics. Therefore, our behaviors and actions must obey the laws of physics, and are “deterministic” in that sense. We are, in effect, robots made of meat, with a really sophisticated onboard guidance system. I know many people don’t like that notion, but I think that, given the laws of physics, it’s ineluctable.
I have to side with Aaronson here, and wonder what Coyne even means by "the laws of physics".

Of course the laws of physics are not violated. If they were, then they would not be laws of physics. Saying that does not tell us anything about free will.

Saying that we are made of atoms that obey the laws of physics is an odd argument for determinism. Our best theories about atoms are not deterministic.

Carroll has his own problems, as he believes in many-worlds.

Monday, January 14, 2019

IBM announces quantum computer

ExtremeTech reports:
At CES 2019, IBM Research has made what it hopes is a big step in that direction with what it calls the “first fully-integrated commercial quantum computer,” the Q System One. ...

IBM will be adding the Q System One to its arsenal of cloud-accessible quantum computers, first at its existing quantum data center, and at a new one planned for Poughkeepsie, New York. So for those who aren’t Fortune 500 companies with a budget to purchase their own (IBM hasn’t announced a price for the unit, but if you have to ask…), they’ll be able to make use of one. The current version reportedly “only” supports 20 Qubits, so the breakthrough isn’t in processing power compared with other research models, but instead in reliability and industrial design suitable for use in commercial environments.
This computer will be outperformed by your cell phone.

If the computer could actually do anything useful or have any performance advantage, you can be sure that IBM would be bragging about it. If they could achieve quantum supremacy, there would be academic papers and lobbying for a Nobel Prize.

Friday, January 11, 2019

Brian Greene still plugging string theory

Sam Harris interviews Brian Greene in this two-hour video.

Greene is indignant when Harris says that string theory has failed to deliver the goods. Greene says that the theory has made great progress, and has merged gravity and quantum mechanics. The only trouble is that we do not know what that merged theory is, and it has not made any testable predictions. That is not much of a criticism, he says, because no quantum gravity theory will ever make any testable predictions.

Someone asked about Bohr saying that physics is about observables. Greene prefers a wider view, and says that physics should look behind the curtain and tell us what is really going on.

So Greene can justify a string theory with no testable predictions.

Greene also defended many-worlds theory and Bohmian mechanics, altho he has not fully adopted them because the measurement problem is unsolved.

Harris points out that Bohmian mechanics is nonlocal, so doing something in one place can have an instantaneous distant effect. Greene agreed, but said that quantum mechanics is nonlocal anyway.

Greene is very misleading here. It is true that in textbook QM, if you make a measurement and collapse the wavefunction, then your knowledge of some distant particle can be immediately affected. You can say that is nonlocal, but classical mechanics is nonlocal in the same way. Bohmian mechanics is different in that it says that an electron is in one place, but its physical effects are in another place. That is a fatal flaw, since no such nonlocality has ever been observed in nature.

And any defense of many-worlds is nutty.

He gives this argument, common among many-worlds advocates, that it is a simpler theory, and thus preferable under Occam's Razor. He gives an example. Suppose a simple quantum experiment results in an electron being in one of two places, symbolized by his left hand and right hand. Suppose you then find the electron in his left hand. Under Copenhagen, you would deduce that the electron is not in his right hand. But that deduction is an extra step, and the many-worlds theory is more parsimonious because it skips that step and posits that the electron is in his right hand in a parallel universe.

It is amazing to see an educated man make such a silly argument with a straight face. The argument really doesn't even have much to do with quantum mechanics, as you could use it with any theory that makes predictions, and concoct a many-worlds variant of the theory that does not make any predictions.

Besides many-worlds, Greene defends physical theories in which anything can happen. If you assume infinite space, infinite time, infinite universes, etc., then pretty much anything you can imagine would be happening somewhere, and happening infinitely many times. In particular, Jesus rose from the dead.

Greene agrees with Harris that humans have no free will. Greene rejects Harris's determinism, but says that the laws of physics have no room for free will.

At least Greene did not go along with Harris's wacky consequentialist vegetarian philosophy.

It is too bad that Physics does not have better spokesmen.