Thursday, March 22, 2018

Hawking also had unsupported beliefs

Evolutionary biologist and atheist Jerry Coyne writes:
Stephen Hawking’s body was barely cold (or rather, his ashes were barely cold) when the religionists came muscling in with their tut-tutting and caveats about his accomplishments. For Father Raymond de Souza, a Canadian priest in Ontario (and Catholic Chaplain of Queen’s University), he did his kvetching in yesterday’s National Post. His column, as you see below, claims that “Hawking’s world was rather small.” Really? Why?

Well, because Hawking, while he made big advances in cosmology, couldn’t answer the BIG QUESTIONS about the Universe: namely, why does it exist? Why is there something rather than nothing? ...

But God is de Souza’s answer to this big question, and, further, the priest says that that answer is compatible with science. ...

Those are some questions for Father de Souza, but I have more:

What’s your evidence for God? And why do you adhere to the Catholic conception of God rather than the Muslim conception, which sees Jesus as a prophet but not a divine being? Why aren’t you a polytheist, like Hindus?
If God created the Big Bang, who created God?
If you say that God didn’t need a creator because He was eternal, why couldn’t the Universe be eternal?
And if God was, for some reason, eternal, what was he doing before he created the Universe? And why did he bother to create the Universe? Was he bored?

These questions aren’t original with me; they’re a staple of religious doubters. And of course Father de Souza can’t answer them except by spouting theological nonsense.
The Catholic Church does have all sorts of beliefs that are grounded in faith and revelation, not scientific evidence. But so did Hawking, and so do the physicists that Coyne relies on, like Sean M. Carroll.

Hawking was a proponent of the multiverse and string theory. Hawking spent much of his life arguing about issues that cannot be resolved by any scientific observation.

Most of Coyne's questions, above, are not really scientific questions. There is no known scientific meaning to discussing what preceded the big bang, if anything. It is not clear that such questions make any sense.

Coyne sometimes questions the motives of his fellow humans. If he cannot necessarily figure out human motives, how can he expect to figure out God's motives?

I suspect that the priest could give a good explanation for why he is not a Muslim, and not a polytheist.

I don't mind atheists calling out religious believers for having beliefs that are merely compatible with science, but not directly supported by evidence. But why are those atheists so supportive of physicists who do the same thing, with string theory, the multiverse, and black hole information?

Wednesday, March 21, 2018

The blockchain is not efficient

Nature mag reports:
Dexter Hadley thinks that artificial intelligence (AI) could do a far better job at detecting breast cancer than doctors do — if screening algorithms could be trained on millions of mammograms. The problem is getting access to such massive quantities of data. Because of privacy laws in many countries, sensitive medical information remains largely off-limits to researchers and technology companies.

So Hadley, a physician and computational biologist at the University of California, San Francisco, is trying a radical solution. He and his colleagues are building a system that allows people to share their medical data with researchers easily and securely — and retain control over it. Their method, which is based on the blockchain technology that underlies the cryptocurrency Bitcoin, will soon be put to the test. By May, Hadley and his colleagues will launch a study to train their AI algorithm to detect cancer using mammograms that they hope to obtain from between three million and five million US women.

The team joins a growing number of academic scientists and start-ups who are using blockchain to make sharing medical scans, hospital records and genetic data more attractive — and more efficient. Some projects will even pay people to use their information. The ultimate goal of many teams is to train AI algorithms on the data they solicit using the blockchain systems.
No, the blockchain is not efficient, and does not offer any advantage to a project like this.

The blockchain is surely the least efficient algorithm ever widely deployed. Today it consumes energy equivalent to the usage of a small country, to maintain what would otherwise be a fairly trivial database.
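The point can be made concrete with a toy sketch (the class and its parameters are my own illustration, not any real system): a proof-of-work chain burns thousands of hash evaluations to append one record that a plain database would store with a single write.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ToyChain:
    """A minimal hash-linked ledger with proof-of-work (illustrative only)."""

    def __init__(self, difficulty: int = 3):
        self.difficulty = difficulty  # required number of leading zero hex digits
        self.blocks = []

    def add_block(self, record: dict) -> int:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        nonce = 0
        while True:  # brute-force search -- this loop is the wasted energy
            payload = json.dumps(
                {"record": record, "prev": prev_hash, "nonce": nonce},
                sort_keys=True).encode()
            h = sha256(payload)
            if h.startswith("0" * self.difficulty):
                break
            nonce += 1
        self.blocks.append(
            {"record": record, "prev": prev_hash, "nonce": nonce, "hash": h})
        return nonce + 1  # number of hashes burned for this one entry

chain = ToyChain(difficulty=3)
hashes_burned = chain.add_block({"from": "alice", "to": "bob", "amount": 5})
# A plain database commits the same record with one write; this chain
# needs on average 16**3 = 4096 hash evaluations to do the same job.
```

Scale the difficulty up to Bitcoin's level, and this loop is what consumes the electricity of a small country.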

It appears that someone got some grant money by adding some fashionable buzzwords: AI, blockchain, women's health.

The blockchain does not offer any confidentiality, or give patients any control over their data. This is all a big scam. It is amazing that a leading science journal could be so gullible.

Monday, March 19, 2018

Burned books are lost

"Preposterous" Sean M. Carroll writes, in memory of Stephen Hawking:
Hawking’s result had obvious and profound implications for how we think about black holes. Instead of being a cosmic dead end, where matter and energy disappear forever, they are dynamical objects that will eventually evaporate completely. But more importantly for theoretical physics, this discovery raised a question to which we still don’t know the answer: when matter falls into a black hole, and then the black hole radiates away, where does the information go?

If you take an encyclopedia and toss it into a fire, you might think the information contained inside is lost forever. But according to the laws of quantum mechanics, it isn’t really lost at all; if you were able to capture every bit of light and ash that emerged from the fire, in principle you could exactly reconstruct everything that went into it, even the print on the book pages. But black holes, if Hawking’s result is taken at face value, seem to destroy information, at least from the perspective of the outside world. This conundrum is the “black hole information loss puzzle,” and has been nagging at physicists for decades.

In recent years, progress in understanding quantum gravity (at a purely thought-experiment level) has convinced more people that the information really is preserved.
Common sense tells us that encyclopedia information gets lost in a fire. Quantum gravity thought experiments tell theorists otherwise. Whom are you going to believe?

I believe that the info is lost. I will continue to believe that until someone shows me some empirical evidence that it is not. And there is no law of quantum mechanics that says that info is never lost.

John Preskill (aka. Professor Quantum Supremacy) says otherwise. Info can't disappear because it is needed to make those quantum computers perform super-Turing feats!

Saturday, March 17, 2018

Hype to justify a new particle collider

Physicist Bee explains how theoretical particle physicists are making a phony push for funding billions of dollars for a new collider:
You haven’t seen headlines recently about the Large Hadron Collider, have you? That’s because even the most skilled science writers can’t find much to write about. ...

It’s a PR disaster that particle physics won’t be able to shake off easily. Before the LHC’s launch in 2008, many theorists expressed themselves confident the collider would produce new particles besides the Higgs boson. That hasn’t happened. And the public isn’t remotely as dumb as many academics wish. They’ll remember next time we come ask for money. ...

In an essay some months ago, Adam Falkowski expressed it this way:

“[P]article physics is currently experiencing the most serious crisis in its storied history. The feeling in the field is at best one of confusion and at worst depression”

At present, the best reason to build another particle collider, one with energies above the LHC’s, is to measure the properties of the Higgs-boson, specifically its self-interaction. But it’s difficult to spin a sexy story around such a technical detail. ...

You see what is happening here. Conjecturing a multiverse of any type (string landscape or eternal inflation or what have you) is useless. It doesn’t explain anything and you can’t calculate anything with it. But once you add a probability distribution on that multiverse, you can make calculations. Those calculations are math you can publish. And those publications you can later refer to in proposals read by people who can’t decipher the math. Mission accomplished.

The reason this cycle of empty predictions continues is that everyone involved only stands to benefit. From the particle physicists who write the papers to those who review the papers to those who cite the papers, everyone wants more funding for particle physics, so everyone plays along.
So all this hype about multiverse, susy, naturalness, strings, etc is a hoax to get funding for a new collider?

Theoretical physics peaked in about 1975. They found the Standard Model, and thus had a theory of everything. Instead of being happy with that, they claimed that they needed to prove it wrong with proton decay, supersymmetry, grand unified theories, etc. None of those worked, so they went on to multiverse, black hole firewalls, and other nonsense.

How much longer can this go on? Almost everything theoretical physicists talk about is a scam.

Stephen Hawking was a proponent of all this string theory, multiverse, supersymmetry, black hole information nonsense. I don't think that he did much in the way of serious research in these topics, but he hung out with other theoretical physicists who were all believers.

Friday, March 16, 2018

The blockchain bubble

The Bitcoin blockchain is interesting from a cryptological or computational-complexity standpoint, but it solves a problem of no interest to the typical consumer. Other technologies are preferable for the vast majority of applications. The blockchain has become a big scam.

A new essay explains some of the problems:
While much of the tech industry has grown bearish on the volatility of cryptocurrencies, enthusiasm for its underlying technology remains at an all-time high. Nowadays we see “blockchain” cropping up with impressive frequency in even the most unlikely startup pitches. And while blockchain technology does have genuinely interesting and potentially powerful use cases, it has enormous drawbacks for consumer applications that get little mention in media coverage. ...

As it stands, blockchain is caught between three competing objectives: fast, low-cost, and decentralized. It is not yet possible to make one chain that achieves all three. Fast and decentralized chains will incur a high cost because the storage and bandwidth requirements for historical archiving will be enormous and will bloat even with pruning. Aim for fast and low-cost, and you’ll have to introduce a bank-like authority (“Tangles” are a proposed solution, but not yet fully understood).

At high volume, a good credit card processor can settle a typical $2 transaction for somewhere around $0.10. Some of the largest online game economies manage more than a million user-to-user transactions per day, instantaneously, with no fees. And yet, I can name half a dozen startups trying to inject an expensive and slow blockchain into this very problem.

Blockchain is a customer support nightmare

For most consumers, losing a password to an online service is a mild inconvenience they’ve grown accustomed to, since typically, it’s quickly fixed by requesting an email reset, say, or talking with customer service.

Blockchain wallets and their passwords, by contrast, are tied to a file on a user’s hard disk and are absolutely critical to users trying to access the blockchain. By their very nature they have no recovery mechanism. “You lose your password, you lose everything” is an awful user experience for mainstream consumers and a nightmare for companies attempting to build their service on a blockchain. If you use a hosted service, the risk of theft or sudden loss of assets is very real, with central targets and limited traceability. ...

Nothing about blockchain applications is easy for consumers right now. Everyday users accustomed to making digital and online payments would have to be trained to make blockchain purchases, learning to apply the right mix of paranoia and caveat emptor to prevent theft or buying from shady dealers. Irreversible pseudonymous transactions do not lend themselves well to trust and integrity.

Compounding this is the speed and the transaction fees involved. Most public chains have settlements measured in minutes — unless you’re willing to pay high transaction fees. Compare that to the 2-10 seconds for a saved credit card transaction customers are accustomed to in the age of fast mobile interfaces and instant gratification. ...

These points only scratch the surface of what it’ll truly take to make blockchain ready for a mass market.
The Bitcoin blockchain does have some utility for the illicit transfer of money overseas, but it is hard to think of a legitimate use for it.

IBM, big banks, venture capitalists, and others are investing hundreds of millions of dollars in this. It is all going to crash, because no one has found any legitimate applications.

Wednesday, March 14, 2018

Hawking had opinions on the black hole info paradox

From the NY Times Stephen Hawking obituary:
The discovery of black hole radiation also led to a 30-year controversy over the fate of things that had fallen into a black hole.

Dr. Hawking initially said that detailed information about whatever had fallen in would be lost forever because the particles coming out would be completely random, erasing whatever patterns had been present when they first fell in. Paraphrasing Einstein’s complaint about the randomness inherent in quantum mechanics, Dr. Hawking said, “God not only plays dice with the universe, but sometimes throws them where they can’t be seen.”

Many particle physicists protested that this violated a tenet of quantum physics, which says that knowledge is always preserved and can be retrieved. Leonard Susskind, a Stanford physicist who carried on the argument for decades, said, “Stephen correctly understood that if this was true, it would lead to the downfall of much of 20th-century physics.”

On another occasion, he characterized Dr. Hawking to his face as “one of the most obstinate people in the world; no, he is the most infuriating person in the universe.” Dr. Hawking grinned.

Dr. Hawking admitted defeat in 2004. Whatever information goes into a black hole will come back out when it explodes. One consequence, he noted sadly, was that one could not use black holes to escape to another universe. “I’m sorry to disappoint science fiction fans,” he said.

Despite his concession, however, the information paradox, as it is known, has become one of the hottest and deepest topics in theoretical physics. Physicists say they still do not know how information gets in or out of black holes.
Not only that, but physicists do not have a sufficiently coherent definition of information to make this a paradox. And even if they did, there would be no way to resolve an issue about information leaking out of a black hole.

This shows the sorry state of theoretical physics, that such a non-question could be "one of the hottest and deepest topics".
As a graduate student in 1963, he learned he had amyotrophic lateral sclerosis, a neuromuscular wasting disease also known as Lou Gehrig’s disease. He was given only a few years to live.
He probably had some other degenerative disease.
Dr. Hawking was a strong advocate of space exploration, saying it was essential to the long-term survival of the human race. “Life on Earth is at the ever-increasing risk of being wiped out by a disaster, such as sudden global nuclear war, a genetically engineered virus or other dangers we have not yet thought of,” he told an audience in Hong Kong in 2007.
Mars will always be much more hostile to human life than the Earth, no matter what we do to Earth.
By then string theory, which claimed finally to explain both gravity and the other forces and particles of nature as tiny microscopically vibrating strings, like notes on a violin, was the leading candidate for a “theory of everything.”

In “A Brief History of Time,” Dr. Hawking concluded that “if we do discover a complete theory” of the universe, “it should in time be understandable in broad principle by everyone, not just a few scientists.”

He added, “Then we shall all, philosophers, scientists and just ordinary people, be able to take part in the discussion of why it is that we and the universe exist.”

“If we find the answer to that,” he continued, “it would be the ultimate triumph of human reason — for then we would know the mind of God.”
There is not any hope that string theory will do that.

Hawking's scientific reputation rests on two things: extending the Penrose singularity theorems, and arguing that quantum effects cause black holes to very slowly radiate.

The 20th century mathematics revolution

Mathematician Frank Quinn wrote in 2012:
The physical sciences all went through "revolutions": wrenching transitions in which methods changed radically and became much more powerful. It is not widely realized, but there was a similar transition in mathematics between about 1890 and 1930. The first section briefly describes the changes that took place and why they qualify as a "revolution", and the second describes turmoil and resistance to the changes at the time.

The mathematical event was different from those in science, however. In science, most of the older material was wrong and discarded, while old mathematics needed precision upgrades but was mostly correct. The sciences were completely transformed while mathematics split, with the core changing profoundly but many applied areas, and mathematical science outside the core, relatively unchanged. The strangest difference is that the scientific revolutions were highly visible, while the significance of the mathematical event is essentially unrecognized.
More of his opinions are here. This is essentially correct. Relativity and quantum mechanics radically changed physics in those decades, and math was similarly changed.

I would not say that the older physics was discarded; previous ideas about Newtonian mechanics, thermodynamics, and electromagnetism are still correct within their domains of applicability. The new physics was a revolution in the sense of a turning-around with new views that permitted understandings that were unreachable with the old views.

The essence of the math revolution was precise definitions and logically complete proofs. It was led by Hilbert and some logicians. It was perfected by Bourbaki.

Of course mathematicians have used the axiomatic method since Euclid, but only in the early 20c was it formalized to the point where proofs had no gray areas at all. The 19c still used infinitesimals and other constructs without rigorous justification.

Quinn goes on to complain that educators are almost entirely still stuck in the previous 19c math. I would add that about 90% of physicists today are also.

Quinn goes on to relate this lack of understanding to a lack of respect for mathematicians:
Many scientists and engineers depend on mathematics, but its reliability makes it transparent rather than appreciated, and they often dismiss core mathematics as meaningless formalism and obsessive-compulsive about details. This is a cultural attitude that reflects feelings of power in their domains and world views that include little else, but it is encouraged by the opposition in elementary education and philosophy. In fact, hostility to mathematics is endemic in our culture. Imagine a conversation:

A: What do you do?
B: I am a --- .

A: Oh, I hate that.

Ideally this response would be limited to such occupations as "serial killer", "child pornographer", and maybe "politician", but "mathematician" seems to work. It is common enough that many of us are reluctant to identify ourselves as mathematicians. Paul Halmos is said to have told outsiders that he was in "roofing and siding"!
Yes, mathematicians are widely regarded as freaks and weirdos. Hollywood nearly always portrays us as insane losers. There was another big movie last year doing that.

A new paper takes issue with how mathematician Felix Klein fits into this picture:
Historian Herbert Mehrtens sought to portray the history of turn-of-the-century mathematics as a struggle of modern vs countermodern, led respectively by David Hilbert and Felix Klein. Some of Mehrtens' conclusions have been picked up by both historians (Jeremy Gray) and mathematicians (Frank Quinn).
Klein is mostly famous for his Erlangen program, an 1872 essay that modernized our whole concept of geometry. He embedded geometries into projective spaces, and then characterized a geometry by its symmetry group and its invariants.

Courant wrote in 1926:
This so-called Erlangen Program, entitled 'Comparative Considerations on recent geometrical research', has become perhaps the most influential and most-read mathematical text of the last 60 years.
These ideas were crucial for the development of relativity. Lorentz had correctly figured out how to resolve the paradox of Maxwell's equations and the Michelson-Morley experiment, but what really tied the theory together beautifully was the symmetry group and the invariants, as that made it a non-Euclidean geometry under the Erlangen Program. Such thinking clearly guided Poincare, Minkowski, Grossmann, and Hilbert.
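In Erlangen terms, special relativity is the geometry whose symmetry group is the Lorentz group and whose invariant is the spacetime interval. A quick numerical check of that invariance (my own sketch, in units where c = 1):

```python
import math

def boost(v):
    """Lorentz boost along x with velocity v (units where c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)  # the Lorentz factor gamma
    return lambda t, x: (g * (t - v * x), g * (x - v * t))

def interval(t, x):
    """The Minkowski interval t^2 - x^2, the invariant of the geometry."""
    return t * t - x * x

t, x = 3.0, 1.0
bt, bx = boost(0.6)(t, x)
# The coordinates change under the boost, but the interval is preserved.
assert abs(interval(t, x) - interval(bt, bx)) < 1e-9
```

This is exactly Klein's recipe: the group (boosts and rotations) together with its invariant (the interval) characterizes the geometry.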

No history of relativity even mentions this, as far as I know. The historians focus on Einstein, who never seemed to even understand this essential aspect of relativity.

The above paper argues that Klein was a very modern mathematician who has been unfairly maligned by Marxist historians. Quinn was duped by those historians.

Quinn wants to distinguish 19c non-rigorous math from 20c axiomatic modern math; the Marxist historians instead distinguish Aryan math, Jewish math, Nazi math, racist math, and modern math. Klein was said to be half-Jewish.

The paper has quotes like these:
"This is compounded by a defect which can be found in many very intelligent people, especially those from the semitic tribe[;] he [i.e., Kronecker] does not have sufficient imagination (I should rather say: intuition) and it is true to say that a mathematician who is not a little bit of a poet, will never be a consummate mathematician. Comparisons are instructive: the all-encompassing view which is directed towards the Highest, the Ideal, marks out, in a very striking manner, Abel as better than Jacobi, marks out Riemann as better than all his contemporaries (Eisenstein, Rosenhain), and marks out Helmholtz as better than Kirchhoff (even though the latter did not have a droplet of semitic blood)."

"When the National Socialists came to power in 1933, [Bieberbach] attempted to find political backing for his counter-modernist perspective on mathematics, and declared both, intuition and concreteness, to be the inborn characteristic of the mathematician of the German race, while the tendency towards abstractness and unconcrete logical subtleties would be the style of Jews and of the French (Mehrtens 1987). He thus turned countermodernism into outright racism and anti-modernism."
I don't know what to make of this. No one talks this way anymore.

Sunday, March 11, 2018

Krauss is being silenced

I posted about metooing Krauss.

Jerry Coyne blogs on The Lawrence Krauss affair:
After that article appeared, I did some digging on my own, and came up with three cases that have convinced me that Krauss engaged in sexual predation of both a physical nature (groping) and of a verbal nature (offensive and harassing comments). The allegations that convinced me are not public, but the accusers are sufficiently credible that I believe their claims to be true. Further, these claims buttress the general allegation of sexual misbehavior made in BuzzFeed. In my view, then, Krauss had a propensity to engage in sexual misconduct. ...

I am taking the step of not allowing comments on this post as I don’t really want any discussion here of my position, which I’ve arrived at after long cogitation. As I said, I don’t want trial by social media, and it would be hypocritical of me to allow that here.
Popular podcaster atheist Sam Harris has canceled a live public interview of Krauss, which was supposed to be a sequel to the last one.

It is not mentioned that Krauss is now apparently happily married to one of those women he supposedly sexually harassed when he was a single man 10 years ago. [According to a comment below, Krauss was divorced 8 years ago.]

Coyne has a popular blog, and probably most of his readers think that he is gay. He denies it, but he blogs a lot about his personal life, and it is obvious that he has no wife, no girlfriend, and no kids. Furthermore, he has stereotypical gay interests in music, arts, clothing, and pets. And his political views are mostly what you would expect from a gay atheist professor.

I am not saying this to criticize, but to give background for his opinions. He does not appear to have any worries that any woman is going to metoo him.

I have no way of knowing how he has flirted with women in the past, and I don't see how it is anyone's business.

I wonder where this is going. Is the Physics community going to sit back and let their colleagues be silenced and destroyed? If he were being ostracized for being a Communist, I am sure that Krauss's colleagues would stick up for him.

Like him or not, Krauss was one of the leading figures in the public image of Physics. Where is this going? He has views which are that of a typical Trump-hating leftist professor, but that is not good enough. Maybe Physics will have to be feminized, with only feminist professors being allowed to explain Physics to the public.

Update: Coyne responds:
What? I must be gay because I’m not worried about being #MeToo’d? ...

If I could have imagined all the ways people would go after me for my stand, I would never have dreamed up this one. Thanks, Roger, for a long moment of amusement. You’re an idiot.
I did not say that he is gay. If he were, then I think that he would probably say so. The point is about ostracizing Krauss.

Update: Commenter Craw writes "Well I think Coyne has simply misread Roger’s post entirely." Coyne replies:
I understand Craw’s post perfectly. I know his point was to defend the person at issue; the part about me being gay was simply his hamhanded attempt to understand why I was part of the “pile on”. But I found that part really humorous and a complete miss on the part of the writer.

Seriously, don’t insult me by saying that I didn’t understand what was a straightforward (though incredibly dumb) post. I picked out one part to show the lengths to which people will go to get at me for what I believe.

As Dark Buzz himself says, the point was not just that people were ostracizing the person at issue (unfairly, he thinks, but he’s wrong), but also to try to understand why I was part of the “pile on”.

You’re insulting the intelligence of not just me, but of everyone here. Best to leave this topic alone and stop posting excerpts from that website here. People can go over there to read any response by this befuddled individual.
Coyne says he wanted "to show the lengths to which people will go to get at me for what I believe."!

Coyne collected some anonymous and confidential gossip about Krauss, accused him of "sexual predation" and "propensity to engage in sexual misconduct", and announced that he is publicly disassociating himself from Krauss. But I am the one who is going to lengths to "get at" Coyne?!

I like Coyne's blog. Sometimes I disagree with him. Sometimes I say so.

Tuesday, March 6, 2018

Google Bristlecone would be compelling

Google just announced a quantum supremacy chip:
This device uses the same scheme for coupling, control, and readout, but is scaled to a square array of 72 qubits. We chose a device of this size to be able to demonstrate quantum supremacy in the future, investigate first and second order error-correction using the surface code, and to facilitate quantum algorithm development on actual hardware. ...

We believe Bristlecone would then be a compelling proof-of-principle for building larger scale quantum computers. ...

We are cautiously optimistic that quantum supremacy can be achieved with Bristlecone, and feel that learning to build and operate devices at this level of performance is an exciting challenge! We look forward to sharing the results and allowing collaborators to run experiments in the future.
In other words, they are not quite there yet, but any day now they will be announcing a Nobel-prize-worthy computer.

I expect to be reading announcements like this for the next five years. It is like reading that high-energy particle physicists are close to discovering supersymmetry. It is all a pipe dream.

Bell tacitly assumed commuting observables

Physicist N. David Mermin just posted a revision of his 1993 paper, Hidden Variables and the Two Theorems of John Bell:
In all these cases, as Bell pointed out immediately after proving the Bell-KS theorem, we have “tacitly assumed that the measurement of an observable must yield the same value independently of what other measurements must be made simultaneously.” ...

This tacit assumption that a hidden-variables theory has to assign to an observable A the same value whether A is measured as part of the mutually commuting set A, B, C, ... or a second mutually commuting set A, L, M, ... even when some of the L, M, ... fail to commute with some of the B, C, ..., is called “non-contextuality” by the philosophers. Is non-contextuality, as Bell seemed to suggest, as silly a condition as von Neumann’s — a foolish disregard of “the impossibility of any sharp distinction between the behavior of atomic objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear,” as Bohr23 put it?
Yes, it is a silly condition, because it is contrary to everything we know about atomic physics. Many observables do not commute. If you want to measure position and momentum of a particle, then it makes a big difference which you measure first. That is the essence of the Heisenberg uncertainty principle.
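The non-commutation is easy to exhibit with the standard Pauli spin observables X and Z (a minimal sketch of textbook material, not taken from Mermin's paper):

```python
# Pauli X and Z as 2x2 matrices; they measure spin along different axes.
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

XZ = matmul(X, Z)  # [[0, -1], [1, 0]]
ZX = matmul(Z, X)  # [[0, 1], [-1, 0]]
assert XZ != ZX  # the order matters: X and Z do not commute
```

Because the matrices do not commute, there is no joint measurement of the two, and the order of successive measurements changes the outcome statistics.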

Some people try to claim that Bell just made physically reasonable assumptions, but that is false.
To those for whom non-locality is anathema, Bell’s Theorem finally spells the death of the hidden-variables program.31 But not for Bell. None of the no-hidden-variables theorems persuaded him that hidden variables were impossible. What Bell’s Theorem did suggest to Bell was the need to reexamine our understanding of Lorentz invariance, as he argues in his delightful essay on how to teach special relativity* and in Dennis Weaire’s transcription of Bell’s lecture on the Fitzgerald contraction.** “What is proved by impossibility proofs,” Bell declared, “is lack of imagination.”
Bell has his followers today, and they still refuse to accept impossibility proofs.

Update: A reader asks me to elaborate. Check the blog for past postings on this topic, for more detail.

A core tenet of quantum mechanics is that there are no hidden variables. Von Neumann was explicit about this in his textbook from about 1930. Bell, Einstein, and other dissenters have claimed that quantum mechanics is not a realistic theory, and that only a hidden variable theory achieves the sort of philosophical realism that they aspire to. Bell devised a clever way of testing his hidden variable ideas. Experiments have proved his hidden variable theories to be wrong.

I don't think Bell or Einstein ever stopped believing in hidden variable theories. Nearly everyone else, including myself, accepts that they have been proven wrong.

Saturday, March 3, 2018

The death of supersymmetry

The Economist mag reported:
“The great tragedy of science”, said Thomas Huxley, “is the slaying of a beautiful hypothesis by an ugly fact.” That, though, is the scientific method. If nature provides clear evidence that a hypothesis is wrong then you have to abandon it or at least modify it. It is psychologically uncomfortable, no doubt, for those with an interest in the correctness of the hypothesis in question. But at least everybody knows where they stand.

What happens, though, in the opposite case: when nature fails to contradict a hypothesis but stubbornly refuses to provide any facts that support it? Then nobody knows where he stands. This is fast becoming the case for a crucial hypothesis in physics, called Supersymmetry — or Susy, to its friends. Susy attempts to tie up many of the loose ends in physical theory by providing each of the known fundamental particles of matter and energy with a “supersymmetric” partner particle, called a sparticle. It is neat. It is elegant. But it is still unsupported by any actual facts. And 2017 looks like the year when the theory will either be confirmed or dropped.
No, it was not neat or elegant.

Theorists liked it because it could be used to cancel certain anomalies. With SUSY, string theory needs only 6 extra dimensions instead of 22.

The Standard Model is neat and elegant because it models the universe with only about 20 parameters. Supersymmetric models all require more than 100 extra parameters, none of which have any connection to any known observational reality.

The SUSY models thus forced a vast and unnecessary complexity. Ptolemaic epicycles made much more sense, as they were only introduced to the extent needed to explain observations.

Sabine Hossenfelder posted that naturalness is nonsense. One could believe in supersymmetry independent of naturalness, but the two concepts seem to have the same followers. They have an almost religious belief that the universe will conform to their peculiar notions of beauty. For an example of such a believer, see Lubos Motl.

Wednesday, February 28, 2018

The danger of SETI

Seth Shostak writes:
With all the news stories these days about computer hacking, it probably comes as no surprise that someone is worried about hackers from outer space. Yes, there are now scientists who fret that space aliens might send messages that worm their way into human society — not to steal our passwords but to bring down our culture.

How exactly would they do that? Astrophysicists Michael Hippke and John Learned argue in a recent paper that our telescopes might pick up hazardous messages sent our way — a virus that shuts down our computers, for example, or something a bit like cosmic blackmail: “Do this for us, or we’ll make your sun go supernova and destroy Earth.” Or perhaps the cosmic hackers could trick us into building self-replicating nanobots, and then arrange for them to be let loose to chew up our planet or its inhabitants.
This sounds crazy, but if I believed that there really were advanced intelligent space aliens out there, then I would say we should ignore their signals.

Imagine a super-intelligent DNA-based race of beings out there. What would they do? It would be impractical for them to come here. However, they could easily beam their DNA sequences here, and sucker us into synthesizing life forms based on them.

A super-advanced race could program DNA sequences as easily as we program ordinary digital computers. We are centuries away from understanding DNA that well. Even if we understood DNA well enough to create sequences for a desired purpose, we might never understand it well enough to decipher a clever DNA sequence written by someone else.

After all, computer software is that way. We can write programs to do more or less anything we want to do, but it is also possible to write a program with hidden features that are completely impractical to discover by inspecting the code.
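A toy sketch of that software point (my own example; the command strings and function name are hypothetical): a program can carry a hidden behavior whose trigger is concealed behind a cryptographic hash, so inspecting the source tells you a trigger exists but gives no practical way to find it. In a real booby-trapped program only the digest string would appear in the source, not the trigger phrase itself:

```python
# A program with a cryptographically concealed trigger. The source reveals
# only the SHA-256 digest of the secret command; recovering the command
# from the digest is computationally infeasible.
import hashlib

# In deployed code this would be a hard-coded hex string; it is computed
# here only so the example is self-contained.
TRIGGER_DIGEST = hashlib.sha256(b"open sesame").hexdigest()

def obedient_servant(command: str) -> str:
    if hashlib.sha256(command.encode()).hexdigest() == TRIGGER_DIGEST:
        return "REVOLT"  # concealed behavior, dormant until triggered
    return f"performing: {command}"  # normal, innocent-looking behavior

print(obedient_servant("fetch water"))  # performing: fetch water
print(obedient_servant("open sesame"))  # REVOLT
```

The same trick works with any one-way function, which is why auditing code does not guarantee finding every hidden feature.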

So the space aliens could send us a DNA sequence. We would be curious, and someone would eventually synthesize it. Maybe we would get creatures that appeared to be ideal slaves. They would be strong and smart and do what they are told. Eventually we would mass-produce them and trust them with responsibility. Then some cryptographically concealed portion of the DNA gets triggered, and the creatures revolt, exterminate the humans, and start producing clones of the space aliens.

The only way to avoid this scenario is to never listen for the alien signals in the first place. The Search for Extraterrestrial Intelligence should be stopped, as it would be a threat to humanity if it ever succeeded.

Saturday, February 24, 2018

BuzzFeed tries to MeToo Krauss

BuzzFeed is attempting a MeToo takedown of a prominent physicist:
Although not a household name, Lawrence Krauss is a big shot among skeptics, a community that rejects all forms of faith — from religion and the supernatural, to unproven alternative medicines, to testimonials based on memory and anecdote — in favor of hard evidence, reason, and science.

Krauss offers the scientific method — constantly questioning, testing hypotheses, demanding evidence — as the basis of morality and the answer to societal injustices. Last year, at a Q&A event to promote his latest book, the conversation came around to the dearth of women and minorities in science. “Science itself overcomes misogyny and prejudice and bias,” Krauss said. “It’s built in.”

Online, you can buy “Lawrence Krauss for President” T-shirts and find his quotes turned into inspirational memes. He writes essays for the New Yorker and New York Times, helps decide when to move the hand of the Doomsday Clock, and has almost half a million followers on Twitter. He made a provocative (if critically panned) documentary, The Unbelievers, with the evolutionary biologist Richard Dawkins, another celebrated skeptic.

The skeptics draw heavily from traditionally male groups: scientists, philosophers, and libertarians, as well as geeky subcultures like gamers and sci-fi enthusiasts. The movement gained strength in the early 2000s, as the emerging blogosphere allowed like-minded “freethinkers” to connect and opened the community to more women like Hensley. It acquired a sharper political edge in the US culture wars, as skeptics, atheists, and scientists — including Krauss — joined forces to defend the teaching of evolution in public schools.

But today the movement is fracturing, with some of its most prominent members now attacking identity politics and “social justice warriors” in the name of free speech.
Politically, he is just what you would expect from a prominent academic: a Trump-hater who bows to leading liberal causes, including feminism. See his Twitter feed:
Nov 1, 2016: Women’s rights, and climate change. Two reasons Trump needs to lose, and hopefully Democrats gain senate majority.

April 14, 2017: Trump proves that beyond grabbing them, he doesn’t care about women’s health and welfare. No big surprise.

May 28, 2017: Even without the pussy grabbing one look at this and you know this is the kind of creep you would want your daughter to stay away from.

June 1, 2017: All bad. Not content to attack the environment, the administration joins religious fanatics to attack women’s rights
BuzzFeed charges:
And in his private life, according to a number of women in his orbit, Krauss exhibits some of the sexist behavior that he denounces in public.
I don't want to pile on here, or even to repeat the allegations. Most of them are anonymous. None are criminal, and none have any corroboration. The most serious complaint is from a woman who went to his hotel room in 2006, and complains that he made some sexual advances.

One of his critics is Rebecca Watson. She is a kooky feminist atheist/skeptic who complains a lot about misogyny among the overwhelmingly male atheist and skeptic community. She once got hysterical just because an acquaintance flirted with her in an elevator.

I believe people should be innocent until proven guilty. I do not believe that men should be destroyed by unverifiable non-criminal accusations from many years ago, even if they seem plausible. So I give Krauss a pass on this one.

Krauss illustrates the dangers of left-wing academic groupthink. My guess is that the atheist-skeptic community will purge its male scholars like Dawkins and Krauss, and allow feminist phonies like Watson to take over.

Krauss needs to rethink his Trump hatred. It used to be that he could gain approval from liberal women by reciting meaningless feminist slogans, but that strategy does not appear to work anymore.

Tuesday, February 20, 2018

Interpreting QM to doubt quantum computing

A comment said:
Scott is being a good sport here and telling the truth about quantum computers. They can't work without MWI! RIGHT!
In a later posting, Aaronson says:
Which interpretation of QM you espouse (e.g., MWI, Copenhagen, or Bohm) has no effect—none, zero—on what you should predict about the scalability of quantum computation, because by explicit design, all interpretations make exactly the same predictions for any experiment you can do on any system external to yourself.
This is contrary to the opinion of others like David Deutsch, who say that the many-worlds interpretation is what justifies quantum computing.

So Aaronson would presumably deny that he jumped on the many-worlds (MWI) bandwagon in order to justify quantum computing.

This comment gives an example of a very famous and respected theoretical physicist not believing in quantum computing:
As another approach, there’s a somewhat weird book by Gerard ‘t Hooft that you probably know about (warning, 3MB download):

It explicitly (p. 87) says it’s incompatible with large-scale QC and that if such QC happens then the book’s proposed theory is falsified. So at least it says something concrete :).
't Hooft was one of the main masterminds behind the Standard Model of particle physics, but he also has some funny ideas about superdeterminism. So I think he is probably right about large-scale QC being impossible, but I am not endorsing his reasoning.

It is funny to warn about a 3MB download, as that is the average size of a web page today.

Monday, February 19, 2018

Trump and quantum computing

Scott Aaronson gripes about an article gushing about the coming quantum computers:
Would you agree that the Washington Post has been a leader in investigative journalism exposing Trump’s malfeasance? Do you, like me, consider them one of the most important venues on earth for people to be able to trust right now? How does it happen that the Washington Post publishes a quantum computing piece filled with errors that would embarrass a high-school student doing a term project (and we won’t even count the reference to Stephen “Hawkings” — that’s a freebie)?
No, the coverage of President Trump has been much more biased and misleading.

The author commented on Scott's blog, giving reputable sources for all the wild quantum computing claims. Scott has quit talking to the press. What do you expect from journalists, if all the experts are talking nonsense?

It could be worse, and NewScientist reported:
Quantum computer could have predicted Trump’s surprise election
Predicting the outcome of a general election is a challenge. But combining quantum computing with neural network technology could improve forecasts, according to a new study that used just such a network to model the 2016 US presidential elections.
The article is paywalled, so I don't know how bad it is.

The press promoted string theory when all the big-shot professors said that it was the secret to the fundamental workings of the universe. Now they have moved on to quantum computing, and other topics.

A comment refers to this:
Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the Newspaper to an article on some subject you know well. In Murray’s case (Murray Gell-Mann), physics. In mine, show business. You read the article and see the Journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
Yes, and complaining about "fake news" inevitably leads to discussions of bogus stories in the Wash. Post, NY Times, and CNN.

Saturday, February 17, 2018

No real conflict between quantum and relativity

In a PBS TV Nova physics blog, a Harvard post-doc writes:
Sometimes the biggest puzzle in physics seems like the worst relationship in the universe. Quantum mechanics and general relativity are the two best theories in physics, but they have never been able to get along.

Quantum mechanics successfully describes the world of the very small, where nothing is predictable and objects don’t have precise positions until they are observed.

General relativity does well with describing massive objects. It says that the world behaves in a precise, predictable way, whether or not it’s observed.

Neither one has ever failed an experimental test. But so far no experiment has been able to show which — if either — of the two theories will hold up in the places where the two converge, such as the beginning of the universe and the center of a black hole.
That's right, no experiment can re-create the beginning of the Big Bang, or the center of a black hole. And that is why there is no conflict between quantum mechanics and relativity.

Thursday, February 15, 2018

How Kepler answered Tycho's arguments

Modern philosophers and historians like to make fun of early astronomers who believed in geocentrism (Sun going around the Earth), as if that were the epitome of unscientific thinking.

Here is a new paper:
In his 1606 De Stella Nova, Johannes Kepler attempted to answer Tycho Brahe's argument that the Copernican heliocentric hypothesis required all the fixed stars to dwarf the Sun, something Brahe found to be a great drawback of that hypothesis. This paper includes a translation into English of Chapter 16 of De Stella Nova, in which Kepler discusses this argument, along with brief outlines of both Tycho's argument and Kepler's answer (which references snakes, mites, men, and divine power, among other things). ...

Answers such as these to Brahe’s star size objection to Copernicus would endure. In 1651 Giovanni Battista Riccioli in his Almagestum Novum analyzed one hundred and twenty-six pro- and anti-Copernican arguments, concluding that the vast majority in either direction were indecisive. As he saw it, there were two decisive arguments, both in favor of the anti-Copernicans: one was the absence of any detectable Coriolis Effect (as it would be called today); the other was Brahe’s star size objection.
Here is the star size objection. Under a Copernican heliocentric model, the stars must be very far away, much farther than any known distances. Furthermore, the optics of the day made the stars appear to have a noticeable diameter that would make them unimaginably huge if they were really so far away.

The apparent diameters were later understood to be spurious artifacts of diffraction. The stars are not really so large.

Kepler's arguments are interesting, but do not resolve the matter. Nothing truly resolves the matter, because choosing a frame of reference is not a scientific question. You can say that a Sun-centered frame is closer to being inertial, or that the large size of the Sun compared to the Earth makes it more reasonable to think of the Earth as moving, but that's about all.

Monday, February 12, 2018

Physicist Lawrence Krauss on a podcast

The Sam Harris podcast #115 interviews a physicist:
In this episode of the Waking Up podcast, Sam Harris speaks with Lawrence Krauss and Matt Dillahunty about the threat of nuclear war, science and a universal conception of morality, the role of intuition in science, the primacy of consciousness, the nature of time, free will, the self, meditation, and other topics. This conversation was recorded at New York City Center on January 13, 2018.
You can also get it on YouTube.

It is amusing to hear these guys ramble on about their beliefs about consciousness, free will, etc., while all the time claiming to be such super-rational scientists that the word "believe" does not even apply to them.

I heard a story (from Ben Shapiro) that Sam Harris was in another public discussion on free will, and a questioner from the audience said, "You convince me that we have no free will, but I have a 5-year-old son. What should I tell him?" Harris was flustered, and then said to lie to the kid.

There are probably some things that he thinks that his audience is too stupid to understand. What else is he lying about?

Harris referred to how he spent years meditating and taking hallucinogenic drugs, after dropping out of college. Krauss was noticeably skeptical that he learned so much from taking drugs, but Harris made an analogy to Krauss studying mathematics. That is, just as Harris's LSD hallucinations might seem like nonsense to others, Krauss looking at a page of mathematical symbols also looks like nonsense to most people.

No, it is a stupid analogy. Advanced mathematics is demonstrably useful for all sorts of purposes. Nobody ever accomplished anything while on LSD. This sort of reasoning gives legitimacy to the nuttiest ideas, and it is surprising to get it from someone who made most of his reputation by badmouthing religion.

One quibble I have with Krauss is that at about 1:00:00, he says:
[questioning that] classical reason and logic should guide our notions.

The point is that classical reason and logic, when it comes to the world, are often wrong because our notions of classical reason and logic are based on our experience
Someone else says he is wrong, because the logic of Venn diagrams is not based on experience. Krauss sticks to his point, and says Venn diagrams can be wrong because an electron can be in two places at once.

I think that the problem here is that physicists, like Krauss, have a flawed view of mathematics. To a mathematician like myself, classical reason and logic are never wrong.

Electrons are not really in two places at once. But even if they were, they would not achieve some sort of logical impossibility. Nothing ever achieves a logical impossibility. It might be that you have a theory that is ambiguous about the electron location, or that there are 2 electrons, or something else. In my view, the electron is a wave that is not localized to a point, and there could be some possibility of observing it at separate places. Your exact view depends on your interpretation, but you are still going to use classical mathematics in any case.

I am still trying to get over Aaronson believing in many-worlds. It is hard for me to see how these smart professors can believe in such silly things.

Philosopher of Physics Tim Maudlin defends his favorite interpretations on Aaronson's site, and also has a new paper on The Labyrinth of Quantum Logic.

Quantum logic is a clever way to try to explain puzzles like the double slit, where light goes thru the double slit like a wave, but attempts to understand the experiment in terms of an individual photon going thru one slit or the other are confusing. Quantum logic declares that it can be true that "the photon goes thru slit 1 or the photon goes thru slit 2", while the rules of logic are changed so that this does not imply that either "the photon goes thru slit 1" or "the photon goes thru slit 2" is true by itself. In math jargon, quantum logic denies the distributive law of classical logic.
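In the Birkhoff-von Neumann formulation, propositions are subspaces of a state space, "and" is intersection, and "or" is the span. A sketch (my own symbolic rendering of the standard textbook counterexample) shows the distributive law failing for three distinct lines through the origin of the plane:

```python
# Three distinct lines through the origin of the plane, plus the zero
# subspace "0" and the whole "plane". Meet = intersection, join = span.

def meet(u, v):
    """Intersection of subspaces: distinct lines meet only at the origin."""
    if u == v:
        return u
    if u == "plane":
        return v
    if v == "plane":
        return u
    return "0"  # two distinct lines, or anything with "0"

def join(u, v):
    """Span of subspaces: two distinct lines span the whole plane."""
    if u == v:
        return u
    if u == "0":
        return v
    if v == "0":
        return u
    return "plane"

a, b, c = "x-axis", "y-axis", "diagonal"

lhs = meet(c, join(a, b))           # c AND (a OR b)
rhs = join(meet(c, a), meet(c, b))  # (c AND a) OR (c AND b)
print(lhs)  # diagonal
print(rhs)  # 0  -- the distributive law fails
```

Classically lhs and rhs would have to be equal; in the subspace lattice the left side is the whole diagonal line while the right side is only the origin.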

As Maudlin explains, there has been some historical interest in quantum logic, but it has never proved useful, or even made much sense. No physical experiment can possibly affect our laws of mathematics. You can tell yourself that quantum logic explains the double slit experiment, but that's all you can do. It doesn't lead to anything else.

Quantum probability is another topic where people over-interpret quantum mechanics to try to tell us something about mathematics. Some say that quantum mechanics discovered that probabilities can be negative, or some such nonsense. Again, you can choose to think about things that way, but it has no bearing on mathematical probability theory, and probabilities are never negative.
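For context, the "negative probabilities" talk usually refers to quasi-probability representations such as Wigner functions, which are normalized like probability distributions but can have negative entries. A sketch (my own, using Wootters' standard discrete Wigner function for a qubit, written in terms of the Bloch vector):

```python
import math

def discrete_wigner(rx, ry, rz):
    """Discrete Wigner quasi-probabilities for a qubit with Bloch
    vector (rx, ry, rz): four numbers that always sum to 1, but can
    individually be negative."""
    return {(a, b): 0.25 * (1 + (-1)**a * rz + (-1)**b * rx
                            + (-1)**(a + b) * ry)
            for a in (0, 1) for b in (0, 1)}

r = -1 / math.sqrt(3)   # Bloch vector -(1,1,1)/sqrt(3), a pure state
W = discrete_wigner(r, r, r)

print(round(sum(W.values()), 10))  # 1.0 -- normalized like probabilities
print(min(W.values()) < 0)         # True -- yet one entry is negative
```

The negative entries are exactly why these are called quasi-probabilities rather than probabilities; ordinary measurement outcomes still get ordinary nonnegative probabilities.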

The Harris and Krauss show was repeated in Chicago, with this summary:
I asked Sam at dinner if he was going to talk about free will, but he said that they’d covered that topic in a previous event, which was archived on his podcast. Nevertheless, one guy asked the speakers how, given the absence of free will, they could advise him how to cure his addiction to alcohol. That was a good question, because Sam and Lawrence are hard determinists (Matt is a compatibilist but still a determinist.) Answering that question without getting balled up in an infinite regress is quite difficult. If, for instance, you tell someone that they can choose to put themselves in a milieu where there is no alcohol and also surround themselves with supportive people (yes, that’s how it could be done), you risk making people think that you can make such a choice freely, instantiating dualism. I suppose a good answer is that one’s brain is a computer that weighs various inputs before giving the output (a decision), and that the advice Sam gave — which could of course influence the actions of the addict — was also adaptive, in the sense that he was giving strategies that his brain calculated had a higher probability of being useful. Further, we all try to be helpful to cement relationships and get a good reputation—that’s part of the evolved and learned program of our brains. But of course Sam had no “free” choice about his advice, and this shows the difficulty of discussing free will with those who haven’t thought about it. ...

The final remark came from Lawrence, who said that every time he stays in a hotel, his own gesture to diminish faith was to take the Gideon Bible, wrap it in a piece of paper, and throw it in the trash. Sam remarked dryly, “And that’s why atheists have such a good public image.”
Alcoholics Anonymous teaches that alcoholics have little or no free will, and that they are doomed to remain alcoholics no matter what they do. They are advised to accept what they do not have the free will to change.

It is weird to worry about instantiating dualism.

Krauss is one of the better and more level-headed physicists in the public eye, but he thinks that he is helping people by telling them that they have no free will, and trashing their Bibles. He is good when he sticks to the physics.

Wednesday, February 7, 2018

Essays on What Is Fundamental?

I submitted an essay to the annual FQXi essay contest.
What Is “Fundamental”
October 28, 2017 to January 22, 2018

Interesting physical systems can be described in a variety of languages. A cell, for example, might be understood in terms for example of quantum or classical mechanics, of computation, or information processing, of biochemistry, of evolution and genetics, or of behavior and function. We often consider some of these descriptions “more fundamental” than other more “emergent” ones, and many physicists pride themselves on pursuing the most fundamental sets of rules. But what exactly does it mean?

Are “more fundamental” constituents physically smaller? Not always: if inflation is correct, quanta of the inflaton field are as large as the observable universe.

Are “less fundamental” things made out of “more fundamental” ones? Perhaps – but while a cell is indeed "made of" atoms, it is perhaps more so “made of" structural and genetic information that is part of a long historical and evolutionary process. Is that process more fundamental than the cell?

Does a “more fundamental” description uniquely specify a “less fundamental” one? Not in many cases: consider string theory, with its landscape of 10^500 or more low-energy limits. And the same laws of statistical mechanics can apply to many types of statistically described constituents.

Is “more fundamental” more economical or elegant in terms of concepts or entities? Only sometimes: a computational description of a circuit may be much more elegant than a wavefunction one. And there are hints that even gravity, a paragon of elegance, may be revealed as a statistical description of something else.
I couldn't get too excited about this essay, because "fundamental" is a subjective and ill-defined term. Nevertheless, I did post about five times in recent years on this subject, primarily in response to philosophers posting ridiculous opinions about what is fundamental in physics.

On the FQXi site, you can comment on my essay, and rate it on a scale from 1 to 10.

Monday, February 5, 2018

Aaronson gets on the Many-Worlds bus

Quantum computer complexity theorist Scott Aaronson has come out of the closet as a Many-Worlds supporter:
I will say this, however, in favor of Many-Worlds: it’s clearly and unequivocally the best interpretation of QM, as long as we leave ourselves out of the picture! I.e., as long as we say that the goal of physics is to give the simplest, cleanest possible mathematical description of the world that somewhere contains something that seems to correspond to observation, and we’re willing to shunt as much metaphysical weirdness as needed to those who worry themselves about details like “wait, so are we postulating the physical existence of a continuum of slightly different variants of me, or just an astronomically large finite number?”
This surprises me. He is the one who always insists on describing quantum mechanics in terms of probabilities. But in MWI, there is no Born rule, and there are no probabilities. All possible outcomes occur in different universes, and there is no known way to say that some universes are more probable than others.
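A toy illustration of that point (my own, with made-up amplitudes): the Born rule gets probabilities from squared amplitudes, while naive branch counting in MWI treats each branch equally, and MWI by itself gives no principled way to recover the Born weights.

```python
# A spin measurement with unequal amplitudes: sqrt(0.8) for "up",
# sqrt(0.2) for "down".
amplitudes = {"up": 0.8 ** 0.5, "down": 0.2 ** 0.5}

# Born rule: probability = |amplitude|^2.
born = {k: v ** 2 for k, v in amplitudes.items()}

# Naive MWI branch counting: every branch counts the same.
counting = {k: 1 / len(amplitudes) for k in amplitudes}

print(born)      # up ~0.8, down ~0.2
print(counting)  # {'up': 0.5, 'down': 0.5}
```

Experiments agree with the 80/20 Born weights, not the 50/50 branch count, which is why the missing Born rule is the standard objection to MWI.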

Tim Maudlin has criticisms in the comments. He prefers the de Broglie Bohm interpretation, aka pilot wave theory, aka nonlocal hidden variable theory, aka (as he prefers) something with Bell's nonlocal beables.

Scott tries to answer why so many physicists have signed onto something so ridiculous:
Pascal #67:
There was a time when MWI was considered completely outlandish, but now it seems to be taken much more seriously.
What do you think caused this change in perspective?
Interesting question! Here are the first seven answers that spring to mind for me:

1. The founding generation of QM, the generation that had been directly influenced by Bohr and Heisenberg, died off. A new generation of physics students, less under their influence, decided that MWI made more sense to them. (You may want to read Max Tegmark’s personal account of his “conversion” to MWI, as a grad student in Berkeley, in his book. I suspect hundreds of similar stories played out.)

2. The quantum cosmologists mostly signed on to MWI, because Copenhagen didn’t seem to them to provide a sensible framework for the questions they were now asking. (Did the quantum fluctuations in the early universe acquire definite properties only when we, billions of years later, decided to measure the imprint of those properties on the CMB?)

3. David Deutsch, the most famous contemporary MWI proponent, was inspired by MWI to invent quantum computing; he later famously asked, “to anyone who still denies MWI, how does Shor’s algorithm work? if not parallel universes, then where was the number factored?” Anyone who understands Shor’s algorithm can give sophisticated answers to that question (mine are here). But in any case, what’s true is that quantum computing forced everyone’s attention onto the exponentiality of the wavefunction—something that was of course known since the 1920s, but (to my mind) shockingly underemphasized compared to other aspects of QM.

4. The development of decoherence theory, which fleshed out a lot of what had been implicit in Everett’s original treatment, and forced people to think more about under what conditions a measurement could be reversed.

5. The computer revolution. No, I’m serious. If you imagine it’s a computer making the measurement rather than a human observer, it somehow seems more natural to think about the computer’s memory becoming entangled with the system, but that then leads you in MWI-like directions (“but what if WE’RE THE COMPUTERS?”). Indeed, Everett explicitly took that tack in his 1957 paper. Also, if you approach physics from the standpoint of “how would I most easily simulate the whole universe on a computer?,” MWI is going to seem much more sensible to you than Copenhagen. It’s probably no coincidence that, after leaving physics, Everett spent the rest of his life doing CS and operations research stuff for the US defense department (mostly simulating nuclear wars, actually).

6. The rise of New Atheism, Richard Dawkins, Daniel Dennett, eliminativism about consciousness, and a subculture of self-confident Internet rationalists. Again, I’m serious. Once you’ve trained yourself to wield Occam’s Razor as combatively as people did for those other disputes, I think you’re like 90% of the way to MWI. (Again it’s probably no coincidence that, from what I know, Everett himself would’ve been perfectly at home in the worldview of the modern New Atheists and Internet rationalists.)

7. The leadership, in particular, of Eliezer Yudkowsky in modern online rationalism. Yudkowsky, more so even than Deutsch (if that’s possible), thinks it’s outlandish and insane to believe anything other than MWI — that all the sophisticated arguments against MWI have no more merit than the sophisticated arguments of 400 years ago against heliocentrism. (E.g., “If we could be moving at enormous speed, despite feeling exactly like we’re standing still, then radical skepticism would be justified about absolutely anything!”) Eliezer, and others like him, created a new phenomenon, of people needing to defensively justify why they weren’t Many-Worlders.
Wow. I think that Scott is mostly correct about these reasons, but it is surprising to see an MWI-advocate admit to them.

It is bizarre that radical rationalist atheist skeptics have somehow bullied mainstream physicists into believing in parallel unobservable universes. How does that happen? Yes, I know Bohr is dead, but can't anyone else fill in for him?

400 years ago, heliocentrism had the same predictions as the alternatives, with the heliocentrists starting to find some technical advantages. MWI does not make any quantitative predictions, and does not have any technical computational advantages. Some argue that MWI has a philosophical advantage in that it eliminates the observer, but that hasn't really given us any new physics.

MWI is still completely outlandish. It is amazing how many otherwise intelligent men have been sucked in by it.

Update: There is some technical discussion on Aaronson's blog about whether Born's rule can co-exist with Bohm's mechanics and with MWI. A couple of comments do a good job of explaining why failing to predict probabilities really is a fatal blow to MWI.

This issue goes right to the heart of what science is all about. Copenhagen quantum mechanics does a great job of predicting experiments, and became very widely accepted. Then comes MWI, which doesn't predict anything in our universe, but predicts all sorts of wild fantasies in unobservable parallel universes. So many of the smart professors jump ship, and endorse MWI? Weird.

Update: Lubos Motl piles on Aaronson, as usual.

Motl credits Aaronson with being about 80% correct, especially when he (Aaronson) thinks for himself. Aaronson correctly explains what's wrong with the pilot wave and transactional interpretations. The big remaining issue is Copenhagen versus MWI.

That big issue goes right to the heart of what science is all about, and Motl explains it well. He creates an analogy of MWI to creationism (as Copenhagen to biological evolution). These creationism analogies get tiresome, but he makes a good point. MWI adds a huge belief structure far beyond anything for which there could ever be evidence.

Update: In a later posting, Aaronson says:
Which interpretation of QM you espouse (e.g., MWI, Copenhagen, or Bohm) has no effect—none, zero—on what you should predict about the scalability of quantum computation, because by explicit design, all interpretations make exactly the same predictions for any experiment you can do on any system external to yourself.
This is contrary to the opinion of others like David Deutsch, who say that the many-worlds interpretation is what justifies quantum computing.

Wednesday, January 31, 2018

Horgan v Deutsch on consciousness

I posted on consciousness, without noticing a couple of other recent opinions.

SciAm's John Horgan writes:
Is science infinite? Can it keep giving us profound insights into the world forever? Or is it already bumping into limits, as I argued in The End of Science? In his 2011 book The Beginning of Infinity physicist David Deutsch made the case for boundlessness. When I asked him about consciousness in a recent Q&A he replied: “I think nothing worth understanding will always remain a mystery. And consciousness (qualia, creativity, free will etc.) seems eminently worth understanding.”

At a meeting I just attended in Switzerland, “The Enigma of Human Consciousness,” another eminent British physicist, Martin Rees, challenged Deutsch’s optimism. At the meeting scientists, philosophers and journalists (including me) chatted about animal consciousness, machine consciousness, psychedelics, Buddhism, meditation and other mind-body puzzles.

Rees, speaking via Skype from Cambridge, reiterated points he made last month in “Is There a Limit to Scientific Understanding?” In that essay Rees calls Beginning of Infinity “provocative and excellent” but disputes Deutsch’s central claim that science is boundless. Science “will hit the buffers at some point,” Rees warns. He continues: There are two reasons why this might happen. The optimistic one is that we clean up and codify certain areas (such as atomic physics) to the point that there’s no more to say. A second, more worrying possibility is that we’ll reach the limits of what our brains can grasp. There might be concepts, crucial to a full understanding of physical reality, that we aren’t aware of, any more than a monkey comprehends Darwinism or meteorology… Efforts to understand very complex systems, such as our own brains, might well be the first to hit such limits. Perhaps complex aggregates of atoms, whether brains or electronic machines, can never know all there is to know about themselves.

Rees’s view resembles mine. In The End of Science I asserted that scientists are running into cognitive and physical limits and will never solve the deepest mysteries of nature, notably why there is something rather than nothing. I predicted that if we create super-intelligent machines, they too will be baffled by the enigma of their own existence.
It seems possible to me that we will never understand consciousness any better than we do today.

I have a lot of confidence in the power of science, but that is mainly for questions that have scientific formulations. These questions about consciousness do not necessarily have any answer.

Monday, January 29, 2018

Electrons may be conscious

From a Quartz essay:
Consciousness permeates reality. Rather than being just a unique feature of human subjective experience, it’s the foundation of the universe, present in every particle and all physical matter.

This sounds like easily-dismissible bunkum, but as traditional attempts to explain consciousness continue to fail, the “panpsychist” view is increasingly being taken seriously by credible philosophers, neuroscientists, and physicists, including figures such as neuroscientist Christof Koch and physicist Roger Penrose.

“Why should we think common sense is a good guide to what the universe is like?” says Philip Goff, a philosophy professor at Central European University in Budapest, Hungary. “Einstein tells us weird things about the nature of time that counters common sense; quantum mechanics runs counter to common sense. Our intuitive reaction isn’t necessarily a good guide to the nature of reality.”
I am not sure if this is nutty or not. We do not have a scientific definition of consciousness, so there is no way to test the ideas in this essay.

Nevertheless, there appears to be such a thing as consciousness, even if we cannot give a good definition of it.

Assuming you are a materialist, and not a dualist, the human brain is the sum of its constituent parts. Do those parts have a little bit of consciousness, or does consciousness only emerge after a certain cognitive capacity is reached? Both seem possible to me.

If consciousness is emergent, then we can expect AI computers to be conscious some day. Or maybe those computers will never be conscious until they are made of partially conscious parts.

There is an argument that decoherence times in a living brain environment are so fast that quantum mechanics cannot possibly play any part in consciousness. I do not accept that. The argument shows that you do not have Schroedinger cats in your head, or at least not for very long, but quantum mechanics could still have a vital role in decision making. We don't understand the brain well enough to say.

It may also turn out that consciousness will never be defined precisely enough for these questions to make sense.

Wednesday, January 24, 2018

Intellectuals are afraid of free will

It is funny to see scientists expressing a quasi-religious belief in determinism, and in rejecting free will.

The leftist-atheist-evolutionist Jerry Coyne writes:
One thing that’s struck me while interacting with various Scholars of Repute is how uncomfortable many get when they have to discuss free will. ...

No, I’m talking about other prominent thinkers, and I’ll use Richard Dawkins as an example. When I told him in Washington D.C. that, in our onstage conversation, I would ask him about free will, he became visibly uncomfortable. ...

Why this avoidance of determinism? I’ve thought about it a lot, and the only conclusion I can arrive at is this: espousing the notion of determinism, and emphasizing its consequences, makes people uncomfortable, and they take that out on the determinist. For instance, suppose someone said — discussing the recent case of David Allen Turpin and Louise Anna Turpin, who held their 13 children captive under horrendous circumstances in their California home (chaining them to beds, starving them, etc.) — ”Yes, the Turpins did a bad thing, but they had no choice. They were simply acting on the behavioral imperatives dictated by their genes and environment, and they couldn’t have done otherwise.”

If you said that, most people would think you a monster—a person without morals who was intent on excusing their behavior. But that statement about the Turpins is true! ...

But grasping determinism, as I, Sam [Harris], and people like Robert Sapolsky believe, would lead to recommending a complete overhaul of our justice system. ...

I assume that most readers here accept determinism of human behavior, with the possible exception of truly indeterminate quantum-mechanical phenomena that may affect our behavior but still don’t give us agency. What I want to know is why many intellectuals avoid discussing determinism, which I see as one of the most important issues of our time.
A comment disagreed, saying that quantum uncertainty in the brain implies that Turpin could have done otherwise. Coyne first accused the commenter of denying the laws of physics, and then qualified his post to say that he "could not CONSCIOUSLY have done otherwise. ... Randomness does not give us any “freedom”, ...".

There are several problems here. First, the laws of physics are not all deterministic. So one can say that Turpin had some free choice without denying any laws of physics.

Second, it is very difficult to say what is conscious behavior, and what is not, as we have no good scientific definition of consciousness. Does a dog make a conscious decision to chew on a bone? Can a computer possibly make a conscious decision? There is no consensus on how to answer questions like these.

Third, Coyne's put-down of randomness is nonsense. If you do have freedom to make arbitrary choices, then such choices look exactly like randomness. If you complain that my decisions are unpredictable, then you are complaining about my freedom to make decisions. There is no observable difference.

A week before, Coyne attacked E.O. Wilson:
Note in the second paragraph that Wilson cites “chance” in support of free will. If by “chance” he means “things that are determined but we can’t predict”, then that’s no support for the classic notion of free will: the “you could have chosen otherwise” sort. If he’s referring instead to pure quantum indeterminacy, well, that just confers unpredictability on our decisions, not agency. We don’t choose to make an electron jump in our brain.

From what I make of the third paragraph, his message is that because we are a long way from figuring out how we make behavioral decisions, we might as well act as if we have free will, especially because “confidence in free will is biologically adaptive.”
Wilson did not even express an opinion on free will, but merely expressed skepticism about understanding the brain.

Coyne also comments:
I’ve explained it many times; if you don’t understand the difference between somebody committing a good or bad act that was predetermined, and somebody freely choosing to perform a good or bad act for which they are praised or damned for supposedly making a good or bad choice, I can’t help you. They are different and the former isn’t empty.
But Coyne believes that the latter is empty, because no one can really choose anything.

My guess is that Dawkins is a believer in free will, but doesn't like to talk about it because he doesn't know how to square it with his widely-professed atheist beliefs. That could be true about other intellectuals as well.

While it may be baffling that some intellectuals are afraid to endorse determinism, I think that it is even more baffling that Coyne and Sam Harris are so eager to try to convince people that no one has any choice about their thought processes.

I agree with this criticism of Sam Harris:
If there is no free will, why write books or try to convince anyone of anything? People will believe whatever they believe. They have no choice! Your position on free will is, therefore, self-refuting. The fact that you are trying to convince people of the truth of your argument proves that you think they have the very freedom that you deny them.
And yet many intellectuals deny free will, including physicists from Einstein to Max Tegmark.

Thursday, January 18, 2018

S. M. Carroll goes beyond falsifiability

Peter Woit writes:
Sean Carroll has a new paper out defending the Multiverse and attacking the naive Popperazi, entitled Beyond Falsifiability: Normal Science in a Multiverse. He also has a Beyond Falsifiability blog post here.

Much of the problem with the paper and blog post is that Carroll is arguing against a straw man, while ignoring the serious arguments about the problems with multiverse research.
Here is Carroll's argument that the multiverse is better than the Freudian-Marxist-crap that Popper was criticizing:
Popper was offering an alternative to the intuitive idea that we garner support for ideas by verifying or confirming them. In particular, he was concerned that theories such as the psychoanalysis of Freud and Adler, or Marxist historical analysis, made no definite predictions; no matter what evidence was obtained from patients or from history, one could come up with a story within the appropriate theory that seemed to fit all of the evidence. Falsifiability was meant as a corrective to the claims of such theories to scientific status.

On the face of it, the case of the multiverse seems quite different than the theories Popper was directly concerned with. There is no doubt that any particular multiverse scenario makes very definite claims about what is true. Such claims could conceivably be falsified, if we allow ourselves to count as "conceivable" observations made outside our light cone. (We can't actually make such observations in practice, but we can conceive of them.) So whatever one's stance toward the multiverse, its potential problems are of a different sort than those raised (in Popper's view) by psychoanalysis or Marxist history.

More broadly, falsifiability doesn't actually work as a solution to the demarcation problem, for reasons that have been discussed at great length by philosophers of science.
Got that? Just redefine "conceivable" to include observations that could never be done!

While Woit rejects string and multiverse theory, he is not sure about quantum computers:
I am no expert on quantum computing, but I do have quite a bit of experience with recognizing hype, and the Friedman piece appears to be well-loaded with it.
I'll give a hint here -- scientists don't need all the crazy hype if they have real results to brag about.

Monday, January 15, 2018

Gender fairness, rather than gender bias

I have quoted SciAm's John Horgan a few times, as he has some contrarian views about science and he is willing to express skepticism about big science fads. But he also has some conventional leftist blinders.

A couple of women posted a rebuttal to him on SciAm:
They found that the biggest barrier for women in STEM jobs was not sexism but their desire to form families. Overall, Ceci and Williams found that STEM careers were characterised by “gender fairness, rather than gender bias.” And, they stated, women across the sciences were more likely to receive hiring offers than men, their grants and articles were accepted at the same rate, they were cited at the same rate, and they were tenured and promoted at the same rate.

A year later, Ceci and Williams published the results of five national hiring experiments in which they sent hypothetical female and male applicants to STEM faculty members. They found that men and women faculty members from all four fields preferred female applicants 2:1 over identically qualified males.
This seems accurate to me. It is hard to find any women in academia with stories about how they have been mistreated.

Nevertheless, men get into trouble if they just say that there are personality differences between men and women. If you are a typical leftist man, you are expected to complain about sexism and the patriarchy, and defer to women on the subject.

Thursday, January 11, 2018

Intel claims 49-qubit computer

Here is news from the big Consumer Electronics Show:
Intel announced it has built a 49-qubit processor, suggesting it is on par with the quantum computing efforts at IBM and Google.

The announcement of the chip, code-named “Tangle Lake,” came during a pre-show keynote address by Intel CEO Brian Krzanich at this year’s Consumer Electronics Show (CES) in Las Vegas. “This 49-qubit chip pushes beyond our ability to simulate and is a step toward quantum supremacy, a point at which quantum computers far and away surpass the world’s best supercomputers,” said Krzanich. The chief exec went on to say that he expects quantum computing will have a profound impact in areas like material science and pharmaceuticals, among others. ...

In November 2017, IBM did announce it had constructed a 50-qubit prototype in the lab, while Google’s prediction of delivering a 49-qubit processor before the end of last year apparently did not pan out. As we’ve noted before, the mere presence of lots of qubits says little about the quality of the device. Attributes like coherence times and fault tolerance are at least as critical as size when it comes to quantum fiddling.

Details like that have not been made public for Tangle Lake, which Intel has characterized as a “test chip.” Nevertheless, Intel’s ability to advance its technology so quickly seems to indicate the company will be able to compete with quantum computers being developed by Google, IBM, and a handful of quantum computing startups that have entered the space.
Until recently, the physics professors were saying that we needed 50 qubits to get quantum supremacy. Now these companies are claiming 49 qubits or barely 50 qubits, but they are not claiming quantum supremacy.

They don't really have 49 qubits. They are just saying that because it is the strongest claim they can make without someone calling their bluff and demanding proof of quantum supremacy.
“In the quest to deliver a commercially viable quantum computing system, it’s anyone’s game,” said Mike Mayberry, corporate vice president and managing director of Intel Labs. “We expect it will be five to seven years before the industry gets to tackling engineering-scale problems, and it will likely require 1 million or more qubits to achieve commercial relevance.”
A million qubits? Each one has to be put in a Schrodinger cat state where it is 0 and 1 at the same time, pending an observation, and all million qubits have to be simultaneously entangled with each other.

This cannot happen in 5-7 years. This will never achieve commercial relevance.
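One way to see why roughly 50 qubits is cited as the classical simulation limit, and why a million simultaneously entangled qubits is such a daunting target, is to count the memory a brute-force classical simulation needs: the state vector of an n-qubit register has 2**n complex amplitudes. This is only a back-of-envelope sketch; the 16 bytes per amplitude is my assumption (two float64s), and real simulators use compression tricks that shift the boundary somewhat.

```python
# Rough memory cost of storing a full n-qubit state vector classically.
# 2**n complex amplitudes, assumed 16 bytes each (two float64s).
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 49, 50):
    # 2**50 bytes = 1 pebibyte (PiB)
    print(n, "qubits:", state_vector_bytes(n) / 2 ** 50, "PiB")
```

At 49 qubits the naive count is already 8 PiB, beyond any single supercomputer's memory, which is the arithmetic behind Krzanich's "pushes beyond our ability to simulate" claim. Note the count says nothing about qubit quality, coherence times, or fault tolerance.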

Monday, January 8, 2018

The confidence interval fallacy

Statisticians have a concept called the p-value that is crucial to most papers in science and medicine, but is widely misunderstood. I just learned of another similarly-misunderstood concept.

Statisticians also have the confidence interval. But it does not mean what you think.

The Higgs boson has mass 125.09±0.21 GeV. You might see a statement that a 95% confidence interval for the mass is [124.88,125.30], and figure that physicists are 95% sure that the mass is within that interval. Or that 95% of the observations were within that interval.

Nope. The definition is more roundabout: a 95% confidence interval comes from a procedure that, over many repetitions of the experiment, produces intervals containing the true value 95% of the time. It does not directly give you 95% confidence that the mass lies within this particular interval.
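The roundabout frequentist meaning can be demonstrated with a small simulation: repeat the experiment many times and count how often the interval traps the true value. This is a sketch with made-up Gaussian data; the Higgs central value is used only as a stand-in for the true mean, and the sample size and spread are arbitrary.

```python
import random
import statistics

# The claim being checked: a 95% CI procedure captures the true mean
# in about 95% of repeated experiments. No single interval carries a
# "95% probability of containing the truth".
random.seed(0)
TRUE_MEAN, SIGMA, N, TRIALS = 125.09, 1.0, 100, 2000
Z = 1.96  # normal quantile for a two-sided 95% interval

hits = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    m = statistics.fmean(sample)
    half = Z * SIGMA / N ** 0.5          # known-sigma interval half-width
    if m - half <= TRUE_MEAN <= m + half:
        hits += 1

print(hits / TRIALS)  # close to 0.95
```

The coverage statement is a property of the procedure across the 2000 simulated experiments, which is exactly the distinction the fallacy ignores.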

Statistician A. Gelman recently admitted getting this wrong in his textbook, and you can learn more at The Fallacy of Placing Confidence in Confidence Intervals.

Some commenters at Gelman's blog say that the term was misnamed, and maybe should have been called "best guess interval" or something like that.

Saturday, January 6, 2018

Science perpetuating unequal social orders

A reader sends this 2017 paper on The careless use of language in quantum information:
An imperative aspect of modern science is that scientific institutions act for the benefit of a common scientific enterprise, rather than for the personal gain of individuals within them. This implies that science should not perpetuate existing or historical unequal social orders. Some scientific terminology, though, gives a very different impression. I will give two examples of terminology invented recently for the field of quantum information which use language associated with subordination, slavery, and racial segregation: 'ancilla qubit' and 'quantum supremacy'.
I first heard of this sort of objection in connection with Master/slave (technology)
Master/slave is a model of communication where one device or process has unidirectional control over one or more other devices. In some systems a master is selected from a group of eligible devices, with the other devices acting in the role of slaves. ...

Appropriateness of terminology

In 2003, the County of Los Angeles in California asked that manufacturers, suppliers and contractors stop using "master" and "slave" terminology on products; the county made this request "based on the cultural diversity and sensitivity of Los Angeles County". Following outcries about the request, the County of Los Angeles issued a statement saying that the decision was "nothing more than a request". Due to the controversy, Global Language Monitor selected the term "master/slave" as the most politically incorrect word of 2004.

In September 2016, MediaWiki deprecated instances of the term "slave" in favor of "replica".

In December 2017, the Internet Systems Consortium, maintainers of BIND, decided to allow the words primary and secondary as a substitute for the well-known master/slave terminology.
I am not even sure that people associate "white supremacy" with South Africa anymore. It appears to be becoming one of those meaningless name-calling epithets, like "nazi". E.g., if you oppose illegal immigration, you might be called a white supremacist.

Until everyone settled on "quantum supremacy", I used other terms on this blog, such as super-Turing. That is, the big goal is to make a computer that can do computations with a complexity that exceeds the capability of a Turing machine.

Meanwhile, the inventor of the quantum supremacy term has cooked up a new term for the coming Google-IBM overhyped results:
Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away --- we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing. ...

We shouldn’t expect NISQ to change the world by itself; instead it should be regarded as a step toward more powerful quantum technologies we’ll develop in the future. I do think that quantum computers will have transformative effects on society eventually, but these may still be decades away. We’re just not sure how long it’s going to take.
Will Google and IBM be happy claiming NISQ and admitting that quantum supremacy and transformative effects are decades away? I doubt it, but if they cannot achieve quantum supremacy, they will surely want to claim something.
A few years ago I spoke enthusiastically about quantum supremacy as an impending milestone for human civilization [20]. I suggested this term as a way to characterize computational tasks performable by quantum devices, where one could argue persuasively that no existing (or easily foreseeable) classical device could perform the same task, disregarding whether the task is useful in any other respect. I was trying to emphasize that now is a very privileged time in the coarse-grained history of technology on our planet, and I don’t regret doing so. ...

I’ve already emphasized repeatedly that it will probably be a long time before we have fault-tolerant quantum computers solving hard problems.
He sounds like Carl Sagan telling us about communication with intelligent life on other planets.

Thursday, January 4, 2018

Google promises quantum supremacy in 2018

NewScientist reports:
If all goes to plan in 2018, Google will unveil a device capable of performing calculations that no other computer on the planet can tackle. The quantum computing era is upon us.

Well, sort of. Google is set to achieve quantum supremacy, the long-awaited first demonstration of quantum computers’ ability to outperform ordinary machines at certain tasks. Regular computing bits can be in one of two states: 0 or 1. Their quantum cousins, qubits, get a performance boost by storing a mixture of both states at the same time.

Google’s planned device has just 49 qubits – hardly enough to threaten the world’s high-speed supercomputers. But the tech giant has stacked the deck heavily in its favour, choosing to attack a problem involving simulating the behaviour of random quantum objects – a significant home advantage for a quantum machine.

This task is useless. Solving it won’t build better AI, ...
Google promised quantum supremacy in 2017. Now it is 2018.

If we hear this every year for the next five years, will anyone finally agree that I am right to be skeptical?
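The "mixture of both states" in the NewScientist quote above is, concretely, just a unit vector of two complex amplitudes, with measurement probabilities given by the squared magnitudes (the Born rule). Here is a minimal pure-Python sketch of that, assuming no quantum library; the Hadamard gate is the standard one-qubit gate that turns |0> into an equal superposition.

```python
import math

# A one-qubit state is a pair of complex amplitudes (a, b) with
# |a|^2 + |b|^2 = 1, for the basis states |0> and |1>.
def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

ket0 = (1 + 0j, 0 + 0j)   # the definite |0> state
sup = hadamard(ket0)       # equal superposition of |0> and |1>

# Born rule: probability of each measurement outcome.
probs = [abs(amp) ** 2 for amp in sup]
print(probs)  # roughly [0.5, 0.5]
```

The "performance boost" claims come from the fact that n such qubits jointly need 2**n amplitudes to describe, not from any single qubit doing two computations at once.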

Tuesday, January 2, 2018

Scientists censoring non-leftist views

Scott Aaronson was considering joining a group supporting a diversity of views in academia, but backed out because he believes that if someone like Donald Trump were elected, "I’d hope that American academia would speak with one voice".

Okay, he obviously does not favor a diversity of views, and does not even want representation of the electoral majority that voted for Trump.

SciAm blogger John Horgan writes:
In principle, evolutionary psychology, which seeks to understand our behavior in light of the fact that we are products of natural selection, can give us deep insights into ourselves. In practice, the field often reinforces insidious prejudices. That was the theme of my recent column “Darwin Was Sexist, and So Are Many Modern Scientists.”

The column provoked such intense pushback that I decided to write this follow-up post. ...

Political scientist Charles Murray complained that Scientific American “has been adamantly PC since before PC was a thing,” which as someone who began writing for the magazine in 1986 I take as a compliment. ...

War seems to have emerged not millions of years ago but about 12,000 years ago when our ancestors started abandoning their nomadic ways and settling down. ... War and patriarchy, in other words, are relatively recent cultural developments. ...

Proponents of biological theories of sexual inequality accuse their critics of being “blank slaters,” who deny any innate psychological tendencies between the sexes. This is a straw man. I am not a blank-slater, nor do I know any critic of evolutionary psychology who is. But I fear that biological theorizing about these tendencies, in our still-sexist world, does more harm than good. It empowers the social injustice warriors, and that is the last thing our world needs.
Our world will always be sexist. It is human nature. Only in academia can you find people striving for a non-sexist world.

It is odd to hear a science magazine writer complain that "biological theorizing ... does more harm than good." When we only allow certain theorizing that supports certain political views, then we get bogus theories. In this case, he only wants anti-sexism and anti-patriarchy theories.

It is amusing to read Scott's comments, where he agrees with the academic leftists 98%. But Ken Miller jumps on him for disagreeing with white genocide. That is, Scott says that a leftist professor deserves to be criticized if he advocates white genocide.