Monday, December 30, 2019

Reasons to be a quantum computing skeptic

Craig Costello gave this TED Talk on how quantum computers will break the modern cryptography that is used in smart phones and everything else.

He says public key cryptography depends on the difficulty of factoring integers, and quantum computers will crack that in 10 to 30 years. This poses a risk today because we have military secrets that are supposed to be kept for longer than that, and our enemies may already be stockpiling intercepted data in the hopes that a quantum computer will decode them some day.
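To see concretely what is at stake, here is a toy RSA sketch in Python. This is my own illustration, not Costello's; the primes are textbook-sized and absurdly insecure, but the logic is the same: whoever can factor the public modulus recovers the private key, and Shor's algorithm on a large quantum computer would do exactly that.

```python
# Toy RSA, for illustration only. Real keys use primes hundreds of
# digits long, so that factoring n is infeasible classically --
# but not for Shor's algorithm on a large quantum computer.
p, q = 61, 53
n = p * q                    # public modulus (3233)
e = 17                       # public exponent
phi = (p - 1) * (q - 1)      # computable only if you know the factors of n
d = pow(e, -1, phi)          # private exponent (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)      # anyone can encrypt with the public pair (n, e)
assert pow(cipher, d, n) == msg

# An eavesdropper who can factor n recovers the private key:
f = next(k for k in range(2, n) if n % k == 0)
d_spy = pow(e, -1, (f - 1) * (n // f - 1))
assert pow(cipher, d_spy, n) == msg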

He ended saying, "No matter what technological future we live in, our secrets will always be part of our humanity, and that is worth protecting."

Really? 30-year military secrets are part of our humanity?

Okay, he is just trying to sell his crypto work. He is a cryptographer working on ideas in search of a practical application. That is not why I am posting this.

To convince the TED audience that existing crypto methods are insecure, he explained that the vulnerability comes from some startling 20th-century physics discoveries that cryptographers did not account for:

(1) a proton can be in two places at once.
(2) two objects, on opposite sides of the universe, can influence each other instantaneously.
(3) a computer can make use of a calculation in a parallel universe.

When were these discovered? Who got the Nobel prizes for these discoveries?

This is why I am a quantum computing skeptic.

Academic cryptography became irrelevant to the real world in the 1990s when all the major problems got solved. The quantum computing hype neatly aligns with research justifications and government grants.

But my real skepticism is based on the fact that the arguments for quantum computing depend on goofy interpretations of physics that are not backed up by experiment.

Yes, one can believe in some standard interpretation of QM, such as textbook/Copenhagen/QBism, and still believe in quantum computing, but then quantum computing just seems like a wild conjecture.

Many physicists firmly believe that quantum computing is just an engineering problem. When they try to convince you, they nearly always rely on one or more of the above three "discoveries".

Saturday, December 28, 2019

New essay contest on unpredictability

FQXi announces:
At FQXi we're excited to launch our latest essay contest, with generous support from the Fetzer Franklin Fund and the Peter and Patricia Gruber Foundation. The topic for this contest is: Undecidability, Uncomputability, and Unpredictability.

For a brief time in history, it was possible to imagine that a sufficiently advanced intellect could, given sufficient time and resources, in principle understand how to mathematically prove everything that was true. They could discern what math corresponds to physical laws, and use those laws to predict anything that happens before it happens. That time has passed. Gödel’s undecidability results (the incompleteness theorems), Turing’s proof of non-computable values, the formulation of quantum theory, chaos, and other developments over the past century have shown that there are rigorous arguments limiting what we can prove, compute, and predict. While some connections between these results have come to light, many remain obscure, and the implications are unclear. Are there, for example, real consequences for physics — including quantum mechanics — of undecidability and non-computability? Are there implications for our understanding of the relations between agency, intelligence, mind, and the physical world?

In this essay contest, we open the floor for investigations of such connections, implications, and speculations. We invite rigorous but bold and open-minded investigation of the meaning of these impossibilities for reality, and for us, its residents. The contest is open now, and we will be accepting entries until March 16th.
The contest is open to anyone, but the judging rules are slanted towards FQXi's own members, and the panel of judges is secret.

Last time I submitted an essay, it was summarily rejected without explanation.

When was that "brief time in history"? The 19th century, I guess. By the 1930s, we knew about chaos, undecidability, etc.

Did scientists in the 19C really believe that someday a computer could be programmed to determine all mathematical truths and predict all physical phenomena? I doubt it. That would require a belief in an extreme form of determinism, and a depressing view of humanity. We would all be pre-programmed robots. Some man's brilliant mathematical idea would be no better than memorized digits of pi. A computer could do it better.

My hunch is that 19C scientists believed that humans were better than just robots, and that there were limits to knowledge.

If I told them that in 2019 we would have useful 5-day weather forecasts, would they have argued it should be possible to forecast weather months or years in advance? I doubt it.

When quantum mechanics was discovered in the 1920s, it described physics on an atomic level, as previously not possible. Scientists learned that they could make amazingly precise predictions, and that there were fundamental uncertainties blocking other types of predictions. Which discovery was more surprising? My guess is that the ability to make precise predictions was much more surprising.

Does any of this relate to agency and intelligence? I will have to think about it.

Thursday, December 26, 2019

Obsessed with quantum understanding

Physicist Stephen Boughn writes Why Are We Obsessed with "Understanding" Quantum Mechanics?:
Richard Feynman famously declared, "I think that I can safely say that nobody really understands quantum mechanics." Sean Carroll lamented the persistence of this sentiment in a recent opinion piece entitled, "Even Physicists Don't Understand Quantum Mechanics. Worse, they don't seem to want to understand it." Quantum mechanics is arguably the greatest achievement of modern science and by and large we absolutely understand quantum theory. Rather, the "understanding" to which these two statements evidently refer concerns the ontological status of theoretical constructs.
I agree with this. Quantum mechanics has been well-understood since 1930. It has become fashionable to rant about quantum mysteries, but some very smart physicists pondered those mysteries in the 1930s, and came to conclusions that are much more sensible than anything Carroll and other modern quantum expositors say. That is also Boughn's view:
I confess that during my student days, and even thereafter, I was mightily bothered by these quantum mysteries and enjoyed spending time and effort worrying about them. Of course, as Carroll also laments, I avoided letting this avocation interfere with my regular physics research, otherwise, my academic career undoubtedly would have suffered. As I approached the twilight of my career (I'm now retired), I happily resumed my ambition to "understand quantum mechanics" and have ended up writing several papers on the subject. Furthermore, as others before me, I now proudly profess that I finally understand quantum mechanics ☺. Even so, I'm somewhat chagrinned that my understanding is essentially the same as that expressed by Niels Bohr in the 1930s, minus some of Bohr's more philosophical trappings. Some have criticized my epiphany with remarks to the effect that I am too dismissive of the wonderful mysteries of quantum mechanics by relegating its role to that of an algorithm for making predictions while at the same time too reverent of insights provided by classical mechanics. I've come to believe that, quite to the contrary, those who still pursue an understanding of Carroll's quantum riddles are burdened with a classical view of reality and fail to truly embrace the fundamental quantum aspects of nature.
Again, my experience is similar. I used to accept this story that there are great quantum mysteries that we need to solve with research into quantum foundations. But the problem is that people like Sean M. Carroll just don't want to accept quantum mechanics, and want to fit it into a classical physics paradigm.

Carroll subscribes to many-worlds, and claims that it solves the measurement problem. That is just crackpot stuff. The textbooks of the 1930s were vastly more sensible. There is no sense in which many-worlds solves the measurement problem, or any other problem.

Some people claim that Einstein discovered entanglement in his 1935 EPR paper, but this paper says that Einstein and Bohr were already arguing about entanglement in 1927.

Wednesday, December 25, 2019

There was a year zero

NPR Radio news:
NPR's Rachel Martin talks to Sandi Duncan, managing editor of the Farmers' Almanac, about the debate over when a decade ends, and when a new one technically begins. ...

MARTIN: I mean, it feels like a big deal, 2019 to 2020. Why is there such a debate about whether or not this is the end of the decade?

SANDI DUNCAN: You know, it's really interesting. But I hate to tell you it's not.

MARTIN: It's not?

DUNCAN: Actually, no. We ran a story several years ago. In fact, you know, remember the big celebration in 1999. People thought that the new millennial was going to start the next year. But really, a decade begins actually with the year ending in the numeral one. There was never a year zero. So when we started counting time way back when, it goes one through 10. So a decade is 10 years. So in actuality, the next decade won't start until January 1, 2021.
Wow, there is some crazy reasoning.

She says that the twenties will not start until 2021 because there was never a year zero!

There certainly was a year 0. It was 2019 years ago. Nobody called it year 0 at the time, just as nobody called the next year year 1 at the time, as the Christian calendar was only adopted a couple of centuries later.

The year 0 is also called 1 BC, which is confusing, and reflects a poor definition, but is not a reason to say that we need to wait another year to start the twenties.
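For what it's worth, astronomers and the ISO 8601 calendar standard already use this convention: 1 BC is numbered year 0, 2 BC is year -1, and so on. The conversion is one line (a trivial sketch of my own):

```python
def bc_to_astronomical(year_bc: int) -> int:
    """ISO 8601 / astronomical year numbering:
    1 BC is year 0, 2 BC is year -1, and so on."""
    return 1 - year_bc

assert bc_to_astronomical(1) == 0    # 1 BC is the year 0
assert bc_to_astronomical(2) == -1
```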

Merry Christmas.

Apparently somebody thought that a good way to standardize the calendar was to say that Jesus was born on Dec. 25, 1 AD. Mathematicians might have said that it makes more sense to use Jan 1, 0 AD, or maybe the 0th day of the 0th month of the 0th year. Then our calendar years would measure the age of Jesus. But that appears not to have been the intent, as they have him born near the end of year 1.

Of course estimates of Jesus's birthday could be off by 5 or 10 years. The estimate was just a way of fixing the calendar.

Monday, December 23, 2019

Leftism is now a job requirement for professors

I did not know that applicants for U. California Berkeley faculty positions had to pass a 5-point test for commitment to Leftist ideals. Many other universities have similar requirements. Math professor Abigail Thompson explains in the WSJ.

In the 1950s, the university had a requirement that faculty could not belong to the Communist Party, or any other organization with a mission to violently overthrow the American government. The Commies really were evil back then.

If you believe in free speech, the current system is 1000x worse.

The Math community is sharply split on this issue, as you can see from these published letters.

For a long time, I have heard excuses that academia is overwhelmingly Leftist because Leftists are smarter, or that university life appeals to socialists, or something. Now it turns out that they have a process for systematically excluding right-wingers!

Friday, December 20, 2019

Aaronson defends quantum supremacy term

Computer complexity theory professor Scott Aaronson is freaked out about his leftist colleagues calling him a "quantum supremacist" in Nature and elsewhere. He is, quite literally, a quantum supremacist. He has staked his whole career on the concept, he has written one book on it and is writing another, he often talks it up to the news media, and he aggressively promoted Google's work as proof of quantum supremacy.

He defends his views:
The same half of me thinks: do we really want to fight racism and sexism? Then let’s work together to assemble a broad coalition that can defeat Trump. And Jair Bolsonaro, and Viktor Orbán, and all the other ghastly manifestations of humanity’s collective lizard-brain. Then, if we’re really fantasizing, we could liberalize the drug laws, and get contraception and loans and education to women in the Third World, and stop the systematic disenfranchisement of black voters, and open up the world’s richer, whiter, and higher-elevation countries to climate refugees, and protect the world’s remaining indigenous lands (those that didn’t burn to the ground this year).

In this context, the trouble with obsessing over terms like “quantum supremacy” is not merely that it diverts attention, while contributing nothing to fighting the world’s actual racism and sexism. The trouble is that the obsessions are actually harmful. For they make academics — along with progressive activists — look silly. They make people think that we must not have meant it when we talked about the existential urgency of climate change and the world’s other crises. They pump oxygen into right-wing echo chambers.
As this makes clear, he is a Jewish supremacist, and not a White supremacist. He, along with most other secular academic Jews, favors policies that replace Whites with non-whites. He even refers to non-Jews as sub-human ("lizard-brain").

I don't want to discuss impeachment politics here, but Aaronson is saying that his commonplace arrogant Jewish leftist Trump-hating politics should somehow excuse him being an avowed quantum supremacist. No, it is just the opposite. He shows the same sort of illogical thinking in both arguments.

Until Google's recent announcement, Aaronson often conceded that quantum supremacy was just a conjecture, and we did not know whether it could be achieved or not. Now he is fully on board with arguing that quantum supremacy has been proved. "Quantum Supremacist" could be his epitaph.

After writing this, I see that Aaronson responds to a comment:
Regarding your “provocative bit,” I confess that being Jewish never once crossed my mind when I wrote about “reclaiming” the word supremacy.

(Having said that: if, according to the Official Regulations of Wokeness, being Jewish, a member of one of humanity’s longest-persecuted identity groups, grants my voice and perspective some sort of special consideration from my opponents in these discussions, I hereby wish to claim the special consideration now. 😀 )

By “reclaiming” I simply meant: reclaiming the word “supremacy” from the racists, for and on behalf of all decent human beings, for the latter to use as a common inheritance.
This is just more confirmation that he is a Leftist Jewish Supremacist.

First, he was obviously conscious of his Jewishness because he talks about Whites as people different from himself.

Second, Jews are humanity's most privileged group, not its most persecuted.

Third, he explicitly claims Jewish privilege, as if that makes him superior to non-Jews.

Fourth, the word "supremacy" is not a word used by the "racists" he decries. There are a few people who call themselves white nationalists or white advocates, but I have never heard any call himself a white supremacist. It is entirely a term used by leftist anti-white organizations like the SPLC or NY Times. He is not reclaiming it from racists.

Fifth, the word supremacy is used all the time in contexts that have nothing to do with race.

Update: Aaronson brings in his fellow professor (and evolutionist/atheist/leftist/psychologist/linguist) Steve Pinker to argue:
To state what should be obvious: nip is not a sexual word. ... Men have nipples too, and women’s nipples evolved as organs of nursing, not sexual gratification. Indeed, many feminists have argued that it’s sexist to conceptualize women’s bodies from the point of view of male sexuality.
This is way out of my expertise, but it is not obvious to me. Men don't breastfeed, so nursing is plainly sexual, and most moms enjoy doing it. Humans would have died out a long time ago otherwise. It seems plausible to me that women's nipples evolved for sexual gratification.

I wanted to agree with Pinker's larger point, but his reasoning doesn't make much sense.

Aaronson also reiterated his disavowal of this comment:
5. I believe there still exist men who think women are inferior, that they have no business in science, that they’re good only for sandwich-making and sex. Though I don’t consider it legally practicable, as a moral matter I’d be fine if every such man were thrown in prison for life.
Okay, I am sure he never meant this comment to be taken literally, but it shows his leftist mindset that he even wants to punish men for their opinions. Classical liberals believed that everyone should be entitled to their opinions. Today's leftists very much want to punish men for their opinions. When these guys denounce Pres. Trump, it seems to me that their own bigotry is at work.

Thursday, December 19, 2019

The End of the Scientific Culture

Razib Khan writes on The End of the Scientific Culture:
In the 1990s there broke out something we now call the “Science Wars.” Basically it pitted the bleeding edge of “Post-Modernism” against traditional scientific scholars, who were generally adherents of a naive sort of positivism. By the latter, I’m not saying that these were necessarily people steeped in Carnap, Popper or Lakatos. Very few scientists know anything about philosophy of science except for a few nods to Karl Popper, and more dimly, Francis Bacon. By “naive positivism” I’m just alluding to the reality most scientists think there’s a world out there, and the scientific method is the best way to get at that world in terms of regularities. ...

By the 2000s these arguments seemed stale and tired. ...

Now the “academic Left” is on the march again. Though somewhat differently, and arguably more potently. The Left is self-consciously “science-based” and “reality-based.” Instead of the grand assertion that science is just another superstition, the bleeding edge of the academic Left now argues that science needs to be perfected and purged of oppression, white supremacy, etc. Who after all would favor oppression and white supremacy?

The problem is that to eat away at the oppressive structures the acid of critique has to be thrown at the pretention of objectivity of scientists and science as it is today, and as it has come to be, over the past few hundred years.
I don't entirely agree with him, but look at Nature's Ten people who mattered in science in 2019, and at the usual LuMo rant against it. The ten are mostly political activists, or chosen to meet diversity requirements.

LuMo says the only legitimate entry is the guy who led Google's demo of quantum supremacy, but Nature refuses to use the word "supremacy" because that reminds people of White supremacy. I would think that Greta Thunberg would be the one on the list to most remind ppl of White supremacy. Only a white girl could turn a set of psychiatric disorders into international celebrity status.

I think that the claim of quantum supremacy is bogus for scientific, not political, reasons.

Tuesday, December 17, 2019

Achieving Quantum Wokeness


I mentioned the silly Nature letter, and now here is a WSJ Editorial (unpaywalled here):
Achieving Quantum Wokeness
Political correctness barges into a computer science breakthrough.

Two months ago researchers at Google published a paper in Nature saying they had achieved “quantum supremacy.” It’s a term of art, meaning Google’s quantum computer had zipped through some calculating that would take eons on a classical supercomputer.

Don’t you see the problem? “We take issue with the use of ‘supremacy’ when referring to quantum computers,” as 13 academics and researchers wrote last week, also in Nature. “We consider it irresponsible to override the historical context of this descriptor, which risks sustaining divisions in race, gender and class.”

The word “supremacy,” they claim, is tainted with “overtones of violence, neocolonialism and racism.” They lament that “inherently violent language” already has crept into other scientific fields, as with talk of human “colonization” or “settlement” of outer space. These terms “must be contextualized against ongoing issues of neocolonialism.” Instead of “supremacy,” they suggest using the term “quantum advantage.”

There it is, folks: Mankind has hit quantum wokeness. Our species, akin to Schrödinger’s cat, is simultaneously brilliant and brain-dead. We built a quantum computer and then argued about whether the write-up was linguistically racist.

Taken seriously, the renaming game will never end. First put a Sharpie to the Supremacy Clause of the U.S. Constitution, which says federal laws trump state laws. Cancel Matt Damon for his 2004 role in “The Bourne Supremacy.” Make the Air Force give up the term “air supremacy.” Tell lovers of supreme pizza to quit being so chauvinistic about their toppings. Please inform Motown legend Diana Ross that the Supremes are problematic.

The quirks of quantum mechanics, some people argue, are explained by the existence of many universes. How did we get stuck in this one?
We could rename the Supremacy Clause to be the Trump Clause!

I would agree with this editorial, except Google did not really build a quantum computer. The whole point of its using the word "supremacy" was to convey a superiority that was not really there.

Monday, December 16, 2019

Reasons to be a climate skeptic

I am not really a climate change skeptic, as I believe that the climate has been changing for millions of years, and will continue to change. But the arguments I hear for climate action typically suffer from several defects:

No mention of benefits. Increased CO2 and global warming have some very positive benefits, such as increased crop yields and possible navigability of the Arctic Ocean. If someone tells you harms without any comparison to benefits, then you are not getting the full story.

Emphasis on man-made causes. If building windmills is going to make life better for us somehow, then I don't see why it matters whether the global warming was caused by industrialization or cosmic forces. It appears that they would rather give us a guilt trip than tell us the policy benefits.

Alignment with leftist causes. Most of those wanting action on climate change are leftists, and nearly all of their recommendations are things that they are ideologically committed to, independent of climate.

Avoiding nuclear energy. Nuclear fission power is still the only large-scale non-CO2 energy source, and we would be switching to it if we really needed to get off of fossil fuels.

No mention of demographics. The biggest drivers of CO2 increases are demographic, such as people moving from Third World countries to the USA, or the population explosions in Africa and India. Climate change activists hardly ever mention this.

Alarmist rhetoric. I hear scare stories like the polar bears going extinct, or that we are doomed without drastic action in the next ten years. It seems obvious that the people who say this stuff do not even believe it themselves.

When I see a climate change argument that suffers from the above defects, I just tune it out. It reminds me of a couple of times in my life when a salesman tried to pitch a product or service to me, and refused to answer basic questions like how much it costs or how long the contract runs.

It is all too dishonest for me.

A recent SciAm article says:
One year ago, the international scientific community could hardly have expected that Greta Thunberg, a teenager from Sweden, would become one of its greatest allies. Since beginning her weekly “School Strike for the Climate,” the petite 16-year-old has skillfully used her public appearances and powerful social media presence to push for bolder global action to reduce carbon emissions.

“Again and again, the same message,” she tweeted recently. “Listen to the scientists, listen to the scientists. Listen to the scientists!” ...

A little-publicized Stanford University study, also released on Earth Day, found that global warming from fossil fuel use “very likely exacerbated global economic inequality” over the past 50 years. The study’s authors found that warming has likely enhanced economic growth in cooler, wealthier countries while dampening economic growth in hotter, poorer countries.
Wow, that is worded in a funny way. So it admits that global warming has enhanced our economic growth!

Economic growth in those poorer countries is entirely dependent on those cooler wealthier countries. Without the industrialized West, those poor countries would be getting poorer. So those poorer countries are probably benefitting from global warming also.

Notice how the result is written in a way to appeal to leftists, rather than to just explain the result.

Saturday, December 14, 2019

Sick of Quantum Computing's Hype



Wired mag reports:
This spring, a mysterious figure by the name of Quantum Bullshit Detector strolled onto the Twitter scene. Posting anonymously, they began to comment on purported breakthroughs in quantum computing—claims that the technology will speed up artificial intelligence algorithms, manage financial risk at banks, and break all encryption. The account preferred to express its opinions with a single word: “Bullshit.” ...

In the subsequent months, the account has called bullshit on statements in academic journals such as Nature and journalism publications such as Scientific American, Quanta, and yes, an article written by me in WIRED. Google’s so-called quantum supremacy demonstration? Bullshit. Andrew Yang’s tweet about Google’s quantum supremacy demonstration? Bullshit. Quantum computing pioneer Seth Lloyd accepting money from Jeffrey Epstein? Bullshit. ...

The anonymous account is a response to growing anxiety in the quantum community, as investment accelerates and hype balloons inflate. Governments in the US, UK, EU, and China have each promised more than $1 billion of investment in quantum computing and related technologies. Each country is hoping to become the first to harness the technology’s potential to help design better batteries or to break an adversary’s encryption system, for example. But these ambitions will likely take decades of work, and some researchers worry whether they can deliver on inflated expectations—or worse, that the technology might accidentally make the world a worse place. “With more money comes more promises, and more pressure to fulfill those promises, which leads to more exaggerated claims,” says Bermejo-Vega.
The guy has to remain anonymous to avoid career consequences.

A reader sends this video (skip the first half, an interesting discussion of an unrelated topic) describing a guy who attacked M-Theory as a failure, and it was career suicide. Besides attacking M-theory, he attacks the whole high-energy physics enterprise, as he says that the last useful new particle was discovered in 1929. (The positron has some medical uses.)

Abby Thompson is a tenured mathematician, or else she would have committed career suicide by pointing out the corruption of diversity statements. See this article from a few weeks ago, and these responses just published by the American Mathematical Society.

Thursday, December 12, 2019

Overtones of violence, neocolonialism and racism

Nature is one of the two top science journals in the world, and it just published this wacky letter:

We take issue with the use of ‘supremacy’ when referring to quantum computers that can out-calculate even the fastest supercomputers (F. Arute et al. Nature 574, 505–510; 2019). We consider it irresponsible to override the historical context of this descriptor, which risks sustaining divisions in race, gender and class. We call for the community to use ‘quantum advantage’ instead.

The community claims that quantum supremacy is a technical term with a specified meaning. However, any technical justification for this descriptor could get swamped as it enters the public arena after the intense media coverage of the past few months.

In our view, ‘supremacy’ has overtones of violence, neocolonialism and racism through its association with ‘white supremacy’. Inherently violent language has crept into other branches of science as well — in human and robotic spaceflight, for example, terms such as ‘conquest’, ‘colonization’ and ‘settlement’ evoke the terra nullius arguments of settler colonialism and must be contextualized against ongoing issues of neocolonialism.

Instead, quantum computing should be an open arena and an inspiration for a new generation of scientists.
I don't think this solves anything. Someone could still complain that white men are advantaged over other groups.

If quantum supremacy turns out to be a big fraud, and quantum supremacy is associated with neocolonialism, then maybe that will help discredit neo-colonialism?

These sorts of ridiculous complaints have become commonplace now, and I have concluded that most of them are not sincere. They are not really offended by the phrase. They are just trying to exercise some political power.

Lubos Motl also criticizes the Nature letter, and others:
Well, I find it amazing that Nature that used to be a respectable journal is publishing similar lunacy from such despicable and intellectually empty activists these days. ...

Wow. Dr Preskill, aren't you ashamed of being this kind of a hardcore coward? How does it feel to be a pußy of sixteen (OK, 1000 in binary) zeroes? People who are far from being supreme?

I encourage readers from Caltech to spit at Prof Preskill, a spineless collaborationist with pure evil. Maybe he needs to start to drown in saliva to understand that pure evil shouldn't be supported in this way. ...

Let's hope that the NPCs will never open the U.S. Constitution because they would find 3 copies of the word "supremacy" there (two of them are in "national supremacy") and they would start to burn the book immediately.
He is overreacting a bit, but it is outrageous that a leading science journal publishes a social justice warrior demand that we stop using a perfectly good neutral word.

Monday, December 9, 2019

Applying covariance to white empiricism

A University of Chicago journal just published a wacky paper:
Making Black Women Scientists under White Empiricism: The Racialization of Epistemology in Physics

White empiricism is one of the mechanisms by which this asymmetry follows Black women physicists into their professional lives. Because white empiricism contravenes core tenets of modern physics (e.g., covariance and relativity), it negatively impacts scientific outcomes and harms the people who are othered. ...

Yet white empiricism undermines a significant theory of twentieth-century physics: General Relativity (Johnson 1983). Albert Einstein’s monumental contribution to our empirical understanding of gravity is rooted in the principle of covariance, which is the simple idea that there is no single objective frame of reference that is more objective than any other (Sachs 1993). All frames of reference, all observers, are equally competent and capable of observing the universal laws that underlie the workings of our physical universe. Yet the number of women in physics remains low, especially those of African descent ... Given that Black women must, according to Einstein’s principle of covariance, have an equal claim to objectivity regardless of their simultaneously experiencing intersecting axes of oppression, we can dispense with any suggestion that the low number of Black women in science indicates any lack of validity on their part as observers.
I am pretty sure this article is not intended to be a joke.

Covariance was not really Einstein's contribution. His papers do not show that he ever understood the arguments for covariance that Poincare made in 1905, and that Minkowski made in 1907. Einstein wrote a paper in 1914 arguing that covariance was impossible in relativity. It appears that Grossmann, Levi-Civita, and Hilbert convinced him of the merits of covariance.

Not everyone agrees that covariance, by itself, has physical significance. It is a mathematical concept, and it allows formulas in one frame to be converted to formulas in another frame. Poincare's "principle of relativity" is what says that inertial frames see the same physics.
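For concreteness, covariance in this sense is a transformation rule. Under a change of coordinates x → x', the metric components convert as

$$ g'_{\mu\nu}(x') \;=\; \frac{\partial x^\alpha}{\partial x'^\mu}\,\frac{\partial x^\beta}{\partial x'^\nu}\,g_{\alpha\beta}(x), $$

so any equation built entirely out of tensors, such as the field equation $G_{\mu\nu} = 8\pi G\,T_{\mu\nu}$, keeps its form in every coordinate system. That is bookkeeping between frames, which is why one can ask whether it carries any physics by itself.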

I try to stick to physics on this blog. Other fields are hopelessly corrupted with sloppy work and political ideology. Physics is supposed to have higher standards. I mention this because of its goofy relativity reasoning.

Update: Jerry Coyne criticizes this article, but takes it way too seriously.

Update: I had assumed that this woman is black, but her web site says:
I will not say yes to any invitations that clash with the Jewish High Holy Days (Rosh Hashanah, Shabbat Shuvah, and Yom Kippur) or the first two nights of Passover.
Maybe she married a Jew and converted.

Friday, December 6, 2019

Believe this vision of the world religiously

Physicist Chris Fuchs says in a Discover mag interview:
You’ve written critically about the Many Worlds (or Everettian) Interpretation of quantum mechanics. What are its main shortcomings?

Its main shortcoming is simply this: The interpretation is completely contentless. I am not exaggerating or trying to be rhetorical. It is not that the interpretation is too hard to believe or too nonintuitive or too outlandish for physicists to handle the truth (remember the movie A Few Good Men?). It is just that the interpretation actually does not say anything whatsoever about reality. I say this despite all the fluff of the science-writing press and a few otherwise reputable physicists, like Sean Carroll, who seem to believe this vision of the world religiously.

For me, the most important point is that the interpretation depends upon no particular or actual detail of the mathematics of quantum theory. No detail that is, except possibly on an erroneous analysis of the meaning of “quantum measurement” introduced by John von Neumann in the 1930s, which is based on a reading of quantum states as if they are states of reality. Some interpretations of quantum theory, such as the one known as QBism, reject that analysis. ..

The Many Worlds Interpretation just boils down to this: Whenever a coin is tossed (or any process occurs) the world splits. But who would know the difference if that were not true? What does this vision have to do with any of the details of physics? ..

You also object to the idea of multiple alternate worlds on a philosophical level, correct?

Depending in no way on the details of quantum theory, the Many Worlds Interpretation has always seemed to me as more of a comforting religion than anything else. It takes away human responsibility for anything happening in the world in the same way that a completely fatalistic, deterministic universe does, though it purportedly saves the appearance of quantum physics by having indeterministic chance in the branches.
I have been saying similar things here for years. I quit calling Many-Worlds an interpretation, because it is not even that. It doesn't even make any predictions. As he says, there is no content to it.

Tuesday, November 12, 2019

Eroding public trust in nutrition science

Harvard has responded to new research that red meat is harmless:
[N]utrition research is complex, and rarely do [its findings] reverse so abruptly. That's why it's so important to look beyond the headlines at the quality of the evidence behind the claims. Still, the publication of these new guidelines in such a prominent medical journal is unfortunate as it risks further harm to the credibility of nutrition science, eroding public trust in research as well as the recommendations they ultimately inform.
Funny how new research nearly always causes further harm to the credibility of nutrition science. Others say:
The misplaced low-fat craze of the 80's was the direct result of Harvard Professor Dr. Hegsted, who participated in the McGovern report that lead to dietary recommendation changes for Americans to eat more carbs in place of meat and fat, a recommendation that turned out to be based on "science" paid for by the sugar industry. Those recommendations caused an explosion of obesity, diabetes, heart disease, and cancer - all metabolic disorders caused by the insulin resistance that resulted from those recommended dietary changes.
My trust in nutrition science is nearly zero.

What do any of these people know about nutrition?

Physicians get a lot of respect for their medical opinions, and they probably deserve it most of the time. But most of them have never taken a course on nutrition, and don't know more than anyone else on the subject.

Everyone eats food, and so has opinions about food. Child-rearing is another subject where everyone has an opinion, but those opinions have almost no scientific value.

The nutrition research is so confusing that I don't know how to conclude that any food is healthier than any other food.

Sunday, November 10, 2019

Academic groupthink on paradigm shifts

Novelist Eugene Linden writes in a NY Times op-ed:
How Scientists Got Climate Change So Wrong ...

The word “upended” does not do justice to the revolution in climate science wrought by the discovery of sudden climate change. The realization that the global climate can swing between warm and cold periods in a matter of decades or even less came as a profound shock to scientists who thought those shifts took hundreds if not thousands of years. ...

In 2002, the National Academies acknowledged the reality of rapid climate change in a report, “Abrupt Climate Change: Inevitable Surprises,” which described the new consensus as a “paradigm shift.” This was a reversal of its 1975 report.
I wonder if he even realizes what these terms mean. A scientific revolution or paradigm shift was famously described by Thomas Kuhn as a change in thinking that is incommensurable with previous theories. That is, there is no data to say whether the new thinking is any better or worse than the old. Kuhn described scientists jumping to the new paradigm like a big fad, and not really based on any scientific analysis.

Of course it is all Donald Trump's fault:
computer modeling in 2016 indicated that its disintegration in concert with other melting could raise sea levels up to six feet by 2100, about twice the increase described as a possible worst-case scenario just three years earlier.
Computer models change that much in 3 years? That says more about the instability of the models than anything else.

If the Trump administration has its way, even the revised worst-case scenarios may turn out to be too rosy. ... But the Trump administration has made its posture toward climate change abundantly clear: Bring it on!
Trump is one of the most pro-science presidents we have ever had. Even tho he is widely hated in academia, we hardly ever hear any criticisms of how he has funded scientific work.

Trump has also over-funded quantum computing, and yet Scott Aaronson posts a rant against him. Everyone is entitled to his opinion, of course, but it seems clear to me that academia is dominated by a groupthink mentality that makes their opinions on climate or presidential politics useless.

Wednesday, November 6, 2019

Carroll plugs many-worlds in videos

Lex Fridman interviews Sean M. Carroll on his new quantum mechanics book.

Carroll says that there are three contenders for a QM interpretation: (1) many-worlds, (2) hidden-variables, and (3) spontaneous collapse.

None of these has a shred of empirical evidence. We know that hidden variable theories have to be non-local, and no one has ever observed such a nonlocality. Spontaneous collapse theories contradict quantum mechanics.
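The non-locality requirement traces back to Bell's theorem, and it is easy to check numerically. A minimal sketch (my illustration, not Carroll's), using the textbook singlet-state correlation:

```python
import numpy as np

def E(x, y):
    # Textbook singlet-state correlation for measurement angles x and y.
    return -np.cos(x - y)

a, a2 = 0.0, np.pi / 2           # Alice's two detector settings
b, b2 = np.pi / 4, -np.pi / 4    # Bob's two detector settings

S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.83
# Bell/CHSH: any *local* hidden-variable theory obeys |S| <= 2.
# Quantum mechanics (and experiment) gives 2.83, which is why any
# hidden-variable account of QM has to be non-local.
```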

After some questions, he admitted another: (4) theories concerned with predicting experiments!

He derided (4) as "epistemic", and complained that those theories (like textbook Copenhagen quantum mechanics) are unsatisfactory because they just predict experiments, and fail to predict what is going on in parallel universes or ghostly unobserved particles.

He also complained that under (4), two different observers of a system might collect different data, and deduce different wave functions.

Yes, of course, that is the nature of science.

Carroll's problem is that he has a warped view of what science is all about. He badmouths theories that make testable predictions, and says that we should prefer a theory that somehow tells us about "reality", but doesn't actually make any testable predictions.

He is a disgrace to science.

Update: See also this Google Talk video, where Carroll makes similar points.

He compares his 3 leading interpretations of QM to the 3 leading Democrat contenders for the White House. Maybe that is a fair analogy, and the leading Democrat contenders are all unfit for office, for different reasons.

Thursday, October 31, 2019

Aaronson explains qubits in the NY Times

Scott Aaronson announces his New York Times op-ed on quantum supremacy. His own personal interest in this is greater than I thought, as he says the NY Times forced him to reveal:
Let’s start with applications. A protocol that I came up with a couple years ago uses a sampling process, just like in Google’s quantum supremacy experiment, to generate random bits. ... Google is now working toward demonstrating my protocol; it bought the non-exclusive intellectual property rights last year.
He was the outside reviewer of the Google paper published in Nature. So he had a big hand in the editorial decision to say that this was quantum supremacy. Aaronson claims the credit for Google confirming that quantum computers can be used for generating random numbers. And Google paid Aaronson for the privilege.

I am not accusing Aaronson of being crooked here. I'm sure his motives are as pure as Ivory Soap. But he sure has a lot invested in affirming quantum supremacy based on random number generation. Maybe the Nature journal should have also required this disclosure.

He admits:
The computer revolution was enabled, in large part, by a single invention: the transistor. ... We don’t yet have the quantum computing version of the transistor — that would be quantum error correction.
So we don't have real qubits yet.
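For readers wondering what quantum error correction even is: the simplest example is the three-qubit bit-flip code, sketched below in plain numpy. This is my toy illustration, not Aaronson's; real proposals like the surface code also handle phase errors and need thousands of physical qubits per logical qubit.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
k0 = np.array([1, 0], dtype=complex)
k1 = np.array([0, 1], dtype=complex)

def k3(m1, m2, m3):
    # Tensor product over the three physical qubits.
    return np.kron(np.kron(m1, m2), m3)

# Encode a|0> + b|1> as a|000> + b|111>.
a, b = 0.6, 0.8
encoded = a * k3(k0, k0, k0) + b * k3(k1, k1, k1)

noisy = k3(X, I, I) @ encoded    # a bit-flip error hits the first qubit

# Measure the stabilizers Z1Z2 and Z2Z3; the state is an eigenstate of
# both, so the expectation values are exactly +1 or -1 (the syndrome).
s1 = round(np.real(noisy.conj() @ (k3(Z, Z, I) @ noisy)))
s2 = round(np.real(noisy.conj() @ (k3(I, Z, Z) @ noisy)))

# The syndrome pinpoints which qubit flipped, without measuring a or b.
fix = {(1, 1): k3(I, I, I), (-1, 1): k3(X, I, I),
       (-1, -1): k3(I, X, I), (1, -1): k3(I, I, X)}[(s1, s2)]
assert np.allclose(fix @ noisy, encoded)   # logical qubit recovered
```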

Aaronson has spent many years trying to convince everyone that there is a right way and a wrong way to explain qubits. Here is the wrong way:
For a moment — a few tens of millionths of a second — this makes the energy levels behave as quantum bits or “qubits,” entities that can be in so-called superpositions of the 0 and 1 states.

This is the part that’s famously hard to explain. Many writers fall back on boilerplate that makes physicists howl in agony: “imagine a qubit as just a bit that can be both 0 and 1 at the same time, exploring both possibilities simultaneously.”
So here is his better version:
Here’s a short version: In everyday life, the probability of an event can range only from 0 percent to 100 percent (there’s a reason you never hear about a negative 30 percent chance of rain). But the building blocks of the world, like electrons and photons, obey different, alien rules of probability, involving numbers — the amplitudes — that can be positive, negative, or even complex (involving the square root of -1). Furthermore, if an event — say, a photon hitting a certain spot on a screen — could happen one way with positive amplitude and another way with negative amplitude, the two possibilities can cancel, so that the total amplitude is zero and the event never happens at all. This is “quantum interference,” and is behind everything else you’ve ever heard about the weirdness of the quantum world.
Really? I may be dense, but I don't see that this is any better. He insists that the key is realizing that probabilities can be negative, or imaginary.

But this is just nonsense. There are no negative probabilities in quantum mechanics, or anywhere else.

We do have interference. Light does show interference patterns, as is possible for all waves. There is nothing the slightest bit strange about waves showing interference. But Aaronson insists on saying that the interference comes from negative probabilities. I don't see how that is mathematically accurate, or helpful to understanding quantum mechanics.
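To put the dispute in concrete terms, here is the two-path arithmetic in a few lines of Python (my sketch, not Aaronson's): what can be negative and cancel are the amplitudes, while the probabilities are squared magnitudes and are never negative.

```python
import numpy as np

# Two paths to the same detector, equal magnitude, opposite sign.
amp1 = 1 / np.sqrt(2)
amp2 = -1 / np.sqrt(2)

total = amp1 + amp2
print(abs(total) ** 2)                 # 0.0 -- destructive interference
print(abs(amp1) ** 2, abs(amp2) ** 2)  # 0.5 0.5 -- ordinary probabilities
```

The cancellation is entirely in the amplitudes; every probability that comes out is between 0 and 1.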

Wednesday, October 30, 2019

Perfect qubits would be amazing

The NY Times finally has an article on Google's quantum supremacy claim:
“Imagine you had 100 perfect qubits,” said Dario Gil, the head of IBM’s research lab in Yorktown Heights, N.Y., in a recent interview. “You would need to devote every atom of planet Earth to store bits to describe that state of that quantum computer. By the time you had 280 perfect qubits, you would need every atom in the universe to store all the zeros and ones.” ...

In contrast, many hundreds of qubits or more may be required to store just one of the huge numbers used in current cryptographic codes. And each of those qubits will need to be protected by many hundreds more, to protect against errors introduced by outside noise and interference.
Got that? 100 perfect qubits would give you more storage capacity than all the atoms on Earth.

But to store just one of the numbers used in crypto codes, you would need many 100s of qubits, as well as technological breakthrus to protect against errors.
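The arithmetic behind the "perfect qubits" claims is simple: an n-qubit pure state is specified by 2^n complex amplitudes. A back-of-the-envelope sketch, assuming 16 bytes per amplitude:

```python
def bytes_for_state(n_qubits, bytes_per_amplitude=16):
    # An n-qubit pure state has 2**n complex amplitudes.
    return 2 ** n_qubits * bytes_per_amplitude

for n in (30, 53, 100):
    print(n, format(bytes_for_state(n), ".2e"), "bytes")
# 30  -> 1.72e+10 bytes (fits on a laptop)
# 53  -> 1.44e+17 bytes (roughly a top supercomputer's storage)
# 100 -> 2.03e+31 bytes (hopeless)
```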

The catch here is the modifier "perfect". Nobody has made any perfect qubits, or any scalable qubits, or any qubits protected against errors from outside noise and interference.

All this talk of 53 qubits is a big scam. They don't even have 2 qubits.

Tuesday, October 29, 2019

Many-Worlds theory is not science

More and more physicists are endorsing the Many-Worlds theory, and I have criticized them many times on this blog. Lubos Motl has also defended Copenhagen and criticized MW, and he has finally gotten to the heart of the matter.

MW is not just a goofy interpretation. It turns a good scientific theory into something that is contrary to all of science. It eliminates the ability to make predictions.

Motl writes:
Even today, almost 90 years later, the anti-quantum zealots who are still around – depending on the degree of their stupidity – argue that quantum mechanics is either wrong or incomplete. The typical complaint that "quantum mechanics isn't complete" is formulated as follows:
But the Copenhagen Interpretation fails to tell us what is really going on before we look.
Well, in reality, quantum mechanics tells us everything that is happening before the observation: nothing that could be considered a fact is happening before (or in the absence of) an observation! It is an answer. You may dislike it but it's a lie to say that you weren't given an answer!

Needless to say, the statements are upside down. The Copenhagen Interpretation provides us with a definition which questions are physically meaningful; and with the method to determine the answers to these questions (which must be probabilistic and the axiomatic framework clearly and unambiguously says that no "unambiguous" predictions of the phenomena are possible in general).

Instead, it's the anti-quantum "interpretations" of quantum mechanics such as the Many Worlds Interpretation that are incomplete because
their axioms just don't allow you to determine what you should do if you want to calculate the probability of an outcome of an observation.

In particular, the Many Worlds Interpretation denies that there's any collapse following Born's rule (an axiom) but it is rather obvious that when you omit this only link between quantum mechanics and probabilities, the Many Worlds paradigm will become unable to actually predict these probabilities. You created a hole – (because the building block looked ideologically heretical to him) someone has removed something that was needed (in the Copenhagen paradigm) to complete the argumentation that normally ends with the probabilistic prediction.

This is an actually valid complaint because the primary purpose of science is to explain and predict the results of phenomena.
That's right. And if you support MW, you are abandoning the primary purpose of science. (I am avoiding the word "interpretation", because it is not really an interpretation. Calling it an interpretation is part of the hoax.)

Motl doesn't name names in this post, but an example is Sean M. Carroll. Motl probably has more distinguished physicists in mind, and doesn't want to embarrass them.

Belief in MW is a belief so goofy as to discredit whatever other opinions its advocates might have. It is like believing in the Flat Earth, or that the Moon landings were faked.

Friday, October 25, 2019

Quantum measurement problem, explained

Dr. Bee explains the quantum measurement problem:
The problem with the quantum measurement is now that the update of the wave-function is incompatible with the Schrödinger equation. The Schrödinger equation, as I already said, is linear. That means if you have two different states of a system, both of which are allowed according to the Schrödinger equation, then the sum of the two states is also an allowed solution. The best known example of this is Schrödinger’s cat, which is a state that is a sum of both dead and alive. Such a sum is what physicists call a superposition.

We do, however, only observe cats that are either dead or alive. This is why we need the measurement postulate. Without it, quantum mechanics would not be compatible with observation. ...

Why is the measurement postulate problematic? The trouble with the measurement postulate is that the behavior of a large thing, like a detector, should follow from the behavior of the small things that it is made up of. But that is not the case. So that’s the issue. The measurement postulate is incompatible with reductionism. ...

I just explained why quantum mechanics is inconsistent. This is not a 'vague philosophical concern'.
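Her linearity point is easy to verify numerically: evolve two states under the same Hamiltonian, and the sum evolves to the sum. A minimal sketch (my own, with an arbitrary two-level Hamiltonian):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = A + A.conj().T              # any Hermitian matrix is a Hamiltonian
U = expm(-1j * H)               # Schrodinger evolution for unit time (hbar=1)

psi1 = np.array([1, 0], dtype=complex)   # "cat dead"
psi2 = np.array([0, 1], dtype=complex)   # "cat alive"

# Linearity: the superposition evolves to the superposition, which is
# why the Schrodinger equation alone never picks a single outcome.
assert np.allclose(U @ (psi1 + psi2), U @ psi1 + U @ psi2)
```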
She also says QM is incomplete.

This so-called measurement problem is a 'vague philosophical concern' in the sense that it does not present any practical difficulties.

When you say a theory is inconsistent, that usually means that it allows computing two different outcomes for some proposed experiment. That never happens with QM.

To see that there is an inconsistency, you have to wade thru a discussion of not seeing cats that are alive and dead at the same time.

It is not clear that this problem is a problem.

If anything good comes out of quantum computing research, it could be a better reductionist understanding of quantum measurement. Quantum computers seek to string together qubits as much as possible without measuring them. Because the computation depends on this lack of measurement, maybe the experiments could tell us more precisely just what a measurement is.

But the quantum computing research has told us nothing of the kind. Good old QM/Copenhagen is the underlying theory for all these experiments, and we have no clue that the 1930 theory is not good enuf.

Wednesday, October 23, 2019

IBM explains why Google has a quantum flop

Wired reports:
IBM Says Google’s Quantum Leap Was a Quantum Flop ...

Monday, Big Blue’s quantum PhDs said Google’s claim of quantum supremacy was flawed. IBM said Google had essentially rigged the race by not tapping the full power of modern supercomputers. “This threshold has not been met,” IBM’s blog post says. Google declined to comment. ...

Whoever is proved right in the end, claims of quantum supremacy are largely academic for now. ... It's a milestone suggestive of the field’s long-term dream: That quantum computers will unlock new power and profits ...
Wired says "academic" because everyone quoted claims that quantum supremacy will soon be achieved.

But where's the proof?

Nobody believed the Wright brothers could fly until they actually got off the ground. Quantum supremacy was supposed to be a way of showing that quantum computers had gotten off the ground. If those claims are bogus, as IBM now claims to have proved, then no quantum computers have gotten off the ground. That "long-term dream" is pure speculation.

Update: Google is now bragging, as its paper appeared in Nature. I assumed that it was trying to get into either Science or Nature, but I believe that Nature claims that it does not object to releasing preprints. If so, Google could have addressed criticisms after the paper was leaked.

Quanta mag has an article on the Google IBM dispute:
Google stands by their 10,000 year estimate, though several computer experts interviewed for this article said IBM is probably right on that point. “IBM’s claim looks plausible to me,” emailed Scott Aaronson of the University of Texas, Austin. ...

Aaronson — borrowing an analogy from a friend — said the relationship between classical and quantum computers following Google’s announcement is a lot like the relationship in the 1990s between chess champion Garry Kasparov and IBM’s Deep Blue supercomputer. Kasparov could keep up for a bit, but it was clear he was soon going to be hopelessly outstripped by his algorithmic foe.

“Kasparov can make a valiant stand during a ‘transitional era’ that lasts for maybe a year or two,” Aaronson said. “But the fundamentals of the situation are that he’s toast.”
Following his analogy, IBM's Deep Blue did beat Kasparov, but not convincingly. Maybe the computer was lucky. It was really the subsequent advances by others that showed that computers were superior.

So Aaronson seems to be saying that this research does not prove quantum supremacy, but other research will soon prove it.

We shall see.

Meanwhile, let's be clear about what Google did. It made a random number generator out of near-absolute-zero electronic gates in entangled states. Then it made some measurements to get some random values. Then it said that the device could be simulated on a classical computer, but it would take more time. Maybe 10,000 years more, but maybe just a couple of hours more.

That's all. No big deal, and certainly not quantum supremacy.
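For anyone wondering why the classical simulation is slow at all, here is a brute-force state-vector simulation of random-circuit sampling. This is a toy sketch of my own; Google's actual chip applies layers of one- and two-qubit gates to 53 qubits.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_random_circuit(n_qubits, depth, shots=5):
    dim = 2 ** n_qubits                  # state vector doubles per qubit
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0                       # start in |00...0>
    for _ in range(depth):
        # A random unitary (QR of a complex Gaussian matrix) stands in
        # for a layer of gates.
        g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        u, _ = np.linalg.qr(g)
        state = u @ state
    p = np.abs(state) ** 2
    return rng.choice(dim, size=shots, p=p / p.sum())

print(sample_random_circuit(n_qubits=8, depth=4))  # instant
# At n_qubits=53 the state vector alone is ~144 petabytes, which is
# why the classical-versus-quantum timing argument arises at all.
```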

Update: Scott Aaronson weighs in, and admits that he was a Nature reviewer. He is happy because he is professionally invested in quantum supremacy being proved this way.

But he admits that Google's claim of 10k years is bogus, and that Google does not have any scalable qubits at all. Furthermore, the researchers cooked the circuits so that they would be hard to simulate classically, while being completely useless for actually doing a computation.

To actually compute something useful, Google would need scalable qubits with some fault-tolerance system, and Google is no closer to doing that.

It has long been known that there are quantum systems that are hard to simulate. The only new thing here is that Google says its system is programmable. I am not sure why that is a good thing, as it cannot be programmed to do anything useful.

Update: Aaronson argues:
But I could turn things around and ask you: do you seriously believe at this point that Nature is going to tell the experimenters, “building a QC with 53 qubits is totally fine — but 60? 70? no, that’s too many!”
The flaw in this argument is that they don't really have a QC with 53 qubits. They have a random number generator with 53 components that act enuf like qubits to generate random numbers.

Computing something useful is expected to require 10 million real (scalable) qubits. Yes, Nature may very well say you can have a quantum device to generate random numbers, but not get a quantum computational advantage.

Monday, October 21, 2019

Indian books on Superior and Inferior

I enjoy John Horgan's SciAm columns, especially when he expresses skepticism for fad scientific work. For example, he appears to be winning a bet that no Nobel Prize will be awarded for string theory.

He has his share of goofy ideas, such as his belief in abolishing war.

His latest column is a rant against scientific work on human races, treating all such work as inherently racist:
But no, he was condemning Watson’s critics, whom he saw as cowards attacking a courageous truth-teller. I wish I could say I was shocked by my host’s rant, but I have had many encounters like this over the decades. Just as scientists and other intellectuals often reveal in private that they believe in the paranormal, so many disclose that they believe in the innate inferiority of certain groups. ...

I once suggested that, given the harm done by research on alleged cognitive differences between races, it should be banned. I stand by that proposal. I also agree with Saini that online media firms should do more to curb the dissemination of racist pseudoscience. “This is not a free speech issue,”
Really? Scientists and intellectuals often reveal in private that they believe in the paranormal? I doubt that.

My guess is that he is just using "paranormal" as a word to cover beliefs he does not recognize.

I am no expert in race research, but there is a lot of it, and I cannot believe it is all bogus.
I read Superior: The Return of Race Science by British journalist Angela Saini (who is coming to my school Nov. 4, see Postscript). Superior is a thoroughly researched, brilliantly written and deeply disturbing book. It is an apt follow-up to Saini’s previous book, Inferior, which explores sexism in science (and which I wrote about here and here). Saini calls “intellectual racism” the “toxic little seed at the heart of academia. However dead you might think it is, it needs only a little water, and now it’s raining.”
British? She has Indian parents, and India is famous for its racial/caste divisions. And its sexism too, for that matter.

Her books are mostly politics, not science. The favorable reviews just show how science has been corrupted. Here is how she writes:
If anything, the public debate around race and science has sunk into the mud. To state even the undeniable fact that we are one human species today means falling afoul of a cabal of conspiracy theorists. The “race realists,” as they call themselves online, join the growing ranks of climate change deniers, anti-vaxxers and flat-earthers in insisting that science is under the yoke of some grand master plan designed to pull the wool over everyone’s eyes. In their case, a left-wing plot to promote racial equality when, as far as they’re concerned, racial equality is impossible for biological reasons.

How did we get here? How did society manage to create so much room for those who genuinely believe that entire nations have innately higher or lower cognitive capacities,
Maybe because some nations have achieved much more than other nations?
What has started with a gentle creep through the back door of our computers could end, if we’re not careful, with jackboots through the front door of our homes. Populism, ethnic nationalism and neo-Nazism are on the rise worldwide.
No, this is just leftist paranoia. Neo-Nazism does not even exist, as far as I know.

Saturday, October 19, 2019

Google's overhyped announcement imminent

Nautilus:
News on the quantum physics grapevine, Frankfurt Institute theoretical physicist Sabine Hossenfelder tells me, is that Google will announce something special next week: Their paper on achieving quantum supremacy, the realization of a quantum computer that outdoes its conventional counterpart. ...

It’s nothing to get too excited about yet. “This” — NISQ — “is really a term invented to make investors believe that quantum computing will have practical applications in the next decades or so,” Hossenfelder says. “The trouble with NISQs is that while it is plausible that they soon will be practically feasible, no one knows how to calculate something useful with them.” Perhaps no one ever will. “I am presently quite worried that quantum computing will go the same way as nuclear fusion, that it will remain forever promising but never quite work.”
We know that the Sun gets its energy from nuclear fusion. We don't know that quantum speedups are even possible.

Thursday, October 17, 2019

Rovelli: Neither Presentism nor Eternalism

Physicist Carlo Rovelli writes in support of Neither Presentism nor Eternalism:
Shortly after the formulation of special relativity, Einstein's former math professor Minkowski found an elegant reformulation of the theory in terms of the four dimensional geometry that we call today Minkowski space. Einstein at first rejected the idea. (`A pointless mathematical complication'.) But he soon changed his mind and embraced it full heart, making it the starting point of general relativity, where Minkowski space is understood as the local approximation to a 4d, pseudo-Riemannian manifold, representing physical spacetime.

The mathematics of Minkowski and general relativity suggested an alternative to Presentism: the entire 4d spacetime is `equally real now', and becoming is illusory. This I call here Eternalism.
Others make the argument that relativity implies an eternalist philosophy of time. I disagree. You can talk about spacetime with either Galilean or Lorentz transformations. If spacetime talk is eternalist, then it is eternalist either way, with or without relativity.

Note that Rovelli is compelled to make his relativity story all about Einstein, even tho Einstein had nothing to do with the issue at hand. Minkowski did not reformulate Einstein's theory; it is not even clear that Minkowski was influenced by anything Einstein wrote. Spacetime relativity was first published by Poincare, and Minkowski cited Poincare.

Rovelli ends up wanting some compromise between presentism and eternalism, as both views are really just philosophical extremes to emphasize particular ways of thinking about time. This might seem obvious, except that there are a lot of physicists who say that relativity requires eternalism.

Monday, October 14, 2019

The hardest of the hard sciences has gone soft

Science writer Jim Baggott writes in Aeon:
So what if a handful of theoretical physicists want to indulge their inner metaphysician and publish papers that few outside their small academic circle will ever read? But look back to the beginning of this essay. Whether they intend it or not (and trust me, they intend it), this stuff has a habit of leaking into the public domain, dripping like acid into the very foundations of science. The publication of Carroll’s book Something Deeply Hidden, about the Many-Worlds interpretation, has been accompanied by an astonishing publicity blitz, including an essay on Aeon last month. A recent PBS News Hour piece led with the observation that: ‘The “Many-Worlds” theory in quantum mechanics suggests that, with every decision you make, a new universe springs into existence containing what amounts to a new version of you.’

Physics is supposed to be the hardest of the ‘hard sciences’. It sets standards by which we tend to judge all scientific endeavour. And people are watching.
Physics has become embarrassingly unscientific.

Unsurprisingly, the folks at the Discovery Institute, the Seattle-based think-tank for creationism and intelligent design, have been following the unfolding developments in theoretical physics with great interest. The Catholic evangelist Denyse O’Leary, writing for the Institute’s Evolution News blog in 2017, suggests that: ‘Advocates [of the multiverse] do not merely propose that we accept faulty evidence. They want us to abandon evidence as a key criterion for acceptance of their theory.’ The creationists are saying, with some justification: look, you accuse us of pseudoscience, but how is what you’re doing in the name of science any different?
Yes, I think it is different. The folks at the Discovery Institute try to support their ideas with evidence. Carroll has no evidence for his ideas, and denies that any evidence is needed.
Instead of ‘the multiverse exists’ and ‘it might be true’, is it really so difficult to say something like ‘the multiverse has some philosophical attractions, but it is highly speculative and controversial, and there is no evidence for it’?
No, many worlds is not some speculative idea that might be true. Saying that would suggest that there might be evidence for it. There can be no evidence for it.

Sabine Hossenfelder writes:
Right, as I say in my public lecture, physicists know they shouldn't make these arguments, but they do it nevertheless. That's why I am convinced humans will go extinct in the next few hundred years.
Extinct? Maybe rational humans will die out, and be replaced by intelligent robots and an uneducated underclass.

Wednesday, October 9, 2019

Preskill explains quantum supremacy

Physicist John Preskill writes in Quillette:
In 2012, I proposed the term “quantum supremacy” to describe the point where quantum computers can do things that classical computers can’t, regardless of whether those tasks are useful. ...

The words “quantum supremacy” — if not the concept — proved to be controversial for two reasons. One is that supremacy, through its association with white supremacy, evokes a repugnant political stance. The other reason is that the word exacerbates the already overhyped reporting on the status of quantum technology.
This is funny. A few years ago, supremacy might have evoked thoughts of kings, empires, popes, and laws, but not white people. Now rationalist internet forums get frequented by misogynists and white nationalists. Preskill seems to be referring to this gripe about white supremacy.
The catch, as the Google team acknowledges, is that the problem their machine solved with astounding speed was carefully chosen just for the purpose of demonstrating the quantum computer’s superiority. It is not otherwise a problem of much practical interest. In brief, the quantum computer executed a randomly chosen sequence of instructions, and then all the qubits were measured to produce an output bit string. This quantum computation has very little structure, which makes it harder for the classical computer to keep up, but also means that the answer is not very informative.

However, the demonstration is still significant. By checking that the output of their quantum computer agrees with the output of a classical supercomputer (in cases where it doesn’t take thousands of years), the team has verified that they understand their device and that it performs as it should. Now that we know the hardware is working, we can begin the search for more useful applications.
The term "quantum supremacy" suggests a major accomplishment. But all we really know is that the hardware is working.

We also know that they did a quantum experiment that is hard to simulate. But so what? The weather is hard to simulate. A lot of things are hard to simulate.

Here is Preskill's 2012 paper on quantum supremacy, and his 2018 paper on NISQ. The latter says:
I’ve already emphasized repeatedly that it will probably be a long time before we have fault-tolerant quantum computers solving hard problems. ...

Nevertheless, solving really hard problems (like factoring numbers which are thousands of bits long) using fault-tolerant quantum computing is not likely to happen for a while, because of the large number of physical qubits needed. To run algorithms involving thousands of protected qubits we’ll need a number of physical qubits which is in the millions, or more [56].
So a quantum computer that tells us something we didn't already know is decades away. Or impossible.

Monday, October 7, 2019

Many-Worlds does not solve measurement

Dr. Bee has a podcast on The Trouble with Many Worlds:
The measurement process therefore is not only an additional assumption that quantum mechanics needs to reproduce what we observe. It is actually incompatible with the Schrödinger equation.

Now, the most obvious way to deal with that is to say, well, the measurement process is something complicated that we do not yet understand, and the wave-function collapse is a placeholder that we use until we figure out something better.

But that’s not how most physicists deal with it.
Actually, I think it is. Quantum mechanics was created by positivists, and their attitude is to go with what we've got, and not worry too much about purely philosophical objections.
Most sign up for what is known as the Copenhagen interpretation, that basically says you’re not supposed to ask what happens during measurement. In this interpretation, quantum mechanics is merely a mathematical machinery that makes predictions and that’s that. The problem with Copenhagen – and with all similar interpretations – is that they require you to give up the idea that what a macroscopic object, like a detector, does should be derivable from the theory of its microscopic constituents.

If you believe in the Copenhagen interpretation you have to buy that what the detector does just cannot be derived from the behavior of its microscopic constituents.
The positivists would go along with saying that the theory is all about the predictions, but would never say that you are not supposed to ask about the measurement process. Positivists do not tell you what not to do. They talk about what works.

She is completely correct that the collapse is observed. Some people complain that Copenhagen is goofy because the collapse is unnatural, but all interpretations have to explain the apparent collapse somehow.
The many world interpretation, now, supposedly does away with the problem of the quantum measurement and it does this by just saying there isn’t such a thing as wavefunction collapse. Instead, many worlds people say, every time you make a measurement, the universe splits into several parallel worlds, one for each possible measurement outcome. This universe splitting is also sometimes called branching. ...

And because it’s the same thing you already know that you cannot derive this detector definition from the Schrödinger equation. It’s not possible. What the many worlds people are now trying instead is to derive this postulate from rational choice theory. But of course that brings back in macroscopic terms, like actors who make decisions and so on. In other words, this reference to knowledge is equally in conflict with reductionism as is the Copenhagen interpretation.

And that’s why the many worlds interpretation does not solve the measurement problem and therefore it is equally troubled as all other interpretations of quantum mechanics.
She is right that Many-Worlds does not solve the measurement problem, and really has to have its own sneaky collapse postulate like Copenhagen, even tho the whole point of Many-Worlds was to avoid that.

However, the situation with Many-Worlds is worse than that. Any physical theory could be turned into a Many-Worlds theory by simply introducing a universe splitting for each probabilistic prediction. This can be done with Newtonian celestial mechanics, electromagnetism, relativity, or anything else.

With any of these Many-Worlds theories, you can believe in them if you want, but the split universes have no observable consequences except to reduce or kill the predictive power of your theory. Any freak event can be explained away by splitting to another universe.
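To make the point concrete, here is a toy sketch of my own: "Everettize" an ordinary biased coin by splitting a weighted branch for every outcome. The branch weights reproduce the ordinary probabilities exactly, so no observation can distinguish the branching version from the plain one.

```python
import random

def plain_heads(p, flips):
    # Ordinary probabilistic model: just flip the coin.
    return sum(random.random() < p for _ in range(flips))

def branching_heads(p, flips):
    # "Many-worlds" version: split a weighted branch for every outcome.
    worlds = [(1.0, 0)]                       # (weight, heads so far)
    for _ in range(flips):
        worlds = [branch for w, h in worlds
                  for branch in ((w * p, h + 1), (w * (1 - p), h))]
    return sum(w * h for w, h in worlds)      # weight-averaged number of heads

print(plain_heads(0.3, 10000) / 10000)        # about 0.3, up to statistical noise
print(branching_heads(0.3, 12) / 12)          # exactly 0.3: same prediction, 4096 worlds
```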

So Many-Worlds does not, and cannot, explain anything. It is just smoke and mirrors.

A reader asks:
What is your explanation as to why many people who are obviously very smart, such as Max Tegmark, David Deutsch, Sean Carroll, etc, subscribe to the many-worlds interpretation?
Why do so many smart people tell lies about Donald Trump every day?

I wrote a whole book on how Physics has lost its way. There is now a long list of subjects where prominent Physics professors recite nonsense. I hesitate to say that they are all con men, as many appear to be sincerely misguided.

Friday, October 4, 2019

Google scooped by unconventional p-bit computer

It is funny how quantum computing evangelist Scott Aaronson is flummoxed by being scooped by a rival technology:
Nature paper entitled Integer factorization using stochastic magnetic tunnel junctions (warning: paywalled). See also here for a university press release.

The authors report building a new kind of computer based on asynchronously updated “p-bits” (probabilistic bits). A p-bit is “a robust, classical entity fluctuating in time between 0 and 1, which interacts with other p-bits … using principles inspired by neural networks.” They build a device with 8 p-bits, and use it to factor integers up to 945. They present this as another “unconventional computation scheme” alongside quantum computing, and as a “potentially scalable hardware approach to the difficult problems of optimization and sampling.”

A commentary accompanying the Nature paper goes much further still — claiming that the new factoring approach, “if improved, could threaten data encryption,” and that resources should now be diverted from quantum computing to this promising new idea, one with the advantages of requiring no refrigeration or maintenance of delicate entangled states. (It should’ve added: and how big a number has Shor’s algorithm factored anyway, 21? Compared to 945, that’s peanuts!)

Since I couldn’t figure out a gentler way to say this, here goes: it’s astounding that this paper and commentary made it into Nature in the form that they did.

This is funny. While Google is keeping mum in order to over-dramatize their silly result, a rival group steals the spotlight with non-quantum technology.

Aaronson is annoyed that this is non-quantum technology making extravagant claims, but exactly how is the Google quantum computer effort any better?
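For intuition only, here is a toy of the general idea (my own sketch, not the paper's magnetic-tunnel-junction circuit): bits that flip at random, biased by a Metropolis-style rule so that configurations whose value divides N are energetically favored.

```python
import math
import random

def pbit_factor(N, n_bits=6, steps=20000, beta=0.5):
    # Toy "p-bit" factorizer: energy(p) = N mod p, which is zero when p divides N.
    bits = [random.randint(0, 1) for _ in range(n_bits)]
    value = lambda b: max(2, sum(bit << i for i, bit in enumerate(b)))
    energy = lambda b: N % value(b)
    for _ in range(steps):
        i = random.randrange(n_bits)
        before = energy(bits)
        bits[i] ^= 1                          # a stochastic flip
        after = energy(bits)
        if after > before and random.random() > math.exp(beta * (before - after)):
            bits[i] ^= 1                      # mostly revert uphill moves
        if energy(bits) == 0:
            p = value(bits)
            return p, N // p
    return None

print(pbit_factor(945))   # e.g. (27, 35) or (5, 189), run-dependent
```

That a few lines of ordinary code can "factor 945" this way is unimpressive, which is rather the point: nothing here threatens encryption.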

Apparently Google refuses to compete in any meaningful way, as Aaronson says:
How large a number Google could factor, by running Shor’s algorithm on its current device, is a semi-interesting question to which I don’t know the answer. My guess would be that they could at least get up to the hundreds, depending on how much precompilation and other classical trickery was allowed. The Google group has expressed no interest in doing this, regarding it (with some justice) as a circus stunt that doesn’t showcase the real abilities of the hardware.
A circus stunt? Obviously the results would be embarrassingly bad for Google.

Others have claimed to use quantum computers to factor 15 or 21, but those were circus stunts. They failed to show any evidence of a quantum speedup.

An interesting quantum computer result would factor numbers with Shor's algorithm, and show how the work scales with the size of the number.
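As a baseline for such a comparison, here is a hedged sketch of the classical side: trial division on semiprimes of growing size, where the work grows like the square root of N, i.e., exponentially in the bit-length. A convincing Shor demonstration would have to beat this kind of curve, not just factor one small number.

```python
semiprimes = [101 * 103, 1009 * 1013, 10007 * 10009, 100003 * 100019]

def trial_divide(N):
    # Count the divisions needed to find the smallest odd prime factor.
    d, ops = 3, 0
    while d * d <= N:
        ops += 1
        if N % d == 0:
            return d, ops
        d += 2
    return N, ops

for N in semiprimes:
    factor, ops = trial_divide(N)
    print(f"{N.bit_length():2d}-bit N = {N}: factor {factor} after {ops} divisions")
```

The division count grows about tenfold for every seven or so extra bits, which is what exponential scaling looks like in practice.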

Also:
But as I explained in the FAQ, running Shor to factor a classically intractable number will set you back thousands of logical qubits, which after error-correction could translate into millions of physical qubits. That’s why no one can do it yet.
And that is why we will not see true quantum supremacy any time soon. All Google has is a fancy random number generator.

Thursday, October 3, 2019

How there is mathematical pluralism

Mathematics is the study of absolute truth.

It is common for non-mathematicians to try to deny this. Sometimes they give arguments like saying that Goedel proved that mathematical truth is not possible. Goedel would never have agreed to that.

Mathematician Timothy Chow writes:
I would say that virtually all professional mathematicians agree that questions of the form “Does Theorem T provably follow from Axioms A1, A2, and A3?” have objectively true answers. ...

On the other hand, when it comes to the question of whether Axioms A1, A2, and A3 are true, then I think we have (what I called) “pluralism” in mathematics.
That is correct.

There are some axioms for the existence of very large cardinals, and some disagreement among mathematicians about whether those axioms should be regarded as true. But there is not really any serious disagreement about the truth of published theorems.
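The first half of Chow's statement can even be mechanized. A proof checker gives an objective verdict on "does T follow from A1, A2, A3?" without taking any position on whether the axioms are true. Here is a trivial sketch in Lean, with hypothetical axioms of my own choosing:

```lean
-- Whether T follows from A1, A2, A3 is objectively checkable,
-- regardless of whether anyone regards the axioms as true.
theorem T (P Q R : Prop) (A1 : P → Q) (A2 : Q → R) (A3 : P) : R :=
  A2 (A1 A3)
```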

Other fields, like Physics, are filled with disputes about what is true.

Monday, September 30, 2019

Classical and quantum theories are similarly indeterministic

Nearly everyone accepts the proposition that classical mechanics is deterministic, while quantum mechanics is probabilistic. For example, a recent Quanta mag essay starts:
In A Philosophical Essay on Probabilities, published in 1814, Pierre-Simon Laplace introduced a notorious hypothetical creature: a “vast intelligence” that knew the complete physical state of the present universe. For such an entity, dubbed “Laplace’s demon” by subsequent commentators, there would be no mystery about what had happened in the past or what would happen at any time in the future. According to the clockwork universe described by Isaac Newton, the past and future are exactly determined by the present. ...

A century later, quantum mechanics changed everything.
I believe that this view is mistaken.

I don't just mean that some classical theories use probability, like statistical mechanics. Or that quantum mechanics sometimes predicts a sure result.

I mean that determinism is not a genuine difference between classical and quantum mechanics.

A couple of recent papers by Flavio Del Santo and Nicolas Gisin make this point.

One says:
Classical physics is generally regarded as deterministic, as opposed to quantum mechanics that is considered the first theory to have introduced genuine indeterminism into physics. We challenge this view by arguing that the alleged determinism of classical physics relies on the tacit, metaphysical assumption that there exists an actual value of every physical quantity, with its infinite predetermined digits (which we name "principle of infinite precision"). Building on recent information-theoretic arguments showing that the principle of infinite precision (which translates into the attribution of a physical meaning to mathematical real numbers) leads to unphysical consequences, we consider possible alternative indeterministic interpretations of classical physics. We also link those to well-known interpretations of quantum mechanics. In particular, we propose a model of classical indeterminism based on "finite information quantities" (FIQs). Moreover, we discuss the perspectives that an indeterministic physics could open (such as strong emergence), as well as some potential problematic issues. Finally, we make evident that any indeterministic interpretation of physics would have to deal with the problem of explaining how the indeterminate values become determinate, a problem known in the context of quantum mechanics as (part of) the "quantum measurement problem". We discuss some similarities between the classical and the quantum measurement problems, and propose ideas for possible solutions (e.g., "collapse models" and "top-down causation").
Another:
Do scientific theories limit human knowledge? In other words, are there physical variables hidden by essence forever? We argue for negative answers and illustrate our point on chaotic classical dynamical systems. We emphasize parallels with quantum theory and conclude that the common real numbers are, de facto, the hidden variables of classical physics. Consequently, real numbers should not be considered as "physically real" and classical mechanics, like quantum physics, is indeterministic.
The point here is that any deterministic theory involving real numbers becomes indeterministic if you use finitary measurements and representations of those reals. In practice, all those theories are indeterministic.
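A chaotic toy model makes the point, shown below: two states of the perfectly deterministic logistic map that agree to ten decimal digits disagree completely after a few dozen steps, so any finitely-precise specification of the present leaves the future undetermined.

```python
# Two initial conditions equal to 10 decimal digits, evolved under the
# deterministic logistic map x -> 4x(1-x). Finite precision in, chaos out.
x, y = 0.1234567890, 0.1234567891
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step:2d}: x={x:.6f} y={y:.6f} |x-y|={abs(x - y):.2e}")
```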

Also, any indeterministic theory can be made deterministic by including the future observables in the present state. Quantum mechanical states are usually unknowable anyway, and people accept that, so one could just as well include the future (perhaps unknowable) observables in the present state.

Thus whether a physical theory is deterministic is just an artifact of how the theory is presented. It has no more meaning than that.

Tuesday, September 24, 2019

Did Google achieve Quantum Supremacy?

I have readers turning to my blog to see if I have shut it down out of humiliation at being proven wrong.

I refer to papers announcing that Google has achieved quantum supremacy. You can find links to the two papers in the comments on Scott Aaronson's blog.

I am not conceding defeat yet. First, Google has withdrawn the papers, and refuses to say whether it has achieved a breakthru or not. Second, outside experts like Aaronson have apparently been briefed on the work, but refuse to comment on it. And those who do comment are not positive:
However, the significance of Google’s announcement was disputed by at least one competitor. Speaking to the FT, IBM’s head of research Dario Gil said that Google’s claim to have achieved quantum supremacy is “just plain wrong.” Gil said that Google’s system is a specialized piece of hardware designed to solve a single problem, and falls short of being a general-purpose computer, unlike IBM’s own work.
Gil Kalai says that the Google and IBM results are impressive, but he still believes that quantum supremacy is impossible.

So it may not be what it appears to be.

Aaronson had been sworn to secrecy, and now considers the Google work a vindication of his ideas. He stops short of saying that it proves quantum supremacy, but he implies that the quantum supremacy skeptics have been checkmated.

Probably Google is eager to make a big splash about this, but is getting the paper published in Science or Nature, and those journals do not like to be scooped. The secrecy also helps suppress criticism, because the critics usually don't know enuf about the work when the reporters call.

The paper claims quantum supremacy on the basis of doing a computation that would have been prohibitive on a classical supercomputer.

That sounds great, but since the computation was not replicated, how do we know that it was done correctly?

Wikipedia says:
A universal quantum simulator is a quantum computer proposed by Yuri Manin in 1980[4] and Richard Feynman in 1982.[5] Feynman showed that a classical Turing machine would experience an exponential slowdown when simulating quantum phenomena, while his hypothetical universal quantum simulator would not. David Deutsch in 1985, took the ideas further and described a universal quantum computer.
So we have known since 1982 that simulating a quantum experiment on a classical computer can take exponential time.
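The textbook arithmetic behind that slowdown: a full classical description of n qubits takes 2^n complex amplitudes. A quick back-of-the-envelope:

```python
# Memory to store the full state vector of n qubits,
# at 16 bytes per complex amplitude.
for n in (30, 40, 53):
    print(f"{n} qubits: {2 ** n * 16 / 1e12:,.3f} TB")
# 30 qubits fit on a laptop; 53 qubits need about 144,000 TB (144 petabytes).
```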

At first glance, it appears that Google has only verified that. It did some silly quantum experiment, and then showed that the obvious classical simulation of it would take exponential time.

Is that all Google has done? I haven't read the paper yet, so I don't know. It is hard to believe that Google would claim quantum supremacy if that is all it is. And Google has not officially claimed it yet.

The paper says:
The benchmark task we demonstrate has an immediate application in generating certifiable random numbers [9];
Really? Is that all? It would be more impressive if they actually computed something.

Monday, September 23, 2019

Debunking Libet's free will experiment

The anti-free-will folks often cite a famous experiment by Libet. It doesn't really disprove free will, but it seemed to show that decisions had an unconscious element.

Now I learn that the experiment has been debunked anyway. The Atlantic mag reports:
Twenty years later, the American physiologist Benjamin Libet used the Bereitschaftspotential to make the case not only that the brain shows signs of a decision before a person acts, but that, incredibly, the brain’s wheels start turning before the person even consciously intends to do something. Suddenly, people’s choices—even a basic finger tap—appeared to be determined by something outside of their own perceived volition. ...

This would not imply, as Libet had thought, that people’s brains “decide” to move their fingers before they know it. Hardly. Rather, it would mean that the noisy activity in people’s brains sometimes happens to tip the scale if there’s nothing else to base a choice on, saving us from endless indecision when faced with an arbitrary task. The Bereitschaftspotential would be the rising part of the brain fluctuations that tend to coincide with the decisions. This is a highly specific situation, not a general case for all, or even many, choices. ...

When Schurger first proposed the neural-noise explanation, in 2012, the paper didn’t get much outside attention, but it did create a buzz in neuroscience. Schurger received awards for overturning a long-standing idea.
This does not resolve the issue of free will, but it does destroy one of the arguments against free will.

It also throws into doubt the idea that we subconsciously make decisions.
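Schurger's neural-noise explanation can be sketched in a few lines. This is a simplified caricature with made-up parameters, not his actual model: neural activity is a leaky accumulator of random noise, and the "decision" is just the moment the noise happens to cross a threshold.

```python
import random

def decide(threshold=1.0, leak=0.1, noise=0.15, max_steps=100000):
    # Leaky stochastic accumulator: no decision signal, just drifting noise.
    x = 0.0
    for t in range(max_steps):
        x += -leak * x + random.gauss(0.0, noise)
        if x >= threshold:
            return t          # the "decision" is a chance threshold crossing
    return None

times = [decide() for _ in range(20)]
print(times)   # widely scattered: the timing is set by noise, not by a prior intent
```

Averaging the activity backwards from the crossing times produces a slow ramp that looks like a readiness potential, even though nothing "decided" anything in advance.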

Saturday, September 21, 2019

On the verge of quantum supremacy again

July news:
Google expected to achieve quantum supremacy in 2019: Here’s what that means

Google's reportedly on the verge of demonstrating a quantum computer capable of feats no ordinary classical computer could perform. The term for this is quantum supremacy, and experts believe the Mountain View company could be mere months from achieving it. This may be the biggest scientific breakthrough for humanity since we figured out how to harness the power of fire. ...

Experts predict the advent of quantum supremacy – useful quantum computers – will herald revolutionary advances in nearly every scientific field. We’re talking breakthroughs in chemistry, astrophysics, medicine, security, communications and more. It may sound like a lot of hype, but these are the grounded predictions. Others think quantum computers will help scientists unlock some of the greater mysteries of the cosmos such as how the universe came to be and whether life exists outside of our own planet.
It seems as if I post these stories every year. Okay, here we go again.

I am betting Google will fail again. Check back on Dec. 31, 2019.

If Google delivers as promised, I will admit to being wrong. Otherwise, another year of phony promises will have passed.

Maybe already. The Financial Times is reporting:
Google claims to have reached quantum supremacy
The article is behind a paywall, so that's all I know. If true, you can be sure Google will be bragging in a major way. (Update: Read the FT article here.)

Update: LuMo tentatively believes it:
Google's quantum computing chip Bristlecone – that was introduced in March 2018 – has arguably done a calculation that took 3 minutes but it would take 10,000 years on the IBM's Summit, the top classical supercomputer as of today. I know nothing about the details of this calculation. I don't even know what amount of quantum error correction, if any, is used or has to be used for these first demonstrations of quantum supremacy.

If you have a qualified guess, let us know – because while I have taught quantum computing (in one or two of the lectures of QM) at Harvard, I don't really have practical experience with the implementation of the paradigm.

If true, and I tend to think it's true even though the claim is remarkable, we are entering the quantum computing epoch.
I look forward to the details being published. Commenter MD Cory suggests that I have been tricked.

Friday, September 20, 2019

Physicists confusing religion and science

Sabine Hossenfelder writes in a Nautilus essay:
And finally, if you are really asking whether our universe has been programmed by a superior intelligence, that’s just a badly concealed form of religion. Since this hypothesis is untestable inside the supposed simulation, it’s not scientific. This is not to say it is in conflict with science. You can believe it, if you want to. But believing in an omnipotent Programmer is not science—it’s tech-bro monotheism. And without that Programmer, the simulation hypothesis is just a modern-day version of the 18th century clockwork universe, a sign of our limited imagination more than anything else.

It’s a similar story with all those copies of yourself in parallel worlds. You can believe that they exist, all right. This belief is not in conflict with science and it is surely an entertaining speculation. But there is no way you can ever test whether your copies exist, therefore their existence is not a scientific hypothesis.

Most worryingly, this confusion of religion and science does not come from science journalists; it comes directly from the practitioners in my field. Many of my colleagues have become careless in separating belief from fact. They speak of existence without stopping to ask what it means for something to exist in the first place. They confuse postulates with conclusions and mathematics with reality. They don’t know what it means to explain something in scientific terms, and they no longer shy away from hypotheses that are untestable even in principle.
She is right, but with this attitude, she is not going to get tenure anywhere good.

Deepak Chopra wrote a letter to NY Times in response to Sean M. Carroll's op-ed. He mixes quantum mechanics and consciousness in a way that drives physicists nuts. They regard him as a mystic crackpot whose ideas should be classified as religion. But he is not really as bad as Carroll. It would be easier to test Chopra's ideas than Carroll's many-worlds nonsense.

Carroll is an example of a physicist confusing religion and science.