Monday, August 29, 2022

The Quantum Computing Bubble

Dr. Quantum Supremacy writes:
Several people have asked me to comment about a Financial Times opinion piece entitled The Quantum Computing Bubble (subtitle: “The industry has yet to demonstrate any real utility, despite the fanfare, billions of VC dollars and three Spacs”). The piece is purely deflationary — not a positive word in it — though it never goes so far as to suggest that QC is blocked by any Gil-Kalai-like fundamental principle, nor does it even evince curiosity about that question.
The FT article is paywalled, so I have not read it myself.

Aaronson always likes to draw a distinction between saying something is impossible, and saying something is impossible because it is blocked by a fundamental principle.

For example, saying that perpetual motion machines are impossible is not as satisfying as saying perpetual motion machines are impossible because the First Law of Thermodynamics says energy is conserved.

Okay, maybe, but it depends on how convincing the principle is.

Aaronson concedes that the article is right about one thing: it is not yet known whether it is "possible to build a large-scale, fault-tolerant quantum computer."

As for applications, my position has always been that if there were zero applications, it would still be at least as scientifically important to try to build QCs as it was to build the LHC, LIGO, or the James Webb telescope.  If there are real applications, such as simulating chemical dynamics, or certifiable randomness — and there very well might be — then those are icing on the cake. 

That is because he likes to study quantum complexity theory. But the practical applications may well be negative.

It is possible that the biggest practical application of QC will be to destroy the security of the internet communications that everyone uses every day.
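
To make the threat concrete, here is a toy sketch (my illustration, not from the FT piece or from Aaronson) of why fast factoring breaks RSA, the public-key system that secures much of internet traffic. The primes and message below are made-up toy values; Shor's algorithm on a large fault-tolerant quantum computer would perform the factoring step efficiently even at realistic key sizes.

```python
from math import gcd

# Toy RSA key (made-up tiny primes; real RSA uses ~1024-bit primes)
p, q = 61, 53
n = p * q                             # public modulus
e = 17                                # public exponent
assert gcd(e, (p - 1) * (q - 1)) == 1
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (Python 3.8+)

m = 42                                # plaintext message
c = pow(m, e, n)                      # anyone can encrypt with (n, e)

# An attacker who can factor n recovers the private key outright.
# Trial division works here only because n is tiny; Shor's algorithm
# would factor a 2048-bit n in polynomial time on a quantum computer.
p2 = next(k for k in range(2, n) if n % k == 0)
q2 = n // p2
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(c, d2, n) == m             # decrypted without ever knowing d
```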

Human Behavior Journal goes Woke

Nature, perhaps the world's leading group of science publications, announces:
Science must respect the dignity and rights of all humans

New ethics guidance addresses potential harms for human population groups who do not participate in research but may be harmed by its publication.

So the Nature Human Behaviour journal will now reject papers based on these principles:
1. Content that is premised upon the assumption of inherent biological, social, or cultural superiority or inferiority of one human group over another based on race, ethnicity, national or social origin, sex, gender identity, sexual orientation, religion, political or other beliefs, age, disease, (dis)ability, or other socially constructed or socially relevant groupings (hereafter referred to as socially constructed or socially relevant human groupings).

2. Content that undermines — or could reasonably be perceived to undermine — the rights and dignities of an individual or human group on the basis of socially constructed or socially relevant human groupings.

3. Content that includes text or images that directly or indirectly disparage a person or group on the basis of socially constructed or socially relevant human groupings.

4. Submissions that embody singular, privileged perspectives, which are exclusionary of a diversity of voices in relation to socially constructed or socially relevant human groupings, and which purport such perspectives to be generalisable and/or assumed.

Steve Pinker and other prominent scientists criticize it here and here. Separately, the same ones attack a NY Times op-ed denying the maternal instinct. These scientists are all old and retired, and academia is not producing truth-tellers anymore.

This seems to be saying: We are tired of being called racist for publishing research that says Black people are inferior. So we are going to ban articles that even hint at the facts. Because George Floyd, we have to be more woke.

Human behavior does vary among racial and ethnic groups. Good research about it is needed for social policy. We will not get it anymore. You might have to read century-old papers to get the truth.

Update: Noah Carl writes:

The reason I want to congratulate the editors of Nature Human Behaviour is that they are being open and honest about a policy that most social science journals already have. While many commentators have rightly criticised the absurd editorial, they seem to be operating under the illusion that it’s a one-off. It isn’t. Many journals follow exactly the same policy – they just don’t say so, or if they do, they hide it in the small print.

Even Intelligence, a supposedly controversial journal, has guidelines for the “use of inclusive language”. These specify that submissions must “contain nothing which might imply that one individual is superior to another on the grounds of age, gender, race, ethnicity, culture, sexual orientation, disability or health condition.” ...

And it isn’t just journals. The owners of some datasets explicitly forbid you from testing certain hypotheses. To access data held by the Social Science Genetic Association Consortium, you now have to promise that you “will not use these data to make comparisons of genetically predicted phenotype levels across ancestral groups”.

Thursday, August 25, 2022

Is There Causation in Fundamental Physics?

Emily Adlam tries to justify quantum causation in a new paper:
Bertrand Russell famously argued that causation plays no role in science: it is ‘a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm.’ [1] Cartwright [2] and later writers moderated this conclusion somewhat, and it is now largely accepted that in a macroscopic setting causal concepts are an important part of the assessments we make about possible strategies for action.

But the view that causation in the usual sense of the term is not present in fundamental physics, or at least that not all fundamental physical processes are causal, remains prevalent [3, 4] - for example, Norton writes that ‘(causes and causal principles) are heuristically useful notions, licensed by our best sciences, but we should not mistake them for the fundamental principles of nature’ [5].

Furthermore, many influential philosophical analyses of causation posit that causation arises only at a macroscopic level, as a result of the thermodynamic gradient [6,7], interventions [8,9], the perspectives of agents [10], or some such feature of reality which plays no role in fundamental physics.

In light of this widespread orthodoxy, it may seem surprising that in recent years a significant literature around causation has sprung up within quantum foundations.

[1] Bertrand Russell. On the notion of cause. Proceedings of the Aristotelian Society, 13:1–26, 1912.
[2] Nancy Cartwright. Causal laws and effective strategies. Noûs, 13(4):419–437, 1979.

The opinion against causation is so bizarre that it is hard for me to understand it. In my view, causation is fundamental at all levels of science. Events are influenced by events in their backward light cones, and science is all about explaining that.
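
In special relativity the point can be stated precisely. A minimal formulation, in standard notation (my summary, not Adlam's):

```latex
% Causal structure in special relativity, in units with c = 1:
% an event x can influence an event y only if y lies in the closed
% future light cone J+(x) of x.
\[
  y \in J^{+}(x) \iff (y^0 - x^0)^2 - |\vec{y} - \vec{x}\,|^2 \ge 0
  \quad \text{and} \quad y^0 \ge x^0 .
\]
```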

How can philosophers be so against causation?

I see different arguments.

If time travel into the past is possible, then it is hard to see what causation means.

If you use a Lagrangian formulation of physics, then you often find a solution for all times at once, as opposed to strictly deducing future events from past events. However, there is usually an equivalent formulation in which the past causes the future.
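
As a sketch of what I mean, here are the two formulations side by side for ordinary classical mechanics (standard textbook material, in my notation):

```latex
% Variational formulation: the whole trajectory is selected at once
% by extremizing the action,
\[
  S[q] = \int_{t_0}^{t_1} L(q, \dot{q})\, dt , \qquad
  \frac{d}{dt}\frac{\partial L}{\partial \dot{q}}
  - \frac{\partial L}{\partial q} = 0 .
\]
% Hamiltonian formulation (H the Legendre transform of L): the same
% dynamics recast as an initial value problem, with data at one time
% determining the future,
\[
  \dot{q} = \frac{\partial H}{\partial p}, \qquad
  \dot{p} = -\frac{\partial H}{\partial q}, \qquad
  (q(t_0), p(t_0)) \ \text{given}.
\]
```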

Russell's argument was in 1912, before quantum mechanics. He seemed to think that there were universal mathematical laws determining the future. In his view, that was not really causation, like a human causing something to happen.

Quantum uncertainties lead to other questions. Some people seem to think that probabilities cannot be caused, but that is plainly untrue in ordinary English usage. People say that smoking causes lung cancer, even though the connection is probabilistic.
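
In symbols, the ordinary-language claim is just that the cause raises the probability of the effect (a standard probabilistic-causation schema, my phrasing):

```latex
% "Smoking causes lung cancer" in probabilistic form: the cause
% raises the probability of the effect without determining it.
\[
  P(\text{cancer} \mid \text{smoking})
  > P(\text{cancer} \mid \text{no smoking}),
  \qquad
  P(\text{cancer} \mid \text{smoking}) < 1 .
\]
```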

Some say that causality cannot explain the Bell correlations:

note that the proof of the Bell inequality can be regarded as telling us that any causal model for correlations violating the Bell inequality must postulate a causal connection between the choice of measurement on one particle and the outcome of the measurement on the other particle, but the quantum mechanical no-signalling theorem ensures that at the statistical level there will be no dependence of the outcome on the measurement choice, so if we wish to represent these statistics by a causal model we must carefully ‘fine-tune’ the parameters of the model to ensure that the underlying causal influences exactly cancel out so as to be invisible at the level of the empirical statistics.

I think the problem here is this: if you believe that Bell proved nonlocality, while causation is supposed to be a local mechanism, then the two appear incompatible.

But Bell did not prove nonlocality. Quantum mechanics is a local theory, consistent with causation. Many people misunderstand this.
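
For reference, here are the standard CHSH form of the Bell inequality and the no-signalling condition that the quoted passage appeals to (textbook statements, in my notation):

```latex
% CHSH inequality: for settings x, x' (Alice) and y, y' (Bob), the
% correlators E(x, y) of any local hidden-variable model satisfy
\[
  |E(x,y) + E(x,y') + E(x',y) - E(x',y')| \le 2 ,
\]
% while quantum mechanics allows values up to 2*sqrt(2).
% No-signalling: Alice's marginal statistics for her outcome a do
% not depend on Bob's choice of setting y,
\[
  \sum_{b} P(a, b \mid x, y) = P(a \mid x)
  \quad \text{for all settings } y .
\]
```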

Wednesday, August 24, 2022

Weyl's unified field theory

New paper:
In 1918, H. Weyl proposed a unified theory of gravity and electromagnetism based on a generalization of Riemannian geometry. With hindsight we now could say that the theory carried with it some of the most original ideas that inspired the physics of the twentieth century. ...

Although Weyl’s theory was not considered by Einstein to constitute a viable physical theory, the powerful and elegant ideas put forward by the publication of Weyl’s paper survived and now constitutes a constant source of inspiration for new proposals, particularly in the domain of the so-called “modified gravity theories” [3].

Despite Einstein’s objections, Weyl’s unified theory attracted the attention of some eminent contemporary physicists of Weyl, among whom we can quote Pauli, Eddington, London, and Dirac[4]. However, the great majority of theoretical physicists in the first decades of the twentieth century remained completely unaware of Weyl’s work.

The amazing thing to me is that Weyl had a theory similar to the geometric formulations of electromagnetic gauge theory that became widely known 50 years later.
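
For comparison (my paraphrase of the standard account, up to sign and unit conventions): Weyl's 1918 gauge transformation rescaled lengths, while the later quantum version rotates the phase of the wave function, with the electromagnetic potential transforming the same way in both.

```latex
% Weyl 1918: local rescaling of the metric, compensated by the
% electromagnetic potential,
\[
  g_{\mu\nu} \to e^{2\lambda(x)} g_{\mu\nu}, \qquad
  A_\mu \to A_\mu - \partial_\mu \lambda .
\]
% Modern U(1) gauge theory (Weyl 1929, after London and Fock): the
% scale factor becomes a phase on the wave function,
\[
  \psi \to e^{ie\lambda(x)}\, \psi, \qquad
  A_\mu \to A_\mu - \partial_\mu \lambda .
\]
```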

Weyl's theory did become widely known, but did no one improve on it until much later? Or maybe someone did, and I have not heard about it.

Monday, August 22, 2022

Formal Math v Human Math

David Ruelle wrote an essay on human math, and remarked:
Mathematics consists in deriving consequences (theorems) from a set of assumptions (axioms) by application of given logical rules. The set of axioms mostly used currently is ZFC (Zermelo-Fraenkel-Choice set-theoretical axioms). Axioms and theorems can be formulated in a formal language. ZFC is fairly believable by mathematicians (a typical axiom is ‘there exists an infinite set’). We remind the reader that the consistency of ZFC cannot be proved (this follows from Gödel’s incompleteness theorems).

Human mathematics is based on natural languages (ancient Greek, English, etc.) which can in principle be translated into formal language (but is hardly understandable after translation).

This is all true, but it leads people to the conclusion that formal axiomatized math does not really prove what it is supposed to prove, so human math is better.

ZFC is not supposed to prove the consistency of ZFC; that would not even make sense. Consistency can only be proved in a larger system. Gödel's theorems are widely misunderstood.
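
To spell out the point (standard statements of the theorems, not Ruelle's words):

```latex
% Goedel's second incompleteness theorem: if ZFC is consistent,
% then ZFC cannot prove its own consistency,
\[
  \mathrm{Con}(\mathrm{ZFC}) \;\Longrightarrow\;
  \mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC}) .
\]
% Consistency is provable only in a strictly stronger system, e.g.
\[
  \mathrm{ZFC} + \text{``there exists an inaccessible cardinal''}
  \;\vdash\; \mathrm{Con}(\mathrm{ZFC}) .
\]
```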

Sunday, August 14, 2022

Still No Quantum Supremacy

Dr. Quantum Supremacy just got back from a conference on the subject, and reports:
Of course there’s a lot still to do. Many of the talks drew an exclamation point on something I’ve been saying for the past couple years: that there’s an urgent need for better quantum supremacy experiments, which will require both theoretical and engineering advances. The experiments by Google and USTC and now Xanadu represent a big step forward for the field, but since they started being done, the classical spoofing attacks have also steadily improved, to the point that whether “quantum computational supremacy” still exists depends on exactly how you define it.

Briefly: if you measure by total operations, energy use, or CO2 footprint, then probably yes, quantum supremacy remains. But if you measure by number of seconds, then it doesn’t remain, not if you’re willing to shell out for enough cores on AWS or your favorite supercomputer. And even the quantum supremacy that does remain might eventually fall to, e.g., further improvements of the algorithm due to Gao et al. For more details, see, e.g., the now-published work of Pan, Chen, and Zhang, or this good popular summary by Adrian Cho for Science.

If the experimentalists care enough, they could easily regain the quantum lead, at least for a couple more years, by (say) repeating random circuit sampling with 72 qubits rather than 53-60, and hopefully circuit depth of 30-40 rather than just 20-25.

Considering how he has staked his professional reputation on quantum supremacy, this is an admission that it has not been achieved. It will require both theoretical and engineering advances, and they had better come quickly, or Google and a lot of big shots are going to be very embarrassed.
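
Some back-of-the-envelope arithmetic (mine, not Aaronson's) shows why a few dozen extra qubits matter: a brute-force statevector simulation stores 2^n complex amplitudes, so the memory requirement doubles with every added qubit. The classical spoofing attacks use cleverer tensor-network methods, but those too degrade rapidly with qubit count and circuit depth.

```python
# Memory for a brute-force statevector simulation of n qubits,
# assuming 8 bytes per amplitude (single-precision complex numbers).
BYTES_PER_AMPLITUDE = 8

for n in (53, 60, 72):
    petabytes = (2 ** n) * BYTES_PER_AMPLITUDE / 1e15
    print(f"{n} qubits: {petabytes:,.0f} petabytes")

# Output:
# 53 qubits: 72 petabytes         (huge, but within reach of the
#                                  largest storage systems)
# 60 qubits: 9,223 petabytes
# 72 qubits: 37,778,932 petabytes (far beyond any classical machine)
```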

I am skeptical that quantum computers will ever have any advantage over Turing machines.

There are a lot of books hyping quantum computers. Here is a skeptical one that I have not read: Will We Ever Have a Quantum Computer?, by Mikhail I. Dyakonov.

Wednesday, August 10, 2022

New Lecture on Many-Worlds

Physicist Sean M. Carroll has a new lecture on The Many Worlds of Quantum Mechanics. Here is a 2-year-old lecture on the same subject.

A question at 1:04:00 asks for observable evidence for many-worlds. For example, could you prepare a Schroedinger Cat, and somehow verify that it is alive in one world and dead in another?

The correct answer is that there is no such evidence, and the whole concept of many-worlds is unscientific and unverifiable.

He dodges the question, and says that there are experiments that could disprove quantum mechanics.

Yes, of course, but textbook (aka Copenhagen) QM does not say the two cats can be observed.

His lecture is a pretty clear explanation of QM and many-worlds.

He says, at 35:40, that many-worlds is a theory, not an interpretation. I agree with that. The interpretations of QM all have the same predictions and observations. An interpretation is just a philosophical explanation of what the variables mean, and no experiment can show that one interpretation is better than another.

The Copenhagen interpretation is what Bohr and Heisenberg said. And maybe Schroedinger and Dirac. The textbook interpretation is the version of it found in modern textbooks.

Many-worlds is, in essence, the theory of QM with the part about observations and predictions removed. So many-worlds cannot make predictions, and cannot be tested or verified.

Carroll is a big proponent of many-worlds, but only because he believes it gives a better explanation of what is going on. But it does not explain anything, and is an unscientific theory.

In the older lecture, he admits at 37:00 that many-worlds cannot be tested. He excuses this by saying that the assumptions that go into many-worlds can be tested. Those assumptions are the same as with quantum mechanics, so every test of QM is also a test of many-worlds.

This is just a dodge. There is no test that can show a preference to many-worlds over textbook QM.

He then goes on to say that many-worlds is an unfinished theory, and that maybe some day someone will figure out how many-worlds could make testable predictions. As the theory currently stands, it deterministically predicts that all outcomes happen in branching universes, so every prediction comes true in some universe. The theory cannot be tested.
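
In symbols, the standard toy measurement model (textbook material, in my notation) shows why: unitary evolution alone produces all the branches and never selects one.

```latex
% A qubit in superposition interacts unitarily with an apparatus
% initially in the ready state |M_0>:
\[
  (\alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle)
  \otimes |M_0\rangle
  \;\longrightarrow\;
  \alpha\,|{\uparrow}\rangle|M_\uparrow\rangle
  + \beta\,|{\downarrow}\rangle|M_\downarrow\rangle .
\]
% Both outcomes persist in the final state. Nothing in the unitary
% dynamics selects one branch, or assigns the Born probabilities
% |alpha|^2 and |beta|^2 to observing it.
```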

Israeli physicist Lev Vaidman has a new paper on Why the Many-Worlds Interpretation?:

A brief (subjective) description of the state of the art of the many-worlds interpretation of quantum mechanics (MWI) is presented. It is argued that the MWI is the only interpretation which removes action at a distance and randomness from quantum theory. Limitations of the MWI regarding questions of probability which can be legitimately asked are specified.... Some speculations about misconceptions, which apparently prevent the MWI to be in the consensus, are mentioned.

I quote his arguments for many-worlds, as otherwise you would not believe that the theory is as stupid as it is.

Note that he says that MWI removes randomness and fails to predict probability, as if that were an advantage.

The only part of our experience, which unitary evolution of the universal wave function does not explain, is the statistics of the results of quantum experiments we performed. ...

Thus, the MWI brings back determinism to scientific description [8]. (Before the quantum revolution, determinism was considered as a virtue of scientific explanation.) We, as agents capable of experiencing only a single world, have an illusion of randomness. This illusion is explained by a deterministic theory of the universe which includes all worlds together.

Got that? It is deterministic about things we never see, and fails to predict the probabilistic events we do see.

The MWI provides simple answers to almost all quantum paradoxes. Schrödinger’s Cat is absurd in one world, but unproblematic when it represents one world with a live cat and a multitude of worlds with the cat which died at different times of detection of the radioactive decay. ...

The paradoxical behaviour of Bell-type experiments disappears when quantum measurement does not have a single outcome [9]. ...

The reluctance of a human to accept the MWI is natural. We would like to think that we are the center of the Universe: that the Sun, together with other stars, moves around Earth, that our Galaxy is the center of the Universe, and we are unhappy to accept that there are many parallel copies of us which are apparently not less important.

There you go. Your rejection of the idea that you are constantly splitting into parallel universes is just a natural human conceit about your own self-importance. You are like those narrow-minded astronomers who put the Earth at the center of the universe.

This is crackpot stuff. It is anti-science. It is saying that you can get rid of the paradoxes in a theory by removing all of its predictions.

Monday, August 8, 2022

Grad Schools to Stop Using Standardized Tests

From an AAAS Science magazine editorial:
Earlier this year, the University of Michigan became the first US university to remove the requirement that applicants to its nonprofessional doctoral programs take a standardized test—the Graduate Record Examination (GRE). This decision will not, on its own, address inequities in admissions practice, nor the broader education barriers that many applicants face. But it is a major step toward an admissions process that considers all dimensions of a candidate’s preparation and promise—a holistic view that should be adopted by all universities if equity in education and opportunities is to be achieved. ...

What are the costs for admissions committees that use the GRE in admissions decisions? In short, the loss of talented applicants at every stage of the process.

This is the dumbing down of science grad schools. The purpose is to admit incompetent women and BIPOCs. There is no example of a talented applicant being lost. The talented ones are able to score well on the tests.

Test scores are the main way that talented students get into good schools when they have deficiencies elsewhere in their records. Ignoring the scores serves no purpose, except to enable sex and race discrimination. It is amazing to see America's leading science journal going along with this nonsense.

Google Quantum Computers Failed to Prove Anything

AAAS Science magazine announces:
Ordinary computers can beat Google’s quantum computer after all
Superfast algorithm put crimp in 2019 claim that Google’s machine had achieved “quantum supremacy”

If the quantum computing era dawned 3 years ago, its rising sun may have ducked behind a cloud. In 2019, Google researchers claimed they had passed a milestone known as quantum supremacy when their quantum computer Sycamore performed in 200 seconds an abstruse calculation they said would tie up a supercomputer for 10,000 years. Now, scientists in China have done the computation in a few hours with ordinary processors. A supercomputer, they say, could beat Sycamore outright.

Such results were reported previously on this blog, and by Gil Kalai, who points out that Google was wrong by ten orders of magnitude.

“I think they’re right that if they had access to a big enough supercomputer, they could have simulated the … task in a matter of seconds,” says Scott Aaronson, a computer scientist at the University of Texas, Austin. The advance takes a bit of the shine off Google’s claim, says Greg Kuperberg, a mathematician at the University of California, Davis. “Getting to 300 feet from the summit is less exciting than getting to the summit.”

Still, the promise of quantum computing remains undimmed, Kuperberg and others say.

No, they are not 300 feet from the summit. They are still at sea level.
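
The "ten orders of magnitude" is easy to check (my arithmetic; the seconds figure is an assumed round number based on Aaronson's remark quoted above):

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600

claimed = 10_000 * SECONDS_PER_YEAR  # Google's 2019 classical estimate
actual = 30                          # assumed: "a matter of seconds"

print(f"overestimate: about 10^{math.log10(claimed / actual):.0f}")
# -> about 10^10, i.e. roughly ten orders of magnitude
```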

The whole point of quantum supremacy is to find a computation where quantum computers are demonstrably faster than classical (Turing) computers. That has been a failure. No advantage has been shown at all.

The advance underscores the pitfalls of racing a quantum computer against a conventional one, researchers say. “There’s an urgent need for better quantum supremacy experiments,” Aaronson says. Zhang suggests a more practical approach: “We should find some real-world applications to demonstrate the quantum advantage.”

They are acknowledging that no quantum computer has demonstrated any advantage.

I have said here that the whole research program is misguided and doomed. Quantum computing is probably impossible.

Even if you didn't know anything about this subject, you would have to think the program is fishy. The QC proponents are collecting billions of dollars in research funds, and making wildly exaggerated claims, only to be proved wrong later. Look at how they are in denial. The biggest result of the last ten years is proven wrong, and they still say, "the promise of quantum computing remains undimmed".

Thursday, August 4, 2022

The Prize for the Electroweak Model

The biggest Nobel Prize for the Standard Model was probably the 1979 prize to Glashow, Weinberg, and Salam for electroweak theory.

Peter Woit posted some info about bickering behind the prize. Apparently Salam's work was unoriginal and undeserving. Salam thought that he deserved to share the 1957 prize for parity violation, and lobbied heavily to get it for something else.

Apparently Weinberg also used to be close buddies with Glashow, but did not want to share the prize with him. So Weinberg was eager to credit Salam in order to cut Glashow out.

I am not even sure Weinberg was so deserving. His contribution was a short 1967 paper that was hardly cited by anyone for years. Decades later, prizes were given for the Higgs mechanism and 't Hooft's renormalization proof, and those contributions were arguably more critical.

A couple of comments mention that Salam was the first Muslim to win a science Nobel Prize. I do not know whether that worked in his favor or against him.