Thursday, October 31, 2019

Aaronson explains qubits in the NY Times

Scott Aaronson announces his New York Times op-ed on quantum supremacy. His personal interest in this is greater than I thought, as he says the NY Times forced him to reveal:
Let’s start with applications. A protocol that I came up with a couple years ago uses a sampling process, just like in Google’s quantum supremacy experiment, to generate random bits. ... Google is now working toward demonstrating my protocol; it bought the non-exclusive intellectual property rights last year.
He was the outside reviewer of the Google paper published in Nature, so he had a big hand in the editorial decision to say that this was quantum supremacy. Aaronson claims credit for Google confirming that quantum computers can be used for generating random numbers. And Google paid Aaronson for the privilege.

I am not accusing Aaronson of being crooked here. I'm sure his motives are as pure as Ivory Soap. But he sure has a lot invested in affirming quantum supremacy based on random number generation. Maybe the Nature journal should have also required this disclosure.

He admits:
The computer revolution was enabled, in large part, by a single invention: the transistor. ... We don’t yet have the quantum computing version of the transistor — that would be quantum error correction.
So we don't have real qubits yet.

Aaronson has spent many years trying to convince everyone that there is a right way and a wrong way to explain qubits. Here is the wrong way:
For a moment — a few tens of millionths of a second — this makes the energy levels behave as quantum bits or “qubits,” entities that can be in so-called superpositions of the 0 and 1 states.

This is the part that’s famously hard to explain. Many writers fall back on boilerplate that makes physicists howl in agony: “imagine a qubit as just a bit that can be both 0 and 1 at the same time, exploring both possibilities simultaneously.”
So here is his better version:
Here’s a short version: In everyday life, the probability of an event can range only from 0 percent to 100 percent (there’s a reason you never hear about a negative 30 percent chance of rain). But the building blocks of the world, like electrons and photons, obey different, alien rules of probability, involving numbers — the amplitudes — that can be positive, negative, or even complex (involving the square root of -1). Furthermore, if an event — say, a photon hitting a certain spot on a screen — could happen one way with positive amplitude and another way with negative amplitude, the two possibilities can cancel, so that the total amplitude is zero and the event never happens at all. This is “quantum interference,” and is behind everything else you’ve ever heard about the weirdness of the quantum world.
Really? I may be dense, but I don't see that this is any better. He insists that the key is realizing that probabilities can be negative, or imaginary.

But this is just nonsense. There are no negative probabilities in quantum mechanics, or anywhere else.

We do have interference. Light shows interference patterns, as all waves can. There is nothing the slightest bit strange about waves showing interference. But Aaronson insists on saying that the interference comes from negative probabilities. I don't see how that is mathematically accurate, or helpful to understanding quantum mechanics.
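To see what the formalism actually says, here is the standard two-path calculation (a textbook sketch in my notation, not Aaronson's):

```latex
% Two paths to the same spot on the screen, with complex amplitudes a_1 and a_2.
% The probability is the squared magnitude of the total amplitude:
P = |a_1 + a_2|^2 = |a_1|^2 + |a_2|^2 + 2\,\mathrm{Re}(a_1^{*} a_2)
```

The amplitudes can be negative or complex, and the cross term can cancel the other two, giving P = 0. But P itself always stays between 0 and 1. The negative quantities are amplitudes, not probabilities.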

Wednesday, October 30, 2019

Perfect qubits would be amazing

The NY Times finally has an article on Google's quantum supremacy claim:
“Imagine you had 100 perfect qubits,” said Dario Gil, the head of IBM’s research lab in Yorktown Heights, N.Y., in a recent interview. “You would need to devote every atom of planet Earth to store bits to describe that state of that quantum computer. By the time you had 280 perfect qubits, you would need every atom in the universe to store all the zeros and ones.” ...

In contrast, many hundreds of qubits or more may be required to store just one of the huge numbers used in current cryptographic codes. And each of those qubits will need to be protected by many hundreds more, to protect against errors introduced by outside noise and interference.
Got that? 100 perfect qubits would give you more storage capacity than all the atoms on Earth.
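The arithmetic is easy to check. A quick sketch (the atom counts are rough order-of-magnitude estimates, not exact figures):

```python
# Quick check of Gil's numbers. The atom counts below are rough,
# commonly quoted order-of-magnitude estimates, not precise figures.
amplitudes_100 = 2 ** 100      # complex amplitudes in a 100-qubit state
amplitudes_280 = 2 ** 280
atoms_earth    = 10 ** 50
atoms_universe = 10 ** 80

print(f"2^100 = {amplitudes_100:.1e}   vs ~{atoms_earth:.0e} atoms on Earth")
print(f"2^280 = {amplitudes_280:.1e}   vs ~{atoms_universe:.0e} atoms in the universe")
```

Running it, 2^280 is about 1.9 x 10^84, which does exceed the ~10^80 atoms in the observable universe; the Earth comparison evidently assumes many atoms per stored amplitude.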

But to store just one of the numbers used in crypto codes, you would need many hundreds of qubits, as well as technological breakthrus to protect against errors.

The catch here is the modifier "perfect". Nobody has made any perfect qubits, or any scalable qubits, or any qubits protected against errors from outside noise and interference.

All this talk of 53 qubits is a big scam. They don't even have 2 qubits.

Tuesday, October 29, 2019

Many-Worlds theory is not science

More and more physicists are endorsing the Many-Worlds theory, and I have criticized them many times on this blog. Lubos Motl has also defended Copenhagen and criticized MW, and he has finally gotten to the heart of the matter.

MW is not just a goofy interpretation. It turns a good scientific theory into something that is contrary to all of science. It eliminates the ability to make predictions.

Motl writes:
Even today, almost 90 years later, the anti-quantum zealots who are still around – depending on the degree of their stupidity – argue that quantum mechanics is either wrong or incomplete. The typical complaint that "quantum mechanics isn't complete" is formulated as follows:
But the Copenhagen Interpretation fails to tell us what is really going on before we look.
Well, in reality, quantum mechanics tells us everything that is happening before the observation: nothing that could be considered a fact is happening before (or in the absence of) an observation! It is an answer. You may dislike it but it's a lie to say that you weren't given an answer!

Needless to say, the statements are upside down. The Copenhagen Interpretation provides us with a definition of which questions are physically meaningful; and with the method to determine the answers to these questions (which must be probabilistic and the axiomatic framework clearly and unambiguously says that no "unambiguous" predictions of the phenomena are possible in general).

Instead, it's the anti-quantum "interpretations" of quantum mechanics such as the Many Worlds Interpretation that are incomplete because
their axioms just don't allow you to determine what you should do if you want to calculate the probability of an outcome of an observation.

In particular, the Many Worlds Interpretation denies that there's any collapse following Born's rule (an axiom) but it is rather obvious that when you omit this only link between quantum mechanics and probabilities, the Many Worlds paradigm will become unable to actually predict these probabilities. You created a hole – someone has removed something (because the building block looked ideologically heretical to him) that was needed (in the Copenhagen paradigm) to complete the argumentation that normally ends with the probabilistic prediction.

This is an actually valid complaint because the primary purpose of science is to explain and predict the results of phenomena.
That's right. And if you support MW, you are abandoning the primary purpose of science. (I am avoiding the word "interpretation", because it is not really an interpretation. Calling it an interpretation is part of the hoax.)

Motl doesn't name names in this post, but an example is Sean M. Carroll. Motl probably has more distinguished physicists in mind, and doesn't want to embarrass them.

Belief in MW is so goofy as to discredit whatever other opinions its advocates might have. It is like believing in the Flat Earth, or that the Moon landings were faked.

Friday, October 25, 2019

Quantum measurement problem, explained

Dr. Bee explains the quantum measurement problem:
The problem with the quantum measurement is now that the update of the wave-function is incompatible with the Schrödinger equation. The Schrödinger equation, as I already said, is linear. That means if you have two different states of a system, both of which are allowed according to the Schrödinger equation, then the sum of the two states is also an allowed solution. The best known example of this is Schrödinger’s cat, which is a state that is a sum of both dead and alive. Such a sum is what physicists call a superposition.

We do, however, only observe cats that are either dead or alive. This is why we need the measurement postulate. Without it, quantum mechanics would not be compatible with observation. ...

Why is the measurement postulate problematic? The trouble with the measurement postulate is that the behavior of a large thing, like a detector, should follow from the behavior of the small things that it is made up of. But that is not the case. So that’s the issue. The measurement postulate is incompatible with reductionism. ...

I just explained why quantum mechanics is inconsistent. This is not a 'vague philosophical concern'.
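For reference, the linearity she invokes is just the superposition property of the Schrödinger equation:

```latex
i\hbar\,\partial_t \psi = H\psi
\quad\Longrightarrow\quad
\text{if } \psi_1 \text{ and } \psi_2 \text{ are solutions, so is } \alpha\psi_1 + \beta\psi_2 .
```

The measurement postulate instead replaces α ψ_1 + β ψ_2 with ψ_1 (probability |α|^2) or ψ_2 (probability |β|^2), and no linear evolution can do that.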
She also says QM is incomplete.

This so-called measurement problem is a 'vague philosophical concern' in the sense that it does not present any practical difficulties.

When you say a theory is inconsistent, that usually means that it allows computing two different outcomes for some proposed experiment. That never happens with QM.

To see that there is an inconsistency, you have to wade thru a discussion of not seeing cats that are alive and dead at the same time.

It is not clear that this problem is a problem.

If anything good comes out of quantum computing research, it could be a better reductionist understanding of quantum measurement. Quantum computers seek to string together qubits as much as possible without measuring them. Because the computation depends on this lack of measurement, maybe the experiments could tell us more precisely just what a measurement is.

But the quantum computing research has told us nothing of the kind. Good old QM/Copenhagen is the underlying theory for all these experiments, and we have no clue that the 1930 theory is not good enuf.

Wednesday, October 23, 2019

IBM explains why Google has a quantum flop

Wired reports:
IBM Says Google’s Quantum Leap Was a Quantum Flop ...

Monday, Big Blue’s quantum PhDs said Google’s claim of quantum supremacy was flawed. IBM said Google had essentially rigged the race by not tapping the full power of modern supercomputers. “This threshold has not been met,” IBM’s blog post says. Google declined to comment. ...

Whoever is proved right in the end, claims of quantum supremacy are largely academic for now. ... It's a milestone suggestive of the field’s long-term dream: That quantum computers will unlock new power and profits ...
Wired says "academic" because everyone quoted claims that quantum supremacy will soon be achieved.

But where's the proof?

Nobody believed the Wright brothers could fly until they actually got off the ground. Quantum supremacy was supposed to be a way of showing that quantum computers had gotten off the ground. If those claims are bogus, as IBM now claims to have proved, then no quantum computers have gotten off the ground. That "long-term dream" is pure speculation.

Update: Google is now bragging, as its paper appeared in Nature. I assumed that it was trying to get into either Science or Nature, but Nature says that it does not object to authors releasing preprints. If so, Google could have addressed criticisms after the paper was leaked.

Quanta mag has an article on the Google IBM dispute:
Google stands by their 10,000 year estimate, though several computer experts interviewed for this article said IBM is probably right on that point. “IBM’s claim looks plausible to me,” emailed Scott Aaronson of the University of Texas, Austin. ...

Aaronson — borrowing an analogy from a friend — said the relationship between classical and quantum computers following Google’s announcement is a lot like the relationship in the 1990s between chess champion Garry Kasparov and IBM’s Deep Blue supercomputer. Kasparov could keep up for a bit, but it was clear he was soon going to be hopelessly outstripped by his algorithmic foe.

“Kasparov can make a valiant stand during a ‘transitional era’ that lasts for maybe a year or two,” Aaronson said. “But the fundamentals of the situation are that he’s toast.”
Following his analogy, IBM's Deep Blue did beat Kasparov, but not convincingly. Maybe the computer was lucky. It was really the subsequent advances by others that showed that computers were superior.

So Aaronson seems to be saying that this research does not prove quantum supremacy, but other research will soon prove it.

We shall see.

Meanwhile, let's be clear about what Google did. It made a random number generator out of near-absolute-zero electronic gates in entangled states. Then it made some measurements to get some random values. Then it said that the device could be simulated on a classical computer, but it would take more time. Maybe 10,000 years more, but maybe just a couple of hours more.

That's all. No big deal, and certainly not quantum supremacy.
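For a sense of what "simulating the device classically" means, here is a toy state-vector sketch (my own illustration; real random-circuit sampling also interleaves two-qubit entangling gates):

```python
import numpy as np

# Toy state-vector simulation of a random circuit. A sketch only: real
# random-circuit sampling interleaves two-qubit entangling gates, and
# Google's 53 qubits would need 2**53 amplitudes -- which is exactly why
# the classical cost blows up.
rng = np.random.default_rng(0)
n = 10                                   # toy size; Google used n = 53
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                           # start in |00...0>

def apply_gate(state, gate, q):
    """Apply a 2x2 unitary `gate` to qubit q of the n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(2 ** n)

for _ in range(30):                      # a few layers of random rotations
    theta = rng.uniform(0, 2 * np.pi)
    gate = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]], dtype=complex)
    state = apply_gate(state, gate, int(rng.integers(n)))

probs = np.abs(state) ** 2               # Born rule: amplitudes -> probabilities
sample = rng.choice(2 ** n, p=probs)     # draw one output bit string
print(f"{sample:0{n}b}")
```

The memory cost is the whole story: the state vector has 2^n entries, so each added qubit doubles the classical work.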

Update: Scott Aaronson weighs in, and admits that he was a Nature reviewer. He is happy because he is professionally invested in quantum supremacy being proved this way.

But he admits that Google's claim of 10k years is bogus, and that Google does not have any scalable qubits at all. Furthermore, the researchers cooked the circuits so that they would be hard to simulate classically, while being completely useless for actually doing a computation.

To actually compute something useful, Google would need scalable qubits with some fault-tolerance system, and Google is no closer to doing that.

It has long been known that there are quantum systems that are hard to simulate. The only new thing here is that Google says its system is programmable. I am not sure why that is a good thing, as it cannot be programmed to do anything useful.

Update: Aaronson argues:
But I could turn things around and ask you: do you seriously believe at this point that Nature is going to tell the experimenters, “building a QC with 53 qubits is totally fine — but 60? 70? no, that’s too many!”
The flaw in this argument is that they don't really have a QC with 53 qubits. They have a random number generator with 53 components that act enuf like qubits to generate random numbers.

Computing something useful is expected to require 10 million real (scalable) qubits. Yes, Nature may very well say you can have a quantum device to generate random numbers, but not get a quantum computational advantage.

Monday, October 21, 2019

Indian books on Superior and Inferior

I enjoy John Horgan's SciAm columns, especially when he expresses skepticism for fad scientific work. For example, he appears to be winning a bet that no Nobel Prize will be awarded for string theory.

He has his share of goofy ideas, such as his belief in abolishing war.

His latest column is a rant against scientific work on human races, as if all such work were inherently racist:
But no, he was condemning Watson’s critics, whom he saw as cowards attacking a courageous truth-teller. I wish I could say I was shocked by my host’s rant, but I have had many encounters like this over the decades. Just as scientists and other intellectuals often reveal in private that they believe in the paranormal, so many disclose that they believe in the innate inferiority of certain groups. ...

I once suggested that, given the harm done by research on alleged cognitive differences between races, it should be banned. I stand by that proposal. I also agree with Saini that online media firms should do more to curb the dissemination of racist pseudoscience. “This is not a free speech issue,”
Really? Scientists and intellectuals often reveal in private that they believe in the paranormal? I doubt that.

My guess is that he is just using "paranormal" as a word to cover beliefs he does not recognize.

I am no expert in race research, but there is a lot of it, and I cannot believe it is all bogus.
I read Superior: The Return of Race Science by British journalist Angela Saini (who is coming to my school Nov. 4, see Postscript). Superior is a thoroughly researched, brilliantly written and deeply disturbing book. It is an apt follow-up to Saini’s previous book, Inferior, which explores sexism in science (and which I wrote about here and here). Saini calls “intellectual racism” the “toxic little seed at the heart of academia. However dead you might think it is, it needs only a little water, and now it’s raining.”
British? She has Indian parents, and India is famous for its racial/caste divisions. And its sexism too, for that matter.

Her books are mostly politics, not science. The favorable reviews just show how science has been corrupted. Here is how she writes:
If anything, the public debate around race and science has sunk into the mud. To state even the undeniable fact that we are one human species today means falling afoul of a cabal of conspiracy theorists. The “race realists,” as they call themselves online, join the growing ranks of climate change deniers, anti-vaxxers and flat-earthers in insisting that science is under the yoke of some grand master plan designed to pull the wool over everyone’s eyes. In their case, a left-wing plot to promote racial equality when, as far as they’re concerned, racial equality is impossible for biological reasons.

How did we get here? How did society manage to create so much room for those who genuinely believe that entire nations have innately higher or lower cognitive capacities,
Maybe because some nations have achieved much more than other nations?
What has started with a gentle creep through the back door of our computers could end, if we’re not careful, with jackboots through the front door of our homes. Populism, ethnic nationalism and neo-Nazism are on the rise worldwide.
No, this is just leftist paranoia. Neo-Nazism does not even exist, as far as I know.

Saturday, October 19, 2019

Google's overhyped announcement imminent

Nautilus:
News on the quantum physics grapevine, Frankfurt Institute theoretical physicist Sabine Hossenfelder tells me, is that Google will announce something special next week: Their paper on achieving quantum supremacy, the realization of a quantum computer that outdoes its conventional counterpart. ...

It’s nothing to get too excited about yet. “This” — NISQ — “is really a term invented to make investors believe that quantum computing will have practical applications in the next decades or so,” Hossenfelder says. “The trouble with NISQs is that while it is plausible that they soon will be practically feasible, no one knows how to calculate something useful with them.” Perhaps no one ever will. “I am presently quite worried that quantum computing will go the same way as nuclear fusion, that it will remain forever promising but never quite work.”
We know that the Sun gets its energy from nuclear fusion. We don't know that quantum speedups are even possible.

Thursday, October 17, 2019

Rovelli: Neither Presentism nor Eternalism

Physicist Carlo Rovelli writes in support of Neither Presentism nor Eternalism:
Shortly after the formulation of special relativity, Einstein's former math professor Minkowski found an elegant reformulation of the theory in terms of the four dimensional geometry that we call today Minkowski space. Einstein at first rejected the idea. (`A pointless mathematical complication'.) But he soon changed his mind and embraced it full heart, making it the starting point of general relativity, where Minkowski space is understood as the local approximation to a 4d, pseudo-Riemannian manifold, representing physical spacetime.

The mathematics of Minkowski and general relativity suggested an alternative to Presentism: the entire 4d spacetime is `equally real now', and becoming is illusory. This I call here Eternalism.
Others make this argument that relativity implies an eternalist philosophy of time. I disagree with this argument. You can talk about spacetime with either Galilean or Lorentz transformations. If that is eternalist, then it is eternalist either with or without relativity.
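The point is easy to put in formulas. For a boost with velocity v along x, the two transformation laws are:

```latex
\text{Galilean: } x' = x - vt, \quad t' = t
\qquad\qquad
\text{Lorentz: } x' = \gamma\,(x - vt), \quad
t' = \gamma\left(t - \frac{vx}{c^2}\right), \quad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

Either set of transformations acts on the same 4d spacetime of events. If drawing that spacetime makes you an eternalist, you were an eternalist before relativity came along.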

Note that Rovelli is compelled to make his relativity story all about Einstein, even tho Einstein had nothing to do with the issue at hand. Minkowski did not reformulate Einstein's theory, as it is not clear that Minkowski was ever influenced by anything Einstein wrote. Spacetime relativity was first published by Poincare, and Minkowski cited Poincare.

Rovelli ends up wanting some compromise between presentism and eternalism, as both views are really just philosophical extremes to emphasize particular ways of thinking about time. This might seem obvious, except that there are a lot of physicists who say that relativity requires eternalism.

Monday, October 14, 2019

The hardest of the hard sciences has gone soft

Science writer Jim Baggott writes in Aeon:
So what if a handful of theoretical physicists want to indulge their inner metaphysician and publish papers that few outside their small academic circle will ever read? But look back to the beginning of this essay. Whether they intend it or not (and trust me, they intend it), this stuff has a habit of leaking into the public domain, dripping like acid into the very foundations of science. The publication of Carroll’s book Something Deeply Hidden, about the Many-Worlds interpretation, has been accompanied by an astonishing publicity blitz, including an essay on Aeon last month. A recent PBS News Hour piece led with the observation that: ‘The “Many-Worlds” theory in quantum mechanics suggests that, with every decision you make, a new universe springs into existence containing what amounts to a new version of you.’

Physics is supposed to be the hardest of the ‘hard sciences’. It sets standards by which we tend to judge all scientific endeavour. And people are watching.
Physics has become embarrassingly unscientific.

Unsurprisingly, the folks at the Discovery Institute, the Seattle-based think-tank for creationism and intelligent design, have been following the unfolding developments in theoretical physics with great interest. The Catholic evangelist Denyse O’Leary, writing for the Institute’s Evolution News blog in 2017, suggests that: ‘Advocates [of the multiverse] do not merely propose that we accept faulty evidence. They want us to abandon evidence as a key criterion for acceptance of their theory.’ The creationists are saying, with some justification: look, you accuse us of pseudoscience, but how is what you’re doing in the name of science any different?
Yes, I think it is different. The folks at the Discovery Institute try to support their ideas with evidence. Carroll has no evidence for his ideas, and denies that any evidence is needed.
Instead of ‘the multiverse exists’ and ‘it might be true’, is it really so difficult to say something like ‘the multiverse has some philosophical attractions, but it is highly speculative and controversial, and there is no evidence for it’?
No, many worlds is not some speculative idea that might be true. Saying that would suggest that there might be evidence for it. There can be no evidence for it.

Sabine Hossenfelder writes:
Right, as I say in my public lecture, physicists know they shouldn't make these arguments, but they do it nevertheless. That's why I am convinced humans will go extinct in the next few hundred years.
Extinct? Maybe rational humans will die out, and be replaced by intelligent robots and an uneducated underclass.

Wednesday, October 9, 2019

Preskill explains quantum supremacy

Physicist John Preskill writes in Quillette:
In 2012, I proposed the term “quantum supremacy” to describe the point where quantum computers can do things that classical computers can’t, regardless of whether those tasks are useful. ...

The words “quantum supremacy” — if not the concept — proved to be controversial for two reasons. One is that supremacy, through its association with white supremacy, evokes a repugnant political stance. The other reason is that the word exacerbates the already overhyped reporting on the status of quantum technology.
This is funny. A few years ago, supremacy might have evoked thoughts of kings, empires, popes, and laws, but not white people. Now rationalist internet forums are frequented by misogynists and white nationalists. Preskill seems to be referring to this gripe about white supremacy.
The catch, as the Google team acknowledges, is that the problem their machine solved with astounding speed was carefully chosen just for the purpose of demonstrating the quantum computer’s superiority. It is not otherwise a problem of much practical interest. In brief, the quantum computer executed a randomly chosen sequence of instructions, and then all the qubits were measured to produce an output bit string. This quantum computation has very little structure, which makes it harder for the classical computer to keep up, but also means that the answer is not very informative.

However, the demonstration is still significant. By checking that the output of their quantum computer agrees with the output of a classical supercomputer (in cases where it doesn’t take thousands of years), the team has verified that they understand their device and that it performs as it should. Now that we know the hardware is working, we can begin the search for more useful applications.
The term "quantum supremacy" suggests a major accomplishment. But all we really know is that the hardware is working.

We also know that they did a quantum experiment that is hard to simulate. But so what? The weather is hard to simulate. A lot of things are hard to simulate.

Here is Preskill's 2012 paper on quantum supremacy, and his 2018 paper on NISQ. The latter says:
I’ve already emphasized repeatedly that it will probably be a long time before we have fault-tolerant quantum computers solving hard problems. ...

Nevertheless, solving really hard problems (like factoring numbers which are thousands of bits long) using fault-tolerant quantum computing is not likely to happen for a while, because of the large number of physical qubits needed. To run algorithms involving thousands of protected qubits we’ll need a number of physical qubits which is in the millions, or more [56].
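The arithmetic behind the "millions" is simple. A sketch with ballpark figures (these are commonly quoted assumptions for illustration, not numbers from Preskill's paper):

```python
# Rough fault-tolerance overhead. Both figures are ballpark assumptions
# for illustration, not measured values.
logical_qubits = 4000          # e.g., enough to run Shor on a ~2048-bit number
physical_per_logical = 1000    # typical error-correction overhead estimate

print(f"{logical_qubits * physical_per_logical:,} physical qubits")  # 4,000,000
```

Compare that to the 53 noisy qubits Google actually has.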
So a quantum computer that tells us something we didn't already know is decades away. Or impossible.

Monday, October 7, 2019

Many-Worlds does not solve measurement

Dr. Bee has a podcast on The Trouble with Many Worlds:
The measurement process therefore is not only an additional assumption that quantum mechanics needs to reproduce what we observe. It is actually incompatible with the Schrödinger equation.

Now, the most obvious way to deal with that is to say, well, the measurement process is something complicated that we do not yet understand, and the wave-function collapse is a placeholder that we use until we figure out something better.

But that’s not how most physicists deal with it.
Actually, I think it is. Quantum mechanics was created by positivists, and their attitude is to go with what we've got, and not worry too much about purely philosophical objections.
Most sign up for what is known as the Copenhagen interpretation, that basically says you’re not supposed to ask what happens during measurement. In this interpretation, quantum mechanics is merely a mathematical machinery that makes predictions and that’s that. The problem with Copenhagen – and with all similar interpretations – is that they require you to give up the idea that what a macroscopic object, like a detector does should be derivable from theory of its microscopic constituents.

If you believe in the Copenhagen interpretation you have to buy that what the detector does just cannot be derived from the behavior of its microscopic constituents.
The positivists would go along with saying that the theory is all about the predictions, but would never say that you are not supposed to ask about the measurement process. Positivists do not tell you what not to do. They talk about what works.

She is completely correct that the collapse is observed. Some people complain that Copenhagen is goofy because the collapse is unnatural, but all interpretations have to explain the apparent collapse somehow.
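For reference, the collapse postulate in question is Born's rule: measuring an observable with eigenstates |i⟩ on a state ψ gives outcome i with probability

```latex
P(i) = \bigl|\langle i \mid \psi \rangle\bigr|^{2},
\qquad
\psi \;\longrightarrow\; |i\rangle \ \text{after the measurement.}
```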
The many worlds interpretation, now, supposedly does away with the problem of the quantum measurement and it does this by just saying there isn’t such a thing as wavefunction collapse. Instead, many worlds people say, every time you make a measurement, the universe splits into several parallel worlds, one for each possible measurement outcome. This universe splitting is also sometimes called branching. ...

And because it’s the same thing you already know that you cannot derive this detector definition from the Schrödinger equation. It’s not possible. What the many worlds people are now trying instead is to derive this postulate from rational choice theory. But of course that brings back in macroscopic terms, like actors who make decisions and so on. In other words, this reference to knowledge is equally in conflict with reductionism as is the Copenhagen interpretation.

And that’s why the many worlds interpretation does not solve the measurement problem and therefore it is equally troubled as all other interpretations of quantum mechanics.
She is right that Many-Worlds does not solve the measurement problem, and really has to have its own sneaky collapse postulate like Copenhagen, even tho the whole point of Many-Worlds was to avoid that.

However, the situation with Many-Worlds is worse than that. Any physical theory could be turned into a Many-Worlds theory by simply introducing a universe splitting for each probabilistic prediction. This can be done with Newtonian celestial mechanics, electromagnetism, relativity, or anything else.

With any of these Many-Worlds theories, you can believe in them if you want, but the split universes have no observable consequences except to reduce or kill the predictive power of your theory. Any freak event can be explained away by splitting to another universe.
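To make the point concrete, here is a toy sketch (my own illustration, not anyone's model): take an ordinary coin-flip theory and "Many-Worlds" it by keeping every branch.

```python
import random

def predict(p_heads, flips):
    """Ordinary probabilistic theory: sample one history, checkable against data."""
    return [random.random() < p_heads for _ in range(flips)]

def many_worlds(p_heads, flips):
    """Branching version: every possible history 'happens' in some world.
    Note that p_heads is never used -- the branching rule has no place
    for the probabilities, which is exactly the predictive power lost."""
    worlds = [[]]
    for _ in range(flips):
        worlds = [w + [h] for w in worlds for h in (True, False)]
    return worlds

print(predict(0.5, 10))             # one 10-flip history, checkable against data
print(len(many_worlds(0.5, 10)))    # 1024 worlds -- nothing is ruled out
```

Whatever sequence of flips you observe, it is in there somewhere, so no observation can ever count against the theory.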

So Many-Worlds does not, and cannot, explain anything. It is just smoke and mirrors.

A reader asks:
What is your explanation as to why many people who are obviously very smart, such as Max Tegmark, David Deutsch, Sean Carroll, etc, subscribe to the many-worlds interpretation?
Why do so many smart people tell lies about Donald Trump every day?

I wrote a whole book on how Physics has lost its way. There is now a long list of subjects where prominent Physics professors recite nonsense. I hesitate to say that they are all con men, as many appear to be sincerely misguided.

Friday, October 4, 2019

Google scooped by unconventional p-bit computer

It is funny how quantum computing evangelist Scott Aaronson is flummoxed by being scooped by a rival technology:
A Nature paper entitled Integer factorization using stochastic magnetic tunnel junctions (warning: paywalled). See also here for a university press release.

The authors report building a new kind of computer based on asynchronously updated “p-bits” (probabilistic bits). A p-bit is “a robust, classical entity fluctuating in time between 0 and 1, which interacts with other p-bits … using principles inspired by neural networks.” They build a device with 8 p-bits, and use it to factor integers up to 945. They present this as another “unconventional computation scheme” alongside quantum computing, and as a “potentially scalable hardware approach to the difficult problems of optimization and sampling.”

A commentary accompanying the Nature paper goes much further still — claiming that the new factoring approach, “if improved, could threaten data encryption,” and that resources should now be diverted from quantum computing to this promising new idea, one with the advantages of requiring no refrigeration or maintenance of delicate entangled states. (It should’ve added: and how big a number has Shor’s algorithm factored anyway, 21? Compared to 945, that’s peanuts!)

Since I couldn’t figure out a gentler way to say this, here goes: it’s astounding that this paper and commentary made it into Nature in the form that they did.

This is funny. While Google is keeping mum in order to over-dramatize their silly result, a rival group steals the spotlight with non-quantum technology.

Aaronson is annoyed that this is non-quantum technology making extravagant claims, but exactly how is the Google quantum computer effort any better?

Apparently Google refuses to compete in any meaningful way, as Aaronson says:
How large a number Google could factor, by running Shor’s algorithm on its current device, is a semi-interesting question to which I don’t know the answer. My guess would be that they could at least get up to the hundreds, depending on how much precompilation and other classical trickery was allowed. The Google group has expressed no interest in doing this, regarding it (with some justice) as a circus stunt that doesn’t showcase the real abilities of the hardware.
A circus stunt? Obviously the results would be embarrassingly bad for Google.

Others have claimed to use quantum computers to factor 15 or 21, but those were circus stunts. They failed to show any evidence of a quantum speedup.

An interesting quantum computer result would factor numbers with Shor's algorithm, and show how the work scales with the size of the number.
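For perspective, numbers like 945 (or 15, or 21) are trivial classically; a few lines of trial division factor them instantly:

```python
def trial_division(n):
    """Factor n by repeated trial division -- instant for numbers this small."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(945))   # [3, 3, 3, 5, 7]
print(trial_division(15))    # [3, 5]
```

The interesting question is never whether a device can factor 945, but how the work scales as the numbers grow.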

Also:
But as I explained in the FAQ, running Shor to factor a classically intractable number will set you back thousands of logical qubits, which after error-correction could translate into millions of physical qubits. That’s why no one can do it yet.
And that is why we will not see true quantum supremacy any time soon. All Google has is a fancy random number generator.

Thursday, October 3, 2019

How there is mathematical pluralism

Mathematics is the study of absolute truth.

It is common for non-mathematicians to try to deny this. Sometimes they argue that Goedel proved that mathematical truth is not possible. Goedel himself would never have agreed to that.

Mathematician Timothy Chow writes:
I would say that virtually all professional mathematicians agree that questions of the form “Does Theorem T provably follow from Axioms A1, A2, and A3?” have objectively true answers. ...

On the other hand, when it comes to the question of whether Axioms A1, A2, and A3 are true, then I think we have (what I called) “pluralism” in mathematics.
That is correct.

There are some axioms for the existence of very large cardinals, and some disagreement among mathematicians about whether those axioms should be regarded as true. The Continuum Hypothesis is the classic case: it provably cannot be settled from the standard ZFC axioms, and mathematicians differ over whether it is true. But there is not really any serious disagreement about the truth of published theorems.

Other fields, like Physics, are filled with disputes about what is true.