Tuesday, September 24, 2019

Did Google achieve Quantum Supremacy?

I have readers turning to my blog to see if I have shut it down because of the humiliation of being proven wrong.

I refer to papers announcing that Google has achieved quantum supremacy. You can find links to the two papers in the comments on Scott Aaronson's blog.

I am not conceding defeat yet. First, Google has withdrawn the papers, and refuses to say whether it has achieved a breakthru or not. Second, outside experts like Aaronson have apparently been briefed on the work, but refuse to comment on it. And those who do comment are not positive:
However, the significance of Google’s announcement was disputed by at least one competitor. Speaking to the FT, IBM’s head of research Dario Gil said that Google’s claim to have achieved quantum supremacy is “just plain wrong.” Gil said that Google’s system is a specialized piece of hardware designed to solve a single problem, and falls short of being a general-purpose computer, unlike IBM’s own work.
Gil Kalai says that the Google and IBM results are impressive, but he still believes that quantum supremacy is impossible.

So it may not be what it appears to be.

Aaronson had been sworn to secrecy, and now considers the Google work a vindication of his ideas. He stops short of saying that it proves quantum supremacy, but he implies that the quantum supremacy skeptics have been checkmated.

Google is probably eager to make a big splash about this, but it is getting the paper published in Science or Nature, and those journals do not like to be scooped. The secrecy also helps suppress criticism, because the critics usually don't know enuf about the work when the reporters call.

The paper claims quantum supremacy on the basis of doing a computation that would have been prohibitive on a classical supercomputer.

That sounds great, but since the computation was not replicated, how do we know that it was done correctly?

Wikipedia says:
A universal quantum simulator is a quantum computer proposed by Yuri Manin in 1980[4] and Richard Feynman in 1982.[5] Feynman showed that a classical Turing machine would experience an exponential slowdown when simulating quantum phenomena, while his hypothetical universal quantum simulator would not. David Deutsch in 1985, took the ideas further and described a universal quantum computer.
So we have known since 1982 that simulating a quantum experiment on a classical computer can take exponential time.

At first glance, it appears that Google has only verified that. It did some silly quantum experiment, and then showed that the obvious classical simulation of it would take exponential time.
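
To make the scaling concrete, here is a back-of-the-envelope Python sketch (mine, not anything from the paper) of what a brute-force state-vector simulation costs. The 16 bytes per amplitude assumes complex double precision, and the qubit counts are just illustrative.

# An n-qubit state vector has 2**n complex amplitudes, and every gate
# has to touch that whole vector, so memory alone becomes prohibitive.

def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to hold one n-qubit state vector in complex double precision."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 50):
    gib = state_vector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB for the state vector alone")

# Around 50 qubits this is already ~16 PiB, which is why the classical side
# has to fall back on cleverer algorithms and extrapolated run-time estimates
# rather than a direct replication of the sampling.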

Is that all Google has done? I haven't read the paper yet, so I don't know. It is hard to believe that Google would claim quantum supremacy if that is all it is. And Google has not officially claimed it yet.

The paper says:
The benchmark task we demonstrate has an immediate application in generating certifiable random numbers [9];
Really? Is that all? It would be more impressive if they actually computed something.

6 comments:

  1. Dear Roger,

    1. I made a comment at Prof. Scott Aaronson's blog, here: https://www.scottaaronson.com/blog/?p=4317#comment-1820055

    An important highlight from this comment of mine:

    "...the protocol in the leaked document, taken by itself, is too weak to support the claim."

    2. Note that the RCS (random circuit sampling) route to proving QS is itself like that. If *any* QC (Google's or someone else's) follows *any* RCS-based protocol, then, *once* it comes to beat the best available classical computer (CC), its output can *never* be *fully* verified. (That's by definition.)

    So, even if the QC has beaten the best CC, the judgment will always remain, in principle, probabilistic. This is unlike protocols based on, say, integer factoring, where a definite yes/no judgment can be had.

    So, the question for the RCS-based QC becomes: What are the other, indirect, ways in which we can become confident about its performance?

    Here, I mentioned that ideally, there should be a series of QCs based on the same technology. Start with a QC of a few qubits and verify it fully using a CC. Then systematically increase the number of qubits.

    If *all* the QCs in the sequence of increasing power are based on the same technology, and if the output of every QC below the QS threshold is fully verified using a CC, then we can grow confident that the one with QS power must also be functioning correctly (even if its output cannot be verified), provided it is just beyond the QS limit.

    Thus, following the RCS route, our judgment will always be probabilistic.
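
    To make this concrete, here is a toy Python sketch of that kind of statistical check. The xeb_score name, the stand-in ideal distribution, and all the numbers are my own constructions for illustration, not anything from the leaked paper; a real check would simulate the actual circuit.

    import numpy as np

    rng = np.random.default_rng(0)
    n_qubits = 10                      # small enough for full classical verification
    dim = 2 ** n_qubits

    # Stand-in for the classically computed ideal output distribution of a
    # random circuit (exponential, i.e. Porter-Thomas-shaped).
    ideal = rng.exponential(size=dim)
    ideal /= ideal.sum()

    def xeb_score(samples, ideal_probs):
        """Linear cross-entropy-style score: roughly 1 when samples follow a
        Porter-Thomas-shaped ideal distribution, roughly 0 for uniform noise."""
        d = len(ideal_probs)
        return d * ideal_probs[samples].mean() - 1.0

    faithful = rng.choice(dim, size=5000, p=ideal)   # a "working" device
    noisy = rng.integers(0, dim, size=5000)          # a device outputting noise

    print("faithful sampler:", round(xeb_score(faithful, ideal), 3))
    print("uniform noise:   ", round(xeb_score(noisy, ideal), 3))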

    3. Something better is possible. If a QC solves not the RCS problem but the integer factoring problem, then the verification of the QC's answer still remains within the capability of even low-end CCs.

    Therefore, even if a QC achieves QS, its performance can be conclusively studied. The judgment will not be probabilistic in principle (setting aside the fact that all QCs produce probabilistic outputs).

    The point is whether the QC can be verified deterministically or not. This is an important difference. The integer factoring problem allows for deterministic verification; RCS allows only probabilistic verification (because its output is beyond any CC to handle).

    Google's present machine is based on RCS. That's why I call it a weak protocol.
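
    To put the contrast in code (again just my own sketch, not anything from a paper): checking a claimed factorization is a single multiplication, at any size.

    def verify_factoring(n, p, q):
        """Deterministic yes/no check of a claimed factorization of n."""
        return p > 1 and q > 1 and p * q == n

    print(verify_factoring(15, 3, 5))                           # True
    print(verify_factoring(1299709 * 104729, 1299709, 104729))  # True, still instant

    # The RCS counterpart is a statistical score like xeb_score above: it needs
    # the full 2**n-entry table of ideal probabilities, so past the supremacy
    # threshold no classical machine can even set up an exact check.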

    4. Prof. Aaronson got sarcastic about my mentioning this idea. See his comment here: https://www.scottaaronson.com/blog/?p=4317#comment-1820142

    I answered further, but it has not appeared (whereas comments made later in time by others have).

    I think that either my answer has gone into spam, or it has been moderated out. The point I made was mostly the same: The present claim is based on RCS, and so, there is a weakness to the protocol.

    Unfortunate, because even if this computer is functioning quantum mechanically, and actually has greater computational power, the problem it solves, and hence the protocol of verification, won't allow anything but a probabilistic judgment to be drawn.

    5. No, it's not fully correct to say that this computer doesn't do anything useful. True, there is no useful application for it (in its present form). But the RCS problem does exercise all the parts of a QC in a fully connected way.

    It's sort of like sending a large number of meaningless pings randomly into a very complex network, and having them re-routed at random from random places, rather than mass-mailing people with get-rich-quick schemes.

    6. Hope I am not too wrong in characterizing the Google QC or its limitations.

    Best,

    --Ajit

  2. Google has achieved quantum chaos, first with its premature release of its papers, second with its silent response, and third with Scott Aaronson taking over as the Google spokesman for quantum supremacy on his blog.

    I don't like people claiming stuff without backing it up with facts. That is exactly what is happening here. Aaronson responded to my skeptical comments on his blog as follows:

    "Have you considered the possibility that others might know things about this that you don’t? What’s holding Google back is simply that their paper is embargoed by the journal that accepted it. Once it’s published (hopefully within a month), they’ll hold a press conference, etc. etc., as if the leak never happened. And will you be back here to acknowledge your wrongness?"

    All of this leaves a bad taste in my mouth. Aaronson said in his recent 3rd lecture in Zurich that he wants quantum supremacy primarily so he can say "I told you so" to the quantum skeptics.

    And he is doing this now. I am still skeptical.

  3. I agree with Roger. This is what I commented on Gil's blog: Villalonga (2019) discusses four different categories of classical simulators: direct evolution of the quantum state, perturbation of stabilizer circuits, tensor network contractions and hybrid algorithms. Obviously matching the fidelity of a NISQ device can substantially reduce the computational burden. The Kullback-Leibler divergence can distinguish the Porter-Thomas distribution but analog computers can produce it as well (see Tangpanitanon 2019). Makes me wonder if anything “digital” is really going on here. Either way, they are a long way from the BILLION physical qubits of high quality needed to crack RSA.
    https://arxiv.org/pdf/1905.00444.pdf
    https://arxiv.org/pdf/1906.03860.pdf
    https://arxiv.org/ftp/arxiv/papers/1208/1208.0928.pdf
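
    For what it's worth, here is a toy Python version of that distributional fingerprint. The kl_from_uniform helper and the simulated Porter-Thomas probabilities are my own stand-ins, not code from Villalonga or Google, and matching the shape by itself says nothing about whether the device is digital.

    import numpy as np

    rng = np.random.default_rng(1)
    dim = 2 ** 12   # pretend 12-qubit output space

    def kl_from_uniform(probs):
        """D_KL(P || Uniform) in nats; zero when P is itself flat."""
        d = len(probs)
        p = probs[probs > 0]
        return float(np.sum(p * np.log(p * d)))

    # Porter-Thomas-shaped probabilities, the expected fingerprint of a deep
    # random circuit's ideal output distribution.
    pt = rng.exponential(size=dim)
    pt /= pt.sum()

    # Flat distribution, what fully depolarizing noise would give.
    flat = np.full(dim, 1.0 / dim)

    print("Porter-Thomas vs uniform:", round(kl_from_uniform(pt), 3))    # about 0.42
    print("flat vs uniform:         ", round(kl_from_uniform(flat), 3))  # 0.0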

  4. All:

    Since my second comment at Prof. Aaronson's blog hasn't appeared even after several hours, here I copy-paste the two most relevant portions:

    Excerpt 1:

    "...*From what information has been available*, the protocol *is* too weak, following the usual standards followed in engineering and technology, to be able to fully justify a claim as extraordinary as actual achievement of QS.

    I keep this context, including my standards, when I talk about the desired strength from a protocol...."


    Excerpt 2:

    "...Suppose they not only had already thought of, but also chose to supply, all the relevant data, so what? What *precisely* would it help establish? Is this not a question an ordinary computational engineer should ask himself?

    Suppose an XYZ company, possibly Google themselves, comes tomorrow (or within a short enough time-frame) and establishes QS using not the RCS route but the integer factoring route. How would one place these two items of progress? How might history look back at the two?

    In all fairness, I think one would have to say here that the chronologically first result (the present one, assuming it pans out) was necessarily probabilistic in character, because the proving protocol itself had this nature. In contrast, the chronologically second result (based on integer factoring, a result that is purely hypothetical at this stage) was conclusive.

    Yes, all of this has always been well known to everyone. But what is the harm in pointing it out in a mere comment?"

    ---

    Another Point:

    Apart from it all, I think the methodology of some people involved in the debate is suspect. They are using their prestige to set up an artificial binary of QC-Skeptic vs. QC-Believer.

    According to their foolish scheme, being a QC-Skeptic means: turning a Nelson's eye to every item of progress, whatever its nature, and actively advocating the impossibility of the scalable QC in deliberate disregard of any and all evidence.

    Being a QC-Believer in this scheme means: (i) suspending, at the flip of their command, all the integrative context of knowledge that one does have, and (ii) since (i) alone cannot be enough for them, also actively helping them brand any independent thinkers as QC-Skeptics.

    Spelling out the nature of the games they play is enough; one need not separately add or loudly advertise that he refuses to fall for such tactics.

    Best,

    --Ajit

  5. The Joshua Tree Quantum Computer(TM) Achieves Quantum Supremacy: Dr. Elliot McGucken's research group hath created a programmable quantum computer capable of simulating over five sextillion atoms in the form of water molecules, performing massively complex calculations in a fraction of a second, leveraging quantum entanglement in the Hilbert Space, thusly tapping into the power of higher dimensions and the space of parallel universes. These highly complex many-body calculations would take classical computers over a trillion trillion trillion years to solve and simulate. The JT Quantum Computer leverages the quantum multiverse to generate random numbers and pattern distributions via the entangled interactions of five sextillion atoms.

    Atoms in a drop of water = 3 atoms/molecule x 1.67 x 10^21 molecules = 5.01 x 10^21 atoms, or five sextillion atoms.

    The patent-pending Joshua Tree Quantum Computer works as follows:

    One obtains an eye-dropper and a glass of water.

    One takes up a drop of water in the eye-dropper.

    One drops said drop of water onto a flat piece of glass.

    One can program this quantum computer by dropping the drop from different heights, resulting in varying sizes and spreads of the final pattern of the collapsed wavefunctions of the several sextillion atoms.

    Immediately one achieves and observes the random pattern distribution based on the entangled interactions between the over 5.01 x 10^21 atoms (five sextillion atoms), as the combined wave function collapses and the universe splits a trillion trillion times. Using a classical computer, one would be unable to simulate this calculation even after a trillion trillion trillion years. The JT Quantum Computer leverages the entangled quantum interactions between the atoms, thusly mining the Hilbert Space of parallel universes and the multiverse in a manner that is at least several sextillion times faster than IBM's leading Summit supercomputer.

    And so the Joshua Tree Quantum Computer hath achieved quantum supremacy.

    The Wright Brothers' first flight lasted under a minute, and already we have crossed far further than that, with big plans for the future, and multitudinous commercial applications. A new era hath dawned.

    We are currently seeking $200 million in funding to complete the design of the Redwood Quantum Computer, which will involve an entire bucket of water and which will be able to leverage the many worlds of Sean Carroll. Practical applications of the Redwood Quantum Computer(TM) will include dissolving and washing away BS.

    dx4/dt=ic friends, dx4/dt=ic

  6. Gil Kalai's blog https://gilkalai.wordpress.com/ has Gil's thoughts on this matter. He thinks it is possible to program a classical computer to efficiently spoof Google's statistical test for quantum supremacy, thus undercutting their supposed claim.

    I think Einstein said it best when he said, "God is subtle but not malicious."
