Saturday, August 6, 2016

Supersymmetry fails again

Dennis Overbye reports for the NY Times:
A bump on a graph signaling excess pairs of gamma rays was most likely a statistical fluke, they said. But physicists have been holding their breath ever since.

If real, the new particle would have opened a crack between the known and the unknown, affording a glimpse of quantum secrets undreamed of even by Einstein. Answers to questions like why there is matter but not antimatter in the universe, or the identity of the mysterious dark matter that provides the gravitational glue in the cosmos. In the few months after the announcement, 500 papers were written trying to interpret the meaning of the putative particle.

On Friday, physicists from the same two CERN teams reported that under the onslaught of more data, the possibility of a particle had melted away. ...

The presentations were part of an outpouring of dozens of papers from the two teams on the results so far this year from the collider, all of them in general agreement with the Standard Model.
The fact is that physicists developed a theory of quantum electrodynamics by 1960 or so, and extended it to the other fundamental forces in the 1970s. The result is called the Standard Model, and it explains all the observations.

Nevertheless, all the leading physicists keep loudly claiming that something is wrong with the Standard Model, so we need more dimensions and bigger particle accelerators.
For a long time, the phenomenon physicists have thought would appear to save the day is a conjecture known as supersymmetry, which comes with the prediction of a whole new set of elementary particles, known as wimps, for weakly interacting massive particles, one of which could comprise the dark matter that is at the heart of cosmologists’ dreams.

But so far, wimps haven’t shown up either in the collider or in underground experiments designed to detect wimps floating through space. Neither has evidence for an alternative idea that the universe has more than three dimensions of space.
Supersymmetry still has many true believers, but it is contrary to all the experiments. And the theory behind it is pretty implausible as well.

The biggest physics news of the week was this Nature article:
Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware.
The full article is behind a paywall.

You probably thought that 5-qubit quantum computers had already been announced. Yes, many times, but each new one claims to be a more honest 5 qubits than the previous ones.

The promise, of course, is that this is the great breakthrough towards scalability.

Maybe, but I doubt it.
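A back-of-the-envelope calculation (my illustration, not from the article) shows why a 98 percent mean gate fidelity is nowhere near scalability: if errors compound independently, a circuit of n gates succeeds with probability roughly 0.98^n, which collapses quickly as circuits get deep.

```python
# Rough illustration: assuming each gate succeeds independently with
# probability p, an n-gate circuit succeeds with probability about p**n.
def circuit_success(p, n):
    """Approximate success probability of an n-gate circuit
    with per-gate fidelity p, assuming independent errors."""
    return p ** n

for n in (10, 100, 1000):
    print(f"{n:5d} gates: {circuit_success(0.98, n):.4f}")
```

At 100 gates the success probability is already down to about 13 percent, and at 1000 gates it is essentially zero, which is why useful quantum computing is assumed to require error correction far beyond anything demonstrated.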

I get comments from people who wonder how I can be skeptical about quantum computers when they have already been demonstrated.

Have they really been demonstrated? Just look at a paper like this one. Even as the authors do everything they can to promote the importance of their work, their most excited claim is always that it might be a step towards the elusive scalable quantum computer. This implies that the scalable quantum computer has not been demonstrated. It may never be.

These experiments are just demonstrations of quantum mechanical properties of atoms. The technology gets better all the time, but they are still just detecting quantum uncertainties of an isolated state.

I'll be looking for Scott Aaronson's comments, but currently he is busy enumerating his errors. Apparently a number of his published theorems and proofs have turned out to be wrong.

He is probably not more fallible than other complexity theorists. Just more willing to admit his mistakes.

1 comment:

  1. Roger,

    1. I am happy that physicists are served well! By reality!!

2. When it comes to scalable quantum computers, to what extent would the analogy of ``rock balancing'' be appropriate?

I mean, the two systems do seem to share the feature of an increasingly narrow zone in the space of controlling parameters wherein equilibrium can be achieved (even if it's, sort of like, only ``quasi''-stable (not a technical term)).

    The difference is that the QC is an essentially dynamic system whereas the rocks system is essentially static. A dynamic balance might at first sight seem more difficult to achieve, but then, think of tightrope walking.

    Presuming that QC can indeed be correctly characterized as a dynamic system, perhaps this characteristic makes it easier to scale it up? May be?

    ... Or does the quantum (vs. classical) character make it more difficult to scale up?

    But is the analogy OK in the first place? What are your thoughts?