Thursday, July 27, 2017

Physics took a wrong turn with unification

I have agreed with Lubos Motl the last few times I have mentioned him, which is worrisome, but I do disagree with his post on wrong turns:
what was the first wrong turn in theoretical physics [?] ...

There would be "more moderate" groups that would identify the grand unification as the first wrong turn, or supersymmetric field theories as the first wrong turn, or bosonic string theory, or superstring theory, or non-perturbative string theory, or M-theory, or the flux vacua, or something else.

I've met members of every single one of these groups. ...

Some critics of the evolutionary biology say that zebras and horses may have a common ancestor but zebras and llamas can't. Does it make any sense? ...

The case of the critics of physics is completely analogous. If grand unification were the first wrong turn, how do you justify that the group SU(3)×SU(2)×U(1) is "allowed" to be studied in physics, while SO(10) is already blasphemous or "unscientific" (their word for "blasphemous")? It doesn't make the slightest sense. They're two groups and both of them admit models that are consistent with everything we know. SO(10) is really simpler and prettier ...
No, SO(10) is not really simpler and prettier, and such thinking was indeed a wrong turn in 1973 that led theoretical physicists into a 40-year dead-end.

You justify the group SU(3)×SU(2)×U(1) because each of its factors is grounded in experimental observations. SU(3) is the 3-color-quark theory of the strong interactions, SU(2) is the weak force (beta decay), and U(1) is electromagnetism. SO(10) adds many new particles and phenomena that have never been observed and have no experimental basis.

SO(10) supposedly unifies the forces, but it doesn't. It doesn't reduce the number of coupling constants or experimentally-determined parameters. It doesn't make the theory or analysis any easier.
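The bookkeeping is easy to check: a gauge theory has one gauge boson per dimension of its gauge group, with dim SU(n) = n² − 1 and dim SO(n) = n(n − 1)/2. A minimal sketch of the count:

```python
# Number of gauge bosons = dimension of the gauge group.
def dim_su(n):
    return n * n - 1          # dim SU(n) = n^2 - 1

def dim_so(n):
    return n * (n - 1) // 2   # dim SO(n) = n(n-1)/2

# Standard Model: SU(3) x SU(2) x U(1), with dim U(1) = 1.
print(dim_su(3) + dim_su(2) + 1)  # 12: eight gluons, W+, W-, Z, photon
print(dim_so(10))                 # 45
```

All 12 Standard Model gauge bosons have been observed; SO(10) demands 33 more that have not.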

I have a theory that this wrong turn, and a bunch of other subsequent ones, were rooted in misconceptions about the big physics successes of the past, notably relativity and quantum mechanics.

Those theories were grounded in experiment, but there is a widespread belief that Einstein should get all the credit for relativity because he ignored the experiments and carried out the supposedly essential step of elevating principles to postulates. I wrote a book on How Einstein Ruined Physics, explaining the damage from this warped view of Einstein.

So physicists came to believe that all the glory in physics goes to those who do such sterile theorizing. Hence grand unified theories that do not actually unify or explain anything.

Monday, July 24, 2017

Microsoft qubits are decades behind

I thought that Microsoft was among those promising quantum computers real soon now, but maybe not. SciAm reports:
In 2005, Microsoft made a big investment in quantum braids when it put mathematician Michael Freedman in charge of its efforts on quantum computing. ... But late last year, Microsoft hired several star experimentalists from academia. One of them was physicist Leo Kouwenhoven of the Delft University of Technology in the Netherlands, who in 2012 was the first to confirm experimentally that particles such as anyons remember how they are swapped. He is now setting up a new Microsoft lab at the Delft campus, which aims to demonstrate that anyons can encode qubits and do simple quantum computations. The approach is at least two decades behind other forms of quantum computing, but Freedman thinks that the robustness of topological qubits will ultimately win the day. “If you’re going to build a new technology, you have to get the foundation right,” he says.
If it is "at least two decades behind", then there is probably a long list of technological problems to be solved.

Google and IBM are still promising something this year, I think. We will see, as I think Google and IBM are decades behind also.

Friday, July 21, 2017

Teleportation of undefined information

Philip Ball writes in Nature mag about Chinese research in quantum teleportation:
If physicists Asher Peres and William Wootters had stuck to calling this quantum process ‘telepheresis’ when they first conceived of it in 1993, I doubt we’d be seeing headlines about it today. It was their co-author Charles Bennett who suggested instead ‘quantum teleportation’.

Whatever it’s called, the process transfers the quantum state of one particle onto another, identical particle, and at the same time erases the state in the original. ...

So what exactly is being transmitted through entanglement alone?

This is a tricky question for quantum information theory in general: it is not obvious what ‘information’ means here. As with other colloquial words adopted by science, it is too easy to imagine we all know what we’re talking about. The 'stuff' transmitted by entanglement is neither information in the sense of Claude Shannon’s information theory (where it is quantified in terms of entropy, increasing as the 'message' gets more random), nor in the sense of an office memo (where information becomes meaningful only in the right context). Then what is it information about, exactly?

That issue, at the heart of quantum information theory, has not been resolved. Is it, for example, information about some underlying reality, or about the effects of our intervention in it? Information universal to all observers, or personal to each? And can it be meaningful to speak of quantum information as something that flows, like liquid in a pipe, from place to place? No one knows (despite what they might tell you). If we can answer these questions, we might be close finally to grasping what quantum mechanics means.
You find some physicists in this field who act as if conservation of quantum information is the most important principle in all of physics. However, as Ball points out, the concept is not even defined.

As far as I know, there are no experiments that have shown it to be ever conserved. There is not really any good theory for believing it to be conserved either, except for those who believe in time reversibility.
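For contrast, classical Shannon information is perfectly well defined: it is the entropy of a message source, measured in bits, exactly as Ball describes. A minimal sketch of the definition:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = sum of -p * log2(p)."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # fair coin: 1.0 bit
print(shannon_entropy([1.0]))        # certain outcome: 0.0 bits
print(shannon_entropy([0.25] * 4))   # more random "message": 2.0 bits
```

Nothing comparably crisp exists for the "stuff" supposedly transmitted by entanglement.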

And "teleportation" is, as Ball says, just a misleading headline-grabbing term for some mundane quantum physics. Physicists pretend that it is something magical, like Star Trek, but it is not.

Scott Aaronson has posted an argument for limits on information density:
Summarizing where we’ve gotten, we could say: any information that’s spatially localized at all, can only be localized so precisely.  In our world, the more densely you try to pack 1’s and 0’s, the more energy you need, therefore the more you warp spacetime, until all you’ve gotten for your trouble is a black hole.  Furthermore, if we rewrote the above conceptual argument in math—keeping track of all the G’s, c’s, h’s, and so on — we could derive a quantitative bound on how much information there can be in a bounded region of space.  And if we were careful enough, that bound would be precisely the holographic entropy bound, which says that the number of (qu)bits is at most A/(4 ln 2), where A is the area of a bounding surface in Planck units. ...

In summary, our laws of physics are structured in such a way that even pure information often has “nowhere to hide”: if the bits are there at all in our description of the world, then they’re forced to pipe up and have a measurable effect.  And this is not a tautology, but comes about only because of nontrivial facts about special and general relativity, quantum mechanics, quantum field theory, and thermodynamics.  And this is what I think people should mean when they say “information is physical.”
This is plausible, but it is a very crude upper bound on classical information. Yes, he says info is physical in the sense that it must take up some space, or the energy needed to store it would be so large as to collapse into a black hole.

But he is not saying that info is conserved, or giving equations of motion for info, or getting mystical about quantum info.
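The bound Aaronson quotes is at least easy to evaluate. A rough sketch, plugging in the approximate CODATA Planck length for a 1-meter sphere (the radius is an illustrative choice, not anything from his argument):

```python
import math

PLANCK_LENGTH = 1.616e-35  # meters, approximate CODATA value

def holographic_bound_bits(radius_m):
    """Holographic entropy bound as quoted: at most A / (4 ln 2) bits,
    where A is the bounding surface area in Planck units."""
    area = 4 * math.pi * radius_m ** 2          # sphere surface area, m^2
    area_planck = area / PLANCK_LENGTH ** 2     # same area in Planck units
    return area_planck / (4 * math.log(2))

# A sphere of radius 1 meter: roughly 1.7e70 bits.
print(f"{holographic_bound_bits(1.0):.2e}")
```

The bound scales with area, not volume, which is what makes it "holographic"; doubling the radius quadruples the limit.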

Update: LuMo agrees with me:
To summarize, I think that it's just wrong to get carried away with vague metaphysical sentences such as "information is physical" and build a whole religion on the worshiping of the alleged depth of such statements. I believe that a person who is learning to think as a physicist must understand rather early on that the actual deep physics is composed of much more well-defined and specific statements than "information is physical". So the people who try to impress the laymen with "information is physical" are ultimately contributing to the laymen's misunderstanding of the difference between science and philosophy, science and religion, physics and an empty humanities talk.

The laymen should be honestly told that legitimate physicists generally consider the talk about propositions such as "information is physical" to be a waste of time and everyone who actually starts to think as a physicist – including you – should consider it a waste of time, too.

Monday, July 17, 2017

Political work by feminist geographers

I mentioned a feminist geography paper,
and now it has gotten play from Rush Limbaugh and Jerry Coyne:
Now I haven’t read the entire paper in detail, as even I have limits on my ability to tolerate this kind of writing, but I at least get what they’re saying. The authors cite data showing that work by women and non-Anglophones is cited less frequently than is work by English speakers and men. I suppose there are several possible reasons for this, including bigotry, but it’s hard to discern what’s at play because one must somehow discern a paper’s importance and visibility (i.e., where it was published) to judge whether it should have been cited, and that’s nearly impossible.
The obvious explanation is that no one cites feminist geography because it is garbage.

A reader points out that Nature published an article on hypothetical (ie, impractical) quantum gravity experiments without realizing that there was already a huge literature on the subject. I guess no one cites that literature because it is so disconnected from reality.

Saturday, July 15, 2017

History of Spacetime

Some editing is being done on the History of spacetime section of the Wikipedia Spacetime article. It is funny how some editors want to make it all about Einstein, when he had very little to do with the historical acceptance of the concept.

The first important thing about spacetime is the transformations mixing space and time. They are called Lorentz transformations because Lorentz figured them out and made them famous before Einstein. Everyone agrees on that. Lorentz even got a Nobel prize in 1902 for related work. See History of Lorentz transformations for details.

Second is combining space and time into a 4-dimensional object and giving it a non-Euclidean geometry. Poincare did this in 1905, defining the geometry by the metric and symmetry group. Minkowski elaborated on this with world lines and diagrams, and popularized it in 1908. As a result, spacetime is often called Minkowski space. In the words of a Harvard historian of science:
In sum Minkowski still hoped for the completion of the Electromagnetic World Picture through relativity theory. Moreover, he saw his own work as completing the program of Lorentz, Einstein, Planck, and Poincare. Of these it was Poincare who most directly influenced the mathematics of Minkowski's space-time. As Minkowski acknowledges many times in "The Principle of Relativity," his concept of space-time owes a great deal to Poincare's work. [Galison, 1979]
Third is defining a relativistic theory on spacetime. This means that the observable physical variables and equations must be covariant under the geometric structure. That is, observing from a rotated or moving frame must induce a symmetry in the laws of physics that is a mathematical consequence of the underlying geometry. Poincare proved this for Maxwell's theory of electromagnetism, and constructed a relativistic theory of gravity, in his 1905 paper. Minkowski appears to be the only one who understood Poincare's paper.
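These three steps can be illustrated numerically: a Lorentz transformation (step 1) preserves the Minkowski interval t² − x² that defines the geometry of step 2, which is exactly the invariance that step 3 demands of physical laws. A minimal sketch, in units where c = 1:

```python
import math

def boost(v, event):
    """Lorentz boost along x with velocity v, in units where c = 1."""
    t, x = event
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return (gamma * (t - v * x), gamma * (x - v * t))

def interval(event):
    """Minkowski interval s^2 = t^2 - x^2, the invariant of the geometry."""
    t, x = event
    return t * t - x * x

e = (3.0, 1.0)
e2 = boost(0.6, e)                # the same event from a frame moving at 0.6c
print(interval(e), interval(e2))  # both 8.0, up to rounding
```

The coordinates change, but the interval does not; that invariant is the geometric structure Poincare and Minkowski identified.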

While Einstein understood Lorentz's work in step 1, and gave his own presentation of it, he completely missed steps 2 and 3. Even after Minkowski spelled them out clearly in a paper that was accepted all over Europe, Einstein said, "Since the mathematicians have invaded the theory of relativity, I do not understand it myself anymore."

While Poincare and Minkowski explicitly advocated geometric interpretations different from Lorentz's, Einstein denied that he had any differences from what Lorentz published.

While Einstein, in collaboration with Grossmann, Hilbert, and others, eventually built on Minkowski's work for general relativity, he denied the geometric interpretations of special and general relativity that are in textbooks today.

In short, Einstein had almost nothing to do with the development or acceptance of spacetime among physicists.

He started to become a cult figure with the general public when the NY Times published this 1919 story:
LIGHTS ALL ASKEW IN THE HEAVENS;
Men of Science More or Less Agog Over Results of Eclipse Observations.
EINSTEIN THEORY TRIUMPHS
Stars Not Where They Seemed or Were Calculated to be, but Nobody Need Worry.
A BOOK FOR 12 WISE MEN
No More in All the World Could Comprehend It, Said Einstein When His Daring Publishers Accepted It.
And today, hardly anyone mentions spacetime without also mentioning Einstein's name.

Thursday, July 13, 2017

New Einstein Philosophy book

Philosopher Thomas Ryckman has a new book on Einstein:
Einstein developed some of the most ground breaking theories in physics. So why have you written a book that examines him as a philosopher?

Einstein’s theoretical accomplishments, especially the two theories of relativity, as well as his occasional philosophical pronouncements, had a tremendous impact in shaping the modern discipline of philosophy of science in the first half of the 20th century. Also, throughout his career as a theoretical physicist, Einstein in fact adhered to a particular style of philosophizing, though not in a sense familiar to academic departments of philosophy. I call this a “philosophy of principles”; his central innovations came by elevating certain physical, formal and even metaphysical principles to the status of postulates, and then exploring the empirical consequences.
This is typical of Einstein idolizers crediting him for relativity being such a crucial and revolutionary breakthru.

Lorentz's analysis started with Maxwell's equations, Michelson-Morley, and a couple of other experiments. From these, he deduced that the speed of light was constant, that the same physics holds in different frames, and the Lorentz transformations. FitzGerald, Larmor, Poincare, and Minkowski used similar reasoning.

What set Einstein apart, in the eyes of Ryckman and other philosophers, was that he elevated the constant speed of light and frame-independence principles (from Lorentz and Poincare) to the status of postulates, instead of treating them as empirical results. As historian Arthur I. Miller argues, Lorentz and Poincare were willing to admit that experiments might prove the theory wrong, and so Einstein should get all the credit.

This is a backwards view of what science is all about. As Lorentz pointed out, Einstein just postulated what had previously been proved.

Just to be clear, I don't want to criticize new mathematical works. Poincare and Minkowski injected new mathematical ideas and interpretations into relativity, and that was great work. Einstein did not find any new mathematics or physics. He is idolized because he took what was called "principles" in the physics literature, and elevated them to "postulates". That's all. To a mathematician or an empirical scientist, elevating a principle to a postulate is no accomplishment at all.
Einstein was famous for his pacifist views yet set them aside to contribute towards the development of the atomic bomb. This was something he later regretted, campaigning for nuclear disarmament alongside Bertrand Russell. What spurred his, albeit temporary, interest in the development of atomic weapons?
The short answer is that Einstein hated the Germans, and wanted to nuke them. He only regretted the bomb because he was a Communist and opposed the Cold War.
What do you see as his most important contribution to the philosophy of science?

In my opinion, Einstein demonstrates that it is possible to be a “realist” about science without adopting the metaphysical presuppositions of what is today called “scientific realism”. In particular, Einstein balanced the aspirational or motivational realist attitude of many working scientists with the clear recognition that realism remains a metaphysical hypothesis, not demonstrable by empirical evidence.
I thought that I was a realist myself, until I read the nonsense that philosophers write on the subject. Einstein's realism is an incoherent mess, and the philosophers are worse.

Wednesday, July 12, 2017

Philosophically excited about quantum mechanics


From the xkcd comic. These comics are sometimes obscure, so there is an explanation page.

For an example of how quantum mechanics gets academics philosophically excited, see this paper:
Assembled Bodies
Reconfiguring Quantum Identities
Whitney Stark

Abstract

In this semimanifesto, I approach how understandings of quantum physics and cyborgian bodies can (or always already do) ally with feminist anti-oppression practices long in use. The idea of the body (whether biological, social, or of work) is not stagnant, and new materialist feminisms help to recognize how multiple phenomena work together to behave in what can become legible at any given moment as a body. By utilizing the materiality of conceptions about connectivity often thought to be merely theoretical, by taking a critical look at the noncentralized and multiple movements of quantum physics, and by dehierarchizing the necessity of linear bodies through time, it becomes possible to reconfigure structures of value, longevity, and subjectivity in ways explicitly aligned with anti-oppression practices and identity politics. Combining intersectionality and quantum physics can provide for differing perspectives on organizing practices long used by marginalized people, for enabling apparatuses that allow for new possibilities of safer spaces, and for practices of accountability.
I cannot even tell if this is a joke or not.

Update: A comment says that it is not a joke, and neither is this:
Academics and scholars must be mindful about using research done by only straight, white men, according to two scientists who argued that it oppresses diverse voices and bolsters the status of already privileged and established white male scholars.

Geographers Carrie Mott and Daniel Cockayne argued in a recent paper that doing so also perpetuates what they call “white heteromasculinism,” which they defined as a “system of oppression” that benefits only those who are “white, male, able-bodied, economically privileged, heterosexual, and cisgendered.” (Cisgendered describes people whose gender identity matches their birth sex.)

Mott, a professor at Rutgers University in New Jersey, and Cockayne, who teaches at the University of Waterloo in Ontario, argued that scholars or researchers disproportionately cite the work of white men, thereby unfairly adding credence to the body of knowledge they offer while ignoring the voices of other groups, like women and black male academics.
Apparently academic geography has been lost to leftism.

Tuesday, July 11, 2017

Tycho was the greatest astronomer



Tycho Brahe was a brilliant astronomer who probably did more to advance accurate observations than anyone.

I got the chart from the recent paper, Astrometric accuracy during the past 2000 years. It has other similar charts.

It shows a lot of later advances in accuracy, but those are mainly technological advances.

At Tycho's time, there had been very little advance in astronomy over the previous millennium. The telescope still had not been invented, but Tycho collected better data than everyone before him put together.

Without Tycho's work, Kepler never could have done what he did, and without Kepler, Newton could not have done what he did. Without Newton, we might still be farmers.

Sunday, July 9, 2017

Sapolsky book opposes free will

Leftist-atheist-evolutionist professor Jerry Coyne posts one of his usual rants against free will, and then has this exchange with a reader:
As I always say, it’s easier to convince a diehard creationist of the truth of evolution than to convince a diehard atheist of the fact that our behaviors are determined, and that we can’t make alternative choices at a given moment.

Yet there are some enlightened folk who not only accept determinism but deny that a version of “free will” can be confected that preserves our notion of that term while accepting determinism. ...

D. Cameron Harbord: I’m always amazed by evolutionary biologists who evidently believe that we evolved all of that expensive decision-making machinery (in the brain) for no apparent purpose. Humans have devoted large amounts of their time and energy to both individual and collective decision making for at least 10's, and probably 100's, of thousands of years. What would be the point of evolving to waste so much time and energy in a deterministic universe? It’s a question that I wish believers in a deterministic universe would provide a satisfactory answer to.

Jerry Coyne: What would be the point? It just happened because some genes that affected rumination and behavior left more copies than others. Not all decision making “machinery” is evolved, of course: some is learned.

With all due respect, I don’t think you have the slightest idea what you’re haranguing about, and you clearly don’t understand evolution.

I have just answered your question in a satisfactory manner.

Harbord: I believe that I do understand evolution, as I believe that you do also. Genes for “rumination”? The point is that the amount of time and energy spent in decision-making “rumination” (and discussion and argument and investigation) has been significant for modern humans, at least since the time we were living in hunter gatherer bands. It is at least a little mysterious why we would evolve to be this way, if you are correct.

But then there is no current finding in physics that establishes the hypothesis of a deterministic universe, so there is no scientific finding that rules out the existence of free will.

Thank you very much for your kind reply.

Coyne: Well, we’ve established that you really don’t understand evolution, as you can’t see any selective advantage to evolving a more complex onboard computer in a social and bipedal animal.

What we’ve also established now is that you don’t understand physics, either. You clearly haven’t read the classical physics that establishes determinism; the laws of physics themselves are evidence for a deterministic universe. That we can land rockets on a comet establishes a deterministic universe, as does the fact that we can predict solar eclipses with great accuracy: to the second.

Do you want to try to misunderstand chemistry as well?
Perhaps Coyne's disbelief in free will explains his rudeness.

First, the physics. Classical physics does not establish determinism. Some of the simplest classical mechanical systems are chaotic, and thus indistinguishable in practice from a nondeterministic system.
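The logistic map is the standard illustration: a one-line deterministic rule whose output is unpredictable in practice, because any uncertainty in the initial condition is amplified exponentially. A minimal sketch:

```python
def logistic(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), a fully deterministic
    rule that is chaotic at r = 4."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Two starting points differing by one part in a trillion
# end up in completely different places after 50 steps:
print(logistic(0.2, 50))
print(logistic(0.2 + 1e-12, 50))
```

Since no measurement pins down an initial condition to infinite precision, the determinism of the rule buys no practical predictability.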

Second, the biology. The idea that we have genes for decision-making rumination, but we are never actually able to make decisions, is a little bizarre.

When animals devote a lot of energy to some activity, then there must be some payoff in terms of more or better offspring, or else it is an evolutionary puzzle that begs for an explanation.

Coyne is excited about the new best-selling book, Behave: The Biology of Humans at Our Best and Worst, by Robert M. Sapolsky. It is apparently a long argument that our brains are fully programmed, with no free will.

From his Stanford publicist:
For me, the single most important question is how to construct a society that is just, safe, peaceful – all those good things – when people finally accept that there is no free will.
He is a leftist Jewish atheist professor, so that is where he is coming from. Maclean's:
I used to be polite and say stuff like I certainly can’t prove there isn’t free will. But no, there’s none. There simply is nothing compatible with a 21st century understanding of how the physical laws of the universe work to have room for some sort of volitional little homunculus crawling around in our heads that takes advice from the biological inputs but at the end of the day goes and makes this independent decision on its own. It’s just not compatible with anything we understand about how biology works. All that free will is, is the biology we don’t understand yet.
My biggest quarrel with these leftist biologists is when they try to tell us about "21st century understanding of how the physical laws of the universe work". There is no such understanding that is contrary to free will.

Vice.com:
There is no concept more American than "free will" — the idea that we're all gifted (probably by God) with the power to choose a path of success or destruction and bear responsibility for the resulting consequences. It's the whole reason we "punish" people for committing crimes. The idea is so ubiquitous that most people have never even pondered an alternative.

Neurobiologist Robert Sapolsky sees things differently. He's opposed to the concept of "free will." Instead, he believes that our behavior is made up of a complex and chaotic soup of so many factors that it's downright silly to think there's a singular, autonomous "you" calling the shots. He breaks all of this down in his new book, Behave: The Biology of Humans at Our Best and Worst. The tome is a buffet of neurology, philosophy, politics, evolutionary science, anthropology, history, and genetics. At times, it's exhaustive in the number of variables considered when looking at human behavior, but that's Sapolsky's whole point: The decisions we make are a result of "prenatal environment, genes, and hormones, whether [our] parents were authoritative or egalitarian, whether [we] witnessed violence in childhood, when [we] had breakfast..."
Free will is American? They might as well say it is a white Christian capitalist right-wing concept.

It is funny how non-Christian leftist academics are so opposed to both genetic determinism and to free will.

Update: Sapolsky also says that Religion is a mental illness. He also says Jesus had some mental disorders. I think that doubting free will is a mental illness.

Friday, July 7, 2017

LIGO data whitening is exposed

Elliot McGucken alerts me to LIGO controversies.

Peter Woit debunks claims that LIGO confirms the extra dimensions of string theory. I have previously complained that LIGO uses fake data injections, with only three people knowing whether a black hole collision has been faked.

Quanta mag reports:
[The Laser Interferometer Gravitational-]Wave Observatory (LIGO) announced that they had successfully detected gravitational waves, subtle ripples in the fabric of space-time that had been stirred up by the collision of two black holes. The team held a press conference in Washington to announce the landmark findings.

They also released their data.

Now a team of independent physicists has sifted through this data, only to find what they describe as strange correlations that shouldn’t be there. The team, led by Andrew Jackson, a physicist at the Niels Bohr Institute in Copenhagen, claims that the troublesome signal could be significant enough to call the entire discovery into question. The potential effects of the unexplained correlations “could range from a minor modification of the extracted wave form to a total rejection of LIGO’s claimed [gravitational wave] discovery,” wrote Jackson in an email to Quanta. LIGO representatives say there may well be some unexplained correlations, but that they should not affect the team’s conclusions. ...

For now, confidence is high in LIGO’s conclusions. “The only persons qualified to analyze this paper are in the LIGO Scientific Collaboration,” said Robert Wagoner, a theoretical physicist at Stanford University who is not affiliated with LIGO. “They are the only ones who have had access to the raw data.” Steinn Sigurðsson, an astrophysicist at Pennsylvania State University who is also not affiliated with either team, agrees. “For now, I’d definitely go with the LIGO people,” he said. “It is very rare for outsiders to find major errors in a large collaboration.”

Nevertheless, “it’s going to take longer than people would like” to get these issues resolved, said Sigurðsson. “It’s going to take months.”
This is funny. Did LIGO release its data or not? If LIGO released its raw data, then others have access to it.

A lot of researchers do not release their raw data, and thus avoid the detailed scrutiny of others. Their attitude is often that competing researchers should go do their own experiments, and collect their own data.

The London Daily Mail explains:
To ensure the results are accurate, LIGO uses two observatories, 3,000 kilometers apart, which operate synchronously, each double-checking the other's observations.

The noise at each detector should be completely uncorrelated, meaning a noise like a storm near one detector doesn't show up as noise in the other.

Some of the sources of 'noise' the team say they contend with include: 'a constant 'hiss' from photons arriving like raindrops at our light detectors; rumbles from seismic noise like earthquakes and the oceans pounding on the Earth's crust; strong winds shaking the buildings enough to affect our detectors.'

However, if a gravitational wave is found, it should create a similar signal in both instruments nearly simultaneously.

The main claim of Jackson's team is that there appears to be correlated noise in the detectors at the time of the gravitational-wave signal.
The main idea behind LIGO is to have two detectors, about 1,900 miles (3,000 km) apart, and look for correlations in the data. Each detector by itself just looks like noise. Any big correlation is assumed to be a black hole collision in a distant galaxy.

Finding correlations should be child's play, but it takes the LIGO team many months to announce that they found a correlation.
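The principle is easy to demonstrate with synthetic data: independent noise in the two detectors averages away in a cross-correlation, while a shared signal survives. A toy sketch (the burst shape, noise level, and seed are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n) / n
# A common windowed burst seen by both detectors, buried in noise.
signal = np.sin(2 * np.pi * 30 * t) * np.hanning(n)
d1 = signal + rng.normal(0.0, 1.0, n)  # detector 1: burst plus its own noise
d2 = signal + rng.normal(0.0, 1.0, n)  # detector 2: same burst, independent noise

# Independent noise averages away in the cross-correlation;
# the shared signal does not.
corr_shared = np.dot(d1, d2) / n
corr_noise = np.dot(rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)) / n
print(round(corr_shared, 3), round(corr_noise, 3))
```

The shared-signal correlation comes out an order of magnitude above the pure-noise one, which is why a genuine coincidence between the two sites is detectable at all.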

Here is the LIGO response:
LIGO analyses use whitened data when searching for compact binary mergers such as GW150914. When repeating the analysis of Creswell et al. on whitened data these effects are completely absent. Our 5-sigma significance comes from a procedure of repeatedly time-shifting the data, which is not invalidated if correlations of the type described in Creswell et al. are present.
In other words, the LIGO team massages the data to get rid of the small correlations found by the critics.

This is lame. It appears that the LIGO team is whitewashing some data in order to make their black hole collision model appear more accurate. They do not want to admit that there are some unexplained correlations.

LIGO Skeptic blog is also all over this.

Wednesday, July 5, 2017

Google predicts quantum supremacy in 2017

SciAm reports:
Scientists have long dreamed of developing quantum computers, machines that rely on arcane laws of physics to perform tasks far beyond the capability of today’s strongest supercomputers. ...

But to realize those visions, scientists first have to figure out how to actually build a quantum computer that can perform more than the simplest operations. They are now getting closer than ever, with IBM in May announcing its most complex quantum system so far and Google saying it is on track this year to unveil a processor with so-called “quantum supremacy” — capabilities no conventional computer can match. ...

Alan Ho, an engineer in Google’s Quantum Artificial Intelligence Lab, told the conference his company expects to achieve quantum supremacy with a 49-qubit chip by the end of this year.
This is not going to happen. Check back on this blog at the end of the year to see if I am wrong.
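For context on why 49 qubits is the magic number: brute-force classical simulation of an n-qubit processor means storing 2^n complex amplitudes, and around 49 qubits that exceeds the memory of any existing supercomputer. A quick back-of-the-envelope calculation:

```python
# Memory to hold the full state vector of an n-qubit register:
# 2**n complex amplitudes at 16 bytes each (double-precision complex).
def statevector_bytes(n_qubits: int) -> int:
    return 16 * 2 ** n_qubits

for n in (30, 40, 49):
    gib = statevector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
```

At 30 qubits the state fits in a laptop-class 16 GiB; at 49 qubits it is about 8 million GiB (8 PiB), which is why that size was pitched as the supremacy threshold.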

There have been 25 years of research on quantum computers, and no one has achieved quantum supremacy. Now we have Google, IBM, and others making specific predictions. We will finally see whether they are right or wrong. Back to the SciAm article:
Some of those factors can work against one another; adding more qubits, for instance, can increase the rate of errors as information passes down the line from one qubit to another. ...

And increasing the number of qubits, no matter what technology they are used with, makes it harder to connect and manipulate them—because that must be done while keeping them isolated from the rest of the world so they will maintain their quantum states. The more atoms or electrons are grouped together in large numbers, the more the rules of classical physics take over—and the less significant the quantum properties of the individual atoms become to how the whole system behaves. “When you make a quantum system big, it becomes less quantum,” Monroe says.
Yes, there could be technological roadblocks to doing what they want. These problems have not been solved, and may never be solved. The article continues:
Chow thinks quantum computers will become powerful enough to do at least something beyond the capability of classical computers — possibly a simulation in quantum chemistry — within about five years. Monroe says it is reasonable to expect systems containing a few thousand qubits in a decade or so. To some extent, Monroe says, researchers will not know what they will be able to do with such systems until they figure out how to build them.

Preskill, who is 64, says he thinks he will live long enough to see quantum computers have an impact on society in the way the internet and smartphones have — although he cannot predict exactly what that impact will be.
This is more delusional thinking. A lot of smart people have done a lot of good theoretical work on which quantum computer algorithms would be feasible, given a decent number of qubits. The main application will be to destroy the security of internet communications. This impact will be overwhelmingly negative, if quantum computers are feasible.
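The threat to internet security comes from Shor's algorithm, which factors the large numbers behind RSA. The quantum speedup is entirely in the period-finding step; the rest of the reduction is classical number theory, and can be sketched (slowly) without any quantum hardware:

```python
from math import gcd

def factor_via_period(N: int, a: int):
    """Try to split N using the period r of a^x mod N (Shor's reduction)."""
    g = gcd(a, N)
    if g != 1:
        return g  # lucky: a already shares a factor with N
    # Brute-force the multiplicative order of a mod N; this is the step
    # a quantum computer would do exponentially faster.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2 == 1:
        return None  # odd period: try another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None  # trivial square root: try another a
    f = gcd(y - 1, N)
    return f if 1 < f < N else None

print(factor_via_period(15, 7))   # finds the factor 3
print(factor_via_period(21, 2))   # finds the factor 7
```

The classical order-finding loop takes exponential time in the number of digits of N, which is the only thing protecting RSA keys today.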

There are some other possibilities, like simulating chemical reactions. These might have some research interest. It is extremely doubtful that these would have commercial utility, and even more doubtful that there would be any consumer applications.

Monday, July 3, 2017

Ambiguous causality in quantum mechanics

I have posted about how important causality is to physics, and now I see an article claiming a quantum counterexample.

Philip Ball writes in Nature mag:
Causation has been a key issue in quantum mechanics since the mid-1930s, when Einstein challenged the apparent randomness that Niels Bohr and Werner Heisenberg had installed at the heart of the theory. Bohr and Heisenberg's Copenhagen interpretation insisted that the outcome of a quantum measurement — such as checking the orientation of a photon's plane of polarization — is determined at random, and only in the instant that the measurement is made. No reason can be adduced to explain that particular outcome. But in 1935, Einstein and his young colleagues Boris Podolsky and Nathan Rosen (now collectively denoted EPR) described a thought experiment that pushed Bohr's interpretation to a seemingly impossible conclusion. ...

Brukner's group in Vienna, Chiribella's team and others have been pioneering efforts to explore this ambiguous causality in quantum mechanics3, 4. They have devised ways to create related events A and B such that no one can say whether A preceded and led to (in a sense 'caused') B, or vice versa. ...

The trick they use involves creating a special type of quantum 'superposition'. ... The two observable states can be used as the binary states (1 and 0) of quantum bits, or qubits, which are the basic elements of quantum computers.

The researchers extend this concept by creating a causal superposition. In this case, the two states represent sequences of events: a particle goes first through gate A and then through gate B (so that A's output state determines B's input), or vice versa.
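The "causal superposition" in the quoted passage can be sketched as a toy "quantum switch": a control qubit in superposition decides whether gate A or gate B acts first. The gates below are arbitrary choices for illustration, not the ones used in the actual experiments:

```python
import numpy as np

# Two single-qubit gates that do not commute (arbitrary illustrative choices).
A = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
B = np.array([[1, 0], [0, 1j]])               # phase gate
psi = np.array([1, 0], dtype=complex)         # target qubit starts in |0>

# Control qubit in an equal superposition of |0> and |1>.
c0 = c1 = 1 / np.sqrt(2)

# Quantum switch: control |0> applies A then B; control |1> applies B then A.
# The joint control-target state is a superposition of the two gate ORDERINGS.
out = np.concatenate([c0 * (B @ A @ psi), c1 * (A @ B @ psi)])

# Because A and B do not commute, the two branches differ, so the control
# qubit ends up carrying "which order happened" information.
print(np.allclose(B @ A @ psi, A @ B @ psi))  # False: the order matters
print(round(float(np.linalg.norm(out)), 6))   # 1.0: still a valid state
```

Neither ordering is "the" one that occurred until the control qubit is measured, which is the sense in which the causal order is in superposition.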
This research is somewhat interesting, but it is not what it appears.

They find an ambiguity in the path of the photon, but there are always such ambiguities in quantum mechanics. In a simple double-slit experiment, where a light source sends photons thru a slit A and slit B to a screen, the detector on the screen cannot tell you whether the photon went thru slit A or B. The preferred interpretation is that the light is some sort of quantum wave that goes thru both slits. The light is not really photons until it hits the detectors.
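The standard two-slit arithmetic makes the point: the intensity at the screen comes from adding the complex amplitudes for "thru A" and "thru B", not their probabilities, which is why "which slit?" has no answer in the formalism. A minimal sketch:

```python
import cmath

def intensity(phase_diff: float) -> float:
    """Screen intensity from two equal-amplitude paths with a phase difference."""
    amp = (1 + cmath.exp(1j * phase_diff)) / 2  # add amplitudes, not probabilities
    return abs(amp) ** 2

print(intensity(0.0))        # paths in phase: bright fringe (1.0)
print(intensity(cmath.pi))   # paths out of phase: dark fringe (~0.0)
```

If the photon "really" went thru one slit or the other, the intensities would add to a constant 0.5 everywhere and the fringes would disappear.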

This experiment does not really violate causality as the term is usually understood. It is just another of many experiments that are hard to interpret if you think of light as Newtonian particles. Such experiments convinced physicists that light is a wave about 200 years ago. A century ago light was found to be a funny quantized wave, but not a particle in the way you normally think of particles.

I don't agree with calling light a particle, but I also don't agree with saying that it is random up until the instant of a measurement. We don't really know how to make sense out of such statements. Quantum mechanics is a theory about making predictions about observations, and I think Bohr and Heisenberg would say that it doesn't make any sense to talk about the path of the photon (such as going thru slit A or B, or going from A to B or B to A) unless you are actually measuring it.