Friday, July 21, 2017

Teleportation of undefined information

Philip Ball writes in Nature mag about Chinese research in quantum teleportation:
If physicists Asher Peres and William Wootters had stuck to calling this quantum process ‘telepheresis’ when they first conceived of it in 1993, I doubt we’d be seeing headlines about it today. It was their co-author Charles Bennett who suggested instead ‘quantum teleportation’.

Whatever it’s called, the process transfers the quantum state of one particle onto another, identical particle, and at the same time erases the state in the original. ...

So what exactly is being transmitted through entanglement alone?

This is a tricky question for quantum information theory in general: it is not obvious what ‘information’ means here. As with other colloquial words adopted by science, it is too easy to imagine we all know what we’re talking about. The 'stuff' transmitted by entanglement is neither information in the sense of Claude Shannon’s information theory (where it is quantified in terms of entropy, increasing as the 'message' gets more random), nor in the sense of an office memo (where information becomes meaningful only in the right context). Then what is it information about, exactly?

That issue, at the heart of quantum information theory, has not been resolved. Is it, for example, information about some underlying reality, or about the effects of our intervention in it? Information universal to all observers, or personal to each? And can it be meaningful to speak of quantum information as something that flows, like liquid in a pipe, from place to place? No one knows (despite what they might tell you). If we can answer these questions, we might be close finally to grasping what quantum mechanics means.
You find some physicists in this field who act as if conservation of quantum information is the most important principle in all of physics. However, as Ball points out, the concept is not even defined.

As far as I know, there are no experiments that have shown it to be ever conserved. There is not really any good theoretical reason for believing it to be conserved either, except for those who believe in time reversibility.
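
Ball's contrast with Shannon's information is worth making concrete, since that is the one sense of "information" that is well defined. A quick sketch (the example distributions are my own):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Entropy increases as the 'message' gets more random, exactly as Ball says:
print(shannon_entropy([1.0]))        # 0.0 bits: a source with no surprise
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits: a biased coin
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin, maximally random
```

Whatever is "transmitted through entanglement alone" is not information in this quantifiable sense.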

And "teleportation" is, as Ball says, just a misleading headline-grabbing term for some mundane quantum physics. Physicists pretend that it is something magical, like Star Trek, but it is not.

Scott Aaronson has posted an argument for limits on information density:
Summarizing where we’ve gotten, we could say: any information that’s spatially localized at all, can only be localized so precisely.  In our world, the more densely you try to pack 1’s and 0’s, the more energy you need, therefore the more you warp spacetime, until all you’ve gotten for your trouble is a black hole.  Furthermore, if we rewrote the above conceptual argument in math—keeping track of all the G’s, c’s, h’s, and so on — we could derive a quantitative bound on how much information there can be in a bounded region of space.  And if we were careful enough, that bound would be precisely the holographic entropy bound, which says that the number of (qu)bits is at most A/(4 ln 2), where A is the area of a bounding surface in Planck units. ...

In summary, our laws of physics are structured in such a way that even pure information often has “nowhere to hide”: if the bits are there at all in our description of the world, then they’re forced to pipe up and have a measurable effect.  And this is not a tautology, but comes about only because of nontrivial facts about special and general relativity, quantum mechanics, quantum field theory, and thermodynamics.  And this is what I think people should mean when they say “information is physical.”
This is plausible, but it is a very crude upper bound on classical information. Yes, he says info is physical in the sense that it must take up some space, or the energy needed to store it would be so large as to collapse into a black hole.
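
To see the scale of Aaronson's bound, here is a back-of-the-envelope calculation (my own numbers; I am just plugging SI constants into N ≤ A/(4 ln 2) with A in Planck units):

```python
import math

# SI constants
hbar = 1.054571817e-34   # J*s
G = 6.67430e-11          # m^3 / (kg s^2)
c = 2.99792458e8         # m/s

l_p2 = hbar * G / c**3   # Planck length squared, ~2.6e-70 m^2

def holographic_bit_bound(radius_m):
    """Holographic bound on bits in a sphere: N <= A / (4 * l_p^2 * ln 2)."""
    area = 4 * math.pi * radius_m**2
    return area / (4 * l_p2 * math.log(2))

# Even a 1-meter sphere could hold ~1.7e70 bits before collapsing into a
# black hole, which shows how crude an upper bound this is in practice.
print(f"{holographic_bit_bound(1.0):.2e}")
```

Note that the bound scales with the bounding area, not the volume, which is the surprising "holographic" part.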

But he is not saying that info is conserved, or giving equations of motion for info, or getting mystical about quantum info.

Monday, July 17, 2017

Political work by feminist geographers

I mentioned a feminist geography paper,
and now it has gotten play from Rush Limbaugh and Jerry Coyne:
Now I haven’t read the entire paper in detail, as even I have limits on my ability to tolerate this kind of writing, but I at least get what they’re saying. The authors cite data showing that work by women and non-Anglophones is cited less frequently than is work by English speakers and men. I suppose there are several possible reasons for this, including bigotry, but it’s hard to discern what’s at play because one must somehow discern a paper’s importance and visibility (i.e., where it was published) to judge whether it should have been cited, and that’s nearly impossible.
The obvious explanation is that no one cites feminist geography because it is garbage.

A reader points out that Nature published an article on hypothetical (ie, impractical) quantum gravity experiments without realizing that there was already a huge literature on the subject. I guess no one cites that literature because it is so disconnected from reality.

Saturday, July 15, 2017

History of Spacetime

Some editing is being done on the History of spacetime section of the Spacetime Wikipedia article. It is funny how some editors want to make it all about Einstein, when he had very little to do with the historical acceptance of the concept.

The first important thing about spacetime is the transformations mixing space and time. They are called Lorentz transformations because Lorentz figured them out and made them famous before Einstein. Everyone agrees on that. Lorentz even got a Nobel prize in 1902 for related work. See History of Lorentz transformations for details.

Second is combining space and time into a 4-dimensional object and giving it a non-Euclidean geometry. Poincare did this in 1905, defining the geometry by the metric and symmetry group. Minkowski elaborated on this with world lines and diagrams, and popularized it in 1908. As a result, spacetime is often called Minkowski space. In the words of a Harvard historian of science:
In sum Minkowski still hoped for the completion of the Electromagnetic World Picture through relativity theory. Moreover, he saw his own work as completing the program of Lorentz, Einstein, Planck, and Poincare. Of these it was Poincare who most directly influenced the mathematics of Minkowski's space-time. As Minkowski acknowledges many times in "The Principle of Relativity," his concept of space-time owes a great deal to Poincare's work.35 [Galison,1979]
Third is defining a relativistic theory on spacetime. This means that the observable physical variables and equations must be covariant under the geometric structure. That is, observing from a rotated or moving frame must induce a symmetry in the laws of physics that is a mathematical consequence of the underlying geometry. Poincare proved this for Maxwell's theory of electromagnetism, and constructed a relativistic theory of gravity, in his 1905 paper. Minkowski appears to be the only one who understood Poincare's paper.
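
The first two steps are easy to check numerically. A minimal sketch (units with c = 1, and my own toy numbers): a Lorentz boost mixes t and x, but leaves the Minkowski interval, the quantity defining Poincare's geometry, unchanged.

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with velocity v, in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval(t, x):
    """The Minkowski quadratic form s^2 = t^2 - x^2."""
    return t * t - x * x

t, x = 3.0, 1.0
t2, x2 = boost(t, x, 0.6)   # coordinates for an observer moving at 0.6c
print(t2, x2)                            # space and time are mixed together
print(interval(t, x), interval(t2, x2))  # but the interval is the same: 8.0
```

A relativistic theory, in the sense of step 3, is one whose equations respect this invariance automatically.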

While Einstein understood Lorentz's work in step 1, and gave his own presentation of it, he completely missed steps 2 and 3. Even after Minkowski spelled them out clearly in a paper that was accepted all over Europe, Einstein said, "Since the mathematicians have invaded the theory of relativity, I do not understand it myself anymore."

While Poincare and Minkowski explicitly advocated geometric interpretations different from Lorentz's, Einstein denied that he had any differences from what Lorentz published.

While Einstein, in collaboration with Grossmann, Hilbert, and others, eventually built on Minkowski's work for general relativity, he denied the geometric interpretations of special and general relativity that are in textbooks today.

In short, Einstein had almost nothing to do with the development or acceptance of spacetime among physicists.

He started to become a cult figure with the general public when the NY Times published this 1919 story:
LIGHTS ALL ASKEW IN THE HEAVENS;
Men of Science More or Less Agog Over Results of Eclipse Observations.
EINSTEIN THEORY TRIUMPHS
Stars Not Where They Seemed or Were Calculated to be, but Nobody Need Worry.
A BOOK FOR 12 WISE MEN
No More in All the World Could Comprehend It, Said Einstein When His Daring Publishers Accepted It.
And today, hardly anyone mentions spacetime without also mentioning Einstein's name.

Thursday, July 13, 2017

New Einstein Philosophy book

Philosopher Thomas Ryckman has a new book on Einstein:
Einstein developed some of the most ground breaking theories in physics. So why have you written a book that examines him as a philosopher?

Einstein’s theoretical accomplishments, especially the two theories of relativity, as well as his occasional philosophical pronouncements, had a tremendous impact in shaping the modern discipline of philosophy of science in the first half of the 20th century. Also, throughout his career as a theoretical physicist, Einstein in fact adhered to a particular style of philosophizing, though not in a sense familiar to academic departments of philosophy. I call this a “philosophy of principles”; his central innovations came by elevating certain physical, formal and even metaphysical principles to the status of postulates, and then exploring the empirical consequences.
This is typical of Einstein idolizers crediting him for relativity being such a crucial and revolutionary breakthru.

Lorentz's analysis started with Maxwell's equations, Michelson-Morley, and a couple of other experiments. From these, he deduced that the speed of light was constant, that the same physics holds in different frames, and the Lorentz transformations. FitzGerald, Larmor, Poincare, and Minkowski used similar reasoning.

What set Einstein apart, in the eyes of Ryckman and other philosophers, was that he elevated the constant speed of light and frame-independence principles (from Lorentz and Poincare) to the status of postulates, instead of empirical science. As historian Arthur I. Miller argues, Lorentz and Poincare were willing to admit that experiments might prove the theory wrong, and so Einstein should get all the credit.

This is a backwards view of what science is all about. As Lorentz pointed out, Einstein just postulated what had previously been proved.

Just to be clear, I don't want to criticize new mathematical works. Poincare and Minkowski injected new mathematical ideas and interpretations into relativity, and that was great work. Einstein did not find any new mathematics or physics. He is idolized because he took what was called "principles" in the physics literature, and elevated them to "postulates". That's all. To a mathematician or an empirical scientist, elevating a principle to a postulate is no accomplishment at all.
Einstein was famous for his pacifist views yet set them aside to contribute towards the development of the atomic bomb. This was something he later regretted, campaigning for nuclear disarmament alongside Bertrand Russell. What spurred his, albeit temporary, interest in the development of atomic weapons?
The short answer is that Einstein hated the Germans, and wanted to nuke them. He only regretted the bomb because he was a Communist and opposed the Cold War.
What do you see as his most important contribution to the philosophy of science?

In my opinion, Einstein demonstrates that it is possible to be a “realist” about science without adopting the metaphysical presuppositions of what is today called “scientific realism”. In particular, Einstein balanced the aspirational or motivational realist attitude of many working scientists with the clear recognition that realism remains a metaphysical hypothesis, not demonstrable by empirical evidence.
I thought that I was a realist myself, until I read the nonsense that philosophers write on the subject. Einstein's realism is an incoherent mess, and the philosophers are worse.

Wednesday, July 12, 2017

Philosophically excited about quantum mechanics


From an xkcd comic. These comics are sometimes obscure, so there is an explanation page.

For an example of how quantum mechanics gets academics philosophically excited, see this paper:
Assembled Bodies
Reconfiguring Quantum Identities
Whitney Stark

Abstract

In this semimanifesto, I approach how understandings of quantum physics and cyborgian bodies can (or always already do) ally with feminist anti-oppression practices long in use. The idea of the body (whether biological, social, or of work) is not stagnant, and new materialist feminisms help to recognize how multiple phenomena work together to behave in what can become legible at any given moment as a body. By utilizing the materiality of conceptions about connectivity often thought to be merely theoretical, by taking a critical look at the noncentralized and multiple movements of quantum physics, and by dehierarchizing the necessity of linear bodies through time, it becomes possible to reconfigure structures of value, longevity, and subjectivity in ways explicitly aligned with anti-oppression practices and identity politics. Combining intersectionality and quantum physics can provide for differing perspectives on organizing practices long used by marginalized people, for enabling apparatuses that allow for new possibilities of safer spaces, and for practices of accountability.
I cannot even tell if this is a joke or not.

Update: A comment says that it is not a joke, and neither is this:
Academics and scholars must be mindful about using research done by only straight, white men, according to two scientists who argued that it oppresses diverse voices and bolsters the status of already privileged and established white male scholars.

Geographers Carrie Mott and Daniel Cockayne argued in a recent paper that doing so also perpetuates what they call “white heteromasculinism,” which they defined as a “system of oppression” that benefits only those who are “white, male, able-bodied, economically privileged, heterosexual, and cisgendered.” (Cisgendered describes people whose gender identity matches their birth sex.)

Mott, a professor at Rutgers University in New Jersey, and Cockayne, who teaches at the University of Waterloo in Ontario, argued that scholars or researchers disproportionately cite the work of white men, thereby unfairly adding credence to the body of knowledge they offer while ignoring the voices of other groups, like women and black male academics.
Apparently academic geography has been lost to leftism.

Tuesday, July 11, 2017

Tycho was the greatest astronomer



Tycho Brahe was a brilliant astronomer who probably did more to advance accurate observations than anyone.

I got the chart from the recent paper, Astrometric accuracy during the past 2000 years. It has other similar charts.

It shows a lot of later advances in accuracy, but those are mainly technological advances.

At Tycho's time, there had been very little advance in astronomy over the previous millennium. The telescope still had not been invented, but Tycho collected better data than everyone before him put together.

Without Tycho's work, Kepler never could have done what he did, and without Kepler, Newton could not have done what he did. Without Newton, we might still be farmers.

Sunday, July 9, 2017

Sapolsky book opposes free will

Leftist-atheist-evolutionist professor Jerry Coyne posts one of his usual rants against free will, and then has this exchange with a reader:
As I always say, it’s easier to convince a diehard creationist of the truth of evolution than to convince a diehard atheist of the fact that our behaviors are determined, and that we can’t make alternative choices at a given moment.

Yet there are some enlightened folk who not only accept determinism but deny that a version of “free will” can be confected that preserves our notion of that term while accepting determinism. ...

D. Cameron Harbord: I’m always amazed by evolutionary biologists who evidently believe that we evolved all of that expensive decision-making machinery (in the brain) for no apparent purpose. Humans have devoted large amounts of their time and energy to both individual and collective decision making for at least 10’s, and probably 100’s, of thousands of years. What would be the point of evolving to waste so much time and energy in a deterministic universe? It’s a question that I wish believers in a deterministic universe would provide a satisfactory answer to.

Jerry Coyne: What would be the point? It just happened because some genes that affected rumination and behavior left more copies than others. Not all decision making “machinery” is evolved, of course: some is learned.

With all due respect, I don’t think you have the slightest idea what you’re haranguing about, and you clearly don’t understand evolution.

I have just answered your question in a satisfactory manner.

Harbord: I believe I that I do understand evolution, as I believe that you do also. Genes for “rumination”? The point is that the amount of time and energy spent in decision-making “rumination” (and discussion and argument and investigation) has been significant for modern humans, at least since the time we were living in hunter gatherer bands. It is at least a little mysterious why would evolve to be this way, if you are correct.

But then there is no current finding in physics that establishes the hypothesis of a deterministic universe, so there is no scientific finding that rules out the existence of free will.

Thank you very much for your kind reply.

Coyne: Well. we’ve established that you really don’t understand evolution, as you can’t see any selective advantage to evolving a more complex onboard computer in a social and bipedal animal.

What we’ve also established now is that you don’t understand physics, either. You clearly haven’t read the classical physics that establishes determinism; the laws of physics themselves are evidence for a deterministic universe. That we can land rockets on a comet establishes a deterministic universe, as does the fact that we can predict solar eclipses with great accuracy: to the second.

Do you want to try to misunderstand chemistry as well?
Perhaps Coyne's disbelief in free will explains his rudeness.

First, the physics. Classical physics does not establish determinism. Some of the simplest classical mechanical systems are chaotic, and thus indistinguishable from a nondeterministic system.
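
A one-line illustration of the point (the logistic map, with my own initial conditions, stands in for any chaotic classical system): two trajectories obeying the same deterministic rule, differing by one part in ten billion at the start, soon disagree completely.

```python
# Deterministic but chaotic: the logistic map x -> r*x*(1-x) at r = 4.
r = 4.0
x, y = 0.4, 0.4 + 1e-10   # identical rules, nearly identical starting points
max_sep = 0.0
for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_sep = max(max_sep, abs(x - y))
print(max_sep)   # the 1e-10 difference has grown to order one
```

Since no measurement fixes an initial condition to infinite precision, such a system's future is unpredictable in practice, so determinism here is not an observable fact.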

Second, the biology. The idea that we have genes for decision-making rumination, but we are never actually able to make decisions, is a little bizarre.

When animals devote a lot of energy to some activity, then there must be some payoff in terms of more or better offspring, or else it is an evolutionary puzzle that begs for an explanation.

Coyne is excited about the new best-selling book, Behave: The Biology of Humans at Our Best and Worst, by Robert M. Sapolsky. It is apparently a long argument that our brains are fully programmed, with no free will.

From his Stanford publicist:
For me, the single most important question is how to construct a society that is just, safe, peaceful – all those good things – when people finally accept that there is no free will.
He is a leftist Jewish atheist professor, so that is where he is coming from. Macleans:
I used to be polite and say stuff like I certainly can’t prove there isn’t free will. But no, there’s none. There simply is nothing compatible with a 21st century understanding of how the physical laws of the universe work to have room for some sort of volitional little homunculus crawling around in our heads that takes advice from the biological inputs but at the end of the day goes and makes this independent decision on its own. It’s just not compatible with anything we understand about how biology works. All that free will is, is the biology we don’t understand yet.
My biggest quarrel with these leftist biologists is when they try to tell us about "21st century understanding of how the physical laws of the universe work". There is no such understanding that is contrary to free will.

Vice.com:
There is no concept more American than "free will" — the idea that we're all gifted (probably by God) with the power to choose a path of success or destruction and bear responsibility for the resulting consequences. It's the whole reason we "punish" people for committing crimes. The idea is so ubiquitous that most people have never even pondered an alternative.

Neurobiologist Robert Sapolsky sees things differently. He's opposed to the concept of "free will." Instead, he believes that our behavior is made up of a complex and chaotic soup of so many factors that it's downright silly to think there's a singular, autonomous "you" calling the shots. He breaks all of this down in his new book, Behave: The Biology of Humans at Our Best and Worst. The tome is a buffet of neurology, philosophy, politics, evolutionary science, anthropology, history, and genetics. At times, its exhaustive in the number of variables considered when looking at human behavior, but that's Sapolsky's whole point: The decisions we make are a result of "prenatal environment, genes, and hormones, whether [our] parents were authoritative or egalitarian, whether [we] witnessed violence in childhood, when [we] had breakfast..."
Free will is American? They might as well say it is a white Christian capitalist right-wing concept.

It is funny how non-Christian leftist academics are so opposed to both genetic determinism and to free will.

Update: Sapolsky also says that Religion is a mental illness. He also says Jesus had some mental disorders. I think that doubting free will is a mental illness.

Friday, July 7, 2017

LIGO data whitening is exposed

Elliot McGucken alerts me to LIGO controversies.

Peter Woit debunks claims that LIGO confirms the extra dimensions of string theory. I have previously complained that LIGO uses fake data injections, with only 3 ppl knowing whether a black hole collision has been faked.

Quanta mag reports:
The Laser Interferometer Gravitational-Wave Observatory (LIGO) announced that they had successfully detected gravitational waves, subtle ripples in the fabric of space-time that had been stirred up by the collision of two black holes. The team held a press conference in Washington to announce the landmark findings.

They also released their data.

Now a team of independent physicists has sifted through this data, only to find what they describe as strange correlations that shouldn’t be there. The team, led by Andrew Jackson, a physicist at the Niels Bohr Institute in Copenhagen, claims that the troublesome signal could be significant enough to call the entire discovery into question. The potential effects of the unexplained correlations “could range from a minor modification of the extracted wave form to a total rejection of LIGO’s claimed [gravitational wave] discovery,” wrote Jackson in an email to Quanta. LIGO representatives say there may well be some unexplained correlations, but that they should not affect the team’s conclusions. ...

For now, confidence is high in LIGO’s conclusions. “The only persons qualified to analyze this paper are in the LIGO Scientific Collaboration,” said Robert Wagoner, a theoretical physicist at Stanford University who is not affiliated with LIGO. “They are the only ones who have had access to the raw data.” Steinn Sigurðsson, an astrophysicist at Pennsylvania State University who is also not affiliated with either team, agrees. “For now, I’d definitely go with the LIGO people,” he said. “It is very rare for outsiders to find major errors in a large collaboration.”

Nevertheless, “it’s going to take longer than people would like” to get these issues resolved, said Sigurðsson. “It’s going to take months.”
This is funny. Did LIGO release its data or not? If LIGO released its raw data, then others have access to it.

A lot of researchers do not release their raw data, and thus avoid the detailed scrutiny of others. Their attitude is often that competing researchers should go do their own experiments, and collect their own data.

The London Daily Mail explains:
To ensure the results are accurate, LIGO uses two observatories, 3,000 kilometers apart, which operate synchronously, each double-checking the other's observations.

The noise at each detector should be completely uncorrelated, meaning noise like a storm near one detector doesn't show up as noise in the other.

Some of the sources of 'noise' the team say they contend with include: 'a constant 'hiss' from photons arriving like raindrops at our light detectors; rumbles from seismic noise like earthquakes and the oceans pounding on the Earth's crust; strong winds shaking the buildings enough to affect our detectors.'

However, if a gravitational wave is found, it should create a similar signal in both instruments nearly simultaneously.

The main claim of Jackson's team is that there appears to be correlated noise in the detectors at the time of the gravitational-wave signal.
The main idea behind LIGO is to have two detectors, 2000 miles apart, and look for correlations in the data. Each detector by itself just looks like noise. Any big correlation is assumed to be a black hole collision in a distant galaxy.

Finding correlations should be child's play, but it takes the LIGO team many months to announce that they found a correlation.

Here is the LIGO response:
LIGO analyses use whitened data when searching for compact binary mergers such as GW150914. When repeating the analysis of Creswell et al. on whitened data these effects are completely absent. ... Our 5-sigma significance comes from a procedure of repeatedly time-shifting the data, which is not invalidated if correlations of the type described in Creswell et al. are present.
In other words, the LIGO team massages the data to get rid of the small correlations found by the critics.

This is lame. It appears that the LIGO team is whitewashing some data in order to make their black hole collision model appear more accurate. They do not want to admit that there are some unexplained correlations.
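
For what it's worth, "whitening" itself is a standard and simple operation. A toy sketch (my own synthetic noise; LIGO estimates the noise spectrum from off-source data, but the idea is the same): divide the Fourier transform by the noise amplitude spectrum, so that every frequency band counts equally.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8192

# Synthetic 'detector noise' with much more power at low frequencies.
freqs = np.fft.rfftfreq(n, d=1.0)
shape = 1.0 / (0.01 + freqs)
colored = np.fft.irfft(np.fft.rfft(rng.normal(size=n)) * shape, n)

# Whiten: divide the spectrum by its estimated amplitude spectral density.
spec = np.fft.rfft(colored)
asd = np.abs(spec) + 1e-30          # crude one-shot ASD estimate
whitened = np.fft.irfft(spec / asd, n)

# After whitening, the spectrum is flat: every bin has the same magnitude.
print(np.abs(np.fft.rfft(whitened)).std())
```

Whether this step removes real unexplained correlations or just instrumental artifacts is exactly what the dispute with Jackson's team is about.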

LIGO Skeptic blog is also all over this.

Wednesday, July 5, 2017

Google predicts quantum supremacy in 2017

SciAm reports:
Scientists have long dreamed of developing quantum computers, machines that rely on arcane laws of physics to perform tasks far beyond the capability of today’s strongest supercomputers. ...

But to realize those visions, scientists first have to figure out how to actually build a quantum computer that can perform more than the simplest operations. They are now getting closer than ever, with IBM in May announcing its most complex quantum system so far and Google saying it is on track this year to unveil a processor with so-called “quantum supremacy” — capabilities no conventional computer can match. ...

Alan Ho, an engineer in Google’s Quantum Artificial Intelligence Lab, told the conference his company expects to achieve quantum supremacy with a 49-qubit chip by the end of this year.
This is not going to happen. Check back on this blog at the end of the year to see if I am wrong.

There has been 25 years of research on quantum computers, and no one has achieved quantum supremacy. Now we have Google, IBM, and others making specific predictions. We will finally see whether they are right or wrong.
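
The 49-qubit figure is not arbitrary. Simulating n qubits classically means storing 2^n complex amplitudes, and around 49 qubits that blows past any existing supercomputer's memory. A quick sketch of the arithmetic:

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a full n-qubit state vector: 2**n complex128 amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

# Memory needed grows exponentially; 49 qubits is already ~9 petabytes.
for n in (30, 40, 49):
    print(n, statevector_bytes(n) / 1e12, "TB")
```

That is why 49 qubits is pitched as the "supremacy" threshold; it says nothing about whether the hardware can actually keep those qubits coherent.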
Some of those factors can work against one another; adding more qubits, for instance, can increase the rate of errors as information passes down the line from one qubit to another. ...

And increasing the number of qubits, no matter what technology they are used with, makes it harder to connect and manipulate them—because that must be done while keeping them isolated from the rest of the world so they will maintain their quantum states. The more atoms or electrons are grouped together in large numbers, the more the rules of classical physics take over—and the less significant the quantum properties of the individual atoms become to how the whole system behaves. “When you make a quantum system big, it becomes less quantum,” Monroe says.
Yes, there could be technological roadblocks to doing what they want. These problems have not been solved, and may never be solved.
Chow thinks quantum computers will become powerful enough to do at least something beyond the capability of classical computers — possibly a simulation in quantum chemistry — within about five years. Monroe says it is reasonable to expect systems containing a few thousand qubits in a decade or so. To some extent, Monroe says, researchers will not know what they will be able to do with such systems until they figure out how to build them.

Preskill, who is 64, says he thinks he will live long enough to see quantum computers have an impact on society in the way the internet and smartphones have — although he cannot predict exactly what that impact will be.
This is more delusional thinking. A lot of smart ppl have done a lot of good theoretical work on what quantum computer algorithms would be feasible, with a decent number of qubits. The main application will be to destroy the security of internet communications. This impact will be overwhelmingly negative, if quantum computers are feasible.

There are some other possibilities, like simulating chemical reactions. These might have some research interest. It is extremely doubtful that these would have commercial utility. It is even more doubtful that there would be any consumer applications.

Monday, July 3, 2017

Ambiguous causality in quantum mechanics

I have posted about how important causality is to physics, and now I see an article claiming a quantum counterexample.

Philip Ball writes in Nature mag:
Causation has been a key issue in quantum mechanics since the mid-1930s, when Einstein challenged the apparent randomness that Niels Bohr and Werner Heisenberg had installed at the heart of the theory. Bohr and Heisenberg's Copenhagen interpretation insisted that the outcome of a quantum measurement — such as checking the orientation of a photon's plane of polarization — is determined at random, and only in the instant that the measurement is made. No reason can be adduced to explain that particular outcome. But in 1935, Einstein and his young colleagues Boris Podolsky and Nathan Rosen (now collectively denoted EPR) described a thought experiment that pushed Bohr's interpretation to a seemingly impossible conclusion. ...

Brukner's group in Vienna, Chiribella's team and others have been pioneering efforts to explore this ambiguous causality in quantum mechanics. They have devised ways to create related events A and B such that no one can say whether A preceded and led to (in a sense 'caused') B, or vice versa. ...

The trick they use involves creating a special type of quantum 'superposition'. ... The two observable states can be used as the binary states (1 and 0) of quantum bits, or qubits, which are the basic elements of quantum computers.

The researchers extend this concept by creating a causal superposition. In this case, the two states represent sequences of events: a particle goes first through gate A and then through gate B (so that A's output state determines B's input), or vice versa.
This research is somewhat interesting, but it is not what it appears.

They find an ambiguity in the path of the photon, but there are always such ambiguities in quantum mechanics. In a simple double-slit experiment, where a light source sends photons thru a slit A and slit B to a screen, the detector on the screen cannot tell you whether the photon went thru slit A or B. The preferred interpretation is that the light is some sort of quantum wave that goes thru both slits. The light is not really photons until they hit the detectors.
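
The standard two-slit arithmetic makes the point. The fringe pattern comes from adding wave amplitudes from both slits, I = 4 I0 cos²(π d sinθ / λ), which no either-slit particle story reproduces (the wavelength and slit spacing below are my own example numbers):

```python
import math

def two_slit_intensity(theta, wavelength, d, i0=1.0):
    """Idealized two-slit interference: I = 4*I0*cos^2(pi*d*sin(theta)/lambda)."""
    phase = math.pi * d * math.sin(theta) / wavelength
    return 4 * i0 * math.cos(phase) ** 2

lam = 500e-9   # green light, 500 nm
d = 10e-6      # 10-micron slit separation

print(two_slit_intensity(0.0, lam, d))                       # central bright fringe
print(two_slit_intensity(math.asin(lam / (2 * d)), lam, d))  # first dark fringe
```

The dark fringes are places where amplitudes from the two slits cancel, which only makes sense if something goes thru both.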

This experiment does not really violate causality as the term is usually understood. It is just another of many experiments that are hard to interpret if you think of light as Newtonian particles. Such experiments convinced physicists that light is a wave about 200 years ago. A century ago light was found to be a funny quantized wave, but not a particle in the way you normally think of particles.

I don't agree with calling light a particle, but I also don't agree with saying that it is random up until the instant of a measurement. We don't really know how to make sense out of such statements. Quantum mechanics is a theory about making predictions about observations, and I think Bohr and Heisenberg would say that it doesn't make any sense to talk about the path of the photon (such as going thru slit A or B, or going from A to B or B to A) unless you are actually measuring it.

Friday, June 30, 2017

Max Born on the history of relativity

A commenter mentioned this quote, and so does the Wikipedia spacetime talk page:
[German physicist Max] Born wrote: "[...] I went to Cologne, met Minkowski and heard his celebrated lecture 'Space and Time' delivered on 2 September 1908. [...] He told me later that it came to him as a great shock when Einstein published his paper in which the equivalence of the different local times of observers moving relative to each other was pronounced; for he had reached the same conclusions independently but did not publish them because he wished first to work out the mathematical structure in all its splendor. He never made a priority claim and always gave Einstein his full share in the great discovery."
Born also spent three years trying to convince Whittaker to credit his friend Einstein for special relativity, but Whittaker wrote that Lorentz and Poincare had it all before Einstein. As Born wrote to Einstein:
Whittaker, the old mathematician, who lives here as Professor Emeritus and is a good friend of mine, has written a new edition of his old book History of the Theory of the Ether, of which the second volume has already been published. Among other things it contains a history of the theory of relativity which is peculiar in that Lorentz and Poincaré are credited with its discovery while your papers are treated as less important. ... As a matter of fact I have done everything I could during the last three years to dissuade Whittaker from carrying out his plan, which he had already cherished for a long time and loved to talk about. ...

He insisted that everything of importance had already been said by Poincaré, and that Lorentz quite plainly had the physical interpretation.

I don't see that these self-serving quotes mean much. The fact is that Minkowski gave Einstein very little credit, and Minkowski cheated others out of credit also. Minkowski died soon afterwards, so we do not know what he would have thought of the credit dispute.

Born wrote some papers on the relativity of rigid bodies, as if there were such a thing. He seems to have understood the Lorentz-Einstein version of the theory, but it is not clear that he accepted the Poincare-Minkowski version.

As discussed here, Born's opinions on the matter are confusing. While he refused to give Lorentz full credit for relativity, he implied that he never read Poincare's papers until much later, and when he did, he admitted that Poincare seemed to have the whole theory before Einstein. It appears to me that Born wanted to credit Einstein, but could not find a good reason for doing so.

Born's opinion might be important if he had first-hand knowledge of unpublished opinions. He was good friends with Einstein, Lorentz, and Whittaker. But we don't need Born to tell us what was published in the original papers.

Monday, June 26, 2017

Causality is essential to physics

I mentioned Massimo Pigliucci's attack on causality. He has now closed comments, so I respond here (ignoring his usual ad hominem attacks).

He argues that causality is important in all the sciences except for fundamental physics, where it is not because of the following chain of reasoning:

* The equations of fundamental physics have a time reversal symmetry.
* Microscopic physics obeys those equations, and hence has no arrow of time.
* Entropy has an arrow of time, but that is classical physics, and hence not fundamental.
* Without a fundamental arrow of time, there is no way to say one thing causes another.

This is just wrong on every level. The equations of fundamental physics do have time reversal asymmetries. Even systems with time-reversal-symmetric equations show physics with an arrow of time. Entropy increases on both the quantum and classical levels, and is as fundamental as anything. The soft sciences often establish causality without using any arrow of time.

As commenter Coel explains, quantum mechanics is an irreversible theory. Every observation is irreversible. Decoherence is irreversible. CP violating weak interactions are irreversible.

MP quotes Eddington and the Wikipedia Arrow of Time article:
Physical processes at the microscopic level are believed to be either entirely or mostly time-symmetric: if the direction of time were to reverse, the theoretical statements that describe them would remain true. Yet at the macroscopic level it often appears that this is not the case: there is an obvious direction (or flow) of time.
This statement is artfully misleading, as there is also an obvious direction of time at the microscopic level.

You might see a neutron decay into a proton, electron, and (anti-)neutrino. You never see a proton, electron, and anti-neutrino all coming together to make a neutron. Likewise, other nuclear reactions have an obvious direction of time.

Wave equations often have time reversal symmetries, but the observed waves do not. Waves go forward in time from initial conditions, and this is often obvious by looking at the wave.
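A toy model makes the point concrete: dynamics that is deterministic and exactly time-reversible still shows coarse-grained entropy increasing, once you start from a low-entropy initial condition. Here is a rough sketch (box size, particle count, and coarse-graining all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Particles start bunched in a corner of the box [0, 1] -- a low-entropy
# initial condition -- with random velocities. The dynamics (free flight
# with reflecting walls) is deterministic and exactly time-reversible.
x0 = rng.uniform(0.0, 0.05, n)
v = rng.uniform(-1.0, 1.0, n)

def positions(t):
    """Reflect x0 + v*t back into [0, 1] (billiard in a 1-D box)."""
    y = np.mod(x0 + v * t, 2.0)
    return np.where(y < 1.0, y, 2.0 - y)

def coarse_entropy(x, bins=20):
    """Shannon entropy of the coarse-grained position distribution."""
    counts, _ = np.histogram(x, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

s_early = coarse_entropy(positions(0.0))
s_late = coarse_entropy(positions(50.0))
print(s_early, s_late)   # grows toward the uniform value log(20) ≈ 3.0
```

Reversing all the velocities retraces the motion exactly, yet the entropy still rises away from the special initial condition in either time direction. The arrow comes from the initial condition, not from any asymmetry in the equations.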

All of this does not really have much to do with causality. A medical paper might have data showing that smoking causes lung cancer, but it does not need an arrow of time to reach that conclusion. There would be causality even if all the laws of physics were time symmetric.

I am not just blaming MP here, as he says he is just reciting conventional wisdom among philosophers. If so, then philosophers do not know the first thing about physics.

Sunday, June 25, 2017

Entropy, time, and causality are fundamental

Philosopher of science Massimo Pigliucci is a skeptic about physics, causality, and time, and writes:
But entropy increase is simply an empirical observation. It’s not found anywhere in the equations. And that is the problem. Nobody denies that entropy increases, that time exists (well, actually some do), or that causes precede effects. The problem is that none of this is found in the equations of either quantum mechanics or general relativity. And those are the only fundamental theories about reality we have. ...

Second, the problem with the Big Bang, has Smolin very clearly explains in his book (see: http://tinyurl.com/ohaum9e) does, in fact, present a problem for people who accept at face value the implications of general relativity and the so called block-universe: if one denies the fundamentally of time, then one has to conclude that the biggest discovery of modern cosmology, that the universe had a beginning, is in a deep sense an illusion. I’m not taking sides here, simply pointing out that there is a fundamental problem that keeps physicists up at night.
He is responding to my comments that physics has a direction of time, and that causality is fundamental to physics.

He admits that he is not a physics expert, but where does he get this stuff?

Entropy increase is not just an empirical observation. It is fundamental to modern physics. So is time and causality. They are baked into the equations as well as the theories. I don't see how you can study physics at all, and miss these points. Where do I start explaining it to him?

Update: Pigliucci responds:
“You have a funny idea of science.”

Ah, yes, I love it when people tell a scientist that he has funny ideas about science, meaning that he doesn’t understand the basics.
Comments on his site are strictly moderated, so he deletes any comment that offends him. Of course he is free to insult commenters like me.

He says he is a scientist, having previously worked in biology before switching to philosophy and the study of pseudoscience.

He gets his info about physics from philosophers, and from Lee Smolin. Smolin is way out on the fringe of physics. His last book is concerned with philosophical questions like whether time is real.

Asking whether time is real is not a scientific question. Obviously it is real in the sense that it is measurable, and it is essential to both our practical and conceptual understandings of the world. What could be more real than that?

Philosophers can argue that pretty much anything is not real, but an illusion. Maybe we live in a simulation. Maybe time is a disguise for something else that is not understood, and we cannot understand the unreality of time because we don't understand the something. Philosophers commonly engage in such silliness, not scientists.

Saturday, June 24, 2017

Decoherence is phenomenally efficient

British science writer Philip Ball writes in aeon.co:
Quantum mechanics allows us to calculate that rate, so that we can put the theory of decoherence to the test. Serge Haroche and colleagues at the École Normale Supérieure in Paris first did that in 1996 by measuring decoherence of an atom held in a device called a ‘light trap’ and interacting with photons. The loss of interference between states of the atom owing to decoherence, as calculated from quantum theory, matched the experimental observations perfectly. And in 2003 a team at the University of Vienna led by Anton Zeilinger and Markus Arndt watched interference vanish between the quantum waves of large molecules, as they altered the rate of decoherence by gradually admitting a background gas into the chamber where the interference took place, so that the gas molecules would collide with those in the matter waves. Again, theory and experiment tallied well.

Decoherence is a phenomenally efficient process, probably the most efficient one known to science. For a dust grain 100th of a millimetre across floating in air, it takes about 10^-31 seconds: a million times faster than the passage of a photon of light across a single proton! Even in the near-isolation of interstellar space, the ubiquitous photons of the cosmic microwave background – the Big Bang’s afterglow – will decohere such a grain in about one second.

So, for objects approaching the macroscopic scale under ordinary conditions, decoherence is, to all practical purposes, inevitable and instantaneous: you can’t keep them looking ‘quantum’. It’s almost as if the laws of quantum physics that make the world are contrived to hide those very laws from anything much bigger than atom-sized, tricking us into thinking that things just have to be the way we experience them.
LuMo quibbles about this, and explains:
Decoherence is an effective process – perhaps a gedanken process – which is irreversible and erases the information about the relative complex phases. You start with an observed physical system, like a cat. Decoherence will ultimately pick "dead" and "alive" as the preferred basis.
What they don't explain is that decoherence is what destroys quantum computers.

When you hear about some hypothetical quantum computer doing some fantasy computation like factoring a 200-digit integer, it has to do it all within that decoherence time. But as Ball says, decoherence is nearly instantaneous.
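Ball's comparison is easy to check with back-of-the-envelope numbers (a rough sanity check of my own, using a proton diameter of about 1.7 × 10^-15 m):

```python
# Sanity check on Ball's timescale comparison, with rough round figures.
c = 3.0e8                  # speed of light, m/s
proton_diameter = 1.7e-15  # metres, approximate
t_photon = proton_diameter / c    # time for light to cross a proton
t_decohere = 1e-31                # Ball's dust-grain decoherence time, seconds

ratio = t_photon / t_decohere
print(f"photon crossing time: {t_photon:.1e} s")
print(f"decoherence is about {ratio:.0e} times faster")
```

If anything, Ball's "a million times faster" is conservative: the ratio comes out nearer ten million. Either way, the window for a coherent computation is absurdly short unless the qubits are isolated extraordinarily well.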

When Ball says "Decoherence is a phenomenally efficient process", he means that it is efficient at destroying any possibility of super-Turing computation.

It is almost as if the laws of physics are contrived to prevent us from doing quantum supremacy computations.

Decoherence is a fancy word for why quantum weirdness does not show up at a macroscopic level. I am in a minority, but I say that it is also why quantum weirdness does not enable super-Turing computation.

Wednesday, June 21, 2017

Human genome only 90% sequenced

Were you under the impression that the human genome had been sequenced? That it was completed in 2003?

STAT reports:
“It’s very fair to say the human genome was never fully sequenced,” Craig Venter, another genomics luminary, told STAT.

“The human genome has not been completely sequenced and neither has any other mammalian genome as far as I’m aware,” said Harvard Medical School bioengineer George Church, who made key early advances in sequencing technology. ...

Perhaps nobody paid much attention because the missing sequences didn’t seem to matter. But now it appears they may play a role in conditions such as cancer and autism. ...

Church estimates 4 percent to 9 percent of the human genome hasn’t been sequenced. Miga thinks it’s 8 percent.
Why couldn't they announce that the genome was 90% sequenced? Did they think that the public was incapable of understanding that?

Of course the public can understand that. There was some sort of conspiracy to mislead the public.

I remember the original announcement making a big deal about finding all the genes, and then later learning that they did not even know how many genes there were.

I heard that there were gaps, but now I learn 4 to 9% is missing! That tells me that not only is a lot missing, but they don't even know how much is missing.