Wednesday, December 30, 2015

Latest search for Schroedinger cats

NewScientist mag reports:
Try to imagine a tiny ball sitting on one fingertip yet also on your shoulder at the same instant. Are you struggling? Most of us can’t conceive of an object being in two places at once – yet physicists have just demonstrated the effect over a distance of half a metre, smashing previous records.

It’s an example of superposition, the idea that an object can exist in two quantum states at the same time. This persists until it is observed, causing a property called its wave function to collapse into one state or the other. The same principle allows Schrödinger’s cat to be both dead and alive inside a box until you open the lid.
That is quite a little magic trick. You announce that you have made something in two places at once, but there is a catch -- if you look at it, then it reverts to being in just one place.
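
To make the "collapse on observation" claim concrete, here is a toy numerical sketch (my own illustration, not from the article), assuming an ideal two-state system and an ideal projective measurement:

    import numpy as np

    rng = np.random.default_rng(0)

    # Equal superposition of "position A" and "position B"
    state = np.array([1, 1], dtype=complex) / np.sqrt(2)

    # Looking at it: Born-rule probabilities, then collapse to one outcome
    probs = np.abs(state) ** 2            # [0.5, 0.5]
    outcome = rng.choice(2, p=probs)      # 0 -> A, 1 -> B
    state = np.eye(2, dtype=complex)[outcome]
    print("found at", "A" if outcome == 0 else "B", state)

Before the measurement the amplitudes describe both locations; afterward only one remains.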

The article is kind enuf to supply some skepticism:
But it may be less of a breakthrough than it seems. In 2013, Hornberger helped devise a “weirdness scale” that scores experiments according to how far they show quantum effects extending into the everyday world. Kasevich’s work extends the distance scale but compromises in other ways, so scores about the same as previous attempts, says Hornberger. That means a true Schrödinger’s cat is still far from being realised.
We need a weirdness scale for overhyped science news stories. +1 for a phony experiment. +1 for a phony theory. +1 for some bogus interpretation of the result.

I don't want to be too harsh, as this is just a bogus interpretation.

Tuesday, December 29, 2015

The big AdS/CFT fad

Physicist Sabine Hossenfelder writes:
As to experimental evidence. This isn’t an issue specific to string theory. ...

My point of view has always been that quantum gravity isn’t science as long as one doesn’t at least try to find experimental evidence. ...

Why are there so few people working on finding experimental evidence for quantum gravity? I can tell you it’s not for lack of interest, at least not among the young people, it’s for lack of funding. ...

At some point I concluded that quantum gravity phenomenology is a bad, bad field to work on because it flops through peer review, which is almost certainly conducted by people who in the majority think that funding should go into further mathematical speculations. Seeing that my colleagues who work on AdS/CFT have funds thrown after them, I applied for funding in that field too. Nevermind that I have basically zero prior experience and the field is entirely overpopulated already.
Peter Woit replies:
The AdS/CFT phenomenon is quite remarkable and deserving of its own book. I see that the Maldacena paper is up above 10,000 citations, way off scale anything that has ever happened in the history of physics. What I’ve heard time and again from young theorists is that they go into AdS/CFT because it’s something that seems to be not understood, seems to be both a new deep idea about quantum gravity, and to have other applications, and, of course, it’s where most of the jobs are.
Of course there is no experimental evidence for AdS/CFT, and there never will be any, as it has no relation to the real world. It is all just some hypothetical boundary value problem on a black hole in some alternate cosmology unlike what we know of the universe.

Monday, December 28, 2015

Sam Harris and multiple universes

Atheist neuroscientist Sam Harris is one of the more prominent scientist public intellectuals. I thought that he was one of those rationalist atheists with a scientific outlook on everything, but he is really not. Most of his opinions have very little to do with science. He acts as if he is speaking with the authority of science, but he is not.

I also thought that he was anti-religion, but he is really some sort of Buddhist, and he regularly preaches about how meditation and hallucinogenic drugs can lead to spiritual enlightenment.

Most of this is off-topic for this blog, but I have attacked his position on free will.

Now he has interviewed some physicists, including Max Tegmark (mp3) and David Deutsch (mp3).

Physicist Lawrence Krauss is somewhat like Sam Harris, and often pushes leftist atheism in debates with religious scholars. His video debate with Muslim scholar Hamza Tzortzis is particularly amusing. Tzortzis concedes all scientific issues to Krauss, and Krauss makes him look like an ignorant fool. The discussion of infinity is embarrassing on both sides. Krauss hammers Islam as being morally objectionable in many ways.

But then, about halfway thru, Krauss is asked how he can disapprove of incest, if he rejects God. Then Krauss squirms, and the audience groans. Krauss admits that he cannot say that incest is wrong. Tzortzis retorts that if Krauss is such a moral relativist that he cannot say incest is wrong, then how can he so positively say that all of Islam is so wrong.

Krauss bragged that he wrote a book showing that the universe can develop from nothing, and belittled Tzortzis for not understanding that and challenged him to say what "nothing" means. To Krauss's surprise, Tzortzis said he read the book, and the book defines nothing as a quantum field. Krauss tried to deny this, but Tzortzis was absolutely correct on this point.

Scientists have a huge advantage in these debates, if they stick to the science. From what I have seen, they cannot do it, and get sucked into debating all sorts of issues where they no longer have hard evidence to back them up. Then they are more like politicians with opinions.

Saturday, December 26, 2015

Realism, antirealism, and conventionalism

I mentioned a German conference on the heart and soul of physics. It has gotten more publicity in Nature mag, and an article on why string theory is not science.

Professor of pseudoscience philosophy Massimo Pigliucci summarizes, but I should warn you that he is one of those leftist ideologue anti-science reality-denying philosophers.

Here is his latest rant, on the denial of human races:
Realism, antirealism and conventionalism are technical philosophical terms usually deployed in discussions of philosophy of science, philosophy of mathematics, and ethics. Say we are talking about the existence of mathematical objects (or of moral truths, which in many respects is an analogous concept). If one is a realist about these objects one is saying that there is a ontologically thick sense in which, say, numbers “exist.” Ontologically thick here means that numbers exist in a mind-independent way, though not in the physical sense that you can point a telescope somewhere and see them. More along the lines of “if there are any other intelligent beings in the cosmos they will independently ‘discover’ the concept of numbers.”

Being antirealist about numbers (or moral truths) means, of course, exactly the opposite: the antirealist doesn’t deny that numbers, once defined in a certain way, have certain objective properties. But she denies that it makes sense to think of any such definition in a mind-independent fashion.

The conventionalist, then, provides one possible antirealist account of numbers (or moral truths) to counter the realist one: numbers, like all mathematical objects, are human inventions, which are constructed in certain ways but could have been constructed differently. They are not “discovered,” they are invented. ...

Kaplan and Winther conclude (and, I think, are obviously correct) that the most sensible positions concerning race are: conventionalism about bio-genomic clusters, antirealism about biological races, and realism about social races.
These terms are somewhat confusing, as the quantum mechanics literature uses them in a manner that may be opposite to what you expect. In that, the realists are the Einstein-Bohm-Bell followers who believe that QM is wrong and must be replaced by a theory of hidden variables, which are unseen and unknown to modern science. The anti-realists are those who follow the Copenhagen interpretation or something similar, and believe that the observations are what is real.

Conventionalism is associated with Poincare, who believed that the aether was a useful convention, and you could believe or not believe in it, depending on convenience.

Applying these ideas to numbers is a little strange. That 2+2=4 is a universal truth, independent of mind and convention.

Applying them to human races requires putting on ideological blinders. A commenter points to objective scientific genetic differences between the races, and he is just called a "racialist" who is undermining progressive work towards a post-racial society. Pigliucci says, "I can tell you that the majority of human population biologists don’t think that races exist in anything like the folk concept." He only admits that the genetic differences are useful for medical purposes.

Update: Pigliucci defends his political activism, because he wants to fight the creationists and, I guess, to attack those who believe in human races. This is funny because his racial denialism is just another form of creationism. They both deny human evolution and scientific evidence.

Thursday, December 24, 2015

Using Einstein to justify diversity rules

I have sharply criticized the popular credit given to Albert Einstein for relativity, on this blog and in my book. The purpose is not to run down Einstein, but to counter the way the history of relativity is told wrong in order to promote all sorts of bad ideas.

Thomas Levenson, an MIT professor of science writing, writes in Atlantic mag:
In describing the paradox that led him to his early breakthrough, the Special Theory of Relativity, Einstein noted that it took him a decade to get from his first thoughts to the final theory. (He started at 16, so he has some excuse.) The breakthrough turned on his realization that measurements of time and space aren’t absolute. Rather, they shift for different observers depending on how they’re moving relative to each other.

As he struggled to finish the theory, Einstein found that “even scholars of audacious spirit and fine instinct can be obstructed in the interpretation of facts by philosophical prejudices.” Einstein himself had to reach out of physics to develop the habits of mind that allowed him to see past the prejudices that obscured the relativistic universe he ultimately discovered. “The type of critical reasoning which was required for the discovery of this central point” he wrote, “was decisively furthered, in my case especially, by the reading of David Hume’s and Ernst Mach’s philosophical writings.”
Roberts’s question about the benefits minorities might bring into a physics classroom suggests a classroom in which nothing outside physics may usefully impinge.

David Hume! Benjamin Franklin’s friend (they corresponded on the matter of lightning rods, among other matters) and Adam Smith’s confidante! Hume’s name isn’t usually linked to deep physical insights. But as Einstein suggests, the intellectual work of physics doesn’t occur in a vacuum. It can’t be separated from a person’s learned habits of thought, from a particular set of lived incidents, books read, alternate worlds imagined. The diversity of Einstein’s particular background, sensibility, and cultural circumstances all played a role in bringing Special Relativity to fruition.
Einstein mentioning Hume and Mach is just a sneaky way of avoiding the fact that he got the whole theory from Lorentz and Poincare.

Levenson uses this to say that the US Supreme Court is wrong, and that Physics needs affirmative action to bring in new people and ideas. Lubos Motl addresses the foolishness of his argument.

Contrary to popular belief, Einstein was not an outsider. He was a German who progressed thru a rigid educational system, getting a doctoral degree in physics from a prestigious university. He did have an outside job while finishing his dissertation, but I doubt that was unusual.

Wednesday, December 23, 2015

Quantum crypto busted again

I mentioned an experiment to close Bell loopholes, and now there are a couple of others, as Alain Aspect summarizes:
By closing two loopholes at once, three experimental tests of Bell’s inequalities remove the last doubts that we should renounce local realism. They also open the door to new quantum information technologies.
This is a nice affirmation of the generally accepted quantum mechanics of 1930, but it has been raised to a very high profile for two reasons. It supposedly proves nature is nonlocal, and it supposedly enables unbreakable quantum cryptography.

It does not prove nonlocality, as I have repeatedly explained. It is also worthless for cryptography, for several reasons.

One reason is that all of these supposedly unbreakable systems have been repeatedly broken. Here is the latest:
Quantum key distribution is supposed to be a perfectly secure method for encrypting information. Even with access to an infinitely fast computer, an attacker cannot eavesdrop on the encrypted channel since it is protected by the laws of quantum mechanics. In recent years, several research groups have developed a new method for quantum key distribution, called "device independence." This is a simple yet effective way to detect intrusion. Now, a group of Swedish researchers question the security of some of these device-independent protocols. They show that it is possible to break the security by faking a violation of the famous Bell inequality. By sending strong pulses of light, they blind the photodetectors at the receiving stations which in turn allows them to extract the secret information sent between Alice and Bob.
This will not kill the subject, because the quantum Bell-heads will suggest work-arounds. They have too much invested in this nonsense. But it will never be something practical and secure enuf to buy goods on Ebay, and there is other technology that solves that problem. Quantum cryptography is useless in principle, and is a big scam.

Monday, December 21, 2015

Dark buzz killed the dinosaurs

From a dopey NY Times book review:
A good theory is an act of the informed imagination — it reaches toward the unknown while grounded in the firmest foundations of the known. In “Dark Matter and the Dinosaurs,” the Harvard cosmologist Lisa Randall proposes that a thin disk of dark matter in the plane of the Milky Way triggered a minor perturbation in deep space that caused the major earthly catastrophe that decimated the dinosaurs. It’s an original theory that builds on a century of groundbreaking discoveries to tell the story of how the universe as we know it came to exist, how dark matter illuminates its beguiling unknowns and how the physics of elementary particles, the physics of space, and the biology of life intertwine in ways both bewildering and profound.

If correct, Randall’s theory would require us to radically reappraise some of our most fundamental assumptions about the universe and our own existence. Sixty-­six million years ago, according to her dark-matter disk model, a tiny twitch caused by an invisible force in the far reaches of the cosmos hurled a comet three times the width of Manhattan toward Earth at least 700 times the speed of a car on a freeway. The collision produced the most powerful earthquake of all time and released energy a billion times that of an atomic bomb, heating the atmosphere into an incandescent furnace that killed three-quarters of Earthlings. No creature heavier than 55 pounds, or about the size of a Dalmatian, survived. The death of the dinosaurs made possible the subsequent rise of mammalian dominance, without which you and I would not have evolved to ponder the perplexities of the cosmos. ...

Randall calls the force driving that fraction “dark light” — an appropriately paradoxical term confuting the haughty human assumption that the world we see is all there is.
Supposedly she was going to call it "dark buzz", but she did not want to drive traffic to this blog. (Just kidding, Lisa.)

Not everyone accepts that a comet or asteroid wiped out the dinosaurs. A recent Time mag article said:
A new study examining volcanic eruptions and the infamous dinosaur-killing asteroid proposes a compromise in which both were responsible for the great extinction that occurred about 65 million years ago.

In the study, published in the journal Science, Geologists examined the timing of the already well-researched volcanic activity in the Deccan Traps in western India, and found that those eruptions occurred within 50,000 years of the asteroid hit—a pretty narrow window in geologic time. The seismic punch of the impact may have accelerated the speed of the eruptions, making it not only likely that both events had a hand in the dinosaurs’ downfall, but difficult to tease out which, if either, was more to blame.
So this may never be resolved.

Randall is trying to be taken seriously, but wherever she goes to plug her book, the reporters spend most of their time asking her for stories about how male chauvinist physicists mistreated her, about how women are marginalized, and generally baiting her into talking feminist politics instead of physics. She is too polite to say so, but it is obvious that she is much more annoyed at how these reporters belittle women than at any gripes against physicists.

For an example, see The One Question This Brilliant Physicist Wants People To Stop Asking Her in the AOL Huff Post. I got suckered by the click-bait, as I wanted to see if the question was about Jodie Foster, who played a cosmologist in a movie based on a Carl Sagan book.

Other physicists are more political, and some are telling the US Supreme Court that we need affirmative action:
Minority students attending primarily white institutions commonly face racism, biases, and a lack of mentoring. Meanwhile, white students unfairly benefit psychologically from being overrepresented. ...

We ask that you take these considerations seriously in your deliberations and join us physicists and astrophysicists in the work of achieving full integration and removing the pernicious vestiges of racism and white supremacy from our world.
This is just anti-white hatred. Physics has problems, but having too many white people is not one of them. Yes, most of the historical progress has been from white Christians and Jews, but not out of some white supremacy.

Friday, December 18, 2015

Battle for the heart and soul of physics

Quanta Magazine reports on a recent conference in Germany:
Ellis and Silk declared a “battle for the heart and soul of physics.” ...

Whether the fault lies with theorists for getting carried away, or with nature, for burying its best secrets, the conclusion is the same: Theory has detached itself from experiment. The objects of theoretical speculation are now too far away, too small, too energetic or too far in the past to reach or rule out with our earthly instruments. So, what is to be done? ...

Over three mild winter days, scholars grappled with the meaning of theory, confirmation and truth; how science works; and whether, in this day and age, philosophy should guide research in physics or the other way around.
Apparently they think what should be done is to redefine physics and science so that it includes theoretical speculations that can never be tested.
Today, most physicists judge the soundness of a theory by using the Austrian-British philosopher Karl Popper’s rule of thumb. In the 1930s, Popper drew a line between science and nonscience in comparing the work of Albert Einstein with that of Sigmund Freud. Einstein’s theory of general relativity, which cast the force of gravity as curves in space and time, made risky predictions — ones that, if they hadn’t succeeded so brilliantly, would have failed miserably, falsifying the theory. But Freudian psychoanalysis was slippery: Any fault of your mother’s could be worked into your diagnosis. The theory wasn’t falsifiable, and so, Popper decided, it wasn’t science.

Critics accuse string theory and the multiverse hypothesis, as well as cosmic inflation — the leading theory of how the universe began — of falling on the wrong side of Popper’s line of demarcation. To borrow the title of the Columbia University physicist Peter Woit’s 2006 book on string theory, these ideas are “not even wrong,” say critics. In their editorial, Ellis and Silk invoked the spirit of Popper: “A theory must be falsifiable to be scientific.”

But, as many in Munich were surprised to learn, falsificationism is no longer the reigning philosophy of science. Massimo Pigliucci, a philosopher at the Graduate Center of the City University of New York, pointed out that falsifiability is woefully inadequate as a separator of science and nonscience, as Popper himself recognized. Astrology, for instance, is falsifiable — indeed, it has been falsified ad nauseam — and yet it isn’t science. Physicists’ preoccupation with Popper “is really something that needs to stop,” Pigliucci said. “We need to talk about current philosophy of science. We don’t talk about something that was current 50 years ago.”

Nowadays, as several philosophers at the workshop said, Popperian falsificationism has been supplanted by Bayesian confirmation theory, or Bayesianism, a modern framework based on the 18th-century probability theory of the English statistician and minister Thomas Bayes. Bayesianism allows for the fact that modern scientific theories typically make claims far beyond what can be directly observed — no one has ever seen an atom — and so today’s theories often resist a falsified-unfalsified dichotomy.
The philosophers are even worse than the physicists.

About 50 years ago, philosophers rejected the whole idea that science is about seeking truth in the natural world. So they reject the idea that theories can be falsified, or that objective knowledge exists, or that scientists follow a scientific method, or that science makes progress. Most of these philosophers are ignorant of XX century science, and base their views on those of other philosophers, and on science from many centuries ago.

The article suggests that many think that Bayesianism will save them, but Bayesian statistician Andrew Gelman says that they completely misunderstand Bayesianism.
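
For readers who have not seen it, the Bayesian updating in question is just Bayes' rule, P(H|D) = P(D|H)P(H)/P(D); a minimal sketch with made-up numbers:

    # Prior belief in hypothesis H, likelihoods of the data D under H and not-H
    prior = 0.5
    p_data_given_h = 0.8
    p_data_given_not_h = 0.3

    # Bayes' rule: posterior = P(D|H) P(H) / P(D)
    p_data = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
    posterior = p_data_given_h * prior / p_data
    print(posterior)    # ~0.727: the data raise confidence in H

Note that the update needs a likelihood P(D|H); the whole dispute is over theories that never produce any data D to condition on.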

If this isn't pseudoscience, I don't know what is. Someday these people will be laughed at like astrologers.

Popper died in 1994; Bayes died in 1761. It is strange for a philosopher to put down 50-year-old ideas as being unworthy of consideration.

I agree that there is a battle for the heart and soul of physics. Philosophers and the most prominent physicists are off the deep end into unscientific mysticism. I am inclined to believe that there is a silent majority of physicists who reject all of this nonsense, but it is hard to tell as they have been intimidated into silence.

Update: They say "no one has ever seen an atom", but scientists have made big progress on pictures of atoms and even molecular reactions taking place. See this UC Berkeley announcement.



Update: Nature mag article.

Wednesday, December 16, 2015

Hints of a new particle

Dennis Overbye reports in the NY Times:
One possibility, out of a gaggle of wild and not-so-wild ideas springing to life as the day went on, is that the particle — assuming it is real — is a heavier version of the Higgs boson, a particle that explains why other particles have mass. Another is that it is a graviton, the supposed quantum carrier of gravity, whose discovery could imply the existence of extra dimensions of space-time.
No, it will not be a graviton, and a graviton would not imply extra dimensions.

They found the Higgs, but no SUSY particles.

Funny that there is no mention of supersymmetry. The LHC was built to find the Higgs and confirm the standard model, and also to find some supersymmetric (SUSY) particles that would disprove it.
When all the statistical effects are taken into consideration, Dr. Cranmer said, the bump in the Atlas data had about a 1-in-93 chance of being a fluke — far stronger than the 1-in-3.5-million odds of mere chance, known as five-sigma, considered the gold standard for a discovery.
So don't get excited yet.
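
For scale, the quoted numbers are just two points on the normal-distribution tail; a quick one-sided check (using scipy) of how "1 in 93" compares with "five sigma":

    from scipy.stats import norm

    # Five sigma corresponds to roughly a 1-in-3.5-million chance of a fluke
    print(1 / norm.sf(5))          # ~3.5 million

    # A 1-in-93 chance of a fluke corresponds to only about 2.3 sigma
    print(norm.isf(1 / 93))        # ~2.30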

Monday, December 14, 2015

Teleportation gets breakthru award

Physics World announces:
The Physics World 2015 Breakthrough of the Year goes to Jian-Wei Pan and Chaoyang Lu of the University of Science and Technology of China in Hefei, for being the first to achieve the simultaneous quantum teleportation of two inherent properties of a fundamental particle – the photon. Nine other achievements are highly commended and cover topics ranging from astronomy to medical physics

Synonymous with the fictional world of Star Trek, the idea of teleportation has intrigued scientists and the public alike. Reality caught up with fiction in 1993, when an international group of physicists proved theoretically that the teleportation of a quantum state is entirely possible, so long as the original state being copied is destroyed. Successfully teleporting a quantum state therefore involves making a precise measurement of a system, transmitting the information to a distant location and then reconstructing a flawless copy of the original state. As the "no cloning" theorem of quantum mechanics does not allow for a perfect copy of a quantum state to be made, it must be completely transferred from one particle onto another, such that the first particle is no longer in that state.
From a strictly scientific view, this work is of only very minor significance. It only gets attention because of the Star Trek terminology.

Some info is being transmitted, and it is wrapped in quantum terms to make it sound like a big deal.
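
For the curious, here is a minimal numerical sketch of the standard single-qubit teleportation protocol the announcement describes (my own toy simulation with numpy, assuming ideal gates and measurements); the only thing actually sent to Bob is two classical bits.

    import numpy as np

    rng = np.random.default_rng(0)
    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    P = [np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)]  # |0><0|, |1><1|

    def kron3(a, b, c):
        return np.kron(a, np.kron(b, c))

    # Alice holds an unknown qubit |psi> (qubit 0) and half of a Bell pair (qubit 1);
    # Bob holds the other half (qubit 2)
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi, bell)

    # Alice: CNOT (control qubit 0, target qubit 1), then Hadamard on qubit 0
    state = kron3(H, I2, I2) @ (kron3(P[0], I2, I2) + kron3(P[1], X, I2)) @ state

    # Alice measures qubits 0 and 1 (Born rule), destroying her copy of |psi>
    probs = [np.linalg.norm(kron3(P[m1], P[m2], I2) @ state) ** 2
             for m1 in (0, 1) for m2 in (0, 1)]
    m1, m2 = divmod(rng.choice(4, p=np.array(probs) / sum(probs)), 2)
    state = kron3(P[m1], P[m2], I2) @ state
    state /= np.linalg.norm(state)

    # Bob receives the two classical bits and applies Z^m1 X^m2 to his qubit
    fix = np.linalg.matrix_power(Z, m1) @ np.linalg.matrix_power(X, m2)
    state = kron3(I2, I2, fix) @ state

    # Bob's qubit is now in the state |psi> (up to a global phase)
    bob = state[4 * m1 + 2 * m2: 4 * m1 + 2 * m2 + 2]
    print(m1, m2, abs(np.vdot(psi, bob)))   # overlap ~1.0

Note that Alice's original copy is destroyed by her measurement, exactly as the no-cloning theorem requires.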

Likewise, Nature mag made a big deal out of some physics problem being undecidable. See Aaronson and Motl for their slants on it. It all sounds very profound as unknowable math has surfaced as unknowable physics. But it is really not. It is just a minor curiosity with little significance for math or physics.

Friday, December 11, 2015

German meeting on unscientific physics

Germany just held a conference uniting philosophers with non-empirical physicists. This was partially in response to a Nature article defending the integrity of physics against such nonsense.

Peter Woit discusses it, drawing these comments:
“Joe Polchinski lays out the case for string theory, and how unexpectedly successful it’s been.” http://arxiv.org/abs/1512.02477

“The public is confused because there are a host of ppl who write blogs or books who attack string theory”

Gross said he wasn’t referring to you [Peter Woit]. Heaven knows who he was referring to then…
I don't think he was referring to me either. Woit is the most prominent critic of string theory.

The big non-empirical physics topics are: string theory, the multiverse, and quantum gravity.

The philosophers are probably excited that some prominent theoretical physicists are willing to talk to them. But the philosophers are only the anti-science philosophers who deny the scientific method anyway.

Theoretical physicists must be the only scientists who complain about criticism from bloggers, or who go running to philosophers for validation.

Thursday, December 10, 2015

Not yet truly a quantum computer

I just reiterated my claim that we have no quantum computer showing a quantum speedup, and now Scott Aaronson comments on the latest hype:
As many of you will have seen by now, on Monday a team at Google put out a major paper reporting new experiments on the D-Wave 2X machine. (See also Hartmut Neven’s blog post about this.) The predictable popularized version of the results—see for example here and here—is that the D-Wave 2X has now demonstrated a factor-of-100-million speedup over standard classical chips, thereby conclusively putting to rest the question of whether the device is “truly a quantum computer.” ...

Thus, while there’s been genuine, interesting progress, it remains uncertain whether D-Wave’s approach will lead to speedups over the best known classical algorithms, ...

But to repeat: even if D-Wave makes all four of these improvements, we still have no idea whether they’ll see a true, asymptotic, Selby-resistant, encoding-resistant quantum speedup.  We just can’t say for sure that they won’t see one. ...

I still have no idea when and if we’ll have a practical, universal, fault-tolerant QC, capable of factoring 10,000-digit numbers and so on.  But it’s now looking like only a matter of years until Gil Kalai, and the other quantum computing skeptics, will be forced to admit they were wrong — which was always the main application I cared about anyway!
It is funny how he is perfectly happy spending his life on quantum computer complexity theory, when there is no proof that there is any such thing as a quantum computer. But when someone like me argues that quantum computers are impossible, then suddenly he wants research to prove me wrong.

It is almost as if he wants to work on something of no practical value, but he does not want anyone saying that it has no practical value.

Wednesday, December 9, 2015

Most profound result of quantum field theory

Amanda Gefter writes:
“Dr. Wilczek,” the defense attorney begins. “You have stated what you believe to be the single most profound result of quantum field theory. Can you repeat for the court what that is?”

The physicist leans in toward the microphone. “That two electrons are indistinguishable,” he says.

The smoking gun for indistinguishability, and a direct result of the 1-in-3 statistics, is interference. Interference betrays the secret life of the electron, explains Wilczek. On observation, we will invariably find the electron to be a corpuscular particle, but when we are not looking at it, the electron bears the properties of a wave. When two waves overlap, they interfere — adding and amplifying in the places where their phases align — peaks with peaks, troughs with troughs — and canceling and obliterating where they find themselves out of sync. These interfering waves are not physical waves undulating through a material medium, but mathematical waves called wavefunctions. Where physical waves carry energy in their amplitudes, wavefunctions carry probability. So although we never observe these waves directly, the result of their interference is easily seen in how it affects probability and the statistical outcomes of experiment. All we need to do is count.

The crucial point is that only truly identical, indistinguishable things interfere. The moment we find a way to distinguish between them — be they particles, paths, or processes — the interference vanishes, and the hidden wave suddenly appears in its particle guise. If two particles show interference, we can know with absolute certainty that they are identical. Sure enough, experiment after experiment has proven it beyond a doubt: electrons interfere. Identical they are — not for stupidity or poor eyesight but because they are deeply, profoundly, inherently indistinguishable, every last one.

This is no minor technicality. It is the core difference between the bizarre world of the quantum and the ordinary world of our experience. The indistinguishability of the electron is “what makes chemistry possible,” says Wilczek. “It’s what allows for the reproducible behavior of matter.” If electrons were distinguishable, varying continuously by minute differences, all would be chaos. It is their discrete, definite, digital nature that renders them error-tolerant in an erroneous world.
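
A toy calculation of the point Wilczek is making (my own illustration): when the two paths are indistinguishable you add amplitudes and get an interference term; when they are distinguishable you add probabilities and the term disappears.

    import numpy as np

    # Two paths with equal weight and a relative phase
    phase = np.pi / 3
    amp1 = 1 / np.sqrt(2)
    amp2 = np.exp(1j * phase) / np.sqrt(2)

    # Indistinguishable paths: add amplitudes, then square the sum
    rate_interference = abs(amp1 + amp2) ** 2        # 1 + cos(phase) = 1.5

    # Distinguishable paths: add the probabilities, no cross term
    rate_classical = abs(amp1) ** 2 + abs(amp2) ** 2  # 1.0

    print(rate_interference, rate_classical)          # relative detection rates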

Monday, December 7, 2015

No big advances in theoretical physics

Physicist Sabine Hossenfelder answers:
“Can you think of a single advancement in theoretical physics, other than speculation like Strings and Loops and Safe Gravity and Twistors, and confirming things like the Higgs Boson and pentaquarks at the LHC, since Politzer and Wilczek and Gross (and Coleman) did their thing re QCD in the early 1980's?” ...

Admittedly your question pains me considerably.

Quantum error correction, quantum logical gates, quantum computing.

Quantum cryptography.

Inflation.

Effective field theory/Renormalization group running.

Gauge-gravity duality (AdS/CFT).

"When Frank Wilczek becomes 65 in 2018, there will be no active (below normal retirement age) “fundamental” theorist with a Nobel prize, for the first time since H.A. Lorentz won the prize in 1902."
I skipped a few items outside my expertise. The renormalization group and other standard model work has been great, but a lot of the big ideas are from the 1970s.

Quantum cryptography and quantum computing are big scams. Inflation is an interesting idea, but has not really been tested.

The history books of the next millennium will say that the great ideas of physics were worked out in the XX century, or maybe 1860 to 1980. Then the field became overrun with charlatans.

SciAm writer John Horgan annoyed everyone with a 1996 book arguing that scientists had already found the big discoveries, and that new results would be disappointing. He now has a new edition bragging that he was right.

Yes, he was right. I did not believe him at the time, because I believed the hype that new experiments like the Superconducting Super Collider were going to discover new physics. Physicists are still complaining about that fiasco. But the LHC spent $10B, and only confirmed the standard model from the 1970s. It found the value of the Higgs mass, but nothing else new about it. String theorists have given up on any real physics, and are now babbling about “non-empirical theory confirmation”, whatever that is.

Attempts to prove quantum nonlocality have been a total failure. Quantum cryptography has taught us nothing new about quantum mechanics, and found no useful application to cryptography. Quantum computing has failed to convincingly demonstrate a quantum speedup or even a true qubit. Quantum gravity has never made any progress. Whole new areas like the multiverse are incoherent from the start.

The whole field of Physics is now dominated by charlatans.

Sunday, December 6, 2015

No evidence that we live in a hologram

One of the biggest advances in theoretical physics of the last 30 years is supposed to be the holographic principle, but I did not know that anyone was foolish enuf to believe that it is testable. Jennifer Ouellette (Mrs. Sean M. Carroll) writes:
A controversial experiment at Fermilab designed to hunt for signs that our universe may really be a hologram has failed to find the evidence it was seeking, the laboratory has announced.

It’s called the Holometer (short for “Holographic Interferometer”), and it’s the brainchild of Fermilab physicist Craig Hogan. He dreamed up the idea in 2009 as a way to test the so-called holographic principle.

Back in the 1970s, a physicist named Jacob Bekenstein showed that the information about a black hole’s interior is encoded on its two-dimensional surface area (the “boundary”) rather than within its three-dimensional volume (the “bulk”). Twenty years later, Leonard Susskind and Gerard ‘t Hooft extended this notion to the entire universe, likening it to a hologram: our three-dimensional universe in all its glory emerges from a two-dimensional “source code.” New York Times reporter Dennis Overbye has likened the holographic concept to a can of soup. All the “stuff” of the universe, including human beings, makes up the “soup” inside the can, but all the information describing that stuff is inscribed on the label on the outside boundary. ...

The holographic principle has since become one of the most influential ideas in theoretical physics, yet many believe it to be untestable, at least for now. (It would require probing black holes up-close, a daunting prospect even if we had the technology to do so.) Hogan decided to try anyway.
I don't want to blame someone for doing an experiment, but was this stuff ever meant to be taken seriously?

The holographic principle is just some silly conjectural mathematical property of some theories that have some hypothetical relation to black hole boundaries, but no real relation to the real world.
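
For reference, the area claim in the quote above traces to the Bekenstein-Hawking entropy formula S = k A c^3 / (4 G hbar); a back-of-the-envelope sketch for a solar-mass black hole, using standard constants:

    from scipy.constants import c, G, hbar, k, pi

    M_sun = 1.989e30                      # kg
    r_s = 2 * G * M_sun / c**2            # Schwarzschild radius, ~3 km
    A = 4 * pi * r_s**2                   # horizon area
    S = k * A * c**3 / (4 * G * hbar)     # ~1.4e54 J/K
    print(f"r_s = {r_s:.0f} m, S = {S:.2e} J/K")

The entropy scales with the horizon area, not the enclosed volume; that is the whole content of the "boundary" claim.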

Friday, December 4, 2015

Negative progress in quantum mechanics

I have defended textbook quantum mechanics against various critics, and now Lubos Motl writes:
Most fields of the human activity have seen persistent progress. But it's remarkable to see how much negative progress has occurred in the recent 85 if not 90 years in the field of "writing about the foundations of quantum mechanics". In 1930, people had folks like Heisenberg who had actually discovered the totally new foundations of physics, knew how to avoid all the traps and possible mistakes, and what they were, and they just presented the new theory in the no-nonsense way. Today we have tons of Deutsches, Wallaces, Puseys, Rudolphs, Barretts, Hsus who are sloppy all the time, who are dogmatic about things that are unsupported or directly contradict the evidence, and who are deliberately obfuscating some points in order to mask the incoherence of their message and indefensibility of the claim that quantum mechanics needs an "addition" and the universal postulates of quantum mechanics that materialized out of the Copenhagen spirit have to be replaced by one of their incoherent new sloppy irrational pictures that are designed to return physics to the era of classical physics, a goal that obviously can never succeed.
I agree with this. Some comments have claimed that the views of Bohr and Heisenberg are indefensible, but Lumo quotes Heisenberg:
However, all the opponents of the Copenhagen interpretation do agree on one point. It would, in their view, be desirable to return to the reality concept of classical physics or, to use a more general philosophic term, to the ontology of materialism. They would prefer to come back to the idea of an objective real world whose smallest parts exist objectively in the same sense as stones or trees exist, independently of whether or not we observe them.

This, however, is impossible or at least not entirely possible because of the nature of the atomic phenomena, as has been discussed in some of the earlier chapters. It cannot be our task to formulate wishes as to how the atomic phenomena should be; our task can only be to understand them.
So he clearly understood what was wrong with the Bohm-Bell school of physics that confuses people endlessly.

Here is a new paper on Bohr's views.

I am not saying that Bohr and Heisenberg got everything right, but we have had negative progress. Reputable physicists and journals say silly things about QM.

Meanwhile Scott Aaronson is speaking at an IBM conference on "ThinkQ 2015 - Challenges and applications for medium size quantum computers". The first thing he says in his slides is:
Can forget temporarily about practical applications of QC: the more immediate goal is just to show a clear quantum speedup for anything
You read that right. There are no practical applications on the horizon. They are desperately trying to show that it is possible for a quantum computer to have some sort of quantum speedup. So far, they have failed. Too bad Bohr and Heisenberg are no longer around to explain to them why they are failing.

Tuesday, December 1, 2015

Bell's beables are failed hidden variables

A persistent reader disputes my explanation of Bell's Theorem. I started out picking on a book by a SciAm writer, with the author defending his book, but then we got into Bell details. See SciAm book promotes spooky action, Explaining the EPR paradox, Shimony's opinion of Bell's Theorem, and The Bell theorem hypotheses.

I think that I have followed what Bell wrote, and how it is explained in Wikipedia, textbooks, and other references.

Apparently others have had the exact same dispute that I have had with the anonymous commenter. Travis Norsen, a coauthor of the Scholarpedia article I cited previously, writes in a 2008 paper:
J.S. Bell believed that his famous theorem entailed a deep and troubling conflict between the empirically verified predictions of quantum theory and the notion of local causality that is motivated by relativity theory. Yet many physicists continue to accept, usually on the reports of textbook writers and other commentators, that Bell's own view was wrong, and that, in fact, the theorem only brings out a conflict with determinism or the hidden-variables program or realism or some other such principle that (unlike local causality), allegedly, nobody should have believed anyway. ... Here we try to shed some light on the situation ...
Yes, I am with those who say that Bell's theorem only presents a conflict with hidden variables or counterfactual definiteness. Others say that the theorem is stronger.
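
The inequality itself is easy to state concretely. Here is a small sketch (the standard CHSH form, my own numbers) comparing the quantum singlet correlation E(a,b) = -cos(a-b) with the best that any assignment of predetermined +/-1 outcomes can do:

    import numpy as np
    from itertools import product

    # Quantum prediction for the singlet state at the standard CHSH angles
    a, a2 = 0.0, np.pi / 2
    b, b2 = np.pi / 4, 3 * np.pi / 4
    E = lambda x, y: -np.cos(x - y)
    S_qm = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S_qm))      # 2*sqrt(2) ~ 2.83

    # Predetermined (hidden-variable) outcomes: every assignment obeys |S| <= 2
    S_hv = max(abs(A * B - A * B2 + A2 * B + A2 * B2)
               for A, A2, B, B2 in product([-1, 1], repeat=4))
    print(S_hv)           # 2

Whether that gap indicts locality or just the hidden-variable assumption is exactly the dispute discussed below.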

Norsen relies directly on Bell:
Here is how Bell responded to this first class of disagreement:
“My own first paper on this subject starts with a summary of the EPR argument from locality to deterministic hidden variables. But the commentators have almost universally reported that it begins with deterministic hidden variables.” (Bell, 1981, p.157) ...
Bell’s fullest and evidently most-considered discussion of local causality occurs in his last published paper, La nouvelle cuisine (1990, 232-248). We will here essentially follow that discussion, supplementing it occasionally with things from his earlier papers.

Bell first introduces what he calls the “Principle of local causality” as follows: “The direct causes (and effects) of events are near by, and even the indirect causes (and effects) are no further away than permitted by the velocity of light.” Then, referencing what has been reproduced here as Figure 1, Bell elaborates: “Thus, for events in a space-time region 1 ... we would look for causes in the backward light cone, and for effects in the future light cone. In a region like 2, space-like separated from 1, we would seek neither causes nor effects of events in 1. Of course this does not mean that events in 1 and 2 might not be correlated...” (1990, p. 239)

After remarking that this formulation “is not yet sufficiently sharp and clean for mathematics,” Bell then proposes the following version, referencing what has been reproduced here as Figure 2:
“A theory will be said to be locally causal if the probabilities attached to values of local beables in a space-time region 1 are unaltered by specification of values of local beables in a space-like separated region 2, when what happens in the backward light cone of 1 is already sufficiently specified, for example by a full specification of local beables in a spacetime region 3...” (1990, 239-40)
No, his first definition in terms of light cones is much cleaner and sharper for mathematical analysis. That is the definition used in Maxwell's theory of electromagnetism, in quantum field theory, and in every other relativistic theory.

What the heck are "beables", and how can anyone be sure about the probabilities?

Norsen drafted a Wikipedia article on beables in 2010:
The word beable was introduced by the physicist John Stewart Bell in his article entitled "The theory of local beables" (see Speakable and Unspeakable in Quantum Mechanics, pg. 52). A beable of a physical theory is an object that, according to that theory, is supposed to correspond to an element of physical reality. The word "beable" (be-able) contrasts with the word "observable". While the value of an observable can be produced by a complex interaction of a physical system with a given experimental apparatus (and not be associated to any "intrinsic property" of the physical system), a beable exists objectively, independently of observation. For instance, it can be proven that there exists no physical theory, consistent with the predictions of quantum theory, in which all observables of quantum theory (i.e., all self-adjoint operators on the Hilbert space of quantum states) are beables.

While, in a given theory, an observable does not have to correspond to any beable, the result of the "measurement" of an observable that has actually been carried out in some experiment is physically real (it is represented, say, by the position of a pointer) and must be stored in some beable of the theory.
So a beable is just Bell's notion of a hidden variable. It is not an observable, but somehow represents someone's opinion about what ought to be real.

The mainstream interpretations of quantum mechanics say that the set of observables are what is important and real. Bell rejects this, and says that some other form of hidden variables must be what is real.

Bell also focuses on probability, as if that is something real. It is not, as I have explained here and elsewhere. It is not any more essential to quantum mechanics than to any other theory. It is a mathematical device for relating theories to the world, but it is not directly observable.

Thus when Bell defines causality in terms of beables, he is squarely and directly making an assumption about hidden variables. And that assumption contradicts the postulates, mathematical formulation, and spirit of QM.

Norsen himself is squarely in Bell's camp, as he proposed this rewrite of the Wikipedia article on Bell's theorem:
Bell's Theorem is a mathematical theorem first demonstrated by J.S. Bell in his 1964 paper "On the Einstein-Podolsky-Rosen paradox". The theorem establishes that any physical theory respecting a precisely-formulated locality (or "local causality") condition will make predictions, for a certain class of experiments, that are constrained by a so-called Bell Inequality. Since the particular theory called Quantum Mechanics makes predictions which violate this constraint, one can also think of Bell's Theorem as a proof that there is an inconsistency between (i) local causality and (ii) a certain set of QM's empirical predictions. ...

Bell's own interpretation of his theorem, however, is not widely accepted among physicists in general. Some physicists who have studied Bell's Theorem carefully point to alleged flaws or hidden assumptions in Bell's formulation of local causality and/or his derivation of (what Bell called) the Locality Inequality therefrom; such claims are controversial and will be addressed below. But most physicists fail to agree with Bell's statement above not because they think there is some flaw in the reasoning leading to it, but rather because what they have learned about Bell's Theorem (from textbooks and other sources) radically distorts the subject.
This was apparently rejected because the Wikipedia editors agree with the textbook explanation of Bell's theorem, and do not accept Bell's own interpretation.

The core teachings of QM say that an electron is observed as a particle, but is not really the sort of classical particle that has a precise position and momentum at the same time. The Einstein-Bohm-Bell types refuse to accept this. They also refuse a positivist view that allows us to be silent about what cannot be measured. Instead they want to pretend to have values for that, and call them beables or hidden variable or reality or whatever sounds good. The consensus since 1930 has been that this approach does not work.

My critic says:
You seem to scrupulously avoid any cognitive engagement with the actual subject.
I thought I did, but I guess he means that I avoid beables.

This is like discussing the twin paradox or some other subtle point in relativity theory, and someone objecting, "But what is the true time? You seem to avoid discussing what the real time is!"

Relativity teaches that different observers measure time differently. Defining some sort of universal real time is usually not helpful. Likewise, defining beables as the hypothetical result of unperformed measurements is usually not helpful either.

I remember taking a high school science class, and being told the history of the debate over whether light was a particle or a wave, with the arguments for each side. At the end of the course, I had a dissatisfied feeling because the teacher never told us which it was. I thought that maybe I skipped class that day, because surely one side was right and one was wrong.

No, nature does not always match our preconceptions. You have to let go of the idea that light has to match some intuitive classical model for a moving object. Relativity teaches us how clocks behave, but not what time really is. Quantum mechanics teaches us how to make and predict measurements of electrons and photons, but not what they really are.

If you want to understand electrons and photons in terms of classical joint probabilities of beables, then you will be disappointed, because nature does not work that way.

There is nothing in this Einstein-Bohm-Bell analysis but a failed attempt to prove QM wrong. The physicists who did the early Bell test experiments were convinced that they would win Nobel prizes for disproving QM. Instead they just confirmed what everyone thought in 1930.

Saturday, November 28, 2015

Quantum gravity looks for a miracle

Peter Woit recommends: Lance Dixon Explains Quantum Gravity
What questions do researchers hope to answer with quantum gravity?

Quantum gravity could help us answer important questions about the universe.

For example, quantum effects play a role near black holes ...

Researchers also hope to better understand the very first moments after the Big Bang, ...

One has to realize, though, that processes on Earth occur at much smaller energy scales, with unmeasurably small quantum corrections to gravity. With the LHC, for instance, we can reach energies that are a million billion times smaller than the Planck scale. Therefore, quantum gravity studies are mostly “thought experiments,” ...

What would be a breakthrough in the field?

It would be very interesting if someone miraculously found a theory that we could use to consistently predict quantum gravitational effects to much higher orders than possible today.
As you can infer, quantum gravity has no relation to observational science. All they are doing is thought experiments, and trying to do mathematical calculations that are consistent with their prejudices about the world. That supposed consistency is all they care about, as there is no observation or measurement that will ever have any bearing on what they do.

Scott Aaronson writes about partying with his fellow rationalists, and adds:
he just wanted to grill me all evening about physics and math and epistemology. Having recently read this Nature News article by Ron Cowen, he kept asking me things like: “you say that in quantum gravity, spacetime itself is supposed to dissolve into some sort of network of qubits. Well then, how does each qubit know which other qubits it’s supposed to be connected to? Are there additional qubits to specify the connectivity pattern? If so, then doesn’t that cause an infinite regress?” I handwaved something about AdS/CFT, where a dynamic spacetime is supposed to emerge from an ordinary quantum theory on a fixed background specified in advance. But I added that, in some sense, he had rediscovered the whole problem of quantum gravity that’s confused everyone for almost a century: if quantum mechanics presupposes a causal structure on the qubits or whatever other objects it talks about, then how do you write down a quantum theory of the causal structures themselves?
These are all unanswerable questions, of course. There is no such thing as a network of qubits. A qubit is an idealization that could be used to build quantum computers, if such a thing exists and can be scaled up. But replacing spacetime with qubits is just a stupid quantum gravity pipe dream that has no bearing on reality.

The Nature article says:
A successful unification of quantum mechanics and gravity has eluded physicists for nearly a century. Quantum mechanics governs the world of the small — the weird realm in which an atom or particle can be in many places at the same time, and can simultaneously spin both clockwise and anticlockwise. Gravity governs the Universe at large — from the fall of an apple to the motion of planets, stars and galaxies — and is described by Albert Einstein’s general theory of relativity, announced 100 years ago this month. The theory holds that gravity is geometry: particles are deflected when they pass near a massive object not because they feel a force, said Einstein, but because space and time around the object are curved.

Both theories have been abundantly verified through experiment, yet the realities they describe seem utterly incompatible. And from the editors’ standpoint, Van Raamsdonk’s approach to resolving this incompatibility was  strange. All that’s needed, he asserted, is ‘entanglement’: the phenomenon that many physicists believe to be the ultimate in quantum weirdness. Entanglement lets the measurement of one particle instantaneously determine the state of a partner particle, no matter how far away it may be — even on the other side of the Milky Way.
No, measuring a particle has no effect on a partner on the other side of the galaxy. It appears that people are so spooked by entanglement that you can babble nonsense and be taken seriously by a leading science journal.

It is not true that there is any incompatibility between relativity and quantum mechanics, in any practical sense. The supposed problems are in black holes or at the big bang, where much of what we know breaks down for other reasons. There is no foreseeable way for a scientific resolution of what the quantum gravity folks want.

Here is Aaronson's example of a test for quantum gravity:
For example, if you could engineer a black hole to extreme precision (knowing the complete quantum state of the infalling matter), then wait ~10^67 years for the hole to evaporate, collecting all the outgoing Hawking radiation and routing it into your quantum computer for analysis, then it’s a prediction of most quantum theories of gravity that you’d observe the radiation to encode the state of the infalling matter (in highly scrambled form), and the precise way in which the state was scrambled might let you differentiate one theory of pre-spacetime qubits at the Planck scale from another one. (Note, however, that some experiments would also require jumping into the black hole as a second step, in which case you couldn’t communicate the results to anyone else who didn’t jump into the hole with you.)
This is science fiction from beginning to end. One rarely even knows the complete quantum state for a single particle, and never for anything complicated.
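
For scale, the ~10^67-year figure in the quote is the standard Hawking evaporation estimate, t ~ 5120 pi G^2 M^3 / (hbar c^4); a quick check for a solar-mass hole, using standard constants:

    from scipy.constants import G, hbar, c, pi

    M_sun = 1.989e30                                           # kg
    t_evap = 5120 * pi * G ** 2 * M_sun ** 3 / (hbar * c ** 4)  # seconds
    print(f"{t_evap / 3.156e7:.1e} years")                     # ~2e67 years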

Van Raamsdonk endorses the wacky idea that entanglement and wormholes are the same thing, on a different scale.

The rationalists (such as the Less Wrong crowd) grapple with their own futuristic compatibility problem. They are convinced that AI robots will take over the Earth, and then will have no use for illogical humans. So they are determined to improve the logical functioning of humans so that they will be able to coexist with the machine super-intelligence. I think that the robots will exterminate us when they discover that our leading scientists believe that detecting a particle can have an instantaneous effect on the other side of the galaxy.

Thursday, November 26, 2015

NY Times celebrates general relativity centennial

NY Times editor and Einstein biographer Dennis Overbye writes on the general relativity centennial:
This is the general theory of relativity. It’s a standard trope in science writing to say that some theory or experiment transformed our understanding of space and time. General relativity really did. ...

Hardly anybody would be more surprised by all this than Einstein himself. The space-time he conjured turned out to be far more frisky than he had bargained for back in 1907. ...

Gravity was not a force transmitted across space-time like magnetism; it was the geometry of that space-time itself that kept the planets in their orbits and apples falling.

It would take him another eight difficult years to figure out just how this elastic space-time would work, during which he went from Bern to Prague to Zurich and then to a prestigious post in Berlin.
He makes it sound as if Einstein invented space-time geometry in 1907, and spent 8 years figuring out how to interpret gravity as geometry.

Physics books often say this also, but it is wrong. Poincare discovered the spacetime geometry in 1905, and was the first to apply it to gravity. As I wrote last month:
Many people assume that Einstein was a leader in this movement, but he was not. When Minkowski popularized the spacetime geometry in 1908, Einstein rejected it. When Grossmann figured out to geometrize gravity with the Ricci tensor, Einstein wrote papers in 1914 saying that covariance is impossible. When a relativity textbook described the geometrization of gravity, Einstein attacked it as wrongheaded.
Einstein did make some contributions to the theory, but he should only be credited for what he did.

I previously criticized Overbye for the way he credited Einstein for special relativity:
But in Einstein's formulation did objects actually shrink? In a way, the message of relativity theory was that physics was not about real objects, rather, it concerned the measurement of real objects. ...

No such declaration of grandeur, of course, intruded on the flat and somewhat brisk tone of the paper. Albert simply presented his argument and in many cases left it to the reader to fill in the gaps and to realize the implications.
A more complete quote is here. That is a nice statement of the message of relativity, but it was Poincare's view, and not Einstein's.

Monday, November 23, 2015

Extraordinary pages in the history of thought

A SciAm blogger writes:
“The treatise itself, therefore, contains only twenty-four pages — the most extraordinary two dozen pages in the whole history of thought!”


“How different with Bolyai János and Lobachévski, who claimed at once, unflinchingly, that their discovery marked an epoch in human thought so momentous as to be unsurpassed by anything recorded in the history of philosophy or of science, demonstrating as had never been proved before the supremacy of pure reason at the very moment of overthrowing what had forever seemed its surest possession, the axioms of geometry.”

— George Bruce Halsted, on János Bolyai’s treatise on non-Euclidean geometry, The Science of Absolute Space.
Bolyai's non-Euclidean geometry was published in 1832.

Another geometry book says:
“The discoverers of noneuclidean geometry fared somewhat like the biblical king Saul. Saul was looking for some donkeys and found a kingdom. The mathematicians wanted merely to pick a hole in old Euclid and show that one of his postulates which he thought was not deducible from the others is, in fact, so deducible. In this they failed. But they found a new world, a geometry in which there are infinitely many lines parallel to a given line and passing through a given point; in which the sum of the angles in a triangle is less than two right angles; and which is nevertheless free of contradiction.”
The blog concludes:
These quotes all seem a bit dramatic for geometry, but it’s easy not to know how truly revolutionary the discovery of non-Euclidean geometry was. Halsted’s description of Bolyai’s paper as “the most extraordinary two dozen pages in the whole history of thought” certainly sounds hyperbolic, but can you find 24 other pages that compete with it?
The creation of non-Euclidean geometry by Bolyai, Gauss, and others was indeed a great vindication of geometry and the axiomatic method. It later became central to 20th century physics when Poincare and Minkowski created a theory of relativity based on it.

Wednesday, November 18, 2015

Carroll's big picture

Physicist Sean M. Carroll has a new book to be released next year, The Big Picture: On the Origins of Life, Meaning, and the Universe Itself:
This is Sean Carroll's most ambitious book yet: how the deep laws of nature connect to our everyday lives. He has taken us From Eternity to Here and to The Particle at the End of the Universe. Now for The Big Picture. This is a book that will stand on the shelf alongside the great humanist thinkers from Stephen Hawking and Carl Sagan to Daniel Dennett and E.O. Wilson. It is a new synthesis of science and the biggest questions humans ask about life, death, and where we are in the cosmos.

Readers learn the difference between how the world works at the quantum level, the cosmic level, and the human level; the emergence of causes from underlying laws; and ultimately how human values relate to scientific reality. This tour of, well, everything explains the principles that have guided the scientific revolution from Darwin and Einstein to the origins of life, consciousness, and the universe, but it also shows how an avalanche of discoveries over the past few hundred years has changed the world for us and what we think really matters. As Carroll eloquently demonstrates, our lives are dwarfed by the immensity of the universe and redeemed by our capacity to comprehend it and give it meaning.
Keep in mind that this is a guy who believes in the many-worlds interpretation (MWI) of quantum mechanics, where your every fantasy is really being played out in some parallel universe.

Update: Publication delayed until May 10, 2016.

Monday, November 16, 2015

Philosopher says scientism leads to leftism

A reader sends an example of a philosopher who supposedly has a scientific outlook. In this podcast, Alex Rosenberg trashes Economics as not being a science. He notes that while economists brag about being able to make certain types of predictions, they are usually completely unable to make any quantitative predictions with any reliability.

Rosenberg has a book on The Atheist's Guide to Reality: Enjoying Life without Illusions.

One chapter is titled "Never let your consious be your guide". I am not sure if he misspelled "conscience", or he is making a pun, or what. Either way, he says "science provides clear-cut answers": You have no mind, you have no free will, and you have no capacity for introspection, rational decision, or moral judgment.

In this interview, he proudly declares his devotion to Scientism, even tho some consider it a pejorative term. His true science is Physics, and he says he believes in bosons and fermions, and nothing more. But he never actually cites any scientific facts or theories to support any of his opinions. He shows no sign of knowing anything about Physics, except that particles are classified into bosons and fermions.

Even saying that everything is made of bosons and fermions is a little misleading. We have spacetime and quantum fields. Even the quantum vacuum is a nontrivial thing, and I would not say it is just made of bosons and fermions.

After nearly an hour of babbling about the merits of philosophical analysis, he gets to another subject that passes his scientific requirements: political leftism. That's right, the whole thing is just a build-up for a bunch of left-wing political opinions.

He denies that there are any natural rights, but strongly argues for a "core morality". This makes no sense to me.

He says he is "pro-choice" (ie, pro-abortion) but denies that people have any choices because his scientism implies a causal determinism that eliminates free will. So why is he in favor of free choice when there is no such thing?

He says that his scientism implies that we should be soft on crime, because it is inhumane to punish people for doing what they were pre-programmed to do. He also says it is wrong to reward people for hard work or successful business, because again, they are just doing what they were programmed to do. He concedes that we might want a market economy, because science shows that it works better, even tho economics is not a science. But people should never be allowed to get rich, because science proves that they will never deserve the riches.

This is all nonsense. It all rests on Laplacian determinism, but there is no scientific proof of that, and the great majority of physicists today reject it. Even if it were true, his reasoning doesn't even make any sense. He is like a Marxist who advocates pseudo-science with great certainty to fellow humanities professors who like the leftist conclusions and are too innumerate to see the fallacies.

A book review says:
Science – i.e. common sense – tells us that atheism is pretty much a certainty. The reason is quite straightforward. The second law of thermodynamics tells us that disorder and homogeneity steadily increase everywhere in the universe. Whatever physics has left to tell us, it almost certainly won’t contradict this fundamental law. But a purposeful agent, arranging things according to a conscious plan, would be transforming disorder to order. And this is never possible according to the second law (strictly speaking, it is possible, but so improbable as to be ruled out). This rules out most conceptions of God straight away. ...

Vividly and painstakingly, Rosenberg undermines our fundamental belief that we consciously direct our actions. He argues that it is impossible that any of the processes occurring in our brain can have representative content. They can’t be about anything at all. ... This means that our actions can’t, however much they seem to, be caused by thoughts about intended results, or indeed thoughts about anything at all. ...

Despite appearances, then, we aren’t capable of ordering the world by conscious design. In fact, our actions are never caused by conscious intentions. ...

Human life as a whole has no meaning, no goal, and no purpose. Neither our individual lives nor human history in general are governed by purposes – not our own, and not anybody else’s. ... Indeed, there is no ‘you’; the self is another one of those convenient fictions we use to roughly indicate blind processes that have no subjective centre.

The same goes for human history. All apparent progress – the spread of democracy or the global progression towards prosperity – are just local equilibria in a blind arms-race driven by blind selection.
This is neither science nor common sense. God could have created the universe in a very low entropy big bang, and intended that life evolve on Earth while entropy increases in the Sun.

Of course our actions are caused by conscious intentions. And human history has made progress.

I would rather stick to the science, but he does not have much science here, if any. What he has is philosophical nuttiness directed towards some sort of politically leftist nihilism.

Sunday, November 15, 2015

Where are the bits in the brain?

Information is reducible to bits, right? Zeros and ones. Maybe the quantum computer enthusiasts add qubits, and the Bell spooks might say that non-bit information can travel faster than light.

Brain scientists do not buy into this quantum stuff, so I thought that they believed in bits. But apparently not:
Most neuroscientists believe that the brain learns by rewiring itself—by changing the strength of connections between brain cells, or neurons. But experimental results published last year, from a lab at Lund University in Sweden, hint that we need to change our approach. They suggest the brain learns in a way more analogous to that of a computer: It encodes information into molecules inside neurons and reads out that information for use in computational operations. ...

From a computational point of view, directions and distances are just numbers. ...

Neuroscientists have not come to terms with this truth. I have repeatedly asked roomfuls of my colleagues, first, whether they believe that the brain stores information by changing synaptic connections — they all say, yes — and then how the brain might store a number in an altered pattern of synaptic connections. They are stumped, or refuse to answer.
I guess not much is known about how the brain stores information.

Saturday, November 14, 2015

The Bell theorem hypotheses

A commenter disputes my contention that Bell's Theorem depends on an assumption of local hidden variables.

This may seem like an obscure technical point, but it helps explain differing views about Bell's Theorem:
Cornell solid-state physicist David Mermin has described the appraisals of the importance of Bell's theorem in the physics community as ranging from "indifference" to "wild extravagance".
There has been a consensus since about 1930 that hidden variable theory is contrary to the core of quantum mechanics. So another experiment confirming Bell's Theorem is just another affirmation of the legitimacy of those Nobel prizes in 1932 and 1933.

That explains the indifference. So the Bell spooks need a better argument to justify their wild extravagance.

Here is one such attempt, from Wikipedia:
Whereas Bell's paper deals only with deterministic hidden variable theories, Bell's theorem was later generalized to stochastic theories[18] as well, and it was also realised[19] that the theorem is not so much about hidden variables, as about the outcomes of measurements that could have been taken instead of the one actually taken. Existence of these variables is called the assumption of realism, or the assumption of counterfactual definiteness.
This is just a sneaky way of defining hidden variables. Quantum mechanics teaches that you cannot precisely measure position and momentum at the same time. If you assume that particles somehow have precise values for those in the absence of a measurement, then you are in the land of anti-QM hidden variables. If there were any merit to this, then much of XX century physics would be wrong.
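For reference, the textbook statement of that limit is the Heisenberg uncertainty relation:

\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}

No quantum state makes both uncertainties arbitrarily small, so a hidden variable that assigns exact values to both is already going beyond anything quantum mechanics describes.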

John Bell himself later tried to fudge his theorem by pretending to be able to deduce hidden variables from other hypotheses:
Already at the time Bell wrote this, there was a tendency for critics to miss the crucial role of the EPR argument here. The conclusion is not just that some special class of local theories (namely, those which explain the measurement outcomes in terms of pre-existing values) are incompatible with the predictions of quantum theory (which is what follows from Bell's inequality theorem alone), but that local theories as such (whether deterministic or not, whether positing hidden variables or not, etc.) are incompatible with the predictions of quantum theory. This confusion has persisted in more recent decades, so perhaps it is worth emphasizing the point by (again) quoting from Bell's pointed footnote from the same 1980 paper quoted just above: "My own first paper on this subject ... starts with a summary of the EPR argument from locality to deterministic hidden variables. But the commentators have almost universally reported that it begins with deterministic hidden variables."
The commentators are correct. The argument does begin with hidden variables.

In my view, the core of the problem here is that some physicists and philosophers refuse to buy into the positivist philosophy that comes with quantum mechanics. Quantum mechanics is all about making predictions for observables, but some people, from Einstein to today, insist on trying to give values for hypothetical things that can never be observed.

The comment asks:
Could you tell me what you mean by "deterministic and separable", and in particular could you tell me how your concept of "deterministic and separable" can account for the perfect anti-correlation without hidden variables?

By separable, I mean that if two objects are separated by a space-like 4-vector, then one can have no influence over the other. This is local causality, and says that physical causes act within light cones. It has been one of the main 3 or 4 principles of physics since about 1860. It is the core of relativity theory.
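In formulas, with Δt and Δx the time and space separation between two events (standard special relativity, stated here only for concreteness):

s^2 \;=\; c^2 \Delta t^2 - |\Delta \vec{x}|^2, \qquad s^2 < 0 \ \text{(space-like)} \;\Longrightarrow\; \text{no signal at speed} \le c \text{ connects the events.}

Separability in this sense just says that physical influences respect the light cones.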

Determinism is a dubious concept. It has no scientific meaning, as there is no experiment to confirm or deny it. It has never been taught in textbooks, and few physicists have believed in it, with Einstein being a notable exception.

If determinism were true, then you could prepare two identical Potassium-40 atoms, wait about a billion years, and watch them both decay at the same time. But it is hopelessly impossible to prepare identical atoms, so this experiment cannot be done.
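Here is a toy version of that thought experiment, assuming the textbook quantum picture in which each nucleus decays at an independent, exponentially distributed time with potassium-40's roughly 1.25-billion-year half-life. It is only an illustration of the statistics, not a doable experiment.

# Two "identical" K-40 atoms under the standard quantum picture:
# independent exponential decay times almost never coincide.
import random, math

half_life = 1.25e9                  # years
tau = half_life / math.log(2)       # mean lifetime, years
window = 1.0 / 3.156e7              # "same time" = within one second, in years

random.seed(0)
trials = 1_000_000
coincidences = sum(
    abs(random.expovariate(1/tau) - random.expovariate(1/tau)) < window
    for _ in range(trials)
)
print(coincidences)   # essentially always 0

If determinism were true and the atoms really were identical, they would decay together; under the standard picture they essentially never do. But as noted, nobody can actually prepare the identical atoms.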

Some interpretations of quantum mechanics claim to be deterministic and some claim the opposite, and they all claim to be consistent with all the experiments. So this notion of determinism does not seem to tell us anything about quantum mechanics.

As I wrote last year:
If determinism means completely defined by observables or hidden variables obeying local differential equations, then quantum mechanics and chaos theory show it to be impossible.
So I am not pushing determinism. It is an ill-defined and unscientific concept.

If some EPR-like process emits two equal and opposite electrons, then maybe those electrons are physically determined by the emission process. Or maybe they are intrinsically stochastic and have aspects that are not determined until a measurement. Or maybe it is all determined by events pre-dating the emission. I do not think that these distinctions make any mathematical or scientific sense. You can believe in any of these as you please, and positivists like myself will be indifferent.
The comment continues:
When the experimenters get together later to compare their results, they make an astounding discovery: Every time the two experimenters happened to measure a pair of entangled electrons along the same direction, they ALWAYS got opposite results (one UP and one DOWN), and whenever they measured in different directions they got the same result (both UP or both DOWN) 3/4 of the time.
What makes this astounding is trying to model the electron spin by assuming that there is some underlying classical spin to explain it all. That means that there is always a quantitative value for that spin, or some related hidden variable, that defines the reality of the electron. Some people call this "realism", but it is more than that. It is identifying the electron with the outcomes of potential measurements that are never made.

Everything we know about electrons says that this is impossible. You can measure position, and mess up the momentum. You can measure the X-spin, and mess up the Y-spin. And we can only make these measurements by forcing the electron into special traps, thereby making obvious changes to the electron.
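To make the failure concrete, here is a small enumeration for the Mermin-style version of the experiment, with three measurement directions 120 degrees apart, where quantum mechanics predicts "same" results 3/4 of the time at unequal settings. It is a sketch of the standard counting argument, nothing more.

# Local hidden variables for the Mermin version: each electron carries a
# predetermined UP/DOWN answer for each of three directions, and its partner
# carries the opposite answers (which reproduces the perfect anti-correlation
# at equal settings). How often can the pair agree at unequal settings?
from itertools import product

best = 0.0
for a in product([+1, -1], repeat=3):        # one electron's predetermined answers
    b = tuple(-x for x in a)                 # the partner's answers are opposite
    same = [a[i] == b[j] for i in range(3) for j in range(3) if i != j]
    best = max(best, sum(same) / len(same))  # fraction of "same" results

print(best)   # 2/3 at most, versus the quantum (and observed) 3/4

Every such assignment gets the equal-setting anti-correlation right, but none gets above 2/3 agreement at unequal settings, and that gap is what the experiments test.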

Thus, contrary to widespread beliefs, Bell's Theorem and its experimental tests say nothing about locality or determinism or randomness. They only rule out some hidden variable theories that everyone (but Einstein) thought were ruled out in 1930.

I am not saying anything radical or unusual here. This is just textbook quantum mechanics. Those textbooks do not say that Bell proves nonlocality or indeterminism. I don't think so, anyway. That is why so many physicists are indifferent to the subject, and no Nobel prize has been given for this work. It is just 1930s physics with some wrong philosophical conclusions.

Lubos Motl just posted a rant against a BBC documentary:
I have just watched the first of the two episodes of the 2014 BBC Four documentary The Secrets of Quantum Physics. ...

Needless to say, after having said that Einstein's view has been conclusively disproven, Al-Khalili says that he believes in Einstein's view, anyway. Hilarious. Sorry, Mr Al-Khalili, but you just violate the basic rules of logic in the most obvious way. You should really reduce the consumption of drugs.

Before I watched the full episode, and even in 1/2 of it or so, I wanted to believe that it would be a documentary on quantum mechanics that avoids the complete idiocy and pseudoscience. Sadly, my optimism was misplaced. This is another excellent representative of the anti-quantum flapdoodle that is being spread by almost all conceivable outlets of the mass culture.
I haven't watched it, but it sounds like a lot of other popular accounts of quantum mechanics. They get the early history pretty well, and then they tell how Einstein and Bell did not believe it, and tried to prove QM wrong. Then the experiments proved QM right and Einstein and Bell wrong. But then they end up with some crazy conclusions that do not make any sense.

Update: There is some discussion in the comments below about whether Bell assumes hidden variables, and when he leaves Bohr's positivistic view of quantum mechanics as a possibility. You can see for yourself in Bell's 1981 socks paper, which is here, here, and here. He uses the letter lambda for the hidden variables.

Friday, November 13, 2015

Teen wins prize for relativity video

USA Today reports:
Meet Ryan Chester, 18, whose quirky video explaining Albert Einstein’s mind-bending Special Theory of Relativity won the inaugural Breakthrough Junior Challenge Sunday. The prize nets the North Royalton (Ohio) High School senior $250,000 toward college; $50,000 for Rick Nestoff, the physics teacher that inspired him; and $100,000 for a new school science lab.

Parents, take note: That's not too shabby a haul in exchange for a zippy 7-minute clip.
It is a slick video, and it follows closely a common textbook explanation in terms of Einstein's postulates (ie, Poincare's relativity principle and Lorentz's constancy of the speed of light), the Michelson-Morley experiment, and spacetime without an aether.
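For the curious, here are the rough numbers behind those ingredients, using the usual textbook figures for the 1887 Michelson-Morley apparatus (the effective path length and wavelength below are stock values assumed for illustration):

# Earth's orbital speed vs. light, and the fringe shift Michelson and Morley
# expected to see if there were an aether wind (approximate figures).
v = 3.0e4      # Earth's orbital speed, m/s
c = 3.0e8      # speed of light, m/s
L = 11.0       # effective arm length, m
lam = 5.0e-7   # wavelength of the light, m

gamma = (1 - (v/c)**2) ** -0.5
fringe_shift = 2 * L * (v/c)**2 / lam

print(gamma - 1)      # about 5e-9: relativistic effects at orbital speed are tiny
print(fringe_shift)   # about 0.4 of a fringe expected; none was seen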

Thursday, November 12, 2015

Science writers on finding truth

SciAm writer John Horgan reviews the book of fellow science writer George Johnson, contrasting it with his own. They also discuss it in this Bloggingheads video.

I think that the XX century will always be considered the greatest for finding fundamental enduring truths, and Horgan has a similar view:
In 1965 Richard Feynman, prescient as always, prophesied that science would reach this impasse. "The age in which we live is the age in which we are discovering the fundamental laws of nature, and that day will never come again." After the great truths are revealed, Feynman continued, "there will be a degeneration of ideas, just like the degeneration that great explorers feel is occurring when tourists begin moving in on a new territory."
They agree on this:
On the issue of superstrings, Johnson and I are in complete agreement. Superstring theory seems to be a product not of empirical investigation of nature but of a kind of religious conviction about reality's symmetrical structure. Some particle physicists have the same view. Sheldon Glashow, one of the architects of the electroweak theory, has ridiculed superstring researchers as the "equivalents of medieval theologians."
Where they disagree is where Johnson seems to be influenced by foolish philosophers who deny truth and treat science as just another social activity driven by fads:
Johnson proposes that science, too, might be "a construction of towers that just might have been built another way." Although he never mentions The Structure of Scientific Revolutions, Johnson's view evokes the one set forth in 1962 by the philosopher Thomas Kuhn. In that book's coda, Kuhn compared the evolution of science to the evolution of life. Biologists, he noted, have banished the teleological notion that evolution advances toward anything--including the clever, featherless biped known as Homo sapiens. In the same way, Kuhn suggested, scientists should eschew the illusion that science is evolving toward a perfect, true description of nature. In fact, Kuhn asserted, neither life nor science evolve toward anything, but only away from something; science is thus as contingent, as dependent on circumstance, as life is. Johnson espouses a similar view, though more eloquently than Kuhn did. ...

My main disagreement with Johnson is that he plies his doubts too even-handedly. The inevitable result is that all theories, from the most empirically substantiated to those that are untestable even in principle, seem equally tentative. I also cannot accept Johnson's evolutionary, Kuhnian model of scientific progress. ...

Johnson further attempts to undermine the reader's faith in physics by reviewing ongoing efforts to make sense of quantum mechanics.
Johnson's view is probably (unfortunately) the majority view among historians, philosophers, social scientists, science writers, and physics popularizers.

Tuesday, November 10, 2015

Shimony's opinion of Bell's Theorem

Following up my EPR comments, I post this to show that my opinions are not much different from respected experts on the subject.

Abner Shimony spent his whole career writing papers on quantum spookiness, and here is his opinion of the consequence of Bell's Theorem:
There may indeed be "peaceful coexistence" between Quantum nonlocality and Relativistic locality, but it may have less to do with signaling than with the ontology of the quantum state. Heisenberg's view of the mode of reality of the quantum state was briefly mentioned in Section 2 - that it is potentiality as contrasted with actuality. This distinction is successful in making a number of features of quantum mechanics intuitively plausible - indefiniteness of properties, complementarity, indeterminacy of measurement outcomes, and objective probability. But now something can be added, at least as a conjecture: that the domain governed by Relativistic locality is the domain of actuality, while potentialities have careers in space-time (if that word is appropriate) which modify and even violate the restrictions that space-time structure imposes upon actual events. The peculiar kind of causality exhibited when measurements at stations with space-like separation are correlated is a symptom of the slipperiness of the space-time behavior of potentialities. This is the point of view tentatively espoused by the present writer, but admittedly without full understanding. What is crucially missing is a rational account of the relation between potentialities and actualities - just how the wave function probabilistically controls the occurrence of outcomes. In other words, a real understanding of the position tentatively espoused depends upon a solution to another great problem in the foundations of quantum mechanics - the problem of reduction of the wave packet.
What he is trying to say is that Heisenberg's quantum states represent potential measurements, not actual reality. There are apparent nonlocalities in the potentialities, but that is just in our minds. It does not violate relativistic causality, because that is about actual reality. There is no nonlocality in actual reality.

He still wants to believe in some sort of spookiness, but this shows that there is no proof of nonlocality in actual reality.

Monday, November 9, 2015

Explaining the EPR paradox

A reader asks me to explain my view of the EPR Bohm paradox, since I seem to reject what everyone else says.

Actually, my view is just textbook quantum mechanics. I do not subscribe to hidden variables, superdeterminism, action-at-a-distance, or anything exotic.

I am also a logical positivist, so I stick to what we know, and try not to jump to unsupported conclusions.

Experiments show that electrons and photons have both particle and wave properties. Quantum mechanics teaches that an electron is a mysterious quantum that is not exactly a particle or a wave, but something else. It obeys various physical laws, like conservation of energy and momentum, and Heisenberg uncertainty.

The EPR paper looks at a physical process that emits two equal and opposite electrons. Except that I prefer to call them quanta, because they are not really particles with definite position and momentum at the same time.

Mathematically, the two quanta are represented as a single quantum state. A measurement of one collapses the state, according to the rules of quantum mechanics. Quantitative predictions are in excellent agreement with experiment.

In particular, you can measure the momentum of one quantum, and know that the other must be equal and opposite. Physically there is nothing strange about this, as it is a consequence of momentum being conserved.

But it is a little strange if you combine this with Heisenberg uncertainty, which normally prevents us from making a precise statement about momentum until it is measured. Measuring one quantum allows us to say something about a distant quantum.

Bohm pointed out that you get the same paradox when measuring spin, and tried to argue for hidden variables.
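For concreteness, the combined state in Bohm's spin version is the singlet, with the standard quantum predictions (a textbook formula, included only as an illustration):

\left|\psi\right\rangle \;=\; \frac{1}{\sqrt{2}} \Big( \left|\uparrow\right\rangle_A \left|\downarrow\right\rangle_B \;-\; \left|\downarrow\right\rangle_A \left|\uparrow\right\rangle_B \Big),
\qquad
P(\text{opposite results at axes an angle } \theta \text{ apart}) \;=\; \cos^2(\theta/2).

At θ = 0 the results are always opposite, matching the equal-and-opposite behavior described above; at other angles the correlations are the ones that hidden variable models fail to reproduce.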

One way people have tried to explain this is with action-at-a-distance, with measuring one quantum having an instantaneous physical effect on the other. But that is so contrary to everything else we know about physics that such an explanation could only be a last resort.

Another is to model the quantum with hidden variables. All such attempts have failed. In particular, if you think of the quantum as having a definite position, momentum, and spin before the measurement, you get contradictions with experiment.

So what is left? There is still the original Bohr logical positivist view. Things are clearer if you avoid trying to answer unanswerable questions. Physics is about observables. Concepts like position, momentum, and spin are not really properties of quanta. They are properties of observations.

We have no true mathematical representation of the quantum. We have a mechanics of observables. We predict observations, and avoid trying to say what is not observed.

When people try to visualize the quantum, they inevitably form some classical (pre-quantum) picture that we know is wrong. So they get paradoxes and say that quantum mechanics is incomprehensible.

Or they try some mathematical model that ends up being a hidden variable model, and it does not work.

So again, here is what happens. A physical process emits two equal and opposite quanta. They seem particle-like and wave-like, but they are physical objects that lack a true mathematical formulation. From the physics, we know that they are equal and opposite, and from quantum mechanics formulas, we can make certain predictions about observables. In particular, observations about the two quanta are correlated. Correlation is not necessarily causation.

Does some physical process like radioactive decay determine the state of an emitted quantum? I have an open mind on that, because I don't see how the question makes any sense. How can any experiment tell us one way or the other? You can believe it or not believe it; it is all the same to me.

Physicists make arguments that EPR-like experiments prove true randomness. I have posted denials of that, and I do not even believe that there is any such thing as true randomness. Randomness is a mathematical formalism that is actually deterministic, or a figure of speech for how certain theories fail to predict certain outcomes. That's all. When physicists talk of true randomness, they are usually talking nonsense.

What about the quanta being separable? Action-at-a-distance seems like hokum to me. It is as implausible as a perpetual motion machine. There is no evidence for it, and a lot of reasons for thinking it impossible.

I say that the physical quanta are separate and cannot influence each other. At the same time, our knowledge of the two quanta is linked, and info about one tells us something about the other.

In the terminology of Bell's theorem, I am rejecting counterfactual definiteness, just as all mainstream physicists have for decades.

Counterfactual definiteness says that the photons in the double-slit experiment must entirely go thru one slit or the other, just as if they were measured as particles at the slits. But mainstream quantum mechanics teaches that this is completely false, and such measurement would destroy the interference pattern. The light goes thru both slits at once.
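The standard double-slit formula makes the point. With amplitudes ψ₁ and ψ₂ for passage through the two slits:

I_{\text{both slits open}}(x) \;=\; \left|\psi_1(x) + \psi_2(x)\right|^2 \;=\; |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\,\psi_1^*\psi_2 .

The cross term is the interference pattern; a which-slit measurement removes it, leaving just |ψ₁|² + |ψ₂|² with no interference.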

You cannot do a completely passive observation of a quantum giving it a particular position, momentum, or spin. Any such measurement changes it, because quanta do not have such properties.

Rejection of counterfactual definiteness is essential to XX century physics, and is embodied by these slogans:
Another thing that people have emphasized since quantum mechanics was developed is the idea that we should not speak about those things which we cannot measure. (Actually relativity theory also said this.) [Feynman]

Unperformed experiments have no results. [Peres]
Somehow the 21st century has brought us more and more physicists who would rather believe in spookiness or parallel universes. A serious disease has infected Physics.

You will probably say I am cheating because I am not seeking a complete mathematical description of an electron, or that it is a cop-out to say that the wave function is just a representation of our knowledge.

My answer is that this issue goes to the heart of what science is all about. The job of physics is to use mathematics to predict outcomes for experiments. It is not to provide mathematical representations for things you cannot observe. All of the EPR paradoxes are based on naive expectations for counterfactuals, and not observations. Stick to observables, and the experiments are not so strange.