Wednesday, December 30, 2015

Latest search for Schroedinger cats

NewScientist mag reports:
Try to imagine a tiny ball sitting on one fingertip yet also on your shoulder at the same instant. Are you struggling? Most of us can’t conceive of an object being in two places at once – yet physicists have just demonstrated the effect over a distance of half a metre, smashing previous records.

It’s an example of superposition, the idea that an object can exist in two quantum states at the same time. This persists until it is observed, causing a property called its wave function to collapse into one state or the other. The same principle allows Schrödinger’s cat to be both dead and alive inside a box until you open the lid.
That is quite a little magic trick. You announce that you have made something in two places at once, but there is a catch -- if you look at it, then it reverts to being in just one place.

The article is kind enuf to supply some skepticism:
But it may be less of a breakthrough than it seems. In 2013, Hornberger helped devise a “weirdness scale” that scores experiments according to how far they show quantum effects extending into the everyday world. Kasevich’s work extends the distance scale but compromises in other ways, so scores about the same as previous attempts, says Hornberger. That means a true Schrödinger’s cat is still far from being realised.
We need a weirdness scale for overhyped science news stories. +1 for a phony experiment. +1 for a phony theory. +1 for some bogus interpretation of the result.

I don't want to be too harsh, as this is just a bogus interpretation.

Tuesday, December 29, 2015

The big AdS/CFT fad

Physicist Sabine Hossenfelder writes:
As to experimental evidence. This isn’t an issue specific to string theory. ...

My point of view has always been that quantum gravity isn’t science as long as one doesn’t at least try to find experimental evidence. ...

Why are there so few people working on finding experimental evidence for quantum gravity? I can tell you it’s not for lack of interest, at least not among the young people, it’s for lack of funding. ...

At some point I concluded that quantum gravity phenomenology is a bad, bad field to work on because it flops through peer review, which is almost certainly conducted by people who in the majority think that funding should go into further mathematical speculations. Seeing that my colleagues who work on AdS/CFT have funds thrown after them, I applied for funding in that field too. Nevermind that I have basically zero prior experience and the field is entirely overpopulated already.
Peter Woit replies:
The AdS/CFT phenomenon is quite remarkable and deserving of its own book. I see that the Maldacena paper is up above 10,000 citations, way off scale anything that has ever happened in the history of physics. What I’ve heard time and again from young theorists is that they go into AdS/CFT because it’s something that seems to be not understood, seems to be both a new deep idea about quantum gravity, and to have other applications, and, of course, it’s where most of the jobs are.
Of course there is no experimental evidence for AdS/CFT, and there never will be any, as it has no relation to the real world. It is all just some hypothetical boundary value problem on a black hole in some alternate cosmology unlike what we know of the universe.

Monday, December 28, 2015

Sam Harris and multiple universes

Atheist neuroscientist Sam Harris is one of the more prominent scientist public intellectuals. I thought that he was one of those rationalist atheists with a scientific outlook on everything, but he is really not. Most of his opinions have very little to do with science. He acts as if he is speaking with the authority of science, but he is not.

I also thought that he was anti-religion, but he is really some sort of Buddhist, and he regularly preaches about how meditation and hallucinogenic drugs can lead to spiritual enlightenment.

Most of this is off-topic for this blog, but I have attacked his position on free will.

Now he has interviewed some physicists, including Max Tegmark (mp3) and David Deutsch (mp3).

Physicist Lawrence Krauss is somewhat like Sam Harris, and often pushes leftist atheism in debates with religious scholars. His video debate with Muslim scholar Hamza Tzortzis is particularly amusing. Tzortzis concedes all scientific issues to Krauss, and Krauss makes him look like an ignorant fool. The discussion of infinity is embarrassing on both sides. Krauss hammers Islam as being morally objectionable in many ways.

But then, about halfway thru, Krauss is asked how he can disapprove of incest, if he rejects God. Then Krauss squirms, and the audience groans. Krauss admits that he cannot say that incest is wrong. Tzortzis retorts by asking how, if Krauss is such a moral relativist that he cannot say incest is wrong, he can so positively say that all of Islam is wrong.

Krauss bragged that he wrote a book showing that the universe can develop from nothing, and belittled Tzortzis for not understanding that and challenged him to say what "nothing" means. To Krauss's surprise, Tzortzis said he read the book, and the book defines nothing as a quantum field. Krauss tried to deny this, but Tzortzis was absolutely correct on this point.

Scientists have a huge advantage in these debates, if they stick to the science. From what I have seen, they cannot do it, and get sucked into debating all sorts of issues where they no longer have hard evidence to back them up. Then they are more like politicians with opinions.

Saturday, December 26, 2015

Realism, antirealism, and conventionalism

I mentioned a German conference on the heart and soul of physics. It has gotten more publicity in Nature mag, and an article on why string theory is not science.

Professor of pseudoscience philosophy Massimo Pigliucci summarizes, but I should warn you that he is one of those leftist ideologue anti-science reality-denying philosophers.

Here is his latest rant, on the denial of human races:
Realism, antirealism and conventionalism are technical philosophical terms usually deployed in discussions of philosophy of science, philosophy of mathematics, and ethics. Say we are talking about the existence of mathematical objects (or of moral truths, which in many respects is an analogous concept). If one is a realist about these objects one is saying that there is a ontologically thick sense in which, say, numbers “exist.” Ontologically thick here means that numbers exist in a mind-independent way, though not in the physical sense that you can point a telescope somewhere and see them. More along the lines of “if there are any other intelligent beings in the cosmos they will independently ‘discover’ the concept of numbers.”

Being antirealist about numbers (or moral truths) means, of course, exactly the opposite: the antirealist doesn’t deny that numbers, once defined in a certain way, have certain objective properties. But she denies that it makes sense to think of any such definition in a mind-independent fashion.

The conventionalist, then, provides one possible antirealist account of numbers (or moral truths) to counter the realist one: numbers, like all mathematical objects, are human inventions, which are constructed in certain ways but could have been constructed differently. They are not “discovered,” they are invented. ...

Kaplan and Winther conclude (and, I think, are obviously correct) that the most sensible positions concerning race are: conventionalism about bio-genomic clusters, antirealism about biological races, and realism about social races.
These terms are somewhat confusing, as the quantum mechanics literature uses them in a manner that may be opposite to what you expect. There, the realists are the Einstein-Bohm-Bell followers who believe that QM is wrong and must be replaced by a theory of hidden variables, which are unseen and unknown to modern science. The anti-realists are those who follow the Copenhagen interpretation or something similar, and believe that the observations are what is real.

Conventionalism is associated with Poincare, who believed that the aether was a useful convention, and you could believe or not believe in it, depending on convenience.

Applying these ideas to numbers is a little strange. Saying 2+2=4 is a universal truth that is independent of mind and convention.

Applying them to human races requires putting on ideological blinders. A commenter points to objective scientific genetic differences between the races, and he is just called a "racialist" who is undermining progressive work towards a post-racial society. Pigliucci says, "I can tell you that the majority of human population biologists don’t think that races exist in anything like the folk concept." He only admits that the genetic differences are useful for medical purposes.

Update: Pigliucci defends his political activism, because he wants to fight the creationists, and, I guess, to attack those who believe in human races. This is funny because his racial denialism is just another form of creationism. They both deny human evolution and scientific evidence.

Thursday, December 24, 2015

Using Einstein to justify diversity rules

I have sharply criticized the popular credit given to Albert Einstein for relativity, on this blog and in my book. The purpose is not to run down Einstein, but to counter the way the history of relativity is told wrong in order to promote all sorts of bad ideas.

Thomas Levenson, an MIT professor of science writing, writes in Atlantic mag:
In describing the paradox that led him to his early breakthrough, the Special Theory of Relativity, Einstein noted that it took him a decade to get from his first thoughts to the final theory. (He started at 16, so he has some excuse.) The breakthrough turned on his realization that measurements of time and space aren’t absolute. Rather, they shift for different observers depending on how they’re moving relative to each other.

As he struggled to finish the theory, Einstein found that “even scholars of audacious spirit and fine instinct can be obstructed in the interpretation of facts by philosophical prejudices.” Einstein himself had to reach out of physics to develop the habits of mind that allowed him to see past the prejudices that obscured the relativistic universe he ultimately discovered. “The type of critical reasoning which was required for the discovery of this central point” he wrote, “was decisively furthered, in my case especially, by the reading of David Hume’s and Ernst Mach’s philosophical writings.”
Roberts’s question about the benefits minorities might bring into a physics classroom suggests a classroom in which nothing outside physics may usefully impinge.

David Hume! Benjamin Franklin’s friend (they corresponded on the matter of lightning rods, among other matters) and Adam Smith’s confidante! Hume’s name isn’t usually linked to deep physical insights. But as Einstein suggests, the intellectual work of physics doesn’t occur in a vacuum. It can’t be separated from a person’s learned habits of thought, from a particular set of lived incidents, books read, alternate worlds imagined. The diversity of Einstein’s particular background, sensibility, and cultural circumstances all played a role in bringing Special Relativity to fruition.
Einstein mentioning Hume and Mach is just a sneaky way of avoiding the fact that he got the whole theory from Lorentz and Poincare.

Levenson uses this to say that the US Supreme Court is wrong, and that Physics needs affirmative action to bring in new people and ideas. Lubos Motl addresses the foolishness of his argument.

Contrary to popular belief, Einstein was not an outsider. He was a German who progressed thru a rigid educational system, getting a doctoral degree in physics from a prestigious university. He did have an outside job while finishing his dissertation, but I doubt that was unusual.

Wednesday, December 23, 2015

Quantum crypto busted again

I mentioned an experiment to close Bell loopholes, and now there are a couple of others, as Alain Aspect summarizes:
By closing two loopholes at once, three experimental tests of Bell’s inequalities remove the last doubts that we should renounce local realism. They also open the door to new quantum information technologies.
This nicely affirms the generally accepted quantum mechanics of 1930, but the result has been raised to a very high profile for two reasons. It supposedly proves nature is nonlocal, and it supposedly enables unbreakable quantum cryptography.

It does not prove nonlocality, as I have repeatedly explained. It is also worthless for cryptography, for several reasons.

One reason is that all of these supposedly unbreakable systems have been repeatedly broken. Here is the latest:
Quantum key distribution is supposed to be a perfectly secure method for encrypting information. Even with access to an infinitely fast computer, an attacker cannot eavesdrop on the encrypted channel since it is protected by the laws of quantum mechanics. In recent years, several research groups have developed a new method for quantum key distribution, called "device independence." This is a simple yet effective way to detect intrusion. Now, a group of Swedish researchers question the security of some of these device-independent protocols. They show that it is possible to break the security by faking a violation of the famous Bell inequality. By sending strong pulses of light, they blind the photodetectors at the receiving stations which in turn allows them to extract the secret information sent between Alice and Bob.
This will not kill the subject, because the quantum Bell-heads will suggest work-arounds. They have too much invested in this nonsense. But it will never be something practical and secure enuf to buy goods on Ebay, and there is other technology that solves that problem. Quantum cryptography is useless in principle, and is a big scam.
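For reference, the thing being "violated" (or faked) in these device-independent schemes is the CHSH form of Bell's inequality. Here is a minimal sketch, using the textbook idealization E(a,b) = -cos(a-b) for a singlet pair and the standard angle choices; it only illustrates the numbers involved, and is not a model of the Swedish attack.

```python
import numpy as np

# CHSH quantity S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local hidden-variable models obey |S| <= 2; quantum mechanics allows up to 2*sqrt(2).

def E(a, b):
    """Idealized correlation of +/-1 outcomes for a singlet pair at analyzer angles a, b."""
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2            # Alice's two settings (radians)
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"|S| = {abs(S):.3f}  (classical bound 2, quantum maximum {2 * np.sqrt(2):.3f})")
```

A detector-blinding attack exploits the gap between this idealization and real hardware: once the detectors are controlled by bright light, the recorded S can exceed 2 without any entanglement at all.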

Monday, December 21, 2015

Dark buzz killed the dinosaurs

From a dopey NY Times book review:
A good theory is an act of the informed imagination — it reaches toward the unknown while grounded in the firmest foundations of the known. In “Dark Matter and the Dinosaurs,” the Harvard cosmologist Lisa Randall proposes that a thin disk of dark matter in the plane of the Milky Way triggered a minor perturbation in deep space that caused the major earthly catastrophe that decimated the dinosaurs. It’s an original theory that builds on a century of groundbreaking discoveries to tell the story of how the universe as we know it came to exist, how dark matter illuminates its beguiling unknowns and how the physics of elementary particles, the physics of space, and the biology of life intertwine in ways both bewildering and profound.

If correct, Randall’s theory would require us to radically reappraise some of our most fundamental assumptions about the universe and our own existence. Sixty-­six million years ago, according to her dark-matter disk model, a tiny twitch caused by an invisible force in the far reaches of the cosmos hurled a comet three times the width of Manhattan toward Earth at least 700 times the speed of a car on a freeway. The collision produced the most powerful earthquake of all time and released energy a billion times that of an atomic bomb, heating the atmosphere into an incandescent furnace that killed three-quarters of Earthlings. No creature heavier than 55 pounds, or about the size of a Dalmatian, survived. The death of the dinosaurs made possible the subsequent rise of mammalian dominance, without which you and I would not have evolved to ponder the perplexities of the cosmos. ...

Randall calls the force driving that fraction “dark light” — an appropriately paradoxical term confuting the haughty human assumption that the world we see is all there is.
Supposedly she was going to call it "dark buzz", but she did not want to drive traffic to this blog. (Just kidding, Lisa.)
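As a rough sanity check on the quoted numbers, here is a back-of-envelope estimate of the impact energy. The inputs are my own round assumptions (a 10 km comet, roughly three Manhattan-widths across; 700 times a 100 km/h freeway speed; a density of 2000 kg/m3; a Hiroshima-scale bomb of about 15 kilotons), not figures from the book.

```python
import math

# Back-of-envelope impact energy; all inputs are rough assumptions.
diameter_m = 10_000                    # ~3x the width of Manhattan
speed_m_s = 700 * (100_000 / 3600)     # 700 times a 100 km/h car, ~19 km/s
density = 2000                         # kg/m^3, icy/rocky comet

mass = (4 / 3) * math.pi * (diameter_m / 2) ** 3 * density
energy = 0.5 * mass * speed_m_s ** 2   # kinetic energy in joules

hiroshima = 6.3e13                     # ~15 kilotons of TNT, in joules
print(f"impact energy ~ {energy:.1e} J, roughly {energy / hiroshima:.0e} atomic bombs")
```

So the reviewer's "a billion times that of an atomic bomb" is at least the right order of magnitude.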

Not everyone accepts that a comet or asteroid wiped out the dinosaurs. A recent Time mag article said:
A new study examining volcanic eruptions and the infamous dinosaur-killing asteroid proposes a compromise in which both were responsible for the great extinction that occurred about 65 million years ago.

In the study, published in the journal Science, Geologists examined the timing of the already well-researched volcanic activity in the Deccan Traps in western India, and found that those eruptions occurred within 50,000 years of the asteroid hit—a pretty narrow window in geologic time. The seismic punch of the impact may have accelerated the speed of the eruptions, making it not only likely that both events had a hand in the dinosaurs’ downfall, but difficult to tease out which, if either, was more to blame.
So this may never be resolved.

Randall is trying to be taken seriously, but wherever she goes to plug her book, the reporters spend most of their time asking her for stories about how male chauvinist physicists mistreated her, about how women are marginalized, and generally baiting her into talking feminist politics instead of physics. She is too polite to say so, but it is obvious that she is much more annoyed at how these reporters belittle women than she is at any mistreatment by physicists.

For an example, see The One Question This Brilliant Physicist Wants People To Stop Asking Her in the AOL Huff Post. I got suckered by the click-bait, as I wanted to see if the question was about Jodie Foster, who played a cosmologist in a movie based on a Carl Sagan book.

Other physicists are more political, and some are telling the US Supreme Court that we need affirmative action:
Minority students attending primarily white institutions commonly face racism, biases, and a lack of mentoring. Meanwhile, white students unfairly benefit psychologically from being overrepresented. ...

We ask that you take these considerations seriously in your deliberations and join us physicists and astrophysicists in the work of achieving full integration and removing the pernicious vestiges of racism and white supremacy from our world.
This is just anti-white hatred. Physics has problems, but having too many white people is not one of them. Yes, most of the historical progress has been from white Christians and Jews, but not out of some white supremacy.

Friday, December 18, 2015

Battle for the heart and soul of physics

Quanta Magazine reports on a recent conference in Germany:
Ellis and Silk declared a “battle for the heart and soul of physics.” ...

Whether the fault lies with theorists for getting carried away, or with nature, for burying its best secrets, the conclusion is the same: Theory has detached itself from experiment. The objects of theoretical speculation are now too far away, too small, too energetic or too far in the past to reach or rule out with our earthly instruments. So, what is to be done? ...

Over three mild winter days, scholars grappled with the meaning of theory, confirmation and truth; how science works; and whether, in this day and age, philosophy should guide research in physics or the other way around.
Apparently they think what should be done is to redefine physics and science so that it includes theoretical speculations that can never be tested.
Today, most physicists judge the soundness of a theory by using the Austrian-British philosopher Karl Popper’s rule of thumb. In the 1930s, Popper drew a line between science and nonscience in comparing the work of Albert Einstein with that of Sigmund Freud. Einstein’s theory of general relativity, which cast the force of gravity as curves in space and time, made risky predictions — ones that, if they hadn’t succeeded so brilliantly, would have failed miserably, falsifying the theory. But Freudian psychoanalysis was slippery: Any fault of your mother’s could be worked into your diagnosis. The theory wasn’t falsifiable, and so, Popper decided, it wasn’t science.

Critics accuse string theory and the multiverse hypothesis, as well as cosmic inflation — the leading theory of how the universe began — of falling on the wrong side of Popper’s line of demarcation. To borrow the title of the Columbia University physicist Peter Woit’s 2006 book on string theory, these ideas are “not even wrong,” say critics. In their editorial, Ellis and Silk invoked the spirit of Popper: “A theory must be falsifiable to be scientific.”

But, as many in Munich were surprised to learn, falsificationism is no longer the reigning philosophy of science. Massimo Pigliucci, a philosopher at the Graduate Center of the City University of New York, pointed out that falsifiability is woefully inadequate as a separator of science and nonscience, as Popper himself recognized. Astrology, for instance, is falsifiable — indeed, it has been falsified ad nauseam — and yet it isn’t science. Physicists’ preoccupation with Popper “is really something that needs to stop,” Pigliucci said. “We need to talk about current philosophy of science. We don’t talk about something that was current 50 years ago.”

Nowadays, as several philosophers at the workshop said, Popperian falsificationism has been supplanted by Bayesian confirmation theory, or Bayesianism, a modern framework based on the 18th-century probability theory of the English statistician and minister Thomas Bayes. Bayesianism allows for the fact that modern scientific theories typically make claims far beyond what can be directly observed — no one has ever seen an atom — and so today’s theories often resist a falsified-unfalsified dichotomy.
The philosophers are even worse than the physicists.

About 50 years ago, philosophers rejected the whole idea that science is about seeking truth in the natural world. So they reject the idea that theories can be falsified, or that objective knowledge exists, or that scientists follow a scientific method, or that science makes progress. Most of these philosophers are ignorant of XX century science, and base their views on those of other philosophers, and on science from many centuries ago.

The article suggests that many think that Bayesianism will save them, but Bayesian statistician Andrew Gelman says that they completely misunderstand Bayesianism.
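For what it is worth, Bayesian confirmation is just repeated application of Bayes' rule, P(theory|data) = P(data|theory) P(theory) / P(data). A minimal sketch with made-up numbers shows the problem: a theory only gets a real boost from data it could have failed to predict.

```python
# Bayes' rule update; all probabilities here are made up for illustration.
def posterior(prior, p_data_if_true, p_data_if_false):
    p_data = p_data_if_true * prior + p_data_if_false * (1 - prior)
    return p_data_if_true * prior / p_data

# A theory that made a risky prediction gets a real boost when the data agree.
print(posterior(prior=0.1, p_data_if_true=0.9, p_data_if_false=0.1))  # 0.5

# A theory compatible with any outcome (i.e., untestable) gains nothing.
print(posterior(prior=0.1, p_data_if_true=0.5, p_data_if_false=0.5))  # 0.1
```

So the formalism does not rescue theories that make no risky predictions; the prior just gets passed through unchanged.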

If this isn't pseudoscience, I don't know what is. Someday these people will be laughed at like astrologers.

Popper died in 1994; Bayes died in 1761. It is strange for a philosopher to put down 50-year-old ideas as unworthy of consideration while relying on a 250-year-old one.

I agree that there is a battle for the heart and soul of physics. Philosophers and the most prominent physicists are off the deep end into unscientific mysticism. I am inclined to believe that there is a silent majority of physicists who reject all of this nonsense, but it is hard to tell as they have been intimidated into silence.

Update: They say "no one has ever seen an atom", but scientists have made big progress on pictures of atoms and even molecular reactions taking place. See this UC Berkeley announcement.



Update: Nature mag article.

Wednesday, December 16, 2015

Hints of a new particle

Dennis Overbye reports in the NY Times:
One possibility, out of a gaggle of wild and not-so-wild ideas springing to life as the day went on, is that the particle — assuming it is real — is a heavier version of the Higgs boson, a particle that explains why other particles have mass. Another is that it is a graviton, the supposed quantum carrier of gravity, whose discovery could imply the existence of extra dimensions of space-time.
No, it will not be a graviton, and a graviton would not imply extra dimensions.

They found the Higgs, but no SUSY particles.

Funny that there is no mention of supersymmetry. The LHC was built to find the Higgs and confirm the standard model, and also to find some supersymmetry (SUSY) particles that would disprove that model.
When all the statistical effects are taken into consideration, Dr. Cranmer said, the bump in the Atlas data had about a 1-in-93 chance of being a fluke — far stronger than the 1-in-3.5-million odds of mere chance, known as five-sigma, considered the gold standard for a discovery.
So don't get excited yet.
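For reference, the 1-in-93 and 1-in-3.5-million figures are just Gaussian tail probabilities. Here is a small conversion sketch (my own illustration, not from the article), using scipy.

```python
from scipy.stats import norm

# One-sided tail probability of an n-sigma excess, assuming a normal distribution.
for sigma in (2.0, 3.0, 5.0):
    p = norm.sf(sigma)
    print(f"{sigma:.0f} sigma: p = {p:.2e}  (about 1 in {1 / p:,.0f})")

# The quoted 1-in-93 fluke chance is only about a 2.3-sigma local excess.
print(f"1 in 93 corresponds to {norm.isf(1 / 93):.2f} sigma")
```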

Monday, December 14, 2015

Teleportation gets breakthru award

Physics World announces:
The Physics World 2015 Breakthrough of the Year goes to Jian-Wei Pan and Chaoyang Lu of the University of Science and Technology of China in Hefei, for being the first to achieve the simultaneous quantum teleportation of two inherent properties of a fundamental particle – the photon. Nine other achievements are highly commended and cover topics ranging from astronomy to medical physics

Synonymous with the fictional world of Star Trek, the idea of teleportation has intrigued scientists and the public alike. Reality caught up with fiction in 1993, when an international group of physicists proved theoretically that the teleportation of a quantum state is entirely possible, so long as the original state being copied is destroyed. Successfully teleporting a quantum state therefore involves making a precise measurement of a system, transmitting the information to a distant location and then reconstructing a flawless copy of the original state. As the "no cloning" theorem of quantum mechanics does not allow for a perfect copy of a quantum state to be made, it must be completely transferred from one particle onto another, such that the first particle is no longer in that state.
From a strictly scientific view, this work is of only very minor significance. It only gets attention because of the Star Trek terminology.

Some info is being transmitted, and it is wrapped in quantum terms to make it sound like a big deal.
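To see how modest the underlying protocol is, here is a sketch of ordinary single-qubit teleportation simulated with state vectors. This is the textbook 1993 scheme, not the two-property experiment being celebrated, and the state being teleported is an arbitrary example.

```python
import numpy as np

# Single-qubit teleportation with a 3-qubit state vector.
# Qubit 0: Alice's unknown state; qubits 1 and 2: a shared Bell pair (2 is Bob's).
# Basis ordering: index = 4*q0 + 2*q1 + q2.

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)   # control = first qubit of the pair

# An arbitrary normalized state to teleport.
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta], dtype=complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2) on qubits 1,2
state = np.kron(psi, bell)                                   # full 3-qubit state

# Alice's operations: CNOT (0 -> 1), then Hadamard on qubit 0.
state = np.kron(CNOT, I2) @ state
state = np.kron(H, np.eye(4)) @ state

# Go through all four possible measurement results on qubits 0 and 1.
for m0 in (0, 1):
    for m1 in (0, 1):
        # Project onto the measured values of qubits 0 and 1; Bob keeps qubit 2.
        bob = state.reshape(2, 2, 2)[m0, m1, :].copy()
        bob /= np.linalg.norm(bob)
        # Bob's corrections, conditioned on the two classical bits he receives.
        if m1:
            bob = X @ bob
        if m0:
            bob = Z @ bob
        # Check Bob now holds the original state (up to a global phase).
        print(f"measured ({m0},{m1}): |<psi|bob>| = {abs(np.vdot(psi, bob)):.6f}")
```

All that travels from Alice to Bob are two classical bits; the rest is pre-shared entanglement. Nothing here resembles Star Trek.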

Likewise, Nature mag made a big deal out of some physics problem being undecidable. See Aaronson and Motl for their slants on it. It all sounds very profound, as if unknowable math has surfaced as unknowable physics. But it is really not. It is just a minor curiosity with little significance for math or physics.

Friday, December 11, 2015

German meeting on unscientific physics

Germany just held a conference uniting philosophers with non-empirical physicists. This was partially in response to a Nature article defending the integrity of physics against such nonsense.

Peter Woit discusses it, drawing these comments:
“Joe Polchinski lays out the case for string theory, and how unexpectedly successful it’s been.” http://arxiv.org/abs/1512.02477

“The public is confused because there are a host of ppl who write blogs or books who attack string theory”

Gross said he wasn’t referring to you [Peter Woit]. Heaven knows who he was referring to then…
I don't think he was referring to me either. Woit is the most prominent critic of string theory.

The big non-empirical physics topics are string theory, the multiverse, and quantum gravity.

The philosophers are probably excited that some prominent theoretical physicists are willing to talk to them. But the philosophers are only the anti-science philosophers who deny the scientific method anyway.

Theoretical physicists must be the only scientists who complain about criticism from bloggers, or who go running to philosophers for validation.

Thursday, December 10, 2015

Not yet truly a quantum computer

I just reiterated my claim that we have no quantum computer showing a quantum speedup, and now Scott Aaronson comments on the latest hype:
As many of you will have seen by now, on Monday a team at Google put out a major paper reporting new experiments on the D-Wave 2X machine. (See also Hartmut Neven’s blog post about this.) The predictable popularized version of the results—see for example here and here—is that the D-Wave 2X has now demonstrated a factor-of-100-million speedup over standard classical chips, thereby conclusively putting to rest the question of whether the device is “truly a quantum computer.” ...

Thus, while there’s been genuine, interesting progress, it remains uncertain whether D-Wave’s approach will lead to speedups over the best known classical algorithms, ...

But to repeat: even if D-Wave makes all four of these improvements, we still have no idea whether they’ll see a true, asymptotic, Selby-resistant, encoding-resistant quantum speedup.  We just can’t say for sure that they won’t see one. ...

I still have no idea when and if we’ll have a practical, universal, fault-tolerant QC, capable of factoring 10,000-digit numbers and so on.  But it’s now looking like only a matter of years until Gil Kalai, and the other quantum computing skeptics, will be forced to admit they were wrong — which was always the main application I cared about anyway!
It is funny how he is perfectly happy spending his life on quantum computer complexity theory, when there is no proof that there is any such thing as a quantum computer. But when someone like me argues that quantum computers are impossible, then suddenly he wants research to prove me wrong.

It is almost as if he wants to work on something of no practical value, but he does not want anyone saying that it has no practical value.
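The distinction Aaronson is drawing is between a constant-factor win and an asymptotic one, and it matters. Here is a toy sketch with made-up scaling exponents: a machine with a 100-million-fold head start over one particular algorithm can still lose, at large enough problem sizes, to a better classical algorithm.

```python
# Toy runtimes with made-up exponents, only to illustrate constant-factor vs asymptotic speedup.
def annealer_time(n):
    return 2 ** (0.5 * n) / 1e8     # huge constant-factor advantage, worse exponent

def best_classical_time(n):
    return 2 ** (0.3 * n)           # a smarter classical algorithm with a better exponent

for n in (100, 200, 300):
    winner = "annealer" if annealer_time(n) < best_classical_time(n) else "classical"
    print(f"n = {n}: {winner} is faster")
```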

Wednesday, December 9, 2015

Most profound result of quantum field theory

Amanda Gefter writes:
“Dr. Wilczek,” the defense attorney begins. “You have stated what you believe to be the single most profound result of quantum field theory. Can you repeat for the court what that is?”

The physicist leans in toward the microphone. “That two electrons are indistinguishable,” he says.

The smoking gun for indistinguishability, and a direct result of the 1-in-3 statistics, is interference. Interference betrays the secret life of the electron, explains Wilczek. On observation, we will invariably find the electron to be a corpuscular particle, but when we are not looking at it, the electron bears the properties of a wave. When two waves overlap, they interfere — adding and amplifying in the places where their phases align — peaks with peaks, troughs with troughs — and canceling and obliterating where they find themselves out of sync. These interfering waves are not physical waves undulating through a material medium, but mathematical waves called wavefunctions. Where physical waves carry energy in their amplitudes, wavefunctions carry probability. So although we never observe these waves directly, the result of their interference is easily seen in how it affects probability and the statistical outcomes of experiment. All we need to do is count.

The crucial point is that only truly identical, indistinguishable things interfere. The moment we find a way to distinguish between them — be they particles, paths, or processes — the interference vanishes, and the hidden wave suddenly appears in its particle guise. If two particles show interference, we can know with absolute certainty that they are identical. Sure enough, experiment after experiment has proven it beyond a doubt: electrons interfere. Identical they are — not for stupidity or poor eyesight but because they are deeply, profoundly, inherently indistinguishable, every last one.

This is no minor technicality. It is the core difference between the bizarre world of the quantum and the ordinary world of our experience. The indistinguishability of the electron is “what makes chemistry possible,” says Wilczek. “It’s what allows for the reproducible behavior of matter.” If electrons were distinguishable, varying continuously by minute differences, all would be chaos. It is their discrete, definite, digital nature that renders them error-tolerant in an erroneous world.
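The interference Wilczek is describing is easy to put in a toy calculation. Here is a minimal sketch with made-up equal amplitudes for two alternatives: when the alternatives are indistinguishable the amplitudes add before squaring and the probability oscillates with the relative phase; mark which path was taken and the oscillation disappears.

```python
import numpy as np

# Two-alternative toy model of quantum interference; amplitudes are made up.
for phi in np.linspace(0, 2 * np.pi, 5):
    amp1 = 1 / np.sqrt(2)
    amp2 = np.exp(1j * phi) / np.sqrt(2)
    indistinguishable = abs(amp1 + amp2) ** 2          # amplitudes add, then square
    distinguishable = abs(amp1) ** 2 + abs(amp2) ** 2  # which-path known: probabilities add
    print(f"phase {phi:4.2f}: with interference {indistinguishable:.2f}, without {distinguishable:.2f}")
```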

Monday, December 7, 2015

No big advances in theoretical physics

Physicist Sabine Hossenfelder answers:
“Can you think of a single advancement in theoretical physics, other than speculation like Strings and Loops and Safe Gravity and Twistors, and confirming things like the Higgs Boson and pentaquarks at the LHC, since Politizer and Wilczek and Gross (and Coleman) did their thing re QCD in the early 1980's?” ...

Admittedly your question pains me considerably.

Quantum error correction, quantum logical gates, quantum computing.

Quantum cryptography.

Inflation.

Effective field theory/Renormalization group running.

Gauge-gravity duality (AdS/CFT).

"When Frank Wilczek becomes 65 in 2018, there will be no active (below normal retirement age) “fundamental” theorist with a Nobel prize, for the first time since H.A. Lorentz won the prize in 1902."
I skipped a few items outside my expertise. The renormalization group and other standard model work have been great, but a lot of the big ideas are from the 1970s.
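Renormalization group running, at least, is the kind of item on that list that can be checked against experiment. Here is a minimal sketch of the standard one-loop formula for the QCD coupling, with textbook inputs (five quark flavors, alpha_s(M_Z) of about 0.118); real analyses use higher loops and flavor thresholds.

```python
import math

# One-loop running of the strong coupling:
#   alpha_s(Q) = alpha_s(mu) / (1 + alpha_s(mu) * b0 / (4*pi) * ln(Q^2 / mu^2)),  b0 = 11 - 2*nf/3
def alpha_s(Q, alpha_mz=0.118, mz=91.19, nf=5):
    b0 = 11 - 2 * nf / 3
    return alpha_mz / (1 + alpha_mz * b0 / (4 * math.pi) * math.log(Q ** 2 / mz ** 2))

for Q in (10.0, 91.19, 1000.0):   # GeV
    print(f"alpha_s({Q:7.2f} GeV) ~ {alpha_s(Q):.3f}")
```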

Quantum cryptography and quantum computing are big scams. Inflation is an interesting idea, but has not really been tested.

The history books of the next millennium will say that the great ideas of physics were worked out in the XX century, or maybe 1860 to 1980. Then the field became overrun with charlatans.

SciAm writer John Horgan annoyed everyone with a 1996 book arguing that scientists had already found the big discoveries, and that new results would be disappointing. He now has a new edition bragging that he was right.

Yes, he was right. I did not believe him at the time, because I believed the hype that new experiments like the Superconducting Super Collider were going to discover new physics. Physicists are still complaining about that fiasco. But the LHC spent $10B, and only confirmed the standard model from the 1970s. It found the value of the Higgs mass, but nothing else new about it. String theorists have given up on any real physics, and are now babbling about “non-empirical theory confirmation”, whatever that is.

Attempts to prove quantum nonlocality have been a total failure. Quantum cryptography has taught us nothing new about quantum mechanics, and found no useful application to cryptography. Quantum computing has failed to convincingly demonstrate a quantum speedup or even a true qubit. Quantum gravity has never made any progress. Whole new areas like the multiverse are incoherent from the start.

The whole field of Physics is now dominated by charlatans.

Sunday, December 6, 2015

No evidence that we live in a hologram

One of the biggest advances in theoretical physics of the last 30 years is supposed to be the holographic principle, but I did not know that anyone was foolish enuf to believe that it is testable. Jennifer Ouellette (Mrs. Sean M. Carroll) writes:
A controversial experiment at Fermilab designed to hunt for signs that our universe may really be a hologram has failed to find the evidence it was seeking, the laboratory has announced.

It’s called the Holometer (short for “Holographic Interferometer”), and it’s the brainchild of Fermilab physicist Craig Hogan. He dreamed up the idea in 2009 as a way to test the so-called holographic principle.

Back in the 1970s, a physicist named Jacob Bekenstein showed that the information about a black hole’s interior is encoded on its two-dimensional surface area (the “boundary”) rather than within its three-dimensional volume (the “bulk”). Twenty years later, Leonard Susskind and Gerard ‘t Hooft extended this notion to the entire universe, likening it to a hologram: our three-dimensional universe in all its glory emerges from a two-dimensional “source code.” New York Times reporter Dennis Overbye has likened the holographic concept to a can of soup. All the “stuff” of the universe, including human beings, makes up the “soup” inside the can, but all the information describing that stuff is inscribed on the label on the outside boundary. ...

The holographic principle has since become one of the most influential ideas in theoretical physics, yet many believe it to be untestable, at least for now. (It would require probing black holes up-close, a daunting prospect even if we had the technology to do so.) Hogan decided to try anyway.
I don't want to blame someone for doing an experiment, but was this stuff ever meant to be taken seriously?

The holographic principle is just some silly conjectural mathematical property of some theories that have some hypothetical relation to black hole boundaries, but no real relation to the real world.
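For reference, the Bekenstein result that started all this is at least a definite formula: the entropy of a black hole is S = k_B c^3 A / (4 G hbar), proportional to the horizon area A rather than the volume. Here is a quick evaluation for a solar-mass black hole (my own illustration, SI units, rounded constants).

```python
import math

# Bekenstein-Hawking entropy of a Schwarzschild black hole, S = kB * c^3 * A / (4 * G * hbar).
G, c, hbar, kB = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M = 1.989e30                        # one solar mass, kg

r_s = 2 * G * M / c ** 2            # Schwarzschild radius, ~3 km
A = 4 * math.pi * r_s ** 2          # horizon area, m^2
S = kB * c ** 3 * A / (4 * G * hbar)
bits = S / (kB * math.log(2))       # information content implied by the area, in bits

print(f"r_s ~ {r_s / 1000:.1f} km, S ~ {S:.1e} J/K, ~{bits:.1e} bits on the horizon")
```

That area scaling is the whole content of the "hologram" talk; the leap from black hole horizons to the universe as a whole is the part with no experimental support.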

Friday, December 4, 2015

Negative progress in quantum mechanics

I have defended textbook quantum mechanics against various critics, and now Lubos Motl writes:
Most fields of the human activity have seen persistent progress. But it's remarkable to see how much negative progress has occurred in the recent 85 if not 90 years in the field of "writing about the foundations of quantum mechanics". In 1930, people had folks like Heisenberg who had actually discovered the totally new foundations of physics, knew how to avoid all the traps and possible mistakes, and what they were, and they just presented the new theory in the no-nonsense way. Today we have tons of Deutsches, Wallaces, Puseys, Rudolphs, Barretts, Hsus who are sloppy all the time, who are dogmatic about things that are unsupported or directly contradict the evidence, and who are deliberately obfuscating some points in order to mask the incoherence of their message and indefensibility of the claim that quantum mechanics needs an "addition" and the universal postulates of quantum mechanics that materialized out of the Copenhagen spirit have to be replaced by one of their incoherent new sloppy irrational pictures that are designed to return physics to the era of classical physics, a goal that obviously can never succeed.
I agree with this. Some commenters have claimed that the views of Bohr and Heisenberg are indefensible, but Lumo quotes Heisenberg:
However, all the opponents of the Copenhagen interpretation do agree on one point. It would, in their view, be desirable to return to the reality concept of classical physics or, to use a more general philosophic term, to the ontology of materialism. They would prefer to come back to the idea of an objective real world whose smallest parts exist objectively in the same sense as stones or trees exist, independently of whether or not we observe them.

This, however, is impossible or at least not entirely possible because of the nature of the atomic phenomena, as has been discussed in some of the earlier chapters. It cannot be our task to formulate wishes as to how the atomic phenomena should be; our task can only be to understand them.
So he clearly understood what was wrong with the Bohm-Bell school of physics that confuses people endlessly.

Here is a new paper on Bohr's views.

I am not saying that Bohr and Heisenberg got everything right, but we have had negative progress. Reputable physicists and journals say silly things about QM.

Meanwhile Scott Aaronson is speaking at an IBM conference on "ThinkQ 2015 - Challenges and applications for medium size quantum computers". The first thing he says in his slides is:
Can forget temporarily about practical applications of QC: the more immediate goal is just to show a clear quantum speedup for anything
You read that right. There are no practical applications on the horizon. They are desperately trying to show that it is possible for a quantum computer to have some sort of quantum speedup. So far, they have failed. Too bad Bohr and Heisenberg are no longer around to explain to them why they are failing.

Tuesday, December 1, 2015

Bell's beables are failed hidden variables

A persistent reader disputes my explanation of Bell's Theorem. I started out picking on a book by a SciAm writer, with the author defending his book, but then we got into Bell details. See SciAm book promotes spooky action, Explaining the EPR paradox, Shimony's opinion of Bell's Theorem, and The Bell theorem hypotheses.

I think that I have followed what Bell wrote, and how it is explained in Wikipedia, textbooks, and other references.

Apparently others have had the exact same dispute that I have had with the anonymous commenter. Travis Norsen, a coauthor of the Scholarpedia article I cited previously, writes in a 2008 paper:
J.S. Bell believed that his famous theorem entailed a deep and troubling conflict between the empirically verified predictions of quantum theory and the notion of local causality that is motivated by relativity theory. Yet many physicists continue to accept, usually on the reports of textbook writers and other commentators, that Bell's own view was wrong, and that, in fact, the theorem only brings out a conflict with determinism or the hidden-variables program or realism or some other such principle that (unlike local causality), allegedly, nobody should have believed anyway. ... Here we try to shed some light on the situation ...
Yes, I am with those who say that Bell's theorem only presents a conflict with hidden variables or counterfactual definiteness. Others say that the theorem is stronger.

Norsen relies directly on Bell:
Here is how Bell responded to this first class of disagreement:
“My own first paper on this subject starts with a summary of the EPR argument from locality to deterministic hidden variables. But the commentators have almost universally reported that it begins with deterministic hidden variables.” (Bell, 1981, p.157) ...
Bell’s fullest and evidently most-considered discussion of local causality occurs in his last published paper, La nouvelle cuisine (1990, 232-248). We will here essentially follow that discussion, supplementing it occasionally with things from his earlier papers.

Bell first introduces what he calls the “Principle of local causality” as follows: “The direct causes (and effects) of events are near by, and even the indirect causes (and effects) are no further away than permitted by the velocity of light.” Then, referencing what has been reproduced here as Figure 1, Bell elaborates: “Thus, for events in a space-time region 1 ... we would look for causes in the backward light cone, and for effects in the future light cone. In a region like 2, space-like separated from 1, we would seek neither causes nor effects of events in 1. Of course this does not mean that events in 1 and 2 might not be correlated...” (1990, p. 239)

After remarking that this formulation “is not yet sufficiently sharp and clean for mathematics,” Bell then proposes the following version, referencing what has been reproduced here as Figure 2:
“A theory will be said to be locally causal if the probabilities attached to values of local beables in a space-time region 1 are unaltered by specification of values of local beables in a space-like separated region 2, when what happens in the backward light cone of 1 is already sufficiently specified, for example by a full specification of local beables in a spacetime region 3...” (1990, 239-40)
No, his first definition in terms of light cones is much cleaner and sharper for mathematical analysis. That is the definition used in Maxwell's theory of electromagnetism, in quantum field theory, and in every other relativistic theory.

What the heck are "beables", and how can anyone be sure about the probabilities?

Norsen drafted a Wikipedia article on beables in 2010:
The word beable was introduced by the physicist John Stewart Bell in his article entitled "The theory of local beables" (see Speakable and Unspeakable in Quantum Mechanics, pg. 52). A beable of a physical theory is an object that, according to that theory, is supposed to correspond to an element of physical reality. The word "beable" (be-able) contrasts with the word "observable". While the value of an observable can be produced by a complex interaction of a physical system with a given experimental apparatus (and not be associated to any "intrinsic property" of the physical system), a beable exists objectively, independently of observation. For instance, it can be proven that there exists no physical theory, consistent with the predictions of quantum theory, in which all observables of quantum theory (i.e., all self-adjoint operators on the Hilbert space of quantum states) are beables.

While, in a given theory, an observable does not have to correspond to any beable, the result of the "measurement" of an observable that has actually been carried out in some experiment is physically real (it is represented, say, by the position of a pointer) and must be stored in some beable of the theory.
So a beable is just Bell's notion of a hidden variable. It is not an observable, but somehow represents someone's opinion about what ought to be real.

The mainstream interpretations of quantum mechanics say that the set of observables are what is important and real. Bell rejects this, and says that some other form of hidden variables must be what is real.

Bell also focuses on probability, as if that is something real. It is not, as I have explained here and elsewhere. It is not any more essential to quantum mechanics than to any other theory. It is a mathematical device for relating theories to the world, but it is not directly observable.

Thus when Bell defines causality in terms of beables, he is squarely and directly making an assumption about hidden variables. And that assumption contradicts the postulates, mathematical formulation, and spirit of QM.
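To see what that hidden-variable assumption buys, here is a minimal sketch (my own toy model, not Bell's) of a local hidden-variable theory: each photon pair carries a hidden polarization angle, and each outcome depends only on the local setting and that hidden angle. Its CHSH value tops out at Bell's bound of 2, while quantum mechanics predicts 2*sqrt(2) for the same settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def E_lhv(a, b, n=200_000):
    """Correlation in a toy local hidden-variable model: each +/-1 outcome is fixed by
    the local setting and a shared hidden angle lam, with no dependence on the far side."""
    lam = rng.uniform(0, np.pi, n)
    A = np.sign(np.cos(2 * (a - lam)))
    B = np.sign(np.cos(2 * (b - lam)))
    return np.mean(A * B)

a, a2 = 0.0, np.pi / 4            # Alice's settings (polarizer angles, radians)
b, b2 = np.pi / 8, 3 * np.pi / 8  # Bob's settings

S = abs(E_lhv(a, b) - E_lhv(a, b2) + E_lhv(a2, b) + E_lhv(a2, b2))
print(f"hidden-variable model: S = {S:.2f} (cannot beat 2, up to Monte Carlo noise)")
print(f"quantum prediction for these settings: S = {2 * np.sqrt(2):.2f}")
```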

Norsen himself is squarely in Bell's camp, as he proposed this rewrite of the Wikipedia article on Bell's theorem:
Bell's Theorem is a mathematical theorem first demonstrated by J.S. Bell in his 1964 paper "On the Einstein-Podolsky-Rosen paradox". The theorem establishes that any physical theory respecting a precisely-formulated locality (or "local causality") condition will make predictions, for a certain class of experiments, that are constrained by a so-called Bell Inequality. Since the particular theory called Quantum Mechanics makes predictions which violate this constraint, one can also think of Bell's Theorem as a proof that there is an inconsistency between (i) local causality and (ii) a certain set of QM's empirical predictions. ...

Bell's own interpretation of his theorem, however, is not widely accepted among physicists in general. Some physicists who have studied Bell's Theorem carefully point to alleged flaws or hidden assumptions in Bell's formulation of local causality and/or his derivation of (what Bell called) the Locality Inequality therefrom; such claims are controversial and will be addressed below. But most physicists fail to agree with Bell's statement above not because they think there is some flaw in the reasoning leading to it, but rather because what they have learned about Bell's Theorem (from textbooks and other sources) radically distorts the subject.
This was apparently rejected because the Wikipedia editors agree with the textbook explanation of Bell's theorem, and do not accept Bell's own interpretation.

The core teachings of QM say that an electron is observed as a particle, but is not really the sort of classical particle that has a precise position and momentum at the same time. The Einstein-Bohm-Bell types refuse to accept this. They also refuse a positivist view that allows us to be silent about what cannot be measured. Instead they want to pretend to have values for that, and call them beables or hidden variable or reality or whatever sounds good. The consensus since 1930 has been that this approach does not work.

My critic says:
You seem to scrupulously avoid any cognitive engagement with the actual subject.
I thought I did, but I guess he means that I avoid beables.

This is like discussing the twin paradox or some other subtle point in relativity theory, and someone objecting, "But what is the true time? You seem to avoid discussing what the real time is!"

Relativity teaches that different observers measure time differently. Defining some sort of universal real time is usually not helpful. Likewise, defining beables as the hypothetical result of unperformed measurements is usually not helpful either.

I remember taking a high school science class, and being told the history of the debate over whether light was a particle or a wave, with the arguments for each side. At the end of the course, I had a dissatisfied feeling because the teacher never told us which it was. I thought that maybe I skipped class that day, because surely one side was right and one was wrong.

No, nature does not always match our preconceptions. You have to let go of the idea that light has to match some intuitive classical model for a moving object. Relativity teaches us how clocks behave, but not what time really is. Quantum mechanics teaches us how to make and predict measurements of electrons and photons, but not what they really are.

If you want to understand electrons and photons in terms of classical joint probabilities of beables, then you will be disappointed, because nature does not work that way.

There is nothing in this Einstein-Bohm-Bell analysis but a failed attempt to prove QM wrong. The physicists who did the early Bell test experiments were convinced that they would win Nobel prizes for disproving QM. Instead they just confirmed what everyone thought in 1930.