Wednesday, September 28, 2016

Junk science in TED Talk

Jerry Coyne reports on some populist bogus social science:
The second most popular TED talk of all time, with over 32 million views on TED, is by Harvard Business School associate professor Amy Cuddy, called “Your body language shapes who you are”. (You can also see the talk on YouTube, where it has over 10 million views. Cuddy appears to be on “leave of absence.”)  Her point, based on research she did with two others, was that by changing your body language you can modify your hormones, thus not only influencing other people in the way you want, but changing your own physiology in a way you want.
The kindest thing you can say is that her results were not replicated. All the other studies gave opposite conclusions.

Statistician Andrew Gelman also laments the sorry state of social science. It is overrun by poor studies and p-hacking.

Monday, September 26, 2016

No-Cloning is not fundamental

Complexity theorist Scott Aaronson makes a big deal out of the No-cloning theorem:
The subject of this talk is a deep theorem that stands as one of the crowning achievements of our field. I refer, of course, to the No-Cloning Theorem. Almost everything we’re talking about at this conference, from QKD onwards, is based in some way on quantum states being unclonable. ...

OK, but No-Cloning feels really fundamental. One of my early memories is when I was 5 years old or so, and utterly transfixed by my dad’s home fax machine, ...

The No-Cloning Theorem represents nothing less than a partial return to the view of the world that I had before I was five. It says that quantum information doesn’t want to be free: it wants to be private. There is, it turns out, a kind of information that’s tied to a particular place, or set of places. It can be moved around, or even teleported, but it can’t be copied the way a fax machine copies bits.

So I think it’s worth at least entertaining the possibility that we don’t have No-Cloning because of quantum mechanics; we have quantum mechanics because of No-Cloning — or because quantum mechanics is the simplest, most elegant theory that has unclonability as a core principle.
No-Cloning sounds fundamental, but it is not.

The theorem says that there is no physical process that always perfectly duplicates a given physical entity. You can have a laser beam that emits identical photons in identical states, but you cannot have some trick optical device that takes in any photon and copies it into an additional photon in the same state.

The proof assumes that any such process would have to perfectly represent the entity (e.g., a photon) by a wave function, apply a unitary transformation to duplicate it, and perfectly realize that result.
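
For reference, here is a sketch of the standard argument, in my own notation rather than anything from Aaronson's talk. Suppose one unitary $U$ cloned every state, $U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle$. Unitary maps preserve inner products, so for any two states

$$\langle\phi|\psi\rangle \;=\; \big(\langle\phi|\otimes\langle 0|\big)\,U^\dagger U\,\big(|\psi\rangle\otimes|0\rangle\big) \;=\; \langle\phi|\psi\rangle^2,$$

which forces $\langle\phi|\psi\rangle$ to be 0 or 1. So the hypothetical device could only copy states that are identical or orthogonal, never arbitrary ones.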

So you have to believe that a physical entity is exactly the same as a mathematical wave function, and a physical process is exactly the same as a unitary transformation.

But isn't that the essence of quantum mechanics? No, it is not.

Quantum mechanics is a system of using wave functions to predict observables. But there is no good reason to believe that there is a one-to-one correspondence between physical entities and wave functions.

The no-cloning theorem is a big headache for quantum computing because it means that qubits can never be copied. Ordinary computers spend most of their time copying bits.

But scalable qubits may not even be possible.

The no-cloning theorem is not something with any direct empirical evidence. It is true that we have no known way of duplicating an atomic state, but that is a little misleading. We have no way of doing any observations at all on an atomic state without altering that state.

Friday, September 23, 2016

China claims to invent quantum radar

ExtremeTech reports:
The Chinese military says it has invented quantum radar, a breakthrough which, if true, would render the hundreds of billions of dollars the United States has invested into stealth technology obsolete. Like the original invention of radar, the advent of modern artillery, or radio communications, quantum radar could fundamentally transform the scope and nature of war. ...

Quantum radar would exploit quantum entanglement, the phenomena that occurs when two or more particles are linked, even when separated by a significant amount of physical space. In theory, a radar installation could fire one group of particles towards a target while studying the second group of entangled particles to determine what happened to the first group. The potential advantages of this approach would be enormous, since it would allow for extremely low-energy detection of approaching enemy craft. Unlike conventional radar, which relies on an ability to analyze and detect a sufficiently strong signal return, quantum radar would let us directly observe what happened to a specific group of photons. Since we haven’t invented cloaking devices just yet, this would seem to obviate a great deal of investment in various stealth technologies.

China is claiming to have developed a single-photon radar detection system that can operate at a range of 100km, more than 5x that of a lab-based system developed last year by researchers from the UK, US, and Germany. Research into quantum radar has been ongoing for at least a decade, but there are significant hurdles to be solved.
Let me get this straight. The quantum radar entangles two photons, and fires one at the stealth plane. Then there is no need to look for any return signal because you can figure out what happened by examining the entangled photon that never left.

This is impossible, of course. Either someone has some massive misunderstandings of quantum mechanics, or someone is perpetrating a scam. Or maybe China thinks that Americans are dumb enuf to believe a story like this.

Monday, September 19, 2016

Nobel site on relativity

The Nobel Prize site has a page on the History of Special Relativity:
1898 Jules Henri Poincaré said that "... we have no direct intuition about the equality of two time intervals."

1904 Poincaré came very close to special relativity: "... as demanded by the relativity principle the observer cannot know whether he is at rest or in absolute motion."

1905 On June 5, Poincaré finished an article in which he stated that there seems to be a general law of Nature, that it is impossible to demonstrate absolute motion. On June 30, Einstein finished his famous article On the Electrodynamics of Moving Bodies, where he formulated the two postulates of special relativity.
Here is that 1905 Poincare paper. It is really just an abstract of a longer paper. Poincare states that general law of nature in the first paragraph.

The most important points in that Poincare abstract are that the Lorentz transformations form a group, and that all forces, including electromagnetism and gravity, are affected the same way. Poincare is quite emphatic about both of these points.
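
To see the group point in modern notation (my sketch, not Poincare's own formulas): a boost along $x$ with rapidity $\varphi$, where $\tanh\varphi = v/c$, acts on $(ct, x)$ as

$$\Lambda(\varphi) = \begin{pmatrix} \cosh\varphi & -\sinh\varphi \\ -\sinh\varphi & \cosh\varphi \end{pmatrix}, \qquad \Lambda(\varphi_1)\,\Lambda(\varphi_2) = \Lambda(\varphi_1 + \varphi_2),$$

so two boosts compose to another boost, which is the velocity-addition law $v = (v_1+v_2)/(1+v_1 v_2/c^2)$. Lorentz had allowed an extra scale factor $l(v)$ multiplying the transformation; Poincare's point in the passage quoted below is that demanding that these transformations, together with the rotations, close into a group leaves no room for it, so $l = 1$.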

Einstein had access to Poincare's abstract before writing his 1905 relativity paper, but he does not quite get these two points, altho he is often credited with them. Here is what they say.

Poincare: "The sum of all these transformations, together with the set of all rotations of space, must form a group; but for this to occur, we need l = 1; so one is forced to suppose l = 1 and this is a consequence that Lorentz has obtained by another way."

Einstein: "we see that such parallel transformations — necessarily — form a group."

Poincare: "Lorentz, in the work quoted, found it necessary to complete his hypothesis by assuming that all forces, whatever their origin, are affected by translation in the same way as electromagnetic forces and, consequently, the effect produced on their components by the Lorentz transformation is still defined by equations (4). It was important to examine this hypothesis more closely and in particular to examine what changes it would require us to make on the law of gravitation."

Einstein: "They suggest rather that, as has already been shown to the first order of small quantities, the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good."

As you can see, Poincare had a deeper understanding of relativity, and he published it first.

Saturday, September 10, 2016

Randomness does not need new technology

SciAm reports:
Photonic Chip Could Strengthen Smartphone Encryption

The chip uses pulses of laser light to generate truly random numbers, the basis of encryption. Christopher Intagliata reports.

Random numbers are hugely important for modern computing. They're used to encrypt credit card numbers and emails. To inject randomness into online gaming. And to simulate super complex phenomena, like protein folding or nuclear fission.

But here's the dirty secret: a lot of these so-called random numbers are not truly random. They're actually what’s known as "pseudo random numbers," generated by algorithms. Think of generating random numbers by rolling dice. If you know the number of dice, it’s simple to figure out something about the realm of possible random numbers—thus putting probabilistic limits on the randomness.

But truly random numbers can be generated through quantum mechanical processes. So researchers built a photonic chip — a computer chip that uses photons instead of electrons. The chip has two lasers: one shoots continuously; the other pulses at regular intervals. Each time the two lasers meet, the interference between the light beams is random, thanks to the rules of quantum mechanics. The chip then digitizes that random signal, and voila: a quantum random number generator.
This distinction between truly random and pseudo random numbers is entirely fallacious.

They act as if quantum hanky panky magically makes something more random than other random things.

See the Wikipedia article on Hardware random number generators. There are lots of them on the market already. They are already cheap enuf for smart phones, if there were any commercial need.

I actually think that it would be nice to have such a random number function in PCs and phones. But the fact is that there are suitable workarounds already. There is no need for a fancy 2-laser device like in this research.
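
For what it's worth, here is a minimal Python sketch of the kind of workaround I mean, contrasting a seeded pseudo-random generator with the operating system's entropy pool (standard-library calls only; nothing here comes from the SciAm piece):

```python
import random
import secrets

# Pseudo-random: the whole sequence is determined by the seed,
# so it is reproducible on demand.
prng = random.Random(42)
print([prng.randint(0, 9) for _ in range(5)])

# OS entropy pool (/dev/urandom and friends), which modern machines
# typically feed from hardware noise sources; good enough for keys
# and nonces without any photonic chip.
print(secrets.token_hex(16))
```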

Massimo Pigliucci is a philosophy professor who specializes in calling things pseudoscience, and argues strenuously that it is a useful term:
Pseudoscience refers to “any body of knowledge that purports to be scientific or to be supported by science but which fails to comply with the scientific method,” though since there is no such thing as the scientific method, I would rather modify the above to read “with currently accepted scientific standards.” ...

Burke then goes on to cite a 2011 dissertation by Paul Lawrie that argues that “dismissing… scientific racism as ‘pseudo-science,’ or a perversion of the scientific method, blurs our understanding of its role as a tool of racial labor control in modern America.”

Maybe it does, maybe it doesn’t. But scientific racism is a pseudoscience, and it is widely recognized — again by the relevant epistemic community — as such.
It seems to me that the term pseudoscience is mainly used for political purposes and unscientific name-calling, such as for denouncing racism.

But if anything is pseudoscience, then why not the creation of "truly random numbers"? Why is it acceptable to distinguish truly random numbers from not-truly random numbers, but only pseudoscience to distinguish Caucasians from Orientals?

Current philosophers like Pigliucci should be called pseudo-philosophers. They sound as if they are doing philosophy, but they fail to follow minimal standards.

Thursday, September 8, 2016

Coyne is not a freely acting agent

Leftist-atheist-evolutionist Jerry Coyne regularly attacks religion and believers for being irrational. He has written books on the subject.

Here he quibbles with physicist Sean M. Carroll over free will:
Lack of information simply means we cannot perfectly predict how we or anybody else will do, but it doesn’t say that what we’ll do isn’t determined in advance by physical laws. Unpredictability does not undermine determinism. And therefore, if you realize that, I don’t know how you can assert that “it’s completely conceivable that we could have acted differently.” ...

What I would have added to this is that nobody who downloads child pornography is in control of his actions, at least in the sense that they could have avoided downloading the pornography. I’m sure Sean would agree. Whether you have a brain tumor, some other cause of hypersexuality, were abused yourself as a child, were mentally ill in a way with no clear physical diagnosis, or simply have been resistant to social pressures to avoid that kind of stuff — all of this is determined by your genes and your environment. ...

But what does predictability have to do with this? We already know that people are not freely acting agents in the sense that they are free from deterministic control by their brains. We already know that people are predestined. ...

I’ve already given my solution to this issue. We recognize that, at bottom, nobody could have done otherwise. If they are accused of something that society deems to be a crime, you find out if they really did commit that crime. If they’re found guilty, then a group of experts — scientists, psychologists, sociologists criminologists, etc. — determine what the “punishment” should be based on the person’s history (a brain tumor would mandate an operation, for instance), malleability to persuasion, likelihood of recidivism, danger to society, and deterrent effects. None of that needs the assumption that someone is a “freely acting agent.”

I know many of you will disagree on that, or on the ramifications of determinism for our punishment and reward system. But at least I have the satisfaction of knowing that Sean agrees with me on this: if science shows the way our behaviors are determined, that knowledge should affect the way we punish and reward people.
So why is he saying what ppl should do if no one has any choice about it?

How can he distinguish between predictability and determinism, if no science experiment can make that distinction? What sense does it make to say something is determined, if there is no conceivable empirical way of making that prediction?

Coyne is pretty good on some evolution-related science that is closer to his expertise, but on subjects like this, I do not see how he is any more scientific than the religious believers he attacks.

Our most fundamental laws of physics are not deterministic. And even if they were, they would not be so predictable as to predict ppl's free choices.

Many theists believe in a God that is all-knowing. Most Christians believe that God gives us free will to make our own choices. Coyne mocks all of this, but he believes in some unobservable force or system that determines everything we do.

I am beginning to think that if a man says that he is driven by voices in his head, I should just believe him. Ditto for demons in his head, or any other equivalent. That is, I choose to believe him.

For more from Carroll, listen to this recent lecture:
Sean Carroll discusses whether space and gravity could emerge from quantum entanglement -- and rewrites physics in terms of quantum circuits.

Tuesday, September 6, 2016

Non-empirical confirmation stinks

Physicist Carlo Rovelli writes:
String theory is a proof of the dangers of relying excessively on non-empirical arguments. It raised great expectations thirty years ago, when it promised to compute all the parameters of the Standard Model from first principles, to derive from first principles its symmetry group SU(3) x SU(2) x U(1) and the existence of its three families of elementary particles, to predict the sign and the value of the cosmological constant, to predict novel observable physics, to understand the ultimate fate of black holes and to offer a unique well-founded unified theory of everything. Nothing of this has come true. String theorists, instead, have predicted a negative cosmological constant, deviations from Newtons 1/r2 law at sub millimeters scale, black holes at CERN, low-energy supersymmetric particles, and more. All this was false.

From a Popperian point of view, these failures do not falsify the theory, because the theory is so flexible that it can be adjusted to escape failed predictions. But from a Bayesian point of view, each of these failures decreases the credibility in the theory, because a positive result would have increased it. The recent failure of the prediction of supersymmetric particles at LHC is the most fragrant [sic] example. By Bayesian standards, it lowers the degree of belief in string theory dramatically. This is an empirical argument. Still, Joe Polchinski, prominent string theorist, writes in [7] that he evaluates the probability of string to be correct at 98.5% (!).
I think he means the failure of SUSY is the most flagrant example. English is not his native language. Or maybe he means that string theory stinks!

At least string theory had some hope of doing something worthwhile, 20 years ago. There are other areas of theoretical physics that have no such hope.

Monday, September 5, 2016

The essence of quantum mechanics

What is the essence of quantum mechanics?

Expositors of quantum mechanics always argue that the theory has something mysterious that is not present in previous theories. For example, Seth Lloyd called it quantum hanky-panky. But what is it?

Usually it is called entanglement. And that is compared to Schroedinger's cat being alive and dead at the same time.

However there is no proof that a cat can be alive and dead at the same time. What we do have is situations where we do not know whether the cat is alive or dead, and we don't need quantum mechanics for that.

No, the essence is that states cannot be equated with observables. And that is because observables do not commute.

If position and momentum commuted, then they could be measured simultaneously, and maybe you could talk about an electron state as having a particular position and momentum. But they do not commute, so position and momentum depend on how they are measured. Any mathematical description of the (pre-measurement) state of an electron must allow for different measurement outcomes.
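
A minimal numerical illustration of observables that do not commute, using spin components as a stand-in for position and momentum (my own toy example, not anything from the texts quoted below):

```python
import numpy as np

# Spin observables along z and x (Pauli matrices), a toy stand-in for
# position and momentum, which obey [x, p] = i*hbar.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

commutator = sz @ sx - sx @ sz
print(commutator)  # nonzero: [[0, 2], [-2, 0]]

# Because the commutator is nonzero, no state assigns sharp values to both
# observables at once, and outcomes depend on which one is measured first.
```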

I agree with Lubos Motl's rant:
The anti-quantum zealots are inventing lots of "additional" differences between classical and quantum physics – such as the purported "extra nonlocality" inherent in quantum mechanics, or some completely new "entanglement" that is something totally different than anything we know in classical physics, or other things. Except that all these purported additional differences are non-existent. ...

Now, the "quantum mechanics is non-local" crackpots love to say not only crazy things about the extra non-locality of quantum mechanics – which I have thoroughly debunked in many previous specialized blog posts – but they also love to present the quantum entanglement as some absolutely new voodoo, a supernatural phenomenon that has absolutely no counterpart in classical physics and whose divine content has nothing to do with the uncertainty principle or the nonzero commutators.

Except that all this voodoo is just fog. ...

The idea that the correlation between the two electrons is "something fundamentally different" from the correlation between the two socks' colors – an idea pioneered by John Bell – is totally and absolutely wrong. ...

Aside from the consequences of the nonzero commutators – i.e. of the need to use off-diagonal matrices for the most general observables that may really be measured – there is simply nothing "qualitatively new" in quantum mechanics. Whoever fails to understand these points misunderstands the character of quantum mechanics and the relationship between quantum mechanics and classical physics, too.
Where I differ from LuMo is in the likelihood of quantum computers.

The premise of the quantum computer is that nonlocal quantum voodoo can be applied to do super-Turing computations for us. Somehow an electron has to be in two places at the same time, doing two different computations.

If you believe that the cat can be alive and dead at the same time, then it is not much of a stretch to figure that a qubit can be in multiple states doing different computations, possibly in parallel universes, and assembling those results into a final observable.

But what if the quantum voodoo is just an algebraic consequence of the fact that atomic-scale measurements depend on the order in which they are made?

You can put a pair of qubits in an entangled state where measuring one and then the other gives a different result from reversing the order. That is a consequence of observables not commuting. I accept that. What I do not accept is that those qubits are doing different computations simultaneously.

Thanks to a reader for supplying the link; the UK Daily Mail reports:
PCs? Google may be on the verge of a breakthrough next year, claim experts

* Quantum machines could open up new possibilities for computing
* Google is aiming to achieve 'quantum supremacy' with a small machine
* Experts say firm could announce a breakthrough in the area next year
* They claim a computer with just 50 quantum bits could outperform the most powerful classical computers in existence

Scientists and engineers around the world are quietly working on the next generation of machines which could usher in a new age of computing.

Even the fastest supercomputers today are still bound by the system of 1's and 0's which enabled the very first machines to make calculations.

But experts believe that drawing on the strange properties of the quantum world can enable computers to break free from these binary shackles, creating the most powerful problem-solving machines on the planet.

Now, according to New Scientist, researchers at Google may announce a breakthrough as soon as next year, potentially reaching what the company terms 'quantum supremacy' before anyone expected.
If Google achieves quantum supremacy next year, I will have to admit that I am wrong. I may then have to defer to Motl and Aaronson. But I doubt it.

Thursday, September 1, 2016

Quantum Hanky-Panky

MIT professor Seth Lloyd gives this argument for quantum computers:
My motto about this is that if a little quantum hanky-panky will allow you to reproduce a little faster, then by God, you're going to use quantum hanky-panky. It turns out that plants and bacteria and algae have been using quantum hanky-panky for a billion years to make their lives better.

Indeed, what's happening in quantum information and quantum computing is it's become more and more clear that quantum information is a universal language for how nature behaves. ...

There is an old idea, originally due to Richard Feynman, that you could use quantum computers to simulate other quantum systems — a quantum analog computer. Twenty years ago, I wrote the first algorithms for how you could program the quantum computers we have now to explore how their quantum systems behave. ...

Thinking about the future of quantum computing, I have no idea if we're going to have a quantum computer in every smart phone, or if we're going to have quantum apps or quapps, that would allow us to communicate securely and find funky stuff using our quantum computers; that's a tall order. It's very likely that we're going to have quantum microprocessors in our computers and smart phones that are performing specific tasks.

This is simply for the reason that this is where the actual technology inside our devices is heading anyway. If there are advantages to be had from quantum mechanics, then we'll take advantage of them, just in the same way that energy is moving around in a quantum mechanical kind of way in photosynthesis. If there are advantages to be had from some quantum hanky-panky, then quantum hanky-panky it is.
The fallacy here is that our personal computers are already using some quantum hanky-panky. You need quantum mechanics to understand CPU chips, memory chips, disc drives, and most of the other electronics. They use quantum hanky-panky as much as plants do in photosynthesis.

Yes, Feynman was right that you could use a quantum machine to simulate a quantum world, for the simple reason that the world simulates itself, while simulating it on a digital computer is computationally inefficient.

The interesting question is whether a quantum computer can do a super-Turing digital computation. I doubt it.

Lloyd recites how many brilliant physicists have worked on this question for 30 years, and now 1000s are working on it. He compares progress to the early days of digital computers.

I don't buy it. Within about 5 years of building the earliest digital computers, they were simulating the weather and nuclear bomb explosions. The quantum computer folks are still working on connecting one qubit with another. The fact that so many smart ppl are working on it, with such meager results, only shows that it may not be possible.

All of the smart high-energy physicists were believers in SUSY (supersymmetry) for 30 years also, and some of them are just now paying off bets after the LHC failed to find any SUSY particles. So yes, when all the experts say that something is theoretically possible, they may all be wrong.

Lloyd explains:
Quantum computers work by storing and processing information at the most microscopic level. For instance, if you have an electron, you could call an electron spinning out like this zero, you could call an electron spinning like that one, and you could call an electron spinning like that zero and one at the same time, which is the primary feature of quantum computation. A quantum bit—qubit—can be zero and one at the same time; this is where quantum computers gain their power over classical computers.
This is like saying Schroedinger's cat is alive and dead at the same time. If we could do that, maybe we could have a horse plow a field and pull a carriage at the same time, and thus do twice as much work.

That is impossible, because it violates energy principles. But Lloyd wants the qubit to do two computations at the same time, and thus twice as much computational work on each low-level operation. String those together, and you get exponentially as much work.

That would violate computational principles, but the quantum computer folks do not believe them.

In fairness, Scott Aaronson would say that I am taking advantage of Lloyd's oversimplified explanation, and that the qubit does not really do two things at the same time. It just does some quantum hanky panky with negative probabilities that is too subtle for fools like me to understand, and that is enuf for a big computational gain.
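
For what it is worth, the "negative probability" talk is shorthand for interference of amplitudes, and a one-line example makes it concrete (my illustration, not Aaronson's): applying the Hadamard gate $H$ to a superposition gives

$$H\,\frac{|0\rangle+|1\rangle}{\sqrt2} \;=\; \frac1{\sqrt2}\!\left(\frac{|0\rangle+|1\rangle}{\sqrt2}\right) + \frac1{\sqrt2}\!\left(\frac{|0\rangle-|1\rangle}{\sqrt2}\right) \;=\; |0\rangle,$$

because the two $|1\rangle$ contributions cancel. It is amplitudes, which can be negative or complex, that add; probabilities are their squared magnitudes.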

Lloyd continues:
The quantum computers are now getting a lot bigger. Now, we have tens of bits, soon we'll have 50 bits, then we'll have 500 bits, because now there's a clear path to how you build a larger scale quantum computer.
If they had 50 qubits, they would have enuf for a super-Turing computation. The first physicist to demonstrate that would be a likely candidate for a Nobel prize. Based on the current hype, it should happen in the next year or so.
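
Roughly why 50 qubits is the number everyone quotes: simulating n qubits classically means storing 2^n complex amplitudes, which gets out of hand right around there. A quick back-of-the-envelope check (my arithmetic, assuming 16 bytes per amplitude):

```python
# Storing the state vector of n qubits classically takes 2**n complex
# amplitudes; at 16 bytes each (numpy's complex128) the memory explodes.
for n in (30, 40, 50):
    gigabytes = (2 ** n) * 16 / 1e9
    print(f"{n} qubits: {2 ** n} amplitudes, about {gigabytes:,.0f} GB")
# 30 qubits fits in a laptop (~17 GB); 50 qubits needs ~18,000,000 GB,
# which is roughly why ~50 is the usual "quantum supremacy" threshold.
```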

So if it happens, you will hear about it. Then you can leave a comment here telling me that I am wrong. Otherwise, you should start to wonder if maybe Lloyd and the 1000s of others might be wrong.

Monday, August 29, 2016

Science is trapped in a vortex

Daniel Sarewitz writes:
Science isn’t self-correcting, it’s self-destructing. To save the enterprise, scientists must come out of the lab and into the real world.

Science, pride of modernity, our one source of objective knowledge, is in deep trouble. Stoked by fifty years of growing public investments, scientists are more productive than ever, pouring out millions of articles in thousands of journals covering an ever-expanding array of fields and phenomena. But much of this supposed knowledge is turning out to be contestable, unreliable, unusable, or flat-out wrong. From metastatic cancer to climate change to growth economics to dietary standards, science that is supposed to yield clarity and solutions is in many instances leading instead to contradiction, controversy, and confusion. Along the way it is also undermining the four-hundred-year-old idea that wise human action can be built on a foundation of independently verifiable truths. Science is trapped in a self-destructive vortex; to escape, it will have to abdicate its protected political status and embrace both its limits and its accountability to the rest of society.

The story of how things got to this state is difficult to unravel, in no small part because the scientific enterprise is so well-defended by walls of hype, myth, and denial.
He argues that science works best when it is driven by warfare necessities.
The science world has been buffeted for nearly a decade by growing revelations that major bodies of scientific knowledge, published in peer-reviewed papers, may simply be wrong. Among recent instances: a cancer cell line used as the basis for over a thousand published breast cancer research studies was revealed to be actually a skin cancer cell line; a biotechnology company was able to replicate only six out of fifty-three “landmark” published studies it sought to validate; a test of more than one hundred potential drugs for treating amyotrophic lateral sclerosis in mice was unable to reproduce any of the positive findings that had been reported from previous studies; a compilation of nearly one hundred fifty clinical trials for therapies to block human inflammatory response showed that even though the therapies had supposedly been validated using mouse model experiments, every one of the trials failed in humans; a statistical assessment of the use of functional magnetic resonance imaging (fMRI) to map human brain function indicated that up to 70 percent of the positive findings reported in approximately 40,000 published fMRI studies could be false; and an article assessing the overall quality of basic and preclinical biomedical research estimated that between 75 and 90 percent of all studies are not reproducible. Meanwhile, a painstaking effort to assess the quality of one hundred peer-reviewed psychology experiments was able to replicate only 39 percent of the original papers’ results; annual mammograms, once the frontline of the war on breast cancer, have been shown to confer little benefit for women in their forties; and, of course, we’ve all been relieved to learn after all these years that saturated fat actually isn’t that bad for us. The number of retracted scientific publications rose tenfold during the first decade of this century, and although that number still remains in the mere hundreds, the growing number of studies such as those mentioned above suggests that poor quality, unreliable, useless, or invalid science may in fact be the norm in some fields, and the number of scientifically suspect or worthless publications may well be counted in the hundreds of thousands annually. ...

Richard Horton, editor-in-chief of The Lancet, puts it like this:

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.
Criticism of physics is curiously absent. Has physics avoided these problems?

No. Someone should catalog physics papers on failed and dead-end ideas.

Hundreds of papers were written on a rumored LHC data bump, until the LHC just announced that analyzing more data showed no such bump. Thousands of papers were written on SUSY, and continue to be written, even tho 25 years of accelerator data show no such thing.

Thousands of papers are written on concepts that do not even make any scientific sense, like the multiverse. Maybe a billion dollars have gone into quantum crypto and quantum computing, with little attention to the worthlessness and impossibility of these research programs.

And then there are the completely bogus results, such as the faster-than-light neutrinos and the BICEP2 proof of inflation.

Thursday, August 25, 2016

Leifer trashes Copenhagen interpretation

FQXi reports:
Leifer’s main target in his talk was the Copenhagen interpretation of quantum mechanics, which he says most physicists subscribe to (though there were doubts expressed about that in the room). I’m wary of attempting to define what it is because a large part of Leifer’s argument is that there is no consistent definition. But it’s the interpretation attributed to a bunch of quantum theory’s founding fathers — and the one that physicists are often taught at school. It says that before you look, a quantum object is described by a wavefunction that encompasses a number of possibilities (a particle being here and there, a cat being dead and alive), and that when you look, this collapses into definiteness. Schrödinger’s equation allows you to calculate the probability of the outcome of a quantum experiment, but you can’t really know, and probably shouldn’t even worry about, what’s happening before you look.

On top of that, Leifer argues that Copenhagen-like interpretations, rather than being the most sensible option (as is often claimed), are actually just as whacky as, for instance, the Many World’s Interpretation.
Leifer is one of the more sensible quantum theorists, and it is nice to see him acknowledge that Copenhagen is the dominant interpretation.

He hates Copenhagen, and defines it this way:
Observers observe. Universality - everything can be described by quantum mechanics. ... No deeper description of reality to be had. ... Quantum systems may very well have properties, but they are just ineffable. For some reason, whatever properties they have, they're fundamentally impossible to represent those properties in language, mathematics, physics, pictures, whatever you like.
He objects to this philosophically, but admits that it is a reasonable position.

The real problem is his "universality" assumption. To him, this means that a time-reversible Schroedinger equation applies to everything, and it enables an observer to reverse an experiment. He goes on to describe a paradox resulting from this reversibility.

I don't remember Bohr or anyone else saying that observers can reverse experiments.

Time reversibility is a bizarre philosophical belief, as I discussed recently. The reasons for believing in it do not have much to do with quantum mechanics. Much of physics, including quantum mechanics, statistical mechanics, and thermodynamics, teaches that time is not reversible.
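
For the record, the time-reversibility being assumed is the standard textbook statement, not anything peculiar to Leifer:

$$ i\hbar\,\frac{\partial\psi}{\partial t} = H\psi \quad\Longrightarrow\quad \tilde\psi(t) := \overline{\psi(-t)} \ \text{ solves the same equation (for real } H\text{, i.e. no magnetic field)}, $$

and any unitary evolution $U = e^{-iHt/\hbar}$ can be undone by applying $U^\dagger$. The measurement step has no such inverse, which is exactly where the irreversibility mentioned above comes in.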

Leifer claims to have an argument that Copenhagen is strange, but he really has an argument that time reversibility is strange.

His real problem is that he rejects positivism. To the positivist, a system having ineffable properties is completely acceptable. I do not expect to understand subatomic physics by relating properties to ordinary human experiences of the 5 senses. I think it would be bizarre if an atom had a wave function that perfectly represented reality. Get over it. That is not even what science is all about.

On the subject of time symmetry, a Quanta magazine article says:
“That signifies nothing. For us believing physicists, the distinction between past, present and future is only a stubbornly persistent illusion.”

Einstein’s statement was not merely an attempt at consolation. Many physicists argue that Einstein’s position is implied by the two pillars of modern physics: Einstein’s masterpiece, the general theory of relativity, and the Standard Model of particle physics. The laws that underlie these theories are time-symmetric — that is, the physics they describe is the same, regardless of whether the variable called “time” increases or decreases. Moreover, they say nothing at all about the point we call “now” — a special moment (or so it appears) for us, but seemingly undefined when we talk about the universe at large. The resulting timeless cosmos is sometimes called a “block universe” — a static block of space-time in which any flow of time, or passage through it, must presumably be a mental construct or other illusion.

Monday, August 22, 2016

Race for quantum supremacy

Caltech's Spyridon Michalakis writes:
In short, there was no top-down policy directive to focus national attention and inter-agency Federal funding on winning the quantum supremacy race.

Until now.

The National Science and Technology Council, which is chaired by the President of the United States and “is the principal means within the executive branch to coordinate science and technology policy across the diverse entities that make up the Federal research and development enterprise”, just released the following report:

Advancing Quantum Information Science: National Challenges and Opportunities

The White House blog post does a good job at describing the high-level view of what the report is about and what the policy recommendations are. There is mention of quantum sensors and metrology, of the promise of quantum computing to material science and basic science, and they even go into the exciting connections between quantum error-correcting codes and emergent spacetime, by IQIM’s Pastawski, et al.

But the big news is that the report recommends significant and sustained investment in Quantum Information Science. The blog post reports that the administration intends “to engage academia, industry, and government in the upcoming months to … exchange views on key needs and opportunities, and consider how to maintain vibrant and robust national ecosystems for QIS research and development and for high-performance computing.”

Personally, I am excited to see how the fierce competition at the academic, industrial and now international level will lead to a race for quantum supremacy.
The report says:
While further substantial scale-up will be needed to test algorithms such as Shor’s, systems with tens of entangled qubits that may be of interest for early-stage research in quantum computer science will likely be available within 5 years. Developing a universal quantum computer is a long-term challenge that will build on technology developed for quantum simulation and communication. Deriving the full benefit from quantum computers will also require continued work on algorithms, programming languages, and compilers. The ultimate capabilities and limitations of quantum computers are not fully understood, and remain an active area of research.
With so much govt research money at stake, do you think that any prominent physicist is going to throw cold water on the alleged promise of quantum computers?

I do not think that we will have interesting quantum computers within 5 years, as I do not think that it is even physically possible.

Wednesday, August 17, 2016

Philosophers denying causality again

Pseudoscience philosopher Massimo Pigliucci writes:
Talk of causality is all over the place in the so-called “special sciences,” i.e., everything other than fundamental physics (up to and including much of the rest of physics). In the latter field, seems to me that people just can’t make up their minds. I’ve read articles by physicists, and talked to a number of them, and they seem to divide in two camps: those who argue that of course causality is still crucial even at the fundamental level, including in quantum mechanics. And those who say that because quantum mechanical as well as relativistic equations are time symmetric, and the very idea of causality requires time asymmetry (as in the causes preceding the effects), then causality “plays no role” at that level.

Both camps have some good reasons on their side. It is true that the most basic equations in physics are time symmetric, so that causality doesn’t enter into them.
I wonder who could be silly enuf to be in the latter camp, besides Sean M. Carroll.

Ordinary classical wave equations, like those for water or sound waves, are usually time symmetric. Understanding such waves certainly requires causality. Pick up any textbook on waves, and causality is considered all over the place.
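
To make that concrete (standard textbook material, my summary): the wave equation

$$\frac{\partial^2 u}{\partial t^2} = c^2\,\nabla^2 u$$

is unchanged by the substitution $t \to -t$, because time enters only through a second derivative. Causality comes in when solving it: one keeps the retarded solution, in which a disturbance at the source appears at a distant point only after the travel time $|\mathbf{x}-\mathbf{x}'|/c$, and discards the advanced one. Time symmetry of the equation and causal use of its solutions coexist without any tension.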

How could any physicist say that causality plays no role in phenomena described by wave equations?!

How could any philosopher believe such nonsense?
For instance, Brad Skow adopts the “block universe” concept arising from Special Relativity and concludes that time doesn’t “pass” in the sense of flowing; rather, “time is part of the uniform larger fabric of the universe, not something moving around inside it.” If this is correct, than “events do not sail past us and vanish forever; they just exist in different parts of spacetime … the only experiences I’m having are the ones I’m having now in this room. The experiences you had a year ago or 10 years ago are still just as real [Skow asserts], they’re just ‘inaccessible’ because you are now in a different part of spacetime.
The concepts of time as flowing or not flowing go back to the ancient Greeks, I believe, and have nothing to do with relativity. You can believe that time is part of the larger fabric of the universe whether you believe in relativity or not.

Some descriptions of special relativity use the concept of a block universe, but you can use the same concept to describe pre-relativity physics or any other physics. Either way, time is a parameter in the equations and you can think of it as flowing or not.

Reasonable ppl can disagree with some of my opinions on this blog, but I do not see how anyone with an understanding of freshman physics can say such nonsense. Have these ppl never solved a physics problem involving waves? If so, how did they do it without considering causality?

Where did they get the idea that wave equations and relativity contradict causality? The opposite is closer to the truth, as causality is crucial to just about everything with waves and relativity.

A comment tries to explain:
Prior to quantum physics, causes equalled forces. That’s it. Newton’s laws. If something changes its trajectory, it is because a force is applied. The force is the “cause” of change. All changes are due to forces. Or am I missing something? I don’t think you need to deal with conservation of energy, etc. Quantum mechanics changes all this, as you note, and some physicists don’t believe forces, as we think about them, exist, they are only there to get equations to work. (it’s all fields, etc).
No, this is more nonsense. A field is just a force on a test particle. It is true that they are not directly observable in quantum mechanics, but that has no bearing on causality. All of these arguments about causality are more or less the same whether considering classical or quantum mechanics.

Monday, August 15, 2016

Confusion about p-values

538.com writes:
To be clear, everyone I spoke with at METRICS could tell me the technical definition of a p-value — the probability of getting results at least as extreme as the ones you observed, given that the null hypothesis is correct — but almost no one could translate that into something easy to understand.

It’s not their fault, said Steven Goodman, co-director of METRICS. Even after spending his “entire career” thinking about p-values, he said he could tell me the definition, “but I cannot tell you what it means, and almost nobody can.” Scientists regularly get it wrong, and so do most textbooks, he said.
Statistician A. Gelman tries to explain p-values here and here.

The simple explanation of p-value is that if it is less than 5%, then your results are publishable.

So you do your study, collect your data, and wonder about whether it is statistically significant evidence for whatever you want to prove. If so, the journal editors will publish it. For whatever historical reasons, they have agreed that p-value is the best single number for making that determination.
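
To make the quoted textbook definition concrete, here is a minimal sketch with made-up data (scipy's one-sample t-test; nothing here comes from the 538 piece or from Gelman):

```python
import numpy as np
from scipy import stats

# Made-up sample: 30 measurements drawn around 0.3.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=1.0, size=30)

# Null hypothesis: the true mean is 0.  The p-value is the probability,
# computed under that null, of a test statistic at least as extreme as
# the one actually observed.
t_stat, p_value = stats.ttest_1samp(data, popmean=0.0)
print(t_stat, p_value)  # by the usual convention, "significant" if p < 0.05
```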

Tuesday, August 9, 2016

Dilbert on free will

Here is a segment from a Dilbert cartoon.


It is just a cartoon, but ppl like Jerry Coyne take this argument seriously.

No part of the brain is exempt from the laws of physics. The laws of physics are not determinist, so they have no conflict with free will. It is not clear that determinism even makes any sense.

Civilization requires blaming ppl for misdeeds. You would not be reading this otherwise. So yes, we can blame ppl for what they do whether the laws of physics apply to brains or not. We have to so we can maintain order.

I know ppl who never blame dogs, because they say any bad behavior is a result of bad training, and they never blame cats, because they say cats just follow instincts. Leftists sometimes take another step and say that it is wrong to blame humans. Of course these same leftists are extremely judgmental and are always blaming others for not subscribing to the supposedly correct ideologies.

The Dilbert cartoonist likes to emphasize how ppl are irrational when they vote and make other decisions. Voting has become largely predictable by various demographic indicators, and therefore ppl are not as free as they think. There is some truth to this. But when you argue that some law of physics prevents ppl from making decisions, you are talking nonsense, because that law of physics is not in any standard textbook.

A recent Rationally Speaking philosopher podcast argues:
If someone asks you, "What caused your success (in finance, your career, etc.)?" what probably comes to mind for you is a story about how you worked hard and made smart choices. Which is likely true -- but what you don't see are all the people who also worked hard and made smart choices, but didn't succeed because luck wasn't on their side. In this episode, Julia chats with professor of economics Robert Frank about his latest book, Success and Luck: The Myth of the Modern Meritocracy. They explore questions like: Why do we discount the role of luck in success? Has luck become more important in recent years? And would acknowledging luck's importance sap our motivation to try?
He is another leftist who does not believe in free will. He does not want to blame ppl for failures, and does not want to credit ppl for success. It is all determinism and luck, he says. He agrees with Pres. Barack Obama when he said, "You didn't build that."

Frank is apparently firmly convinced that ppl would more readily accept the progressive agenda if they could be convinced that everything is determinist or random. And also by asking ppl bizarre counterfactuals, like: What would happen if you were born a poor uneducated African?