Monday, August 29, 2016

Science is trapped in a vortex

Daniel Sarewitz writes:
Science isn’t self-correcting, it’s self-destructing. To save the enterprise, scientists must come out of the lab and into the real world.

Science, pride of modernity, our one source of objective knowledge, is in deep trouble. Stoked by fifty years of growing public investments, scientists are more productive than ever, pouring out millions of articles in thousands of journals covering an ever-expanding array of fields and phenomena. But much of this supposed knowledge is turning out to be contestable, unreliable, unusable, or flat-out wrong. From metastatic cancer to climate change to growth economics to dietary standards, science that is supposed to yield clarity and solutions is in many instances leading instead to contradiction, controversy, and confusion. Along the way it is also undermining the four-hundred-year-old idea that wise human action can be built on a foundation of independently verifiable truths. Science is trapped in a self-destructive vortex; to escape, it will have to abdicate its protected political status and embrace both its limits and its accountability to the rest of society.

The story of how things got to this state is difficult to unravel, in no small part because the scientific enterprise is so well-defended by walls of hype, myth, and denial.
He argues that science works best when it is driven by the practical necessities of war.
The science world has been buffeted for nearly a decade by growing revelations that major bodies of scientific knowledge, published in peer-reviewed papers, may simply be wrong. Among recent instances: a cancer cell line used as the basis for over a thousand published breast cancer research studies was revealed to be actually a skin cancer cell line; a biotechnology company was able to replicate only six out of fifty-three “landmark” published studies it sought to validate; a test of more than one hundred potential drugs for treating amyotrophic lateral sclerosis in mice was unable to reproduce any of the positive findings that had been reported from previous studies; a compilation of nearly one hundred fifty clinical trials for therapies to block human inflammatory response showed that even though the therapies had supposedly been validated using mouse model experiments, every one of the trials failed in humans; a statistical assessment of the use of functional magnetic resonance imaging (fMRI) to map human brain function indicated that up to 70 percent of the positive findings reported in approximately 40,000 published fMRI studies could be false; and an article assessing the overall quality of basic and preclinical biomedical research estimated that between 75 and 90 percent of all studies are not reproducible. Meanwhile, a painstaking effort to assess the quality of one hundred peer-reviewed psychology experiments was able to replicate only 39 percent of the original papers’ results; annual mammograms, once the frontline of the war on breast cancer, have been shown to confer little benefit for women in their forties; and, of course, we’ve all been relieved to learn after all these years that saturated fat actually isn’t that bad for us. The number of retracted scientific publications rose tenfold during the first decade of this century, and although that number still remains in the mere hundreds, the growing number of studies such as those mentioned above suggests that poor quality, unreliable, useless, or invalid science may in fact be the norm in some fields, and the number of scientifically suspect or worthless publications may well be counted in the hundreds of thousands annually. ...

Richard Horton, editor-in-chief of The Lancet, puts it like this:

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.
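Horton's "perhaps half" is not hyperbole; it follows from simple arithmetic about false positives. Here is a minimal sketch of the standard calculation, with hypothetical but commonly cited values for the prior, the power, and the significance level (the numbers are illustrative assumptions, not Horton's):

```python
# Sketch of the false-discovery arithmetic behind "perhaps half".
# All three inputs are illustrative assumptions, not measured values.

prior = 0.10   # assumed fraction of tested hypotheses that are true
power = 0.50   # assumed chance of detecting a true effect (small samples)
alpha = 0.05   # conventional significance threshold

true_pos = prior * power            # true effects that get "discovered"
false_pos = (1 - prior) * alpha     # null effects that pass the test anyway

ppv = true_pos / (true_pos + false_pos)
print(f"fraction of positive findings that are real: {ppv:.2f}")
# About 0.53 with these inputs: nearly half of the published positive
# results would be false, even before any bias or misconduct.
```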
Criticism of Physics is curiously absent. Has it avoided these problems?

No. Someone should catalog physics papers on failed and dead-end ideas.

Hundreds of papers were written on a rumored LHC data bump, until the LHC just announced that analyzing more data showed no such bump. Thousands of papers were written on SUSY, and continue to be written, even tho 25 years of accelerator data show no such thing.

Thousands of papers are written on concepts that do not even make any scientific sense, like the multiverse. Maybe a billion dollars have gone into quantum crypto and quantum computing, with little attention to the worthlessness and impossibility of these research programs.

And then there are the completely bogus results, such as the faster-than-light neutrinos and the BICEP2 proof of inflation.

Thursday, August 25, 2016

Leifer trashes Copenhagen interpretation

FQXi reports:
Leifer’s main target in his talk was the Copenhagen interpretation of quantum mechanics, which he says most physicists subscribe to (though there were doubts expressed about that in the room). I’m wary of attempting to define what it is because a large part of Leifer’s argument is that there is no consistent definition. But it’s the interpretation attributed to a bunch of quantum theory’s founding fathers — and the one that physicists are often taught at school. It says that before you look, a quantum object is described by a wavefunction that encompasses a number of possibilities (a particle being here and there, a cat being dead and alive), and that when you look, this collapses into definiteness. Schrödinger’s equation allows you to calculate the probability of the outcome of a quantum experiment, but you can’t really know, and probably shouldn’t even worry about, what’s happening before you look.

On top of that, Leifer argues that Copenhagen-like interpretations, rather than being the most sensible option (as is often claimed), are actually just as whacky as, for instance, the Many Worlds Interpretation.
Leifer is one of the more sensible quantum theorists, and it is nice to see him acknowledge that Copenhagen is the dominant interpretation.

He hates Copenhagen, and defines it this way:
Observers observe. Universality - everything can be described by quantum mechanics. ... No deeper description of reality to be had. ... Quantum systems may very well have properties, but they are just ineffable. For some reason, whatever properties they have, it's fundamentally impossible to represent those properties in language, mathematics, physics, pictures, whatever you like.
He objects to this philosophically, but admits that it is a reasonable position.

The real problem is his "universality" assumption. To him, this means that a time-reversible Schroedinger equation applies to everything, and it enables an observer to reverse an experiment. He goes on to describe a paradox resulting from this reversibility.

I don't remember Bohr or anyone else saying that observers can reverse experiments.

Time reversibility is a bizarre philosophical belief, as I discussed recently. The reasons for believing in it do not have much to do with quantum mechanics. Much of physics, including quantum mechanics, statistical mechanics, and thermodynamics, teaches that time is not reversible.
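To be concrete about what "reversible" means here: Schroedinger evolution is unitary, so applying the inverse evolution recovers the initial state exactly. A minimal toy sketch (a random two-level system, purely illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Toy illustration of Schroedinger-equation reversibility.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (A + A.conj().T) / 2           # a random Hermitian Hamiltonian
U = expm(-1j * H)                  # forward evolution for unit time

psi0 = np.array([1.0, 0.0], dtype=complex)
psi1 = U @ psi0                    # evolve forward
psi2 = U.conj().T @ psi1           # apply the inverse evolution

print(np.allclose(psi2, psi0))     # True: exactly reversible
```

The catch is that no measurement is described by any such U, which is presumably why Bohr never said anything about observers reversing experiments.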

Leifer claims to have an argument that Copenhagen is strange, but he really has an argument that time reversibility is strange.

His real problem is that he rejects positivism. To the positivist, a system having ineffable properties is completely acceptable. I do not expect to understand subatomic physics by relating properties to ordinary human experiences of the 5 senses. I think it would be bizarre if an atom had a wave function that perfectly represented reality. Get over it. That is not even what science is all about.

On the subject of time symmetry, a Quanta mag article quotes Einstein:
“That signifies nothing. For us believing physicists, the distinction between past, present and future is only a stubbornly persistent illusion.”

Einstein’s statement was not merely an attempt at consolation. Many physicists argue that Einstein’s position is implied by the two pillars of modern physics: Einstein’s masterpiece, the general theory of relativity, and the Standard Model of particle physics. The laws that underlie these theories are time-symmetric — that is, the physics they describe is the same, regardless of whether the variable called “time” increases or decreases. Moreover, they say nothing at all about the point we call “now” — a special moment (or so it appears) for us, but seemingly undefined when we talk about the universe at large. The resulting timeless cosmos is sometimes called a “block universe” — a static block of space-time in which any flow of time, or passage through it, must presumably be a mental construct or other illusion.
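The time symmetry being invoked here is a concrete property of the equations: reverse all the velocities at the end of a trajectory, and the system retraces its path back to the start. A minimal sketch with Newtonian free fall (illustrative numbers only):

```python
# Time symmetry of Newtonian mechanics: flip the final velocity and
# evolve again, and the system returns to its initial state.
g = 9.8  # m/s^2

def evolve(x, v, t):
    """Exact free-fall solution: x(t) = x + v*t - g*t**2/2."""
    return x + v * t - 0.5 * g * t**2, v - g * t

x1, v1 = evolve(x=0.0, v=10.0, t=1.0)   # run forward one second
x2, v2 = evolve(x=x1, v=-v1, t=1.0)     # reverse velocity, run again

print(x2, -v2)   # recovers x = 0, v = 10 (up to rounding)
```

Whether this symmetry of the equations licenses any conclusion about the "flow" of time is another matter, taken up below.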

Monday, August 22, 2016

Race for quantum supremacy

Caltech's Spyridon Michalakis writes:
In short, there was no top-down policy directive to focus national attention and inter-agency Federal funding on winning the quantum supremacy race.

Until now.

The National Science and Technology Council, which is chaired by the President of the United States and “is the principal means within the executive branch to coordinate science and technology policy across the diverse entities that make up the Federal research and development enterprise”, just released the following report:

Advancing Quantum Information Science: National Challenges and Opportunities

The White House blog post does a good job at describing the high-level view of what the report is about and what the policy recommendations are. There is mention of quantum sensors and metrology, of the promise of quantum computing to material science and basic science, and they even go into the exciting connections between quantum error-correcting codes and emergent spacetime, by IQIM’s Pastawski, et al.

But the big news is that the report recommends significant and sustained investment in Quantum Information Science. The blog post reports that the administration intends “to engage academia, industry, and government in the upcoming months to … exchange views on key needs and opportunities, and consider how to maintain vibrant and robust national ecosystems for QIS research and development and for high-performance computing.”

Personally, I am excited to see how the fierce competition at the academic, industrial and now international level will lead to a race for quantum supremacy.
The report says:
While further substantial scale-up will be needed to test algorithms such as Shor’s, systems with tens of entangled qubits that may be of interest for early-stage research in quantum computer science will likely be available within 5 years. Developing a universal quantum computer is a long-term challenge that will build on technology developed for quantum simulation and communication. Deriving the full benefit from quantum computers will also require continued work on algorithms, programming languages, and compilers. The ultimate capabilities and limitations of quantum computers are not fully understood, and remain an active area of research.
With so much govt research money at stake, do you think that any prominent physicist is going to throw cold water on the alleged promise of quantum computers?

I do not think that we will have interesting quantum computers within 5 years, as I do not think that it is even physically possible.
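For a sense of what "testing algorithms such as Shor's" would require: the quantum speedup is entirely in finding the period of a^x mod N. For tiny numbers the period is trivial to find classically, which is why factoring 15 on a few qubits demonstrates nothing about scale. A hypothetical sketch of the classical version of that step (my toy code, not from the report):

```python
from math import gcd

def factor_via_order(N, a):
    """Classical version of the order-finding step that Shor's
    algorithm would perform on a quantum computer.
    Assumes gcd(a, N) == 1."""
    r, y = 1, a % N
    while y != 1:          # brute-force search for the period r
        y = (y * a) % N
        r += 1
    if r % 2 == 0:
        p = gcd(a ** (r // 2) - 1, N)
        if 1 < p < N:
            return p, N // p
    return None

print(factor_via_order(15, 7))   # (3, 5): the famous textbook example
```

The brute-force loop grows exponentially in the number of digits of N; Shor's claim is that a quantum Fourier transform finds the period efficiently, but only on a scalable machine of a kind no one has built.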

Wednesday, August 17, 2016

Philosophers denying causality again

Pseudoscience philosopher Massimo Pigliucci writes:
Talk of causality is all over the place in the so-called “special sciences,” i.e., everything other than fundamental physics (up to and including much of the rest of physics). In the latter field, seems to me that people just can’t make up their minds. I’ve read articles by physicists, and talked to a number of them, and they seem to divide in two camps: those who argue that of course causality is still crucial even at the fundamental level, including in quantum mechanics. And those who say that because quantum mechanical as well as relativistic equations are time symmetric, and the very idea of causality requires time asymmetry (as in the causes preceding the effects), then causality “plays no role” at that level.

Both camps have some good reasons on their side. It is true that the most basic equations in physics are time symmetric, so that causality doesn’t enter into them.
I wonder who could be silly enuf to be in the latter camp, besides Sean M. Carroll.

Ordinary classical wave equations, like those for water or sound waves, are usually time symmetric. Understanding such waves certainly requires causality. Pick up any textbook on waves, and causality is considered all over the place.
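The point is easy to demonstrate numerically. The wave equation is time symmetric, yet solving it is a causal initial value problem: a pulse cannot influence a point at distance d before time d/c. A minimal finite-difference sketch (parameters are illustrative):

```python
import numpy as np

# 1D wave equation u_tt = c^2 u_xx, solved forward from an initial
# pulse at x = 2. Causality: the pulse reaches a point at distance d
# only after time d/c.
c, dx, dt = 1.0, 0.01, 0.005
nx, nt = 400, 200                            # final time t = nt*dt = 1.0
x = np.arange(nx) * dx
u_prev = np.exp(-((x - 2.0) ** 2) / 0.01)    # Gaussian pulse at x = 2
u = u_prev.copy()                            # zero initial velocity
r2 = (c * dt / dx) ** 2

for _ in range(nt):
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

# At t = 1, the right-moving half-pulse has just reached x = 3 (= 2 + c*t),
# while x = 3.8 lies outside the light cone and is still undisturbed:
print(f"u(3.0) = {abs(u[300]):.3f}, u(3.8) = {abs(u[380]):.2e}")
```

The equation itself is perfectly time symmetric; causality enters through the initial conditions, which is how every textbook wave problem is actually solved.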

How could any physicist say that causality plays no role in phenomena described by wave equations?!

How could any philosopher believe such nonsense?
For instance, Brad Skow adopts the “block universe” concept arising from Special Relativity and concludes that time doesn’t “pass” in the sense of flowing; rather, “time is part of the uniform larger fabric of the universe, not something moving around inside it.” If this is correct, then “events do not sail past us and vanish forever; they just exist in different parts of spacetime … the only experiences I’m having are the ones I’m having now in this room. The experiences you had a year ago or 10 years ago are still just as real [Skow asserts], they’re just ‘inaccessible’ because you are now in a different part of spacetime.
The concepts of time as flowing or not flowing go back to the ancient Greeks, I believe, and have nothing to do with relativity. You can believe that time is part of the larger fabric of the universe whether you believe in relativity or not.

Some descriptions of special relativity use the concept of a block universe, but you can use the same concept to describe pre-relativity physics or any other physics. Either way, time is a parameter in the equations, and you can think of it as flowing or not.

Reasonable ppl can disagree with some of my opinions on this blog, but I do not see how anyone with an understanding of freshman physics can say such nonsense. Have these ppl never solved a physics problem involving waves? If they have, how did they do it without considering causality?

Where did they get the idea that wave equations and relativity contradict causality? The opposite is closer to the truth, as causality is crucial to just about everything with waves and relativity.

A comment tries to explain:
Prior to quantum physics, causes equalled forces. That’s it. Newton’s laws. If something changes its trajectory, it is because a force is applied. The force is the “cause” of change. All changes are due to forces. Or am I missing something? I don’t think you need to deal with conservation of energy, etc. Quantum mechanics changes all this, as you note, and some physicists don’t believe forces, as we think about them, exist, they are only there to get equations to work. (it’s all fields, etc).
No, this is more nonsense. A field is just a force on a test particle. It is true that forces are not directly observable in quantum mechanics, but that has no bearing on causality. All of these arguments about causality are more or less the same whether considering classical or quantum mechanics.

Monday, August 15, 2016

Confusion about p-values

538.com writes:
To be clear, everyone I spoke with at METRICS could tell me the technical definition of a p-value — the probability of getting results at least as extreme as the ones you observed, given that the null hypothesis is correct — but almost no one could translate that into something easy to understand.

It’s not their fault, said Steven Goodman, co-director of METRICS. Even after spending his “entire career” thinking about p-values, he said he could tell me the definition, “but I cannot tell you what it means, and almost nobody can.” Scientists regularly get it wrong, and so do most textbooks, he said.
Statistician A. Gelman tries to explain p-values here and here.

The simple explanation of p-value is that if it is less than 5%, then your results are publishable.

So you do your study, collect your data, and wonder about whether it is statistically significant evidence for whatever you want to prove. If so, the journal editors will publish it. For whatever historical reasons, they have agreed that p-value is the best single number for making that determination.
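A simulation makes the "at least as extreme" definition concrete: generate data under the null hypothesis many times and count how often the result beats what you actually observed. A minimal sketch (the numbers are made up for illustration):

```python
import numpy as np

# Monte Carlo illustration of the p-value definition: the probability,
# assuming the null hypothesis is true, of a result at least as
# extreme as the one observed.
rng = np.random.default_rng(42)

observed_mean = 0.5        # say our sample of 30 had mean 0.5
n, trials = 30, 100_000

# Null hypothesis: the data are standard normal with true mean 0.
null_means = rng.normal(0, 1, size=(trials, n)).mean(axis=1)
p_value = np.mean(np.abs(null_means) >= observed_mean)   # two-sided

print(f"p = {p_value:.4f}")   # about 0.006: "publishable" by the 5% rule
```

Note what the number is not: it is not the probability that the null hypothesis is true, which is the misreading Goodman complains about.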

Tuesday, August 9, 2016

Dilbert on free will

Here is a segment from a Dilbert cartoon.

It is just a cartoon, but ppl like Jerry Coyne take this argument seriously.

No part of the brain is exempt from the laws of physics. But the laws of physics are not deterministic, so they pose no conflict with free will. It is not clear that determinism even makes any sense.

Civilization requires blaming ppl for misdeeds. You would not be reading this otherwise. So yes, we can blame ppl for what they do, whether the laws of physics apply to brains or not. We have to, in order to maintain order.

I know ppl who never blame dogs, because they say any bad behavior is a result of bad training, and they never blame cats, because they say cats just follow instincts. Leftists sometimes take another step and say that it is wrong to blame humans. Of course these same leftists are extremely judgmental and are always blaming others for not subscribing to the supposedly correct ideologies.

The Dilbert cartoonist likes to emphasize how ppl are irrational when they vote and make other decisions. Voting has become largely predictable by various demographic indicators, and therefore ppl are not as free as they think. There is some truth to this. But when you argue that some law of physics prevents ppl from making decisions, you are talking nonsense, because that law of physics is not in any standard textbook.

A recent Rationally Speaking philosopher podcast argues:
If someone asks you, "What caused your success (in finance, your career, etc.)?" what probably comes to mind for you is a story about how you worked hard and made smart choices. Which is likely true -- but what you don't see are all the people who also worked hard and made smart choices, but didn't succeed because luck wasn't on their side. In this episode, Julia chats with professor of economics Robert Frank about his latest book, Success and Luck: The Myth of the Modern Meritocracy. They explore questions like: Why do we discount the role of luck in success? Has luck become more important in recent years? And would acknowledging luck's importance sap our motivation to try?
He is another leftist who does not believe in free will. He does not want to blame ppl for failures, and does not want to credit ppl for success. It is all determinism and luck, he says. He agrees with Pres. Barack Obama when he said, "You didn't build that."

Frank is apparently firmly convinced that ppl would more readily accept the progressive agenda if they could be persuaded that everything is deterministic or random. He also tries asking ppl bizarre counterfactuals, like: What would happen if you had been born a poor uneducated African?

Saturday, August 6, 2016

Supersymmetry fails again

Dennis Overbye reports for the NY Times:
A bump on a graph signaling excess pairs of gamma rays was most likely a statistical fluke, they said. But physicists have been holding their breath ever since.

If real, the new particle would have opened a crack between the known and the unknown, affording a glimpse of quantum secrets undreamed of even by Einstein. Answers to questions like why there is matter but not antimatter in the universe, or the identity of the mysterious dark matter that provides the gravitational glue in the cosmos. In the few months after the announcement, 500 papers were written trying to interpret the meaning of the putative particle.

On Friday, physicists from the same two CERN teams reported that under the onslaught of more data, the possibility of a particle had melted away. ...

The presentations were part of an outpouring of dozens of papers from the two teams on the results so far this year from the collider, all of them in general agreement with the Standard Model.
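How does pure noise generate a bump worth 500 papers? Scan enough mass bins and a locally "significant" excess is nearly guaranteed; this is the look-elsewhere effect. A minimal Monte Carlo sketch (illustrative numbers, not the actual CERN analysis):

```python
import numpy as np

# Look-elsewhere effect: in a search over many mass bins, pure
# Poisson noise routinely produces a locally "significant" bump.
rng = np.random.default_rng(1)

expected = 100          # assumed background count per bin
n_bins = 1000           # assumed number of bins scanned
counts = rng.poisson(expected, size=n_bins)

# Local significance of the biggest upward fluctuation, in sigmas:
z_max = (counts.max() - expected) / np.sqrt(expected)
print(f"largest bump in pure noise: {z_max:.1f} sigma")
# Typically more than 3 sigma: locally "significant", yet guaranteed
# to melt away as more data arrive.
```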
The fact is that Physics developed a theory of quantum electrodynamics by 1960 or so, and extended it to the other fundamental forces in the 1970s. It is called the Standard Model, and it explains all the observations.

Nevertheless all the leading physicists keep loudly claiming that something is wrong with the Standard Model, so we need more dimensions and bigger particle accelerators.
For a long time, the phenomenon physicists have thought would appear to save the day is a conjecture known as supersymmetry, which comes with the prediction of a whole new set of elementary particles, known as wimps, for weakly interacting massive particles, one of which could comprise the dark matter that is at the heart of cosmologists’ dreams.

But so far, wimps haven’t shown up either in the collider or in underground experiments designed to detect wimps floating through space. Neither has evidence for an alternative idea that the universe has more than three dimensions of space.
Supersymmetry still has many true believers, but it is contrary to all the experiments. And the theory behind it is pretty implausible also.

The biggest Physics news of the week was this Nature article:
Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware.
The full article is behind a paywall.
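For perspective on the scale involved: a 5-qubit state is a vector of only 2^5 = 32 complex amplitudes, and any sequence of universal gates can be simulated exactly on a laptop. A minimal generic statevector sketch (not the authors' code):

```python
import numpy as np

# A 5-qubit state is just 2**5 = 32 complex amplitudes; applying a
# universal gate set is small matrix algebra.
n = 5
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                  # start in |00000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

def apply_1q(state, gate, target, n):
    """Apply a single-qubit gate to the given target qubit."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [target]))
    return np.moveaxis(psi, 0, target).reshape(-1)

for q in range(n):                              # Hadamard on every qubit
    state = apply_1q(state, H, q, n)

print(np.round(np.abs(state[:4])**2, 4))        # uniform 1/32 probabilities
```

This is why few-qubit devices, however honest, are demonstrations of quantum mechanics rather than of computation beyond classical reach.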

You probably thought that 5-qubit quantum computers had already been announced. Yes, many times, but each new one claims to be a more honest 5 qubits than the previous ones.

The promise, of course, is that this is the great breakthru towards scalability.

Maybe, but I doubt it.

I get comments from people who wonder how I can be skeptical about quantum computers when they have already been demonstrated.

Have they really been demonstrated? Just look at a paper like this. Even as the authors do everything they can to promote the importance of their work, their most excited claim is always that the work might be a step towards the elusive scalable quantum computer. This implies that the scalable quantum computer has not been demonstrated. It may never be.

These experiments are just demonstrations of quantum mechanical properties of atoms. The technology gets better all the time, but they are still just detecting quantum uncertainties of an isolated state.

I'll be looking for Scott Aaronson's comments, but currently he is busy enumerating his errors. Apparently a number of his published theorems and proofs turned out to be wrong.

He is probably no more fallible than other complexity theorists, just more willing to admit his mistakes.

Tuesday, August 2, 2016

Coyne complains about free will again

Leftist-atheist-evolutionist Jerry Coyne writes:
The boys over at the Discovery Institute (DI) spend a lot of time mocking me online, but I rarely pay attention. ...

You’d think that they’d keep their religious motivations secret, for, after all, Intelligent Design was rejected by the courts ...

Klinghoffer’s new post, “Without free will there is no justice“, excoriates me for my determinism, using as an example my recent post on Manson “girl” Leslie Van Houten. ...

Klinghoffer’s piece is a good example of how many people misunderstand — deliberately or out of ignorance — how “agency” works. In the case of Abrahamic religionists, most (except for Calvinists) have to believe in libertarian free will, ...

Notice how when a leftist does not like someone's conclusions, he makes accusations of bad intentions and ignorance.

Egnor, of course, has no evidence for libertarian free will, and we have lots of evidence against it (neuroscience, psychology experiments, and, most important, the laws of nature). So Egnor simply asserts that what his faith teaches him is also scientifically true:

We are free agents, influenced by our genes and our environment, but are free to choose the course of action we take. Determinism is not true, denial of free will is self-refuting (If we are not free to choose, why assume Coyne’s opinion has any truth value? It’s just a chemical reaction, determined by genes and environment), and our intellect and will are immaterial powers of the soul and are inherently free in the libertarian sense of not being determined by matter.

We are not meat robots. If we were meat robots, why would anyone listen to Jerry Coyne?

Well, Dr. Egnor, maybe they should listen because the two pounds of meat in my skull is better programmed than are the two pounds of Egnorian head-meat. That is, my meat emits statements that comport better with what rational people observe in the Universe than does Egnor’s faith-ridden meat.
So I have no free will to make choices, but I should listen to Coyne because his brain is programmed to be more rational?!

No, that is irrational.

Egnor even tries to reject determinism of human behavior by citing quantum mechanics, which shows how desperate he is:

If you “accept science,” you don’t accept determinism, which has been ruled out in physics by an ingenious series of experiments over the past several decades. It is the consensus of physicists that nature is non-deterministic, in the sense that there are no local hidden variables. Coyne’s rejection of the overwhelming evidence that nature is non-deterministic is a rejection of science, just as his denial of free will is a rejection of common sense and reason.
This presupposes either that quantum-mechanical uncertainty gives us free will, which can’t be true (I won’t insult your intelligence by explaining that again), or that the “Bell’s inequality” experiments showing the lack of local realism on the level of particles show that all forms of natural law determining human behavior are out the window.
It is true that the consensus of physicists is that nature is non-deterministic, and that there are no local hidden variables.

I would not say that determinism has been ruled out, as determinism is too ill-defined to be ruled out. But the laws of physics are not deterministic, and Egnor's point is valid.
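The "ingenious series of experiments" are the Bell tests, and the arithmetic is short: any local hidden variable theory obeys the CHSH bound |S| <= 2, while quantum mechanics predicts 2*sqrt(2), about 2.83, and the experiments side with quantum mechanics. A minimal sketch of the quantum prediction (standard textbook angles):

```python
import numpy as np

# CHSH test: local hidden variables require |S| <= 2. Quantum
# mechanics, for a singlet state, gives E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# Standard textbook measurement angles (radians):
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2.828... = 2*sqrt(2), violating the classical bound of 2
```

That is the precise sense in which local hidden variables are ruled out; as I said, determinism in general is too ill-defined to rule out.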

Coyne denies that "uncertainty gives us free will" but that is not the argument. Uncertainty is a result of free will, not a cause.

If electrons had free will, what kind of theory could be used to explain them? The answer is quantum mechanics. Other theories (i.e., failed theories) do not leave room for free will, but quantum mechanics allows for it.

Coyne's argument is wrong.

Coyne elaborates:
It illustrates one of the many misconceptions people have about science-based determinism and its rejection of libertarian free will: that under determinism it is useless to try to change anything since everything is preordained by the laws of nature. Well, the last part of that is pretty much correct — save for fundamental indeterminacy due to quantum mechanics — but within that paradigm lies the fact that people’s arguments constitute environmental factors that affect how others behave.

My own example is that you can alter the behavior of a dog by kicking it when it does something you don’t like. (I am NOT recommending this!). After a while the dog, whose onboard computer gets reprogrammed to anticipate pain, will no longer engage in the unwanted behavior.
So science-based determinism says that you cannot voluntarily change anything, except in examples where the cause and effect is so obvious that it cannot be denied.

And you can make choices to influence a dog, but not yourself.

He has a cartoon of a masculine woman kicking a man in the testicles.
Yes, it's determinism all the way down.
I think that this is an allusion to an argument about the Earth being held up by turtles.

I used to think that scientists were much more rational than religious believers.