Thursday, August 25, 2016

Leifer trashes Copenhagen interpretation

FQXi reports:
Leifer’s main target in his talk was the Copenhagen interpretation of quantum mechanics, which he says most physicists subscribe to (though there were doubts expressed about that in the room). I’m wary of attempting to define what it is because a large part of Leifer’s argument is that there is no consistent definition. But it’s the interpretation attributed to a bunch of quantum theory’s founding fathers — and the one that physicists are often taught at school. It says that before you look, a quantum object is described by a wavefunction that encompasses a number of possibilities (a particle being here and there, a cat being dead and alive), and that when you look, this collapses into definiteness. Schrödinger’s equation allows you to calculate the probability of the outcome of a quantum experiment, but you can’t really know, and probably shouldn’t even worry about, what’s happening before you look.

On top of that, Leifer argues that Copenhagen-like interpretations, rather than being the most sensible option (as is often claimed), are actually just as whacky as, for instance, the Many Worlds Interpretation.
Leifer is one of the more sensible quantum theorists, and it is nice to see him acknowledge that Copenhagen is the dominant interpretation.

He hates Copenhagen, and defines it this way:
Observers observe. Universality - everything can be described by quantum mechanics. ... No deeper description of reality to be had. ... Quantum systems may very well have properties, but they are just ineffable. For some reason, whatever properties they have, it's fundamentally impossible to represent those properties in language, mathematics, physics, pictures, whatever you like.
He objects to this philosophically, but admits that it is a reasonable position.

The real problem is his "universality" assumption. To him, this means that a time-reversible Schroedinger equation applies to everything, and it enables an observer to reverse an experiment. He goes on to describe a paradox resulting from this reversibility.

I don't remember Bohr or anyone else saying that observers can reverse experiments.

Time reversibility is a bizarre philosophical belief, as I discussed recently. The reasons for believing in it do not have much to do with quantum mechanics. Much of physics, including quantum mechanics, statistical mechanics, and thermodynamics, teaches that time is not reversible.
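
To pin down the term: a law is time-reversible if replacing t with -t carries solutions to solutions. A textbook sketch of the contrast (my illustration, not Leifer's): Newton's second law passes the test, while the diffusion equation of statistical mechanics does not:

```latex
m\,\frac{d^2x}{dt^2} = F(x)
\;\xrightarrow{\;t \to -t\;}\;
m\,\frac{d^2x}{dt^2} = F(x)
\qquad\text{(unchanged: reversible)}
\\[4pt]
\frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2}
\;\xrightarrow{\;t \to -t\;}\;
\frac{\partial u}{\partial t} = -D\,\frac{\partial^2 u}{\partial x^2}
\qquad\text{(sign flips: irreversible)}
```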

Leifer claims to have an argument that Copenhagen is strange, but he really has an argument that time reversibility is strange.

His real problem is that he rejects positivism. To the positivist, a system having ineffable properties is completely acceptable. I do not expect to understand subatomic physics by relating properties to ordinary human experiences of the 5 senses. I think it would be bizarre if an atom had a wave function that perfectly represented reality. Get over it. That is not even what science is all about.

On the subject of time symmetry, Quanta mag article:
“That signifies nothing. For us believing physicists, the distinction between past, present and future is only a stubbornly persistent illusion.”

Einstein’s statement was not merely an attempt at consolation. Many physicists argue that Einstein’s position is implied by the two pillars of modern physics: Einstein’s masterpiece, the general theory of relativity, and the Standard Model of particle physics. The laws that underlie these theories are time-symmetric — that is, the physics they describe is the same, regardless of whether the variable called “time” increases or decreases. Moreover, they say nothing at all about the point we call “now” — a special moment (or so it appears) for us, but seemingly undefined when we talk about the universe at large. The resulting timeless cosmos is sometimes called a “block universe” — a static block of space-time in which any flow of time, or passage through it, must presumably be a mental construct or other illusion.

Monday, August 22, 2016

Race for quantum supremacy

Caltech's Spyridon Michalakis writes:
In short, there was no top-down policy directive to focus national attention and inter-agency Federal funding on winning the quantum supremacy race.

Until now.

The National Science and Technology Council, which is chaired by the President of the United States and “is the principal means within the executive branch to coordinate science and technology policy across the diverse entities that make up the Federal research and development enterprise”, just released the following report:

Advancing Quantum Information Science: National Challenges and Opportunities

The White House blog post does a good job at describing the high-level view of what the report is about and what the policy recommendations are. There is mention of quantum sensors and metrology, of the promise of quantum computing to material science and basic science, and they even go into the exciting connections between quantum error-correcting codes and emergent spacetime, by IQIM’s Pastawski, et al.

But the big news is that the report recommends significant and sustained investment in Quantum Information Science. The blog post reports that the administration intends “to engage academia, industry, and government in the upcoming months to … exchange views on key needs and opportunities, and consider how to maintain vibrant and robust national ecosystems for QIS research and development and for high-performance computing.”

Personally, I am excited to see how the fierce competition at the academic, industrial and now international level will lead to a race for quantum supremacy.
The report says:
While further substantial scale-up will be needed to test algorithms such as Shor’s, systems with tens of entangled qubits that may be of interest for early-stage research in quantum computer science will likely be available within 5 years. Developing a universal quantum computer is a long-term challenge that will build on technology developed for quantum simulation and communication. Deriving the full benefit from quantum computers will also require continued work on algorithms, programming languages, and compilers. The ultimate capabilities and limitations of quantum computers are not fully understood, and remain an active area of research.
With so much govt research money at stake, do you think that any prominent physicist is going to throw cold water on the alleged promise of quantum computers?

I do not think that we will have interesting quantum computers within 5 years, as I do not think that it is even physically possible.
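
For context on what "testing Shor's" involves: the quantum speed-up lives in one subroutine, finding the period r of a^x mod N. Here is a toy classical brute-force version of that subroutine, with made-up small numbers; the quantum Fourier transform is supposed to do this step exponentially faster for large N.

```python
# Toy sketch of the period-finding step at the heart of Shor's
# algorithm, done by classical brute force.
from math import gcd

def find_period(a: int, N: int) -> int:
    """Smallest r > 0 with a**r % N == 1."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

N, a = 15, 7                     # toy values: factor 15 using base 7
r = find_period(a, N)            # r = 4
if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
    print(N, "=", gcd(pow(a, r // 2) - 1, N), "*", gcd(pow(a, r // 2) + 1, N))
# prints: 15 = 3 * 5
```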

Wednesday, August 17, 2016

Philosophers denying causality again

Pseudoscience philosopher Massimo Pigliucci writes:
Talk of causality is all over the place in the so-called “special sciences,” i.e., everything other than fundamental physics (up to and including much of the rest of physics). In the latter field, seems to me that people just can’t make up their minds. I’ve read articles by physicists, and talked to a number of them, and they seem to divide in two camps: those who argue that of course causality is still crucial even at the fundamental level, including in quantum mechanics. And those who say that because quantum mechanical as well as relativistic equations are time symmetric, and the very idea of causality requires time asymmetry (as in the causes preceding the effects), then causality “plays no role” at that level.

Both camps have some good reasons on their side. It is true that the most basic equations in physics are time symmetric, so that causality doesn’t enter into them.
I wonder who could be silly enuf to be in the latter camp, besides Sean M. Carroll.

Ordinary classical wave equations, like those for water or sound waves, are usually time symmetric. Understanding such waves certainly requires causality. Pick up any textbook on waves, and causality is considered all over the place.
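
To see causality at work in a textbook wave problem: the wave equation with a source is time-symmetric, yet we solve it with the retarded solution, in which the field depends only on the source in the past:

```latex
\left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2\right) u = f,
\qquad
u(t,\mathbf{x}) \;=\; \int
\frac{f\!\left(t - |\mathbf{x}-\mathbf{x}'|/c,\; \mathbf{x}'\right)}
{4\pi\,|\mathbf{x}-\mathbf{x}'|}\, d^3x' .
```

The advanced solution, with t + |x-x'|/c, solves the same equation but is discarded on causal grounds. That choice is causality, applied routinely to a time-symmetric equation.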

How could any physicist say that causality plays no role in phenomena described by wave equations?!

How could any philosopher believe such nonsense?
For instance, Brad Skow adopts the “block universe” concept arising from Special Relativity and concludes that time doesn’t “pass” in the sense of flowing; rather, “time is part of the uniform larger fabric of the universe, not something moving around inside it.” If this is correct, then “events do not sail past us and vanish forever; they just exist in different parts of spacetime … the only experiences I’m having are the ones I’m having now in this room. The experiences you had a year ago or 10 years ago are still just as real [Skow asserts], they’re just ‘inaccessible’ because you are now in a different part of spacetime.”
The concepts of time as flowing or not flowing go back to the ancient Greeks, I believe, and have nothing to do with relativity. You can believe that time is part of the larger fabric of the universe whether you believe in relativity or not.

Some descriptions of special relativity use the concept of a block universe, but you can use the same concept to describe pre-relativity physics or any other physics. Either way, time is a parameter in the equations and you can think of it as flowing or not.

Reasonable ppl can disagree with some of my opinions on this blog, but I do not see how anyone with an understanding of freshman physics can say such nonsense. Have these ppl never solved a physics problem involving waves? If they have, how did they do it without considering causality?

Where did they get the idea that wave equations and relativity contradict causality? The opposite is closer to the truth, as causality is crucial to just about everything with waves and relativity.

A comment tries to explain:
Prior to quantum physics, causes equalled forces. That’s it. Newton’s laws. If something changes its trajectory, it is because a force is applied. The force is the “cause” of change. All changes are due to forces. Or am I missing something? I don’t think you need to deal with conservation of energy, etc. Quantum mechanics changes all this, as you note, and some physicists don’t believe forces, as we think about them, exist, they are only there to get equations to work. (it’s all fields, etc).
No, this is more nonsense. A field is just a force on a test particle. It is true that they are not directly observable in quantum mechanics, but that has no bearing on causality. All of these arguments about causality are more or less the same whether considering classical or quantum mechanics.

Monday, August 15, 2016

Confusion about p-values

538.com writes:
To be clear, everyone I spoke with at METRICS could tell me the technical definition of a p-value — the probability of getting results at least as extreme as the ones you observed, given that the null hypothesis is correct — but almost no one could translate that into something easy to understand.

It’s not their fault, said Steven Goodman, co-director of METRICS. Even after spending his “entire career” thinking about p-values, he said he could tell me the definition, “but I cannot tell you what it means, and almost nobody can.” Scientists regularly get it wrong, and so do most textbooks, he said.
Statistician A. Gelman tries to explain p-values here and here.

The simple explanation of p-value is that if it is less than 5%, then your results are publishable.

So you do your study, collect your data, and wonder about whether it is statistically significant evidence for whatever you want to prove. If so, the journal editors will publish it. For whatever historical reasons, they have agreed that p-value is the best single number for making that determination.
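
For readers who want the technical definition made concrete, here is a minimal simulation sketch (the sample size and effect size are invented for illustration): the p-value is just the fraction of datasets generated under the null hypothesis that look at least as extreme as yours.

```python
# Minimal p-value sketch: fraction of null-hypothesis datasets whose
# test statistic is at least as extreme as the observed one.
import numpy as np

rng = np.random.default_rng(0)
n = 50

observed = rng.normal(0.3, 1.0, size=n)   # data with a small true effect
def t_stat(x):
    return abs(x.mean()) / (x.std(ddof=1) / np.sqrt(len(x)))
t_obs = t_stat(observed)

# Simulate the null hypothesis (true mean = 0) many times.
null_t = np.array([t_stat(rng.normal(0.0, 1.0, size=n)) for _ in range(100_000)])
p_value = np.mean(null_t >= t_obs)

print(f"p = {p_value:.3f}")   # publishable, by the 5% convention, iff p < 0.05
```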

Tuesday, August 9, 2016

Dilbert on free will

Here is a segment from a Dilbert cartoon.


It is just a cartoon, but ppl like Jerry Coyne take this argument seriously.

No part of the brain is exempt from the laws of physics. The laws of physics are not determinist, so they have no conflict with free will. It is not clear that determinism even makes any sense.

Civilization requires blaming ppl for misdeeds. You would not be reading this otherwise. So yes, we can blame ppl for what they do whether the laws of physics apply to brains or not. We have to so we can maintain order.

I know ppl who never blame dogs, because they say any bad behavior is a result of bad training, and they never blame cats, because they say cats just follow instincts. Leftists sometimes take another step and say that it is wrong to blame humans. Of course these same leftists are extremely judgmental and are always blaming others for not subscribing to the supposedly correct ideologies.

The Dilbert cartoonist likes to emphasize how ppl are irrational when they vote and make other decisions. Voting has become largely predictable by various demographic indicators, and therefore ppl are not as free as they think. There is some truth to this. But when you argue that some law of physics prevents ppl from making decisions, you are talking nonsense, because that law of physics is not in any standard textbook.

A recent Rationally Speaking philosopher podcast argues:
If someone asks you, "What caused your success (in finance, your career, etc.)?" what probably comes to mind for you is a story about how you worked hard and made smart choices. Which is likely true -- but what you don't see are all the people who also worked hard and made smart choices, but didn't succeed because luck wasn't on their side. In this episode, Julia chats with professor of economics Robert Frank about his latest book, Success and Luck: The Myth of the Modern Meritocracy. They explore questions like: Why do we discount the role of luck in success? Has luck become more important in recent years? And would acknowledging luck's importance sap our motivation to try?
He is another leftist who does not believe in free will. He does not want to blame ppl for failures, and does not want to credit ppl for success. It is all determinism and luck, he says. He agrees with Pres. Barack Obama when he said, "You didn't build that."

Frank is apparently firmly convinced that ppl would more readily accept the progressive agenda if they could be convinced that everything is determinist or random. And also by asking ppl bizarre counterfactuals, like: What would happen if you were born a poor uneducated African?

Saturday, August 6, 2016

Supersymmetry fails again

Dennis Overbye reports for the NY Times:
A bump on a graph signaling excess pairs of gamma rays was most likely a statistical fluke, they said. But physicists have been holding their breath ever since.

If real, the new particle would have opened a crack between the known and the unknown, affording a glimpse of quantum secrets undreamed of even by Einstein. Answers to questions like why there is matter but not antimatter in the universe, or the identity of the mysterious dark matter that provides the gravitational glue in the cosmos. In the few months after the announcement, 500 papers were written trying to interpret the meaning of the putative particle.

On Friday, physicists from the same two CERN teams reported that under the onslaught of more data, the possibility of a particle had melted away. ...

The presentations were part of an outpouring of dozens of papers from the two teams on the results so far this year from the collider, all of them in general agreement with the Standard Model.
The fact is that Physics developed the theory of quantum electrodynamics by 1960 or so, and extended it to the other fundamental forces in the 1970s. It is called the Standard Model, and it explains all the observations.

Nevertheless all the leading physicists keep loudly claiming that something is wrong with the Standard Model, so we need more dimensions and bigger particle accelerators.
For a long time, the phenomenon physicists have thought would appear to save the day is a conjecture known as supersymmetry, which comes with the prediction of a whole new set of elementary particles, known as wimps, for weakly interacting massive particles, one of which could comprise the dark matter that is at the heart of cosmologists’ dreams.

But so far, wimps haven’t shown up either in the collider or in underground experiments designed to detect wimps floating through space. Neither has evidence for an alternative idea that the universe has more than three dimensions of space.
Supersymmetry still has many true believers, but it is contrary to all the experiments. And the theory behind it is pretty implausible also.

The biggest Physics news of the week was this Nature article:
Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware.
The full article is behind a paywall.

You probably thought that 5-qubit quantum computers had already been announced. Yes, many times, but each new one claims to be a more honest 5 qubits than the previous ones.

The great promise, of course, is that this is the great breakthru towards scalability.

Maybe, but I doubt it.
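
One back-of-the-envelope reason for doubt, a naive sketch assuming per-gate errors simply compound (real proposals hope error correction will beat this):

```python
# Naive model: with mean gate fidelity f, a depth-d circuit succeeds
# with probability roughly f**d, so fidelity decays exponentially.
f = 0.98                        # the mean fidelity reported in the paper
for depth in (10, 34, 100, 1000):
    print(f"{depth:4d} gates: {f**depth:.3f}")
#   10 gates: 0.817
#   34 gates: 0.503   <- roughly a coin flip after ~34 gates
#  100 gates: 0.133
# 1000 gates: 0.000
```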

I get comments from people who wonder how I can be skeptical about quantum computers when they have already been demonstrated.

They have not really been demonstrated. Just look at a paper like this. Even as the authors do everything they can to promote the importance of their work, their most excited claim is always that the work might be a step towards the elusive scalable quantum computer. This implies that the scalable quantum computer has not been demonstrated. It may never be.

These experiments are just demonstrations of quantum mechanical properties of atoms. The technology gets better all the time, but they are still just detecting quantum uncertainties of an isolated state.

I'll be looking for Scott Aaronson's comments, but currently he is busy enumerating his errors. Apparently a number of his published theorems and proofs turned out to be wrong.

He is probably not more fallible than other complexity theorists. Just more willing to admit his mistakes.

Tuesday, August 2, 2016

Coyne complains about free will again

Leftist-atheist-evolutionist Jerry Coyne writes:
The boys over at the Discovery Institute (DI) spend a lot of time mocking me online, but I rarely pay attention. ...

You’d think that they’d keep their religious motivations secret, for, after all, Intelligent Design was rejected by the courts ...

Klinghoffer’s new post, “Without free will there is no justice”, excoriates me for my determinism, using as an example my recent post on Manson “girl” Leslie Van Houten. ...

Klinghoffer’s piece is a good example of how many people misunderstand — deliberately or out of ignorance — how “agency” works. In the case of Abrahamic religionists, most (except for Calvinists) have to believe in libertarian free will, ...

Notice how when a leftist does not like someone's conclusions, he makes accusations of bad intentions and ignorance.

Egnor, of course, has no evidence for libertarian free will, and we have lots of evidence against it (neuroscience, psychology experiments, and, most important, the laws of nature). So Egnor simply asserts that what his faith teaches him is also scientifically true:

We are free agents, influenced by our genes and our environment, but are free to choose the course of action we take. Determinism is not true, denial of free will is self-refuting (If we are not free to choose, why assume Coyne’s opinion has any truth value? It’s just a chemical reaction, determined by genes and environment), and our intellect and will are immaterial powers of the soul and are inherently free in the libertarian sense of not being determined by matter.

We are not meat robots. If we were meat robots, why would anyone listen to Jerry Coyne?

Well, Dr. Egnor, maybe they should listen because the two pounds of meat in my skull is better programmed than are the two pounds of Egnorian head-meat. That is, my meat emits statements that comport better with what rational people observe in the Universe than does Egnor’s faith-ridden meat.
So I have no free will to make choices, but I should listen to Coyne because his brain is programmed to be more rational?!

No, that is irrational.

Egnor even tries to reject determinism of human behavior by citing quantum mechanics, which shows how desperate he is:

If you “accept science,” you don’t accept determinism, which has been ruled out in physics by an ingenious series of experiments over the past several decades. It is the consensus of physicists that nature is non-deterministic, in the sense that there are no local hidden variables. Coyne’s rejection of the overwhelming evidence that nature is non-deterministic is a rejection of science, just as his denial of free will is a rejection of common sense and reason.
This presupposes either that quantum-mechanical uncertainty gives us free will, which can’t be true (I won’t insult your intelligence by explaining that again), or that the “Bell’s inequality” experiments showing the lack of local realism on the level of particles show that all forms of natural law determining human behavior are out the window.
It is true that the consensus of physicists is that nature is non-deterministic, and that there are no local hidden variables.

I would not say that determinism has been ruled out, as determinism is too ill-defined to be ruled out. But the laws of physics are not deterministic, and Egnor's point is valid.

Coyne denies that "uncertainty gives us free will" but that is not the argument. Uncertainty is a result of free will, not a cause.

If electrons had free will, what kind of theory could be used to explain them? The answer is quantum mechanics. Other theories (ie, failed theories) do not leave room for free will, but quantum mechanics allows for free will.

Coyne's argument is wrong.

Coyne elaborates:
It illustrates one of the many misconceptions people have about science-based determinism and its rejection of libertarian free will: that under determinism it is useless to try to change anything since everything is preordained by the laws of nature. Well, the last part of that is pretty much correct — save for fundamental indeterminacy due to quantum mechanics — but within that paradigm lies the fact that people’s arguments constitute environmental factors that affect how others behave.

My own example is that you can alter the behavior of a dog by kicking it when it does something you don’t like. (I am NOT recommending this!). After a while the dog, whose onboard computer gets reprogrammed to anticipate pain, will no longer engage in the unwanted behavior.
So science-based determinism says that you cannot voluntarily change anything, except in examples where the cause and effect is so obvious that it cannot be denied.

And you can make choices to influence a dog, but not yourself.

He has a cartoon of a masculine woman kicking a man in the testicles.
Yes, it's determinism all the way down.
I think that this is an allusion to an argument about the Earth being held up by turtles.

I used to think that scientists were much more rational than religious believers.

Saturday, July 30, 2016

More on nonlocality and realism

I mentioned Musser and nonlocality, and Lumo adds:
George Musser identified himself as the latest promoter of the delusion started by John Bell, the delusion saying that the world has to be "non-local" ...

The truth is just the opposite one, of course. Locality works perfectly – at least in non-gravitational context. ...

All the differences between classical physics and quantum mechanics are consequences of the nonzero commutators in quantum mechanics i.e. the uncertainty principle. There are absolutely no other differences between classical physics and quantum mechanics. That fact also means that whenever the commutators between the relevant quantities are zero or negligible, the difference between classical physics and quantum mechanics becomes zero or negligible, too.

The uncertainty principle is the actual reason why it's inconsistent in quantum mechanics to assume that the observables have their values before they're actually observed.
Yes, and that is how the quantum mechanics textbooks have explained it since before EPR, Bohm, and Bell.
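
For concreteness, here is a few-line check of the nonzero commutators in question, using the Pauli spin matrices:

```python
# The spin observables along x, y, z do not commute:
# [sigma_x, sigma_y] = 2i * sigma_z, and cyclically.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

comm = sx @ sy - sy @ sx
print(np.allclose(comm, 2j * sz))   # True: the commutator is nonzero
```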

Here is a new paper on retrocausality in quantum mechanics, and the authors keep talking about how they want to believe in "realism" and reality. That means that physical systems have their values before they're actually observed. Quantum mechanics shows that observables cannot have their values before observations, but realism is the hope that the state can be fully described by other variables before observation.

So realism is not a belief in the real world, but a belief in a mathematical abstraction of reality. This word usage seems peculiar to me, as I would have said that I believe in realism until I found out what they mean by the term. In quantum mechanics, realism has been a dead end for 80 years.

Wednesday, July 27, 2016

Not to reveal to us the real nature of things

Much of the confusion over the interpretation of quantum mechanics concerns whether the mathematical wave functions reveal to us the true (ontic) nature of electrons and photons.

Henri Poincare explained it in his famous 1902 book:
The object of mathematical theories is not to reveal to us the real nature of things; that would be an unreasonable claim. Their only object is to co-ordinate the physical laws with which physical experiment makes us acquainted, the enunciation of which, without the aid of mathematics, we should be unable to effect. Whether the ether exists or not matters little — let us leave that to the metaphysicians; what is essential for us is, that everything happens as if it existed, and that this hypothesis is found to be suitable for the explanation of phenomena. After all, have we any other reason for believing in the existence of material objects? That, too, is only a convenient hypothesis; only, it will never cease to be so, while some day, no doubt, the ether will be thrown aside as useless. [Science and Hypothesis, chap. 12, 1st paragraph]
Yes, I could not say it better today. The object of mathematical theories is to make sense of quantified experiments, not to be a perfect description of the real nature of electrons.

The aether is a convenient hypothesis, because our best theories assume a pervasive and uniform quantum field. Even in a vacuum, it has energy, Lorentz symmetries, gauge fields, and virtual particles. Quantum electrodynamics is all about perturbations to that quantum vacuum. In the early XXc the aether was cast aside as useless but it keeps coming back under different names.

It is foolish to think that the true nature of the electron is its wave function. That is just a mathematical device for predicting observables. Attempts to clarify the nature of electrons with hidden variables, as by Bell and his followers, are even more foolish.

Sunday, July 24, 2016

Musser tries to explain nonlocality

George Musser writes on the Lumo blog about nonlocality:
The situation is nonlocal inasmuch as we are speaking of joint properties of spatiotemporally separated objects. We know the singlet electrons have a total spin of zero, but we cannot ascribe either particle a definite spin in advance of measurement. If you object to the word “nonlocal” in this context, fine. I would also be happy with “nonseparable,” “delocalized,” or “global.” ...

The real issue is how to explain the phenomenology of correlations. I know that Luboš does not think highly of the EPR paper (neither did Einstein), but it is the usual starting point for this discussion, so let us focus on the most solid part of that paper: the dilemma it presents us with. Given certain assumptions, to explain correlated outcomes, we must either assign some preexisting values to the properties of entangled particles or we must imagine action at a distance. Einstein recoiled from the latter possibility — he was committed to (classical) field theory. The former possibility was later ruled out by Bell experiments. So, presumably we need to question one of the assumptions going into the argument, and that’s where we go down the interpretive rabbit hole of superdeterminism, Everettian views, and so forth, none of which is entirely satisfactory, either. We seem to be stuck. ...

If you disagree, fine. Tell me what is going on. Give me a step-by-step explanation of how particle spins show the observed correlations even though neither has a determinate value in advance of being measured.
He is saying that if you want an intuitive understanding of "what is going on", then you have to either accept action at a distance or contradictions with experiment. Both of those are unacceptable.

The way out of this conundrum, as the textbooks have explained for 85 years, is to reject the idea that particle spin can be modeled by classical (pre-quantum) objects. By "what is going on", he means something he can relate to by personal experience. In other words, he wants a classical interpretation.

The classical interpretations are made impossible by the noncommuting observables. Or by Bell's theorem or several other ways the point has been explained.

When you observe a particle's spin, you change its state into something that has a classical interpretation. But just temporarily. If you then measure spin in a different direction, you are back to non-classical behavior.

The supposed nonlocality is just an illusion. The experiments only seem nonlocal if you try to match them to a classical model.
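
As a concrete version of the answer Musser asks for, here is a minimal sketch: quantum mechanics predicts the singlet correlation E(a,b) = -cos(a-b) between detector angles, with nothing passing between the wings. The tension with classical models shows up only in the CHSH combination, which no local classical assignment of preexisting values can push past 2.

```python
# Singlet correlations and the CHSH combination. Any local classical
# model satisfies |S| <= 2; the quantum prediction reaches 2*sqrt(2).
import numpy as np

def E(a, b):
    return -np.cos(a - b)          # quantum prediction for spin correlation

a, a2 = 0.0, np.pi / 2             # Alice's two angle settings
b, b2 = np.pi / 4, 3 * np.pi / 4   # Bob's two angle settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))                      # 2.828... > 2
```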

I don't know why this is so difficult. I am just saying what the textbooks have said for decades.

Thursday, July 21, 2016

Einstein’s process of discovery

Einstein idolizer John D. Norton writes a lot of nonsense about Einstein, with the latest being How Einstein Did Not Discover. It includes:
11. The Power of a Single Experiment

The example of Einstein’s discovery of the light quantum will illustrate another popular myth about what powered Einstein’s discoveries. There is, in each case, a single, perplexing, powerful, decisive, crucial experiment. Only Einstein, it is said, was able to interpret the experiment correctly and arrive at a great discovery.

This myth is best known through the example of the Michelson-Morley experiment. Contrary to many reports, Einstein did not formulate the theory as a direct response to its null outcome. The mistake is an easy one to make. It was long standard for pedagogic accounts of special relativity to begin with an account of the experiment and jump directly to special relativity. The pattern starts with Einstein’s (1907) early review. It introduces the Michelson-Morley experiment and no others in its opening pages. Holton’s (1969) analysis of the myth is standard and includes numerous examples. To it, we should add that the null result of the Michelson-Morley experiment was unhelpful and possibly counter-productive in Einstein’s investigations of an emission theory of light, for the null result is predicted by an emission theory.
As Norton alludes, Einstein's 1907 article and most modern textbooks introduce the Michelson-Morley experiment (MMX) as being crucial to the development of special relativity. The papers that announced the discovery of the FitzGerald contraction, Lorentz transformation, local time, and spacetime geometry all explained these concepts as consequences of MMX.

The null result could be explained by a light emission theory, or by a stationary Earth, or by an aether drag. So MMX alone did not prove special relativity. Other experiments caused the experts to reject those possibilities.
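
For reference, the size of the effect at stake: with arm length L of roughly 11 m, orbital speed v of roughly 30 km/s, and wavelength around 500 nm, a stationary aether predicts a fringe shift of about

```latex
\Delta N \;\approx\; \frac{2L}{\lambda}\cdot\frac{v^2}{c^2}
\;\approx\; \frac{2 \times 11\ \text{m}}{5 \times 10^{-7}\ \text{m}} \times 10^{-8}
\;\approx\; 0.4\ \text{fringes},
```

while the apparatus could resolve a few hundredths of a fringe and saw essentially none.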

Einstein did not cite MMX or those other theories and experiments because his work was derivative. He was just giving a reformulation of Lorentz's theory, but not recapitulating Lorentz's theoretical and experimental arguments.

Einstein historians have to do a funny dance around this subject, because the relativity textbooks don't make any sense. They say that the MMX was crucial, and they say that Einstein invented special relativity, but Einstein denied that MMX had anything to do with his work.

There is a larger reason for denying the importance of MMX. Philosophers and science historians today are anti-positivists and they deny that scientific breakthrus are based on crucial experiments. Relativity was considered the first great breakthru of the XXc, so the positivist-haters need to have some way of saying that it was not driven by experiment.

It seems possible that someone could have predicted special relativity from abstract principles of causality, or from mathematical analysis of electromagnetism. But that is not how it happened. It was positivist analysis of experiments.

Monday, July 18, 2016

Air Force funds quantum computers

NextGov reports:
The Air Force wants white papers that describe new ways quantum computing could help achieve its mission, according to an amended Broad Agency Announcement posted Friday. Eventually, the government could provide a test-bed where a contractor might install, develop and test a quantum computing system, according to the announcement.

Last year, the Air Force announced it had about $40 million available to fund research into, and the eventual maintenance and installation of a quantum system -- a branch of emerging computing technology that relies on the mechanics of atomic particles to process complex equations. ...

The Air Force is among several other federal groups interested in quantum.

Last year, for instance, the Intelligence Advanced Research Projects Activity, which focuses on research and development, said it planned to award a multiyear grant to IBM to build out a component of a quantum computer. A true quantum computer might be useful, IARPA program manager David Moehring told Nextgov then, because it might be applied to complex questions like the "Traveling Salesman Problem" -- what's the best way for a salesman to visit several different locations?
$40M is not much money to the Air Force, but it shows how money is pouring into the field.

Most quantum computing projects are not even very expensive, by the standards of modern physics experiments.

I liked this comment:
String theory, multiple universes, complexity, quantum teleportation... these are to Physics what Division I football is to college, which is to say, it sells tickets and opens purse strings. No one is going to buy a book on Newtonian physics and relive their junior year in high school. But let Brian Greene write something crazy and out there about a "Holographic Universe" or somesuch and the peeps will scoop it up, and maybe even decide to become physics and math majors, and there are lots of worse results than that. So let the alumni donate for the football team, and let the googley-eyed high schoolers all plan on high-paying and fulfilling careers as Quantum Mechanics. It puts butts in the seats...
So do most physicists realize that 90% of the public image of Physics is garbage? But they quietly go along with it because it keeps the funding dollars coming in?

Sometimes I think that I am just posting the obvious on this blog. Maybe everyone knows it, but cannot say. I can say it because I am not part of the Physics money machine.

Tuesday, July 12, 2016

Google wants us to worry about quantum computing

Google is trying to make us nervous about quantum computing:
Google is working on safeguarding Chrome against the potential threat of quantum computers, the company announced today.
At least they admit that quantum computers may be impossible:
Quantum computers exist today but, for the moment, they are small and experimental, containing only a handful of quantum bits. It's not even certain that large machines will ever be built, although Google, IBM, Microsoft, Intel and others are working on it. ... quantum computers could undermine the security of the entire internet.
Jonah Lehrer is back with a new book, after a spectacular rise and fall as a science writer. Before his fall, I accused him of fabricating an Einstein quote, but no one cared about that.

Thursday, July 7, 2016

When evidence is too good to be true

Phys.org reported this in January:
Under ancient Jewish law, if a suspect on trial was unanimously found guilty by all judges, then the suspect was acquitted. This reasoning sounds counterintuitive, but the legislators of the time had noticed that unanimous agreement often indicates the presence of systemic error in the judicial process, even if the exact nature of the error is yet to be discovered. They intuitively reasoned that when something seems too good to be true, most likely a mistake was made.

In a new paper to be published in The Proceedings of The Royal Society A, a team of researchers, Lachlan J. Gunn, et al., from Australia and France has further investigated this idea, which they call the "paradox of unanimity."

"If many independent witnesses unanimously testify to the identity of a suspect of a crime, we assume they cannot all be wrong," coauthor Derek Abbott, a physicist and electronic engineer at The University of Adelaide, Australia, told Phys.org. "Unanimity is often assumed to be reliable. However, it turns out that the probability of a large number of people all agreeing is small, so our confidence in unanimity is ill-founded. This 'paradox of unanimity' shows that often we are far less certain than we think."

The researchers demonstrated the paradox in the case of a modern-day police line-up, in which witnesses try to identify the suspect out of a line-up of several people. The researchers showed that, as the group of unanimously agreeing witnesses increases, the chance of them being correct decreases until it is no better than a random guess.

This is an important point, and a paradox. If someone tells you that scientists are unanimous about global warming, vaccine policy, cosmic inflation, or Donald Trump, you should be suspicious.

Of course the textbooks are unanimous on many things, such as energy conservation. So we should not reject all that textbook knowledge. But most of those things only got into the textbooks after some healthy debate about the pros and cons.
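
A minimal Bayesian sketch of the effect, with invented numbers: suppose each witness is 90% accurate when the procedure is sound, but with 1% prior probability the procedure is systematically biased and forces unanimous agreement regardless.

```python
# As unanimity grows, systematic bias becomes the better explanation,
# so confidence that the unanimous identification is correct collapses.
p_correct = 0.90    # witness accuracy, given a sound procedure
p_biased = 0.01     # prior probability of systematic error

for n in (1, 10, 30, 100):
    sound = (1 - p_biased) * p_correct ** n   # sound procedure, all n correct
    biased = p_biased * 1.0                   # bias forces unanimity
    posterior = sound / (sound + biased)
    print(f"{n:3d} unanimous witnesses -> P(correct) = {posterior:.3f}")
# 1 -> 0.989, 10 -> 0.972, 30 -> 0.808, 100 -> 0.003
```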

Wednesday, July 6, 2016

Comparing science to poetry

Philosophers sometimes complain that they get no respect from scientists.

The NY Times has a running series of essays, and in the latest one a philosopher denies the scientific method because he says science is like poetry:
In 1970, I had the chance to attend a lecture by Stephen Spender. He described in some detail the stages through which he would pass in crafting a poem. He jotted on a blackboard some lines of verse from successive drafts of one of his poems, asking whether these lines (a) expressed what he wanted to express and (b) did so in the desired form. He then amended the lines to bring them closer either to the meaning he wanted to communicate or to the poetic form of that communication.

I was immediately struck by the similarities between his editing process and those associated with scientific investigation and began to wonder whether there was such a thing as a scientific method. Maybe the method on which science relies exists wherever we find systematic investigation. In saying there is no scientific method, what I mean, more precisely, is that there is no distinctly scientific method.

There is meaning, which we can grasp and anchor in a short phrase, and then there is the expression of that meaning that accounts for it, whether in a literal explanation or in poetry or in some other way. Our knowledge separates into layers: Experience provides a base for a higher layer of more conceptual understanding. This is as true for poetry as for science. ...

James Blachowicz is a professor emeritus of philosophy at Loyola University Chicago
Science finds objective truths about the world. Poetry just expresses thoughts in an entertaining way. If he cannot see the difference, he deserves no respect.