Friday, October 20, 2017

Coyne gives concise argument against free will

Jerry Coyne complains that his fellow leftist-atheist-scientists do not necessarily reject free will, and explains on his blog:
Seriously though, Dr. Coyne could you point me to some post of yours or some articles that clearly explain the determinist position (I’m not even sure I am describing it accurately here!). ...

The best answer I can give (besides reading Sean Carroll’s “The Big Picture”) is to say that our brain is made of matter, and matter follows the laws of physics. Insofar as our neurons could behave fundamentally unpredictably, if affected by quantum mechanics in their firing, that doesn’t give us a basis for agency either.

Since our behaviors all come from our material bodies and brains, which obey the laws of physics, which by and large are deterministic on a macro scale, then our behaviors at any one instant are determined as well by the configuration of molecules in the Universe.

All you have to do is accept that our bodies and brains are made of stuff, and stuff on the macro scale is deterministic in its behavior. Even compatibilists accept these points, as well as the fundamental determinism (though frequent unpredictability) of our behavior.

See the book Free Will by Sam Harris which simply explains why we have no basis, in the form of data, to conclude that we can freely make decisions.
I have criticized him before, but his conciseness this time shows his errors more clearly.

Yes, the laws of physics are "by and large ... deterministic on a macro scale". So is human behavior. But macro physics cannot predict with perfect precision, and human behavior also deviates from predictions. So nothing about macro physics contradicts free will.
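The point that macro physics can be deterministic yet fail to predict with perfect precision has a standard toy illustration: the logistic map. The rule is completely deterministic, but a tiny error in the initial condition grows until prediction is useless. A minimal sketch (the parameter r = 4 and the starting values are arbitrary choices for illustration):

```python
# The logistic map x -> r*x*(1-x) at r = 4 is deterministic but chaotic:
# two starting points differing by a tiny "measurement error" separate
# exponentially, so long-range prediction fails despite determinism.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # same state, known only to ten decimal places
max_sep = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

# Somewhere along the run, the 1e-10 initial error has grown to order one.
print(max_sep)
```

Determinism of the rule and practical unpredictability of the trajectory coexist, which is all the argument above needs.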

Neurons certainly are affected by quantum mechanics. Both agency and quantum mechanics lead to unpredictability. So why can't one be related to the other?

Saying that we have "no data" to support freely-made decisions is just nutty. Everyone makes decisions every day. Maybe some of these decisions are illusory somehow, but they are certainly data in favor of decisions.

Free will is mostly a philosophical issue, and you can believe in it or not. I am just rebutting what is supposedly a scientific argument against it.

Wednesday, October 18, 2017

Unanswered questions are not even science

Here is a BigThink essay:
Here, we look at five of the biggest unanswered questions in science. There is no reason to think that we won’t get the answers to these questions eventually, but right now these are the issues on the cutting edge of science.
What are the boundaries of the Universe?

The universe is expanding, which we’ve known for a while. But where is, or what is, the boundary? ...

Thanks to cosmic background radiation and the path it takes, scientists currently believe the universe is flat — and therefore infinite. However, if there is even a slight curve to the universe, one smaller than the margin of error in their observations, then the universe would be a sphere. Similarly, we can’t see anything past the observable universe, so we can rely only on our math to say if the universe is likely to be finite or infinite. The final answer on the exact size of the cosmos may never be knowable.
No, a flat universe does not imply an infinite universe. I don't see how anything would prove an infinite universe, and I am not sure it makes any sense to talk about an infinite universe.
What is consciousness?

While the question of what consciousness is exactly belongs to philosophy, the question of how it works is a problem for science.
It is not clear that consciousness has a scientific definition. If it did, then we could ask whether computers are conscious or will ever be conscious. It seems to me that some day computers will be able to give an appearance of consciousness, but it is not clear that we will ever have a way of saying whether or not they are really conscious.
What is dark energy?

The universe is expanding, and that’s getting faster all the time. We say that the cause of the acceleration is “Dark Energy”, but what is it? Right now, we don’t really have any idea.
It is possible that we already know all we will ever know about dark energy. Quantum mechanics teaches that systems always have a zero point energy. Maybe the dark energy is just the zero point energy of the universe.
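For reference, the standard textbook instance of zero-point energy is the quantum harmonic oscillator, whose ground state energy is strictly positive:

```latex
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad
E_0 = \tfrac{1}{2}\hbar\omega > 0 .
```

Even in its lowest state the system carries energy, which is the sense in which quantum systems "always have a zero point energy."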
What happened before the Big Bang?

The Big Bang is often thought of as an explosion which caused the beginning of our universe. However, it is better understood as the point where space began to expand and the current laws of physics begin. There was no explosion. Working backwards from now, we can show that all the matter in the universe was in one place at the same time. At that moment, the universe began to expand and the laws of nature, as we understand them, begin to take shape. But what happened before that?
Again, why is this even a scientific question? Maybe we will have theories for what happened before the big bang, and some ppl already have such theories, but there is no way of testing them. It is like theorizing about alternate universes.
Is there a limit to computing power?

Right now, many people subscribe to Moore’s law, the notion that there is a constant rate to how cheap and how powerful computer chips become over time. But what happens when you can’t fit any more elements onto a chip? Moore himself suggested that his law will end in 2025 when transistors can’t be made any smaller, saying that we will be forced to build larger machines to get more computing power after that. Others look to new processing techniques and exotic materials to make them with, to continue the growth in power.
This is the closest to a scientific question. There are some theoretical limits to computing power, and there are likely to be some practical limits also.
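As a rough illustration of what Moore's law asserts, here is a back-of-envelope extrapolation. The 1971 baseline of about 2,300 transistors is the Intel 4004; the two-year doubling time is the usual statement of the law:

```python
# Back-of-envelope Moore's law: transistor counts doubling roughly every
# two years, starting from the Intel 4004's ~2,300 transistors in 1971.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Extrapolating to 2017 gives about 1.9e10 transistors -- roughly the
# scale of the largest chips of that era, so the law held up well.
print(f"{transistors(2017):.1e}")
```

The question in the essay is whether this exponential can continue once transistors hit atomic scales.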

Peter Woit informs us:
The traditional number of 10^500 string theory vacua has now been replaced by 10^272,000 (and I think this is per geometry. With 10^755 geometries the number should be 10^272,755). It’s also the case that “big data” is now about the trendiest topic around, and surely there are lots of new calculational techniques available.
This sounds like a joke, but is not.
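The exponent arithmetic in the quoted parenthetical checks out: multiplying counts of vacua means adding exponents, since 10^a × 10^b = 10^(a+b).

```python
# 10^a * 10^b = 10^(a + b): multiplying 10^272,000 vacua per geometry
# by 10^755 geometries just adds the exponents.
vacua_per_geometry_exp = 272_000
geometries_exp = 755
total_exp = vacua_per_geometry_exp + geometries_exp
print(total_exp)  # 272755, i.e. 10^272,755 vacua in total
```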

Sunday, October 15, 2017

50 year anniversary of Weinberg's famous paper

Peter Woit writes:
The 50th anniversary of electroweak unification is coming up in a couple days, since Weinberg’s A Model of Leptons paper was submitted to PRL on October 17, 1967. For many years this was the most heavily cited HEP paper of all time, although once HEP theory entered its “All AdS/CFT, all the time” phase, at some point it was eclipsed by the 1997 Maldacena paper (as of today it’s 13118 Maldacena vs. 10875 Weinberg). Another notable fact about the 1967 paper is that it was completely ignored when published, only cited twice from 1967 to 1971.

The latest CERN Courier has (from Frank Close) a detailed history of the paper and how it came about. It also contains a long interview with Weinberg. It’s interesting to compare his comments about the current state of HEP with the ones from 2011 (see here), where he predicted that “If all they discover is the Higgs boson and it has the properties we expect, then No, I would say that the theorists are going to be very glum.”
It is strange to make a big deal out of a 1967 paper, when no one thought it was important at the time.

Usually, if someone solves some big scientific problem, he has evidence in his paper, he writes followup papers, he gives talks on it, others get persuaded, etc. Weinberg's paper was not particularly original, influential, or important. It got cited a lot later, as it became a popular paper to cite when mentioning the Standard Model.

It appears to me that the Higgs mechanism and the renormalizability were much more important, as explained here:
Meanwhile, in 1964, Brout and Englert, Higgs, Kibble, Guralnik and Hagen had demonstrated that the vector bosons of a Yang–Mills theory (one that is like QED but where attributes such as electric charge can be exchanged by the vector bosons themselves) put forward a decade earlier could become massive without spoiling the fundamental gauge symmetry. This “mass-generating mechanism” suggested that a complete Yang–Mills theory of the strong interaction might be possible. ...

Today, Weinberg’s paper has been cited more than 10,000 times. Having been cited but twice in the four years from 1967 to 1971, suddenly it became so important that researchers have cited it three times every week throughout half a century. There is no parallel for this in the history of particle physics. The reason is that in 1971 an event took place that has defined the direction of the field ever since: Gerard ’t Hooft made his debut, and he and Martinus Veltman demonstrated the renormalisability of spontaneously broken Yang–Mills theories.
Weinberg and 2 others got the Nobel Prize in 1979, 't Hooft and Veltman in 1999, and Englert and Higgs in 2013.

Thursday, October 12, 2017

Experts dispute meaning of Bell's theorem

I mentioned 'tHooft's new paper on superdeterminism, and now Woit links to an email debate between 'tHooft and philosopher of physics Tim Maudlin over it and Bell's Theorem.

The debate is very strange. First of all, these two guys are extremely smart, and are two of the world's experts on quantum mechanics. And yet they disagree so much on the basics that Maudlin accuses 'tHooft of not understanding Bell's theorem, and 'tHooft accuses Maudlin of sounding like a crackpot.

Bell's theorem is fairly elementary. I don't know how experts can get it wrong.

Maudlin says Bell proved that the quantum world is nonlocal. 'tHooft says that Bell proved that the world is either indeterministic or superdeterministic. They are both wrong.

I agree with Maudlin that believing in superdeterminism is like believing that we live in a simulation. Yes, it is a logical possibility, but it is very hard to take the idea seriously.

First of all, Bell's theorem is only about local hidden variable theories being incompatible with quantum mechanics. It doesn't say anything about the real world, except to reject local hidden variable theories. It is not even particularly important or significant, unless you have some sort of belief or fondness for hidden variable theories. If you don't, then Bell's theorem is just an obscure theorem about a class of theories that do not work. If you only care about what does work, then forget Bell.

I explained here that Bell certainly did not prove nonlocality. He only showed that a hidden variable theory would have to be nonlocal.

Sometimes people claim that Bell should have gotten a Nobel prize when experiments confirmed his work. If Bell were right about nonlocality, and if the experiments confirmed nonlocality, then I would agree. But Bell was wrong about nonlocality, and it is highly likely that the Nobel committee recognized that.

At most, Bell proved that if you want to keep locality, then you have to reject counterfactual definiteness. This should be no problem, as mainstream physicists have rejected it since about 1930.
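To make this reading of Bell concrete, here is a minimal sketch of the CHSH form of the theorem. If measurement outcomes are predetermined ±1 values (a local hidden variable model with counterfactual definiteness), the CHSH combination of correlations can never exceed 2, while quantum mechanics predicts 2√2 for suitable settings. This is an illustrative enumeration, not anyone's actual derivation:

```python
import itertools
import math

# Local hidden variables with counterfactual definiteness: each particle
# carries predetermined +/-1 answers for both of its possible measurement
# settings. Enumerate all 16 such assignments and maximize the CHSH sum.
best = 0
for a1, a2, b1, b2 in itertools.product([-1, 1], repeat=4):
    # CHSH combination of the four setting-pair correlations.
    s = a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2
    best = max(best, abs(s))

print("local hidden variable bound:", best)    # 2
print("quantum prediction:", 2 * math.sqrt(2)) # ~2.828, violating the bound
```

Experiments see values above 2, so the predetermined-outcome assumption fails; what you then give up, locality or counterfactual definiteness, is exactly what the two sides of this debate disagree about.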

I am baffled as to how these sharp guys could have such fundamental disagreement on such foundational matters. This is textbook knowledge. If we can't get a consensus on this, then how can we get a consensus on global warming or anything else?

Update: Lubos Motl piles on:
Like the millions of his fellow dimwits, Maudlin is obsessed with Bell and his theorem although they have no implications within quantum mechanics. Indeed, Bell's inequality starts by assuming that the laws of physics are classical and local and derives some inequality for a function of some correlations. But our world is not classical, so the conclusion of Bell's proof is inapplicable to our world, and indeed, unsurprisingly, it's invalid in our world. What a big deal. The people who are obsessed with Bell's theorem haven't made the mental transformation past the year 1925 yet. They haven't even begun to think about actual quantum mechanics. They're still in the stage of denial that a new theory is needed at all.
I agree with this. Bell's theorem says nothing about quantum mechanics, except that it helps explain why QM cannot be replaced with a classical theory.
Free will (e.g. free will of a human brain) has a very clear technical, rational meaning: When it exists, it means that the behavior affected by the human brain cannot be determined even with the perfect or maximum knowledge of everything that exists outside this brain. So the human brain does something that isn't dictated by the external data. For an example of this definition, let me say that if a human brain has been brainwashed or equivalently washed by the external environment, its behavior in a given situation may become completely predictable, and that's the point at which the human loses his free will.

With this definition, free will simply exists, at least at a practical level. According to quantum mechanics, it exists even at the fundamental level, in principle, because the brain's decisions are partly constructed by "random numbers" created as the random numbers in outcomes of quantum mechanical measurements.
I agree with this also. No one can have perfect or maximum knowledge, so free will is not really a scientific concept, but it clearly exists on a practical level, except for brainwashed ppl.

But I don't agree with his conclusion:
Maudlin ends up being more intelligent in these exchanges than the Nobel prize winner. But much of their discussion is a lame pissing contest in the kindergarten, anyway. There are no discussions of the actual quantum mechanics with its complex (unreal) numbers used as probability amplitudes etc.
No, 'tHooft's position is philosophically goofy but technically correct. Maudlin accepts fallacious arguments given by Bell, when he says:
Bell was concerned not with determinism but with locality. He knew, having read Bohm, that it was indeed possible to retain determinism and get all the predictions of standard non-Relativistic quantum theory. But Bohm's theory was manifestly non-local, so what he set about to investigate was whether the non-locality of the theory could be somehow avoided. He does not *presume* determinism in his proof, he rather *derives* determinism from locality and the EPR correlations. Indeed, he thinks that this step is so obvious, and so obviously what EPR did, that he hardly comments on it. Unfortunately his conciseness and reliance on the reader's intelligence have had some bad effects.

So having *assumed* locality and *derived* determinism, he then asks whether any local (and hence deterministic) theory can recover not merely the strict EPR correlations but also the additional correlations mentioned in his theorem. And he finds they cannot. So it is not *determinism* that has to be abandoned, but *locality*. And once you give up on locality, it is perfectly possible to have a completely deterministic theory, as Bohm's theory illustrates.

The only logically possible escape from this conclusion, as Bell recognized, is superdeterminism: the claim that the polarizer settings and the original state of the particles when they were created (which may be millions of years ago) are always correlated so the apparatus setting chosen always corresponds—in some completely inexplicable way—to the state the particles happen to have been created in far away and millions of years ago.
No, Bell and Maudlin are just wrong about this. All of that argument also assumes a hidden variable theory, and therefore has no applicability to quantum mechanics, as QM (and all of physics since 1930) is not a hidden variable theory. If Bell and Maudlin were correct about this, then Bell (along with Clauser and Aspect) would have gotten the Nobel prize for proving nonlocality. 'tHooft is correct in accepting locality, and denying that Bell proved nonlocality.

Wednesday, October 4, 2017

Nobel prize for gravitational waves

The NY Times reports:
Rainer Weiss, a professor at the Massachusetts Institute of Technology, and Kip Thorne and Barry Barish, both of the California Institute of Technology, were awarded the Nobel Prize in Physics on Tuesday for the discovery of ripples in space-time known as gravitational waves, which were predicted by Albert Einstein a century ago but had never been directly seen. ...

Einstein’s General Theory of Relativity, pronounced in 1916, suggested that matter and energy would warp the geometry of space-time the way a heavy sleeper sags a mattress, producing the effect we call gravity. His equations described a universe in which space and time were dynamic. Space-time could stretch and expand, tear and collapse into black holes — objects so dense that not even light could escape them. The equations predicted, somewhat to his displeasure, that the universe was expanding from what we now call the Big Bang, and it also predicted that the motions of massive objects like black holes or other dense remnants of dead stars would ripple space-time with gravitational waves.
These articles cannot resist making it all about Einstein. But Einstein did not really believe in the geometry of space-time, or in black holes, or in the Big Bang, or in gravitational waves.

You might say: "Who cares what Einstein believed? His equations imply those things, whether he believed in them or not."

I would not say that they are his equations. Grossmann and Levi-Civita convinced him to use the Ricci tensor, and the equation is Ricci=0. Einstein's contribution was minor.

Einstein is mainly famous because he is credited for special relativity, and the only reason he is credited for that is that supposedly Lorentz and Poincare had some faulty beliefs about the interpretation of the equations. Everyone agrees that Lorentz and Poincare had all the equations before Einstein. So if the credit is based on who had the equations, not who had the proper beliefs, then Einstein should get no credit for special relativity. (I say that Einstein was the one with the faulty beliefs about special relativity, but most ppl do not agree with me on that point.)

Anyway, congratulations to the Nobel winners and the LIGO team. It is nice to see a prize given within a couple of years of the discovery being made.

Monday, October 2, 2017

Professor baffled by rational voters

Jerry Coyne is a popular leftist-atheist-evolutionist blogger. His views are fairly typical for a retired professor in that category, and maybe even more sensible than most, with criticisms of the Regressive Left and with evolution arguments that are firmly grounded in science. But he is completely baffled about votes for Donald Trump:
I doubt there’s anyone on this website who voted for Trump last November—or, if they did, they’re keeping it quiet.  And most of us, including me, think that those who did vote for The Donald were irrational. My take was that these people, blinded by their bigotry and nativism, simply voted against their own interests, thereby shooting themselves in the foot. In other words, their actions were irrational.

But Keith Stanovich, a professor emeritus of applied psychology and human development at the University of Toronto, disagrees. He says that there’s no obvious reason why Trump voters were irrational, and he’s an expert on rationality and cognitive science. (His last book, The Rationality Quotient, written with Richard West and Maggie Toplak, is an analysis of cognitive thinking and of how to construe “rationality”.)  In a new article in Quillette, “Were Trump voters irrational?“, Stanovich, using several ways to conceive of “rationality”, says “no.”
He goes on to explain some arguments for Trump voters being rational.

This is bewildering. Coyne is obviously a smart guy. Trump's campaign took off over 2 years ago. His speeches make it very clear where he stands. Since he had no endorsements, he won by persuading 60 million voters of his message.

I may have to revise some of my opinions about the rationality of scientists. I have always thought that if a man is smart enuf to understand some advanced scientific specialty, then he is also smart enuf to understand more trivial matters. But how can I explain academic misunderstandings of Pres. Trump?

Coyne does not believe in free will. I have criticized him for that, such as in Aug 2016, Sept 2016, and July 2017. Given that, I am not sure why he thinks any voters are rational. To him, the election outcome is predetermined, and he has no ability to influence it, or even to decide his own vote. He rejects the notion that humans have moral responsibility for their actions. He complains about funding for Christian free will beliefs.

And somehow Coyne is the rational one, and 60 million Trump voters are not.

Meanwhile, Scott Aaronson tries to show off his rationality about IQ tests:
I know all the studies that show that IQ is highly heritable, that it’s predictive of all sorts of life outcomes, etc. etc. I’m also aware of the practical benefits of IQ research, many of which put anti-IQ leftists into an uncomfortable position: for example, the world might never have understood the risks of lead poisoning without studies showing how they depressed IQ. And as for the thousands of writers who dismiss the concept of IQ in favor of grit, multiple intelligences, emotional intelligence, or whatever else is the flavor of the week … well, I can fully agree about the importance of the latter qualities, but cannot go along with many of those writers’ barely-concealed impulse to lower the social status of STEM nerds even further, or to enforce a world where the things nerds are good at don’t matter. ...

On the other hand … I was given one official IQ test, when I was four years old, and my score was about 106. The tester earnestly explained to my parents that, while I scored off the chart on some subtests, I completely bombed others, and averaging yielded 106.
As an example of what he got wrong, he said that he might not call for help if his neighbor's house was burning down!

Sometimes, I am not sure if he is joking or not. A smart 4yo kid would understand that a house fire is dangerous. It seems plausible to me that Aaronson showed high mental skills in some areas at age 4, but not in other areas.