Sunday, November 23, 2014

Attack on free will is misunderstood Marxism

I have wondered how American professors could adopt such a foolish disbelief in free will, and maybe this is a clue. It is rooted in misunderstood Marxism!

The New Yorker reports
[B. F.] Skinner was enthralled. Two years after reading the Times Magazine piece, he attended a lecture that Pavlov delivered at Harvard and obtained a signed picture, which adorned his office wall for the rest of his life. Skinner and other behaviorists often spoke of their debt to Pavlov, particularly to his view that free will was an illusion, and that the study of human behavior could be reduced to the analysis of observable, quantifiable events and actions.

But Pavlov never held such views, according to “Ivan Pavlov: A Russian Life in Science” (Oxford), an exhaustive new biography ...

Pavlov’s research originally had little to do with psychology; it focussed on the ways in which eating excited salivary, gastric, and pancreatic secretions. ... That research won him the 1904 Nobel Prize in Physiology or Medicine. But a dog’s drool turned out to be even more meaningful than he had first imagined: it pointed to a new way to study the mind, learning, and human behavior. ...

The Soviets came to regard Pavlov as a scientific version of Marx. The comparison could not entirely have pleased Pavlov, who rebelled at the “divine” authority accorded Marx (“that fool”) and denied that his own “approach represents pure materialism.” Indeed, where others thought that the notion of free will would come to be discarded once we had a full understanding of how the mind worked, Pavlov was, at least at times, inclined to think the opposite. “We would have freedom of the will in proportion to our knowledge of the brain,” he told Gantt in 1927, just as “we had passed from a position of slave to a lord of nature.”

That year, Stalin began a purge of intellectuals. Pavlov was outraged. At a time when looking at the wrong person in the wrong way was enough to send a man to the gulag, he wrote to Stalin saying that he was “ashamed to be called a Russian.”
I realize that people doubted free will before this. Wikipedia says Bertrand Russell rejected free will along with Christianity as a teenager (in around 1890), but also says that he was a big advocate of freedom of thought all his life. I don't know how he reconciled that.

In 2002, a survey ranked Skinner as the most influential psychologist of the 20th century. He denied free will. His brand of determinism was based on behavior, while others focused on DNA and genes. Marxism was not necessarily opposed to free will, but it very much advocated a sort of historical determinism. They all seemed to think that 20th-century science was going to make humanity so predictable that there would be no room left for free will.

I am just trying to understand what would lead leftist Harvard professors to reject free will. This is just a clue, and I am sure it is an incomplete explanation.

Friday, November 21, 2014

Free will is stochastic

Leftist-atheist-evolutionist Jerry Coyne attacks neuroscientist Michael Gazzaniga on the subject of free will:
Gazzaniga’s thesis is that, although determinism reigns at the brain level, so that our actions are determined in advance (though not 100% predictable), humans nevertheless still have free will and moral responsibility. In other words, he’s a compatibilist. Compatibilism is, of course, the notion that “free will” can still exist despite physical determinism of our behaviors, including “choice”. ...

As an incompatibilist, I reject the notion that humans have moral responsibility for their actions, since the concept of “moral responsibility” involves “ability to choose otherwise.” ...

There are dozens of different (and sometimes incompatible!) ways to define “free will” to make it compatible with determinism, which leads me to suspect that compatibilists are like theologians, who redefine God so it always remains compatible with the latest findings of science ...

Gazzaniga’s whole thesis is undercut by this misguided statement: “My contention is that ultimately responsibility is a contract between two people rather than a property of a brain, and determinism has no meaning in this context.” ...

And let me say this one more time: philosophers who are truly concerned with changing society based on reason wouldn’t be engaged in compatibilism, they’d be engaged in working out the consequences of determinism, especially its implications for how we reward and punish people.
It sure is funny how he can reject free will, and still be full of opinions on what everybody should be doing. Rejecting personal moral responsibility is too nutty for this blog, but I want to address some of his dubious scientific assertions.
One of the most obvious resemblances of theology to compatibilism is the continual redefinition of “free will” so that (like God) it’s always preserved despite scientific advances. When Libet and Soon et al. showed that they could predict a person’s behavior several seconds in advance of that person’s conscious decision, the compatibilists rushed to save their definition, declaring that these experiments are completely irrelevant to the notion of free will. They’re not. For if free will means anything, it means that our choices are coincident with our consciousness of making them (to libertarians, our consciousness makes those choices, and we could have chosen otherwise).
John Maynard Keynes (and others) have been quoted as saying:
When my information changes, I alter my conclusions. What do you do, sir?
So there is nothing wrong with theologians revising their views based on scientific evidence. Now that I have seen Libet's experiments, I would not be surprised at all if my urge to eat a cheeseburger was detectable in my brain before I was consciously aware of it. But I still have the free will to choose to eat or not eat that cheeseburger.

Coyne says that he follows this Anthony R. Cashmore paper:
Hence, the popular debate concerning the relative importance of genes and environment on behavior, is commonly inadequate for two reasons: both because it ignores the question of responsibility (or lack of) and because of the additional stochastic component that influences biology (12). ...

The introduction of stochasticism would appear to eliminate determinism. However there are three additional points that need to be addressed here. The first point is that, at least in some instances, what at first glance may appear to be stochastic might simply reflect microenvironmental differences and may not be the direct consequence of some inherent stochastic property of atomic particles. The second point is that some physicists, for example ’t Hooft (14), do not necessarily accept the apparent unpredictability associated with the quantum mechanical view of matter (It was concern about this unpredictability that prompted Einstein to offer the viewpoint that “God does not play dice”). Finally, even if the properties of matter are confirmed to be inherently stochastic, although this may remove the bugbear of determinism, it would do little to support the notion of free will: I cannot be held responsible for my genes and my environment; similarly, I can hardly be held responsible for any stochastic process that may influence my behavior! ...

I believe that free will is better defined as a belief that there is a component to biological behavior that is something more than the unavoidable consequences of the genetic and environmental history of the individual and the possible stochastic laws of nature.
It is true that there is a split among physicists about that "stochastic component". Einstein was an ordinary determinist, with his beliefs backed by religion more than science, and rejected quantum mechanics for that reason. 't Hooft has expressed sympathy for superdeterminism, an extreme view. Most physicists seem to be persuaded that quantum mechanics proves that nature is inherently random. A few deny randomness by supporting the many-worlds or Bohmian interpretations, and all the unobservable ghosts that go along with those views.

I stick to the hard science here, and on this point a lot of smart people are seriously confused. A stochastic process is just one that is parameterized by some measure space and whose time evolution is not being modeled deterministically.

Unless you are modeling my urges for cheeseburgers, my appetite is a stochastic process. By definition. Saying that it is stochastic does not rule out the idea that I am intentionally choosing that burger.
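To make the point concrete, here is a minimal sketch of how an outside observer would model such a choice. The probability is a made-up, illustrative number; note that nothing in the model says anything about the mechanism behind the choice, intentional or not.

```python
import random

random.seed(42)  # fixed seed so this sketch is reproducible

# External model of a diner's choice as a Bernoulli random variable.
# The probability 0.7 is an assumed, illustrative number.
def observed_choice(p_eat=0.7):
    """Model: the cheeseburger gets eaten with probability p_eat."""
    return random.random() < p_eat

# The model predicts frequencies over many occasions, not any one choice.
trials = [observed_choice() for _ in range(10000)]
frequency = sum(trials) / len(trials)  # close to 0.7
```

The model is a perfectly good stochastic process whether the diner's choices are intentional, determined, or random.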

Certain quantum mechanical experiments, like radioactive decay or the Stern–Gerlach experiment, are stochastic processes according to state-of-the-art quantum mechanics. That just means that we can predict certain statistical outcomes, but not every event. Whether these systems are truly deterministic, we do not know, and it is not clear that such determinism is really a scientific question.
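Radioactive decay illustrates this split between statistics and individual events. A toy sketch, with an arbitrary assumed decay constant:

```python
import random

random.seed(0)

# Decay times drawn from the exponential distribution that quantum
# mechanics predicts. The decay constant is an arbitrary choice
# (units of inverse time).
decay_constant = 1.0
times = [random.expovariate(decay_constant) for _ in range(100000)]

# The statistics are predictable: the mean lifetime approaches
# 1/decay_constant as the sample grows.
mean_lifetime = sum(times) / len(times)
```

The distribution is predicted exactly; when any single atom decays is not.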

The nature-nurture folks are fond of twin studies, where they separate influences into genetic (aka heritable), environmental (aka shared environment), and stochastic (aka non-shared environment) factors. Until these studies, most human behaviors were thought to be mostly environmental. These studies consistently show that they are about half genetic and half stochastic, with very little attributable to the (shared) environment. That is, when two kids grow up in the same household and community, their similarities are attributable to their genes (if siblings or identical twins) or to unknown factors that have nothing to do with that household and community. A 2000 review (pdf) says, "When genetic similarity is controlled, siblings often appear no more alike than individuals selected at random from the population." These findings are hard to accept, as it seems obvious that parental child-rearing practices should be a big factor.
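The arithmetic behind such decompositions is simple. Here is a sketch using Falconer's classic twin-study formulas, with made-up correlations chosen to mimic the pattern described above:

```python
# Hypothetical twin correlations for some behavioral trait (assumed,
# not from any particular study):
r_mz = 0.50  # identical twins reared together
r_dz = 0.26  # fraternal twins reared together

# Falconer's decomposition of trait variance:
a2 = 2 * (r_mz - r_dz)   # genetic share: 0.48
c2 = 2 * r_dz - r_mz     # shared-environment share: 0.02
e2 = 1 - r_mz            # non-shared ("stochastic") share: 0.50
# Roughly half genetic, half stochastic, almost no shared environment.
```

The three shares sum to one by construction; "stochastic" here is just the variance left over after genes and shared environment are accounted for.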

The word "stochastic" here is chosen to include all factors other than heredity (aka genetic determinism) and environment. The twin studies do not know what it is, but need some term for unexplained variance. But now Cashmore goes a step further, and defines free will as what goes beyond heredity, environment, and stochastic process. Then he concludes that there is no such thing as free will.

These guys have some underlying misunderstanding of mathematics. Or physics. Or philosophy of science, I am not sure. If we have free will, and psychologists or sociologists go around studying people making decisions, then that free will would be seen as a stochastic process. We cannot model those decisions deterministically, and can only say that they are choosing from some auxiliary space of possibilities in a way that is not externally controlled. That is exactly what a stochastic process is.

Cashmore and Coyne say that they should not be responsible for their behavior if it is influenced by a stochastic process. They say that as if it were an obvious truth, and give no justification. But free will is a stochastic process. And of course they are responsible. The essence of their argument is to say that life is either random or non-random, and they are not responsible either way. Coyne adamantly contends that this is an irrefutable logical argument that only religious people would reject, and that is another reason for exterminating the evils of religion.

Again, I just deal with the hard science here, and not the theology. Cashmore and Coyne are simply incorrect in their logic when they dismiss free will. Their view has no support from science or common sense.

Tuesday, November 18, 2014

The universe is not a quantum computer

Scott Aaronson and his fellow MIT professor Seth Lloyd are again bragging about the possibility of quantum computers, while dancing around the question of their impossibility.
[Lloyd] Immediately after the Physics Nobel Laureate Richard Feynman presented his vision of the quantum computer in 1982, researchers began to investigate how one could use the superposition of quantum states to do things that are not possible on a conventional computer. ...

[Scott] If we were to discover that there is a deep reason why quantum computers are impossible, then that would ironically be the most exciting result. Because we would then have to rewrite the physics books in order to take account of the new understanding.

[Shmi] Scott, how can a quantum computer used to simulate quantum systems be impossible? After all, it’s just another quantum system, only somewhat more controlled. What might be the distinction between the simulator and the system itself that makes a quantum simulation impossible?

[Scott] You should probably be asking the QC skeptics that question, rather than me. ;-) But I assume they would answer something like: “the quantum systems that can occur in Nature are all extremely special, in ways that probably allow us to simulate them efficiently using a classical computer. So yes, you can presumably build a computer that simulates those systems, but what you had built wouldn’t deserve the title ‘quantum computer.’ What we don’t think you can build is a machine that implements a completely arbitrary sequence of 1- and 2-qubit unitary transformations, in order (for example) to efficiently factor a 10,000-digit number using Shor’s algorithm. And of course, consistent with our prediction, such machines are nowhere found in Nature.”
I am one of those QC skeptics, so I answer here. I have posted 5 reasons I believe QC is impossible, as well as various agreements and disagreements with Scott.

R.P. Feynman proposed a universal quantum simulator in 1982. He pointed out that if you program an ordinary Turing machine (i.e., a classical computer) to simulate the rules of quantum mechanics, then it suffers an exponential slowdown. But if a quantum computer had the mysteries of quantum mechanics baked in, then it could efficiently simulate a quantum system.
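The slowdown is easy to quantify. A brute-force classical simulation stores one complex amplitude for every basis state, and the number of basis states doubles with each qubit. A quick sketch of the bookkeeping (the 16-bytes-per-amplitude encoding is an assumption, two 64-bit floats):

```python
# Number of complex amplitudes a state-vector simulator must track:
def amplitudes(n_qubits):
    return 2 ** n_qubits

# Memory at 16 bytes per amplitude (assumed: two 64-bit floats):
def memory_bytes(n_qubits):
    return 16 * amplitudes(n_qubits)

# 10 qubits: 1,024 amplitudes -- trivial.
# 50 qubits: about 18 petabytes -- beyond any ordinary classical machine.
```

This is the exponential wall Feynman was pointing at.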

I have no argument with any of that, or with the basics of quantum mechanics.

Shmi argues above that any system could be said to simulate itself, and therefore a quantum system is simulating itself as a quantum computer. So quantum computers must exist, and he does not see how anyone can deny them. Scott calls this quantum computing in their sleep. (“Ein Quantencomputer hingegen wuerde solche Aufgaben im Schlaf erledigen …” — “A quantum computer, on the other hand, would handle such tasks in its sleep …”)

It is a good question, because I am not denying quantum systems. Let me explain.

Consider a very simple quantum system, such as an electron trapped in a small cavity. Assume that the cavity is just sufficient to confine the electron. According to classical mechanics, the electron bounces around the cavity and cannot get out.

Experimentally, the electron eventually leaks out, at a seemingly random time.

Quantum mechanics teaches that the electron is really a wave of some sort, and eventually tunnels thru the potential energy barrier. Using a wave function for the electron, quantum mechanics can predict probabilities for the location of the electron and the time of escape. Physicists are satisfied with these probabilities because they believe in an inherent randomness of nature, so the quantum mechanical formulas are considered to be a complete solution to the problem.
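For the tunneling itself, the textbook WKB estimate shows how quantum mechanics turns the barrier into a probability. A rough sketch with assumed numbers (an electron, a rectangular 1 eV barrier):

```python
import math

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg, electron mass
eV = 1.602176634e-19     # J per electron-volt

# WKB estimate: transmission ~ exp(-2*kappa*L) through a rectangular
# barrier of width L whose height exceeds the particle energy by dE.
def transmission(dE_eV, L_m):
    kappa = math.sqrt(2 * m_e * dE_eV * eV) / hbar
    return math.exp(-2 * kappa * L_m)

# A 1 eV, 1 nm barrier lets through roughly a few attempts in 10**5.
```

The theory gives that probability per attempt; it never gives the actual escape event.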

Some people believe that there is, yet to be discovered, some underlying reality that will explain deterministically how the electron gets out.

This is all reasonable and straightforward, but simulating it has high complexity. Here is why. The simulator can never be sure when the electron leaves the cavity. As soon as the electron leaves, it will likely trigger a chain reaction of other events. So the simulator has to keep track of all possible times for the electron to leave, and then all those chain reactions. An electron leaving at an early time might trigger a very different chain reaction from the electron leaving at a later time.

Each of those chain reactions is another quantum process, and therefore subject to its own exploding number of possibilities. Thus the simulator has to keep track of many possibilities, each of which leads to many other possibilities, and so on. Soon, there are way too many possibilities to keep track of.

You can use an approximation that divides time into discrete steps. But the number of possibilities increases exponentially with the number of steps. This is impractical on a Turing computer. The only hope, it seems, is to use a fancy quantum computer that can keep track of multiple states simultaneously without an exponential complexity explosion.
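The combinatorics of that bookkeeping is brutal. A toy count, under the simplistic assumption that each discrete time step offers just two alternatives (the electron has escaped, or it has not):

```python
# Branches a classical simulator must track after a given number of
# discrete time steps. Two alternatives per step is an assumed,
# deliberately conservative number.
def branch_count(steps, alternatives_per_step=2):
    return alternatives_per_step ** steps

# 10 steps: 1,024 branches. 100 steps: more than 10**30 branches.
```

Any realistic chain reaction would offer far more than two alternatives per step, so this undercounts the problem.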

This is a valid argument that quantum simulations have very high Turing complexity. But does it prove the existence of a quantum computer?

For that, we need to run the argument backwards. We know that there are quantum systems in nature, and simulating them must have high Turing complexity, so those systems could be viewed as computers that can outperform a Turing computer. That is the (flawed) argument.

Let's go back to the electron in a cavity. Is it doing a quantum computation? To view it that way, you have to imagine that it is magically and silently keeping track of an exponentially increasing set of possibilities, and encoding them all in its wave function. Once you convince yourself that this electron is doing a computation that no Turing machine can do, you are then supposed to believe that advanced technology will be able to extract that ability, and apply it to factoring 200-digit numbers.

The flaw in the argument is the assumption that the natural process is the same as how we model it in the simulation. It is not. The electron is just a dumb particle-like wave-like object that bounces around a cavity until it eventually busts out. It is not doing computations on its wave function.

The typical wave function will represent an electron that is 3/4 in the cavity and 1/4 outside the cavity. Nothing like that is ever observed in nature. Every time we measure the electron, we always find it either entirely inside or entirely outside the cavity. The wave function is just our way of representing our uncertainty about whether the electron has escaped yet. In philosophical jargon, the wave function is epistemic, not ontic.

To answer Scott's question, I doubt that a classical computer can efficiently simulate a quantum system. Too many possibilities proliferate. The wave function may well be the best possible mathematical representation of an electron, but still not a true representation of the physical object. The probabilities help us make sense out of what the electron is doing, but I very much doubt that what the electron is really doing is a computation on probabilities.

I am reminded of artificial intelligence (AI) researchers who try to program a computer to tell the difference between the image of a dog and a cat. They would have surely concluded that it is impossible, were it not for the fact that a 3-year-old child does it easily. The correct conclusion is that the 3yo child is doing something different from what the AI researchers are doing. Likewise the electron is doing something different from what the quantum computing simulators are doing.

Scott wants a proof that quantum computing is impossible, or else he dismisses me as a crackpot who cannot back up what I say. I do not have a proof. I cannot rule out the possibility that an electron really is typically 3/4 in a cavity, even tho no one has ever seen it that way, and that this split in possible electron states can somehow be used to do a computation that is otherwise impossible. It just seems very far-fetched to me.

I do say that the arguments for quantum computing are fallacious. The above argument is essentially to say that simulating a quantum system is too complex so some computational ability can be extracted from that complexity. It is like saying that distinguishing a dog and cat is too complex for our Turing machines, so let's harness the brainpower of 3yo kids to factor 200-digit numbers.

The usual argument for quantum computing is a variation of the above argument. It says that QC is a logical consequence of quantum mechanical rules that have been verified for 90 years. It is true that superpositions have been used to predict probabilities of events for almost 90 years, but QC goes further than that, and uses superpositions to predict superpositions and extract computations.

To me, the observables and events are real, and the probabilities and superpositions are mathematical artifacts. No one ever observes a probability. That is just our way of organizing what we see. And we do not observe a superposition either.

Quantum mechanics was created with a positivist philosophy that views science as being all about explaining observations and measurement. Nowadays it is more common to attach interpretations to wave functions with descriptions like "ontic" and "realist". The positivist view questions whether these have any scientific meaning. There is no experiment that shows that the ontic or realist interpretations have any tangible merit. Just metaphysical fluff, the positivist would say.

That metaphysical fluff is crucial for the belief in QC.

I know that there are 100s of papers claiming to see Schroedinger cat states, where something is alleged to be analogous to a cat being half alive and half dead. And 100s more claiming to use QC or quantum mechanics to do a computation. I admit that every transistorized computer is using quantum mechanics to do a computation.

What we do not see is anyone using cat states to do a super-Turing computation. Scott admits this, as he bashes those who make exaggerated claims. If someone does that, I will be proved wrong. Until then, QC is just a sci-fi fantasy like those Interstellar wormholes.

Update: Here is Scott's 2012 offer, with the argument that simulating reality is an argument for QC:
An even more dramatic way to put the point is this: if quantum computing is really impossible, then we ought to be able to turn that fact on its head. Suppose you believe that nothing done by “realistic” quantum systems (the ones found in Nature) can possibly be used to outperform today’s classical computers. Then by using today’s classical computers, why can’t we easily simulate the quantum systems found in Nature? What is the fast classical algorithm for simulating those quantum systems? How does it work? Like a wily defense attorney, the skeptics don't even try to address such questions; their only interest is in casting doubt on the prosecution's case.

The reason I made my $100 000 bet was to draw attention to the case that quantum computing skeptics have yet to offer. If quantum computing really does turn out to be impossible for some fundamental reason, then once I get over the shock to my personal finances, I'll be absolutely thrilled. Indeed, I'll want to participate myself in one of the greatest revolutions in physics of all time, a revolution that finally overturns almost a century of understanding of quantum mechanics. And whoever initiates that revolution will certainly deserve my money.
I say that QC is impossible for non-revolutionary reasons. The impossibility of QC will not overturn a century of understanding of quantum mechanics. It will only overturn 30 years of misunderstandings of quantum mechanics.

I do not know the fastest classical algorithm for simulating a quantum system. If the simulation has to keep track of many divergent probabilistic branches, then I expect it would be slow. The real reason we cannot efficiently simulate nature is because our mathematical models of electrons do not fully capture their physical reality.

Sunday, November 16, 2014

Astrophysical problems with Interstellar movie

The dubious physics of movies like Gravity has gotten attention, but the cosmologists seem much more fired up about Interstellar. I have not seen it yet.

Lee Billings writes in SciAm:
Christopher Nolan’s new film, Interstellar, is a near-future tale of astronauts departing a dying Earth to travel to Saturn, then through a wormhole to another galaxy, all in search of somewhere else humanity could call home. ...

If you watch movies for what they do to your mind rather than to your heart, though, the film may leave you less than starry-eyed. Despite being heavily promoted as hewing close to reality—Caltech physicist Kip Thorne wrote the first version of the story, and served as a consultant and producer on the film—some of the science in Interstellar is laughably wrong. Less lamented but just as damning, some parts of the story having nothing to do with science lack the internal self-consistency to even be wrong. ...

Much has already been written about the film’s scientific faults, pointing out fundamental problems with the astrophysics, planetary science and orbital mechanics that underpin key plot points. Just as much ink has been spilled (or pixels burned) saying that such details shouldn’t get in the way of a good story, that this movie wasn’t made for the edification of scientists but for the entertainment of the general public.
Based on reviews, it sure seems to me that this movie was made for the edification of scientists. They took every peculiar aspect of relativity, and worked it into the plot as well as they could. Time dilation, black hole, wormhole, twin paradox, closed timelike curves, and loss of information in black holes. And probably a couple more that I might not catch even when I see the movie.

Sure, they could have made a more realistic movie, or a more coherent plot. Or just invented their own science fiction, like Star Trek. No, they made a deliberate decision to pack as much relativity in the movie as they could, and still have a marketable movie.

Does it work? I will form an opinion when I watch. I'll try to view it for what it is: a relativity show on a $150M budget.

Update: NY Times Dennis Overbye says that he had to study the physics and watch the movie a second time to appreciate it:
The second time I saw the movie, clued in by Dr. Thorne’s new book, “The Science of Interstellar,” I enjoyed it more, and I could appreciate that a lot of hard-core 20th- and 21st-century physics, especially string theory, was buried in the story — and that there was a decipherable, if abstruse, logic to the ending. But I wonder if a movie that requires a 324-page book to explicate it can be considered a totally successful work of art. ...

At one point, director Nolan asked for a planet on which the dilation of time because of immensely powerful gravity was so severe that one hour there would correspond to seven years on Earth — an Einsteinian effect that plays a big role in the plot. Dr. Thorne’s first reaction was “no way.” But after thinking about it, he says he found a way, which would require the planet to be very close to a massive black hole spinning at nearly its maximum rate. The hole would spin space around with it, like a mixer swirling thick dough.

The planet could get its heat and light from the disk of heated material swirling around the hole, Dr. Thorne calculated, as long as the hole was not feeding too strongly — a rather carefully tuned but not impossible situation. The black hole itself sprang directly from Dr. Thorne’s equations, and its renderings by the movie’s visual effects supervisor, Paul Franklin, showed details that Dr. Thorne plans to write papers about.

Wormholes are another thing that easily pass the beer test. Einstein himself pointed out that such shortcuts through space-time were at least allowed by his equations, but nobody knows how to make one or to keep it from collapsing, or how to install one near Saturn without its gravitational field’s disrupting the entire solar system.

Ditto the fifth dimension, a logical consequence of various brands of string theory. ...

“So why did they take care with relativity but not even bother with planetary science?” he went on. “Arthur C. Clarke is spinning in his stargate!”
So the relativity is bizarre, but it makes more sense than the rest of the movie.
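The quoted requirement is easy to check arithmetically: one hour on the planet per seven Earth years demands an enormous time-dilation factor, which is why Thorne needed a planet skimming a near-maximally spinning black hole.

```python
# Dilation factor implied by the plot device: 1 local hour = 7 Earth years.
hours_per_year = 365.25 * 24       # using the Julian year of 365.25 days
dilation = 7 * hours_per_year / 1  # elapsed Earth-hours per local hour
# About 61,362 -- an extreme factor by any astrophysical standard.
```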

Friday, November 14, 2014

Hawking scientific contributions

The new Hawking movie got two NY Times reviews, one complaining about the facts, and one with other complaints:
But it is in showing the application of that intelligence that “The Theory of Everything” tumbles into a black hole of biopic banality. My colleague Dennis Overbye recently enumerated some of the film’s historical and scientific lapses in The New York Times, but those are not really my concern. Taking liberties with facts is a prerogative of storytelling. ...

The substance of Stephen’s work is shoehorned into a few compressed, overly dramatic scenes. Equations are scribbled on chalkboards. Fellow physicists proclaim, “It’s brilliant!” or “It’s rubbish!” One of Stephen’s friends diagrams a hypothesis in beer foam on a pub table. Several earnest, potted conversations take place about whether Stephen believes in God, as his wife does. But nothing establishes the stakes in these arguments or dramatizes the drive for knowledge.

Movies often have a hard time with science, as they do with art.
There are lots of TV shows that explain science, on the Discovery, Science, PBS, and NatGeo channels. There is no good reason for the movies to have a hard time with science.

The TV sitcom The Big Bang Theory is able to get its scientific facts correct, even tho it has much greater production pressures. Obviously they make it a priority to check the scripts with experts.

The 1998 movie Pi does not even bother to get the digits of pi correct. It gets about the first ten, and then gives random digits. What could be easier than getting pi correct? Obviously the makers had no interest in accuracy.
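Getting pi right really is easy; here is a short sketch using only the standard library, via Machin's formula pi = 16·arctan(1/5) − 4·arctan(1/239) evaluated in Decimal arithmetic:

```python
from decimal import Decimal, getcontext

def arctan_inv(x, prec):
    """arctan(1/x) for integer x > 1, by its Taylor series."""
    getcontext().prec = prec + 10  # guard digits
    z = Decimal(1) / Decimal(x)
    z2, term, total, n = z * z, z, z, 1
    threshold = Decimal(10) ** (-(prec + 8))
    while abs(term) > threshold:
        term = -term * z2
        total += term / (2 * n + 1)
        n += 1
    return total

def pi_digits(prec=30):
    """pi to prec significant digits via Machin's formula."""
    getcontext().prec = prec + 10
    pi = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    getcontext().prec = prec
    return +pi  # unary plus rounds to the current precision
```

A filmmaker who cared could have had hundreds of correct digits for the cost of a few lines like these.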

Shaunmaguire writes:
In anticipation of The Theory of Everything which comes out today, ... I wanted to provide an overview of Stephen Hawking’s pathbreaking research. ...

Singularity theorems applied to cosmology: Hawking’s first major results, starting with his thesis in 1965, was proving that singularities on the cosmological scale — such as the big bang — were indeed generic phenomena and not just mathematical artifacts. This work was published immediately after, and it built upon, a seminal paper by Penrose. ...

Singularity theorems applied to black holes: ... In the very late ‘60s and early ’70s, Hawking, Penrose, Carter and others convincingly argued that black holes should exist. ...

No hair theorem: ... In the early ’70s, Hawking, Carter, Israel and Robinson proved a very deep and surprising conjecture of John Wheeler–that black holes have no hair! ...

Black holes evaporate (Hawking Radiation): ...

Black hole information paradox: ... Most famously, the information paradox considered what would happen if an “encyclopedia” were thrown into the black hole. GR predicts that after the black hole has fully evaporated, such that only empty space is left behind, that the “information” contained within this encyclopedia would be destroyed. ... This prediction unacceptably violates the assumptions of quantum mechanics, which predict that the information contained within the encyclopedia will never be destroyed. ...
It is nice to see a movie about a physicist. I have not seen it yet. It probably emphasizes his disease and love life, but butchers his physics.

Let's not overstate this. His work on relativity singularities was comparable to Penrose's, and was instrumental in convincing people of the big bang and black holes.

But do the singularities really exist? If relativity is correct, the singularities would be unobservable. The laws of physics as we know them would break down in the vicinity of the singularity, so we cannot say with any confidence what would be happening. At the big bang, inflation is widely believed, but no one knows how it would work.

So while the Penrose-Hawking theorems make useful arguments, they do not prove singularities in the real universe.

By now, Hawking is probably better known for black hole evaporation and the information paradox. Again, this has never been observed, and it is harder to separate his sensible ideas from his crazier ones.

It seems crazy to me to say that the information in an encyclopedia can never be destroyed. Just burn it, then the info is gone for good. Quantum mechanics does not teach that you can get the info back. The many-worlds interpretation does say that the info escapes to an inaccessible parallel universe. There is no way to have evidence for that, and it just seems like a childish belief to me.

Hawking has spent the last 30 years on this sort of nonsense. He is frequently mentioned as a Nobel Prize candidate, but I doubt it. The Nobel committee only likes theories if they are confirmed by experiment.

His personal story is remarkable. He was diagnosed with Lou Gehrig's disease and given 2 years to live, but it appears that he was misdiagnosed. He probably has some other degenerative neural disease.

Tuesday, November 11, 2014

Evolutionist complains Christians fund free will

Leftist-atheist-evolutionist Jerry Coyne is ranting again against free will, and against the evils of religion, which he blames for all of these supposedly wrong ideas.

In his terminology, "libertarian free will" refers to the religious idea that we are able to act on our choices, and "incompatibilism" refers to his belief that scientific atheism requires believing that everything we do has been determined since the big bang. In the middle is "compatibilism", which says that determinism is compatible with our ordinary belief in free will, and hence you can believe in free will whether the laws of nature are determinist or not.

I stick to the hard science, and it has not resolved either determinism or free will, and probably never will. They are metaphysical beliefs. So I am a compatibilist, as I do not believe that they are inherently contradictory as philosophers use these terms.

In Coyne's analysis, if someone fails to reject free will, then some religious conspiracy must have bribed him into presenting that view:
The question is actually “are we free?” and, in the main, the interlocutors answer “yes.” After all, Templeton wants science to show that we still have free will, something that Dan Dennett mentioned the other day when reviewing Alfred Mele’s new book that defends free will (Dennett likes the book but suggested that Mele’s objectivity might have been compromised because his views are congenial to the source of his funding). Mele is in charge of two multimillion-dollar Templeton grants (here and here; see his response to Dennett here). One of them is the free will project touted in the ad.
My guess is that Coyne is getting his funding from leftist atheist evolutionists, so that explains his views. Or maybe they were pre-ordained since the big bang.

What especially bugs him is the idea that people will live more Christian lives if they know that they have the free will to make their own decisions. He has to spoil it by telling everyone that they are mindless soulless automatons descended from apes.
When Darwin famously told the Bishop of Worcester's wife about his theory of evolution, she remarked: Descended from the apes! My dear, let us hope that it is not true, but if it is, let us pray that it will not become generally known.
The research does seem to show that people take more moral responsibility for their decisions if they believe that they had the free will to make those decisions.
Besides, what makes this whole argument from consequences bizarre is that research shows that most people’s conception of free will is not compatibilist, but libertarian. Yes, there’s one study showing a compatibilist belief in general, but most studies show the opposite. In my own discussions with scientists, many of them, while not explicit dualists, still believe in a libertarian free will in which there is an “I” who makes decisions at any given time, and could have decided otherwise. (I was surprised to learn that physicist Steve Weinberg, an atheist, believes this.)
Back to physics here. It should not be surprising that a big shot atheist physicist believes in free will. I do not think that anyone's Nobel Prize acceptance speech said that he did nothing to deserve it except to carry out the commands that he was programmed to do.

Seriously, the laws of physics are not known to be deterministic. Einstein famously complained about this in the 1930s, and the consensus was that Einstein was wrong.

The loudest argument for determinism from today's physicists comes from the many-worlds interpretation of quantum mechanics. They say that what appears to be chance is really the universe splitting into parallel universes. When all universes are taken into account, they are all determined from the big bang. Our free will is just an illusion arising from the fact that we do not care about the other universes that have no interaction with our own.

Coyne adds:
I disagree that determinism is a useless concept, and it can be tested scientifically, as in the tests of Bell’s inequality (it doesn’t hold on the quantum level). ... And there is no scientific evidence that quantum mechanics plays a role in human behavior on a macro level.
He is badly confused. Tests of Bell's inequality do not show anything about determinism. They only show that certain deterministic non-quantum models are wrong. But the experiments favor quantum mechanics, so all the non-quantum models are wrong, as far as we know.
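To make the point concrete, here is a minimal numerical sketch of the CHSH version of Bell's inequality (my own illustration, with the standard textbook detector angles assumed): any local deterministic assignment of outcomes obeys a bound of 2, while the quantum singlet correlation exceeds it. Note that neither side of the comparison says anything about determinism per se; the test only rules out a class of local models.

```python
import itertools
import math

# CHSH combination: S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# For any local deterministic model, each outcome is a fixed +/-1,
# so |S| can never exceed 2.
local_max = max(
    abs(a * b - a * bp + ap * b + ap * bp)
    for a, ap, b, bp in itertools.product((1, -1), repeat=4)
)
print(local_max)  # 2

# The quantum singlet-state correlation is E(x, y) = -cos(x - y).
# At detector angles 0, 90, 45, 135 degrees it reaches |S| = 2*sqrt(2),
# violating the local bound.
def E(x, y):
    return -math.cos(math.radians(x - y))

a, ap, b, bp = 0, 90, 45, 135
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(round(S, 4)))  # 2.8284
```

The violated bound rules out the local hidden-variable models, not determinism as such, which is the point Coyne misses.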

Quantum mechanics plays a role in all chemical reactions. Humans use chemical reactions in everything they do. If I decide to eat oatmeal for breakfast, we do not have a quantum mechanical description for how that decision works, but we don't have any other description either.

A difference between right-wingers and left-wingers is that left-wingers are frequently preoccupied with the motives of others. In this case, Coyne is upset that someone might defend free will out of Christian motives, or with funding from a Christian organization. You see this also in the recent American election, where many Democrats campaigned on motives while Republicans campaigned on results. I guess you see the world a lot differently if you do not believe in free will.

Monday, November 10, 2014

Audio explanation of quantum computers

SciAm is hosting the Tech Talker on Detangling Quantum Computers:
For example, it is really hard for normal computers to find factors of prime numbers. [at 1:30]
It is just as hard for quantum computers to find factors of prime numbers. Primes do not have any nontrivial factors. Okay, I am nitpicking. I have heard this error many times.
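To see why the statement is backwards, here is a toy trial-division sketch (illustrative only; real quantum factoring proposals such as Shor's algorithm target large composites like RSA moduli, not primes):

```python
# A prime has no nontrivial factors, so "factoring a prime" is a
# non-problem on any computer, classical or quantum.
def trial_factor(n):
    """Return the smallest nontrivial factor of n, or None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

print(trial_factor(15))   # 3 -- a composite worth factoring
print(trial_factor(101))  # None -- 101 is prime, nothing to find
```

The hard problem is factoring a large product of two primes, which is what public-key cryptography relies on.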

At 3:00 to 3:30, it explains superposition as one observer measuring an electron as spin up at the same time another measures spin down.

No, that is impossible.

I am not sure it is even useful to talk about an electron being in a superposition of spin states. It always has a spin in some direction. Only if you force it into another direction does it look like a possibility of up or down in that direction.
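For what superposition actually predicts, here is a back-of-the-envelope Born-rule calculation (my own sketch, using the textbook spin-1/2 states): an electron prepared spin-up along z gives 50/50 odds when measured along x, and each measurement yields one definite outcome, never opposite outcomes for two observers.

```python
import math

# State vectors in the z basis (real amplitudes suffice here).
up_z = (1.0, 0.0)
up_x = (1.0 / math.sqrt(2), 1.0 / math.sqrt(2))
down_x = (1.0 / math.sqrt(2), -1.0 / math.sqrt(2))

def prob(state, outcome):
    """Born rule: probability = |<outcome|state>|^2."""
    amp = outcome[0] * state[0] + outcome[1] * state[1]
    return amp ** 2

print(round(prob(up_z, up_x), 3))    # 0.5
print(round(prob(up_z, down_x), 3))  # 0.5
```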

And there is no mention of the fact that quantum computers are completely hypothetical fantasies, like time travel machines.

On a recent Slate podcast, I heard:
The word quantum is Latin for: (a) uncertainty (b) parallel (c) natural (d) what amount
The correct answer is (d). Other plausible wrong answers would be "small" or "discrete".

Sunday, November 9, 2014

Philosopher a fan of crackpot evolutionist

Biologist-philosopher Massimo Pigliucci did not like me trashing modern philosophers as having no worthwhile influence on physics, and wrote:
You have obviously read no philosophy of science whatsoever.
He did not like my reaction to this comment:
As for philosophical accomplishments, I can name two:
1) The Scientific Method — Science was not a separate field until recently; not until the 19th or 20th century, I believe. Galileo, Newton, Copernicus, Kepler, along with all the other ‘scientific minds’ of the Scientific Revolution, were natural philosophers. Science is the child of philosophy.
2) The Sex/Gender Distinction — Unless I am mistaken, it is a firmly held belief in various fields (feminism, anthropology, psychology(?), etc.), and a common belief among lay men, that there is a distinction between one’s reproductive role (sex) and the cultural, behavioral norms they must follow qua sexed person (gender). This distinction was introduced by Simone De Beauvoir, an Existentialist philosopher. This seems like a big accomplishment for philosophy.
I did suspect that this comment was a joke. Sure, the scientific method was big, but that goes back millennia. But the sex/gender distinction? Really? Is that the best philosophy has done in 300 years? The distinction is of interest to a few trannies, but not much to anyone else.

Pigliucci is a big fan of Stephen Jay Gould, and wrote a 1998 5-star review of Gould's most famous book, The Mismeasure of Man:
Steven J. Gould is most famous among the general public for his collections of essays from his long Natural History series, "This View of Life". But the best of Gould's writing is perhaps to be found in his single-theme books. And The Mismeasure of Man is arguably the finest among them. The volume is about the long history of the search for scientific justification of racism, and the many faux pas that science has committed when it comes to the study of human intelligence. ...

He illustrates this with an array of definitely intelligent people whose brain sizes covered almost the whole gamut displayed by non-pathological individuals. However, this is indeed one of the troublesome aspects of this book and, I dare say, of Gould's writing in general. He dismisses contrary evidence or arguments so fast that one gets the impression of seeing a magician performing a trick. One cannot avoid the feeling of having being duped by the quickness of the magician's movement, instead of having observed a genuine phenomenon. In this particular instance, I can vouch for Gould as a biologist, but I'm not so sure that the general public is willing to trust him on his word. After having dismissed both craniometry and the aberrant work of Cesare Lombroso on the anthropological stigmata of criminals, Gould moves on to his main target: IQ and intelligence testing. ...

That is, the scores on each test are correlated to each other, because they all reflect one underlying quantity, which Spearman named "g", or general intelligence. Spearman therefore provided one of the two pillars of the eugenic movement: there seemed indeed to be one way to rank individuals by their intelligence with the use of one number: this was the score on the g-factor, instead of the score on any of the available IQ tests. Burt's major achievement was a supposed confirmation of the second fundamental piece of the eugenic puzzle: his studies of genetically identical twins suggested a high heritability (incorrectly read as a high level of genetic determination) of intelligence. So, not only do individuals differ in intelligence, but this is easy to measure and genetically determined. Environment, and with it education and social welfare, cannot alter the innate difference among individuals, genders, and races. QED Well, not really.
No, this book is crackpot stuff. It has no scientific merit, and is wrong on almost every level. You can read the Wikipedia article for details.

Yes, there is overwhelming evidence that there are innate differences among individuals, and a lot of work on how much can be attributed to genetics and how much to the environment. Very few traits are entirely genetic or environmental. This stuff is not particularly controversial, except for Gould fans.

Pigliucci posts a lot of good essays and interviews, and of course he is on the warpath against pseudoscience and creationism. But he is just another philosopher of science with a warped idea of science.

Friday, November 7, 2014

Against quantum realism

Lubos Motl writes:
All the major "realist" attempts to reform the foundations of quantum mechanics – de Broglie-Bohmian mechanics, Ghirardi-Rimini-Weber-like collapse theories, and Everett-style many worlds – are known to suffer from serious diseases. To a large extent, "realism" itself is the problem.

I am still willing to admit that there is no truly "rock-solid proof" of the statement that "there cannot be any realist reinterpretation or 'improvement' of quantum mechanics".
The quest for "realism" in physics is an Einsteinian disease, and led me to write my book on the subject.

We have 80 years of theoretical and experimental evidence, all pointing to the supposedly-realist models of quantum mechanics being wrong. Isn't that enuf?

A Nature article reports:
The bizarre behaviour of the quantum world — with objects existing in two places simultaneously and light behaving as either waves or particles — could result from interactions between many 'parallel' everyday worlds, a new theory suggests. ...

Charles Sebens, a philosopher of physics at the University of Michigan in Ann Arbor, says he is excited about the new approach. He has independently developed similar ideas, to which he has given the paradoxical name of Newtonian quantum mechanics. ...

The next step for the team will be to come up with ways in which their idea can be tested. If the many-interacting-worlds approach is true, it will probably predict small differences from quantum theory, Wiseman says.
This is the game plan. Refuse to accept 1930 physics. Pretend that the world is more like your Newtonian intuitions. Except add some bizarre parallel universes or other nonsense that would be stranger and more non-intuitive than quantum mechanics. Find some way to test it in order to disprove quantum mechanics. Then discover that quantum mechanics still works, just like in 1930. This story has played out a thousand times. Reporting this is like reporting someone's idea for a perpetual motion machine.

It is bizarre that these models are even called realist. Bohmian mechanics supposedly tells you where the electron is, but then requires you to believe that the electron has a sort of guardian angel that has a magical and nonlocal influence on it. The many-worlds requires vast parallel universes that can never be observed.

Webster's defines realism:
1: concern for fact or reality and rejection of the impractical and visionary
2a : a doctrine that universals exist outside the mind; specifically : the conception that an abstract term names an independent and unitary reality
b : a theory that objects of sense perception or cognition exist independently of the mind — compare nominalism
3: the theory or practice of fidelity in art and literature to nature or to real life and to accurate representation without idealization
I guess the idea is that the Copenhagen interpretation of quantum mechanics makes predictions based on our knowledge of the system, and is therefore in the mind. A theory that posits unobservable ghosts would be outside the mind, and therefore realist.

This is backwards. Realism, to me, means accepting what is demonstrably observed, and not living in some fantasy world of unobservables.

So I say that quantum mechanics is realist because it is based on observables and verifiable predictions. Everyone else says that quantum mechanics is an inadequate theory because it is not realist, and it needs to be made realist by hypothesizing unobservable and magical hidden variables or pilot waves or parallel universes.

I can't blame de Broglie for having some of these ideas in 1923. They were intensely debated in the late 1920s, and thought to be settled then. But it appears that no amount of evidence will convince the Einsteinian physicists who insist that there is a missing ingredient to quantum mechanics that is going to make it realist.

Sean M. Carroll is one who believes that quantum mechanics is missing realism:
But despite its triumphs, quantum mechanics remains somewhat mysterious. Physicists are completely confident in how they use quantum mechanics -- they can build theories, make predictions, test against experiments, and there is never any ambiguity along the way. And nevertheless, we're not completely sure we know what quantum mechanics really is. There is a respectable field of intellectual endeavor, occupying the time of a substantial number of talented scientists and philosophers, that goes under the name of "interpretations of quantum mechanics." A century ago, there was no such field as "interpretations of classical mechanics" -- classical mechanics is perfectly straightforward to interpret. We're still not sure what is the best way to think and talk about quantum mechanics.

This interpretational anxiety stems from the single basic difference between quantum mechanics and classical mechanics, which is both simple and world-shattering in its implications:
According to quantum mechanics, what we can observe about the world is only a tiny subset of what actually exists.
Attempts at explaining this principle often water it down beyond recognition. "It's like that friend of yours who has such a nice smile, except when you try to take his picture it always disappears." Quantum mechanics is a lot more profound than that. ...

Most modern physicists deal with the problems of interpreting quantum mechanics through the age-old strategy of "denial." ...

At some point, however, we need to face the music.
For him, facing the music means believing in unobservable parallel universes.

The idea that we can only observe a tiny subset of reality is just a mystic belief. Or religious. Jesus said, "My kingdom is not of this world." John 18:36. Maybe so, but physics can only deal with what is of this world.

Thursday, November 6, 2014

DNS troubles

This blog was down for a couple of days, because of technical server issues. It should be back up when you read this.

Monday, November 3, 2014

Paradigm shift book is still highly cited

Nature magazine has published the Google Scholar top-100 cited list, and near the top is the 1962 philosophy book The Structure of Scientific Revolutions. It popularized the paradigm shift.

This book dominated modern philosophy of science, but it is garbage. Its main arguments are to deny that science makes progress toward truth, deny that new scientific theories are adopted from quantitative evidence, and deny that scientists are rational.

Philosophers and other anti-science academics love this book because it undermines the validity of science. Science crackpots love this book too, because it presents science as driven by fads. Instead of arguing that their theories are superior in some way, they can just gripe that they are out of fashion.

MIT debate on multiverse

A couple of MIT physics professors debated whether we live in the multiverse:
Arguing in favor was Max Tegmark, a cosmologist at MIT. His MIT colleague, Frank Wilczek, (winner of the Nobel for his work on the strong force) took the opposing position. ...

And then there's the many-worlds interpretation, in which the wavefunction keeps working, but reality breaks, or at least fragments a bit. In this view, there are particles in every location the wavefunction predicts we'll find them. It's just that those locations end up in different universes once the measurement happens. So, as soon as someone makes a measurement, they split the Universe into multiple universes, each with the particle in a different location specified by the wavefunction. ...

Wilczek was a bit more detailed in his criticisms. He's perfectly happy to accept the wavefunction's existence, but feels that "many worlds is metaphysical baggage added on." His main issue is that all the worlds but one aren't accessible to our experience, and therefore can't be explored scientifically. "I am very worried about ascribing full credence to something other than reality," Wilczek said.

Tegmark for his part, thought that was a very narrow view. "This has less to do with QM than the way we perceive reality," he said, later adding, "it's partly 'what's real is what we can observe,' which seems like an ostrich with its head in the sand." But scientifically, Tegmark said that "If Schroedinger's equation applies to every system, no matter how large, we should have many worlds."...

If inflation is right—and most physicists think it is—then the Universe we see is only a small fraction of the whole. And, if you could somehow go past the observable universe, you'd come to a region where inflation is an ongoing process, rapidly expanding space and ultimately creating additional universes.

"If space is infinite, and the odds of being you is not zero—which it's not, since you exist—then you must exist in other places," Tegmark said. "Maybe you'll first find a Shmioneer Works, but if you keep looking, you'll find another [you]." And this weirdness is a necessary outcome of a theory that most physicists agree on. So, in Tegmark's view, you can't dismiss the many worlds interpretation just because it's weird.
The reason for rejecting the multiverse is not that it is weird. It is no weirder than a lot of religious beliefs that people have.

The reasons for rejecting the multiverse are that (1) there is no evidence for it, and no prospects for getting any; (2) it does not solve any theoretical problems; and (3) it is methodologically incoherent, as it destroys the meaning of the Born rule.

If inflation is true, then it is reasonable to say that part of the universe is outside our observable horizon, and hence outside our possible knowledge. The scientific approach would be to acknowledge that there might be something out there, but leave it at that. To say that there are infinitely many copies of ourselves out there is strange and unscientific.

Tegmark is trying to trick you into one of the various paradoxes about infinity, but he has admitted that one of these paradoxes has driven him to conclude that no infinities occur in nature. So I don't know why he keeps bringing up infinities.

I am glad to see physicists debating the many-worlds interpretation (MWI) because it is a silly idea that cannot hold up under debate. It has become somewhat trendy for physicists to announce that they believe in it, but it is stupid on every level -- philosophically, mathematically, and physically.

Saturday, November 1, 2014

Black holes in the movies

Dennis Overbye gripes about a new movie in the NY Times:
But the movie doesn’t deserve any prizes for its drive-by muddling of Dr. Hawking’s scientific work, leaving viewers in the dark about exactly why he is so famous. Instead of showing how he undermined traditional notions of space and time, it panders to religious sensibilities about what his work does or does not say about the existence of God, which in fact is very little. ...

But when it came to science, I couldn’t help gnashing my teeth after all. Forget for a moment that early in the story the characters are sitting in a seminar in London talking about black holes, the bottomless gravitational abysses from which not even light can escape, years before that term had been coined. Sadly, a few anachronisms are probably inevitable in a popular account of such an arcane field as astrophysics.

It gets worse, though. Skip a few scenes and years ahead. Dr. Hawking, getting ready for bed, is staring at glowing coals in the fireplace and has a vision of black holes fizzing and leaking heat.

The next thing we know he is telling an audience in an Oxford lecture hall that black holes, contrary to legend and previous theory, are not forever, but will leak particles, shrink and eventually explode, before a crank moderator declares the session over, calling the notion “rubbish.”

The prediction of Hawking radiation, as it is called, is his greatest achievement, the one he is most likely to get a Nobel Prize for. But it didn’t happen with a moment of inspiration staring at a fireplace. And in telling the story this way, the producers have cheated themselves out of what was arguably the most dramatic moment in his scientific career. ...

His discovery has turned out to be a big, big deal, because it implies, among other things, that three-dimensional space is an illusion. Do we live in a hologram, like the picture on a credit card? Or the Matrix?

None of this, alas, is in the movie. That is more than bad history.
Bad history, but I am sure that it would be bad physics either way. No one has shown that 3D space is an illusion. Hawking radiation from black holes has not been observed, and probably won't be. And he won't get that Nobel Prize. (Possibly he might get some credit for some analogous radiation that is not from black holes, but I doubt it.) No, we do not live in a hologram just because some funny stuff happens on the event horizon of a black hole.

Meanwhile, Sean M. Carroll raves about a new movie:
I haven’t seen it yet myself, nor do I know any secret scoop, but there’s good reason to believe that this film will have some of the most realistic physics of any recent blockbuster we’ve seen. ...

Kip recognized that a wormhole was what was called for, but also realized that any form of faster-than-light travel had the possibility of leading to travel backwards in time. Thus was the entire field of wormhole time travel born. ...

I know that Kip has been very closely involved with the script as the film has developed, and he’s done his darnedest to make sure the science is right, or at least plausible. (We don’t actually know whether wormholes are allowed by the laws of physics, but we don’t know that they’re not allowed.) ...

And that’s not all! Kip has a book coming out on the science behind the movie, which I’m sure will be fantastic. And there is also a documentary on “The Science of Interstellar” that will be shown on TV, in which I play a tiny part.
I should reserve judgment until I see the movie. It will probably be entertaining. But if the plot uses wormholes for Earthlings to escape global warming and colonize another planet, then I would not call it realistic physics.

Update: Interstellar has some great reviews, but the Bad Astronomer hated the silly and nonsensical physics plot (relativity, black hole, wormhole).

Thursday, October 30, 2014

Silly atheist attack on the Pope

Here is a current academic evolutionist attack on creationism, in today's New Republic. Leftist-atheist-evolutionist Jerry A. Coyne writes Stop Celebrating the Pope's Views on Evolution and the Big Bang. They Make No Sense. Then he announces that he is refusing to read the comments. He quotes the Pope:
The Big Bang, which today we hold to be the origin of the world, does not contradict the intervention of the divine creator but, rather, requires it. ...
Coyne then attacks:
Let’s start with the Big Bang, which, said Francis, requires the intervention of God. I’m pretty sure physicists haven’t put that factor into their equations yet, nor have I read any physicists arguing that God was an essential factor in the beginning of the universe. We know now that the universe could have originated from “nothing” through purely physical processes, if you see “nothing” as the “quantum vacuum” of empty space. Some physicists also think that there are multiple universes, each with a separate, naturalistic origin. Francis’s claim that the Big Bang required God is simply an unsupported speculation based on outmoded theological arguments that God was the First Cause of Everything.
The Pope is not a scientist, and I don't doubt that he uses theological arguments that lack scientific support. My concern here is with scientists misrepresenting the science.

Physicists have no idea whether God or anything was a factor in the beginning of the Big Bang. We have no observational evidence. The closest was supposed to be the BICEP2 data, but that is in serious doubt.

We do not know that the universe could have originated from nothing.

We have no evidence for multiple universes.

Coyne accuses the Pope of unsupported speculation, but the same could be said for multiple universes, or the universe originating from nothing.

Monday, October 27, 2014

Relativity has Lorentz and geometry explanations

I posted about the 3 views of special relativity, which may be called the Lorentz aether theory (LET), Einsteinian special relativity (ESR), and Minkowski spacetime (MST).

Briefly, Lorentz explained the contraction (and the Lorentz transformation, LT) as motion causing an electromagnetic distortion of matter and fields. ESR deduces the LT as a logical consequence either of the Michelson-Morley experiment (as FitzGerald 1889 and Lorentz 1892 did) or of postulates that Lorentz distilled from that experiment and Maxwell's equations (as Einstein 1905 did). MST recasts the LT as symmetries of a 4-dimensional geometry (as per Poincare 1905 and Minkowski 1908).
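The geometric view of the LT can be checked numerically. A minimal sketch (my own illustration, with c = 1 and example values assumed): a boost is a hyperbolic rotation that leaves the Minkowski interval t^2 - x^2 unchanged, just as a Euclidean rotation leaves x^2 + y^2 unchanged.

```python
import math

def boost(t, x, v):
    """Lorentz transformation to a frame moving at velocity v (units of c)."""
    g = 1.0 / math.sqrt(1.0 - v * v)  # the Lorentz factor gamma
    return g * (t - v * x), g * (x - v * t)

# One event, two coordinate systems: the coordinates change, but the
# interval t^2 - x^2 does not. Nothing physical is transmuted.
t, x = 3.0, 1.0
t2, x2 = boost(t, x, 0.6)
print(round(t * t - x * x, 10))      # 8.0
print(round(t2 * t2 - x2 * x2, 10))  # 8.0
```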

In case you think that I am taking some anachronistic view of MST, I suggest The Non-Euclidean Style of Minkowskian Relativity and Minkowski’s Modern World (pdf) by Scott Walter, and Geometry and Astronomy: Pre-Einstein Speculations of Non-Euclidean Space, by Helge Kragh. Non-Euclidean geometry was crucial for the early acceptance and popularity of special relativity in 1908, and dominated the early textbooks.

While the Lorentzian constructive view is unpopular, it has some explanatory advantages, and is a legitimate view. I tried to add a sentence last year saying that to the Wikipedia article on length contraction, but I was overruled by other editors.

Dan Shanahan recently wrote a paper on explanatory advantages to LET, and asks:
You say:

"In the preferred (Minkowski) view, the LT is not really a transmutation of space and time. Minkowski spacetime has a geometry that is not affected by motion. The LT is just a way to get from one set of artificial coordinates to another."

and later:

"The physics is all built into the geometry, and the contraction seems funny only because we are not used to the non-Euclidean geometry."

I would understand, and could accept, both passages if you were describing the non-Euclidean metric of the gravitational equation where we have the intuitive picture of a curved spacetime. But I cannot see what changes could actually be occurring in flat spacetime in consequence of the rotation described by the LT. You say that ”Minkowski spacetime has a geometry that is not affected by motion” and it is here in particular that I am not sure of your meaning.

I would say myself that whatever space might be, neither it nor its geometry could be affected (except in the sense described by general relativity) either by the motion through it of an observer or other object, or (and this is the important point) by changes in that motion. But that cannot be what you are saying for that is the Lorentzian constructive approach.
To answer this, you have to understand that Poincare and Minkowski were mathematicians, and they mean something subtle by a geometrical space.

Roughly, a geometrical space is a set of points with geometrical structures on it. The subtle part is that it must be considered the same as every other isomorphic space. So a space could be given in terms of particular coordinates, but other coordinates can be used isomorphically, so the coordinates are not to be considered part of the space. The space is a coordinate-free manifold with a geometry, and no particular coordinates are necessarily preferred.

The concept is so subtle that Hermann Weyl wrote a 1913 book titled, "The Concept of a Riemann Surface". The concept was that it could be considered a coordinate-free mathematical object. I can say that in one sentence, but Weyl had a lot of trouble explaining it to his fellow mathematicians.

Minkowski space can be described as R4 with either the metric or LT symmetry group (or both), but that is not really the space. It is just a particular parameterization of the space. There is no special frame of reference in the space, and many frames can be used.

Poincare introduced Minkowski space in 1905, and gave an LT-invariant Lagrangian for electromagnetism. The point of this is that the Lagrangian is a (covariant) scalar function on the coordinate-free Minkowski space. He then introduced the 4-vector potential, and implicitly showed that it was covariant. Minkowski extended this work in 1907 and 1908, and was more explicit about using a 4-dimensional non-Euclidean manifold, and constructing a covariant tensor out of the electric and magnetic fields. With this proof, the fields became coordinate-free objects on a geometrical space.

This is explained in section 8 of "Henri Poincare and Relativity Theory" by A. A. Logunov (2005).

Thus the physical variables are defined on the 4D geometrical space. The LT are just isomorphisms of the space that preserve the physical variables.

This concept is like the fact that Newton's F=ma means the same in any units. To do a calculation, you might use pounds, grams, or kilograms, but it does not matter as long as you are consistent. The physical variables do not depend on your choice of units.

Changing from pounds to grams does not affect the force, and the LT does not transmute spacetime. They are just changes of coordinates that might make calculations easier or harder, but they do not affect any physical reality.
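As a trivial numerical sketch of the analogy (Python, with made-up numbers): computing F = ma in SI units and again in CGS units gives different numerals but the same physical force:

```python
m_kg, a_si = 2.0, 3.0
force_newtons = m_kg * a_si                # F = ma in SI units

m_g, a_cgs = m_kg * 1000.0, a_si * 100.0   # same mass and acceleration in CGS
force_dynes = m_g * a_cgs                  # F = ma in CGS units

# Different numbers, same force: 1 newton = 10^5 dynes.
assert abs(force_dynes - force_newtons * 1e5) < 1e-9
```

The unit conversion, like the LT, relabels the description without touching the physics.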

To answer Dan's question, motion is just a change of coordinates, and in MST, the physics is built on top of a geometry that does not depend on any such coordinates. Motion by itself is not a real thing, as it is always relative to something else.

To me, the essence of special relativity is that it puts physics on a 4D non-Euclidean geometry. That is why it bugs me that Einstein is so heavily credited, when he only had a recapitulation of Lorentz's theory, and not the 4D spacetime, symmetry group, metric, covariance, or geometry. Many physicists and historians blame Poincare for believing in an aether with a fixed reference frame. The truth is more nearly the opposite, as Poincare said that the aether was an unobservable convention and proved that symmetries make all the frames equivalent in his theory.

Dan gives this thought experiment, which is his variant of the twin paradox:
Consider two explorers, who we will call Buzz and Mary. They had been travelling, in separate space ships, side by side, in the same direction. But Buzz has veered away to explore a distant asteroid. Mary concludes from her knowledge of the LT that time must now be running more slowly for Buzz and that he and his ship have become foreshortened in the direction that Buzz is travelling relative to her. Buzz observes no such changes either in himself or in his ship. To Buzz, it is in Mary and her ship that these changes have occurred. Buzz is also aware that events that he might previously have regarded as simultaneous are no longer so.

But what has actually changed? ...

For Buzz the LT will describe very well his altered perspective. But it would be as inappropriate to explain length contraction, time dilation and loss of simultaneity as resulting from a physical transformation of space or spacetime as it would be to describe the rotation of an object in 3-space as a rotation of space rather than a rotation in space.
Some books try to avoid this issue by saying that ESR only applies to inertial frames like Mary, not Buzz. But historically, Lorentz, Poincare, Einstein, and Minkowski all applied special relativity to accelerating objects. General relativity is distinguished by gravitational curvature.

Dan argues in his paper that LET gives a physical explanation for what is going on, and ESR does not. I agree with that. Our disagreement lies with MST.

I say that there are two ways of explaining this paradox, and they are the same two that Poincare described in 1905. You can follow Lorentz, say that everything is electromagnetic, and attribute the LT to distortions in matter and fields. Dan's paper explains this view.

Poincare's second explanation was to say that relativity is a theory about “something which would be due to our methods of measurement.” The LT is not a contraction of matter or of space. It is just a different parameterization of a geometric spacetime that is not affected by motion at all.

Let me give my own thought experiment. Suppose Mary and Buzz start on the equator, traveling north side-by-side toward the North Pole. Assume that the Earth is a perfect sphere. Along the way, Buzz takes a right turn for a mile, then two left turns for a mile each, expecting to meet back up with Mary. Mary waits for him, but does not alter her northerly path. Then they discover that they are not on the same course. What happened?

You can apply Euclidean geometry, draw the great circles, and find that they do not close up. Or you can apply spherical geometry, where Mary and Buzz are traveling in straight lines, and note that squares have angles larger than 90 degrees. The problem is not that matter or space got transmuted. The problem is that Buzz took a non-Euclidean detour but returned with Euclidean expectations.
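The failure of Buzz's detour to close can be checked numerically. Here is a rough Python sketch, assuming a unit sphere and exaggerating the mile-long legs to 0.5 radians so the effect is easy to see; the geodesic legs with 90-degree turns do not bring Buzz back to Mary's meridian:

```python
import math

def walk(p, h, d):
    """Move distance d along the great circle through p with unit tangent h."""
    return ([pi * math.cos(d) + hi * math.sin(d) for pi, hi in zip(p, h)],
            [hi * math.cos(d) - pi * math.sin(d) for pi, hi in zip(p, h)])

def turn(p, h, deg):
    """Rotate heading h by deg degrees in the tangent plane at p (positive = left)."""
    left = [p[1]*h[2] - p[2]*h[1], p[2]*h[0] - p[0]*h[2], p[0]*h[1] - p[1]*h[0]]
    a = math.radians(deg)
    return [hi * math.cos(a) + li * math.sin(a) for hi, li in zip(h, left)]

# Buzz starts on the equator at longitude 0, heading north.
p, h = [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]
leg = 0.5                  # leg length in radians (exaggerated for clarity)
p, h = walk(p, h, leg)     # north along Mary's meridian
h = turn(p, h, -90)        # right turn
p, h = walk(p, h, leg)
h = turn(p, h, 90)         # left turn
p, h = walk(p, h, leg)
h = turn(p, h, 90)         # left turn
p, h = walk(p, h, leg)     # in flat geometry this would rejoin the meridian

longitude = math.atan2(p[1], p[0])
print(longitude)           # nonzero: Buzz misses Mary's meridian
```

In the Euclidean plane the final longitude would be exactly zero; on the sphere it is not, and no matter or space had to be transmuted to get that result.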

In Dan's example, Buzz takes a detour in spacetime. It does not do anything strange to his spaceship or his clocks. The strangeness only occurs when he uses Euclidean coordinates to compare to Mary. Nothing physical is strange, as long as you stay within Minkowski's geometry.
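A short numerical sketch of this point (Python, with c = 1 and made-up worldline coordinates): the elapsed proper time along each worldline is a frame-independent Minkowski length, and Buzz's bent path through spacetime is simply shorter, with nothing transmuted:

```python
import math

def proper_time(events):
    """Sum the Minkowski lengths of the segments of a timelike worldline."""
    tau = 0.0
    for (t0, x0), (t1, x1) in zip(events, events[1:]):
        dt, dx = t1 - t0, x1 - x0
        tau += math.sqrt(dt * dt - dx * dx)   # segment interval, c = 1
    return tau

mary = [(0.0, 0.0), (10.0, 0.0)]              # stays on a straight worldline
buzz = [(0.0, 0.0), (5.0, 3.0), (10.0, 0.0)]  # detours to the asteroid and back

# Buzz ages less, but only because his path through spacetime is shorter.
print(proper_time(mary), proper_time(buzz))   # prints 10.0 8.0
```

Both numbers are geometric invariants of the paths, so every choice of frame agrees on them, just as every choice of map coordinates agrees on the lengths of Buzz's legs on the sphere.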

Thus there are two physical explanations for the special relativistic effects. One is the FitzGerald-Lorentz LET, and the other is Poincare-Minkowski geometry. Einstein gives more of an instrumentalist approach to LET, and does not really explain it.