Friday, May 31, 2013

Radical re-examination of the invisible frameworks

Raymond Tallis writes:
But there could not be a worse time for philosophers to surrender the baton of metaphysical inquiry to physicists. Fundamental physics is in a metaphysical mess and needs help. The attempt to reconcile its two big theories, general relativity and quantum mechanics, has stalled for nearly 40 years. Endeavours to unite them, such as string theory, are mathematically ingenious but incomprehensible even to many who work with them. This is well known. A better-kept secret is that at the heart of quantum mechanics is a disturbing paradox – the so-called measurement problem, arising ultimately out of the Uncertainty Principle – which apparently demonstrates that the very measurements that have established and confirmed quantum theory should be impossible. Oxford philosopher of physics David Wallace has argued that this threatens to make quantum mechanics incoherent, which can be remedied only by vastly multiplying worlds.

Beyond these domestic problems there is the failure of physics to accommodate conscious beings. The attempt to fit consciousness into the material world, usually by identifying it with activity in the brain, has failed dismally, if only because there is no way of accounting for the fact that certain nerve impulses are supposed to be conscious (of themselves or of the world) while the overwhelming majority (physically essentially the same) are not. In short, physics does not allow for the strange fact that matter reveals itself to material objects (such as physicists).

And then there is the mishandling of time. The physicist Lee Smolin's recent book, Time Reborn, links the crisis in physics with its failure to acknowledge the fundamental reality of time. Physics is predisposed to lose time because its mathematical gaze freezes change. Tensed time, the difference between a remembered or regretted past and an anticipated or feared future, is particularly elusive. This worried Einstein: in a famous conversation, he mourned the fact that the present tense, "now", lay "just outside of the realm of science".

Recent attempts to explain how the universe came out of nothing, which rely on questionable notions such as spontaneous fluctuations in a quantum vacuum, the notion of gravity as negative energy, and the inexplicable free gift of the laws of nature waiting in the wings for the moment of creation, reveal conceptual confusion beneath mathematical sophistication. They demonstrate the urgent need for a radical re-examination of the invisible frameworks within which scientific investigations are conducted. We need to step back from the mathematics to see how we got to where we are now. In short, to un-take much that is taken for granted.
Physicists are asking for this sort of nonsense. They often say things that give the impression that quantum mechanics is an incoherent theory, that it contradicts gravity, or that understanding the reality of time is a crisis in physics. That is how claims like the ones above gain currency.

I found this Poincare quote:
When a scientific theory claims to tell us what heat, what electricity, or what life really is, it stands convicted at the outset.

Wednesday, May 29, 2013

Unnatural constants make life possible

Natalie Wolchover writes:
Physicists reason that if the universe is unnatural, with extremely unlikely fundamental constants that make life possible, then an enormous number of universes must exist for our improbable case to have been realized. Otherwise, why should we be so lucky? Unnaturalness would give a huge lift to the multiverse hypothesis, which holds that our universe is one bubble in an infinite and inaccessible foam.
This is more religion than science.

Saturday, May 25, 2013

Physics PhD in wrong theory

A theoretical physicist defends what he does:
I study a theory called N=4 super Yang-Mills. ...

First of all, N=4 super Yang-Mills involves supersymmetry. Some forms of supersymmetry are being searched for by the Large Hadron Collider. But those forms involve symmetries that are broken, which allow the particles to have distinctive characters.

In N=4 super Yang-Mills, supersymmetry is unbroken. Every particle has the same mass and the same charge. Furthermore, in N=4 super Yang-Mills that mass is equal to zero; like photons, the particles of N=4 super Yang-Mills would all travel at the speed of light.

There is no group of particles like that in the Standard Model. They can’t be undiscovered particles, either. Particles that travel at the speed of light are part of the everyday world if they have any interaction with normal matter whatsoever, so if the particles existed, we’d know about them. Since they don’t in N=4 super Yang-Mills, we know the theory isn't “true.”

Even with this knowledge, there is an even more certain way to know that N=4 super Yang-Mills isn't “true": it was never supposed to be true in the first place.

A theory by any other name

More than a few of you are probably objecting to my use of the word “theory” in the last few paragraphs. If N=4 super Yang-Mills isn't part of the real world, how could it possibly be a theory? After all, a scientific theory is "a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment.”

That's courtesy of the American Association for the Advancement of Science. Confused? You must have been talking to the biologists again. Let’s explain. ...

I’m not a mathematician, however. I’m a physicist. I don’t study things merely because they are mathematically interesting. Given that, why do I (and many others) study theories that aren’t true?

Let me give you an analogy. Remember back in 2008, when Sarah Palin made fun of funding “fruit fly research in France?" Most people I know found that pretty ridiculous.
The AAAS is dominated by leftist-atheist-evolutionists who are sensitive about the use of the word "theory" in "theory of evolution". I understand that they are on the warpath against Christians and creationists, but in my experience, scientists frequently use the word theory to describe a collection of ideas that have not been substantiated or confirmed at all.

Fruit fly research is at least telling us truths about fruit flies. One man's research could be proved wrong by another man doing a fruit fly experiment. This guy is bragging that nothing can be done to prove anyone wrong in the field, because the whole field is wrong.

A couple of the comments say that his work is justified because he is a mathematician doing math. But he explicitly denies that he is a mathematician, so that is not right.

He also explains:
In referring to the theory I study as “wrong”, I’m attempting to bring readers face to face with a common misconception: the idea that every theory in physics is designed to approximate some part of the real world. For the physicists in the audience, this is the public perception that everything in theoretical physics is phenomenology. If we don’t bring this perception to light and challenge it, then we’re sweeping a substantial amount of theoretical physics under the rug for the sake of a simpler message. And that’s risky, because if people don’t understand what physics really is then they’re likely to balk when they glimpse what they think is “illegitimate” physics.
Silly me, I thought that science was all about trying to approximate the real world. Until I discovered string theorists and others who want nothing to do with the real world.

Friday, May 24, 2013

Photons entangled without coexisting

AAAS Science magazine reports:
Physicists have long known that quantum mechanics allows for a subtle connection between quantum particles called entanglement, in which measuring one particle can instantly set the otherwise uncertain condition, or "state," of another particle — even if it's light years away. Now, experimenters in Israel have shown that they can entangle two photons that don't even exist at the same time. ...

The experiment shows that it's not strictly logical to think of entanglement as a tangible physical property, Eisenberg says. "There is no moment in time in which the two photons coexist," he says, "so you cannot say that the system is entangled at this or that moment." Yet, the phenomenon definitely exists. Anton Zeilinger, a physicist at the University of Vienna, agrees that the experiment demonstrates just how slippery the concepts of quantum mechanics are. "It's really neat because it shows more or less that quantum events are outside our everyday notions of space and time."

So what's the advance good for? Physicists hope to create quantum networks in which protocols like entanglement swapping are used to create quantum links among distant users and transmit uncrackable (but slower than light) secret communications. The new result suggests that when sharing entangled pairs of photons on such a network, a user wouldn't have to wait to see what happens to the photons sent down the line before manipulating the ones kept behind, Eisenberg says. Zeilinger says the result might have other unexpected uses: "This sort of thing opens up people's minds and suddenly somebody has an idea to use it in quantum computing or something."
I don't doubt this experiment, but the explanation is really misleading. Quantum mechanics teaches that photons never exist as particles. They have some particle properties and some wave properties.

A lot of these quantum paradoxes depend on you thinking of photons as particles, analogous to macroscopic particles with which we have personal experience, like marbles or ping-pong balls. Think of photons as particles, and almost everything about light is very mysterious. Stop thinking of them as particles, accept quantum mechanics, and light is not so strange.

There are no applications to uncrackable secret communications or to quantum computing. This is just quantum mechanics, not some great new physics.
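
For reference, the entanglement swapping mentioned in the Science quote works as follows: prepare two entangled pairs, (1,2) and (3,4), and perform a joint Bell measurement on photons 2 and 3. That projects photons 1 and 4 into an entangled state even though they never interacted:

$$ |\Phi^{+}\rangle_{12}\otimes|\Phi^{+}\rangle_{34} \;\xrightarrow{\ \text{Bell measurement on } 2,3\ }\; |\Phi^{\pm}\rangle_{14} \ \text{(up to local corrections).} $$

In the Israeli experiment, photon 1 was measured and destroyed before photons 3 and 4 were even created, which is all the "no coexistence" claim amounts to.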

Monday, May 20, 2013

Hawking on God's dice

I have noted how non-physicists say silly things about determinism and free will, but physicists are almost as bad.

Stephen Hawking gave this lecture:
Many scientists are like Einstein, in that they have a deep emotional attachment to determinism. Unlike Einstein, they have accepted the reduction in our ability to predict, that quantum theory brought about. But that was far enough. They didn't like the further reduction, which black holes seemed to imply. They have therefore claimed that information is not really lost down black holes. ...

To sum up, what I have been talking about, is whether the universe evolves in an arbitrary way, or whether it is deterministic. The classical view, put forward by Laplace, was that the future motion of particles was completely determined, if one knew their positions and speeds at one time. This view had to be modified, when Heisenberg put forward his Uncertainty Principle, which said that one could not know both the position, and the speed, accurately. However, it was still possible to predict one combination of position and speed. But even this limited predictability disappeared, when the effects of black holes were taken into account. The loss of particles and information down black holes meant that the particles that came out were random. One could calculate probabilities, but one could not make any definite predictions. Thus, the future of the universe is not completely determined by the laws of science, and its present state, as Laplace thought.
Pierre-Simon Laplace lived around 1800 and not only proposed scientific determinism, he also anticipated black holes and developed Bayesian probability. He said: "...[It] is therefore possible that the largest luminous bodies in the universe may, through this cause, be invisible." This idea was so far-fetched that it was removed from later editions of his Exposition du système du monde.
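
Laplace's reasoning was simple Newtonian arithmetic: light corpuscles cannot escape a body whose escape velocity exceeds the speed of light,

$$ v_{\mathrm{esc}} = \sqrt{\frac{2GM}{R}} \ge c \quad\Longleftrightarrow\quad R \le \frac{2GM}{c^{2}}, $$

which, coincidentally, is the same critical radius that falls out of Schwarzschild's solution to Einstein's equations.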

Laplace was not so silly as to think that black holes have something to do with determinism. That takes a modern physicist with a big reputation and wacky ideas like Hawking. Laplace had a much better understanding of what science was all about.

The equations of quantum mechanics are not any more or less deterministic than the equations of classical mechanics. Both formally determine the future from the present state, but neither yields complete predictions in practice, because the inputs can never be completely known.
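
To make the point concrete, the quantum equation of motion is the Schrödinger equation,

$$ i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi, $$

which evolves the wave function just as deterministically as Newton's laws evolve a classical trajectory; probabilities enter only when a measurement is read off. Hawking continues: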
Einstein's view was what would now be called, a hidden variable theory. Hidden variable theories might seem to be the most obvious way to incorporate the Uncertainty Principle into physics. They form the basis of the mental picture of the universe, held by many scientists, and almost all philosophers of science. But these hidden variable theories are wrong. The British physicist, John Bell, who died recently, devised an experimental test that would distinguish hidden variable theories. When the experiment was carried out carefully, the results were inconsistent with hidden variables. Thus it seems that even God is bound by the Uncertainty Principle, and can not know both the position, and the speed, of a particle. So God does play dice with the universe. All the evidence points to him being an inveterate gambler, who throws the dice on every possible occasion.
This argument for God playing dice is based on a mismatch between the mythical hidden variables and the observable variables. But if you accept those Bell test experiments, then the hidden variables do not exist, and the argument is fallacious. It tells us nothing about whether God plays dice or not.
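
For reference, the Bell test Hawking alludes to is usually run in the CHSH form. Any local hidden variable theory must satisfy

$$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2, $$

where E(a,b) is the correlation of outcomes at detector settings a and b. Quantum mechanics predicts values of |S| up to $2\sqrt{2} \approx 2.83$ for suitably entangled pairs, and the experiments come out on the quantum side.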

Friday, May 17, 2013

Quantum computer with no quantum speedup

MIT complexity theorist Scott Aaronson is back on the warpath against those claiming successful quantum computing:
“Look, Scott, let the investors, government bureaucrats, and gullible laypeople believe whatever they want — and let D-Wave keep telling them what’s necessary to stay in business.  It’s unsportsmanlike and uncollegial of you to hold D-Wave’s scientists accountable for whatever wild claims their company’s PR department might make.  After all, we’re in this game too!  Our universities put out all sorts of overhyped press releases, but we don’t complain because we know that it’s done for our benefit.  Besides, you’d doubtless be trumpeting the same misleading claims, if you were in D-Wave’s shoes and needed the cash infusions to survive.  Anyway, who really cares whether there’s a quantum speedup yet or no quantum speedup?  At least D-Wave is out there trying to build a scalable quantum computer, and getting millions of dollars from Jeff Bezos, Lockheed, Google, the CIA, etc. etc. to do so—resources more of which would be directed our way if we showed a more cooperative attitude!  If we care about scalable QCs ever getting built, then the wise course is to celebrate what D-Wave has done—they just demonstrated quantum annealing on 100 qubits, for crying out loud!  So let’s all be grownups here, focus on the science, and ignore the marketing buzz as so much meaningless noise — just like a tennis player might ignore his opponent’s trash-talking (‘your mother is a whore,’ etc.) and focus on the game.”

I get this argument: really, I do. I even concede that there’s something to be said for it. But let me now offer a contrary argument for the reader’s consideration. ... If that happens, then I predict that the very same people now hyping D-Wave will turn around and—without the slightest acknowledgment of error on their part—declare that the entire field of quantum computing has now been unmasked as a mirage, a scam, and a chimera.  The same pointy-haired bosses who now flock toward quantum computing, will flock away from it just as quickly and as uncomprehendingly.  Academic QC programs will be decimated, despite the slow but genuine progress that they’d been making the entire time in a “parallel universe” from D-Wave.  People’s contempt for academia is such that, while a D-Wave success would be trumpeted as its alone, a D-Wave failure would be blamed on the entire QC community.
I get the impression that there is some resentment of D-Wave's private funding. It is okay if university quantum computer researchers make wild claims, because they don't have to deliver a product. D-Wave promises a product that can be benchmarked, and is subject to failure in the marketplace.

There is, unfortunately, no proof that quantum computing is possible. And it may never be. Peer-reviewed professors can live in an academic bubble, and pretend that it is possible.

The NY Times reports:
In tests last September, an independent researcher found that for some types of problems the quantum computer was 3,600 times faster than traditional supercomputers. According to a D-Wave official, the machine performed even better in Google’s tests, which involved 500 variables with different constraints.

“The tougher, more complex ones had better performance,” said Colin Williams, D-Wave’s director of business development. “For most problems, it was 11,000 times faster, but in the more difficult 50 percent, it was 33,000 times faster. In the top 25 percent, it was 50,000 times faster.” Google declined to comment, aside from the blog post.

The machine Google and NASA will use makes use of the interactions of 512 quantum bits, or qubits, to determine optimization. They plan to upgrade the machine to 2,048 qubits when this becomes available, probably within the next year or two. That machine could be exponentially more powerful.
It sounds great, but apparently there is some dispute about whether there is really some sort of magical quantum speedup.

Update: Peter Shor writes:
This [argument that an off-the-shelf classical computer can solve the D-Wave problem many times faster] is exactly like arguing that if you look at the Wright Brothers’ first flight at Kitty Hawk, they could have gone farther, faster, and much more cheaply if they had just used an automobile. It’s just not the right comparison. D-Wave’s money was not spent only to build this current device; you have to consider that from their viewpoint, it’s just one step on the pathway to a much more complicated and useful device.
SciAm explains:
I began by explaining the theory behind quantum computing and why they hold the promise of significantly faster processing. In essence, it relies upon the fact that whilst conventional “bits” can be 0 or 1, quantum bits (so called qubits) can be both 0 and 1 at the same time (known as superposition). If you can combine qubits (known as entanglement) you can have a system that can process values that expand exponentially with the number of qubits you entangle. As with conventional programming, these qubits are passed through various logic gates to achieve the desired results. Hence, this is known as the “gate theory” of quantum computing.
This is a convenient explanation of quantum computers, but it is one that Aaronson denounces as incorrect:
I agree that thinking about the wavefunction “realistically” (as an exponentially-large classical object) seems to be a mistake that countless popular writers make, which then leads them to believe that quantum computers can solve black-box search problems instantaneously, store exponentially-many classical bits, and do other things that they’re known not to be able to do.
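
To see why the "exponentially large object" picture is so tempting, note that a brute-force classical simulation of n qubits really does track $2^n$ complex amplitudes. Here is a minimal NumPy sketch of my own (not code from any quantum computing library); the catch, per Aaronson, is that measuring the n qubits yields only n classical bits, so the exponential bookkeeping is not an exponential store of usable data.

```python
import numpy as np

def zero_state(n):
    """n qubits in |00...0>: a vector of 2^n complex amplitudes."""
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0
    return state

def apply_hadamard(state, qubit, n):
    """Apply a Hadamard gate to one qubit of an n-qubit state."""
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    op = np.array([[1.0]], dtype=complex)
    for q in range(n):  # build I x ... x H x ... x I (exponentially costly)
        op = np.kron(op, H if q == qubit else np.eye(2))
    return op @ state

n = 10
state = zero_state(n)            # already 1,024 amplitudes at 10 qubits
for q in range(n):
    state = apply_hadamard(state, q, n)
probs = np.abs(state)**2         # Born rule: measurement probabilities
print(len(state), probs[0])      # 1024 amplitudes, each outcome prob 1/1024
```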

Thursday, May 16, 2013

Atheists trashing other religious influence

Caltech cosmologist Sean M. Carroll writes Science and Religion Can’t Be Reconciled:
Why I won’t take money from the Templeton Foundation. ...

And if anyone is tempted to award me the Templeton Prize, I will totally accept it! And use the funds to loudly evangelize for naturalism and atheism. (After I pay off the mortgage.)
So he will take Templeton money if they offer him enough. Everyone has his price, I guess.

I don't care if he wants to promote his atheist beliefs, but his attitude is not that of a cold scientist. He regularly promotes unscientific physics philosophies such as many-worlds, string theory, and the arrow of time. That stuff is no more scientific than most of the stuff that Templeton promotes.

Another Templeton critic complains:
Tim Maudlin asks for evidence of the distorting effect of Templeton funding.

It seems to me that the enormity of Templeton funding means that religious epistemology and religious perspectives on knowledge, understanding, etc., take a very large position in analytic epistemology overall. (According to Chalmers & Bourget, over 72% of philosophers are atheists; the number of projects in religious epistemology and religiously-motivated epistemology would seem outsized, given that percentage.)
So if 72% of philosophers are atheists, then no private foundation should do anything to decrease that percentage?

I am all in favor of separating science and religion, but I put many-worlds and string theory on the side of religion. There is no more evidence for those concepts than there is for astrology.

(I am not quarrelling with old understandings about the second law of thermodynamics. But Carroll goes beyond that, and speculates about time running backwards in other universes. He also says that the past and future are equally real.)

Wednesday, May 15, 2013

New book defends string theory

Peter Woit writes:
There’s a new philosophy of science book out, Richard Dawid’s String Theory and the Scientific Method.
The book seems to be an elaboration of these papers, downloadable for free: Underdetermination and Theory Succession from the Perspective of String Theory, On the conflicting assessments of the current status of string theory, and Realism in the age of string theory.

The argument is that a lot of big-shots work on string theory, so it must be science. The theory is unique because it claims to explain everything, while actually explaining nothing. Since the theory cannot be tested, we have to accept new definitions of science and realism. Some really smart people have opinions about what is aesthetically pleasing, and that can substitute for experiment.

Yes, that's it. He is trying to promote string theory, but his empty arguments show that the theory is a failure by any objective standard.

Lumo adds:
The three reasons behind the near-certainty about the theory's validity are:
the non-existence of alternatives ...

Concerning the first argument, it is the actual explanation why the top bright theoretical physicists focus this high percentage of their intellectual skills on string theory. They simply divide their mental powers to all promising ideas, with the weight given by the degree to which they are promising. Because one may approximately say that there aren't any other promising "big ideas" outside string theory, people can't work on them.
So if these super-smart guys had a mystical belief in unicorns or astrology, and string theory were perceived as better than the alternatives for that purpose, then they would study string theory. The problem with this argument is that there is no good reason to believe in a unified field theory, and no good reason for believing that string theory would be progress towards that end.

Monday, May 13, 2013

Discovery of the electromagnetic Lagrangian

Lumo writes:
Schwarzschild is most famously associated with the first nontrivial exact solution to Einstein's equations of general relativity. But he would also study optics, photographic materials, celestial mechanics, quantum theory, stellar structure and statistics, Halley's comet, and spectroscopy. According to Wolfgang Pauli, Schwarzschild was the first man who wrote the correct form of the action for the electromagnetic field coupled to charges and currents.
I was surprised when I learned of that last Schwarzschild discovery a couple of years ago.

In my book, I argue that the heart of special relativity was the 1905 discovery of the Lorentz covariance of Maxwell's equations.

Lorentz had a cruder concept in 1895 that he called the theorem of corresponding states. Einstein's 1905 paper postulated what Lorentz proved, but did not have the covariance concept. Poincare's 1905 paper presented the concept, and everyone else got it from him. It was not independently rediscovered by anyone else.

For one proof, Poincare presented a Lorentz invariant Lagrangian density that implies Maxwell's equations. The covariance follows from the invariance of the Lagrangian. I was going to credit Poincare with discovering the Lagrangian himself, but in researching the point for my book, I discovered that Karl Schwarzschild published it in 1903. Schwarzschild did not know that it was Lorentz invariant, or figure out the significance for special relativity, as Poincare did. Minkowski was one of the few people who grasped the significance of what Poincare did, and popularized the 4-dimensional geometrical view in 1908. Einstein was slow to understand Minkowski, but eventually caught on to what relativity was about in 1909.
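
In modern covariant notation (Schwarzschild and Poincare of course wrote it in the notation of their day), the Lagrangian density in question is

$$ \mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} - J^{\mu}A_{\mu}, \qquad F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu}. $$

Varying the potential gives the inhomogeneous Maxwell equations, $\partial_{\mu}F^{\mu\nu} = J^{\nu}$, and since $\mathcal{L}$ is a Lorentz scalar, the Lorentz covariance of the equations follows immediately.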

Einstein's famous 1905 paper is one of the most widely praised scientific papers ever written, but it did not have the Lorentz group, spacetime, 4-dimensional geometry, a relativistic Lagrangian, electromagnetic covariance, or gravitational implications. We got all those things from Poincare, and Poincare announced his results in 1905 before Einstein submitted his famous paper.

Wednesday, May 8, 2013

Why we have free will

I attacked Jerry Coyne for his unscientific views about free will. I would not bother with him, except that he is a famous and distinguished U. Chicago professor, and he posts daily about the superiority of evolutionary science to religion. He now writes:
Now most of us think that the notion of “free choice,” as in the sense of “could have chosen otherwise at a given moment,” is wrong. Excepting quantum mechanics — whose effects on behavior are unknown, and whose pure indeterminacy doesn’t fit most people’s idea of “free will” — our behaviors are determined by physical laws, and can’t be overridden by some spirit in the brain. Ergo, as Jeff said, libertarian free will is dead. I think that nearly all of us agree.
The laws of quantum mechanics are the most basic physical laws we know, and are essential to much of what we know about DNA and other microscopic aspects of life. It is crazy to say that "Excepting quantum mechanics ... our behaviors are determined by physical laws". He is saying that there is no free will (and hence no need for religion) because physical laws are deterministic except where they are not deterministic.

He then challenges:
For compatibilists:

1. What is your definition of free will?

2. What is “free” about it? Is someone who kills because of a brain tumor less free than someone who kills because, having been brought up in a terrible environment, he values drugs more than other people’s lives?

3. If humans have free will, do other species as well? What about computers?

4. Why is it important that you have a definition of free will rather than discarding the concept completely in favor of something like “agency”? That is, what “new knowledge”, as Jeff noted, does your concept add beyond reassuring people that we have “free will” after all?
A reader answers:
Definition of “free”, from OED:
“able to act or be done as one wishes; not under the control of another”

Definition of “will”, from OED:
“the faculty by which a person decides on and initiates action”

Combine the two ultra-standard definitions of the two words and you have a pretty good approximation of my definition of free will.
Free will is how conscious beings describe the choices they make in their everyday lives. Maybe dogs are conscious and maybe computers will be someday. Consciousness is harder to define.

The concepts of free will and causality are central to how we understand the world, how we organize a civilized society, and how we have purpose to our lives. I cannot disprove superdeterminism, so you are free to believe that if you wish, but it is about as silly as believing in solipsism or that we are just simulations in the Matrix.

Coyne goes on to argue that criminals are not morally responsible for their crimes, that religion is invalid, that we should have same-sex marriage, and other political views. All from a misunderstanding of quantum mechanics!

Scott Aaronson writes:
As it happens, I’ve been working on and off for the past two years on a huge essay setting out my thoughts about free will and predictability — and the essay will be online in just a week or two!
I will reserve judgment until I see his essay.

Update: A Wikipedia article on Two-stage model of free will explains how free will can be compatible with physical law.

Monday, May 6, 2013

Modern physics is not crummy

I am unhappy with this new podcast: Live From NECSS With Jim Holt On Why Does the World Exist?

Holt is a fine science journalist, and most of the discussion was about silly and unanswerable philosophical questions. But I believe he also painted a seriously inaccurate picture of modern physics.

He said that the universe is so "crummy", "not elegant", with "60+ elementary particles", 110 elements, and the "standard model is so ugly". [at 15:00] The argument is that God should have been able to create a simpler design.

This is wrong. The standard model is quite elegant. All matter is made of quarks and electrons, with forces transmitted by bosons. You can only get to 60+ particles if you count colors, flavors, anti-particles, etc. as separate particles; no supersymmetric or other fictitious particles are needed to inflate the count, as the tally below shows.
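
For what it is worth, here is one common way the inflated count is reached (a rough tally; the exact total depends on conventions):

```python
# Standard Model "particles" when colors, flavors, and antiparticles
# are all counted separately (conventions vary on the exact total).
quarks    = 6 * 3 * 2  # 6 flavors x 3 colors x particle/antiparticle = 36
leptons   = 6 * 2      # e, mu, tau + 3 neutrinos, plus antiparticles = 12
gluons    = 8          # 8 color states
ew_bosons = 4          # photon, W+, W-, Z
higgs     = 1
print(quarks + leptons + gluons + ew_bosons + higgs)  # prints 61
```

Counted by the underlying fields instead, the theory has just a handful of quark, lepton, gauge, and Higgs fields, which is why the "ugly" charge is misleading.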

The second wrong opinion is that it is "impossible to give a realistic interpretation of quantum theory", and "impossible to make sense of it", with the obligatory R.P. Feynman quote. [at 33:20]

There are textbook explanations of quantum mechanics that make perfect sense, and that has been true since about 1930. There are people who claim that it would make more sense with hidden variables or parallel universes or other such nonsense, but they have been proven wrong for 80 years. Lee Smolin is a recent example.

Yes, I know that Feynman said that quantum mechanics is hard to understand, and that is true if you want to relate it to everyday macroscopic experience. But you can understand the theory by just reading Feynman's textbook.

MIT physicist Alan Lightman reviews Smolin's book in the NY Times:
He rightly remarks that Einsteinian physics frames time as a relative concept in which the line between past and future varies with the observer. ...

Twentieth-century physics has brought us two kinds of strangeness: strange things we more or less understand, and strange things we do not understand. The first category includes relativity and quantum mechanics. Relativity reveals that time is not absolute. Clocks in relative motion to each other tick at different rates. We don’t notice relativity in daily life because the relative speed must be close to the speed of light before the effects are significant. Quantum mechanics presents a probabilistic picture of reality; subatomic particles act as if they occupy many places at once, and their locations can be described only in terms of probabilities. Although we can make accurate predictions about the average behavior of a large number of subatomic particles, we cannot predict the behavior of a single subatomic particle, or even a single atom. We don’t feel quantum mechanics because its effects are significant only in the tiny realm of the atom.
I know what he is trying to say here, but this is wrong. Relativity does not deny absolute time. When cosmologists say that the age of the universe is 13.8B years, they are using absolute time. We can distinguish the past from the future. Clocks don't tick at different rates; they only appear that way to certain observers. We can see relativistic effects in the form of magnetism.
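
The quantitative statement behind "close to the speed of light" is the Lorentz factor

$$ \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, $$

which is only about 1.005 at a tenth of light speed, and differs from 1 by a few parts in $10^{13}$ at airliner speeds. The effects are always there; they are just too small to notice in daily life.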

Probability is not essential to quantum mechanics. We can apply the theory to predict the behavior of single atoms. We can say what will happen if it is struck by a photon or electron, and we can say how it can bind with other atoms. We feel quantum effects all the time. In just reading this text, your eye is detecting individual photons.
The category of strange things we do not understand includes the origin of the universe and the nature of the “dark energy” that pervades the cosmos. Over the last 40 years, physicists have realized that various universal parameters, like the mass of the electron (a type of subatomic particle) and the strength of the nuclear force (the force that holds the subatomic particles together within the centers of atoms), appear to be precisely calibrated. That is, if these parameters were a little larger or a little smaller than they actually are, the complex molecules needed for life could never have formed. Presumably, the values of these parameters were set at the origin of the universe. Fifteen years ago, astronomers discovered a previously unknown and still unexplained cosmic energy that fills the universe and acts as an antigravity-like force, pushing the galaxies apart. The density of this dark energy also appears to be extraordinarily fine-tuned. A little smaller or a little larger, and the life-giving stars would never have formed.
We know a lot about the nature of dark energy. We know the pressure and the density, we know that it appears to be uniform (and Lorentz invariant) throughout the universe, we know its history since the big bang, and we know how it drives the continuing expansion. Or at least we think that we know. And the dark energy is not finely tuned. If it were, then it would have been discovered much more than 15 years ago, as it would have been a consequence of the existence of stars.
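
The quantitative version of "we know the pressure and the density" is the equation of state

$$ p = w\,\rho c^{2}, \qquad w \approx -1, $$

a negative pressure equal in magnitude to the energy density. That is the behavior of a cosmological constant, and it is why the density stays constant as the universe expands. The review continues: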
He goes on to propose a variety of revolutionary ideas to codify further his notion of “real time.” In one, he suggests that every atom in the universe is causally connected to every other atom in the universe, no matter how many light-years away. According to his notion, the failure of standard quantum mechanics to predict the behavior of individual atoms arises from the fact that it does not take into account the vast numbers of interconnections extending across the universe. Furthermore, this picture of the cosmos requires an absolute time (in violation of relativity), which he calls “preferred global time.”

One of Smolin’s most astonishing ideas is something he calls the “principle of precedence,” that repeated measurements of a particular phenomenon yield the same outcomes not because the phenomenon is subject to a law of nature but simply because the phenomenon has occurred in the past. “Such a principle,” Smolin writes, “would explain all the instances in which determinism by laws work but without forbidding new measurements to yield new outcomes, not predictable from knowledge of the past.” In Smolin’s view such unconstrained outcomes are necessary for “real” time.
This is kooky. I have not seen the book, so I don't know if it is as bad as it sounds.

Friday, May 3, 2013

Impossibility of time travel

The recent podcast Rupert Sheldrake on "Science Set Free" is an interview with a scientist who has crackpot ideas. His excuse is that Thomas Kuhn discovered that science was about groupthink, and not truth. A comment defends pursuing untestable ideas because string theory is not testable either.

Here is a poll of philosopher beliefs, but it does not directly ask about Kuhn's paradigm shift theory.

Speaking of crackpot ideas, Scott Aaronson's new book defends time travel:
Yes, the Grandfather Paradox has often been put forward as a “proof” that time travel into the past is logically impossible. But there are several loopholes in that “proof.” One of them is the possibility of resolving the paradox probabilistically or quantumly (as Deutsch proposed). Another loophole is that maybe Nature simply always finds a consistent deterministic evolution, no matter how unlikely it seemed a priori. (E.g., if you went back in time and tried to kill your grandfather, you’d always discover that the gun jammed, or you forgot to load it, or he recovered from the gunshot wound and went on to sire your parent, etc. etc.) So really the Grandfather Paradox should be seen as a central, obvious difficulty that any account of closed timelike curves needs to overcome.

Your resolution of the paradox, in your first comment, is actually a good way to describe or visualize what happens in Deutsch’s resolution. (Indeed, since Deutsch avidly believes in the Many-Worlds Interpretation, he would regard it not just as a convenient way to visualize, but as a literal description of what happens in his proposal.)

However, one can also invent more complicated time-travel scenarios: for example, what happens if you flip a fair coin, and go back in time and kill your grandfather if and only if the coin lands heads? The beauty of Deutsch’s proposal is that it gives you an automatic way to compute a consistent story for any possible such scenario.

(Spoiler alert: in the above example, the solution is that you’re born with probability 2/3 and not born with probability 1/3. Or if you prefer, you’re born in 2 of 3 parallel universes, and not born in 1 of them.)
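The 2/3 is Deutsch's consistency condition at work; in this coin-flip version it reduces to a classical fixed-point equation. Writing $p$ for the probability that you are born:

$$ p = \underbrace{(1-p)\cdot 1}_{\text{not born: grandfather lives}} \;+\; \underbrace{p\cdot\tfrac{1}{2}}_{\text{born: coin lands tails}} \quad\Longrightarrow\quad p = \tfrac{2}{3}. $$

If you were not born, no one goes back and you are certainly born; if you were born, you survive your own intervention only when the coin lands tails.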
This is pretty wacky. I say that the Grandfather Paradox disproves time travel.

Here is a completely separate proof that we will never see time machines. If some future advanced civilization ever got time machines, then surely someone would decide that they are a really bad idea, and go back in time to kill the first inventor before he could create a time machine.

If you believe in Many-Worlds, then I suppose a time machine could take you to a parallel universe. But Many-Worlds is another crackpot idea with no scientific merit.

Frank Wilczek is promoting time crystals. These are not as crazy as time machines, as you cannot use them to violate logic or the laws of physics.