Friday, May 1, 2015

Defending philosophers of physics

Tim Maudlin writes on Why Physics Needs Philosophy:
How can we understand the world in which we find ourselves? How does the universe behave? What is the nature of reality?….Traditionally these are questions for philosophy, but philosophy is dead. Philosophy has not kept up with modern developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge. —Stephen Hawking and Leonard Mlodinow

This passage from the 2012 book “The Grand Design” set off a firestorm (or at least a brushfire) of controversy. ...

In fact, several leading philosophers of physics hold doctorates in physics. Yet they chose to affiliate with philosophy departments rather than physics departments because so many physicists strongly discourage questions about the nature of reality. The reigning attitude in physics has been “shut up and calculate”: solve the equations, and do not ask questions about what they mean. ...

Comprehending quantum theory is an even deeper challenge. What does quantum theory imply about “the nature of reality?” Scientists do not agree about the answer; they even disagree about whether it is a sensible question.
The problems surrounding quantum theory are not mathematical. They stem instead from the unacceptable terminology that appears in presentations of the theory. ...

Philosophers strive for conceptual clarity.
Maudlin is a smart guy who understands a lot of physics, but do physicists really need philosophers to lecture them on the nature of reality?

I just don't see that Philosophy has told Physics anything significant about quantum theory. Maudlin does not want to accept the common understanding of 1930, but what we have today is not much better.

On the other hand, Physics is overrun with crackpots of its own. Even Scientific American articles talk about parallel universes, black hole firewalls, and other nonsense.

I do not think that physicist hostility to philosophers is based on a differing view of realism, or on the lack of important contributions by philosophers. The most important factor is the philosophers who are at war with physics. Notice how Maudlin attacks physicists for ignoring the meaning of what they do, and for using unacceptable terminology. Other philosophers actively deny that physics is rational, or that it makes progress, or that it finds objective truths. So of course physicists do not think much of those philosophers.

Monday, April 27, 2015

Poincare applied theory to wireless telegraphy

A new paper on Poincare's forgotten conferences on wireless telegraphy:
At the beginning of the twentieth century while Henri Poincaré (1854-1912) was already deeply involved in the developments of wireless telegraphy, he was invited, in 1908, to give a series of lectures at the Ecole Supérieure des Postes et Télégraphes (today Sup’Télecom). In the last part of his presentation he established that the necessary condition for the existence of a stable regime of maintained oscillations in a device of radio engineering completely analogous to the triode: the singing arc, is the presence in the phase plane of stable limit cycle.

The aim of this work is to prove that the correspondence highlighted by Andronov between the periodic solution of a non-linear second order differential equation and Poincaré’s concept of limit cycle has been carried out by Poincaré himself, twenty years before in these forgotten conferences of 1908.
This work was intended for engineers, and was not included in his complete works.

Physics books often discount Poincare as a mathematician who did not really understand physics like electromagnetism. However, the record shows that he understood it better than Einstein or anyone else in Europe:
During the last two decades of his life, Poincaré had been involved in many research on the propagation of electromagnetic waves. In 1890, he wrote to Hertz to report a miscalculation in his famous experiments. Three years later, he solved the telegraphists equation [Poincaré, 1893]. The following year he published a book entitled: “Oscillations électriques” [Poincaré, 1894] and in 1899 another one: “La Théorie de Maxwell et les oscillations hertziennes” [Poincaré, 1899]. This book, also published in English and German in 1904 and reprinted in French in 1907, has been considered as a reference. In Chapter XIII, page 79 Poincaré stated that the singing arc and the Hertz spark gap transmitter were also analogous except that oscillations are maintained in the first and damped in the second. Thus, from the early twentieth century until his death Poincaré continued his research on wireless telegraphy and on maintained waves and oscillations [Poincaré, 1901, 1902, 1903, 1904, 1907, 1908, 1909abcde, 1910abc, 1911,1912].
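For reference, the telegraphist's (telegrapher's) equation that Poincare solved is, in modern notation (my sketch, not a quotation from his 1893 paper):

$$\frac{\partial^2 V}{\partial x^2} \;=\; LC\,\frac{\partial^2 V}{\partial t^2} \;+\; (RC+GL)\,\frac{\partial V}{\partial t} \;+\; RG\,V,$$

where V is the line voltage and R, L, C, G are the resistance, inductance, capacitance, and leakage conductance per unit length of the cable.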

Thursday, April 23, 2015

Bell doubted quantum mechanics and relativity

Physicist John Stewart Bell was famous for Bell's theorem, and considered by many a great genius who should have gotten a Nobel prize.

A physics article says:
But despite the ascendancy of the Copenhagen interpretation, the intuition that physical objects, no matter how small, can be in only one location at a time has been difficult for physicists to shake. Albert Einstein, who famously doubted that God plays dice with the universe, worked for a time on what he called a "ghost wave" theory of quantum mechanics, thought to be an elaboration of de Broglie's theory. In his 1976 Nobel Prize lecture, Murray Gell-Mann declared that Niels Bohr, the chief exponent of the Copenhagen interpretation, "brainwashed an entire generation of physicists into believing that the problem had been solved." John Bell, the Irish physicist whose famous theorem is often mistakenly taken to repudiate all "hidden-variable" accounts of quantum mechanics, was, in fact, himself a proponent of pilot-wave theory. "It is a great mystery to me that it was so soundly ignored," he said.
It got this comment:
The author of pilot-wave theory Louis deBroglie was an aetherist and he wrote, for example:
"any particle, even isolated, has to be imagined as in continuous "energetic contact" with a hidden medium ... It certainly is of quite complex character. It could not serve as a universal reference medium, as this would be contrary to relativity theory."
It's not surprising, the guy promoting the aether and even doubting of relativity theory gets ignored with no mercy for century with mainstream physics.
People assume that Bell was a champion of quantum mechanics, but actually he and his early followers were trying to prove quantum mechanics wrong. His theorem gave a way to distinguish quantum mechanics from hidden variable theories. Subsequent experiments verified quantum mechanics, and ruled out hidden variable theories.

The de Broglie Bohm pilot wave theory is considered an interpretation of quantum mechanics, but it is really only an interpretation of some non-relativistic special cases. It is supposedly more intuitive because it tells you where the electron really is, but it also requires you to believe in action-at-a-distance ghosts that accompany the electrons. So it is a lot more bizarre than quantum mechanics.
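For the curious, the "ghost" is the guidance equation. In the simplest spinless case it reads (my sketch of the standard textbook form):

$$\frac{d\mathbf{x}_k}{dt} \;=\; \frac{\hbar}{m_k}\,\operatorname{Im}\frac{\nabla_k \psi(\mathbf{x}_1,\dots,\mathbf{x}_N,t)}{\psi(\mathbf{x}_1,\dots,\mathbf{x}_N,t)},$$

so each particle's velocity depends instantaneously on where all the other particles are. That is the action-at-a-distance.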

Bell did not believe in relativity as a spacetime theory either, as he is quoted in this 2009 paper:
If it is just long enough to span the required distance initially, then as the rockets speed up, it will become too short, because of its need to Fitzgerald contract, and must finally break. It must break when, at a sufficiently high velocity, the artificial prevention of the natural contraction imposes intolerable stress.
In the Minkowski spacetime relativity, the FitzGerald contraction is a geometric illusion, and does not cause any stresses. Bell's explanation is wrong, as explained in that paper.
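For reference, the contraction factor at issue is the usual

$$L = L_0\sqrt{1 - v^2/c^2},$$

where L_0 is the rest length. In the spacetime view this relates measurements made in different frames, i.e. different cross-sections of the same world-tube, rather than describing a physical compression.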

Relativity and quantum mechanics are the pillars of XX-century physics. It is good to have skeptics who keep the mainstream physicists honest, I guess, but nobody gets a Nobel prize for that. If experiments had overthrown the conventional wisdom, then Bell would be hailed as a great genius, but that did not happen. In the end, all Bell had was an argument against hidden variables that was more persuasive than the existing arguments that had already convinced everyone.

I use The Son of Man painting because it is the symbol of the John Stewart Bell Prize. Something about reality being obscured, I guess.

Here is a recent paper for a book honoring Bell.

Monday, April 13, 2015

Concise argument against quantum computing

I summarize my arguments against quantum computers (QC). By QC, I mean super-Turing scalable quantum computing. I have posted similar arguments before in Sluggishly expanding wave function, The universe is not a quantum computer, and Ivan Deutsch says no true qubit yet.

People have claimed to have done quantum computations, such as factoring 15, but there is no true qubit, and no quantum speedup over a conventional computer.

Here are the three main reasons.

1. The same reason I am skeptical about supersymmetry (SUSY): the theory is interesting and the math is sound, but when our leading experts spend a billion dollars looking for ten years and see no sign of it, it is time to consider that it may not exist.

2. Disproving the Church-Turing Thesis as a characterization of computable functions would be a very surprising result. Quantum mechanics (QM) is also surprising, but it doesn't actually conflict with Church-Turing unless QM is strictly valid to a very high precision where it has never been tested.

3. QC is unexpected in positivist interpretations of QM. I see QM as a clever calculus for tracking electrons possibly being in 2 or more places at once. For example, the famous double-slit experiment can be interpreted as an electron going thru both slits at once, and the wave function giving probabilities for each slit.
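In the standard formalism, that just means the amplitudes from the two slits add, and the probabilities come from the squared magnitude. A textbook sketch:

$$\psi = \psi_1 + \psi_2, \qquad P = |\psi_1 + \psi_2|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\operatorname{Re}(\psi_1^*\psi_2),$$

where the cross term is what shows up as the interference pattern on the screen.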

To a positivist like myself, the electron is never really in two places at once, as that is never observed and the theory does not require it. That is just a convenient fiction for visualizing a quantum that has some particle-like and some wave-like properties, but no exact classical counterpart.

If I really believed that an electron could be in two places at once, then I might believe in QC. The whole idea behind QC is that the multi-positional property of electrons can somehow be used for computational advantage. Whereas regular computers are built out of on-off switches and gates, a QC would be built out of gates that act like little double-slits, dividing reality into pieces and simultaneously operating on the pieces.
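As a rough illustration (my own sketch in Python with numpy, not any vendor's API), a single such gate is just a unitary matrix that puts a two-level system into a superposition of both "positions" at once:

    # Minimal sketch: a Hadamard gate acting on a state vector, the "little
    # double-slit" picture of a quantum gate. Requires numpy.
    import numpy as np

    ket0 = np.array([1.0, 0.0])                # electron definitely "here"
    H = np.array([[1.0, 1.0],
                  [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate (unitary)

    state = H @ ket0                           # equal superposition of both positions
    probs = np.abs(state) ** 2                 # Born rule: outcome probabilities
    print(state)                               # [0.7071... 0.7071...]
    print(probs)                               # [0.5 0.5]

The QC program amounts to wiring thousands of these together and trusting that the superpositions survive long enough to do useful work.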

QM teaches complementarity, meaning that an electron can be considered a particle and a wave, but if you try to do both at the same time, you get into trouble. That is just what QC does -- try to make an electron a particle and a wave at the same time.

Scott Aaronson would say that I have fallen for the fallacy that a QC is a parallel computer, or that I have not fully accepted QM, or that I have failed to give a precise mechanism that is going to prevent a QC from achieving super-Turing results. He says:
With quantum computing the tables are turned, with the skeptics forced to handwave about present-day practicalities, while the proponents wield the sharp steel of accepted physical law.
Maybe so. When the experts figure out whether QC is possible, we can revisit this page and see who was more right.

Meanwhile, Aaronson explains some of those sharp edges of accepted physical law:
I would not recommend jumping into a black hole as a way to ensure your privacy. ...

But a third problem is that even inside a black hole, your secrets might not be safe forever! Since the 1970s, it’s been thought that all information dropped into a black hole eventually comes out, in extremely-scrambled form, in the Hawking radiation that black holes produce as they slowly shrink and evaporate. What do I mean by “slowly”? Well, the evaporation would take about 10^70 years for a black hole the mass of the sun, or about 10^100 years for the black holes at the centers of galaxies. Furthermore, even after the black hole had evaporated, piecing together the infalling secrets from the Hawking radiation would probably make reconstructing what was on the burned paper from the smoke and ash seem trivial by comparison! But just like in the case of the burned paper, the information is still formally present (if current ideas about quantum gravity are correct), so one can’t rule out that it could be reconstructed by some civilization of the extremely remote future.
Sorry, but I do not accept conservation of information as a physical law. I say that if you burn a book, you destroy the information in it. He says that the information is retained in the ashes somehow, but there is no practical or conceivable way of getting the info out of the ashes. Saying that the ashes retain the info is not a scientific statement.

The same kind of people who believe in the many-worlds interpretation of quantum mechanics also believe in time-reversibility and in information conservation. I guess you could believe in these things separately, as I don't think Aaronson believes in many-worlds. But they are all just abstract prejudices that have no real theoretical or experimental grounding.

Conservation of information is nothing like conservation of energy or momentum. Those types of conservation laws are theoretically grounded in symmetry principles, and experimentally tested to high accuracy. Conservation of information is just based on a belief about how people should think of probability.
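By symmetry principles I mean Noether's theorem: if the Lagrangian $L(q, \dot q)$ is unchanged by a continuous transformation $q_i \to q_i + \epsilon\,\delta q_i$, then the quantity

$$\sum_i \frac{\partial L}{\partial \dot q_i}\,\delta q_i$$

is conserved. Time-translation symmetry gives energy conservation, and spatial translation gives momentum conservation.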

When Aaronson talks about the "sharp steel of physical law", he means things like information conservation. And by that, he means that your privacy is not safe in a black hole because info might leak out over the next 10^70 years.

Making a quantum computer might be like extracting info from a black hole. Certain experts will always say that it is possible, but it is so far removed from anything observable that it is just idle speculation.

Friday, April 10, 2015

The problems with Everett's many worlds

Lubos Motl attacks Hugh Everett's thesis, and a new paper discusses The Problem of Confirmation in the Everett Interpretation:
I argue that the Oxford school Everett interpretation is internally incoherent, because we cannot claim that in an Everettian universe the kinds of reasoning we have used to arrive at our beliefs about quantum mechanics would lead us to form true beliefs. I show that in an Everettian context, the experimental evidence that we have available could not provide empirical confirmation for quantum mechanics, and moreover that we would not even be able to establish reference to the theoretical entities of quantum mechanics. I then consider a range of existing Everettian approaches to the probability problem and show that they do not succeed in overcoming this incoherence.
I criticized the author, Emily Adlam, for a paper on relativity history.

Everett is the father of the Many Worlds Interpretation of quantum mechanics. I agree that it is incoherent. Some people prefer to call it the Everett interpretation so that it sounds less wacky. It is amazing how many seemingly-educated people take it seriously. It is like a stupid science fiction plot, and there isn't much substance to the theory at all.

There are not really any theoretical or empirical reasons for preferring MWI. The arguments for it are more philosophical. Its adherents say that it is more objective, or more deterministic, or more localized, or something like that. I don't know how postulating the spontaneous creation of zillions of unobservable parallel universes can do any of those things, but that is what they say.

Wednesday, April 8, 2015

Galileo got ship relativity from Bruno

I have posted many times on the origin of special relativity, but this is a new one to me. A new paper credits Bruno in 1584:
The trial and condemnation of Giordano Bruno was mainly based on arguments of philosophical and theological nature, and therefore different from Galilei's. Such elements contribute to unfairly devalue the scientific contribution of Bruno and do not properly account in particular for his contribution to physics. This paper discusses the contribution of Bruno to the principle of relativity. According to common knowledge, the special principle of relativity was first enunciated in 1632 by Galileo Galilei in his Dialogo sopra i due massimi sistemi del mondo (Dialogue concerning the two chief world systems), using the metaphor today known as "Galileo's ship": in a boat moving at constant speed, the mechanical phenomena can be described by the same laws holding on Earth. We shall show that the same metaphor and some of the examples in Galilei's book were already contained in the dialogue La cena de le Ceneri (The Ash Wednesday Supper) published by Giordano Bruno in 1584. In fact, Giordano Bruno largely anticipated the arguments of Galilei on the relativity principle. It is likely that Galilei was aware of Bruno's work, and it is possible that the young Galilei discussed with Bruno, since they both stayed in Venezia for long periods in 1592.
I knew that Bruno was a Catholic monk who denied the divinity of Jesus and argued for many other heresies, and was executed. And I knew that he speculated about an infinity of worlds. But I did not know that he had any legitimate scientific contributions.

There were ancient Greeks who argued for the motion of the Earth, such as Aristarchus of Samos, but his works have been lost and we don't know his arguments. Since we do not feel the motion of the Earth, he surely must have argued for some sort of relativity principle. Aristotle argued that our failure to feel the motion suggests that the Earth is not moving. So I do not see how the principle can possibly be due to Bruno or anyone of that time period.

This paper does show that Galileo could have met Bruno and gotten important ideas from him, including the relativity of a moving ship.

Modern relativity got started when James Clerk Maxwell observed that his theory of electromagnetism appeared to be incompatible with the relativity principle. He coined the word "relativity", and suggested an experimental test.

Monday, April 6, 2015

Aaronson's first paragraph

Since I often mention MIT complexity theorist Scott Aaronson, I am giving a plug for his book.

You can tell a lot from the first paragraph of the first chapter of a book. That is where the author carefully attempts to grab your attention and give you a flavor of the book. Some authors will rewrite it 20 times until they get it right.

Here is how Quantum Computing since Democritus starts:
So why Democritus? First of all, who was Democritus? He was this Ancient Greek dude. He was born around 450 BC in this podunk Greek town called Abdera, where people from Athens said that even the air causes stupidity. He was a disciple of Leucippus, according to my source, which is Wikipedia. He's called a “pre-Socratic,” even though actually he was a contemporary of Socrates. That gives you a sense of how important he's considered: “Yeah, the pre-Socratics – maybe stick ’em in somewhere in the first week of class.” Incidentally, there's a story that Democritus journeyed to Athens to meet Socrates, but then was too shy to introduce himself.
The book is 370 pages of this.

He is writing a new book, from his blog posts on similar topics. He has acknowledged that his own book has led people to quantum computer over-hype, and proclaims: "Single most important application of QC (in my opinion): Disproving the people who said QC was impossible!" He laments that "we as a community have failed to make the honest case for quantum computing". Much of the content of his book is on his web site.

Wednesday, April 1, 2015

Lemaitre's Big Bang

There is a new paper on Lemaitre's Big Bang:
I give an epistemological analysis of the developments of relativistic cosmology from 1917 to 1966, based on the seminal articles by Einstein, de Sitter, Friedmann, Lemaitre, Hubble, Gamow and other historical figures of the field. It appears that most of the ingredients of the present-day standard cosmological model, including the acceleration of the expansion due to a repulsive dark energy, the interpretation of the cosmological constant as vacuum energy or the possible non-trivial topology of space, had been anticipated by Georges Lemaitre, although his articles remain mostly unquoted.
I have posted before that Lemaitre should be credited with the Big Bang more than any other single person. This paper confirms that, and describes later contributions of others as well.

Tuesday, March 31, 2015

Controlled by randomness cartoon


I am not sure what this New Yorker cartoonist is trying to say, or whether I agree with it. Yes, we live in a complex system, and unpredictable events affect us all the time. Is he paranoid for thinking that? Is that more or less disturbing than some evil conspiracy controlling us?

Friday, March 27, 2015

Free will observations

I am seeing intelligent scientists and philosophers saying really silly things about free will.

Existence of free will is not a scientific question. There is no way to directly test free will by experiments such as putting two people in the same state of mind and seeing whether they make the same decision.

There are experiments by Libet and others showing that the timing of a decision, as measured by brain scans, can be slightly different from conscious expectations. There are optical illusions that show that your brain perceives images in a way that is also slightly different from conscious expectations. But none of these experiments deny the apparent ability of your brain to make decisions.

Possibilities of solipsism are fruitless. We cannot rule out the possibility that some sort of super-determinism controls everything we see and do, or that we are all part of some vast computer simulation. But so what?

Here is a much less radical, but similarly worthless, statement: A hammer is mostly empty space. You can believe that if you want, but you can still hammer nails, and it still hurts if the hammer hits you.

Daily life is impossible without a belief in free will. Every day we make decisions, or at least we think we do, and we often put a lot of effort into those decisions. What would I do otherwise?

Suppose someone came to me and said: The forward march of time is just an illusion, and time is really going backwards. What would I do with that info? I still have to live my life as if time is marching forwards, as nobody knows how to do anything else.

Denying free will serves leftist political goals, and encourages irresponsible behavior.

Scientific reasoning does not require determinism. There is an argument that unless you believe in religion or dualism or the supernatural, then everything must be determined by initial conditions, except maybe for some quantum randomness. That is, everything is determined except for what is not determined. People give this argument as if determinism is some obvious consequence of scientific rationalist materialism.

It is not. Scientists do try to make predictions, based on whatever data they have, but there is never a claim that everything is predictable.

If we had free will, how would that show up in our physical theories? They would be mostly deterministic, except for some unpredictable aspects. In other words, just like the physical theories that we have.

Here are examples of the argument from two prominent Skeptics in published articles. Physicist Victor J. Stenger writes:
So where does this leave us on the question of free will? Libertarians are correct when they say that determinism does not exist, at least at the fundamental physics level. Nevertheless, it is hard to see how physical indeterminism at any level validates the libertarian view. As Harris points out, “How could the indeterminacy of the initiating event [of an action] count as the exercise of my free will?” For an action to be mine, originated by me, it can’t be the result of something random, which by definition would be independent of my character, desires and intentions. To originate and be responsible for an action, I have to cause it, not something indeterministic. So the libertarian quest for indeterminacy (randomness) as the basis for free will turns out to be a wild goose chase. Neither determinism nor indeterminism gets us free will.
Philosopher Massimo Pigliucci writes The incoherence of free will:
The next popular argument for a truly free will invokes quantum mechanics (the last refuge of those who prefer to keep things as mysterious as possible). Quantum events, it is argued, may have some effects that “bubble up” to the semi-macroscopic level of chemical interactions and electrical pulses in the brain. Since quantum mechanics is the only realm within which it does appear to make sense to talk about truly uncaused events, voilà!, we have (quantistic) free will. But even assuming that quantum events do “bubble up” in that way (it is far from a certain thing), what we gain under that scenario is random will, which seems to be an oxymoron (after all, “willing” something means to wish or direct events in a particular — most certainly not random — way). So that’s out as well.
Essentially the argument is: It does not matter if the laws of physics seem to allow for free will. Those laws must be deterministic or indeterministic. If deterministic, then everything is pre-determined, so we have no free will. If indeterministic, then there is some randomness we do not understand, so also we have no free will.

My FQXi essay also has a discussion of this aspect of randomness.

This is illogical. It is like arguing:
Studying cannot help you get good grades in college. Social science models show that college grades are 50% correlated with parental income, with the other 50% being random. Studying will not increase your parents' income. Randomness will not get you good grades. Therefore studying will not help.
The error here is that the models do not consider studying, so studying shows up as random. The random component is just the sum of unexplained factors. The argument excludes studying from the models, and then tries to draw a conclusion from studying being excluded.
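Here is a toy simulation of the point, in Python with numpy (the numbers are made up for illustration, not taken from any real study): a model that never includes studying reports its effect as unexplained randomness.

    # Toy model: grades depend on parental income AND studying, but the
    # "social science model" is only allowed to see income.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    income = rng.normal(size=n)               # standardized parental income
    studying = rng.normal(size=n)             # standardized hours of study
    grades = 0.5 * income + 0.5 * studying    # grades depend on both

    # Fit the model that only knows about income:
    slope = np.cov(grades, income)[0, 1] / np.var(income)
    residual = grades - slope * income
    print(round(slope, 2))                    # about 0.5: the income effect
    print(round(np.var(residual), 2))         # about 0.25: studying hides in the "random" part

Studying matters in this toy world, but the model can only report it as noise.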

Likewise, brain models do not consider free will. They cannot do that as no one even knows what consciousness is. Quantum randomness is unexplained. You cannot just say, "the brain is explained by factors that are currently unexplained, so therefore there is no free will."

You might say:
Of course the models do not factor in free will. The whole concept of free will is that of "mind over matter", and it is intrinsically unscientific and cannot be modeled. We can understand how studying for college exams might get you a better grade, but there is no way an immaterial dualistic mind can influence a material body.
We certainly have an appearance of having conscious minds that make freely-chosen decisions. No, I cannot explain how it works, but I cannot explain how it could all be an illusion either. Believe what you want, but it is not true to say that science has given us an answer.

Here is an argument from a prominent leftist-atheist:
The events that Sam Harris talks about in the you-tube clip “Sam Harris on Free Will” include descriptions of weak free will events. For example, he asks the audience to think of a city then points out that the audience did not call up an exhaustive list of cities from which a particular city is carefully selected. Instead, a city name (or two, or three) pops into your head. Even if only one pops into your head, you can make a weak free will decision to accept it or to go back to city name retrieval process.
Harris argues that because you cannot explain a fully causal mechanism for how you chose the city, then it does not feel like free will, and it feels more like some demon in your head is forcing the choice on you.

I think the opposite. If I followed a deterministic algorithm for the city, then that would not feel like free will. Spontaneously making some inexplicable choice feels like free will. His argument continues:
Now here is the part that gets a bit tricky. Harris suggests that you often aren’t even aware of why you picked Tokyo, even if you have a story to tell, such as you had Japanese food last night. Even if that story did somehow influence your decision (though he goes on to say how bad we are at assessing such), “you still can’t explain why you remembered having Japanese food last night or why the memory had the effect that it did. Why didn’t it have the opposite effect?”

This point is extremely important here. Even if you remembered the Japanese food, why didn’t you think “Oh, I had Japanese food last night so I’ll choose something different from Tokyo” instead of perhaps “Oh, I had Japanese food so I’ll choose Tokyo”? The fact of the matter is, one of these were forced to the forefront of your consciousness, resulting in your decision. But the chances are you really don’t know why one did and not the other.

Harris goes on to say “The thing to notice is that, you as the conscious witness of your inner life, are not making these decisions. You can only witness these decisions.”
I wonder what he thinks that free will would feel like. To me, it seems quite consistent with free will to assume that part of your brain stores memories of food, and another part makes decisions, and that I am often unable to give a causally-deterministic explanation for my decisions.

Wednesday, March 25, 2015

The quantum artificial intelligence

Here is some silly quantum hype:
Steve Wozniak maintained for a long time that true AI is relegated to the realm of science fiction. But recent advances in quantum computing have him reconsidering his stance.
Here is Here's How You Can Help Build a Quantum Computer:
Quantum computers—theoretical machines which can process certain large and difficult problems exponentially faster than classical computers—have been a mainstay of science fiction for decades. But actually building one has proven incredibly challenging.

A group of researchers at Aarhus University believes the secret to creating a quantum computer lies in understanding human cognition. So, they've built computer games to study us, first. ...

To build a quantum computer, researchers are first mapping human thoughts.
These would be some big advances in a field that has spent $100M just to discover that 15 = 3x5.

Peter Woit is back online, reporting more physics hype. He quotes Weinberg:
I am not a proponent of the idea that our Big Bang universe is just part of a larger multiverse.
Once other planets, stars, and galaxies were named, we had to have names for our planet, sun, and galaxy. I don't know the history, but I am guessing that it took a while for a term like "our Milky Way galaxy" to catch on.

So now we have the term "our Big Bang universe" to distinguish our universe from all the other universes. None of those other universes have names, as they cannot be observed. But we can name our universe, and cosmologists seem to be moving away from the idea that "universe" means everything.

The term "our Big Bang universe" suggests that it includes Earth and everything we see going back to the Big Bang, and everything emanating forward in time, but nothing before the Big Bang, and nothing that is so separated from us that relativity precludes any interaction with us.

Tuesday, March 24, 2015

Testing relativistic mass

Yesterday's Astronomy Cast Ep. 370: The Kaufmann–Bucherer–Neumann Experiments covered:
One of the most amazing implications of Einstein’s relativity is the fact that the inertial mass of an object depends on its velocity. That sounds like a difficult thing to test, but that’s exactly what happened through a series of experiments performed by Kaufmann, Bucherer, Neumann and others.
This was pretty good relativity history, except that if you listen, you might wonder about a couple of things.

Why were they testing relativistic mass in 1901 if Einstein did not invent it until 1905?

Where did they get those formulas involving velocity and the speed of light without relativity?

Relativistic mass for electrons was predicted by Lorentz in 1899 and confirmed by experiment in 1901-1902. Lorentz got the Nobel prize for his electron theory in 1902.
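The formulas being tested were, in modern notation (my summary of the standard Lorentz electron-theory expressions, not a quote from Lorentz):

$$m_T = \frac{m_0}{\sqrt{1 - v^2/c^2}}, \qquad m_L = \frac{m_0}{(1 - v^2/c^2)^{3/2}},$$

for the transverse and longitudinal masses, with m_0 the rest mass. The experiments measured the charge-to-mass ratio of fast beta-ray electrons to test this velocity dependence.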

Others found rival theories that were also consistent with experiment, and it took another 5 or 10 years to distinguish Lorentz's relativity from the rival theories. That research eventually concluded that the "Lorentz-Einstein theory" matches the data. It was the first real test of special relativity.

Monday, March 23, 2015

Number line invented in twentieth century

A Russian mathematician has a new paper On the History of Number Line:
The notion of number line was formed in XX c. We consider the generation of this conception in works by M. Stiefel (1544), Galilei (1633), Euler (1748), Lambert (1766), Bolzano (1830-1834), Meray (1869-1872), Cantor (1872), Dedekind (1872), Heine (1872) and Weierstrass (1861-1885).
My first thought -- is "XX c" some Russian name for an ancient Greek or Egyptian city? Didn't Euclid have the number line?

No, "XX c" means the twentieth century, from 1900 to 2000. (Or 1901 to 2001, maybe if you quibble about the invention of the zero.) It will always be the greatest century in the history of intellectual thought, and a term like "XX c" gives it a dignified respect. Just like WWII was the greatest war.

I have been using "XX century" to denote that century for a while. I will consider switching to "XX c."

The paper is quite serious about arguing that pre-XX c concepts of the number line are all defective. It does not explain who finally got it right in the XX c. I would have said that Cauchy, Weierstrass, and Cantor all had the concept in the 19th century. Surely Bourbaki had the modern concept in the XX c.

I would have said that early XX c great mathematical concepts (related to the number line) were the manifold and axiomatic set theory. But maybe the number line itself is from the XX c.

The paper argues that Cantor's formulation of the reals was deficient, but the references are in Russian, and I do not know whether it is correct.

I posted below about a famous modern mathematician who did not understand axiomatic set theory. The concept of Lorentz covariance was crucial to the development of special relativity by Poincare and Minkowski, but Einstein did not understand it until many years later.

Physicists will be especially perplexed by this. My FQXi essay discusses how mathematicians and physicists view random and infinite numbers differently. Non-mathematicians are endlessly confused about properties of the real numbers. See for example the Wikipedia article on 0.999... or Zeno's paradoxes.

If it is really true that 19th century mathematicians did not have the modern concept of the number line, then I should assume that nearly all physicists today do not either. I just listened to physicist Sean M. Carroll's dopey comments on Science Friday. He said that we should retire the concept of falsifiability, because it gets used against untestable theories like string theory. He also argued that space may not be fundamental. I wonder if he even accepts the number line the way mathematicians do.

Another new paper says:
The novice, through the standard elementary mathematics indoctrination, may fail to appreciate that, compared to the natural, integer, and rational numbers, there is nothing simple about defining the real numbers. The gap, both conceptual and technical, that one must cross when passing from the former to the latter is substantial and perhaps best witnessed by history. The existence of line segments whose length can not be measured by any rational number is well-known to have been discovered many centuries ago (though the precise details are unknown). The simple problem of rigorously introducing mathematical entities that do suffice to measure the length of any line segment proved very challenging. Even relatively modern attempts due to such prominent figures as Bolzano, Hamilton, and Weierstrass were only partially rigorous and it was only with the work of Cantor and Dedekind in the early part of the 1870’s that the reals finally came into existence.
The paper goes on to give a construction of the reals, based on a more elementary version of Bourbaki's. It also outlines other constructions of historical significance.

As you can see, the construction is probably more complicated than you expect. And it skips construction of the natural numbers (usually done with Peano axioms) and the rational numbers (usually done as equivalence classes of ordered pairs of integers).
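For example, the rationals step mentioned above is usually written (my sketch of the standard definitions, not a quotation from the paper):

$$\mathbb{Q} \;=\; \bigl(\mathbb{Z} \times (\mathbb{Z}\setminus\{0\})\bigr)/\sim, \qquad (a,b) \sim (c,d) \iff ad = bc,$$

and a real number in the Dedekind construction is then a cut: a nonempty, proper, downward-closed set of rationals with no greatest element. Even these bare definitions presuppose sets and equivalence classes, which is where the axiomatization issue below comes in.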

This puts the number line before the XX c, but there is still the problem that set theory was not axiomatized until the early XX c.

Wednesday, March 18, 2015

Trying to kill the mathematical proof

SciAm writer John Horgan still defends his 1993 article on the Death of Proof:
“For millennia, mathematicians have measured progress in terms of what they could demonstrate through proofs — that is, a series of logical steps leading from a set of axioms to an irrefutable conclusion. Now the doubts riddling modern human thought have finally infected mathematics. Mathematicians may at last be forced to accept what many scientists and philosophers already have admitted: their assertions are, at best, only provisionally true, true until proved false.”

I cited Thurston as a major force driving this trend, noting that when talking about proofs Thurston “sounds less like a disciple of Plato than of Thomas S. Kuhn, the philosopher who argued in his 1962 book, The Structure of Scientific Revolutions, that scientific theories are accepted for social reasons rather than because they are in any objective sense ‘true.’” I continued:

“‘That mathematics reduces in principle to formal proofs is a shaky idea’ peculiar to this century, Thurston asserts. ‘In practice, mathematicians prove theorems in a social context,’ he says. ‘It is a socially conditioned body of knowledge and techniques.’ The logician Kurt Godel demonstrated more than 60 years ago through his incompleteness theorem that ‘it is impossible to codify mathematics,’ Thurston notes. Any set of axioms yields statements that are self-evidently true but cannot be demonstrated with those axioms. Bertrand Russell pointed out even earlier that set theory, which is the basis of much of mathematics, is rife with logical contradictions related to the problem of self-reference… ‘Set theory is based on polite lies, things we agree on even though we know they’re not true,’ Thurston says. ‘In some ways, the foundation of mathematics has an air of unreality.’”

After the article came out, the backlash—in the form of letters charging me with sensationalism – was as intense as anything I’ve encountered in my career.
For your typical naive reader, this just confirms a modern nihilism, as expressed by this comment:
If it is impossible to codify mathematics, and it is possible to state mathematical ideas that are true, but cannot be proven true, then the same applies to every field. Thus, the mere fact that you cannot prove some idea axiomatically proves nothing. People often demand proofs of that sort for moral or philosophical notions, like, say, the existence of God, but the whole exercise is empty. Thanks, Kurt.
Horgan deserved the criticism, and so did Thurston. It is not true that Godel proved that it is impossible to codify mathematics. It would be more accurate to say the opposite.

Russell did find an amusing set theory paradox in 1901, but it was resolved over a century ago. Set theory is not based on anything false.

Thurston was a great genius, and was famous for explaining his ideas informally without necessarily writing rigorous proofs. He was not an expert in the foundations of mathematics.

The quotes are apparently accurate, as he published an essay defending his views, and defending his record of claiming theorems while failing to publish the details of the proofs.

His essay complains about the theorem that the real numbers can be well-ordered, even though there is no constructive definition of such an ordering.

From this and Godel's incompleteness theorem, he concludes that the foundations of mathematics are "shakier" than higher level math. This is ridiculous. I conclude that his understanding of the real number line is deficient. There is nothing shaky about math foundations.

It sounds crazy to question Thurston's understanding of real numbers, because he was a brilliant mathematician who probably understood 3-dimensional manifolds better than anything.

Doing mathematics is like building a skyscraper. Everything must be engineered properly. But the guy welding rivets on the 22nd floor may not understand how the foundations are built. That is someone else's department. So yes, someone can prove theorems about manifolds without understanding how the real numbers are constructed.

It is still the case that mathematicians measure progress in terms of what they can demonstrate through a series of logical steps leading from a set of axioms to an irrefutable conclusion. The most famous results of the past 25 years were the proofs of Fermat's Last Theorem and the Poincare Conjecture. Both took years to be accepted, because of the work needed to follow all those steps.

The idea that "theories are accepted for social reasons" is the modernist disease of paradigm shift theory.

A NY Times Pi Day article says:
Early mathematicians realized pi’s usefulness in calculating areas, which is why they spent so much effort trying to dig its digits out. ...

So what use have all those digits been put to? Statistical tests have suggested that not only are they random, but that any string of them occurs just as often as any other of the same length. This implies that, if you coded this article, or any other, as a numerical string, you could find it somewhere in the decimal expansion of pi.
That is a plausible hypothesis, and you might even find people willing to bet their lives on it. But you will not find it asserted as true in any mainstream math publication, because it has not been proved. Yes, math relies on proof, just as it has for millennia.
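If you want to play with the (unproved) hypothesis yourself, here is a small Python sketch, assuming the mpmath package is installed: compute a stretch of digits and search it for a digit string.

    # Search the first N decimal digits of pi for a given digit string.
    # Nothing guarantees a hit; that is exactly the unproved normality claim.
    from mpmath import mp

    N = 100_000
    mp.dps = N + 2               # decimal digits of working precision
    digits = str(mp.pi)[2:]      # drop the leading "3."

    target = "1234567"           # any digit string you like
    pos = digits.find(target)
    print(pos if pos >= 0 else "not found in the first %d digits" % len(digits))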

Monday, March 16, 2015

Bouncing oil droplets reveal slippery truth

Ross Anderson writes:
I am a heretic. There, I've said it. My heresy? I don't believe that quantum computers can ever work.

I've been a cryptographer for over 20 years and for all that time we've been told that sooner or later someone would build a quantum computer that would factor large numbers easily, making our current systems useless.

However, despite enormous amounts of money spent by research councils and government agencies, the things are stuck at three qubits. Factoring 15 is easy; 35 seems too hard. A Canadian company has started selling computers they claim are quantum; scientists from Google and NASA said they couldn't observe any quantum speed-up.

Recently, the UK government decided to take £200m from the science budget and devote it to found a string of new "quantum hubs". That might be a bad sign; ministerial blessing is often the last rites for a failing idea.

So will one more heave get us there, or is it all a waste of time?
Not only that, he has co-authored Maxwell's fluid model of magnetism. He claims that physics went bad about 150 years ago, and that some ideas that were abandoned then are really right.

Scott Aaronson says that it is extremely safe to say that he is wrong, but is not able to pinpoint the error. Aaronson trashed some related work in 2013.

I sometimes get comments saying that mainstream physics has been wrong for a century or more. I don't know how to evaluate such claims. Science is never that completely wrong.

Anderson's theory seems to be some sort of hidden variable theory. I am persuaded that XX century quantum theory and experiments have ruled these out. So I do not see how he can be right.

Anderson is one of the world's experts on cryptographic security for banking and related industries. My guess is that he is frequently asked whether banks should use quantum cryptography, or worry about attacks from quantum computing. He surely comes to the conclusion, as do I, that both subjects are almost completely irrelevant to banking. Then he must be frustrated by bankers who doubt him because so many big-shots are over-hyping the quantum stuff.

I think that he is right that quantum computers will never work, and that failures so far give good reason for skepticism. I differ from him in that I doubt that there is anything fundamentally wrong with quantum mechanics.