Monday, July 27, 2015

Wilczek's new book on beauty in nature

Here is an endorsement for a new book, A Beautiful Question: Finding Nature's Deep Design:
Deepak Chopra, M.D.: “For a century, science has invalidated ‘soft’ questions about truth, beauty, and transcendence. It took considerable courage therefore for Frank Wilczek to declare that such questions are within the framework of ‘hard’ science. Anyone who wants to see how science and transcendence can be compatible must read this book. Wilczek has caught the winds of change, and his thinking breaks through some sacred boundaries with curiosity, insight, and intellectual power.”
There is a fine line between the frontiers of hard physics and crackpot babble, I guess.

Separately Wilczek claims in Nature magazine:
Particle physics: A weighty mass difference

The neutron–proton mass difference, one of the most consequential parameters of physics, has now been calculated from fundamental theories. This landmark calculation portends revolutionary progress in nuclear physics.
The article is behind a paywall, so I cannot assess how revolutionary it is. My guess is that it uses the masses of the protons and neutrons to estimate the masses of the up and down quarks, and then uses the quark masses to calculate the proton and neutron masses. It does not sound revolutionary to me.

Peter Woit also endorses the book, and a comment says:
If Ptolemy’s epicycles worked, would we consider them to be beautiful?
Ptolemy’s epicycles did work. I scratch my head at how scientists can get this so wrong.

In Ptolemy's Almagest, the principal epicycles were just his way of representing the orbit of the Earth. The orbits of Earth and Mars could be approximated by circles, and the view of Mars from Earth can be represented by the vector difference of those two circles. The Earth circle was called an epicycle. There were also minor epicycles to correct for the orbits not being exactly circular.

So yes, epicycles did work to approximate the orbits, and the same main idea is used today whenever anyone describes a planetary orbit, as viewed from the Earth.
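To make the vector-difference point concrete, here is a minimal sketch. The radii and periods are rough modern round numbers, not Ptolemy's parameters; the point is just that the Mars circle minus the Earth circle reproduces the deferent-plus-epicycle picture, retrograde loops and all.

```python
import math

# Geocentric position of Mars as the vector difference of two circles.
# Radii in AU, periods in years; rough round numbers, not Ptolemy's values.
R_EARTH, T_EARTH = 1.0, 1.0    # Earth's circle plays the role of the epicycle
R_MARS, T_MARS = 1.52, 1.88    # Mars's circle plays the role of the deferent

def geocentric_mars(t):
    """Apparent position of Mars, as seen from Earth, at time t in years."""
    xe = R_EARTH * math.cos(2 * math.pi * t / T_EARTH)
    ye = R_EARTH * math.sin(2 * math.pi * t / T_EARTH)
    xm = R_MARS * math.cos(2 * math.pi * t / T_MARS)
    ym = R_MARS * math.sin(2 * math.pi * t / T_MARS)
    return xm - xe, ym - ye    # Mars circle minus Earth circle

# Sampling this over a couple of years traces Mars's retrograde loops.
for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(round(t, 1), geocentric_mars(t))
```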

Saturday, July 25, 2015

No Nobel Prize for mathematicians

The NY Times has a profile of a mathematician, so of course it has to explain whether he has won a Nobel Prize and whether he is crazy like all the other mathematicians:
He has since won many other prizes, including a MacArthur “genius” grant and the Fields Medal, considered the Nobel Prize for mathematicians. Today, many regard Tao as the finest mathematician of his generation. ...

Possibly the greatest mathematician since antiquity was Carl Friedrich Gauss, a dour German born in the late 18th century. He did not get along with his own children and kept important results to himself, seeing them as unsuitable for public view. They were discovered among his papers after his death. Before and since, the annals of the field have teemed with variations on this misfit theme, from Isaac Newton, the loner with a savage temper; to John Nash, the “beautiful mind” whose work shaped economics and even political science, but who was racked by paranoid delusions; to, more recently, Grigory Perelman, the Russian who conquered the Poincaré conjecture alone, then refused the Fields Medal, and who also allowed his fingernails to grow until they curled.
Sergiu Klainerman explains that the Fields is nothing like the Nobel at all:
Concerning the first issue, the differences between the Fields Medal and the Nobel Prize can hardly be exaggerated. Whatever the original intentions, the Fields Medal is given only to young mathematicians below the age of forty. To have a chance at the medal a mathematician must not only make a major contribution early on, he/she must also be lucky enough to have its importance broadly recognized before the arbitrary fortieth mark. This means that, if an area of mathematics is not represented in the composition of the Fields committee at a given International Congress, truly original and important contributions in that area have very little chance.

In contrast, the Nobel Prize has no age limits. The role of a Nobel committee (in natural sciences) is, at least in principle, to identify those breakthroughs deemed most important by a broad segment of the scientific community and then decide who are the most deserving contributors to it. In contrast with the Fields Medal, which is given strictly to an individual, independent of whether other people might have contributed important ideas to the cited works, the Nobel Prize can be shared by up to three individuals. Thus, in theory, a Nobel Prize is awarded primarily for supreme achievements, and only secondarily to specific individuals. ...

In fact mathematics does not have any prize comparable with the Nobel Prize. The other major prizes — Abel, Shaw, and Wolf — don’t have any age limitation but are almost always given to individuals, based on works done throughout their careers, rather than for specific achievements. Even when the prize is shared there is, in most cases, no identifiable connection between the recipients.
The Abel Prize is maybe the closest to being a Nobel Prize for Math.

There is a wide perception that all the good math is done by young math prodigies. The most famous big math problems of the last 25 years were Fermat's Last Theorem and the Poincaré-Thurston conjecture. Both were solved by mathematicians around age 40, and that is probably the age of highest productivity.

Thursday, July 23, 2015

Comparing special and general relativity

This year is celebrated as the centenary of general relativity, as ten years ago was the centenary of special relativity. What is the difference? Special relativity is the theory of flat spacetime, including Lorentz transformations and electromagnetism. General relativity is the theory of curved spacetime, and gravity.
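In textbook notation, the difference shows up in the metric. Special relativity fixes the flat Minkowski metric once and for all, while general relativity lets the metric curve and ties it to matter through the Einstein field equations:

```latex
\text{Special relativity:}\quad ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
\qquad \text{(flat, fixed)}

\text{General relativity:}\quad ds^2 = g_{\mu\nu}(x)\,dx^\mu dx^\nu ,
\qquad G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
\qquad \text{(curved, dynamical)}
```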

Einstein fans disagree over which is the greater accomplishment. Special relativity changed thinking about space and time in a way that permeates XX century physics. General relativity is always lauded as a great theory, but its effects are barely measurable and it has had almost no influence on other branches of physics. It has some influence on cosmology, but not much.

So special relativity is the more influential theory, by far. But some Einstein fans prefer to praise general relativity, because that was a conceptually much more difficult accomplishment. Special relativity can be easily explained with some undergraduate linear algebra, but general relativity requires tensors and differential geometry.

Einstein's role was also different. His 1905 special relativity paper was written on his own, building on published papers. His 1915 general relativity was a collaboration with mathematicians. Some people see one as more credit-worthy than the other.

People often say that GPS requires special and general relativity clock corrections, but it is really just special relativity corrections. There is an effect due to satellite speed and special relativity, and an effect due to gravity that is often called general relativity. But the necessary gravity formula was actually derived by Einstein in 1907 from special relativity, using what he called "the happiest thought of my life". This was before he understood relativity as a spacetime theory, and many years before he knew anything about tensors or curvature.
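Here is a back-of-the-envelope check of the two effects, using standard textbook constants. The first-order formulas are the velocity time dilation -v^2/(2c^2) and the 1907 gravitational shift (potential difference)/c^2; the numbers come out to roughly -7 and +46 microseconds per day.

```python
# A back-of-the-envelope sketch of the two GPS clock effects, with
# standard textbook constants; the numbers are approximate.

GM = 3.986004e14      # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
R_EARTH = 6.371e6     # mean Earth radius, m
r_sat = 2.6571e7      # GPS orbital radius, m
day = 86400.0         # seconds per day

# Orbital speed for a circular orbit: v = sqrt(GM/r)
v = (GM / r_sat) ** 0.5

# Velocity time dilation (special relativity): satellite clock runs slow.
sr_shift = -(v**2 / (2 * c**2)) * day

# Gravitational shift (Einstein's 1907 result): satellite clock runs fast.
gr_shift = GM * (1 / R_EARTH - 1 / r_sat) / c**2 * day

print(f"velocity effect:      {sr_shift * 1e6:+.1f} microseconds/day")
print(f"gravitational effect: {gr_shift * 1e6:+.1f} microseconds/day")
print(f"net:                  {(sr_shift + gr_shift) * 1e6:+.1f} microseconds/day")
# Roughly -7.2 and +45.7, for a net of about +38 microseconds per day.
```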

Sometimes people say that special relativity is just about constant velocity inertial motion, but that is not how it was viewed in the early days, say 1895-1910. It was often applied to accelerating electrons and other particles. Gravitational time dilation can be calculated by comparing to acceleration in a flat spacetime. Viewed this way, the only truly measurable general relativity effects are things like precession of Mercury's orbit, and that is a very tiny effect that took centuries to notice.

Even if relativity had never been discovered, we would probably still have GPS. Nobody would understand why the satellite clocks had to be re-synchronized so often, but they could have figured out some heuristics for resetting the clocks.

Monday, July 20, 2015

Carroll explains many-worlds to philosopher

I sometimes wonder how philosophers get such wrong ideas about physics and other sciences. They say that they talk to real scientists, but maybe they are talking to the wrong ones.

As a case in point, philosophers nearly all have wrong ideas about quantum mechanics, and here is one learning crackpot ideas from a fringe publicity-seeking physicist.

Physicist Sean M. Carroll was on a philosophy show, the Rationally Speaking podcast, defending parallel universes:
Julia Galef: You mentioned the concept of simplicity. I've encountered a lot of
confusion -- also experienced a lot of confusion – over, how do you decide which theory is simpler than another theory?

For example, I've heard critiques of the Many Worlds, or Everett, interpretation of quantum mechanics, to the effect that, "Look, if you're going to posit this infinite or uncountably large number of worlds in order to explain this data we're getting, then that's an incredibly extravagant, or incredibly complex, theory. And we should really go for a simpler one, in which there is only one world -- the world that we can see, basically."

Sean Carroll: I would say there are various reasonable critiques of the Many Worlds program; that is not one of them.

Julia Galef: Right. That's what I thought you'd say. [00:14:00]

Sean Carroll: Yeah. To put it as bluntly as possible, that's just wrong. That's just a mistake. It's just a misunderstanding.

Because, again, we're not positing many, many worlds. We are taking the formalism of quantum mechanics that is always there. The Hilbert space, that we call it, which is where the wave function lives, it’s the mathematical structure that a particular quantum state is an element of. The Hilbert space is just as big for someone doing a different interpretation as for someone doing Many Worlds. It doesn't get any bigger. Hilbert space is big. It includes a lot of possibilities. All we're saying is, Hilbert space is all there is, then you stop after you have that. There’s not other structures or other rules or other interpretative dances that you're allowed to do.

To say that positing a lot of worlds is extravagant is to get it exactly backwards. We're positing the minimal mathematical structure needed to make sense of quantum mechanics. Everyone posits Hilbert space. We're just admitting that it's real rather than denying that.
No, Carroll misunderstands math and physics.

Probability is a mathematical device for estimating the likelihood of an event occurring. Carroll says that quantum mechanics is all about probability, but he rejects the way everyone else understands probability. To him, all events occur with certainty, but maybe in other universes.
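To make the disagreement concrete, here is the conventional (Born rule) use of probability, in a minimal sketch; the amplitudes are made up for illustration:

```python
import numpy as np

# Born rule: squared amplitudes give the probabilities of measurement
# outcomes. A one-qubit state, with made-up amplitudes.
state = np.array([3/5, 4j/5])        # normalized: (3/5)^2 + (4/5)^2 = 1
probs = np.abs(state) ** 2
print(probs)                         # [0.36 0.64]

# Copenhagen reading: one outcome happens, with probabilities 0.36 and 0.64.
# Many-worlds reading: both outcomes happen, and these numbers have to be
# reinterpreted as something other than ordinary probabilities.
```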

Physics is about observables. But Carroll insists on attributing reality to all these extra universes that no one can ever observe.

I have posted more detailed arguments on what is wrong with many-worlds. I just want to note here that the blind are leading the blind.

The conversation gets goofier when they discuss morality.
Julia Galef: Right. There was this case -- I don't remember who this was -- one Everettian who I know, she was crossing the street and I guess she wasn't looking where she was going, and the car slammed on its brakes to avoid hitting her.

She was really shaken by this -- which that made sense to me, It makes sense to be shaken if you're almost hit by a car -- but she then explains that the reason she felt so shaken was that she had "lost a lot of measure." In other words, there were a lot of almost identical copies of her who had gotten hit by the car, since it was kind of a toss up whether the car would have hit her or not. Over the set of all copies of her --

Sean Carroll: I'm not sure that attitude can be consistently maintained if you also believe in a block universe version of time, where eventually we're all dead.

Julia Galef: I'll pass that on! I don't know if that will cheer her up or not.
This is crazy talk. No, that Everettian did not lose copies of herself in alternate universes. The block universe version of time is another stupid idea, and has little to do with the situation.

Thursday, July 16, 2015

Lorentz and Einstein had similar realism ideas

FitzGerald and Lorentz first derived, independently, the length contraction as a logical consequence of the Michelson-Morley experiment showing a constant speed of light. Then Lorentz developed a more sophisticated theory where the contraction is also explained by electromagnetism pulling atoms closer together. Einstein later called the former a principle theory, and the latter a constructive theory.

Mathias Frisch wrote Mechanisms, principles, and Lorentz's cautious realism in 2005:
I show that Albert Einstein’s distinction between principle and constructive theories was predated by Hendrik A. Lorentz’s equivalent distinction between mechanism- and principle-theories. I further argue that Lorentz’s views toward realism similarly prefigure what Arthur Fine identified as Einstein’s ‘‘motivational realism.’’
Many discussions of crediting Einstein for relativity are based on Einstein avoiding the constructive theory in his 1905 paper, and sticking to the principle-theory approach that Lorentz had used earlier.

Frisch's paper further shows that Einstein's views on relativity were pretty much the same as Lorentz's.

Monday, July 13, 2015

The 7 competing Solar System models

I mentioned the Myth of the Dark Ages, but I want to emphasize this list:
While we know how the science turned out, people like Bellarmine and the majority of astronomers did not have the benefit of our hindsight.  At the time the question was far from settled and there were actually no less than seven competing models under debate, of which the Copernican model was very much the unfavoured outsider.  They consisted of:
  1. Heraclidean.  Geo-heliocentric.  Mercury and Venus circle the Sun; everything else circles the Earth. 
  2. Ptolemaic.  Geocentric, stationary Earth. 
  3. Copernican. Heliocentric, pure circles with lots of epicycles. 
  4. Gilbertian. Geocentric, rotating Earth.
  5. Tychonic.  Geo-heliocentric.  Sun and Moon circle the Earth; everything else circles the Sun.
  6. Ursine.  Tychonic, with rotating Earth.  
  7. Keplerian.  Heliocentric, with elliptical orbits. 
(Thanks to Michael Flynn for this neat summary)
So the issue was not just heliocentrism v geocentrism, or whether the Earth moves. There were a bunch of possibilities, and it was not clear how to physically distinguish them.

He says "we know how the science turned out", as if everyone knows that Kepler turned out to be right. Kepler's model did give the best results for a century or so. Then can Newtonian models with planets like Jupiter pulling other planets out of their elliptical orbits. For the last century, the consensus has been general relativity, where motion is relative and you can think of the Earth as stationary or as moving however you please, as long as the coordinate transformations are done properly in the covariant equations.

Saturday, July 11, 2015

We cannot really smell a trillion odors

Slashdot reports:
Last year a paper in Science magazine reported that humans can distinguish a trillion different odors, a result that had already made its way into neuroscience and psychology textbooks. Two new papers just published in eLife overturn that result, pointing to fatal flaws in experimental design and data analysis.
I suspected that the trillion odors were bogus, because it seems unlikely and because it is hard to see how a test could confirm it.

Comparing to colors, I can easily imagine showing someone a bunch of random colors, confirming that he can distinguish them, and deducing that trillions of colors are distinguishable. Likewise with musical sounds. But if the data passes thru some low dimensional filter, then there will be trillions of different inputs that are perceived as identical. Unless the experiment is set up to produce these examples, they will be missed.
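Here is a minimal sketch of the low-dimensional filter point. Color vision projects a high-dimensional light spectrum onto just three cone responses, so very different spectra can look identical (metamers). The filter matrix below is random, purely for illustration:

```python
import numpy as np

# A high-dimensional stimulus passed thru a 3-dimensional filter:
# inputs differing by a null-space vector are perceived as identical.
rng = np.random.default_rng(0)
n_wavelengths, n_receptors = 100, 3
filter_matrix = rng.random((n_receptors, n_wavelengths))

spectrum1 = rng.random(n_wavelengths)
# Rows of Vh beyond the rank span the null space of the 3x100 map.
null_basis = np.linalg.svd(filter_matrix)[2][n_receptors:]
spectrum2 = spectrum1 + null_basis[0]   # a genuinely different input

print(np.allclose(filter_matrix @ spectrum1,
                  filter_matrix @ spectrum2))   # True: indistinguishable

# A test that only samples random inputs will essentially never stumble
# on such pairs, so it cannot show that trillions are distinguishable.
```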

I did not read the papers, but it is amusing how such a completely bogus result can be reported in the most prestigious science journals, and widely reported as fact in the popular press.

Monday, June 29, 2015

Definition of a function

Many mathematical concepts have long histories, but did not actually get a modern definition until the XX century. I have argued that even the number line was invented in the twentieth century or the late 19th century. See also The real numbers - a survey of constructions:
The novice, through the standard elementary mathematics indoctrination, may fail to appreciate that, compared to the natural, integer, and rational numbers, there is nothing simple about defining the real numbers. The gap, both conceptual and technical, that one must cross when passing from the former to the latter is substantial and perhaps best witnessed by history. The existence of line segments whose length can not be measured by any rational number is well-known to have been discovered many centuries ago (though the precise details are unknown). The simple problem of rigorously introducing mathematical entities that do suffice to measure the length of any line segment proved very challenging. Even relatively modern attempts due to such prominent figures as Bolzano, Hamilton, and Weierstrass were only partially rigorous and it was only with the work of Cantor and Dedekind in the early part of the 1870’s that the reals finally came into existence.
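To give a taste of what a rigorous construction involves, here is a minimal sketch: the square root of 2 as a Cauchy sequence of rationals, generated by Newton's iteration in exact rational arithmetic. In the Cauchy-sequence construction, the real number sqrt(2) is, by definition, the equivalence class of all such sequences.

```python
from fractions import Fraction

# sqrt(2) as a Cauchy sequence of rationals. No rational equals sqrt(2),
# but the sequence pins it down to any desired accuracy.
x = Fraction(3, 2)
for _ in range(5):
    x = (x + 2 / x) / 2          # Newton's method for x^2 = 2, exact arithmetic
    print(x, float(x * x - 2))   # the error x^2 - 2 shrinks rapidly
```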
Jeremy Avigad writes:
Today, we think of a “function” as a correspondence between any two mathematical domains: we can talk about functions from the real numbers to the real numbers, functions that take integers to integers, functions that map each point of the Euclidean plane to an element of a certain group, and, indeed, functions between any two sets. As we began to explore the topic, Morris and I learned that most of the historical literature on the function concept focuses on functions from the real or complex numbers to the real or complex numbers. ...

Even the notion of a “number theoretic function,” like the factorial function or the Euler function, is nowhere to be found in the literature; authors from Euler to Gauss referred to such entities as “symbols,” “characters,” or “notations.” Morris and I tracked down what may well be the first use of the term “number theoretic function” in a paper by Eisenstein from 1850, which begins with a lengthy explanation as to why it is appropriate to call the Euler phi function a “function.” We struggled to parse the old-fashioned German, which translates roughly as follows:
Once, with the concept of a function, one moved away from the necessity of having an analytic construction and began to take its essence to be a tabular collection of values associated to the values of one or several variables, it became possible to take the concept to include functions which, due to conditions of an arithmetic nature, have a determinate sense only when the variables occurring in them have integral values, or only for certain value-combinations arising from the natural number series. For intermediate values, such functions remain indeterminate and arbitrary, or without any meaning.
When the gist of the passage sank in, we laughed out loud.
It is funny because it is so clumsy. It should be obvious that a function can have any domain, and have any definition on that domain.

It is easy to forget how subtle these concepts are, as their meanings have been settled for a century and explained in elementary textbooks. But they were not obvious to some pretty smart 19th century mathematicians. Even today, most physicists and philosophers have never seen rigorous definitions of these concepts.

Now a function can be defined as a suitable set of ordered pairs, once set theory machinery is defined. The domain of the function can be any set, and so can the range. These things seem obvious to mathematicians today, but it took a long time to get these concepts right. And concepts like infinitesimals are still widely misunderstood.
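In code, the set-of-ordered-pairs definition is almost trivial to state. A minimal sketch in Python, with the Euler phi function as a "number theoretic function" and a deliberately mixed domain; the names are mine, for illustration only:

```python
# The modern set-theoretic definition in miniature: a function is a set of
# ordered pairs in which no input appears twice, and the domain and range
# can be any sets at all.

euler_phi = {1: 1, 2: 1, 3: 2, 4: 2, 5: 4, 6: 2}   # a "number theoretic function"
mixed = {(0, 0): "origin", "e": 2.71828}            # the domain need not be numbers

def is_function(pairs):
    """A set of ordered pairs is a function iff no input repeats."""
    inputs = [a for a, _ in pairs]
    return len(inputs) == len(set(inputs))

print(is_function({(1, 2), (2, 4), (3, 6)}))   # True
print(is_function({(1, 2), (1, 3)}))           # False: input 1 has two outputs
print(euler_phi[6], mixed["e"])                # 2 2.71828
```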

Saturday, June 27, 2015

Quantum computers will not help interpretations

MIT quantum complexity theorist Scott Aaronson is plugging a PBS essay on his favorite quantum speculations:
a principle that we might call “Occam’s Razor with Computational Aftershave.” Namely: In choosing a picture of physical reality, we should be loath to posit computational effort on Nature’s part that vastly exceeds what could ever in principle be observed. ...

Could future discoveries in quantum computing theory settle once and for all, to every competent physicist’s satisfaction, “which interpretation is the true one”? To me, it seems much more likely that future insights will continue to do what the previous ones did: broaden our language, strip away irrelevancies, clarify the central issues, while still leaving plenty to argue about for people who like arguing.
Lubos Motl defends Copenhagenism:
Quantum mechanics may be understood as a kind of a "black box", like a computer that spits the right result (probability of one observation or another). And we may learn how to perform the calculations that exactly reproduce how the black box works. This is a description that Feynman used to say, too. Some people aren't satisfied with that – they want to see something "inside" the black box. But there is nothing inside. The black box – a set of rules that produce probabilistic predictions for measurements out of past measurements – is the most fundamental description of Nature that may exist. Everything else is scaffolding that people add but they shouldn't because it's not a part of Nature.

Quantum computers won't change anything about the desire of the laymen to see something inside the black box. On the contrary, a quantum computer will be an even blacker box! You press a button, it does something that is completely incomprehensible to a layman, and announces a correct result very quickly, after a short time that experts say to be impossible to achieve with classical computers.
A lot of quantum interpretations fall into the trap of trying to say what is in that black box, without any way to say that it is right or wrong. Aaronson does this when he talks about how much computation Nature has to do in that box. He implicitly assumes that Nature is using the same mathematical representations that we are when we make macro observations.

I do not accept that assumption, as I have posted in essays here.

Eric Dennis comments to Aaronson:
I doubt anyone is really suspicious of the possibility of long coherence times for small systems. We’re suspicious of massive parallelism.
I question those long coherence times. Some of the experiments are like rolling a coin on the floor so that it eventually falls over to heads or tails, with equal probability. Or balancing a rock on the head of a pin so that it eventually falls, with all directions equally likely. Can this be done for a long time? Maybe with the coin, but not with the rock. Can the uncertainty during the long time be used to extract a super-Turing computation? No way.

Update: A company claims:
D-Wave Systems has broken the quantum computing 1000 qubit barrier, developing a processor about double the size of D-Wave’s previous generation, and far exceeding the number of qubits ever developed by D-Wave or any other quantum effort, the announcement said.

It will allow “significantly more complex computational problems to be solved than was possible on any previous quantum computer.”

At 1000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space which dwarfs the 2^512 possibilities available to the 512-qubit D-Wave Two. “In fact, the new search space contains far more possibilities than there are particles in the observable universe.”
This annoys Aaronson as much as it does me, as he says that he has proved that a quantum computer cannot really search all those possibilities simultaneously.
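For what it is worth, the press release's arithmetic checks out; the misleading part is the word "considers", not the numbers:

```python
# The press-release numbers themselves are right; only the word
# "considers" is doing the misleading work.
print(len(str(2 ** 1000)))    # 302 digits, i.e. about 1.07e301
print(len(str(2 ** 512)))     # 155 digits, i.e. about 1.34e154
# Particles in the observable universe: commonly estimated near 1e80,
# so even 2^512 dwarfs it.
```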

D-Wave probably has a legitimate technological achievement, but they do not have any real qubits, and there are no previous quantum computers.

Thursday, June 25, 2015

Essays for and against MUH

I mentioned the FQXi essay contest winners, and here are a couple more.

French Canadian physicist Marc Séguin won one of the two second prizes with this:
My God, It’s Full of Clones: Living in a Mathematical Universe

Imagine there’s only math — physics is nothing more than mathematics, we are self-aware mathematical substructures, and our physical universe is nothing more than a mathematical structure “seen from the inside”. If that’s the case, I will argue that it implies the existence of the Maxiverse, the largest imaginable multiverse, where every possible conscious observation is guaranteed to happen. ...

In the end, I believe in the Maxiverse because it is the ultimate playground for the curious mind. Living forever… across wildly divergent realities… who could ask, literally, for anything more than the Maxiverse? And if I’m right, somewhere within its infinitely complex simplicity, one of my F-clones is having a drink with one of your F-clones, and we’re having a big laugh about it all. Cheers!
He endorses Max Tegmark's Mathematical Universe Hypothesis, and more. While Tegmark contradicts himself about whether he believes in infinity, Séguin believes that he exists in an infinite number of copies.

This is not even good science fiction.

Lee Smolin won a third prize, and after a lot of dopey comments, ends with:
In closing, I would like to mention two properties enjoyed by the physical universe which are not isomorphic to any property of a mathematical object.

1. In the real universe it is always some present moment, which is one of a succession of moments. Properties of mathematical objects, once evoked, are true independent of time.

2. The universe exists apart from being evoked by the human imagination, while mathematical objects do not exist before and apart from being evoked by human imagination.
The first property is silly. You can say that math objects are independent of time, just as you can say they are independent of space, temperature, energy, or any other physical property. Unless of course the math is interpreted as modeling those things, as it usually is in mathematical physics.

The second is just anti-Platonism. Many or most mathematicians believe that math objects like the real numbers do exist independently of humans.

Monday, June 22, 2015

Myth of the Dark Ages


An essay by Tim O'Neill calls this chart "The Most Wrong Thing On the Internet Ever". He elaborates here and here, as does OFloinn.

Similarly wrong is the conflict thesis:
The conflict thesis is the proposition that there is an intrinsic intellectual conflict between religion and science and that the relationship between religion and science inevitably leads to public hostility. Although the thesis in modern form remains generally popular, the original form of the thesis is no longer widely supported among historians.
This blog presents the view that math, science, and technology have been under continuous development for millennia. For the most part, Christianity has contributed to the development of science.

By contrast, the Marxist view is that history is driven by conflict between grand social forces, with sudden breakthrus and revolutions between these forces being key to explaining everything. So they will say that the Roman Popes caused the Dark Ages by suppressing knowledge of the motion of the Earth and by burning witches, until Galileo stood up to them and started the scientific revolution by leading the Protestant Reformation. Or some such nonsense.

If interested in the broader history, see Catholic Church and science and List of Roman Catholic cleric-scientists.

Leftist scientists are praising the Pope for Laudato si. That book endorses consumer sacrifices and boycotts in order to promote Third World population growth, and it is all justified by some supposed consensus on global warming. The science of CO2 may be fine, but the rest is dubious. The theology is outside the scope of this blog, but I would like to see more analysis of whether the science is correct.

(Now he denounces "businessmen who call themselves Christian and they manufacture weapons.")

The Dark Ages are called dark because of the dearth of historical records, compared to the Roman Empire and other periods. It is like the dark side of the Moon, which got that name because we did not see it and knew almost nothing about it. No one is saying that sunlight failed to shine on the Dark Ages or the dark side of the Moon. But the sunlight from the dark side does not get to us. Likewise dark matter is dark because no starlight is scattered off it to us. And dark energy is dark. Read this blog for the Dark Buzz.

Saturday, June 20, 2015

Quantum computers attract commercial interest

The British Economist magazine is enthusiastic about quantum computing:
After decades languishing in the laboratory, quantum computers are attracting commercial interest ...

By exploiting certain quantum effects they can create bits, known as qubits, that do not have a definite value, thus overcoming classical computing’s limits.

Around the world, small bands of such engineers have been working on this approach for decades. Using two particular quantum phenomena, called superposition and entanglement, they have created qubits and linked them together to make prototype machines that exist in many states simultaneously. Such quantum computers do not require an increase in speed for their power to increase. In principle, this could allow them to become far more powerful than any classical machine — and it now looks as if principle will soon be turned into practice. Big firms, such as Google, Hewlett-Packard, IBM and Microsoft, are looking at how quantum computers might be commercialised. The world of quantum computation is almost here.

Ready or not, then, quantum computing is coming. It will start, as classical computing did, with clunky machines run in specialist facilities by teams of trained technicians. Ingenuity being what it is, though, it will surely spread beyond such experts’ grip. Quantum desktops, let alone tablets, are, no doubt, a long way away. But, in a neat circle of cause and effect, if quantum computing really can help create a room-temperature superconductor, such machines may yet come into existence.
No, this is crazy. No one has overcome any classical computing limits, no quantum computers are being commercialized, and there will not be any room-temperature superconductor.

There are many other technologies that are being commercialized after decades of languishing in the lab. Self-driving cars. Image identification. Voice recognition. Natural language processing. Robots.

In each of those areas, steady progress is being made. There are prototypes that qualify as a proof of concept. There may not be agreement about how far the technology will go, but it is obvious that commercial applications are coming.

Quantum computing does not qualify. There are lots of experiments that qualify as interesting tests of quantum mechanics. But there is no prototype that exceeds any classical computing limits, even on a small slow scale.

Most of you are going to say, "Why should I believe some stupid blogger saying it is impossible, when lots of smart people say this technology is coming, and they are backed by a lot of big money?"

There is no need to believe me. Just tell me how long you are willing to wait. What will you say if there is still no prototype in 2 years? 5 years? 10 years? 20 years?

This is the biggest research scam I've seen. String theory and the multiverse are scams, but at least those folks do not pretend to have commercial applications. There have been lots of over-hyped technologies before, such as fuel cells and hydrogen economy, but those are at least technological possibilities. There is never going to be a quantum computer that out-performs a Turing machine.

Thursday, June 18, 2015

More on infinitesimals

I criticized Sylvia Wenmackers, and she posted a rebuttal in the comments.

She explained what she meant by the hyperreals being incomplete. She is right that the hyperreals do not have the least upper bound property if you include non-internal sets. That is, the infinitesimals are bounded but do not have a least upper bound. But the bounded internal sets have least upper bounds.

The distinction is a little subtle. Arguments involving hyperreals mostly use internal sets, because then the properties of the reals can be used.
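To spell out the subtlety: the set I of all infinitesimals is bounded above by 1, but it is an external set, and no hyperreal b can be its least upper bound, since an infinitesimal b is beaten by 2b and a non-infinitesimal b is beaten by b/2. In symbols:

```latex
I = \{\, x \in {}^{*}\mathbb{R} \;:\; |x| < \tfrac{1}{n}
      \ \text{for every standard } n \in \mathbb{N} \,\}

b \in I \;\Rightarrow\; 2b \in I \ \text{and}\ 2b > b
\qquad\qquad
b \notin I,\ b > 0 \;\Rightarrow\; \tfrac{b}{2}
  \ \text{is a smaller upper bound of } I
```

Since I is not an internal set, this does not contradict the transfer principle, and bounded internal sets still have least upper bounds.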

On another point, she refers me to this Philip Ehrlich article on the history of non-Archimedean fields. He says:
In his paper Recent Work On The Principles of Mathematics, which appeared in 1901, Bertrand Russell reported that the three central problems of traditional mathematical philosophy – the nature of the infinite, the nature of the infinitesimal, and the nature of the continuum – had all been “completely solved” [1901, p. 89]. Indeed, as Russell went on to add: “The solutions, for those acquainted with mathematics, are so clear as to leave no longer the slightest doubt or difficulty” [1901, p. 89]. According to Russell, the structure of the infinite and the continuum were completely revealed by Cantor and Dedekind, and the concept of an infinitesimal had been found to be incoherent and was “banish[ed] from mathematics” through the work of Weierstrass and others [1901, pp. 88, 90].
I think that it is correct that the continuum ( = number line = real numbers) was figured out in the late 19th century by Weierstrass, Dedekind, and others, and widely understood in the early XXc, as I explained here. See Construction of the real numbers for details on the leading methods. The standard real numbers do not include infinitesimals.

But "banished" is not the right word. It is more accurate to say that infinitesimal arguments were made rigorous with limits.

The Stanford Encyclopedia of Philosophy entry on Continuity and Infinitesimals starts:
The usual meaning of the word continuous is “unbroken” or “uninterrupted”: thus a continuous entity — a continuum — has no “gaps.” We commonly suppose that space and time are continuous, and certain philosophers have maintained that all natural processes occur continuously: witness, for example, Leibniz's famous apothegm natura non facit saltus — “nature makes no jump.” In mathematics the word is used in the same general sense, but has had to be furnished with increasingly precise definitions. So, for instance, in the later 18th century continuity of a function was taken to mean that infinitesimal changes in the value of the argument induced infinitesimal changes in the value of the function. With the abandonment of infinitesimals in the 19th century this definition came to be replaced by one employing the more precise concept of limit.

Traditionally, an infinitesimal quantity is one which, while not necessarily coinciding with zero, is in some sense smaller than any finite quantity. For engineers, an infinitesimal is a quantity so small that its square and all higher powers can be neglected. In the theory of limits the term “infinitesimal” is sometimes applied to any sequence whose limit is zero. An infinitesimal magnitude may be regarded as what remains after a continuum has been subjected to an exhaustive analysis, in other words, as a continuum “viewed in the small.” It is in this sense that continuous curves have sometimes been held to be “composed” of infinitesimal straight lines.

Infinitesimals have a long and colourful history. They make an early appearance in the mathematics of the Greek atomist philosopher Democritus (c. 450 B.C.E.), only to be banished by the mathematician Eudoxus (c. 350 B.C.E.) in what was to become official “Euclidean” mathematics. Taking the somewhat obscure form of “indivisibles,” they reappear in the mathematics of the late middle ages and later played an important role in the development of the calculus. Their doubtful logical status led in the nineteenth century to their abandonment and replacement by the limit concept. In recent years, however, the concept of infinitesimal has been refounded on a rigorous basis.
This mentions and explains what I have been calling my motto or slogan, only it calls it an "apothegm", whatever that is. My dictionary says "A short pithy instructive saying". The "g" is silent, and it is pronounced APP-u-thum. Okay, I'll accept that, and may even adopt the word.

Consider the above statement that a continuous curve is composed of infinitesimal straight lines. Taken literally, it seems like nonsense. It took mathematicians 3 centuries to make it rigorous, and you can find the result in mathematical analysis textbooks.
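The textbook version of the "infinitesimal straight lines" statement is the arc length formula: the length of the curve y = f(x) over [a,b] is the limit of the lengths of inscribed polygons,

```latex
L \;=\; \lim_{\max \Delta x_i \to 0}\; \sum_i \sqrt{(\Delta x_i)^2 + (\Delta y_i)^2}
\;=\; \int_a^b \sqrt{1 + f'(x)^2}\;dx .
```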

The common textbook explanations use infinitesimal methods and limits, but not hyperreals.

My problem with Wenmackers is that she treats infinitesimals as sloppy reasoning until hyperreals came along, and treats the conventional epsilon-delta arguments as something that might only merit a footnote, lest they distract casual readers.

This is just wrong. The mainstream methods for rigorous infinitesimal methods use epsilons, deltas, limits, tangents, and derivatives. The hyperreals have their place as a fringe alternative view, but they are not central or necessary for rigor.

In quantum mechanics, the momentum operator generates infinitesimal translations. This does not mean that either sloppy reasoning or hyperreals are used. It means that infinitesimal methods were used to linearize the symmetry group at a point. This is essential to how quantum mechanics has been understood for almost 90 years.
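In standard notation: translating a wave function by an infinitesimal ε is, to first order, the identity minus iε/ℏ times the momentum operator, and exponentiating recovers the finite translation:

```latex
(T(a)\psi)(x) = \psi(x - a), \qquad
T(\varepsilon)\,\psi \approx \Big(1 - \frac{i\varepsilon}{\hbar}\,\hat{p}\Big)\psi,
\qquad \hat{p} = -i\hbar\,\frac{d}{dx},

\text{and the finite translation is } T(a) = e^{-ia\hat{p}/\hbar}.
```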

I also mentioned that special relativity is the infinitesimal version of general relativity. So yes, infinitesimal analysis is essential to XXc physics. How else do you understand relativity and quantum mechanics?

She writes:
What I mean by "loose talk involving infinitesimals" "frowned upon by mathematicians" is that physicists often talk about infinitesimals in a way that is close in spirit to Leibniz's work (and hence to non-standard analysis), which is not compatible with the definition of the classical limit as mathematicians use it in standard analysis.
No, I disagree with this. Limits and standard analysis were invented to make work by Leibniz and others rigorous, and they are still accepted as the best way.

She is essentially saying that the hyperreals are a better way to make Leibniz's work rigorous. If that is so, then why do all the textbooks do it a different way?

There are a few hyperreal enthusiasts among mathematicians who believe the hyperreals are superior, but they are a very small minority. I doubt that there are any colleges that teach analysis that way.
She also writes:
I think many of your other remarks also boil down to the same point: I use the term 'infinitesimal' in a more restricted sense than the way you seem to interpret it.
This is like saying:
When I refer to atoms, I am not talking about the atoms that are commonly described in college chemistry textbooks. I mean the hyper-atoms that were recently conceived as being closer in spirit to the way that the ancient Greek Democritus talked about atoms.
Russell did not just try to banish infinitesimals. He also tried to banish causality from physics, and convinced modern philosophers that there is no such thing.

This is a problem with modern philosophers. They can say the most ridiculous things just because they are accepted by other philosophers and historians. If a philosopher wants to know how Leibniz's work was made rigorous, she could knock on a mathematician's door and ask, "Did anyone make Leibniz's work rigorous?" The mathematician would say, "Sure, just take our Calculus I and Analysis I courses." If she asked, "What about the hyperreals?", he would say, "Yes, you can do it that way also, but we do not teach a class on it."

If I am wrong, please explain in the comments.

Monday, June 15, 2015

Contest winner misunderstands infinitesimals

FQXi has announced its annual essay contest winners, and the top prize went to a Belgian philosopher of physics, Sylvia Wenmackers, who wrote:
Essay Abstract
Our mathematical models may appear unreasonably effective to us, but only if we forget to take into account who we are: we are the children of this Cosmos. We were born here and we know our way around the block, even if we do not always appreciate just how wonderful an achievement that is.
My essay did not win any prizes. I suspect that the prizes would have been a lot different if the evaluation had been blinded (ie, if author names were removed during evaluation).

She has her own blog, and a grad student working on quantum teleportation and time travel.

The most substantive comments in her essay are about infinitesimals:
The natural sciences aim to formulate their theories in a mathematically precise way, so it seems fitting to call them the ‘exact sciences’. However, the natural sciences also allow – and often require – deviations from full mathematical rigor. Many practices that are acceptable to physicists – such as order of magnitude calculations, estimations of errors, and loose talk involving infinitesimals – are frowned upon by mathematicians. Moreover, all our empirical methods have a limited range and sensitivity, so all experiments give rise to measurement errors. Viewed as such, one may deny that any empirical science can be fully exact.
No, this is not right. Physicists take non-rigorous shortcuts, and mathematicians frown on loose talk. But mathematicians have rigorous theories for estimating errors, infinitesimals, and all other math in use. Non-rigorous work may be convenient, and full rigor may be impractical in some cases, but it is a mistake to say that science requires non-rigorous math. Mathematicians strive to make all math rigorous.
In mathematics, infinitesimals played an important role during the development of the calculus, especially in the work of Leibniz [11], but also in that of Newton (where they figure as ‘evanescent increments’) [12]. The development of the infinitesimal calculus was motivated by physics: geometric problems in the context of optics, as well as dynamical problems involving rates of change. Berkeley [13] ridiculed infinitesimals as “ghosts of departed quantities”. It has taken a long time to find a consistent definition of this concept that holds up the current standards of mathematical rigor, but meanwhile this has been achieved [14]. The contemporary definition of infinitesimals considers them in the context of an incomplete, ordered field of ‘hyperreal’ numbers, which is non-Archimedean: unlike the field of real numbers, it does contain non-zero, yet infinitely small numbers (infinitesimals). The alternative calculus based on hyperreal numbers, called ‘non-standard analysis’ (NSA), is conceptually closer to Leibniz’s original work (as compared to standard analysis).

While infinitesimals have long been banned from mathematics, they remained in fashion within the sciences, in particular in physics: not only in informal discourse, but also in didactics, explanations, and qualitative reasoning. It has been suggested that NSA can provide a post hoc justification for how infinitesimals are used in physics [15]. Indeed, NSA seems a very appealing framework for theoretical physics: it respects how physicists are already thinking of derivatives, differential equations, series expansions, and the like, and it is fully rigorous.11
I have previously argued that Berkeley was not ridiculing infinitesimals with that quote. The ghosts are the limits, not the infinitesimals.

I don't know why she says the hyperreals are incomplete. They have the same completeness properties as the real numbers. That is, Cauchy sequences converge, bounded sets have least upper bounds, and odd order polynomials have roots.

The impression given here is that differential calculus and mathematical physics were non-rigorous until hyperreals and NSA justified infinitesimals. That is not true, and most mathematicians and physicists today do not pay any attention to hyperreals or NSA.

The mainstream treatment of infinitesimals is to treat them as a shorthand for certain arguments involving limits, using a rigorous definition of limit. The main ideas were worked out by Cauchy, Weierstrass, and others in the 19th century, and probably perfected in the XXc. There is no need for hyperreals.

Infinitesimals were never banned from mathematics. They are completely legitimate if backed up by limits or hyperreals. Maybe physicists never learn that, but mathematicians do.

I might say: "Special relativity is the infinitesimal version of general relativity." What that means is that if you take a tangent geometric structure to the curved spacetime of general relativity, you get the (flat) geometry of special relativity. The tangent may be defined using limits, derivatives, or hyperreals. It is a rigorous statement, and these sorts of statements were never banned.

You do not see statements like that in physics books. They are more likely to say that special relativity is an approximation to general relativity, as they might say that a tangent line is an approximation to a curve. Mathematicians would rather take the limit, and make an exact statement.

Consider f'(x)dx, which can be integrated to get f(x). You can view dx as a hyperreal infinitesimal, and the integral as an infinite sum. But the more conventional view is that infinitesimals are not numbers, but a method for getting tangents and tensors. Then f'(x)dx is not a simple function, but something that acts on tangent vectors and can be integrated. I am skipping over subtle details, but it is a rigorous infinitesimal method, described in elementary math textbooks.

Also dy/dx is symbolically the division of infinitesimals, but rigorously defined as a limit.
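In symbols:

```latex
\frac{dy}{dx} \;=\; f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},
\qquad\qquad
\int_a^b f'(x)\,dx \;=\; f(b) - f(a).
```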

So the above paper badly misunderstands infinitesimals in treating them as only made rigorous by hyperreals. She also mentions considering Planck's constant h, or the reciprocal of the speed of light 1/c, to be like infinitesimals.

A recent book claims that Galileo used infinitesimals and the Jesuits banned such use. I don't know about that, but that predated Newton, Leibniz, and calculus. And I am sure that some use of infinitesimals was sloppy. All pre-XXc work was sloppy by modern standards. But the usage by mathematicians can be made rigorous. By the early XXc, it was all rigorous (in the math books).

Thursday, June 11, 2015

Dog consciousness causes wave function collapse


Lubos Motl defends the Copenhagen interpretation of quantum mechanics, and now he defends the Von Neumann–Wigner interpretation. Von Neumann supposedly believed that human consciousness caused collapse of the wave function, and Wigner said he thought that a dog had sufficient consciousness to cause collapse.

It sounds ridiculous when you phrase it that way, but it is not so silly. The electron may have an independent objective existence, but our best explanation uses wave functions that cleverly encode how it was observed in the past and how it might be observed in the future.

The Moon exists whether we look at it or not, but the exact position and other physical characteristics are either directly observed or inferred from models and previous observations. The observations confirm the predictions and narrow the error bars.

And yes, a dog can look at the Moon.

Peter Woit cites an ex-string theorist ranting about the field. One notable point is that hardly anyone is really doing string theory any more. They are toying around with mathematical structures and models inspired by string theory, but they are not trying to study electrons as tiny strings or anything you might recognize from popular accounts of the field.

Scott Aaronson has joined Noam Chomsky and other MIT eggheads in denouncing investment in oil companies. Some of the comments explain how this is just feel-good leftist political posturing that will accomplish nothing worthwhile. You would think that all these smart MIT professors could recommend some constructive changes for our society.

Sometimes I think that the environmentalist movement is dominated by anti-environmentalists who invent stupid causes to distract people away from bigger threats.