Monday, May 30, 2016

The medieval politics of infinitesimals

I would not have thought that infinitesimals would be so political, but a book published last year argues that they were. It is titled Infinitesimal: How a Dangerous Mathematical Theory Shaped the Modern World.

Russian-born MIT historian Slava Gerovitch reviews the book.
The Jesuits were largely responsible for raising the status of mathematics in Italy from a lowly discipline to a paragon of truth and a model for social and political order. The Gregorian reform of the calendar of 1582, widely accepted in Europe across the religious divide, had very favorable political ramifications for the Pope, and this project endeared mathematics to the hearts of Catholics. In an age of religious strife and political disputes, the Jesuits hailed mathematics in general, and Euclidean geometry in particular, as an exemplar of resolving arguments with unassailable certainty through clear definitions and careful logical reasoning. They lifted mathematics from its subservient role well below philosophy and theology in the medieval tree of knowledge and made it the centerpiece of their college curriculum as an indispensable tool for training the mind to think in an orderly and correct way.

The new, enviable position of mathematics in the Jesuits’ epistemological hierarchy came with a set of strings attached. Mathematics now had a new responsibility to publicly symbolize the ideals of certainty and order. Various dubious innovations, such as the method of indivisibles, with their inexplicable paradoxes, undermined this image. The Jesuits therefore viewed the notion of infinitesimals as a dangerous idea and wanted to expunge it from mathematics. In their view, infinitesimals not only tainted mathematics but also opened the door to subversive ideas in other areas, undermining the established social and political order. The Jesuits never aspired to mathematical originality. Their education was oriented toward an unquestioning study of established truths, and it discouraged open-ended intellectual explorations. In the first decades of the seventeenth century the Revisors General in Rome issued a series of injunctions against infinitesimals, forbidding their use in Jesuit colleges. Jesuit mathematicians called the indivisibles “hallucinations” and argued that “[t]hings that do not exist, nor could they exist, cannot be compared” (pp. 154, 159). ...

The battle over the method of indivisibles played out differently in England, where the Royal Society proved capable of sustaining an open intellectual debate. One of the most prominent critics of infinitesimals in England was philosopher and amateur mathematician Thomas Hobbes. A sworn enemy of the Catholic Church, he nevertheless shared with the Jesuits a fundamental commitment to hierarchical order in society. He believed that only a single-purpose organic unity of a nation, symbolized by the image of Leviathan, could save England from the chaos and strife sowed by the civil war. In the 1650s–70s his famously acrimonious dispute with John Wallis, the Savilian Professor of Geometry at Oxford and a leading proponent of the method of indivisibles, again pitted a champion of social order against an advocate of intellectual freedom. ...

In the 1960s, three hundred years after the Jesuits’ ban, infinitesimals eventually earned a rightful place in mathematics by acquiring a rigorous foundation in Abraham Robinson’s work on nonstandard analysis. They had played their most important role, however, back in the days when the method of indivisibles lacked rigor and was fraught with paradoxes. Perhaps it should not come as a surprise that today’s mathematics also borrows extremely fruitful ideas from nonrigorous fields, such as supersymmetric quantum field theory and string theory. ...

If, as in the case of the Jesuits, maintaining the appearance of infallibility becomes more important than exploration of new ideas, mathematics loses its creative spirit and turns into a storage of theorems.
I do not accept this. He argues that mathematics must accept nonrigorous work because someone might give it a rigorous foundation three centuries later.

It was only a few years later that Isaac Newton (and Leibniz and others) developed a coherent theory of infinitesimals. The subject was made much more rigorous again by Cauchy, Weierstrass, and others in the 1800s.
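As a sketch of that limit idea (my illustration, not from the review): the Cauchy-Weierstrass approach replaces infinitesimals with ordinary finite arithmetic, so the derivative of f(x) = x² at a point is just the limit of finite difference quotients.

```python
# Sketch (editor's illustration): the limit definition of the
# derivative uses only finite arithmetic.  The difference quotients
# of f(x) = x**2 at x = 3 approach the derivative 6 as h shrinks;
# no infinitely small h is ever used.

def f(x):
    return x * x

x = 3.0
for h in (0.1, 0.01, 0.001, 0.0001):
    quotient = (f(x + h) - f(x)) / h   # finite difference quotient
    print(h, quotient)                 # tends to 6 as h -> 0
```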


  1. And it was never rigorous. That's why it produces paradoxes:

    Even Terry Tao can't get out of the 17th century and read some work on computer algebra systems (Risch, Davenport, Bronstein, etc.) or developments in the modern logic of foundations. They don't even read major mathematicians like Solomon Feferman, because they are overrated quacks making arguments at the extremes without understanding what they are really doing. Hyperreal numbers and limit arguments are just cover for finite algebra. They're a sideshow.

  2. "In the 1960s, three hundred years after the Jesuits’ ban, infinitesimals eventually earned a rightful place in mathematics by acquiring a rigorous foundation in Abraham Robinson’s work on nonstandard analysis. They had played their most important role, however, back in the days when the method of indivisibles lacked rigor and was fraught with paradoxes. Perhaps it should not come as a surprise that today’s mathematics also borrows extremely fruitful ideas from non-rigorous fields, such as supersymmetric quantum field theory and string theory."

    The idea that Robinson gave a rigorous foundation for infinitesimals is complete tripe, and I have described the problems with these techniques here. People reject theories because they don't work! It's certainly true that mathematics has borrowed heavily from physics, but there are few borrowings of any consequence from non-rigorous physics.

    David Stove called this fallacy the Columbus argument:

    "We need no books to teach us now how dangerous the Columbus argument is: we have as our teacher instead the far greater authority of experience—expériences nombreuses et funestes (as Laplace said in another connection). For 'They all laughed at Christopher Columbus' led, by a transition both natural and reasonable, to 'It’s an outrageous proposal, but we’ll certainly consider it.' That in turn led, naturally enough, to 'We must consider it because it’s an outrageous proposal.'...

    How did an argument so easily answered ever impose upon intelligent people? Easily. It was simply a matter of ensuring what Ludwig Wittgenstein (in another connection) called a one-sided diet of examples. Mention no past innovators except those who were innovators-for-the-better. Harp away endlessly on the examples of Columbus and Copernicus, Galileo and Bruno, Socrates and (if you think the traffic will bear it) Jesus. Conceal the fact that there must have been at least one innovator-for-the-worse for every one of these (very overworked) good guys. Never mention Lenin or Pol Pot, Marx or Hegel, Robespierre or the Marquis de Sade, or those forgotten innovators of genius to whom humanity has been indebted for any of the countless insane theories which have ever acquired a following in astronomy, geology, or biology. There is no weakness in the Columbus argument which cannot be more than made up for by a sufficiently tendentious choice of examples."

    You can't turn the burden of evidence around, or we would have to entertain every crazy idea someone happens to come up with. That's not how science or mathematics works. You must have proofs or verification. End of story. The problem with the founders of analysis was that they couldn't give compelling arguments, and the same was true of Galileo.

    Physicists and mathematicians don't invent much anyway, so I guess I shouldn't get too worked up about it. The plane, the computer, the bagless vacuum, the telephone, the microwave, the lightbulb, the chronometer, the radio, etc.; Faraday was even bad at math. Do we really need so many trophies for useless theorems of abstract algebra or ugly post-modern buildings? Poincaré was one of the only mathematicians, besides Newton (who invented a telescope), who took science seriously, and he completely disapproved of Cantorian set theory. He was right, but you guys in the mathematics departments are getting REALLY behind! You can't even model things if I gave them to you as an assignment. Get your act together. The departments at Ivy League schools are reducing us to laughter. It's like you guys study classical music. Such pride for so little accomplishment in the modern world.

  3. Good pick from amidst the apparently heartfelt outpourings of a dense brain.


    1. A "dense brain" can be construed as a compliment, so be careful with the language (lol). On the other hand, mathematicians and physicist studying donuts let farm boys from Iowa make all the important advancements, like Philo Farnsworth and the television or civil engineers like Konrad Zuse with the stored-program computer and high-level languages. These 17th century analysis crackpots can't even keep up with the latest lattice gauge theories. They just sit around studying moron blowups and pretend math is reality but can't even solve a three-body problem.

  4. Can't measure it? Then no science.

    I don't accept point particles or infinitesimals in science, and at best only diagrammatically in math or cartography. I accept a point only as a coordinate location within a defined grid, or a location on a number line as a distance or period interval from zero, or a euphemistic location being pointed at by hand or gestured to or indicated on a diagram. But I refuse to believe you can use a point itself as a measurable entity of any kind, much less an entity capable of any kind of kinetic motion or interaction with anything. It has no extension or possible volume; it has no center or edge or capacity to rotate; basically, by itself, it's at best a placeholder of zero dimensions. I have had it to tears with people blathering on about instantaneous velocities at a point, mass at a point, energy at a point, density at a point, outside a flipping body at a point (black hole rubbish), ugh, infinities at a point (stacking an abstraction of the unmeasured on top of an un-measurable geometric abstraction)... utter linguistic mumbo jumbo. Infinitesimals are no different: they are useless in actual measurement, as they have no period or set length, and basically act like taffy or rubber bands in calculations, much like 'i'. If you wish to do useful calculus, you should play in this little place I like to call reality and keep it finite, as all science should be. We do not have all of eternity to play god with n + 1 onanistically or entertain endless operations which are never any closer to (or farther from) completion. Metaphysics (not science, not math) is the proper place for consideration of angels dancing on pin heads, free-floating points, infinitesimals, and other unmeasurable abstractions.

    1. They still produce contradictions, and it's purely a lie that they are "rigorous". People bandy about terms they don't understand.

  5. Just to point out an example where Roger's statement about "rigour" is clearly wrong, consider this video about the Banach-Tarski paradox:

    One might sound like a complete idiot arguing that it's only an apparent paradox. You would still be wrong! For instance, the different sizes of infinity DO NOT FOLLOW from CLASSICAL LOGIC, because they engage infinite implications. Cantor's power set argument clearly does this via an endless series of pointless counterexamples. Furthermore, the mappings are not well-defined but "impredicative". It's not rigorous and never will be. Limit arguments don't escape the paradoxes but simply engage non-sequitur inequalities. There is a simple jump in the reasoning.
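    For reference, the diagonal step of the power set argument being disputed here can at least be checked mechanically in the finite case. This sketch (editor's illustration, with an arbitrary 3-element set) exhaustively verifies that for every map f from S into subsets of S, the set D = {x : x not in f(x)} is missed by f.

```python
# Sketch (editor's illustration): Cantor's diagonal step, checked by
# brute force on a 3-element set.  For every map f : S -> P(S), the
# set D = {x in S : x not in f(x)} differs from f(x) for each x,
# so no such f is onto the power set.
from itertools import combinations, product

S = [0, 1, 2]
subsets = [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]

for images in product(subsets, repeat=len(S)):   # all 8**3 = 512 maps
    f = dict(zip(S, images))
    D = {x for x in S if x not in f[x]}          # the diagonal set
    assert all(f[x] != D for x in S)             # D is not in f's image

print("checked all", len(subsets) ** len(S), "maps")
```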

  6. It is a paradox, not a contradiction. You can learn a lot from paradoxes.

    1. Sure, you can learn that it produces useless math, as the Stanford mathematician Solomon Feferman says it does. In his book In the Light of Logic (1998), he dispenses with a great many of the paradoxes, but I don't think he goes far enough. Completed infinity is a self-contradiction, and many contradictory results follow from it:

      "Philosopher William Lane Craig uses an analogy to show the mathematical contradictions involved with making certain kinds of calculations with infinite sets. Craig asks us to imagine that he has an infinite number of marbles in his possession, and that he wants to give you some of them. Suppose, in fact, that he wants to give you an infinite number of marbles.

      One way he could do that would be to give you the entire set of marbles. In that case he would have zero marbles left for himself. However, another way he could do it would be to give you all the odd numbered marbles. Then he would still have an infinite number left over for himself, and you would have an infinite set too. You’d have just as many as he would—in fact, each of you would have just as many as Craig originally had before the marbles were divided into odd and even.

      These illustrations demonstrate that performing simple calculations involving an infinite number of things leads to mathematical contradictions. For the first case in which Craig handed out all the marbles, an infinite set minus an infinite set is equal to zero (ℵ0 - ℵ0 = 0); for the second case in which he handed out all the odd numbered marbles, an infinite set minus an infinite set is still infinite (ℵ0 - ℵ0 = ℵ0). In each case, the identical value was subtracted from the identical value (ℵ0 - ℵ0) but with contradictory results (0 and ℵ0). Since dividing and subtracting sets of equal amounts should not produce contradictory results, the contradictions involved with calculating infinite sets casts doubt on the infinite as a coherent notion."

      The Banach-Tarski paradox is a hint that they are making bad assumptions. That's all it is. There is nothing deep or interesting about it.
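      Under the standard set-theoretic reading (editor's gloss, not Craig's), the marble story shows that cardinal subtraction is left undefined rather than contradictory: the pairing n ↦ 2n+1 matches the naturals with the odds one-for-one, so "how many are left" depends on which infinite subset is removed. A finite-prefix sketch:

```python
# Sketch (editor's illustration): the pairing n -> 2n+1 matches
# naturals with odd numbers one-for-one.  This is why removing the
# odds leaves infinitely many marbles while removing everything
# leaves none -- and why aleph0 - aleph0 is simply left undefined.

N = 1000                                # finite prefix of the naturals
naturals = list(range(N))
odds = [2 * n + 1 for n in naturals]    # one odd number per natural

assert len(odds) == len(naturals)       # the pairing is one-for-one
assert len(set(odds)) == N              # and never repeats a value
```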

  7. I think one of the basic problems with applying infinities to anything actual is that you cannot actually form a ratio between a finite quantity and an infinite quantity. Ratios are designed to show a relationship between two quantities, but with infinite this or that there is no relationship. Dividing an infinity by any quantity is useless... as is multiplying it by any quantity, as is subtracting any actual amount from it, or attempting to add any actual amount to it. What purpose does it serve in relation to anything measurable?

    1. Cauchy just derived some INEQUALITIES, and they have nothing to do with the algebra. When I shift a curve up, I just add "+ (shift)" to the algebra. I don't move an infinite number of points or engage in some bogus limit argument. There is no need to define a self-contradictory continuum at all. You simply adjust the machinery. One can operate on the whole without reference to the parts. Continuity is about degenerate discreteness and is NOT THE OPPOSITE OF DISCRETENESS. You don't define a person's mind by the set of all the words they have said in their life. You don't define algorithms or formulas this way either. Do I need an "infinite" set of words to define my mind? Set theory is bunk and going backwards. It starts with completely arbitrary assumptions and is chaotically sensitive to initial conditions. There are concrete problems that come out differently when using different versions of set theory; earlier, I mentioned this in relation to coloring the plane. Math departments are just hundreds of years out of date because they are filled with autistics and obsessives. They're clowns, reducing us all to laughter.

  8. Paradoxes are errors of assumption, which can really jam up a logical consideration. 'Can Zeus make a rock so heavy he can't pick it up?' You can assume Zeus can make a rock, and you can assume he is supposed to be quite strong, but how do you decide whether his rock-building abilities are more or less advanced than his rock-lifting abilities? You can't assume Zeus is all-powerful; mythology plainly says otherwise. Flawed assumptions with little to no data are as close to useless as you can get in logical arguments.

    You can also play with a paradox to demonstrate hidden incorrect or flawed assumptions. I think Zeno was the one who asked: if you can cross half a room, and then cross half the remaining distance, and then half of that remaining distance, how can you cross the room or how can movement be possible? The trick is in the construction of the flawed premise/assumption. If you can cross half a distance, you can cross the whole distance: it is given that you can cross half the distance; therefore, a whole distance can be covered as well. Dividing a distance into infinitely many lengths is not possible in a finite amount of time and is actually a separate action from traveling any finite distance. Endless calculation has no bearing on, or ability to alter, the fact that if you can cross a finite distance, then, given time, you can cross any finite distance. Once again, dividing a finite quantity (a set distance) into an infinite quantity leads to nonsense.

    The question to cut all paradoxes down to their roots should be posed like this: Can you logically construct an illogical or deceiving statement? Answer: yes, in any language (machine or human, including... mathematics). It happens all the time in computer programming, and it is nothing to brag about. I have never been able to understand why people think the ability to construct an argument that appears logical but isn't is some kind of impressive endeavor. At best it could be called a good con, an effective lie, or a magic trick. Tricking someone depends upon the trickster's ability to fool the sensibilities of the rube (both observational and cognitive). Heaven help you if you are only trying to fool yourself.

    A more useful recognition of why paradoxes are ALWAYS poorly constructed logical arguments is because they are often slipped into scientific reasoning in the form of the small but deadly 'tautology' (which to this day screws up huge swathes of scientific theory that is currently adhered to almost religiously in some places).

  9. These specialists in academic departments are narrow idiots. They don't even know Plato but are busy publishing their own work. One paper remarks: "I do not think that any mathematical solution can provide the much sought after answers to any of the paradoxes of Zeno. In fact all mathematical attempts to resolve these paradoxes share a common feature, a feature that makes them consistently miss the fundamental point which is Zeno’s concern for the one-many relation, or it would be better to say, lack of relation. This takes us back to the ancient dispute between the Eleatic school and the Pluralists."

    I hate to sound like Charles Eliot or Mortimer Adler, but they are bereft of philosophy and history. I've read about the great religions of the world and came to understand the systematic flaws in Eastern irrationalism, like the great mathematician Igor Shafarevich. That includes Egyptian, Babylonian, Celtic, Pagan, Buddhist, Hindu, Satanic, Gnostic, Thelemic, Occult, etc. beliefs. I came to see the macrohistory of Cobbett, Weaver, Vico, Spengler, Toynbee, Gibbon, Sorokin, Ch'ien, Augustine, Khaldun, Smith, Hegel, Comte, Marx, Spencer, Pareto, Weber, Steiner, Chardin, Gramsci, Quigley, Glubb, Sarkar, Eisler, Lovelock, Voegelin, Barzun, etc. I studied the dangerous millennialist cults such as the Platonists, the Pythagoreans, the Saint-Simonians, the Fourierists, the Owenites (public school system), the Labor Zionists, the Bellamy Nationalists, the dialectical materialists, the Fabians, etc. I studied art, architecture, literature, philosophy, music, politics, science, and mathematics. I don't sit around with a left-brained monomania.

    See my blog on axiomatics. I'm not impressed with their attempts to overcome an inferiority complex.

    1. These mathematics degrees are USELESS, and almost no one is employed as a mathematician. The BLS put the number at about 3,500 for the entire country in 2014. Furthermore, the education industry has no empirical justification, and this generalizes around the world.

      "Cross national data show no association between the increases in human capital attributable to rising educational attainment of the labor force and the rate of growth of output per worker. This implies the association of educational capital growth with conventional measures of TFP is large, strongly statistically significant, and negative. " - Lant Pritchett (World Bank & Kennedy School of Government)

      "[N]either the increase nor the initial level of higher education is found to have a statistically significant relationship with growth rates both in the OECD and worldwide. This result is robust to numerous different specifications." - Craig Holmes (Oxford University)