The novice, through the standard elementary mathematics indoctrination, may fail to appreciate that, compared to the natural, integer, and rational numbers, there is nothing simple about defining the real numbers. The gap, both conceptual and technical, that one must cross when passing from the former to the latter is substantial and perhaps best witnessed by history. The existence of line segments whose length cannot be measured by any rational number is well known to have been discovered many centuries ago (though the precise details are unknown). The seemingly simple problem of rigorously introducing mathematical entities that do suffice to measure the length of any line segment proved very challenging. Even relatively modern attempts due to such prominent figures as Bolzano, Hamilton, and Weierstrass were only partially rigorous, and it was only with the work of Cantor and Dedekind in the early part of the 1870s that the reals finally came into existence.

Jeremy Avigad writes:
Today, we think of a “function” as a correspondence between any two mathematical domains: we can talk about functions from the real numbers to the real numbers, functions that take integers to integers, functions that map each point of the Euclidean plane to an element of a certain group, and, indeed, functions between any two sets. As we began to explore the topic, Morris and I learned that most of the historical literature on the function concept focuses on functions from the real or complex numbers to the real or complex numbers. ...

It is funny because it is so clumsy. It should be obvious that a function can have any domain, and have any definition on that domain.
Even the notion of a “number theoretic function,” like the factorial function or the Euler function, is nowhere to be found in the literature; authors from Euler to Gauss referred to such entities as “symbols,” “characters,” or “notations.” Morris and I tracked down what may well be the first use of the term “number theoretic function” in a paper by Eisenstein from 1850, which begins with a lengthy explanation as to why it is appropriate to call the Euler phi function a “function.” We struggled to parse the old-fashioned German, which translates roughly as follows:
Once, with the concept of a function, one moved away from the necessity of having an analytic construction and began to take its essence to be a tabular collection of values associated to the values of one or several variables, it became possible to take the concept to include functions which, due to conditions of an arithmetic nature, have a determinate sense only when the variables occurring in them have integral values, or only for certain value-combinations arising from the natural number series. For intermediate values, such functions remain indeterminate and arbitrary, or without any meaning.

When the gist of the passage sank in, we laughed out loud.
It is easy to forget how subtle these concepts are, as their meanings have been settled for a century and explained in elementary textbooks. But they were not obvious to some pretty smart 19th century mathematicians. Even today, most physicists and philosophers have never seen rigorous definitions of these concepts.
Now a function can be defined as a suitable set of ordered pairs, once the machinery of set theory is in place. The domain of the function can be any set, and so can the range. These things seem obvious to mathematicians today, but it took a long time to get these concepts right. And concepts like infinitesimals are still widely misunderstood.
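To make the set-of-ordered-pairs idea concrete, here is a minimal sketch in Python (my own illustration; the helper names `is_function` and `apply` are made up for this example):

```python
# Set-theoretically, a function is a set of ordered pairs (x, y)
# in which no x appears twice. A frozenset of pairs models this directly.
f = frozenset({(1, 1), (2, 4), (3, 9)})  # the squaring function on {1, 2, 3}

def is_function(pairs):
    """Check the defining property: each input occurs in exactly one pair."""
    xs = [x for x, _ in pairs]
    return len(xs) == len(set(xs))

def apply(pairs, x):
    """Evaluate by finding the unique pair whose first entry is x."""
    for a, b in pairs:
        if a == x:
            return b
    raise ValueError(f"{x} is not in the domain")

domain = {x for x, _ in f}  # the domain is itself just a set: {1, 2, 3}
```

Nothing here depends on the domain being numbers; the pairs could just as well relate points of the plane to group elements.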
Could you please touch upon the difference between a function and an operator? I mean, I know that an operator acts on an input function to give you an output function (which possibly can be the same if it's an identity operator), but my question here is: why, then, is an operator not regarded as a function, if the idea of a function indeed is so general (or am I over-reading something in the more general definition of a function that you mention here)?
Similarly, what's the formal difference between a function and a distribution? Is there anything of consequence that this difference leads to in practice? As an engineer, I can understand that, perhaps, Dirac's delta cannot be treated as a function. It is defined via a limiting process, and in the limiting process, it not only "hits" infinity, but it also ends up being multi-valued, in a sense. But what precisely is the formal difference? And, why can't at least those simple PDFs, e.g. the Gaussian distribution, be directly treated as functions? Why are they still called distributions?
Yes, an operator is just a function from one space of functions to another.
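As a hedged illustration of that point (not from the original comment): an operator is a function whose input and output are themselves functions. A numerical differentiation operator in Python, where `derivative` and the step size `h` are my own illustrative choices:

```python
def derivative(f, h=1e-6):
    """Operator: takes a function f and returns a new function approximating f'.
    Uses a central difference; h is a step size chosen for illustration."""
    def f_prime(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return f_prime

square = lambda x: x * x
d_square = derivative(square)  # the output is again a function of one variable
```

The operator's domain and range are spaces of functions, but it satisfies the same set-of-ordered-pairs definition as any other function.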
A distribution (like the Dirac delta) is a more complex object. It can be defined as an element of the dual space of some function space, or as an element of the completion of a function space with respect to a suitable metric. That is, the Dirac delta is defined by what it does in an integral, and not by the nature of an infinite value at some point.
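A numerical sketch of "defined by what it does in an integral" (my own illustration, not from the comment): replace the delta by ever-narrower Gaussians and pair them with a smooth test function; the integrals tend to the test function's value at 0:

```python
import math

def delta_eps(x, eps):
    """Narrow Gaussian of width eps, a standard approximation to the Dirac delta."""
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def pair(eps, phi, a=-1.0, b=1.0, n=200001):
    """Riemann-sum approximation of the integral of delta_eps * phi over [a, b]."""
    h = (b - a) / (n - 1)
    return sum(delta_eps(a + i * h, eps) * phi(a + i * h) for i in range(n)) * h

phi = lambda x: math.cos(x)  # a smooth test function with phi(0) = 1
# As eps shrinks, the pairing approaches phi(0):
values = [pair(eps, phi) for eps in (0.1, 0.01, 0.001)]
```

The "function" `delta_eps` blows up pointwise as `eps` shrinks, but the pairing with `phi` stays finite and converges; that pairing is what the distribution actually is.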
Logicians have had a hard time getting a concept of the continuum because it's a flawed idea, just like set theory, given that it confuses intension and extension (you can't go backwards). Math will never be founded on it because it's a superfluous development that falsely claims to create a foundation for what was developed BEFORE! Real analysis was never put on a sound basis with set theory but ended in paradox and confusion. This is true of Dedekind cuts, Cauchy sequences, uncountable sets and the rest. People are simply ignorant about the history.
Real fish, real numbers, real jobs
Although Roger says otherwise (I tend to agree with most of what he writes), it isn't true that modern mathematics is based on ZFC (it certainly isn't creative within it). Truth is not Rortyian intersubjectivity! This is postmodern nonsense coming from "metamathematicians" that want people to buy into social constructivism and relativism. Philosophers like David Stove made the same charge of the crackpot Popper. Everyone knew, just like Lagrange, that calculus was just algebra with quacky proofs that used "infinity" to justify finite reasoning and derivations. What was unclear was what the math was ultimately talking about. What is a line, after all? It can't be a continuum, and that's why the discrete vs. continuous distinction is a false dichotomy. A continuum is like a round square, an a priori impossibility. The argument has nothing to do with Platonism. There are even probability models where zero-probability events comprise infinitely many sample points!
The main problem is that the constructivists believed in the self-contradictory completed infinity of the natural numbers, and this was just as bogus as the completed infinities of the reals. Solomon Feferman at least put to death the transfinite theory of Cantor (and the diagonalizing of Gödel and Turing) building on Poincaré's very obvious notion of impredicativity, which philosophers, like Russell (a womanizing flake), and mathematicians, like Bourbaki (pompous, postmodern French abstractionists), simply embraced.
"In his concluding chapters, Feferman uses tools from the special part of logic called proof theory to explain how the vast part--if not all--of scientifically applicable mathematics can be justified on the basis of purely arithmetical principles. At least to that extent, the question raised in two of the essays of the volume, Is Cantor Necessary?, is answered with a resounding no."
Gödel and Turing in 10 minutes seen to be trivial and meaningless
It's about as arbitrary as the "law" of the excluded middle. The intuitionists had points, as did the constructivists and finitists, like Hilbert (the word "finite" is seriously misleading here). The only people that are serious about logic are the "strict finitists" or "ultrafinitists".
Jean Paul Van Bendegem has a great article on strict finitism and an entry on finite geometry in the Stanford Encyclopedia of Philosophy
Strict Finitism and the Logic of Mathematical Applications
The debate is over but we are waiting for unprofessional hacks in math departments to accept linear "approximations" in physics.