Let’s set aside the goal of unifying all knowledge. How are we doing in the millennia-long quest for absolute and objective truth? Not so well, it seems, and that is largely because of the devastating contributions of a few philosophers and logicians, particularly David Hume, Bertrand Russell and Kurt Gödel. ...

What about maths and logic? At the beginning of the 20th century, a number of logicians, mathematicians and philosophers of mathematics were trying to establish firm logical foundations for mathematics and similar formal systems. The most famous such attempt was made by Bertrand Russell and Alfred North Whitehead, and it resulted in their Principia Mathematica (1910-13), one of the most impenetrable reads of all time. It failed.

A few years later the logician Kurt Gödel explained why. His two ‘incompleteness theorems’ proved — logically — that any sufficiently complex mathematical or logical system will contain truths that cannot be proven from within that system. Russell conceded this fatal blow to his enterprise, as well as the larger moral that we have to be content with unprovable truths even in mathematics. If we add to Gödel’s results the well-known fact that logical proofs and mathematical theorems have to start from assumptions (or axioms) that are themselves unprovable (or, in the case of some deductive reasoning like syllogisms, are derived from empirical observations and generalisation — ie, from induction), it seems that the quest for true and objective knowledge is revealed as a mirage.

No, it did not fail. The Russell-Whitehead system was replaced by better ones, which became more famous, the most popular being ZFC. It is a firm logical foundation for mathematics.

There is no such thing as an unprovable truth in mathematics. It is true that a statement symbolizing the consistency of ZFC cannot be proved within ZFC, and that surprised many people at the time. In retrospect, the reverse would have been stranger. But it does not alter the ancient fact that all mathematical truths are proved from axioms. The lack of an internal consistency proof is just a surprising fact to newcomers to the field, like the irrationality of the square root of two, or the uncountability of the real numbers.

It is true that theorems are proved from axioms, and that is how math has worked for millennia. Yes, math gives us absolute and objective truth.

Gödel's most famous theorems say that statements are provable if and only if they are true in all models, and that there is no computable algorithm for determining whether a statement is provable. His work is an affirmation of the axiomatic method, not a refutation of it. If there were such an algorithm, then mindless application of it would replace the axiomatic method.
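In the propositional fragment, "true in all models" really is mechanically checkable, which makes the contrast with full first-order logic vivid: a truth table is a finite search over every model. A minimal sketch, purely illustrative (formulas are encoded as Python predicates for convenience):

```python
from itertools import product

def is_tautology(formula, variables):
    """Brute-force 'true in all models': try every truth assignment."""
    return all(formula(*assignment)
               for assignment in product([False, True], repeat=len(variables)))

def implies(a, b):
    return (not a) or b

# Peirce's law ((p -> q) -> p) -> p holds in every model, hence is provable.
peirce = lambda p, q: implies(implies(implies(p, q), p), p)
print(is_tautology(peirce, ["p", "q"]))  # True
```

For first-order logic no such exhaustive search exists: the models are infinite in number and size, which is why Church's theorem rules out a decision algorithm there.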

Supposedly Hilbert thought that an axiomatization of math should first prove the consistency of its axioms. If so, that was a stupid belief, because inconsistent axioms allow proof of anything. Using an axiom system to prove its own consistency is worthless: the proof would not show that the axioms are consistent, because inconsistent axioms would prove the very same thing. Either Hilbert made a trivial mistake or he has been misinterpreted. I suspect the latter, as I cannot find anywhere that he clearly said the axiomatization had to prove its own consistency.

Here is a BBC Radio 4 podcast on the incompleteness theorems, with discussion of Hilbert's program. The scholars imply that Hilbert admitted defeat by not publicly commenting on Gödel's theorems. However, Hilbert posed an assortment of other problems, and sometimes speculated about a possible solution, but no one cares much when his speculation differed from the later solution. I do not see any good reason to say that Hilbert was defeated.

Hilbert did say that the proof should be finitary, and he worked to find such a proof, but there was some disagreement at the time as to what would count as such a proof. The nature of logic is that an inconsistency, if there is one, has a finitary proof, but consistency usually does not.

Consistency is never the main goal anyway. As was later shown, consistency is preserved whether you assume the continuum hypothesis or its negation. Mathematicians want to know what is true, and consistency does not decide the issue.

Even mathematicians are too eager to concede that Hilbert was refuted. Wikipedia says:

Kurt Gödel showed ... This wipes out most of Hilbert's program as follows:

It is not possible to formalize all of mathematics, ...

... there is no complete consistent extension of even Peano arithmetic with a recursively enumerable set of axioms, ...

A theory such as Peano arithmetic cannot even prove its own consistency, ...

There is no algorithm to decide the truth (or provability) of statements in any consistent extension of Peano arithmetic. ...

But I do not think that Hilbert opposed any of these things.

It is possible to formalize math in the sense of making all the theorems provable in a system like ZFC. Systems cannot prove their own consistency, but you would not want that anyway. And there is no magic truth algorithm to replace the axiomatic method.

All of that shows that Hilbert's program was essentially correct, not that it was wrong.

Whether or not mathematical foundations developed according to Russell's or Hilbert's expectations is an amusing historical question, but not really relevant to whether math achieves objective truth.

The essence of Hilbert's Program is to reduce infinitistic mathematics to finitistic mathematics. That is an unqualified success. All modern mathematics uses finitary proofs.

Wikipedia says:

Ludwig Wittgenstein condemned set theory. He wrote that "set theory is wrong", since it builds on the "nonsense" of fictitious symbolism, has "pernicious idioms", and that it is nonsensical to talk about "all numbers".

This is just ignorant foolishness from philosophers. Set theory is the foundation of mathematics. Logic ought to be a foundation for philosophers, but it is rare to find one with basic competence in the subject. For some reason, leftist philosophers and other academics like to deny the possibility of truth, and they hate logic. Wittgenstein is mainly famous for saying, "Whereof one cannot speak, thereof one must be silent." Maybe he should have kept quiet about set theory. So should Pigliucci and the other anti-truth philosophers.

Update: A comment links to some scholarly work. You can download a free copy of the first paper, The Scope of Gödel’s First Incompleteness Theorem, here or here.

See also Number theory and elementary arithmetic by Jeremy Avigad, Hilbert's Program Then and Now and Stanford Encyclopedia: Hilbert's Program by Richard Zach, and Hilbert's Program Revisited by Panu Raatikainen.

These articles make a good case that the essence of Hilbert's program was accomplished. They even argue that all the important theorems of modern mathematics can be proved in systems that have elementary consistency proofs.

Your views are strongly supportive of a reversed narrative about Cantor's set theory. Set theory came after most useful mathematics, and the use of the term "finite" is quite abused. Hilbert had trouble explaining what he meant by it. Edward Nelson said “[f]initism is the last refuge of the Platonist.” The argument Gödel made was the same type of diagonalization that Cantor performed, and that Turing performed after Gödel. It has nothing to do with productive mathematics. By the way, people who think Gödel and Turing spawned the computer revolution are plainly mistaken. It's a myth of mathematics departments.

"No compelling evidence has yet been presented that G1 affects, or future refinements of it will affect, mainstream mathematics."

http://link.springer.com/article/10.1007/s11787-014-0107-3

Self-referencing formulas and impredicative sets are not about mathematics proper. I can't stress enough how badly people misunderstand this! By the way, people who are trying to connect this to the continuum hypothesis are crackpots. They are similar only to the extent that they are both about nonsense.

Chaitin finds that systems have a specific complexity bound beyond which provability falls to zero. Ron Maimon: "Gödel's theorem is a limitation on understanding the eventual behavior of a computer program, in the limit of infinite running time." (http://physics.stackexchange.com/a/14944)

Roger: "Supposedly Hilbert thought that an axiomatization of math should first prove the consistency of its axioms. If so, that was a stupid belief, because inconsistent axioms allow proof of anything."

Precisely! Poincaré knew that assuming induction in order to prove induction is circular anyway. I won't bother mentioning the weak retort about infinitely axiomatized systems, because it's a fanciful and stupid reply to the absurdity of incompleteness.

"Generalizing a bit, this suggests that logicism may be safe from Gödel's first incompleteness theorem and even the second, as well. If we do not and cannot know the consistency of a putative logicist system, then we are probably no better off with regard to mathematical knowledge than it is (assuming it is otherwise powerful enough). And, if we can know of a system that it is consistent, then, for the benefit of logicism, we should regard the system as too weak to be the relevant one for logicism"

http://www.jstor.org/stable/2214847

Thanks. You can download a free copy of the first paper here or here.

Yes, there are some consistency proofs of Peano arithmetic and other systems, but they are not very satisfying.

You made serious mistakes in this post. First, the people who don't believe in truth WEREN'T LEFTISTS! They were on the FASCIST RIGHT! Leftists believe in truth.

This mistake is particularly maddening because this was exactly the debate between the right wing and left wing in the 40s and 50s, with extreme right wingers like Heidegger taking the position "There is no truth!", "Your being is primary", while left-wingers like Carnap or Gödel or Hilbert were busy elucidating what no-nonsense scientific truth meant.

Now on to your confusion about "why prove your own consistency"?

That's not what Hilbert was after. He was after something more interesting and subtle. The theory of Peano Arithmetic is obviously consistent, because we understand the mental model of the integers, and we can verify that the axioms are true there. Computationally. By defining the operations of plus and times on whole numbers. The axioms of Peano Arithmetic say "there are whole numbers", "there is a plus", "there is a times", and "all the whole numbers are found by stepping up from 0 by one unit indefinitely". Those simple axioms allow you to prove a tremendous amount of stuff, a large fraction of elementary mathematics. But you can't even speak about things like "functions from the real numbers to the real numbers", because those concepts aren't in the language in a simple way. BUT, you can code these more advanced concepts up in a simple way, using a formalism like Unicode or Coq to represent mathematical symbols. Everything inside a computer is a gigantic whole number. All the transformations Mathematica does can be described by Peano Arithmetic.
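The remark that everything inside a computer is a gigantic whole number can be made concrete: any formula, written as a string of symbols, can be packed into a single integer and unpacked again. A minimal sketch (the base-257 packing is an arbitrary illustrative choice, not Gödel's original prime-power coding):

```python
def encode(formula: str) -> int:
    """Pack a formula's bytes into one whole number (base-257 digits 1..256)."""
    n = 0
    for byte in formula.encode("utf-8"):
        n = n * 257 + byte + 1  # shift to 1..256 so no digit is ever zero
    return n

def decode(n: int) -> str:
    """Recover the formula from its code number."""
    digits = []
    while n:
        n, d = divmod(n, 257)
        digits.append(d - 1)
    return bytes(reversed(digits)).decode("utf-8")

g = encode("forall x: x + 0 = x")
print(g)          # one (gigantic) whole number
print(decode(g))  # forall x: x + 0 = x
```

Once formulas are numbers, statements about proofs become statements about arithmetic on those numbers, which is exactly what lets Peano Arithmetic talk about provability at all.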

But now we have set theory. In set theory, there are statements like "consider all functions from the real numbers to the set of all functions on the real numbers". These enormous entities are much, much bigger than the set of integers and plus and times, and WE CAN'T verify that the axioms are true by any calculation. We just have to piddle around with intuition.

Worse than that, when people did think they understood things intuitively, Russell found a paradox. "What about the set of all sets that don't contain themselves?" That was a construction in Frege's set theory, and it made the whole thing inconsistent.
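Russell's construction can be checked mechanically: whatever truth value "R contains R" is assigned, it must equal its own negation, and no boolean does. A tiny illustrative check:

```python
# R = the set of all sets that do not contain themselves.
# The membership claim would have to satisfy: (R in R) == not (R in R).
candidates = [value for value in (True, False) if value == (not value)]
print(candidates)  # [] -- no truth value works, so naive comprehension is inconsistent
```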

So Hilbert wanted to make sure that never happened again. He wanted to prove the consistency of SET THEORY. Not of Arithmetic. He wanted to prove the consistency of set theory using the ideas of arithmetic.

(continued) The consistency of Arithmetic is more or less self-evident, from the fact that integers exist and we know how to add and multiply them. The consistency of set theory is far from evident, and if someone discovers another contradiction in set theory tomorrow, like Russell did, people won't be too shocked. Hilbert wanted to eliminate the possibility by using methods everyone could agree on, by using operations on whole numbers.

ReplyDeleteWhen you formulate an axiomatic set theory, proving theorems becomes a calculation on symbols, on a computer. It turns into manipulating whole numbers. What Hilbert wanted was a proof in a theory of arithmetic that this operation will never reach a contradiction. Further, he wanted a proof that every theorem would eventually be provable, with perhaps a proof that takes a long long time.
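The point that proving theorems becomes a calculation on symbols can be shown with a single proof step. A modus ponens rule is just string matching (the arrow syntax here is an arbitrary illustrative choice, not any particular proof assistant's):

```python
def modus_ponens(premise: str, implication: str) -> str:
    """Purely symbolic proof step: from P and 'P -> Q', conclude Q."""
    left, arrow, right = implication.partition(" -> ")
    if arrow and left == premise:
        return right
    raise ValueError("step does not apply")

print(modus_ponens("0 = 0", "0 = 0 -> 0 + 0 = 0"))  # 0 + 0 = 0
```

A whole proof checker is just this kind of matching repeated line by line, which is why provability is a mechanical property of strings (or, after coding, of whole numbers).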

But proving the consistency of set theory is HARDER than proving the consistency of arithmetic, because set theory proves the consistency of Peano Arithmetic! So if you could prove set theory is consistent, you would automatically prove arithmetic is consistent. So the goal of "proving your own consistency" was a WARM UP PROBLEM for the harder problem of proving set theory consistent.

The program went well for theories like Presburger Arithmetic (just addition, no multiplication), and for the theory of the real numbers under addition and multiplication (this is easier than the whole numbers, because there are algorithms to find roots of polynomials).

But then Godel showed that even the warm up problem is impossible! Peano Arithmetic can't prove set theory is consistent, because it can't even prove Peano Arithmetic is consistent!

The method of proof showed that Peano Arithmetic can always be extended by "Peano Arithmetic is consistent" to get a new theory, we'll call that PA+1. Then PA+1 can be extended to PA+2, and so on.

This process of extension can be extended past the integers, to PA+\omega, then to PA+\omega+1, and so on through all computable ordinals.
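The tower of extended theories can be sketched as pure bookkeeping on names (illustrative only; `con` here just names the extended theory, it performs no actual logic):

```python
def con(theory: str) -> str:
    """Name the theory obtained by adding '<theory> is consistent' as an axiom."""
    return f"{theory} + Con({theory})"

t = "PA"
for _ in range(3):
    t = con(t)
print(t)  # the names grow rapidly as the tower climbs
```

The transfinite stages (PA+ω and beyond) require ordinal notations rather than a simple loop counter, which is where the real difficulty lives.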

It is not proved, but it is true, that Hilbert's program becomes correct when formulated on ordinals, not on integers. Hilbert did get it wrong. But he wasn't a simpleton. He didn't expect the proof of consistency of Arithmetic to provide evidence that Arithmetic is consistent. That evidence comes from the computational model of the integers.

By constructing computable ordinals, using various methods which grow in complexity indefinitely, you can approach mathematical truth in the limit. But it is difficult. Truth does exist, but it requires evolving new ordinal notations.