Like everyone else I read Ayer's Language, Truth and Logic as a teenager and, like many people of a scientific bent, I loved it. The Logical Positivism that it espoused can be summarised as the claim that knowledge is of two types: (1) logical reasoning from axioms, such as used by mathematics; and (2) claims about the universe that can (in principle) be verified empirically. Anything else — such as metaphysics — is literally meaningless.

That is correct up to the word "meaningless". There might be some meaning to metaphysics, but it is not the sort of knowledge that comes with a demonstration that it is true or false.

He also writes "Defending scientism: mathematics is a part of science" on his blog, and tries to defend these views on the Scientia Salon philosophy blog:

I will take one statement as standing proxy for the whole of mathematics (and indeed logic). That statement is: 1 + 1 = 2

I also defend logical positivism, but not this. He has abandoned the "logical" part of logical positivism. Logic and math are forms of knowledge that need no empirical verification, and usually do not get any.

Do you accept that statement as true? If so (and here I presume that you answered yes), then why?

I argue that we accept that statement as true because it works in the real world. All of our experience of the universe tells us that if you have one apple in a bag and add a second apple then you have two apples in the bag. Not three, not six and a half, not zero, but two. 1 + 1 = 2 is thus a very basic empirical fact about the world [3]. ...

I have argued that all human knowledge is empirical and that there are no “other ways of knowing.” Further, our knowledge is a unified and seamless sphere, reflecting (as best we can discern) the unified and seamless nature of reality. ... I thus see no good reason for the claim that mathematics is a fundamentally different domain to science, with a clear epistemological demarcation between them.

I believe that my comments, and other comments, refute his position. Here are comments I left:

Coel, the flaw in your argument is in the triviality of your math examples. "1+1=2" is not much of a theorem, and is more accurately the definition of "2". Try applying your argument to a real theorem, such as the infinitude of primes, as someone suggested.
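For reference, here is a minimal sketch of Euclid's argument for the infinitude of primes (my own illustration, not from the thread): multiply any finite list of primes together and add 1, and the result has a prime factor that cannot be on the list, since each listed prime leaves remainder 1.

```python
def smallest_prime_factor(n):
    """Return the smallest prime factor of n (n >= 2) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def new_prime_outside(primes):
    """Given a finite list of primes, produce a prime not on the list."""
    N = 1
    for p in primes:
        N *= p
    N += 1  # N mod p == 1 for every p on the list
    return smallest_prime_factor(N)

p = new_prime_outside([2, 3, 5, 7, 11, 13])  # 30031 = 59 * 509
assert p not in [2, 3, 5, 7, 11, 13]
```

Note that no amount of apple-counting establishes this; the conclusion covers all of the infinitely many cases at once, which is what a proof is for.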

There certainly is a sharp qualitative difference between the work of Riemann and Einstein. The mathematical theory of general relativity was worked out by Minkowski, Grossmann, Levi-Civita, and Hilbert -- all mathematicians. Einstein did not prove any theorems and rarely even made any mathematically precise statements.

Your comments about Gödel's theorem are like saying that the irrationality of the square root of 2 shattered hopes for geometry, or that comets shattered hopes for astronomy. And it surely does not help your argument, unless you can explain how the theorem can be empirically understood or validated.

Your lesson from this is that "scientific results are always provisional". Maybe so, but mathematical results are not. Gödel's theorem is not provisional.

You can, of course, define "science" any way you please, but you have failed to give a definition that includes mathematics. To you, science is empirical and provisional, but you do not give a single mathematical result with these properties. Do you really want to argue that "1+1=2" is a provisional result subject to empirical acceptance or rejection? Will you please tell us how this equation might be rejected?

You deny a "clear epistemological demarcation", but you do not give an example on the boundary of math and science. Your closest example is string theory, but you must realize that most of that subject is viewed by outsiders as neither science nor mathematics.

Coel, you say that it is "epistemologically identical", except that one uses empiricism and plausibility arguments and the other uses axioms and logic. In other words, not similar at all.

Part of the problem here is that what mathematicians mean by math is quite a bit different from what most scientists mean. I have heard science and engineering professors tell their students not to take the math classes intended for math majors, because those classes have proofs in them. The professors act as if a proof is some sort of mysticism or voodoo with no applicability. Most of them do not understand what a proof is.

Coel argues that knowledge is science, and science is provisional, but that is just not true about mathematical knowledge. Mathematical truths are not provisional or subject to any empirical tests. He suggests that "1+1=2" can be tested by looking to see if alternate definitions can be used to predict eclipses. But there are lots of alternative number systems that are completely mathematically valid, even if they are not used to predict eclipses.
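As an illustration of such an alternative system (my own example, not from the discussion): in the two-element field GF(2), 1 + 1 = 0, and every field axiom still holds. The system is perfectly valid mathematics; it is simply not the one we use to count apples or predict eclipses.

```python
class GF2:
    """The two-element field: integers mod 2 with field arithmetic."""
    def __init__(self, v):
        self.v = v % 2
    def __add__(self, other):
        return GF2(self.v + other.v)
    def __mul__(self, other):
        return GF2(self.v * other.v)
    def __eq__(self, other):
        return self.v == other.v
    def __repr__(self):
        return f"GF2({self.v})"

one = GF2(1)
assert one + one == GF2(0)  # here 1 + 1 = 0, with full consistency
```

Whether "1+1=2" holds depends on which system the symbols refer to, which is a matter of definition, not of experiment.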

I could test "1+1=2" by laying two 1-foot pieces of string end to end and measuring the total length. If I do, I am likely to get 2.01 or something else not exactly 2. The mathematician says that 1+1 is exactly 2. So what have you tested? You certainly have not validated the mathematical truth that 1+1 is exactly 2. You have an empirical result about the usefulness of the equation, and that's all.
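A minimal simulation of that point (my own sketch; the Gaussian noise model and its sigma are assumptions for illustration): measured sums wobble around 2, while exact rational arithmetic gives exactly 2 by definition.

```python
import random
from fractions import Fraction

random.seed(0)  # deterministic run for the illustration

def measure(true_length_ft, sigma=0.005):
    """Simulate measuring a length with instrument noise (assumed model)."""
    return true_length_ft + random.gauss(0, sigma)

empirical = measure(1.0) + measure(1.0)  # something near, but not exactly, 2
exact = Fraction(1) + Fraction(1)        # exactly 2, by definition

assert exact == 2
assert empirical != 2  # the measurement never lands exactly on 2
```

The experiment tests the measuring apparatus and the applicability of the model, not the equation.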

So when Coel says that math is epistemologically identical to science, he is not talking about how mathematicians do math.

SciSal is correct that most math has nothing to do with modeling the real world.

String theory is an odd beast. There are some mathematicians who prove theorems about string models, and physicists who look for empirical tests. But the vast majority of string theorists are not concerned with either of these pursuits. They are more like people playing Dungeons & Dragons in their own imaginary universe.

Coel, you say that math axioms are codified regularities of nature. The most common axiom system for math is ZFC (Zermelo-Fraenkel set theory with the axiom of choice). Can you explain how those axioms relate to nature?
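To make the question concrete, here is the Axiom of Infinity in one standard first-order formulation. It is hard to read this as a codified regularity of nature:

```latex
% Axiom of Infinity: there is a set I containing the empty set and
% closed under the successor operation x -> x \cup \{x\}.
\exists I \,\bigl(\varnothing \in I \;\wedge\; \forall x\,(x \in I \rightarrow x \cup \{x\} \in I)\bigr)
```

No observation of apples or eclipses bears on whether an infinite set exists; the axiom is adopted because of the mathematics it enables.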

Coel, you repeatedly deny any distinction between a definition, a theorem, and an equation that empirically seems approximately valid. So I would lump you in with those other science and engineering professors who do not recognize the value of a proof.

"The professors act as if a proof is some sort of mysticism or voodoo with no applicability."

In the corporate world, they are probably right.

"They are more like people playing Dungeons & Dragons in their own imaginary universe"

Now they claim they are ready to conquer condensed matter and overthrow the incompetent incumbents.

>> I could test "1+1=2" by laying two 1-foot pieces of string end to end and measuring the total length. If I do, I am likely to get 2.01 or something else not exactly 2.

That's funny. I imagined you laying out pieces of string on the floor, quite unable to decide. 2, or 2.01?

Concerning mathematics, arguments about Platonism are red herrings, because we at least strive for consistency in mathematical systems. They are only a map, and the philosophy is quite pointless blather. However, the one aspect in which the methods of scientific discovery and theoretical deduction concur is the notion of burden of evidence. Popper never realized that he provided no way of choosing what to falsify; he just starts the history where it is convenient. We cannot falsify every claim, and trying would force us to entertain an endless number of mutually exclusive theories at the same time. The philosopher David Stove does a nice job refuting the irrationalism of Karl Popper, Thomas Kuhn, Imre Lakatos, and Paul Feyerabend.

One of the worst abuses of mathematical abstraction results from infinity, which is self-contradictory, and a great deal of impredicative mathematics has been built on top of it. The problem is that it shifts the burden of evidence, asking people to disprove it. A sensible notion of finite might have its complement as zero or empty, for instance. We all know that when you get a repeating decimal you can get a terminating one in a different base, and the square root of two is not a problem if your protractor measures spread instead of a ridiculous arc length that requires transcendental functions. Intension is different from extension. Irrational numbers can never be expressed in extension; they simply don't exist except as algorithms, making them pseudo-numbers. Dedekind cuts and Cauchy sequences are riddled with logical contradictions. The proofs simply fail.
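One checkable claim in the comment above is the one about bases: a rational with a repeating expansion in one base can terminate in another. A fraction terminates in base b exactly when every prime factor of its reduced denominator divides b. A small sketch of that test (my own, for illustration):

```python
import math
from fractions import Fraction

def terminates_in_base(q, base):
    """True iff q has a terminating expansion in the given base, i.e.
    every prime factor of its reduced denominator divides the base."""
    d = Fraction(q).denominator
    while d > 1:
        g = math.gcd(d, base)
        if g == 1:
            return False  # d has a prime factor not dividing the base
        while d % g == 0:
            d //= g
    return True

assert not terminates_in_base(Fraction(1, 3), 10)  # 0.333... repeats
assert terminates_in_base(Fraction(1, 3), 3)       # exactly 0.1 in base 3
```

This is a fact about rationals only; it does nothing to rescue the commenter's claims about irrationals, which have non-terminating expansions in every integer base.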

Cantor confused multiple generations of mathematicians. Now we have the incomprehensible "continuum," "infinities" and "real" numbers. Cantor can shove his "transfinite numbers," Turing can shove his "infinite tape" and Gödel can shove his "for all." It's all meaningless. The law of the excluded middle is nonsense. Or did you stop beating your wife? Colorless green ideas sleep furiously (grammatical but meaningless). What the hell is so profound about finding out that a system could not prove its own consistency when you suspected it to begin with? Why assume infinite runnable programs but not infinite running computers? As for P vs NP, finding a needle in a haystack is hard work, but it certainly isn't puzzling that it isn't hard to check that it's a needle once you found it. It's axiomatic! Zeilberger: "The vast majority of Boolean functions require exponentially many gates (as was first noted by Claude Shannon), the probability that any one specific function (e.g. SAT) will require only polynomially many gates is miniscule."

Mathematicians love to exaggerate the post-modern results of non-Euclidean geometry, but Poincaré knew they were merely matters of convention. Mathematicians try to go down one-way streets with set theory and wonder why they lose information. Set theory was not the source of mathematics but an a posteriori sideshow. A proper theory of algorithms is a better foundation than most set theory because it doesn't do math backwards and puzzle, like a behaviorist, at the asymmetries. You can't tell me with absolute certainty what is in a black box by only studying its output. The pathologies of Bourbaki are apparent here. This idea also extends to their absurd notions of proof, where mathematicians refuse to understand the existence of irreducible complexity and think that all proofs can be "from the book." Most proofs are probably longer than any human being could survey, but what is wrong with uncertainty on the order of 10^-100? Such is life.

Mathematicians hate computers and the automation of their boring jobs, and try to come up with math that is supposedly above procedure. Physicists regress into primary narcissism with escapist science fiction and untestable theories. Neither wants to face the limits to knowledge or consistency imposed by logical thinking or experiment.

(Part 2)

Let me also mention a parallel line of reasoning to your refutation of Einsteinian armchair theorizing. I often see the remark that it was modern mathematics that created digital computers. Not so! The papers of Turing and Gödel had little to do with modern computers; that is the revisionism of sloppy historians. Universal computation is a theoretical construct that gives few hints for building practical computers, and new "biological" chips are making it even more irrelevant. For instance, the Z3 wasn't even proved to be Turing-complete until 1998. The real contributions came from the fields of logic and engineering. Atanasoff, Berry, Eckert, and Mauchly didn't know much about Turing. Claude Shannon was an MIT electrical engineer who looked to apply the 19th-century logic of Boole.

"The idea that von Neumann was some kind of torch carrier who convinced the world that computers were important just does not wash with the facts. It does, apparently, sell books." (Bill Mauchly)

The Z3 and Colossus weren't even generally known to exist until the 1970s. The ABC was dismantled by Iowa State, after Atanasoff went to do physics research for the U.S. Navy. The ENIAC was the real game changer.

The theorist's phony reading of history breaks down over and over again.