How much of modern mathematics is historical coincidence?

And my point was that On the Origin of Species … did not even touch upon penicillin-resistant bacteria or Faroe Isle mice, specifically. That is not to say that those new species could not be explained by the evolutionary science of speciation. I’m assuming that it’s not your position that 20th Century maths needs an explanation outside of cognitive science, given that 19th Century maths does not (even though that explanation might only be speculative and protoscientific as yet). Is that accurate?

My position is that the book doesn’t even pose the question of modern mathematics, and further that most authors I’ve found assume that the discussion is still between Platonism and Formalism.

My position is that the fact that cognitive science may explain the roots of mathematical thought is no reason to assume that it will explain all mathematical thought.

My position is that the book may well give background to the common root of mathematics that I proposed early on in this thread, but that it doesn’t even try to apply that account to anything modern and truly mind-bending.

And to blandly assert that modern mathematics is like 19th-century and earlier mathematics – only more so – is not only lazy and unrigorous, but easily shown to be false.

Mathochist writes:

> And to blandly assert that modern mathematics is like 19th-century and earlier
> mathematics – only more so – is not only lazy and unrigorous, but easily shown
> to be false.

O.K., show it.

Then what would explain those things? Is that not like the Intelligent Design-ist accepting the explanations of evolutionary biology for everything except humans?

I asserted no such thing, except insofar as to point out that they are all mathematics. What specific examples are you suggesting cannot ever be explained by cognitive science? (Penrose gives examples like the Mandelbrot set and GIT.)

Topos theory and huge advancements in model theory, for a start. The current prevailing attitude (even if not consciously formulated) is that one model of set theory obtains, but all other topoi and their models are perfectly valid objects of study. Only the results of that one can apply to the “real world”. For instance, in many “intuitionistic” topoi, all functions are continuous.

A related concept is the current view on structures such as groups. Classically, a group was a set of transformations of some other object (usually geometrical, like the data of an extension field of Q) that was closed under composition and inversion. At the end of the nineteenth century this was just giving way to the “abstract group” as a set with a binary operation. Nowadays, however, a group is really purely structural. There is a “theory category” of groups with objects the natural numbers and morphisms from 0 to 1 (unit), from 1 to 1 (inversion), and from 2 to 1 (composition) such that certain diagrams commute. The normal category of groups is recovered as the category of product-preserving functors from the theory category of groups to the category of sets.
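The “certain diagrams commute” clause just encodes the familiar equations. Here’s a minimal Python sketch (my own toy code, with made-up helper names – nothing categorical about the machinery) that checks those diagram equations pointwise on a finite carrier:

```python
# A group presented "structurally": three operations (unit, inverse,
# multiply) required to satisfy the equations that the commuting
# diagrams of the theory category encode.  A sketch, not a
# category-theory library.

def check_group_axioms(elements, unit, inv, mul):
    """Check the diagram equations pointwise on a finite carrier."""
    for a in elements:
        # unit diagrams: multiplying by the unit does nothing
        assert mul(unit, a) == a and mul(a, unit) == a
        # inverse diagrams
        assert mul(inv(a), a) == unit and mul(a, inv(a)) == unit
        for b in elements:
            for c in elements:
                # associativity diagram
                assert mul(mul(a, b), c) == mul(a, mul(b, c))
    return True

# The integers mod 3, given purely by its operations:
Z3 = range(3)
assert check_group_axioms(Z3, 0, lambda a: (-a) % 3,
                          lambda a, b: (a + b) % 3)
```

The point is that the carrier set is incidental: only the operations and the equations they satisfy enter into it.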

The categorical view has at many points become completely divorced from any direct relation to the “real world”. The motivation for basic mathematics may have been abstracted from physical or evolutionary reality, but somewhere along the line, abstraction became an end in and of itself. Newer mathematics was abstracted from older mathematics, and the current wave is actually to abstract the newest mathematics from older abstractions.

[QUOTE=SentientMeat]
Then what would explain those things? Is that not like the Intelligent Design-ist accepting the explanations of evolutionary biology for everything except humans?

[/QUOTE]

I’m not asserting an exceptionalism, but just that the program of abstraction has gone so far that what’s being worked out now is either completely divorced from physical reality, or so intimately tied to the logical substructure of existence that to explain it at a human-evolutionary level is exceedingly awkward at best.

The topos that obtains as set theory in the “real” world is amazingly close (at our level of perception) to one with a Boolean algebra of subobjects of 1. Thus we developed things like Boolean algebra. However, we also study topoi whose algebras of subobjects of 1 are Heyting, not Boolean. Some of those are applicable to quantum-level descriptions of objects, yes, but the vast majority aren’t even close to obtaining in any physical system we know. We didn’t abstract them (consciously or not) from nature; they’re purely abstractions from previous mathematics.
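The contrast is easy to exhibit concretely. The open sets of any topological space form a Heyting algebra in which “not” is interior-of-complement; on a hand-picked three-point space (a toy example of my own, not from any source) the law of excluded middle visibly fails:

```python
# The open sets of a (tiny) topological space form a Heyting algebra
# where "not U" is the largest open set disjoint from U.  Unlike the
# Boolean algebra of ALL subsets, U or-ed with not-U need not be
# everything.  A toy sketch on a hand-picked 3-point space.

X = frozenset({0, 1, 2})
# A chain topology on X (closed under union and intersection):
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]

def heyting_not(u):
    """Interior of the complement: the largest open disjoint from u."""
    return max((o for o in opens if not (o & u)), key=len)

u = frozenset({0})
print(heyting_not(u))               # the empty set
print(u | heyting_not(u) == X)      # False: excluded middle fails
print(heyting_not(heyting_not(u)))  # the whole space X, not u itself
```

Nothing about that algebra was abstracted from nature; it falls out of the previous layer of mathematics, which was the point.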

Again, I don’t deny that the book’s story may be a very good one for how the whole mathematical enterprise got off the ground, but as the concepts get more abstract – and especially as the mathematicians are more conscious of the abstractions – the explanation loses its applicability.

And again, to try once more to tie back into the OP: cognitive science, evolutionary biology, and physical reality may well explain a common root of all mathematics. But once off the ground, the Martians may have gone in entirely different directions than we did.

Well, I think we’d have to agree to differ there. If we are using the same spatial processing modules to construct these abstractions-upon-abstractions, we are still employing similar cognition no matter how far removed from the “real” world those flights of mathematical fancy are. I’d suggest that it’s the biological computational equivalent of converting a program through all kinds of different programming languages until the final product is simply unrecognisable when compared to the original. That still does not mean that computer science cannot explain that final, multiply-converted result or process.

In fact, whether or not something like Heyting algebra is applicable to “real life” is not particularly relevant here IMO. I consider that maths is a (special) kind of language. The cognitive science of an ape using the sentence “the cat sat on the mat” is not fundamentally different to that of the same ape using the sentence “colorless green ideas sleep furiously”. One sentence clearly has little applicability, but the same cognitive modules in the brain are still being used. I’d suggest that the same is true of the mathematics of any century.

The problem is that’s not what you started by saying. Let’s review, shall we?

The original notion of a natural number line may well have been a line of footsteps, but now there aren’t, strictly speaking, any numbers (elements) required for the natural numbers at all. It’s just a universal object in a topos satisfying a given diagram scheme.
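That universal property is just “definition by iteration” in disguise: for any object with a point x0 and a map s, there is a unique map f with f(0) = x0 and f(n + 1) = s(f(n)). A rough Python sketch (illustrative names of my own, nothing topos-theoretic about the machinery):

```python
# The universal property of a natural-numbers object: a point x0 and
# a "successor" map s determine a unique map f out of N, with
# f(0) = x0 and f(n + 1) = s(f(n)).  "Numbers" never appear in the
# property itself; iteration does.  A sketch, not topos theory.

def from_universal_property(x0, s):
    """The unique map N -> X determined by a point and a successor."""
    def f(n):
        x = x0
        for _ in range(n):
            x = s(x)
        return x
    return f

# Addition recovered from the property alone:
# add(m, n) iterates the successor m times starting at n.
def add(m, n):
    return from_universal_property(n, lambda k: k + 1)(m)

print(add(3, 4))  # 7
```

Arithmetic falls out of the universal property; no line of footsteps, or even a set of “numbers”, is required.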

Yes, but I’m not suggesting that we do 19th century maths by literally walking around the maths department planting physical footsteps, either. What I said there was that our evolution gave us the ability to mathematise, spatially, just as it gave us the ability to use verbal language. My point is that the fact that this ability now allows spatial cognition with little applicability back to those footsteps and whatnot does not mean that modern mathematics is no longer within the realm of cognitive science, any more than my ability to come up with a linguistic sentence which references no “real” thing removes language from the realm of cognitive science.

And if this were a question about a mathematical parallel to Chomsky’s “organ of language”, that would be on point. However, the question posed was to what extent the mathematics we’ve developed (however we are able to “do mathematics”) is forced on us and to what extent it’s path-dependent.

To which I suggested that these other 3-D, entropic beings would develop similar spatial processing modules capable of similar abstraction-upon-abstraction. Whether they’d happened to come up with equivalents of our 19th or 20th century maths by the time we met them would be merely a historical coincidence (or not), IMO.

19th-C, eventually they would have (modulo how much they cared about completeness).

20th-C, it’s already divergent on this planet. For example, Fourier transforms are a part of classical mathematics. Mathematicians of the day even started to have some inkling that it involved functions on a space and some sort of dual space. However, to modern algebraists the space is a group and the dual space is the unitary dual, while to modern analysts the space is a geometric manifold and the dual is the spectrum of the Laplacian on the manifold. The two generalizations are independent threads of theory; they deal with different situations, and they don’t always agree when they do both apply. It’s entirely possible that a culture could have gone down one and ignored the other. Maybe there’s a third road that we haven’t found yet. Maybe there’s a third view that unifies the other two, and another culture would have found that one first.
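To make the algebraists’ reading concrete: on the cyclic group Z/n the “dual space” is the group of characters chi_k(x) = e^(2*pi*i*k*x/n), and the Fourier transform is nothing but the pairing of a function with each character. A from-scratch sketch (my own toy code, no signal-processing library assumed):

```python
# Fourier analysis on the cyclic group Z/n, seen the algebraists'
# way: the "dual space" is the group of characters, and the
# transform is the pairing of a function with each character.
import cmath

def characters(n):
    """The unitary dual of Z/n: one character chi_k per k in Z/n."""
    return [lambda x, k=k: cmath.exp(2j * cmath.pi * k * x / n)
            for k in range(n)]

def fourier(f, n):
    """Pair f : Z/n -> C with each character of the dual group."""
    return [sum(f(x) * chi(x).conjugate() for x in range(n))
            for chi in characters(n)]

# A delta function at 0 pairs to 1 against every character.
delta = lambda x: 1.0 if x == 0 else 0.0
print([round(c.real, 6) for c in fourier(delta, 4)])  # [1.0, 1.0, 1.0, 1.0]
```

The analysts’ generalization replaces Z/n by a manifold and the characters by Laplacian eigenfunctions; nothing in the classical transform forces one reading over the other.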

Again we see that while fin de siècle mathematics may well be “natural” in the sense of the book, once a culture moves into abstraction there are choices to be made, consciously or not.