Mathematics doesn't provide universal truth independent of human perspective

Mathematicians have always been boggled by math’s effectiveness in describing the physical universe.

Math itself is a universal truth, but I’m not sure how meaningful it is if it can’t be used for anything. The bizarre thing is that it is way more useful than it seemingly has any right to be.

The Unreasonable Effectiveness of Mathematics in the Natural Sciences

Still, the math only describes the universe. It is not the universe itself. It’s a way for humans to understand the universal truth, but it is not the universal truth itself. Math is an awesomely useful extended metaphor, but in and of itself it is meaningless.

Then you’re saying that the definition of a star is a human construct. You’re not saying that the quantity of the stars is a human construct.

Yeah, there’s nothing here unless you’re really into sophistry for its own sake. Math is the language that has the most bullshit stripped out of it, and thus is the best language to use to speak about the world, but it is still a language and not magic. Blaming math for the fact that it doesn’t have magical powers isn’t really a coherent argument. Also, did anyone read that article?

[QUOTE=oh wow]
Calvin’s father is a lawyer and so he never studied the special branch of mathematics called “Set Theory” - which can go by other names like “Discrete Math” or “Number Theory”.
[/QUOTE]
Yeah she sounds like she knows what she’s talking about

The same place the square root of -1 does.

I’m saying that “the quantity of stars” is inextricably bound up with “the definition of stars”. Things don’t become countable until we adopt criteria for distinguishing one from another.

We’re accustomed to thinking of the universe as made up of things and those things having a number. But organizing reality into things is an artifact of human cognition, not a fundamental property of reality.

So the successor function exists as a mathematical abstraction, and not as a fundamental part of the universe?

Or, in other words, it’s a technique we use for modeling/describing reality, not a feature of reality?

Anything based on unprovable axioms is not a universal truth. Saying “2 + 2 = 4” relies on some axioms; the statement is undeniably true for those axioms but that’s different from universal truth.

i isn’t just a curiosity - it is very important, IIRC, in the definition of things such as sin and cosine. It gets used all the time, mainly in exponents from what I learned, in describing very real things. Does that make it a part of the universe or not? That’s a philosophical, not a mathematical, question. The reason I brought it up is that while we have a nice grasp of the reality of the successor function, we don’t for i, but they are in some ways very similar. Not exactly - succ is axiomatic, while i is actually a definition, but close.
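
Not from the article, just a quick numerical sketch of the connection being described (in Python with cmath; the value of x is an arbitrary one I picked): e raised to an imaginary power reproduces cos and sin, and sin can be written entirely in terms of those exponentials.

[code]
# Sketch: i showing up in the definition of sin/cos via Euler's formula,
# e^(ix) = cos(x) + i*sin(x). Checked numerically with Python's cmath.
import cmath
import math

x = 0.7  # any real number will do

euler = cmath.exp(1j * x)                        # e^(ix)
print(euler, complex(math.cos(x), math.sin(x)))  # same complex number

# sin written purely in terms of complex exponentials:
sin_from_exp = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / (2j)
print(sin_from_exp.real, math.sin(x))            # agree up to floating-point error
[/code]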

2 + 2 = 4 is not a universal truth - that 2 + 2 = 4 given the axiomatic definition of 1 and succ is. You can’t remove the axioms from the system and have anything meaningful. The universe suggests what some good and useful axioms should be, but as we see with non-Euclidean geometry, sometimes using the obvious axioms doesn’t model the oddest bits of reality as well as it might.
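
To make the “given the axioms” point concrete, here’s a toy encoding of zero and succ in Python (entirely my own illustration, nothing formal): 2 + 2 = 4 pops out, but only once the definitions are in place. Strip out zero and succ and there’s no “2 + 2 = 4” left to state.

[code]
# Toy Peano-style naturals (my own encoding, purely for illustration).
# zero is an empty tuple; succ wraps a number in one more tuple,
# so "how many layers of wrapping" plays the role of the number.
zero = ()

def succ(n):
    return (n,)          # the successor of n

def add(a, b):
    # addition defined only from the axioms-as-rules:
    #   a + 0 = a
    #   a + succ(b) = succ(a + b)
    return a if b == zero else succ(add(a, b[0]))

two  = succ(succ(zero))
four = succ(succ(succ(succ(zero))))

print(add(two, two) == four)   # True -- but only *given* these definitions
[/code]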

That’s ridiculous. Are you saying we can’t create a definition for what a star is unless we know how many stars there are? Obviously that’s not the case. So quantity is not part of identity.

No, I’m saying the opposite. We can’t count stars without defining what a star is.

“Star” is a conceptual category we’ve constructed for our human convenience. The number of stars we count is determined by how we’ve constructed that conceptual category. There is no “number of stars” that exists separately from our human conception of what a star is.

There are more natural numbers than there are conceptual categories in the universe.

Exactly so. What was taken as “universal truth” 200 years ago (Euclidean geometry) turns out to be a not-so-accurate human-constructed model of the universe. In the end that’s all that mathematics is. (I have a math degree so it’s not as if I dislike math.)

Where is the bong hit smiley?

Love you too :slight_smile:

I’m trying to program an iPod to prove the first Gödel incompleteness theorem… Any help? :slight_smile:

In the end, math is just:

[ol]
[li]Make up a couple of things[/li]
[li]Make up rules for the things to follow[/li]
[li]See where those rules take you[/li]
[/ol]

The universality of math (well, one kind of universality) lies in the fact that, given the same things and the same rules, everybody gets to the same places. There’s no guesswork, no human input, no intuition needed, at least in principle. Of course, if you use different rules or things, you’ll end up in different places; but that’s empty, it’s nothing else but saying that if you don’t agree, you disagree.
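
A throwaway sketch of those three steps in Python - the “things” (strings of A and B) and the rules below are invented purely for illustration. Anyone who runs the same rules from the same starting string gets exactly the same set of reachable strings, which is the “everybody gets to the same places” point.

[code]
# A made-up toy system: "things" are strings over {A, B}, and the rules below
# are invented for illustration only. We mechanically explore everything
# reachable from the starting string -- no guesswork, no intuition.
from collections import deque

START = "A"
RULES = [
    ("A", "AB"),   # anywhere you see A, you may replace it with AB
    ("B", "BA"),   # anywhere you see B, you may replace it with BA
]

def neighbours(s):
    for old, new in RULES:
        for i in range(len(s)):
            if s.startswith(old, i):
                yield s[:i] + new + s[i + len(old):]

def reachable(limit=6):
    # breadth-first search over everything the rules can produce (up to a length cap)
    seen, queue = {START}, deque([START])
    while queue:
        s = queue.popleft()
        for t in neighbours(s):
            if len(t) <= limit and t not in seen:
                seen.add(t)
                queue.append(t)
    return sorted(seen)

print(reachable())   # the same list for anyone who runs the same rules
[/code]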

The point is that if you have a set of axioms A that prove some theorem T, then the proposition that A proves T (A -> T) is provable from no axioms at all; it’s as universal a truth as you’ll get anywhere.
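
That’s essentially the deduction theorem; stated in symbols, with a small concrete instance of my own choosing:

[code]
% Deduction theorem (the "provable from no axioms" step), in LaTeX:
%   if  A \vdash T,  then  \vdash (\bigwedge A) \to T.
% A concrete instance (axioms and letters chosen by me):
%   from the axioms {p, p \to q} one proves q by modus ponens, and with
%   no axioms at all one proves the tautology (p \land (p \to q)) \to q.
\text{If } A \vdash T, \text{ then } \vdash \Big(\bigwedge A\Big) \to T.
[/code]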

However, there is another kind of universality to math that’s somewhat more subtle: if the system of things and rules I use is sufficiently complex, then, for any system of things and rules you might use, I can encode your things and rules in mine, and use my system to prove everything yours can; this is, for instance, the basis of Gödel numbering, which encodes a mathematical system of arbitrary symbols and their manipulation-rules into the natural numbers. The requisite complexity is reached precisely if my system can be used to encode a Turing machine. (Though ‘complexity’ is really a bit of a red herring, or at least used in a somewhat unfamiliar way, here: very simple systems can be Turing complete (cf. Rule 110), while it is possible to design systems of near-impenetrable intricacy that fail to be.)
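
A minimal sketch of the Gödel-numbering idea in Python - the symbol table and the formula are made up by me, and Gödel’s actual scheme uses a richer alphabet, but the mechanism (the i-th symbol’s code stored as the exponent of the i-th prime) is the same:

[code]
# Toy Goedel numbering (my own minimal illustration, not Goedel's exact scheme):
# a string of symbols becomes one natural number by storing the i-th symbol's
# code as the exponent of the i-th prime.

SYMBOLS = {"0": 1, "s": 2, "=": 3, "+": 4}

def primes():
    # simple trial-division prime generator, fine for a toy example
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    g = 1
    for p, ch in zip(primes(), formula):
        g *= p ** SYMBOLS[ch]
    return g

# "s0 + s0 = ss0" is one way of writing 1 + 1 = 2 in successor notation:
print(godel_number("s0+s0=ss0"))   # a single (large-ish) natural number
[/code]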

So all those ‘sufficiently complex’ systems are in a well-defined sense ‘the same’: for any two systems A and B, both of which are Turing complete, A can encode B and B can encode A (I sometimes prefer to say ‘emulate’ instead of ‘encode’, since it’s basically the same: two different computational architectures – say, Mac and PC – can be made to emulate one another for much the same reason two mathematical systems can).

This also gives a nice answer to Wigner’s ‘unreasonable effectiveness’: in the end, the world around us is only composed out of certain things that follow certain rules, as well; that one thus can find embeddings of these rules and things within mathematics is perfectly unsurprising (well, at least as long as reality isn’t hypercomputational, but that’s not something to go into right now); at least as unsurprising as it is that one can create computer simulations of aspects of the real world (to my knowledge, nobody has ever marvelled at the unreasonable effectiveness of computers in modelling – that’s because the notion of universality, or Turing completeness, was built into the field from the beginning).

I’m not getting your point. Could you elaborate?

Did Euclid develop his geometry to be a model of the universe (the way Newton developed his physics) or as a mathematical exercise? As math, it is just as true today as ever. As a model of the universe, it has been shown to be imperfect. Anything dealing with the real universe is possibly imperfect; anything in its own plane, as math is, might not be. (Excluding human errors in proofs.)

6÷2(1+2)=

1 or 9?

If you asked 1 million people it would be almost 50/50.
Doesn’t seem too universal to me.
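
For what it’s worth, the split is about how to parse the notation, not about arithmetic; written unambiguously, the two camps are evaluating two different expressions (a quick Python check):

[code]
# The disagreement is over how to parse the notation, not over arithmetic.
# Written out unambiguously, the two readings are different expressions:
reading_9 = (6 / 2) * (1 + 2)   # read 6÷2(1+2) as (6÷2)×(1+2)
reading_1 = 6 / (2 * (1 + 2))   # read 2(1+2) as one grouped factor

print(reading_9)  # 9.0
print(reading_1)  # 1.0
[/code]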