Ultimate basis of arithmetic operations

The joke goes that a mathematician is someone who, when asked “Does one plus one equal two?” replies, “Give me an exact definition of ‘one’, ‘plus’, ‘two’, and ‘equals’, and I’ll tell you”.

Anyway, my question was, how are the arithmetic operations of addition, subtraction, multiplication and division defined in higher mathematics? The reason for my question is I'm trying to figure out if, in some cases, it makes more sense to regard operations involving zero as "no operation", rather than taking zero as an integer value. Certainly you can't divide by zero. Could adding and subtracting zero really be a case of doing nothing?

The concepts you’re referring to are defined axiomatically. A good starting point would be the Peano Axioms, defined for example here. Using these axioms you can define the non-negative integers, and then define addition by saying something like “0+x=x for all x” and (if S(x) denotes the successor of x) “S(x)+y=S(x+y) for all x and y”. Then having defined addition for non-negative integers you can extend it to all integers, then all rational numbers, all real numbers, all complex numbers, all polynomials, and so on and so on.
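To make that recursion concrete, here's a small sketch (mine, not part of the formal axioms) of how those two clauses alone pin down addition. It uses ordinary Python ints to stand in for the naturals, with `n + 1` playing the role of the successor S(n):

```python
def successor(n):
    """Stands in for S(n) on our stand-in naturals."""
    return n + 1

def add(x, y):
    """Addition defined using only the two Peano-style clauses."""
    if x == 0:
        return y                      # 0 + y = y
    return successor(add(x - 1, y))   # S(x) + y = S(x + y)
```

Note that the built-in `+` is only used to implement the successor; every actual addition bottoms out in the `0 + y = y` base case.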

So adding something to 0 is actually the only kind of addition that isn’t defined in terms of other additions. Whether this makes adding zero a “no operation” is probably open to interpretation.

Peano axioms aren’t actually the axioms used to define those operations today; the axioms generally used today define integers (and operations on them) in terms of sets. But the idea is the same. ultrafilter posted a thread going into all of this in detail some time ago, but I forget the title, I’m afraid. He’ll probably be along shortly himself; perhaps he can refresh my memory :slight_smile:

To answer your implied question, yes, 0 is a number. Every construction of the integers contains 0, and most people accept it as a natural number.

Now then…

We are given a natural number 0 and a unary operator ’ that returns another natural number. Usually, we take the natural numbers as sets, with 0 = {} and n’ = n U {n}.

In that case, x + 0 = x and x + y’ = (x + y)’. Similarly, x * 0 = 0 and x * y’ = (x * y) + x. It’s possible, although not easy, to show that these operations have all the familiar properties.
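For the curious, here's a sketch of that set construction runnable in Python, with 0 = {} and n' = n U {n}. I'm using `frozenset` (an assumption of the sketch, since ordinary Python sets can't contain sets), and the fact that the von Neumann natural n has exactly n elements, so `len` converts back to a familiar number:

```python
ZERO = frozenset()           # 0 = {}

def succ(n):
    """n' = n U {n}"""
    return frozenset(n | {n})

def pred(n):
    """For n = m', recover m: it's the element of n with the most members."""
    return max(n, key=len)

def add(x, y):
    if y == ZERO:
        return x                       # x + 0 = x
    return succ(add(x, pred(y)))       # x + y' = (x + y)'

def mul(x, y):
    if y == ZERO:
        return ZERO                    # x * 0 = 0
    return add(mul(x, pred(y)), x)     # x * y' = (x * y) + x
```

For example, with `two = succ(succ(ZERO))` and `three = succ(two)`, `len(add(two, three))` comes out to 5 and `len(mul(two, three))` to 6.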

We can now define the integers from the naturals. We take them in pairs (a, b), and make some definitions. (a, b) = (c, d) iff a + d = b + c–this is an equivalence relation, so we can make equivalence classes here. (a, b) + (c, d) is taken to be (a + c, b + d). (a, b) * (c, d) is taken to be (ac + bd, ad + bc).

We note that (a, b) = (a + n, b + n), so we can regard (n, n) as zero, and (b, a) as the additive inverse of (a, b). So (a, b) - (c, d) = (a, b) + (d, c).

This may be clearer if you keep in mind that (a, b) represents a - b.
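Here's the same pair construction as a sketch in Python, using ordinary non-negative ints as the already-built naturals, with the pair `(a, b)` representing a - b as described above:

```python
def eq(p, q):
    """(a, b) = (c, d) iff a + d = b + c"""
    (a, b), (c, d) = p, q
    return a + d == b + c

def add(p, q):
    """(a, b) + (c, d) = (a + c, b + d)"""
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def mul(p, q):
    """(a, b) * (c, d) = (ac + bd, ad + bc)"""
    (a, b), (c, d) = p, q
    return (a * c + b * d, a * d + b * c)

def neg(p):
    """Additive inverse of (a, b) is (b, a)."""
    a, b = p
    return (b, a)

def sub(p, q):
    """(a, b) - (c, d) = (a, b) + (d, c)"""
    return add(p, neg(q))
```

So `(5, 2)` represents 3 and `(1, 4)` represents -3, and sure enough `add((5, 2), (1, 4))` gives `(6, 6)`, which is equivalent to zero.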

We can now create the rationals as ordered pairs of integers. (a, b) = (c, d) iff ad = bc–this is only an equivalence relation when the second element is restricted to be non-zero, so we disallow pairs of the form (a, 0). (a, b) + (c, d) is taken to be (ad + bc, bd), and (a, b) * (c, d) is taken to be (ac, bd).

We note that (a * n, b * n) = (a, b), so we can take (n, n) as 1, and define (b, a) as the multiplicative inverse of (a, b).

This may be clearer if you keep in mind that (a, b) represents a/b.

From there, we can construct the reals, but that’s a bit more complicated, and I’m not going to discuss it here.

If I understand the history of mathematics, some time ago - maybe the 19th century - you could divide by zero and get positive or negative infinity (depending on the sign of the numerator). Or you could divide by infinity and get zero, likewise. Perhaps I should say, “they thought you could…” or “they chose to define the operation such that…”. Since then, more conservative views have held sway. In any case, IIRC the Pentium FPU has a hardware mode that supports these operations, and has various special floating point representations for infinities, plus and minus zero, and more (don’t remember if this is part of the IEEE spec or just part of the processor design).
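For what it's worth, the special values mentioned above are indeed part of the IEEE 754 floating-point standard, which Python floats follow. A small illustration (noting that Python itself raises `ZeroDivisionError` for a literal `1.0 / 0.0` rather than returning infinity, so this only shows the values that are directly reachable):

```python
import math

pos_inf = math.inf    # IEEE 754 positive infinity
neg_zero = -0.0       # IEEE 754 negative zero

print(1.0 / pos_inf)                  # dividing by infinity gives 0.0
print(neg_zero == 0.0)                # -0.0 compares equal to +0.0 ...
print(math.copysign(1.0, neg_zero))   # ... but its sign bit survives: -1.0
```

Libraries that expose the raw hardware behavior (NumPy, for example) will happily return `inf` or `-inf` for division by zero instead of raising.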

Didn’t Bertrand Russell once prove with about two pages of symbols that 1 + 1 = 2 (though I don’t know which axioms he used)?

Russell and Whitehead prove that 1 + 1 = 2 in the Principia Mathematica:

http://www.cut-the-knot.org/selfreference/russell.shtml

Bertrand Russell co-wrote the Principia Mathematica, which attempted to unify arithmetic with formal logic to give a firm axiomatic basis to what we’d been doing since we’d been counting with notches on sticks. It was the project of the age, and it was seen as a means to give all thought, not just mathematics, a basis in pure logic. Goedel came along and knocked out the underpinnings, but at the time it was seen as a huge step forwards.

I thought it was seven hundred and two pages.

Actually, it was proved on page 362, early in Volume II.

Don’t take my word for it: here’s the page.