Are there any mathematical functions beside +, -, *, / ?

Chronos: Link? I’ve always been rather interested in logic operators, having gotten bored a while ago and downloaded a list of all sixteen…

Yeah, I’d also like to know about Peano’s road and Goedel’s wall. Please elaborate!

As far as operators go for the OP, what about derivatives? Or integrals?

The former is used to find the slope of a function and the latter to find the area under a curve; neither can be defined with +, -, *, or /.

Hmmm. Here’s an informal argument.

First, we need at least a binary (two-argument (gotta keep saying that so you don’t think “two-valued”)) operation, since otherwise we couldn’t combine input values. There are 16 possible binary boolean functions.
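The count of 16 falls right out of the four rows of the truth table; a quick Python sketch (my language choice, not anything from the thread) makes it concrete:

```python
from itertools import product

# A binary boolean function is determined by its outputs on the four
# input rows (0,0), (0,1), (1,0), (1,1), so there are 2**4 of them.
tables = list(product((0, 1), repeat=4))
assert len(tables) == 16
```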

Second, we need to be able to negate at least one of our inputs, otherwise we couldn’t even implement NOT. That is, we cannot have F(0,0)=0 or F(1,1)=1, because then we wouldn’t be able to implement NOT.

That means we have the partial truth table:


A  B | F
0  0 | 1
0  1 | X
1  0 | X
1  1 | 0

Which gives us four possible functions:



A  B | F0  F1  F2  F3
0  0 | 1   1   1   1
0  1 | 0   0   1   1
1  0 | 0   1   0   1
1  1 | 0   0   0   0

Of these, F1(A,B) is just not(B), and F2(A,B) is just not(A). Since NOT by itself is not functionally complete, neither F1 nor F2 is functionally complete.

F0 (NOR) and F3 (NAND) are candidates. It remains to demonstrate that both NOR and NAND can be used to implement NOT, OR, and AND (which we know are functionally complete because every boolean function can be written in disjunctive normal (sum of products) form and conjunctive normal (product of sums) form).
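Here’s that demonstration as a Python sketch: the standard NAND-only and NOR-only constructions of NOT, AND, and OR, checked against the full truth table.

```python
def nand(a, b): return 1 - (a & b)
def nor(a, b):  return 1 - (a | b)

# NOT, AND, OR built from NAND alone:
def not_from_nand(a):    return nand(a, a)
def and_from_nand(a, b): return nand(nand(a, b), nand(a, b))
def or_from_nand(a, b):  return nand(nand(a, a), nand(b, b))

# ...and, dually, from NOR alone:
def not_from_nor(a):    return nor(a, a)
def or_from_nor(a, b):  return nor(nor(a, b), nor(a, b))
def and_from_nor(a, b): return nor(nor(a, a), nor(b, b))

# Check every row of the truth table.
for a in (0, 1):
    for b in (0, 1):
        assert not_from_nand(a) == 1 - a == not_from_nor(a)
        assert and_from_nand(a, b) == (a & b) == and_from_nor(a, b)
        assert or_from_nand(a, b) == (a | b) == or_from_nor(a, b)
```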

kg m²/s²

PS: Did you make up names for all 16 binary boolean functions?

Hmm, I’m surprised no one else brought this up, but integration and differentiation are operations as well, and I don’t believe they can be reduced to iterations of simple addition.

Peano wanted to reduce all of arithmetic to a logical theory. So he came up with five axioms that were supposed to encompass everything about the natural numbers (which can be used to define the integers, rationals, reals, and complexes). Other people refined the axioms, and it was thought that strict adherence to formal logic would get rid of all the problems that were going on at the time (more on this later). But Gödel managed to show that any formal theory of arithmetic which was “strong” enough was either inconsistent or contained true statements that it could not prove.

Those are definitely different. Neither can generally be computed by a finite sequence of arithmetic operations. Actually, there are many other operators; there’s a whole branch of analysis called “operator theory”. I’ve never studied it, but that’d be a good Google search for someone who’s interested.

Well, addition is really only a shortcut for counting. In a technical sense, all operators are just shortcuts for either counting or counting backwards.

–Cliffy

I’m not up to integrals yet, but I do know that derivatives have an equation:
(f(a + a’) - f(a)) / a’

Where a is a point on the x axis, a’ is a small increment along it, and f is a function. All the other laws of derivatives are based on this equation (in some way, AFAIK), and are heuristics used to save time.

You left out the fact that you have to take the limit as a’ goes to 0. Limits, in general, cannot be expressed in terms of basic arithmetic operators.
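To illustrate the point, here’s a small Python sketch of the difference quotient, with a small but nonzero h standing in for the limit (which is exactly what you can’t finish with finitely many arithmetic operations):

```python
def derivative(f, a, h=1e-6):
    # Finite-difference stand-in for the limit of (f(a + h) - f(a)) / h
    # as h -> 0; any fixed h only approximates it.
    return (f(a + h) - f(a)) / h

# Derivative of x**2 at x = 3 is exactly 6; this only gets close.
approx = derivative(lambda x: x ** 2, 3.0)
```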

Shoot, you’re right, that is the long way to find a derivative!

Well, an integral is a backwards derivative. There are two kinds, definite and indefinite.

When solving an indefinite integral you have to add a constant C at the end of the solution. That C prevents you from algebraically manipulating the long derivative formula backwards. So maybe that one can’t be expressed with the basic operators.

Just thought of this: partial derivatives. Those don’t follow the formula, do they? You have to deliberately treat the other variables as constants, so their derivative is 0, to solve them.

If we aren’t changing the source data, through omission or addition, then partial derivatives can’t be easily expressed, right?

Newton Meter: Are you speaking to me? I didn’t! I found a site with them, one involving mainframes. If you are, let me know, and I’ll drop a link. Believe me, I would have chosen nicer names. =)

In grade school in the 1960s, I learned a longhand method of calculating square roots. It’s described here: http://mathforum.org/library/drmath/view/52610.html

Good point rowrrbazzle. That’s a fine method. You could also use Newton’s Method. But either way, I think I should point out that it requires an infinite number of arithmetic operations actually to do the complete operation of taking the square root. There’s a qualitative difference between this and something you can reduce to a finite number of operations.
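For the curious, here’s a Python sketch of Newton’s method for square roots; note that it has to cut off at a chosen tolerance, since the exact answer would take infinitely many steps:

```python
def newton_sqrt(n, tol=1e-12):
    # Newton's iteration for x**2 = n: repeatedly replace x with
    # (x + n/x) / 2. Each step uses only +, /, but we must stop at a
    # tolerance rather than ever reaching the exact (irrational) root.
    x = float(n) if n > 1 else 1.0
    while abs(x * x - n) > tol:
        x = (x + n / x) / 2
    return x
```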


Not sure exactly what you mean here, but you can use one equation to get partial derivatives and single-variable derivatives. So they’re not really different.

For the curious: Let e[sub]j[/sub] be the vector whose j[sup]th[/sup] entry is 1, and whose other entries are 0. Then the partial derivative of f wrt x[sub]j[/sub] is limit((f(x + he[sub]j[/sub]) - f(x))/h, h -> 0). If x is one-dimensional, j = 1, and you get the standard formula from calc I.
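As a Python sketch of that same formula (with a small fixed h standing in for the limit, and j counted from 0 as lists do):

```python
def partial_derivative(f, x, j, h=1e-6):
    # Finite-difference stand-in for limit((f(x + h*e_j) - f(x)) / h, h -> 0):
    # bump only the j-th coordinate, which is what adding h*e_j does.
    shifted = list(x)
    shifted[j] += h
    return (f(shifted) - f(x)) / h

# f(x0, x1) = x0**2 * x1; the partial wrt x0 at (3, 2) is 2*3*2 = 12.
approx = partial_derivative(lambda v: v[0] ** 2 * v[1], [3.0, 2.0], 0)
```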

Achernar, Newton’s method shouldn’t yield an infinite number of steps for any rational root, provided the “seed” value is chosen intelligently. It would only yield an infinite number of steps for irrational numbers, which is kind of obvious, no? This is going off memory; I haven’t used Newton’s method since 11th grade, and even then I simply programmed it into my calculator (a nice programmable one; I had a program for any polynomial up to fifth degree!).

erislover: You are right, but most square roots are irrational. An integer only has a rational square root if it is a perfect square (so its square root is an integer), and a rational number in lowest terms only has a rational square root if it is the quotient of two integer perfect squares.

According to MathWorld, the elementary operations are +, -, *, /, and integer or rational root extraction.

But to define * in terms of + (or + in terms of succ), you need to use the non-elementary operation of iteration or recursion: still two operations.
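That recursion is easy to make explicit; here’s a Python sketch of * defined purely in terms of + (my own illustration, not anything from the thread):

```python
def mul(a, b):
    # Multiplication as repeated addition, with the recursion spelled out:
    #   a * 0 = 0
    #   a * b = a + a * (b - 1)   for b > 0
    return 0 if b == 0 else a + mul(a, b - 1)
```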

Well, since nobody’s actually listed them, here is a summary of Peano’s work (from the 19th century):

First, you need to define the natural numbers. This requires five postulates:

  1. Zero (0) is a natural number.
  2. Every natural number has a successor.
  3. Different natural numbers have different successors.
  4. Zero is not the successor of any natural number.
  5. If a statement is true of zero, and it can be shown that whenever the statement is true of a natural number, then the statement is true of the natural number’s successor, then the statement is true of every natural number.

This suffices to fully define the ‘counting numbers’: 0,1,2 … Now we just need to define addition, to get us started on math:

  1. For any natural number x, x + 0 = x.
  2. For any natural numbers a and b, a + successor(b) = successor(a + b).

Congratulations! You now have all of the tools you need to derive every other mathematical operator and branch of mathematics in terms of 0, the successor function, and mere logic! Here’s a good warmup exercise: prove that 1 + 1 = 2. :smiley:
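The warmup can even be sketched in Python, with nested tuples standing in for the abstract successor structure (my own encoding, nothing to do with Peano’s notation):

```python
ZERO = ()

def succ(n):
    # The successor is just "one more layer of wrapping".
    return (n,)

def add(a, b):
    # The two addition axioms, verbatim:
    #   a + 0 = a
    #   a + succ(b) = succ(a + b)
    return a if b == ZERO else succ(add(a, b[0]))

one = succ(ZERO)
two = succ(one)
assert add(one, one) == two  # 1 + 1 = 2
```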

Jabba: But my point is that even if we consider “[root]” as an operator, we haven’t eliminated infinite operations. So defining roots in terms of an algorithm that uses only the four elementary operators isn’t actually changing anything.

Allen: I don’t think so. 2*3 asks “what is the result of 2+2+2?” Unless you feel that “2+2+2” involves iteration. In which case iteration doesn’t come from multiplication, but from a certain method of expressing addition.

[someone correct me if I am using the term “closed” incorrectly] The problems come in when we discuss operators that aren’t closed under number sets. Addition alone is closed under the whole numbers, in that every answer one gets from using “+” on whole numbers is a whole number. Defining subtraction as addition of an opposite allows addition (and hence subtraction) to be closed under the integers (but not the whole numbers). If we define multiplication here as repeated addition we are still fine. The trouble comes in when we want to define division… we get “answers” that aren’t integers. (The modulus operator still works fine, though, since it outputs zero or the smallest positive non-zero remainder of repeated subtraction.)

Division is closed under the rationals, though. Any finite number of operations on rationals will always yield rationals (provided we don’t divide by zero, which will immediately cause the reader to notice that division by zero is nonsense when we think in terms of repeated subtraction: 4/0 asks, “How many times can we subtract 0 from 4?” which, of course, is no number at all).

Any number at all. :smiley:
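The repeated-subtraction view of division and modulus from a couple of posts up can be sketched in Python (note the guard: b = 0 really would subtract forever):

```python
def div_mod(a, b):
    # Division as repeated subtraction: count how many times b can be
    # taken out of a; what's left is the modulus. Assumes a >= 0, b > 0;
    # with b == 0 the loop would never terminate, which is exactly the
    # "how many times can we subtract 0 from 4?" problem.
    q = 0
    while a >= b:
        a -= b
        q += 1
    return q, a

# 7 / 3: subtract 3 twice, remainder 1.
quotient, remainder = div_mod(7, 3)
```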