Is there logic to the order of operations?

Is the order of operations (Please Excuse My Dear Aunt Sally) derived from any set of axioms? I get why parentheses come first, since they establish groupings. I also get why there has to be an agreed-upon order. But beyond that convention, is there a mathematical reason why multiplication should be done before division, etc.?
Is this similar to asking whether the alphabet is organized according to the song or vice versa?

Multiplication and division can be done in either order (if you read division as multiplying by the reciprocal), as long as they’re done after exponents and before addition and subtraction. Similarly, addition and subtraction can be done in either order (reading subtraction as adding a negative), as long as you save them for last.

The order is consistent with the notation for polynomials. That is, something like 3x^2+4x+6 would be a lot more cumbersome to write if we did addition and subtraction first. Almost all functions, and therefore equations, that model the natural world can be expressed as (or closely approximated by) polynomials. Therefore, it makes sense to use a system that makes it easy for us to write said polynomials as concisely as possible.

ETA: It’s not at all analogous to the alphabet.

The order of operations is arbitrary, and is just defined to be what it is out of convention. What it does allow us to do is express formulas without a bewildering array of parentheses:

3 + 4/5 - 6*7 is really just shorthand for (3+(4/5))-(6*7)

We could just as easily define addition and subtraction to have higher precedence, in which case:

3 + 4/5 - 6*7 is shorthand for ((3+4)/(5-6))*7

Anyway, to recap, there is no logical reason why the order of operations can’t be something else, but as Santo Rugger pointed out, there are still good reasons to use the system we do.

The previous answers said all that needs to be said about basic arithmetic, but I thought I’d chime in with a bit of computer programming, just to reinforce how this stuff is arbitrary but chosen for convenience.

In C (the programming language*), you have a lot more operators. As well as adding, subtracting, multiplying, dividing and exponentiating with variables, you can also call functions, access array elements, access structure members directly or indirectly, logically negate, increment and decrement (prefix and postfix), reference or dereference, typecast, find the memory size, find the remainder of a division, shift the bits, take the one’s complement, test equality, test the usual inequalities, logically conjoin and disjoin, assign, conditionally assign, evaluate sequentially… and a few more that I’m not too sure of right now. This leads to fifteen levels in C’s hierarchy of operations! And even that doesn’t solve every problem that can come up: there are still situations in which the evaluation of an expression is ambiguous and compiler-dependent.

I’ve only just started learning C++. Since it’s essentially a superset of C, I imagine it’s even worse.

So then you get J (another programming language) where they decided that instead of having 15 levels of hierarchy, it simply evaluates everything right-to-left (to copy the way we write functions as ‘f(x)’). Though you can still use brackets to override that. I rather like not having to remember precedence :).

  • props to page 53 of Kernighan and Ritchie for the content of this paragraph!

APL is another programming language, with IIRC about 100 native operators. There is no precedence whatsoever; each statement is evaluated purely right to left.

Back to the OP. In a very real sense, addition & subtraction are the base operators of arithmetic over the real numbers.

Multiplication and division are defined in terms of addition and subtraction. They are derivative operators. (Not “derivative” in the calculus sense, but in the logical sense.)

Exponentiation is defined in terms of multiplication, and so is a derivative of a derivative.

The calculus operators are further layers of derivative, and so it goes, up to things you’ve never heard of and only experts understand.

So there is some logic, besides convenience for polynomial notation, for ranking the operators as we conventionally do.

I’ve typically thought of the addition and subtraction signs in an equation as positive and negative signs for the term they’re next to (i.e., 3x[sup]2[/sup] - 4x is a positive term combined with a negative one… I hope that makes sense), so much of this makes sense.

Is there any history on the establishment of the convention? That is, did Mathoclese or Addistotle derive the operations first, then determine the best order as equations became more complicated? Who first wrote PEMDAS in a textbook?

I love number theory. I just wish I was better at math!

There are three kinds of programming languages without precedence rules:
[ul]
[li]APL-derived languages such as J and A+, where the equations look normal but are evaluated strictly right-to-left. Thus, 3*2+1 is 6. Parentheses can be used to force some things to be evaluated first, so (3*2)+1 follows the normal rules. APL was originally invented out of Iverson Notation, an unambiguous mathematical notation invented by Kenneth Iverson, which explains its odd semantics.[/li]
[li]Lisp-derived languages such as Common Lisp, Scheme, and Emacs Lisp, where everything is parenthesized and all operators come before their operands (everything looks like a function call), so 3*2+1 becomes (+ (* 3 2) 1). Evaluation is innermost-to-outermost.[/li]
[li]Forth-derived languages such as many tiny Forth dialects, all different, where nothing is parenthesized and operands come before operators, so 3*2+1 becomes 3 2 * 1 +. Evaluation is left-to-right, and is often modeled as pushing operands onto a stack and then popping them off.[/li]
[/ul]
(Yes, assembly languages usually have precedence rules, used in the notation for effective address calculations.)

Better not get interested in Perl 6 (PDF), then.

Oh, FUCK. I’m doing a bioinformatics course, there will be a module on Perl. And if I go on to a career in it, they say bioinformaticians do lots of Perl. Thanks for the heads-up anyway. :frowning:

Wouldn’t that be 9?

This is all true for natural numbers, but once you get to rationals or reals, you need alternate definitions. I suppose you could “unfold” real multiplication to show that it ultimately depends on natural addition, but it’s not really profitable to think of it that way.

Don’t worry too much. Nobody really uses Perl 6 for anything yet, because they still haven’t finished a production-ready compiler.

Perl 5 only has about half the number of operators. :smiley: The precedence rules are mostly the same as C, though, except for all the Perl operators that C doesn’t have.

Could it have to do with distributive properties? Exponentiation distributes over multiplication, so exponentiation has precedence over multiplication. Multiplication distributes over addition, so multiplication has precedence over addition.

Actually — and I’m sure you know this — both C and C++ lack an exponentiation operator. There is instead a function for it (or multiple functions) in the standard library.

Perl is actually really easy to code in. You only use the operators you need to use, and the obscure ones will be, well, obscure. Most of the available operators are there as shortcuts to other functions, anyway.

The real problem comes in reading someone else’s Perl code. If it’s well-written code, then it’ll be a breeze, but poorly-written (or intentionally terse) Perl code can resemble line noise.

Back when I had hair and used to program in FORTH, I read from top to bottom. Thus, each word took a step; code looked like

3
2
*
1
+

(FORTH employed RPN, beloved of HP calculator users. To me, the tremendous advantage was not having to keep track of parentheses when parsing a complex operation.)

I’d say that the languages listed have precedence rules, just not operator precedence rules. The reason for a consistent set of rules, whatever they are, is to be able to parse an expression during compilation, and to write a grammar for the language, and to have results be independent of the order of parsing. I can imagine a language with multiplication and addition having equal precedence, but then either of the results totoismomo notes might come out, depending on implementation.

The language has to define both the precedence rules and the associativity between operators of the same precedence. So long as it does that, there will be no ambiguities.

I once wrote a compiler for some military CPU, and just to make things easier on me, defined the order of operations as proceeding from left to right. When I learned more about heaps and stacks, I regretted it, but no one complained at the time. I just made sure the manual was crystal clear about the order.

Could we use as a metaphor the linguistic concept of being “tightly bound”, or find something in the cognitive science of mathematics?