Order of Operations--Mere historical accident?

Could the order of operations in math have easily been settled differently than it actually was? Or is there something about mathematical practice that somehow recommends doing exponentiation first, then multiplication, then addition?

I could write up a whole new system of math notation which has the operations done in a different order. For example, I could make a system where you do additions first, then multiplications, such that

2 * 4 + 3

means “add 4 to 3, then multiply the result by 2.”

Under such a system, in order to say “multiply two by four, then add three to the result” I’d need to write

(2 * 4) + 3

The parens appear redundant to normal mathematical instincts, but under the new order of operations I described, you can see how they’d be necessary to express that quantity the way I described it in English.

Anyway, this alternative is of course not how things historically turned out. And my question is, is there a practical or rational reason why things turned out as they did, or is it just kind of how things arbitrarily ended up?
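
The OP’s thought experiment is easy to make concrete. Below is a minimal sketch (the tiny +/* grammar, tokenizer, and function names are all invented for illustration) of an infix evaluator whose precedence table is just a parameter; handing it an addition-first table changes what the bare expression means:

[code]
import re

def evaluate(expr, prec):
    # Minimal infix evaluator; `prec` maps each operator to a binding
    # strength, so the "order of operations" is just data.
    tokens = re.findall(r"\d+|[+*()]", expr)
    pos = 0

    def parse(min_prec):
        nonlocal pos
        tok = tokens[pos]; pos += 1
        if tok == "(":
            value = parse(0)
            pos += 1  # skip the ")"
        else:
            value = int(tok)
        while pos < len(tokens) and tokens[pos] in prec and prec[tokens[pos]] >= min_prec:
            op = tokens[pos]; pos += 1
            rhs = parse(prec[op] + 1)  # left-associative
            value = value + rhs if op == "+" else value * rhs
        return value

    return parse(0)

print(evaluate("2 * 4 + 3", {"*": 2, "+": 1}))    # 11 -- the usual convention
print(evaluate("2 * 4 + 3", {"*": 1, "+": 2}))    # 14 -- the addition-first order
print(evaluate("(2 * 4) + 3", {"*": 1, "+": 2}))  # 11 -- the parens are no longer redundant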

You should look at the link I posted in your other thread. Your assumption about common or standard practices may not hold up.

NM

The entire concept of an order of operations is actually (somewhat) arbitrary. It’s more of a function of our infix system of notation.

By “infix”, I mean we put the binary operator between its two operands. For example, “2+2” has the + operator between the numbers.

While that lends itself naturally to language (“2+2” = “Two plus two”), it requires an order of operations and/or parentheses to produce unambiguous statements.

While that’s a practical (even rational) consideration, it’s not necessarily ideal.

We can use a post-fix notational system (Reverse Polish) to avoid parentheses and order of operations ambiguities.

In a post-fix system, the operator is placed after the numbers, so we have something like:

2 2 +

So, let’s say we’re evaluating something like (1+2)*3. In Reverse Polish, you could write it as

1 2 + 3 *

Since you always evaluate left to right, it’s clear you add 1 and 2 together first, and then multiply the result by 3. No parentheses necessary to disambiguate the order between addition and multiplication.

So, there’s no mathematical necessity for a notational system that has an order of operations. In theory, we could switch to a system that doesn’t need one. But we’ve been using our current system of notation for centuries, which makes it impractical (in the short run, anyway) to switch.
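
A stack makes this concrete. Here’s a minimal sketch of an RPN evaluator (the operator set and names are illustrative, not from any particular source):

[code]
def eval_rpn(tokens):
    # Minimal sketch of stack-based RPN evaluation.
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # right operand is on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("1 2 + 3 *".split()))  # 9.0 -- i.e., (1+2)*3, no parens needed
[/code]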

Which assumption?

I’m not certain what you mean by “arbitrary”, but any convention of how algebraic operations are implemented is defined by the rules of that convention. For instance, the convention to which you refer–called “infix” by computational mathematicians–is designed for visual representation of a complete formula rather than computational simplicity. It requires a number of passes and intermediate calculations (or intermediate variables) to evaluate. The conventions are also not always very clear, requiring the use of parentheses, brackets, or a vinculum to enforce or clarify the intended grouping of operations. However, it is a compact way to display a large formula and is consistent with the way we use grammar in natural language in the Indo-European languages from which modern mathematical conventions emerged.

For computational purposes, other conventions are preferred for the simplicity of stack operations, which don’t require storage of intermediate quantities or multiple passes. In Polish notation (and, for those familiar with Hewlett-Packard scientific calculators, postfix or reverse Polish notation), the operator immediately precedes (or follows) its operands. For ten-key (adding machine) type operations, the operator comes between operands but without distinction of precedence. While not convenient for human interpretation, these work quite well for machine operations.

From a historical perspective, the order of operations was probably selected for maximum clarity and compactness, e.g. 4 + 3[SUP]2[/SUP] = 13 (exponentiation first) is much clearer than 4 + 3[SUP]2[/SUP] = 49 (addition first), and in general precedence reflects increasing complexity of the operation, e.g. multiplication is more computationally intensive than addition, surds are more complex than multiplication, trigonometric, logarithmic, and other transcendental functions are more complex than surds, et cetera. You could change the order of operations, but this would doubtless make for greater ambiguity in interpretation and require additional parentheses to express a formula clearly.

Stranger
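
As a sketch of how the infix convention gets mechanized: Dijkstra’s shunting-yard algorithm converts infix to the postfix form described above, and the entire order of operations lives in one small table (the table and names below are illustrative, and every operator is treated as left-associative for brevity):

[code]
PREC = {"+": 1, "-": 1, "*": 2, "/": 2, "^": 3}  # illustrative precedence table

def to_postfix(tokens):
    # Simplified shunting-yard: operators pop anything of
    # higher-or-equal precedence before going on the stack.
    out, stack = [], []
    for tok in tokens:
        if tok in PREC:
            while stack and stack[-1] in PREC and PREC[stack[-1]] >= PREC[tok]:
                out.append(stack.pop())
            stack.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                out.append(stack.pop())
            stack.pop()  # discard the "("
        else:
            out.append(tok)
    return out + stack[::-1]

print(to_postfix("4 + 3 ^ 2".split()))      # ['4', '3', '2', '^', '+']
print(to_postfix("( 4 + 3 ) ^ 2".split()))  # ['4', '3', '+', '2', '^']
[/code]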

D’oh.

There is one thing I neglected to mention.

Because the way we write exponentiation tends not to involve an actual symbol but a number raised as a superscript, our order of operations naturally does exponentiation first. But again, that’s a function of how we write exponentiation and not a natural consequence of the operation itself.

As for multiplication first or addition first, there is a small reason why multiplication-first is preferred (and won out without much of a fight), though the precedence isn’t an iron-clad mathematical law (as I’ve shown, it doesn’t even come up in a postfix notational system).

The distributive property of multiplication over addition can be stated as:

a * (b + c) = a*b + a*c

This property has a mild implication of multiplication precedence over addition.
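
A concrete instance: 2 * (3 + 4) = 2*3 + 2*4 = 14. Under multiplication-first precedence the sum-of-products side needs no parentheses at all, while the grouped sum on the left still does.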

That said, for a long time, there was some back and forth over multiplication’s precedence compared to division. We currently give them both equal precedence, but that’s mostly a 20th century thing and again something that is obviated with a postfix notational system.

That there is an established order of operations to discuss as having an accidental origin.

Niklaus Wirth, designer of the programming languages Pascal and Modula-2, made some “interesting” choices in operator precedence order. E.g., the expression

x < y or z = 0

would be interpreted as

x < (y or z) = 0

I.e., logical operators had a higher precedence than comparison/arithmetic operators.

This drives people crazy since it’s “obvious” that it should be the other way around.
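
For contrast, most modern languages bake in the “obvious” ordering. A quick check of Python’s parser (the snippet below just inspects the parse tree) shows the comparisons binding tighter than the logical operator:

[code]
import ast

# "x < y or z == 0" groups as (x < y) or (z == 0) in Python,
# the opposite of Wirth's choice described above.
tree = ast.parse("x < y or z == 0", mode="eval").body
print(ast.dump(tree))
# prints, roughly: BoolOp(op=Or(), values=[Compare(... Lt ...), Compare(... Eq ...)])
[/code]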

I think something similar happens with the basic operators. Addition and subtraction are considered at the same level of difficulty/usability. Ditto multiplication/division. Taking the expression

3*x+2

it would seem that in most situations you would want to multiply rather than add first. This type of calculation happens a lot, so there’s a push to simplify things, and the interpretation people most often need usually drives the convention.

Note that most operators are left associative, but a few are right associative. E.g., exponentiation.

2^x^y

is interpreted as

2^(x^y)

rather than as

(2^x)^y

since the latter is really

2^(xy)

and should have been written that way for clarity.

There is usually a combination of logic and experience that guides most of these decisions.
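
Python’s ** operator happens to follow the right-associative convention described above, which makes for a quick check:

[code]
# 2^x^y groups as 2^(x^y), not (2^x)^y:
print(2 ** 3 ** 2)    # 512 -- parsed as 2 ** (3 ** 2)
print((2 ** 3) ** 2)  # 64  -- which equals 2 ** (3 * 2)
[/code]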

This, but I think there’s even a bit more reason for multiplication to have precedence over addition. “Sum of products” (SP) computations come up very commonly, more so than “product of sums”.
That is, calculations like:
(a*b) + (c*d) + (e*f) + … or more generally, Σ x[sub]i[/sub]y[sub]i[/sub]
are quite common, whereas:
(a+b) * (c+d) * (e+f) * … or more generally, Π (x[sub]i[/sub] + y[sub]i[/sub])
is much less common.

Examples of SP are: computing the total of your receipt at the supermarket; computing your grade-point average (or any other kind of weighted average); and matrix products, which are used for all sorts of things.
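
The receipt example as code makes the point: with multiplication binding tighter, a sum of products is the parenthesis-free case (the prices and quantities below are made up):

[code]
# Supermarket receipt as a sum of products: sum of price * quantity.
prices     = [0.50, 0.75, 2.99]   # apple, orange, milk (illustrative)
quantities = [5,    7,    1]

total = sum(p * q for p, q in zip(prices, quantities))
print(total)  # 10.74 == 0.50*5 + 0.75*7 + 2.99*1
[/code]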

It makes some sense if you think of operators in tiers.

First you have addition.

Then multiplication is a BUNCH of little additions.

Then exponentiation is a BUNCH of little multiplications.

Then brackets are… arbitrary, okay.

Even so, I think if you left infix math to develop naturally, about half the time you’d see Grouping Addition Multiplication Exponentiation, and the other half you’d see Grouping Exponentiation Multiplication Addition.

Perhaps I’d wager that multiplication and exponentiation would take precedence slightly more often because multiplication and exponentiation will tend to be created after addition, and you always want the new exciting feature to take precedence over the old, less powerful, boring one.
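
Those tiers can be written down directly. Here’s a sketch (restricted to non-negative integer arguments for simplicity), with tetration, which comes up further down the thread, as the next tier up:

[code]
def mul(a, b):
    # a BUNCH of little additions: a added together b times
    return sum(a for _ in range(b))

def power(a, b):
    # a BUNCH of little multiplications
    result = 1
    for _ in range(b):
        result = mul(result, a)
    return result

def tetrate(a, b):
    # the next tier: a BUNCH of little exponentiations
    result = 1
    for _ in range(b):
        result = power(a, result)
    return result

print(mul(3, 4), power(2, 5), tetrate(2, 3))  # 12 32 16
[/code]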

An additional problem with infix is that operators must be binary. For instance, “a+b+c+d” doesn’t mean “add a to b to c to d” as one four-way operation; it’s really a chain of binary additions, ((a+b)+c)+d. But to handle n-ary operators you need more than RPN, too.

Order of operations is indeed purely an artifact of infix notation, as Great Antibob correctly points out.

I think (but this is just speculation) that the reason multiplication is taken to bind more tightly than addition is because multiplication distributes over addition: any product of sums can be re-expressed into a sum of products, but not conversely. Thus, we might as well privilege the ability to write sums of products conveniently.

Reverse (or forward) Polish notation handles n-ary operators perfectly straightforwardly…

f(x[sub]1[/sub], x[sub]2[/sub], …, x[sub]n[/sub]) is expressed as f x[sub]1[/sub] x[sub]2[/sub] … x[sub]n[/sub] (in forward Polish notation) or x[sub]1[/sub] x[sub]2[/sub] … x[sub]n[/sub] f (in reverse Polish notation).

Yes, the x[sub]i[/sub] may be arbitrarily complicated expressions themselves. So long as all operators have fixed arities, there will be no ambiguities in parsing.
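
To see why fixed arities remove all ambiguity, here’s a minimal sketch of a forward-Polish parser driven by a hypothetical arity table; each operator simply consumes exactly as many sub-expressions as its arity says:

[code]
ARITY = {"+": 2, "*": 2, "neg": 1}  # hypothetical fixed-arity table

def parse_prefix(tokens):
    # Recursively consume exactly one expression from the token stream.
    tok = next(tokens)
    if tok in ARITY:
        args = [parse_prefix(tokens) for _ in range(ARITY[tok])]
        return (tok, *args)
    return float(tok)

print(parse_prefix(iter("+ 1 * 2 3".split())))
# ('+', 1.0, ('*', 2.0, 3.0)) -- only one possible reading
[/code]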

I must be using a term incorrectly, then. I thought n-ary meant “arbitrary arity”, e.g., to mix syntax horribly
(+) = 0
(+ a) = a
(+ a b …) = a + (+ b …)

Is n-ary usually used to mean “fixed arity”?
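
In that “arbitrary arity” sense, a variadic plus is easy to define. Here is the same three-case definition in Python (the function name is made up):

[code]
from functools import reduce

def plus(*args):
    # (+) = 0; (+ a) = a; (+ a b ...) = a + (+ b ...)
    return reduce(lambda a, b: a + b, args, 0)

print(plus(), plus(5), plus(1, 2, 3))  # 0 5 6
[/code]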

Isn’t the order of operations the inverse of the order of derivative operations, other than special modifiers like ()?

Here’s what I mean.

You do the powers and roots first. Powers, and their inverses, roots (though not always true inverses, in the case of even powers), are a higher-level derivative of multiplication.

Multiplication, and its inverse, division, are a higher-level derivative of addition.

Order of ops is (), powers/roots, division/multiplication, and addition/subtraction. Similar pattern, it seems.

Does this apply to Hyperoperations like Tetration, Pentation, and so forth?

Do you Pentatate before you Tetratate? Do you Tetratate before you Exponate?

Never mind my above questions. Already answered. Thanks Stranger!

I think it arises naturally when calculating lists of things, e.g. what’s the price of 5 apples and 7 oranges? If you did addition/subtraction first, it would be much harder to express.

AFAICT, nobody has yet addressed the question of what actual evidence there is for the historical development of our order-of-operations convention, whether accidental or planned or some combination of the two.

I highly recommend Jeff Miller’s online collection of original source citations documenting the earliest known occurrences of various mathematical terms and symbols. Regarding the order of operations, he notes that there was no firmly established standard convention until well into the 20th century.

In a very real sense, it is a chicken and egg issue. The order is mostly due to the way we write the operations, but if the order were different, we would be forced to write things differently.

I think multiplication’s high priority might be because in algebra we like to write things like 3X+7Y=12.

And this is related to telling someone to go to the store and buy 4 apples and 7 oranges and a gallon/2 of milk. We don’t normally think of multiplication/division when we speak of quantities of stuff, but that is exactly what it is, and it needs to be applied to the units we are quantifying immediately.

Ah Wirth: pronounced “Veert” in his native German, but mangled to “Worth” by most English speakers.

So in German he is called by name, in English, he is called by value! Thank you, I’ll be here all week. Tip your servers generously.
I have a chance to use that joke less than 1/decade…can’t let it pass!