And why? is there an explanation for how it came to be?

Looking to you, @Hari_Seldon

The order of operations was largely settled by the 1910s (scroll to ORDER OF OPERATIONS), but by no means accepted by all. In a way it makes sense: multiplication is defined by repeated addition, powers are defined by repeated multiplication, and the grouping operators always come first. The inverse operations really pair with their counterparts, viz. subtraction is adding the negative, division is multiplying by the reciprocal, and roots are fractional powers.

Even if that makes sense, I suppose we could have had the alternative: grouping first, then addition/subtraction, then multiplication/division, then powers/roots.

I suspect that you could make math work with almost any OoO, so long as everyone agreed on the rules and used them consistently.

I also suspect that, were you to try out a few different systems, you’d find some that are generally better/easier to use than others.

It’s like using any other tool. Sure, there may be many ways to do a thing, but one of the ways will usually be the best for a particular use.

I think there is great love for Reverse Polish Notation.

In addition to the excellent points above …

That order as described has logic solely within mathematics. When mathematics is applied to the real world in areas like physics, it quickly becomes evident that Nature uses the same order of operator precedence.

For example in classic grade school word problems involving location, distance, speed, and time, very quickly it becomes evident that you want to multiply the things that multiply together before adding other things to the result of that multiplication.

Once we reach high school and add acceleration and the vaguest hints of calculus into those problems, once again the higher-order operations naturally have to be done first to deliver the right results.

In more advanced math, the idea that you can create a total ordering across all possible mathematical operators breaks down.

As well, in computer programming it’s not always straightforward to incorporate logical operators, conversion operators, and other fancier operator ideas into the basic precedence of add/subtract, multiply/divide, and powers/roots.

Some programming languages try, sometimes with surprising results, i.e. bugs.

Others go to the opposite extreme and eliminate all precedence, even multiply over add. The thinking being that with e.g. 150 natural operators and the ability to define an infinitude more operators, why make an exception just for the traditional 3 pairs for basic arithmetic? Simpler and less surprising to just make them all exactly equal and leave it to the code writer to control the order of evaluation by straight lexical sequence in their equations as overridden by parentheses where needed.
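As a sketch of the difference, here is a toy evaluator (the function name is mine, and this mimics no particular real language's grammar) that treats every operator as equal precedence and applies them in strictly lexical order, contrasted with Python's own conventional precedence:

```python
# Toy comparison: strict left-to-right evaluation (no precedence at all)
# versus the conventional PEMDAS result, for flat expressions like 2 + 3 * 4.

def eval_left_to_right(tokens):
    """Apply operators in purely lexical order: 2 + 3 * 4 -> (2 + 3) * 4 = 20."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    acc = tokens[0]
    for i in range(1, len(tokens), 2):
        acc = ops[tokens[i]](acc, tokens[i + 1])
    return acc

print(eval_left_to_right([2, "+", 3, "*", 4]))  # 20, no precedence
print(eval("2 + 3 * 4"))                        # 14, conventional precedence
```

Either convention is internally consistent; they simply disagree, which is exactly why a language has to pick one and document it.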

It wasn’t really invented. It was already in use by mathematicians in their notation. Multiplication came before addition because multiplication is usually represented by juxtaposition and thus is already visually grouped together. Exponentiation comes before multiplication because otherwise x³·2 and x^(3·2) would become the same thing. Parentheses/brackets were designed to specify what comes first.

Even today, when people say PEMDAS is wrong, they are basing that on how mathematicians actually write math. Multiplication by juxtaposition is always treated as coming before explicit division and explicit multiplication.

What the “order of operations” does is just codify what had already naturally developed as a practice.

For me, RPN seems ideal. 432+* = 20 seems clear and unambiguous. No parens ever needed. But it isn’t taught and we don’t use it even among mathematicians.

As for the OP, I haven’t the foggiest notion. I suspect it goes back to things like a(b+c) = ab + ac. In any case I would never count on order of operations to disambiguate anything. I would always use enough parens to keep it clear.

I had a friend and collaborator who insisted on putting functions to the right of their arguments. xf instead of f(x). This means, among other things, that the composite of f: X → Y and g: Y → Z is fg, not gf. The only objection is the unfamiliarity of it. But it does avoid a lot of confusion. It leads directly to RPN.
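That argument-first convention can be mimicked with a small (hypothetical) pipe helper; the `Pipe` class and the names `f` and `g` below are mine, just to illustrate how writing the argument first makes composition read in order of application, the way the post describes:

```python
# A hypothetical "pipe" helper mimicking the xf (argument-first) convention:
# Pipe(x) | f | g reads left to right, so composition is f-then-g (i.e. fg),
# matching the claim that argument-first notation leads naturally to RPN.

class Pipe:
    def __init__(self, value):
        self.value = value
    def __or__(self, func):
        # Applying | feeds the current value into the next function.
        return Pipe(func(self.value))

f = lambda x: x + 1      # f: X -> Y
g = lambda x: x * 10     # g: Y -> Z

# (xf)g: apply f first, then g -- written in the order of application.
result = (Pipe(3) | f | g).value
print(result)  # 40
```

Shell pipelines and method chaining work the same way, which is probably why the convention feels less alien to programmers than to mathematicians.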

Yes but is an inner product linear in the first argument or the second?

At one point I think I worked out an argument that only one of the choices is reasonable…

As mentioned elsewhere, RPN also eliminates the need for operator precedence. You just evaluate left to right.

So it does not appear to be so much an explicit quirk of nature as a quirk of most human languages as spoken vs. as written (plus a ton of baggage regarding mathematical notation; Newton vs. Leibniz is a fine example, as are the alternate ways the same thing can be written). That, I suppose, could be considered an implicit quirk of nature via the way our brains are wired, but it leads down some deep philosophical rabbit holes.

Am I missing something? These two statements don’t line up for me.

It’s just a convention for the presumed order of operations. Any known order will do, and no order is necessary at all if the order is specified by other means, such as using parentheses to isolate the operations.

Having worked with varying orders of operations, I’ve found the best to be the simplest: operators are processed left to right, and parentheses are used to specify any other order. Those used to conventional orders like PEMDAS tend to complain, but it encourages the use of parentheses for clarity.

In modern usage on a computer it is relatively easy to use colors and graphics to parse and clarify the order of operations conducted under known rules, and also to translate expressions between known orders.

The latter statement is somewhat imprecise but it comes down to a bit of history.

What we currently know as the order of operations had already been in use implicitly by many people before the 20th century. There’s some quibble about multiplication preceding division and so on but something largely resembling what we use today has been in place for at least a few centuries. It is true that it wasn’t explicitly “invented” the way we would think. It seems to have been a combination of long-recognized implicit notational usage plus a bit of evolution over time on some of the details.

But it is also true that a more formal and explicit “Order of Operations” as you might see in a math textbook as if handed down from on high on stone tablets was largely (but not entirely) accepted and promulgated by a large number of people by early in the 20th century.

The thing is, mathematicians have been fighting over notation for a long time. So, getting everybody that does any mathematics to agree on a single ‘anything’ to do with how to write expressions has never occurred, even to this day. But we have something most people generally agree with. Just don’t take it for Gospel Truth that everybody will see it that way or remain 100% consistent in usage, though.

The best thing is to accept the general principle that it is on the writer of mathematical expressions to take reasonable care to avoid ambiguity rather than rely on some notional set of rules that has existed in its current form for hardly more than a century. And even then, a century in some countries and a scant handful of decades in others. Which is why those internet “what’s the correct answer?” order of operations things are a personal pet peeve. They are deliberately written to be ambiguous, which misses the point entirely.

It is computer programming languages which have a formal grammar, with different operators having different precedence, or no such rules at all; and of course different languages will have different rules.

You can crack open a textbook from, say, 1801 and read stuff like

Ante omnia obseruamus, quamuis formam axx+2bxy+cyy, cuius determinans bb-ac=0, ita exhiberi posse m(gx+hy)^2, denotantibus g, h numeros inter se primos, m integrum.

(Roughly: “First of all we observe that any form axx+2bxy+cyy, whose determinant bb-ac=0, can be exhibited as m(gx+hy)^2, where g, h denote numbers prime to each other and m an integer.”)

You can’t eliminate all precedence. I suspect that what you’re calling “eliminate precedence” is just “precedence is determined by order in which they’re written”, which is still a precedence (and which is still part of the conventional order of operations).

There were different orders of operations after symbolic math emerged, but the current order of operations, accepted by nearly all, was standardized in the early 1900s.

I hope this isn’t a hijack, but @Hari_Seldon can you explain this? Admittedly I am lacking in math, but 432+* = 20 to me is baffling.

Quite right. Which is why the latter part of the same paragraph said

LSLGuy:

leave it to the code writer to control the order of evaluation by straight lexical sequence in their equations as overridden by parentheses where needed.

“Lexical sequence” = “order in which they are written”. Sorry if my intent was unclear.

Or perhaps we’re reading different nuance into the word “precedence”. To me, operator “precedence” is the idea that lexical order is overridden by some operator priority by category, like multiply before add. Order of evaluation is what actually happens; absent any precedence, that’s going to be pure lexical order, as overridden by parentheses or the equivalent.

There are programming languages (or at least expression-evaluating languages) that use RPN and are evaluated in lexical left-to-right order, and programming languages such as APL that are evaluated right to left.
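For a concrete contrast, here is a sketch (mine, not real APL code) of APL-style evaluation, where every operator has equal precedence and the rightmost is applied first:

```python
# Sketch of APL-style right-to-left evaluation: all operators have
# equal precedence and the rightmost applies first, so in real APL
# 2 × 3 + 4 means 2 × (3 + 4) = 14.

def eval_right_to_left(tokens):
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    acc = tokens[-1]
    # Walk the operators from right to left, folding into the accumulator.
    for i in range(len(tokens) - 2, 0, -2):
        acc = ops[tokens[i]](tokens[i - 1], acc)
    return acc

print(eval_right_to_left([2, "*", 3, "+", 4]))  # 14, not the PEMDAS 10
```

Under PEMDAS the same expression would be (2*3)+4 = 10, so the two conventions genuinely disagree; APL programmers simply learn to read expressions from the right.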

cincinnatus:

432+* means apply the + operator to the last two operands (3 and 2), giving 5, leaving 45*, and then the times operator applied to those gives 20. By the same logic 432*+ = 10. This is exactly how an RPN calculator works. Unambiguous and free of parens. Incidentally, if you do your income tax return by hand, this is also how you can do it most efficiently. Only its unfamiliarity makes it seem hard.
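The stack discipline an RPN calculator uses can be sketched in a few lines (a minimal toy, assuming pre-tokenized input rather than a real calculator’s parser):

```python
# Minimal stack-based RPN evaluator, as described above: push operands;
# when an operator arrives, pop the top two values and push the result.

def eval_rpn(tokens):
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # most recently pushed operand
            a = stack.pop()   # second most recent
            stack.append(ops[tok](a, b))
        else:
            stack.append(tok)
    return stack.pop()

print(eval_rpn([4, 3, 2, "+", "*"]))  # 4 * (3 + 2) = 20
print(eval_rpn([4, 3, 2, "*", "+"]))  # 4 + (3 * 2) = 10
```

No precedence table appears anywhere: the order the tokens are written is the order the work gets done, which is the whole appeal.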

I fell in love with RPN in my data structures class, when we covered stacks and queues. Considering I get high school students who still don’t know the order of operations, I wish RPN could be the new standard.

Thanks. This will take me some learnin’.