50+50-25x0+2+2=?

It IS useful to distinguish between mathematical truth and mere convention. The order of operations, I would say, is a strong enough convention that the question is unambiguous. However, I think the question itself is not mathematically interesting; it's just there to test whether you remember the convention.
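Just to spell out the convention being tested (multiplication before addition and subtraction, then left to right):

50 + 50 - 25 \times 0 + 2 + 2 = 50 + 50 - 0 + 2 + 2 = 104.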

Btw, 0! = 1 is also just a convention, and not a mathematical truth. It is just useful to define it that way.

Good question.

Even if the order of operations is a clear, mutually agreed-upon set of conventions, the problem is that we don’t have a clear, mutually agreed-upon set of conventions for how to render something that is clear and unambiguous when handwritten or typeset into a one-dimensional string of typable characters that all fits on one line.

Among other reasons, 0! = \Gamma(1), so it is not clear what other value you could practically assign to it. Similarly it should be clear what \frac12! means, etc. and there is some mathematical interest in being able to reformulate those values as integrals, for instance.
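For anyone who hasn’t met it, the integral formulation being alluded to is the usual Gamma-function one:

\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} \, dt, \quad \text{so} \quad 0! = \Gamma(1) = \int_0^\infty e^{-t} \, dt = 1, \quad \text{and} \quad \tfrac12! = \Gamma(\tfrac32) = \tfrac{\sqrt{\pi}}{2}.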

This forum’s pseudo-TeX renderer seems to do OK with its one-dimensional input strings :slight_smile: But it’s a good thing nobody ever really had to restrict themselves from writing things like 2^{2^2} or \frac{1}{\sqrt{10x+53}} even if it potentially annoyed some typesetters. The older school way iirc was to describe everything in full Latin sentences.

Surely, the only reasonable compromise is to have a pie in March, and then also two in June?

And we’d better eat three on September 42nd, just to be sure.

Heresy, but of the good kind. I’m down for it.

There’s now more than one math; we call it maths.

You mean, we call them maths.

Story from my A-11 math class: Whenever the teacher made a goof and some student called him on it, he’d say: “Darn, they’ve changed it again!”

This teacher was also fond of pointing out that you can define anything any way you want. He tended to say this when there was a question of why something was defined a certain way, the point being that definitions can be arbitrary.

Uh, no. I don’t agree with that.

Danger_Man, above, says:

Well, yes, it’s useful and it’s consistent with other definitions of factorials, so it works. So it’s not “just a convention”. Hey, we know that anything divided by itself = 1, right, so we can always replace a/a with 1 regardless of what value a has, right?

Unless a = 0. That always throws monkeywrenches into whatever you’re doing.

So, why don’t we just define 0/0 to be = 1 and simplify a lot of things?

The short answer: It still doesn’t work, and no, you can’t just define anything any convenient way you want.
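To make the “still doesn’t work” concrete: if we set 0/0 = 1 and keep the ordinary rules for fractions, then

2 = 2 \times 1 = 2 \times \frac{0}{0} = \frac{2 \times 0}{0} = \frac{0}{0} = 1,

so something has to give: either the new definition, or a rule like a \times \frac{b}{c} = \frac{a \times b}{c} that we very much want to keep.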

As noted earlier, it isn’t just a convention. It makes sense when you look at the wider mathematics, and don’t just restrict yourself to a simple/simplistic definition across the integers. (I do wonder what the constructivists make of the Gamma function, however.)

But mathematical truth opens a whole new can of worms. Arguments continue to rage about just what this might even mean. There are many conventions in mathematics that are used because they make sense in the context of, or enable, further mathematics. Similarly, there are a lot of conjectures or unproven theorems that, when accepted, open up vast worlds of new mathematics. Probably the best known stem from Euclidean geometry and the parallel postulate. Is the parallel postulate a mathematical truth? At another extreme, what about the Axiom of Choice? And you can unwind things much further. This quickly becomes philosophy rather than mathematics. Famously, Russell and Whitehead came unstuck trying to codify mathematics from first principles, and it hasn’t got any better since. Just like any other philosophical discussion, the nature of truth, even in mathematics, is more than a bit rubbery under the covers.

Of course you can define anything any convenient way you want. You can even, if you choose, define things in any inconvenient way you want. Sometimes there’s good reason to define things in some way; sometimes there’s not. Sometimes it may appear at first that there’s no good reason to define something a particular way, but on further examination, it turns out to be very interesting indeed.

One of my happiest moments as a teacher was when a few students in a middle school honors math class arbitrarily declared that 9=0, and proceeded to follow through on all of the implications that would have on all of the other math they knew… And they were getting it right. Because it turns out that you can do that, and still end up with a lot of interesting and valid mathematics. All I had to do in response was to mention that it would be even more interesting if they instead declared 7 or 11 to equal 0.

Those get into questions of consistency and completeness, which is a whole 'nother kettle of fish.

This is true in one sense: definitions in mathematics are prescriptive, not descriptive.

If you’re reading a math book or article, and a definition is given, what that definition says is true by definition, just because the writer says so, at least within the scope of that book or article.

But it doesn’t follow that definitions are arbitrary. Some definitions are more useful or fruitful than others. Some are nonstandard—it’s not forbidden to define a term or symbol to mean something different than what other writers mean by it, but it’s not a good idea to do so unless you have a good reason.

And, as noted, you have to worry about consistency. It is forbidden to make a definition that logically contradicts something else that’s true within your system.

And then there’s the problem that people don’t understand that if you change a definition, any conclusions drawn do not apply outside that context. This happens sometimes with division by 0.

Right; I can write a perfectly valid and legitimate proof that 5 ÷ 3 = 4… if my definitions are such that 0 = 7. But that proof wouldn’t apply in the more familiar case where 0 ≠ 7.
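(In case 5 ÷ 3 = 4 looks like a misprint: once you declare 0 = 7, you are effectively working modulo 7, and 3 × 4 = 12 = 7 + 5, so 4 really is the number you multiply 3 by to get 5.)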

See, this really turns on what, exactly, you are defining.

If you define the symbol “7” to be synonymous with the symbol “0” (as referring to the number commonly called “zero”), that really adds nothing new or different to your math. But if you allow “7” to also still refer to the number commonly called “seven” while simultaneously meaning “zero”, then you can develop all kinds of inconsistencies.

This is none other than the logical fallacy of equivocation, using a defined term with one meaning in your argument and then shifting it to a different meaning elsewhere in the argument. Likewise, one can define the symbol “0/0” to mean “one”. But it can’t simultaneously mean that and also have the more common meaning of “p/q” being a quotient or ratio of two numbers.

It may be the same math, but different tools for solving the same sorts of problems may be offered, and may result in a radically different way of looking at a problem. I took all the math classes my high school offered, and I took algebra, trigonometry, and calculus in college in the early 1980s. When my kids asked me to look at their high school math homework a few years ago, it totally blew my mind. They were using matrices to solve simultaneous equations, and I had absolutely no idea what the #@%$ it was! I flipped back in the book to the beginning of the section and read it all, worked through several of the problems myself and was amazed. It was like suddenly being able to do magic.
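For the curious, the matrix approach is roughly the following (my own toy example with made-up equations, not the textbook’s): write the system as a single matrix equation and let a solver do the elimination.

```python
import numpy as np

# The simultaneous equations
#   2x + 3y = 8
#    x -  y = -1
# become one matrix equation A @ v = b.
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
b = np.array([8.0, -1.0])

v = np.linalg.solve(A, b)  # Gaussian elimination behind the scenes
print(v)                   # [1. 2.]  i.e. x = 1, y = 2
```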

As far as I understand it, it’s not a wholly agreed convention even today. There is (according to my admittedly cursory research) no absolute consensus on the order of operations for the specific case of serial division without enforcing parentheses, so 100/100/100 could equal 0.01 or 100, depending on which division operation is done first.
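Spelled out: (100/100)/100 = 0.01 if you work left to right, but 100/(100/100) = 100 if you group from the right.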

Calculator evaluation.

Unless you had that other kind of calculator, which does it the other way:

2 + 2 = 4
4 + 0 = 4
4 * 25 = 100
100 - 50 = 50
50 + 50 = 100

With all this talk about calculating devices, it is important to emphasize that all they do is compute what you type into them. That is, you already have to know how to evaluate the desired expression (including schemes like Horner’s rule, which I assume are not applied automatically, as well as tricks to avoid possible loss of precision, etc.).
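Since Horner’s rule came up: it is just the observation that a polynomial such as 2x^3 + 3x^2 + 5x + 7 can be evaluated as ((2x + 3)x + 5)x + 7, one multiply and one add per coefficient. A throwaway sketch (the function name and the polynomial are mine, purely for illustration):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x; coefficients listed from highest degree down."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# 2x^3 + 3x^2 + 5x + 7 at x = 2 is 16 + 12 + 10 + 7 = 45
print(horner([2, 3, 5, 7], 2))  # 45
```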

The number “commonly called seven” is defined to be 1+1+1+1+1+1+1. The number called zero is defined to be the unique number such that n+0 = 0+n = n for all n. But I can assure you that, without changing any of that, the integers modulo 7 introduced in middle school and studied more extensively in a beginning university-level algebra course are no more inconsistent than any other numbers. Not that it is generally difficult to come up with a list of statements that are inconsistent; that is the basis of proof by contradiction. Logically, one would like to say that if a theory has a model, then it is consistent.

More precisely, if it has a model within some other system, where that other system is presumed to be consistent. In the case of modular arithmetic, the usual approach is to model a “modular number” as an equivalence set of ordinary integers. Based on that model, you can prove that, if arithmetic on the ordinary integers is consistent, then so is modular arithmetic… but you can’t thereby prove that arithmetic on the ordinary integers is consistent. We all really, really hope it’s consistent, because if it’s not, then the way the world works and the way our minds work are so hopelessly different that there’s no hope of understanding anything, but we can’t prove it.
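If it helps to see the “equivalence set of ordinary integers” idea concretely, here is a toy sketch (the class name and the choice of modulus are mine, purely for illustration): any two ordinary integers with the same remainder mod 7 count as the same “modular number”, and the arithmetic is done on a representative.

```python
class Mod7:
    """A residue class modulo 7, stored as the remainder 0..6."""
    def __init__(self, n):
        self.r = n % 7  # every integer with this remainder represents the same class

    def __add__(self, other):
        return Mod7(self.r + other.r)

    def __mul__(self, other):
        return Mod7(self.r * other.r)

    def __eq__(self, other):
        return self.r == other.r

    def __repr__(self):
        return f"{self.r} (mod 7)"

# 9 and 2 fall in the same class, so the arithmetic cannot tell them apart:
print(Mod7(9) == Mod7(2))   # True
print(Mod7(3) * Mod7(4))    # 5 (mod 7) -- the "5 / 3 = 4" fact from earlier
```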