Math Question

In that case, what do you think of this “proof”?

x = (4+3)/2
2x = 4+3
2x(4-3) = (4+3)(4-3)
2*4x-6x = 4^2-9
9-6x = 4^2-2*4x
9-6x+x^2 = 4^2-2*4x+x^2
(3-x)^2 = (4-x)^2
3-x = 4-x
4 = 3
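As a concrete check (a small Python sketch, not part of the original argument), one can verify that every line of the "proof" is a true statement about x = 3.5, right up until the square root is taken:

```python
# Step through the "proof" with the actual value x = (4 + 3) / 2 = 3.5.
x = (4 + 3) / 2

# Every equation up to and including the squares holds:
assert 2 * x == 4 + 3
assert 2 * x * (4 - 3) == (4 + 3) * (4 - 3)
assert 2 * 4 * x - 6 * x == 4**2 - 9
assert 9 - 6 * x == 4**2 - 2 * 4 * x
assert 9 - 6 * x + x**2 == 4**2 - 2 * 4 * x + x**2
assert (3 - x)**2 == (4 - x)**2          # both sides are 0.25

# The square-root step is the first false inference: equal squares
# only give equal absolute values, not equal values.
assert abs(3 - x) == abs(4 - x)          # 0.5 == 0.5
assert 3 - x != 4 - x                    # -0.5 != 0.5
```

(All of the arithmetic here involves only halves and integers, so the floating-point comparisons are exact.)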

If you’re consistent, you’d make the same objection as before: “…there’s nowhere in the ‘proof’ that any actual value of ‘4’ is used.”

If that’s the way you want to consider it, it’s fine with me. On the other hand, when I see it, I see the symbol commonly used to designate the value of 4, and that’s the way I interpret it. When we multiply both sides by (4-3), I see it as multiplying both sides by the value 1, which is not a problem.

I see no problems until the square root is taken, because throughout the proof, I understand “4” to be a well-understood symbol representing a value. Whether we actually do any arithmetic with that value is irrelevant, that’s still the way I read it–as a value. Same with the symbol “Pi”. I don’t see how you can possibly fault my logic in saying the error doesn’t really arise until the square root is taken.

What if one of your students solved the following problem:

The sum of two consecutive integers is 21. Find the smallest of these two integers.

in the following manner:

Let 4 denote the smallest of the two integers. We know:

4 + 4 + 1 = 21

2*4 = 20

4 = 10, our final answer.

Would you congratulate the student for doing the problem correctly, or would you complain over his unorthodox (and potentially very confusing) notation of using the symbol “4” as a variable (or possibly both congratulate and complain)?

The multiplication adds solutions, and the square root should yield two alternatives, but the proof throws one away. Is the symbol “4” interpreted as an indeterminate or as the natural number 4? A choice must be made, and made consistently. If it denotes the number, then the one result is nonsensical for numbers (see the thread on “1+1=2”).

For “Pi”, the hypothetical student is trying to determine the value of the a priori unknown symbol. The multiplication adds a new class of solutions, which is an error dual to dividing by zero in the first “proof”. Then the student omits the first class of solutions after taking roots. The proper last line would read “either x = (Pi+3)/2 or Pi=3”, which is indeed implied by the initial hypothesis. However, to go from here to “Pi = 3” is erroneous.
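The "either/or" structure can be made explicit by factoring: (3-x)^2 = (Pi-x)^2 is equivalent to (3-Pi)(3+Pi-2x) = 0. A sketch in exact rational arithmetic (22/7 is used purely as an illustrative stand-in value for the symbol, so every comparison is exact):

```python
from fractions import Fraction

# Treat "Pi" as an unknown value p; 22/7 is just an illustrative
# stand-in so the arithmetic below is exact.
p = Fraction(22, 7)
x = (p + 3) / 2                      # the first class of solutions

# (3 - x)^2 = (p - x)^2 factors as (3 - p)(3 + p - 2x) = 0.
assert (3 - x)**2 == (p - x)**2      # the squared equation holds ...
assert 3 + p - 2 * x == 0            # ... because this factor vanishes,
assert p != 3                        # not because p equals 3.
```

Concluding "p = 3" alone amounts to discarding the factor that actually vanished.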

Here the confusion over interpretation is even clearer. At the beginning the student asserts the use of the symbol “4” as an indeterminate, but in the conclusion interprets it as the number 4.

What is your justification for the claim that “in the conclusion [the student] interprets it as the number 4”? Why doesn’t your justification, whatever it is, also indicate that the student was using the symbol “4” to represent the number 4 at the beginning?

My hypothetical student is actually being consistent. At the start, he’s using the symbol “4” as an indeterminate–the (unknown) smaller of the two integers. At the end, he demonstrates that the value represented by “4” was, in fact, 10–the correct answer. If you took this proof to a culture identical to our own, except that they have exactly reversed the roles the symbols “x” and “4” play in our culture, they would accept the given proof as easily as we would accept the proof:

Let x denote the smallest of the two integers. We know:

x + x + 1 = 21

2x = 20

x = 10, our final answer.

There’s no logical inconsistency in the original student’s proof (using “4” in place of “x”); nowhere does he use the symbol “4” to represent the value 4.

Of course, the symbol “4” already has a “universally” understood meaning–it represents the value 4 (nitpick–by “universally” I really mean restricted to our culture, of course).

Because of this, the complaint might be made that the problem was solved in a confusing manner, using the symbol “4” in a confusing way. I would agree. That was the whole point! Objectively, there’s absolutely no reason to reject the student’s work–“4” is just an arbitrary symbol, use it however you wish. Subjectively, there are many reasons to object; namely on the basis that we’re “kidnapping” (in a manner of speaking) the symbol “4” and using it to represent things other than what we’re accustomed to. In some objective sense, this may be fine, but in a subjective sense, it can’t help but cause confusion.
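The point that “4” is just an arbitrary symbol can be caricatured in code (a toy sketch; the function and its name are invented here for illustration): the solving procedure never consults the name of the unknown.

```python
# Solve "s + (s + 1) = total" for an unknown s, reporting the answer
# under whatever name the unknown happens to be given. The algebra
# never uses the name, so "4" works exactly as well as "x".
def smallest_of_consecutive_pair(total, name="x"):
    # s + (s + 1) = total  =>  2s = total - 1
    s = (total - 1) // 2
    return f"{name} = {s}"

print(smallest_of_consecutive_pair(21, "x"))   # x = 10
print(smallest_of_consecutive_pair(21, "4"))   # 4 = 10
```

Objectively, both outputs report the same correct answer; subjectively, only the second one invites confusion.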

At any rate, of course I know you from these boards, Mathochist, I know you know your math. I do see your point. I think perhaps we’re talking a bit past each other regarding a symbol in and of itself (such as “Pi”) versus the meaning of the symbol (such as the commonly understood value of Pi). Agreed?

You can object to the proof on the basis that “Pi” is being strictly used as a symbol, or you can accept that “Pi” is being used to represent a specific, commonly-understood value, and reject the proof on other grounds, as I have done. Personally, I think the latter is more natural since the vast majority of people seeing this proof, in this context, will see “Pi” and understand it to represent the value we are familiar with (just like we do when we see “4”).

I omitted this branch, thinking it was obvious. If the student was trying to interpret “4” as the number 4 at the beginning then he’s begging the question, which is an error right off the bat.

I think the point with the pi puzzle is that if you replace the symbol pi with any other symbol, whether or not that symbol has an a priori accepted meaning, the puzzle remains essentially the same. I could just as easily use the same framework to “prove” that 4 = 3, or that e = 3, or y = 3. Hence, one might reasonably say that that erroneous “proof” does not depend on the number pi or any of its properties.
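That symbol-independence is easy to check (a small Python sketch; the function name is invented here): run the same chain with any value v in place of Pi, and the squares always agree while the conclusion "v = 3" stays false.

```python
import math

# The same fallacious chain for an arbitrary value v in place of Pi:
# with x = (v + 3)/2, the squares (3 - x)^2 and (v - x)^2 agree,
# yet v = 3 does not follow.
def equal_squares(v):
    x = (v + 3) / 2
    return math.isclose((3 - x)**2, (v - x)**2)

for v in (4.0, math.e, 10.5):
    print(equal_squares(v), v != 3)    # True True, for every v
```

Nothing about pi (irrationality, transcendence, its decimal expansion) ever enters; the error lives entirely in the square-root step.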