There is actually more wisdom in the students’ fallacy than one might think. The problem is that the wisdom is being misapplied to theoretical scenarios where letting x = y is permitted up front. The logic would be well applied in a hypothetical legal case scenario with a dead body and an instruction to the law school student to “Suppose that the defendant is the murderer”. That is illogical because it depends on an unstated assertion that there actually is a murderer. Before assuming (you know what happens when you do that…) that there is a murderer, you first have to find evidence that what happened was actually a murder, as opposed to a suicide, accident, lawful execution, combat death, justifiable homicide, or some other event that can leave a body without a legally recognized act of murder taking place.
There’s also the old fallacy about the student who is told something like “Suppose that the number of sheep in Scotland is x” and responds “But what happens if the number of sheep in Scotland is not x?”, suggesting that there could be a case where x is actually twice the number of sheep in Scotland, or where someone else (someone in another class, maybe?) has preempted x for their own problem about the number of cows in Germany.
While division by zero is the more fundamental problem, I spotted the error just by looking at the last two steps:
“Therefore (x+y)=y.”
“If x = 1 and y = 1”.
x+y=y can be simplified to x=0. Having established that x=0, and since x=y, we also know that y=0.
Thus, “if x=1” or “if y=1” are inherently false suppositions. The variables x and y are already solved quantities, and neither of them equals 1.
That is like saying “I spotted the error just by looking at the last step: (1 + 1) = 1. This is wrong; 1 + 1 does not equal 1.”
Typically, what is meant by “spotting the error” is understanding what specifically, in the chain of apparently sound reasoning, caused us to move into unsound conclusions.
In this case, the proof purported from the start to be applicable to any pair of equal x and y. You are correct that the reasoning of the proof, by deriving that any such x and y satisfy x + y = y, would further allow us to derive that any such x and y are equal to zero, and that this should not be derivable simply from the presumption that x and y are equal. The question is, where did the proof go awry in bringing us to this conclusion? To answer this question is to do more than simply to point out that we reach a wrong conclusion; that is why people focus on the division by zero, as the unique point in the proof where we infer an equation not actually entailed by previously supposed or inferred equations.
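(For reference, since only its last steps are quoted above, the proof on the T-shirt presumably runs along these familiar lines; the reconstruction is mine, but it matches the quoted steps and the division-by-zero diagnosis.)

```latex
\begin{align*}
x &= y                     && \text{(the supposition)} \\
x^2 &= xy                  && \text{(multiply both sides by } x\text{)} \\
x^2 - y^2 &= xy - y^2      && \text{(subtract } y^2\text{)} \\
(x + y)(x - y) &= y(x - y) && \text{(factor)} \\
x + y &= y                 && \text{(divide by } x - y\text{, but } x - y = 0\text{)} \\
1 + 1 &= 1                 && \text{(take } x = y = 1\text{)}
\end{align*}
```

Each line follows from the one before it except the division by x − y, which is exactly the step the division-by-zero diagnosis points at.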
OK, I understand what you’re saying, but I’m not sure you understand my point. Let’s assume that there was no division by 0. x=y and x+y=y are two valid equations, right? There’s nothing inherently wrong with them separately or together.
When you solve those two equations, you see that x=0 and therefore y=0.
Therefore the statement “let x=1” is wrong. It doesn’t matter what came before; it still contradicts what was previously established. Nothing true could possibly come from the last two steps, even if the rest of the T-shirt were correct.
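(A minimal sketch of that claim, using SymPy; the setup here is my own illustration, not anything from the T-shirt:)

```python
# Solve the system {x = y, x + y = y}: its only solution is x = 0, y = 0,
# so a later "let x = 1" contradicts what the two equations already force.
import sympy as sp

x, y = sp.symbols('x y')
print(sp.solve([sp.Eq(x, y), sp.Eq(x + y, y)], [x, y], dict=True))
# [{x: 0, y: 0}]
```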
The argument purports to derive x + y = y from only the premise that x = y. This cannot be valid. “Spotting the error”, then, is figuring out which particular link of the apparently valid chain of reasoning from x = y to x + y = y was erroneous.
If you really had legitimately derived x + y = y from only the premise x = y, it would be just as legitimate to go ahead and plug in x = 1. Yes, this would conflict with the fact that you could also derive x = 0 and y = 0. But that is the observation *that* the reasoning has gone awry; it isn’t an explanation of *how* the reasoning has gone awry.
There seems to have been a “genre” of such jokes in the early 50s (and perhaps earlier). This particular Abbott and Costello clip is from 1953 (though they also used it in other contexts, as with many of their bits; see two more instances of it here, along with one other bit of numerical tomfoolery), but see also this Ma and Pa Kettle scene from 1951, doing the exact same joke in the exact same permutations with 5 x 14 = 25.
Note, of course, that in all these cases, in each permutation, what is being done is ignoring the standard interpretation of the left digit of the two-digit factor as a tens digit, and instead treating it as yet another units digit. (Thus, for Abbott and Costello, 13 is manipulated as 1 + 3 = 4, which does indeed multiply by 7 to yield the standard 28; similarly, for Ma and Pa Kettle, 14 is manipulated as 1 + 4 = 5, which does indeed multiply by 5 to yield the standard 25.)
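(If it helps to see the trick mechanically, here is a small illustrative sketch; the function is my own framing of the routine, not anything the comedians spell out:)

```python
def joke_product(units_factor: int, two_digit_factor: int) -> int:
    """Multiply the way the routines do: ignore place value and treat
    every digit of the two-digit factor as a units digit."""
    digit_sum = sum(int(d) for d in str(two_digit_factor))
    return units_factor * digit_sum

print(joke_product(7, 13))  # 28, the Abbott and Costello answer
print(joke_product(5, 14))  # 25, the Ma and Pa Kettle answer
```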
[Standards for routine-copying were apparently quite different in earlier eras. Abbott and Costello didn’t originate “Who’s on First?” either; they just perfected it.]
The radical symbol √ conventionally stands for the principal square root function. Therefore √(1) = 1. That’s why your algebra teacher made you write ± when learning quadratic equations.
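(Spelled out as a one-line worked example, just to make the convention concrete:)

```latex
x^2 = 1 \;\Longrightarrow\; x = \pm\sqrt{1} = \pm 1,
\quad\text{whereas } \sqrt{1} \text{ by itself denotes only the principal root, } 1.
```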
By “ill-behaved”, I don’t mean ill-defined*. I mean ill-behaved in the sense that instead of clean patterns in its behavior, we get all kinds of unclean aberrations. For example, the very one under discussion: the principal square root function almost preserves multiplication, but doesn’t actually (except if we restrict to a context where there is no such thing as “principal” vs. “non-principal” square root); it is discontinuous, it lacks conjugation symmetry, etc.
Whereas thinking of square roots without arbitrarily designating principal square roots is a perspective which allows the restoration of clean patterns. From this vantage point we can instead say that “any choice of square root of a times any choice of square root of b is a choice of square root of ab”, that the function mapping inputs to their sets of square roots is continuous in a suitable sense, and so on. It’s a much cleaner way of thinking about things in most contexts, while there are very few contexts in which the discontinuous principal square root function is naturally called for.
[*: Though there are contexts in which only continuous functions are well-defined; e.g., intuitionistic mathematics, where the principal square root function cannot be totally defined on the complex numbers precisely because of its discontinuity]
(The above is about the principal square root function in contexts such as the complex numbers. As I began to indicate, in other contexts, principal square root functions can work more cleanly.)
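(As a concrete illustration of the two failures mentioned above, here is a sketch using Python’s cmath, which implements the principal square root on the complex numbers; the probe values are my own choice:)

```python
# The principal square root fails to preserve multiplication...
import cmath

a = b = -1
print(cmath.sqrt(a) * cmath.sqrt(b))  # 1j * 1j = (-1+0j)
print(cmath.sqrt(a * b))              # sqrt(1)  = (1+0j), not the same value

# ...and is discontinuous across the negative real axis (the branch cut):
print(cmath.sqrt(complex(-1,  1e-12)))  # approximately  1j
print(cmath.sqrt(complex(-1, -1e-12)))  # approximately -1j
```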
But the principal square root function is explicitly defined on R -> R, so this seems like hairsplitting. You’re talking about a different function.
Anyway, I still fail to see how any of this constitutes ill behavior. Functions are allowed to be discontinuous, and they’re not required to be bijective. Throwing the conventional definition out because you think the graph is ugly, in order to pick nits about a joke proof, just seems like so much dick measuring rather than useful discussion.