Mathematicians - please provide some context on this freshman math example

In the spirit of both fighting ignorance and biting off more than I can probably chew, I thought I’d work through some MIT OCW math courses. Looks like 18.100B is worth giving a go.

So, would you be so kind as to take a look here, from Principles of Mathematical Analysis? Scroll down to equations 3 and 4.

Here’s general question 1: are there freshman university students who can look at the steps from equation 3 to equation 4 and see how the latter follows from the former at a glance? I finally worked it out (I won’t reproduce my work), but start with equation 3 and: 1) square both sides; 2) subtract 2 from both sides; 3) on the right side, get a common denominator of (p+2)^2 for the negative 2; 4) expand the right side; 5) do a lot of arithmetic; and 6) voila, equation 4! I counted it up, and it took me 8 lines of math to get it.

Here’s general question 2: is the point to make sure one can follow all the steps (as I attempted to do here), or do mathematicians just go “uh-huh” when they see the leap from equation 3 to 4 and concentrate on the larger point he’s really trying to make?

Consider Paul Wolfskehl, he of the eponymous prize.

How else would one do it? If you start with an expression for q, and you want an expression for q^2 - 2, then you square q, and then you subtract 2. That’s what the expression q^2 - 2 means.
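Spelled out, and quoting Rudin’s equations from memory (so double-check the exact forms against the text): (3) defines q = p - (p^2 - 2)/(p + 2) = (2p + 2)/(p + 2), and (4) states q^2 - 2 = 2(p^2 - 2)/(p + 2)^2. Squaring and subtracting 2 then runs:

q^2 - 2 = (2p + 2)^2/(p + 2)^2 - 2
        = [(2p + 2)^2 - 2(p + 2)^2]/(p + 2)^2
        = [4p^2 + 8p + 4 - 2p^2 - 8p - 8]/(p + 2)^2
        = (2p^2 - 4)/(p + 2)^2
        = 2(p^2 - 2)/(p + 2)^2,

which is the right-hand side of (4).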

Yep, I realized (eventually) that’s how one would do it. But there are about 7 lines of calculation before one gets exactly what’s on the right side of equation 4. Would a typical freshman math major know at a glance that all those calculations would end up working out to the right side of equation 4, or would they have to (as I did) work out all the middle steps to understand where that came from?

Or you could compute (q + √2)(q - √2). It wouldn’t be any simpler in this particular case, but it might be if the right-hand side were different. An experienced mathematician has a head full of these kinds of “tricks” and would be more likely to do otherwise-complex manipulations at a glance.
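If you would rather have a machine do the bookkeeping, a computer algebra system will confirm that both routes land on the same thing. Here is a quick sketch in Python with sympy, taking (3) and (4) as quoted above (so the exact forms are an assumption to check against Rudin, not gospel):

```python
import sympy as sp

p = sp.symbols('p', positive=True)

# Equation (3) as quoted above: q = p - (p^2 - 2)/(p + 2)
q = p - (p**2 - 2) / (p + 2)

# Route 1: square q and subtract 2
route1 = sp.simplify(q**2 - 2)

# Route 2: the difference-of-squares trick, (q + sqrt(2))(q - sqrt(2))
route2 = sp.simplify((q + sp.sqrt(2)) * (q - sp.sqrt(2)))

# Right-hand side of equation (4) as quoted above
rhs = 2 * (p**2 - 2) / (p + 2)**2

print(sp.simplify(route1 - rhs))  # prints 0: route 1 matches (4)
print(sp.simplify(route2 - rhs))  # prints 0: the trick gives the same expression
```

Neither route is deep; the point is only that the verification is mechanical.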

The latter, I think. A more experienced mathematician, at least, would react as Chronos did: he wouldn’t see at a glance that (3) is obviously equivalent to (4), but it would be obvious to him how one would go about getting from one to the other if he wanted to work out the details.

See also the discussion of “When is something obvious?” in which a mathematician is quoted as saying “A statement is obvious if a proof instantly springs to mind.”

I’d think that a first-year mathie would probably have the fluency to do those little algebraic manipulations in their head, or at least wouldn’t need much paper to verify them. It’s kind of a convention that if you have an expression like “a + b/c” you put it all over the same denominator as “(ac + b)/c”, and most of the rest follows from that.

Or, as my freshman calc prof put it: “The best math magician is the best mathematician.”

There’s also the definition that anything is obvious if it has already been proven. Or there’s Feynman’s Theorem, which states that “mathematicians cannot prove unobvious theorems.”

Hey D18, congrats. Maybe I’ll tune in. Math is great, huh? Too bad I was too stoned in Calculus.

Raymond Smullyan, in What is the Name of This Book? tells this story of four math professors, and what they all meant when they said something was “obvious”:

It’s just the same as if he had written “8324^2 - 2 = 69288974”. I couldn’t tell immediately, just from viewing this equation without context, that it was correct, but I would know immediately how to mechanically check whether it was correct or not, and my trust in the author would be such that I likely would not bother actually carrying out the check unless I felt there was some unremarked upon insight to be gained by doing so.

Thank you all for your comments. I found that helpful.

Leo, don’t know how far I’ll get, but no harm in trying!

This strikes me as an excellent analogy. (BTW, I trust you enough not to check that equation, but I did at least look at the last digits as a quick plausibility check. :))
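If anyone does want to carry out the check, it is mechanical enough to be a couple of one-liners; here is a small Python sketch of both the full check and the last-digit shortcut:

```python
# Full mechanical check of the example equation quoted above
print(8324**2 - 2 == 69288974)               # True

# Last-digit plausibility check: 4^2 ends in 6, and subtracting 2 leaves 4,
# matching the last digit of 69288974
print((8324**2 - 2) % 10 == 69288974 % 10)   # True
```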

Let me start by saying that “baby Rudin” is anything but a freshman text. I took the course based on it in my third year. I realize that calculus has moved into the high schools, but “Introduction to Mathematical Analysis” is still not a freshman course. Only advanced math majors would take it. In fact, at McGill, where there is a strong distinction between majors and honours, only the latter would take it.

Now, in the example given, you obviously have to square the right-hand side and subtract 2. If I were reading this, I would likely say to myself, “that’s how you do it,” and skip the actual doing of it. But if I taught it I would likely just go through it, and a student might be expected to do it in detail. One point: inclusion of the intermediate steps, while possible, would be unenlightening. As a student you will not learn anything from them; you might learn something by doing it yourself. As has often been said, mathematics is not a spectator sport. When I read a paper and come to a theorem, I generally first try to prove it myself and then, if I cannot do so pretty quickly, I look to the published paper not for the complete argument but for hints. Only then would I really comprehend what is going on.

A couple years ago, I was reading something in which the result seemed improbable. The published proof occupied about 3 lines (I am not exaggerating this). When I finally finished writing down the proof, it occupied 2 pages! But, and this is crucial, those three lines provided all the hints I needed.

I attended Harvey Mudd College and took the Real Analysis class senior year, but I wasn’t a math major. Most of the math majors would take that class sophomore year, as I recall. Rudin was our text, and I remember the professor working us through this particular proof in class. I would not guess that most of the students in class could automatically make the jump from equation 3 to 4 in their heads, but, if necessary, they could have done so on paper given sufficient time.

Bit off more than I can chew, indeed!

The MIT math site says “The subject 18.100 Real Analysis is basic to the program” so I figured I’d start there. I guess, like the word “obvious” as discussed up-thread, the word “basic” has some different meanings!

I’ll go find something a little more in line with where I’m currently at.

While we’re at it, all textbooks have something like “introductory” or “an introduction to” in their titles, because by the time you get to the advanced classes, you’re going beyond what’s in any textbook.

Nonsense. I offer you: Algebra (Lang), General Topology (Kelley), Homological Algebra (Cartan & Eilenberg), Categories for the Working Mathematician (Mac Lane), Commutative Algebra (Zariski & Samuel), just glancing at a few of the books on my shelf, but I could go on all day.

I found the following description on MIT’s website:

(bolding mine)
“Option B” is what you mentioned in your OP, which may not be the best choice, but if you have enough experience with Calculus, “Option A” might not be beyond you. From a page devoted to a specific (not online?) section of that: