Does higher level math ever become as easy as reading?

Math eventually becomes as easy as reading Finnegans Wake.

I nominate “comparagle” as the New Word of the Year. We should definitely use this (somehow) in that thread about how to remember < from >

It may be readily seen that my views on math pedagogy tend strongly towards the “old school” ideas – but with a twist.

I am astonished to learn, from Thudlow Boink’s post above, that even simple multiplication is mired in controversy over how it should be taught. Abandoning the initial 2nd-grade exposure as simple repeated addition? Now that seems radical to me. Even ab-surd.

Most of us seem to agree that shooting for “understanding” (rather than rote rule memorization) is the way to go. (Quoth Tom Lehrer in “New Math”: . . . The goal is to “understand what you are doing, rather than to get the right answer”.)

Maybe we can discuss what people mean by “understanding”.

I like the idea of teaching by presenting simple models, or practical uses, or illustrations, or whatever, to give any new topic some motivation. I think students will best come to “understand” the concepts and procedures that way. The “New Math” approach (which was just coming into fashion in time for me to see its leading edges in the 1960’s) certainly emphasized “understanding”, but seemed to have a different notion of what that meant.

New Math, as I saw it, tended to teach things in the abstract first. This is the most perfectly logical and understandable way, of course – for seasoned mathematicians who already understand that stuff!

I learned about negative numbers long before I took first-semester 9th grade algebra. But I think some other students hadn’t. I remember to this day the first exposure to negative numbers in my 9th grade algebra text: a formal definition of the “additive inverse,” stated entirely in the abstract.

Well, that sounds like something a lawyer would have written. What is a student to make of that, if he doesn’t already have some understanding of negative numbers? The text then goes on to develop more properties about negative numbers, all in the abstract like this.

No wonder math students lose it, thinks I.

I would much prefer to teach negative numbers by showing examples of situations where they might be useful (and it doesn’t even have to start with the bizarre concept of numbers “less than zero”), then present negative numbers as an extension to the number system that they already know. I strongly suggest teaching new topics the simple, less-abstract way first, then bringing in the abstract constructions a little later. And I like the idea of extending concepts by adding, you know, extensions, as mentioned in that Wiki article that Thudlow Boink cited.

ETA: The definition of a surd that I liked above isn’t exactly the definition that I thought I knew, although it’s close.

Whut iz algəbЯa?

I seriously think the teacher has a BIG part to play in explaining math in a way that students can easily grasp (my view is that it should be more concrete and less abstract, at least as each topic is first exposed). It isn’t good if a teacher just recites definitions and theorems, like the definition of “additive inverse” that I mentioned above, as some teachers more-or-less do.

I would like to begin a college-level Algebra I class (and maybe even a 9th grade algebra class) by asking: “What is algebra?” and having a bit of discussion about that. Here are my thoughts:

What is algebra?

I’d start by asking this, which I would expect a lot of students to answer “Algebra is where you use letters for numbers” or something equally vague.

My answers:

Algebra is five things:
[ol]
[li]The collection of facts about numbers and number systems [in particular, just the Real numbers for now], and how numbers behave and relate to one another. These facts exist whether or not anybody actually studies them, and have existed, presumably, since the Universe was created.[/li]
[li]The study of (1).[/li]
[li]A language, both spoken and written (but mostly written), by which mathematical ideas and facts can be expressed, to be used for communicating ideas from one person to another.[/li]
[li]A set of basic tools for solving mathematical problems.[/li]
[li]A collection of techniques [like completing the square, for example] for using the language and the basic tools to accomplish more complex feats.[/li]
[/ol]
and I would illustrate each of those with simple examples AND analogies to other fields. For example, for (1) and (2) I would liken algebra to invertebrate zoology: it is (1) the body of facts about how invertebrates live and function, and (2) the study of those facts. I would discuss (3) by talking about the need to write your math thoughts so others can read them, and the need to read others’ writings. I would point out that the notations used in (3) also provide us the tools (4) to manipulate math expressions. And I would illustrate (4) and (5) with analogies to carpentry: tools such as hammers, screwdrivers, and saws, and techniques for using those tools to solve problems. (Did you ever watch a carpenter build a staircase?)

Throughout the semester, as various new “things” are mentioned, I would ask the students to consider which of the above categories the new “thing” fits in. For example, is each new “rule” a fact about how numbers work (like the Commutative Rule), or is it a man-made convention about how things ought to be (like the rules of operator precedence, which are nothing more than an agreed-upon convention of the written language)?

In tutoring beginning college-level algebra, I have seen lots of confusion that I think would clear right up if students just understood things as I have outlined here. For example, some students aren’t clear on the concept that
3x + 6 = 21
is the same problem as
3d + 6 = 21
because they aren’t clear on the idea that the choice of which letter to use is just a man-made convention. Or they think the Distributive Law is too complicated and wonder why mathematicians make such rules, because they don’t understand the distinction between raw facts about numbers and arbitrary conventions, like precedence rules, that we make up. But even “arbitrary” rules like precedence aren’t entirely arbitrary – we make rules like that for a reason: they make for good problem-solving tools.
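For instance, here is roughly the walk-through I have in mind (just my own sketch), sorting each step of 3x + 6 = 21 into the “fact” bin or the “convention” bin:

3x + 6 = 21 . . . reading “3x + 6” as (3·x) + 6 at all is a convention (notation and precedence)
3x = 15 . . . subtracting 6 from both sides works because of facts about how numbers behave
x = 5 . . . likewise dividing both sides by 3

And rewriting it with d in place of x changes nothing but which letter we happened to pick; that is convention again, not a different problem.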

Huh. I took calc in ’91 or ’92 and we were definitely taught about derivatives being the slope, and the integral being the area under the curve. The first third of the year was derivatives, the second third was integrals, and I forget what we moved on to after that. But it was conceptually taught, starting with limits and progressing along much in the manner described by Senegoid. I can’t imagine how it could be taught simply as an arbitrary set of formula memorizations. I mean, we certainly memorized some stuff for deriving more complex equations, but the underlying concept of what a derivative and an integral are was necessary to solve word problems.

The analogy between text, equations, and music notation is a good one, and works on a number of levels.

First, I’m confident that we all have different propensities. Some of us learn all three quickly, some all slowly, and others excel at some but not at others, with equal effort.

Second, no matter what your propensity, you can learn, and the more you read any of the above, the easier it becomes. It’s hard to avoid reading text, and we get so much practice it ends up seeming easy, but we spend thousands of hours developing our abilities. If you spend thousands of hours reading math or music, it gets a lot easier!

Not long ago I pulled out a book on my shelf, “Digital Filters” by Hamming. The first chapter covers the required underlying math (diff EQ, which was the last course I took in college; I was totally mystified that I got a C rather than an E, since I had NO CLUE what I was doing on the final exam – I parroted a few things the lecturer had put on the board, IIRC). The math was almost complete gibberish. So, I got a book that’s essentially “Differential Equations for Dummies.” It started with some basic calculus, which I vaguely recognized (from 35 years earlier). I had to pull out my old college Calc textbook and start relearning, and it was a tough go (and I didn’t ever make it back to Digital Filters – maybe next year!).

Yet I remember having little trouble with three terms of college calc. Differentiation and integrals? No sweat; high school had provided an excellent understanding, and in college I just had lots of trig formulas to remember – that stuff in the endpages of the calc textbook. Second term: Taylor series (infinite series) analysis, way cool. Third term: multivariate and partial differentials, oh boy, big fun, especially trying to draw/visualize. I’d been doing algebraic math steadily since age 14, so simply reading the math came pretty easily.

I remember as a kid in school, every year getting a new math book, looking inside and seeing stuff that looked like a foreign language. By the end of the year, it was all simple! (Actually, I was annoyed that so much of the year was review of stuff from earlier years, only getting to that new complex looking stuff near the end of the year. ARRGH!)

So, I’ve looked at math from both sides now (sung to the tune of Joni Mitchell’s “Clouds”).

I can still read a polynomial well enough, and now and then I even use exponentials and logarithms, and can manage to derive the few formulas I need, but I definitely know what the OP means about the frustration of deciphering higher math.

The bottom line is simple. If you want it to be easy, just do lots and lots and lots of it. Stuff that looks like stuff you’ve seen before gets real easy. (This is true of music, too.) The better you get in general, the wider range of stuff looks easy. For anyone but the world-class, there’s always stuff that looks nearly incomprehensible.

And even for the world-class pianist, unless they’ve been working (hard) on something like Gaspard de la Nuit, knowing what each symbol means doesn’t make comprehending the whole any less of a serious undertaking!

Aha! I finally found the quote this thread has been reminding me of. It’s by Andrew Wiles, the mathematician who is famous for solving Fermat’s Last Theorem (quoted here):

So, even for a great mathematician, the clarity and insight are preceded by struggle and stumbling around.

Note, however, that he’s talking about doing mathematics—that is, conducting original mathematical research. The OP compares math to reading, but doing mathematics at that level is more akin to writing than to reading. And for many writers, including some great ones, writing doesn’t ever become effortless and automatic.

This is exactly the feeling. I am not a mathematician but use applied math in my research and prove my own results. They are very useful for my kind of work but I do not kid myself into thinking they are mathematically interesting. It is a lot of wandering around in the dark and enduring months of confusion before a little clarity emerges. But immediately after that feeling of clarity comes the feeling that the end of the proof is farther away than you had first thought. Sometimes I don’t even think I am close to the end but voila, I have the result. It’s difficult, frustrating, confusing, and deeply addictive.

I was also told for the longest time that I was just “not a math person,” well-intentioned guidance that I have spent years both resenting and trying to get over.

Can you elaborate on your proof? I came up with a solution in a few minutes, but only via algebra:

[spoiler]The statement “2^n-1 is prime implies n is prime” can be rephrased as “n is composite implies 2^n-1 is composite” by Boolean laws.

So if 2^(ab) - 1 can always be factored when a and b are both greater than 1, then we have a proof. And indeed:
2^(ab) - 1 = (2^b - 1)(2^((a-1)b) + 2^((a-2)b) + … + 2^b + 1)

Of course, it’s necessary but easy to see that neither subexpression on the right is equal to 1 as long as a and b are not 1.[/spoiler]
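If anyone cares to sanity-check that identity numerically rather than by algebra, here’s a quick throwaway script (Python, just a sketch, nothing official):

[code]
# Verify 2^(ab) - 1 == (2^b - 1) * (2^((a-1)b) + 2^((a-2)b) + ... + 2^b + 1)
# for a range of small exponents a, b > 1.
for a in range(2, 9):
    for b in range(2, 9):
        left = 2**(a * b) - 1
        right = (2**b - 1) * sum(2**(k * b) for k in range(a))
        assert left == right, (a, b)
print("Identity checks out for all a, b in 2..8")
[/code]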

The Wiki article says that teaching “repeated addition” may interfere with later generalizations of multiplication (to complex numbers, etc.). However, it might be equally claimed that *not* teaching repeated addition interferes with later generalization of operators: that addition is repeated succession, multiplication is repeated addition, powers are repeated multiplication, power towers are repeated powers, and so on. Perhaps not as useful by itself as complex numbers, but nice (to my mind) as a way of seeing operators themselves as part of a general framework and not just standalone units.
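Just to make that ladder of “repeated whatever” concrete, here’s a little sketch (Python; the function and its name are my own invention, not anything standard):

[code]
def hyper(level, a, b):
    """Level 1 = addition (repeated succession), 2 = multiplication (repeated
    addition), 3 = powers (repeated multiplication), 4 = power towers, ...
    Only intended for positive integers a and b."""
    if level == 1:
        result = a
        for _ in range(b):       # addition: apply "+1" to a, b times
            result += 1
        return result
    result = a
    for _ in range(b - 1):       # repeat the next-lower operation b - 1 more times
        result = hyper(level - 1, result, a)
    return result

print(hyper(2, 3, 4))   # 12 = 3 * 4
print(hyper(3, 2, 5))   # 32 = 2 ** 5
print(hyper(4, 2, 3))   # 16 = 2 ** (2 ** 2), a small power tower
[/code]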

looks like geek to me.

My proof (that 2[sup]n[/sup] - 1 can be prime only if n is prime) does exactly what your proof does, only visually.

Start with the same contrapositive that you did (prove: If n is composite, then 2[sup]n[/sup] - 1 must be composite).

Next: (HINT): Consider what 2[sup]n[/sup] - 1 (for any positive integer n > 1) looks like when written in Base 2. (It helps if you’ve spent the better years of your life doing computer programming. :slight_smile: )

Can you take it from there?

ETA:

Ya got that right! :smiley:

Ever tried to read Euclid in the original? It’s Greek geek!

What do you call the rendition of Euclid into English?

Translation of axioms. Yes, I made that one up myself. I asked that question out loud in Calculus class once, at the beginning of an exam.

When I was a kid, I was excellent at reading (and spelling) and exceptional at math. When a math teacher would ask the class a question, and go around the room getting everyone’s wrong answer, they could always count on me to get the correct one . . . or even a solution that was better than the teacher’s. I think that what I did right and everyone else didn’t do was visualization. I’d see the problem in my head, and do all the calculations almost instantaneously. That was also what made me a good speller.

My SAT score was 1600 (of 1600).

Of course, that was many decades ago. These days my brain is basically dead meat.

Oh! Nice. Yes, I see it now. In fact, it’s also trivial to imagine what my right-hand subexpression (which otherwise looks a bit hairy) must look like.

The left side corresponds to a contiguous cluster of 1s, while the right side has a 1 in every location where the cluster “plugs in”. It’s just a way of concatenating a bunch of the clusters. But only if the count is composite can you make the pattern work out.

Here’s my way of thinking the same thing:

(2[sup]n[/sup] - 1), written in binary, is just n 1’s in a row. Example:
(2[sup]12[/sup] - 1) = 111111111111
If n is composite, say n = ab, then those digits can be grouped into a groups of b 1’s (or b groups of a 1’s):
111111111111 = (e.g.) 1111,1111,1111 = 111,111,111,111
from which the factors 1111[sub]2[/sub] and 111[sub]2[/sub] are immediately obvious.
(Their co-factors, respectively, being 1,0001,0001[sub]2[/sub] and 1,001,001,001[sub]2[/sub])
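And if you want to watch the machine agree with the picture, here’s a quick sanity check (Python, my own throwaway):

[code]
n = 2**12 - 1                                   # 111111111111 in binary
f1, c1 = int('1111', 2), int('100010001', 2)    # 15 and 273
f2, c2 = int('111', 2), int('1001001001', 2)    # 7 and 585
print(bin(n))          # 0b111111111111
print(bin(f1 * c1))    # same string: the 1111 block repeated at each "plug-in" spot
print(bin(f2 * c2))    # same string again, built from 111 blocks
[/code]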

But… But… But… We already have a user here on SDMB who calls himself KarlGauss. Is that you? Are you his sock puppet here? :smiley: Are you Carl Friedrich Gauss IRL?

Absolutely. In fact, pretty much everything up to linear algebra and calculus bears little relationship to higher math. Math is a creative endeavor about rigorously proving and discovering ideas about very abstract concepts; going through an algorithm to divide polynomials or integrate a complicated pile of trig functions might as well be a completely different subject. Higher math is math research, or at least proofs and abstraction in preparation for math research; it’s not memorizing a bunch of algorithms.

For example, here’s the proof of the Kervaire invariant problem from a few years ago. It’s a long paper solving a famously difficult unsolved problem using some very advanced and technical results, so it’s probably not the easiest thing in the world to read. It’s a good example of what mathematicians actually do, though.

So, your original question was about whether higher-level math becomes as easy as reading. The answer is no: even the greatest mathematicians can’t read through a paper like this in the same way that they could read through even the most convoluted novel. On the other hand, we do get to the point where we can read through something at the level of a calculus textbook without any effort. The main difference, though, is that even extremely literary, capitalized-Art novels are telling stories about characters, or using language in creative ways, or trying to impress some feeling or idea on you. Math papers are about presenting an ironclad, rigorous argument about something, and that always takes a decent amount of concentration and analysis. It gets easier with practice and more mathematical skill, but reading current research is, at the very least, a completely different flavor from reading a novel. It takes concentration to slog through, say, Ulysses, but reading math papers is on a separate axis.

On the other hand, if you’re asking (and sorry to be so vague in my response) whether it’s possible to read mathematical notation like prose, then the answer is an unqualified ‘yes’. It just becomes second nature after a while, and you forget there was ever a time when you didn’t recognize what C^\infty \Gamma(T^* X) was (and probably just as easily parse the LaTeX notation in your head). But I think that’s true with any sort of notation, like music notation as someone mentioned above.

That would make me . . . let’s see . . . 236 years old? . . . I think . . . damn, I used to know how to do this . . .