If I recall correctly, it’s exactly the method taught in this book my grandparents had called “Mathematics Made Simple.” (There were a whole series of these, including “Philosophy Made Simple,” a book which may or may not have been any good, but which shaped my future when I read it at the young age of ten or so…)
I did the square root thing as kind of a parlor trick for my math teachers back then.
Sorry, I guess that was a bit of a drive-by post. People who think they’re experts on pedagogy because they have a kid in school irk me. I’m terse when I’m irked.
As the wikipedia article says, the Singapore method is by and large simply the math curriculum employed in Singapore. needscoffee’s implication that a large feature of the curriculum is rote algorithm learning is wrong. As the article touches on, the curriculum is designed to foster a deep understanding of mathematics and develop strong problem-solving skills. This is a hallmark of the mathematics curriculum in many of the best-performing countries around the world, like Finland, the Netherlands, and Japan (notably lacking in North American curricula).
I’m most familiar with the bar-model method (or part-whole method) from the Singapore curriculum, which I’ve used to great success in developing fluency with fractions and introducing algebra. Here’s a decent description of how it works. I find this method is often concrete enough to appeal to students who simply have a mental block with using variables, yet abstract enough that it’s very versatile.
Thanks, Snifit. I was a little surprised to hear that the Singapore method was in opposition to the methods I’d described, since my memory from my (excellent) math pedagogy professor was that the concrete-to-abstract method of investigatory/constructivist pedagogy was based on what was happening in certain high-performing Asian countries. Turns out that it’s not in opposition to the methods I described, it’s the foundation for the methods I described.
Well, arithmetic makes sense. I’m not sure Euclidean geometry does. [1] And though I know the basic algorithms of calculus, I’ve never mastered the intuition behind the relationship between the slope of the curve and the area underneath it. Ok, I can see how they might be related in some way, but I wouldn’t expect that an anti-derivative could tell you the area under a curve (with the proper constant).
After a certain point, I found math a lot easier when I just accepted the algorithms. A multi-step proof doesn’t give me a very good overall understanding.
[1] I’m thinking of Euclid’s parallel axiom.
What doesn’t make sense about Euclid’s parallel axiom? It describes a perfectly cromulent abstract geometric space, one which I imagine most people find more intuitive to think about than other geometries. For example, the world of a flat sheet of paper. Lines are parallel just in case they’re oriented along the same direction, and there’s a unique line along any given direction through any given point. What’s nonsensical about that?
As for why the derivative and the area under a curve are related: the area under a curve between two points is equal to the width between those two points * the average height of the curve over that interval. In other words, over any interval, the average rate of increase of the accumulated area under a curve is equal to the average height of the curve. And thus, at any point, the instantaneous rate of increase of the accumulated area under a curve is equal to the instantaneous height of the curve. That is to say, if F(t) is the area under the curve y = f(x) from some fixed point up through x = t, then the derivative of F is f.
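The argument above can be checked numerically. The sketch below is my own illustration, not anything from the thread; the curve f(x) = x² and all the numerical parameters are arbitrary choices. It approximates the accumulated-area function F with a midpoint Riemann sum, then estimates F’s derivative with a central difference, and the result comes out close to f itself:

```python
# Numerical sanity check: the rate of increase of the accumulated area
# under y = f(x) matches the height of the curve at that point.
# f(x) = x**2 is an arbitrary example; any reasonably smooth f would do.

def f(x):
    return x * x

def area_under(g, a, t, steps=20000):
    """Approximate the area under g from a to t with a midpoint Riemann sum."""
    h = (t - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

def derivative(F, t, eps=1e-4):
    """Central-difference estimate of F'(t)."""
    return (F(t + eps) - F(t - eps)) / (2 * eps)

F = lambda t: area_under(f, 0.0, t)   # area from the fixed point 0 up through t

# The derivative of the area function at t should be close to f(t) = t**2.
print(derivative(F, 1.7), f(1.7))
```

Swapping in any other smooth function for f gives the same agreement, which is the Fundamental Theorem of Calculus seen numerically.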
The ones who irk me are the ones who refuse to see when methods aren’t working. I have Singapore math workbooks here; I know what the Singapore method is. After the concept is gone over, the students actually have to do the repetitive math problems in the workbook. I have had the awful Discovering Math books, and the TERC curriculum we had before that.

Some kids are going to do well in math no matter how you teach; others, not so much. Since the discovery-based curricula took over in my city, test scores have dropped, even after the math classes were lengthened by 15 minutes. Minority students have had the largest drop in scores.

Obviously you have no idea what is taking place in my city, and seem to be assuming that since where you are, things are fine, they must be fine everywhere. And for some reason you are ignoring the petition signed by hundreds of the math and science faculty of the Univ. of Washington asserting that the incoming students’ math skills are insufficient; maybe they’re not expert enough on pedagogy, either. But go ahead and assume that since I have kids in school, I’m ignorant and you’re superior.
So you haven’t seen it or worked with it yourself, but you recollect what your teacher told you. Then you haven’t seen the Singapore workbooks full of repetitive math drills where the problems are expected to be solved according to the algorithms provided?
Uh, yeah, you got it. That’s why I didn’t object to your characterization of it before, because I had nothing better than a shaky memory to go on. It’s totally absurd for you to get snarky about that.
Your quote says that students are ill-prepared. Your quote notably doesn’t blame constructivist pedagogy for this lack of preparation. Your quote is also not rigorous scientific data of the sort that would be relevant to your claim. It’s possible, for example, that the youdub system has been recruiting students from worse schools over the last decade, or that cutbacks in education budgets have led to increased class sizes, or that any of a number of other factors are in play. It’s also possible that constructivist pedagogy is to blame. But nothing you’ve offered clarifies what’s at fault.
On another note, I seem to recall my mother (another elementary ed teacher) saying that her previous school district used the Discovering Math books; if those are the ones she used, then she agreed with you that they were terrible. She said they explained things very poorly, were basically a rehash of that company’s same materials from the seventies, and were completely antithetical to the constructivist approach, even though they claimed to be constructivist.
I actually did learn that method in one class somewhere (Jr High? I really don’t recall). I remember setting it up like a division problem but then doing a different algorithm. I also remember thinking it was more confusing than long division, and that it could be confused with long division if someone were not practiced.
I don’t recall the part about how to feed in guesses for the next number (the 20 p + x part). But it was so long ago, that could have been included.
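For reference, the “20 p + x part” is the core step of the classical long-division-style square root method. The sketch below is my own reconstruction of that textbook algorithm (the function name and code structure are mine, not from the book mentioned upthread). Here p is the root built up so far and x is the next digit being guessed:

```python
def digit_by_digit_sqrt(n):
    """Integer square root by the long-division-style method.

    The digits of n are processed in pairs.  At each step, with p the
    root built so far, we find the largest digit x such that
    (20 * p + x) * x fits into the current remainder -- the "20 p + x"
    step -- then append x to the root and subtract.
    """
    pairs = []                   # pairs of digits, least significant first
    while n > 0:
        pairs.append(n % 100)
        n //= 100
    pairs.reverse()              # most significant pair first

    p, remainder = 0, 0
    for pair in pairs:
        remainder = remainder * 100 + pair
        x = 0                    # guess the next digit of the root
        while (20 * p + x + 1) * (x + 1) <= remainder:
            x += 1
        remainder -= (20 * p + x) * x
        p = p * 10 + x
    return p                     # floor of the square root of n

print(digit_by_digit_sqrt(152399025))   # 12345, since 12345**2 == 152399025
```

This is why the setup looks like a long division problem: digits come down in pairs and a trial “quotient digit” is found at each step, which is also why the two procedures are easy to confuse without practice.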
Notably look at the second video. That is Prof. Cliff Mass from Washington State, and he lists studies that support the conclusion that the problem is the math teaching methods.
Like the National Research Council formed a committee to evaluate the 1989 National Council of Teachers of Mathematics (NCTM) Standards, which push the constructivist methods and teaching materials. They concluded that there is NO evidence of effectiveness of these methods.
Or a study by William Hook, Wayne Bishop, and John Hook published in the peer-reviewed journal Educational Studies in Mathematics that showed when California moved away from constructivist methods to curricula of leading math nations, they had a “stunning increase in performance”. (Quoting the slide, don’t know if that is from the article.)
I haven’t read all the material there, but it is clear from that site that most of the critics do blame the teaching methods: the lack of focus, the undirected nature of the study, the failure to teach the consistent and most effective algorithms for computing the answer, and the lack of repetitive drill to reinforce the ability to compute. It’s like anything else - practice is essential.
They are not antithetical to the constructivist approach; they ARE the constructivist approach. But they’re poorly conceived and carried out.
Actually, the enrollment requirements of the UW have become more selective each year, to the point where it’s becoming difficult to get in. A “B” average is no longer sufficient. The Seattle schools have been using the discovery-based math curricula for at least the last 10 years. The Everyday Math curriculum was the latest one adopted, and it’s just as bad. The wheresthemath.org site has plenty of information and links, and simple googling brings up tons of information on the USA lagging behind much of the developed world in math achievement scores on the PISA and TIMSS studies. California schools switching from more discovery-based curricula to Singapore math are credited with raising their test scores in the last few years. The right approach to teaching math to kids may well end up being a mix of discovery-based learning with drills. The funny thing is, the schools get to take credit for scores which have been helped by the ever-growing private math-tutoring industry (which is almost completely non-constructivist). It’s one of the fastest growing fields out here.
The question then becomes, what are the tests testing? If you have schools teaching math, but tests testing arithmetic, then the students are going to do poorly on the test, because arithmetic isn’t math. On the other hand, though, if you have schools teaching arithmetic, but tests testing math, then the students are going to do, not just poorly, but abysmally.
So, are you arguing that, because a particular curriculum carries out a certain pedagogical approach poorly, it indicts that pedagogical approach? If I find a curriculum that carries out the drill&kill approach poorly, does that prove that we shouldn’t drill&kill?
Now, when you’re asked for specific scientific evidence to back up your claims, you offer nearly half an hour of opinionated YouTube videos to watch. Thanks but no thanks. I looked at the first dozen or two links from that page; not one of them was to a peer-reviewed study.
Not saying such studies don’t exist, but it’s your claim that constructivism is responsible for the dismal state of math preparedness today; so far you’re doing a totally lousy job of supporting that claim.
This article about Singapore education suggests that they’re in line with what I’m saying. From their description of Singapore’s “Teach Less, Learn More” program:
Should I go on? No longer operating from shaky memory: your less-than-stellar cites have encouraged me to dig into it, and lo and behold, I remembered correctly.
I had no dog in the fight with respect to the “discovery-based curricula”. It’s being pushed as a great thing in teacher colleges, but I’m not convinced. I’m about to start a career in teaching, and abandoning the traditional North American behaviourist method of teaching is an intimidating thing.
You’re right that I have no idea what is taking place in your city–all I did was point out that you mischaracterized the Singapore method. I have studied it and used it in a classroom. You have a couple of workbooks. Of course workbooks are going to be repetitive; that is their nature. They’re for students who need practice to nail down their problem-solving techniques. It does not mean the Singapore method emphasizes rote-learning.
I am interested, though, that you seem to think a discovery-based approach is doomed to failure. Things are not fine where I am, but many of the leading countries in math education use a discovery approach. Are you suggesting there is something about your area that means a discovery-based approach cannot work? You said some kids are going to do well no matter how they’re taught; that’s true, but how come some countries are far and away better than others when it comes to math education? Are you suggesting it’s genetic?
Again, I don’t know the situation where you are, but the fact is that a discovery-based approach can work. As Left Hand of Dorkness has said, there are many more factors at work in education than just the curriculum employed. In fact, the most important factor in student development is the skill of the teacher. If I were concerned about the quality of education in my area (and I am), I would first question whether quality teachers are being trained in and attracted to my area.
Just wanted to comment on this specifically, as well. Of course the faculty are probably good judges of their students’ skills in math, but have you ever taken a math course at the university level? The overwhelming majority of professors are absolutely horrid teachers. The old adage of “those who can’t do, teach” isn’t entirely wrong (my comfort with calculus stops at around the ODE level, and I barely passed abstract algebra), but the unspoken implication that those who can ‘do’ are thereby qualified to teach is certainly wrong. The math and science faculty are probably not pedagogy experts; they are math and science experts. We should listen to them when they say the math ability of their students is poor, but they don’t have special knowledge as to what the causes or solutions might be.
It strikes many people as odd that mastery of a subject doesn’t automatically qualify one to teach, but we are moving away from the concept of teachers as ‘transmitters of knowledge’ and towards that of ‘enabler of learning experiences’ (shoot me, I’m starting to speak in jargon). The transmission model is a poor one; many of my university professors were inferior to the course textbook. Making knowledge available is not the challenge of education. Making knowledge meaningful is.
I don’t know what distinction you are trying to make. The classes and tests all are supposed to cover the same material - addition, subtraction, multiplication, division, fractions, decimals, percents, etc. Higher levels get into algebra, geometry, trigonometry.
I do not know, but imagine the tests are all of the type “Find the answer to this problem”. Perhaps even multiple choice, with scantron. The test doesn’t care what methodology is used, only that the correct answer is determined.
I don’t want to speak for Chronos, but he may have been trying to distinguish between the ability to carry out algorithms vs. the ability to solve problems or do things that are not quite like anything they’ve ever done before.
There are people who can multiply two numbers together if you tell them to, but who are stuck if you give them a problem where the way to solve it is to multiply two numbers together.
Chronos’s point is not about methodology. Chronos’s point, provocatively worded, is that questions like “What is 323 * 276?” are fine tests of something, but is that something math? If your goal in “teaching multiplication” is for students to be able to carry out by hand the process of taking in two strings of digits and outputting the string corresponding to their product without error, then your goal is the cultivation of a skill which is both rather trivial (the mere ability to remember and follow instructions) and quite unrelated to what mathematics is essentially about (understanding of abstract concepts, reasoning about them, and so on). It is entirely possible that a student would understand multiplication perfectly well but be liable to get tripped up in details when forced to actually carry out long multiplications by hand, or not have memorized the “multiplication tables”, or whatever.
Naturally, if there is a mismatch between the kinds of skills one focuses on cultivating in one’s teaching and the kinds of skills one focuses on assessing in one’s testing, then many students will be assessed as subpar. But we have to question which skills are really the important ones to cultivate and test.
As an analogy, we could test students’ knowledge of history by handing them a long list of events and requiring them to state the date of each one. But is the skill this tests actually the skill which our goal in teaching history is to cultivate, or is it merely a red herring to what we’re actually after?