I never heard of any of the jargon above despite the fact that:
I took math every year from 1955 (K-1) to 1968 (2nd semester of freshman year in college). My SAT Math score in 1966 was an undistinguished but hardly shameful 597 (I think that would be ~80th percentile). I earned a B in college Algebra 101 and a C in Probability 101. Of course all that was back in pre-calculator days, and where I went to school there were never any crib sheets, as in not a single one for any class at any level. That of course made for a ton of memorization, an approach I fully support. And finally, I was never introduced to arrays and number lines, and I doubt I missed them.
I was taught by memorization. I didn’t realize what a futile concept that was until I began learning algebra, and realized there were much better ways to do things (many of which are similar to the “Common Core” methods) than all the nonsense I’d been taught.
I find it interesting that Israel is below the United States in the PISA scores for both Math and Science while it ranks very high in science and technology achievement and “boasts the highest number of scientists, technicians, and engineers per capita in the world” … but be that as it may.
Trends are interesting things … and one interesting thing is that the trend is for U.S. student math scores to be improving (especially in the younger grades, albeit with some recent leveling off) and that the relatively weak position internationally long predates the implementation of Common Core.
Of note, the U.S. is not the best at fourth grade, but it’s not too shabby either.
International comparisons must be understood in context. Let’s take the best score on PISA (taken at age 15) for a comparison: Singapore. In Singapore children were traditionally tested by grade 6 (the end of compulsory education) into one of three tracks: “a special school for the gifted, a vocational school or a special education program.” In recent years that has been changed to dividing them among the “Integrated Program” or “Express” tracks, one of two “Normal” tracks (one “academic” and one “technical”), and a “Vocational” program.
Still, that system does produce impressive scores. Other than families knowing that the child’s entire future is on the line from the earliest grades on, and pretty much written by the time of those 6th grade placement tests, and impressing that upon the kids constantly, maybe there is something about the way it is taught?
How about Finland? Most scientists and engineers per capita in the world. Also tops in those international student test scores. They must, like Singapore, test and pressure … but then they likely get kids memorizing like crazy … right? Uh, no. No tracking. Not lots of testing. They are all about equity.
Huh yet again.
For the little it is worth, my disappointment in the math education my kids got has been that it has not been applied the way reading skills are. You learn to read and then you use that increasing reading skill to learn in all subjects. But increased math skills are scarcely applied as a tool to learn in other subjects: a bit in some science classes, but not much; really only physics and a few parts of chemistry, and even there barely. The lesson learned is that math is for math class, not for understanding or learning anything else.
Sorry to multi-post, but this is also of interest. Tennessee and Washington DC were the earliest adopters of the Common Core Standards in the United States and showed the most improvement on NAEP scores at 4th and 8th grades, up 21.8 and 22.2 respectively. (Most recent scores in 2013.)
Of the four states that have still not adopted the Common Core Standards, only one is above the average increase seen in the U.S. (Nebraska, up 6.7 compared to up 5.4 for the U.S. average), and two (Alaska and Texas) have gone down in scores. The state in last place, Montana, down 7.0, was the last of the 46 states to adopt them (Nov 2011).
Correlation is not causation and is also not 100%. Lots of other factors in play. But still, again, for all the Common Core demonization that goes on one must again state: Huh.
I’m beginning to suspect that **LOHD** knows how to teach better than some Common Core critics do. Who’d have thought?
The fact is that the majority of people can’t tell you what determines good instruction (vs. bad), outside of vague, subjective and idealistic notions derived from their personal experience.
This includes most of those making policy decisions about school “reform,” who couldn’t tell you what technically makes teaching good or bad if their lives depended on it. That’s why they rely so much on test scores.
I struggle with the idea of quantifying teaching. On the one hand, I tend to like data, to be skeptical of squishy feelings as a guide to behavior, and I’m doubly suspicious of adopting principles that coincidentally make my own life easier. In these regards, quantifying good teaching sounds like a great idea.
On the other hand, every time I’ve looked at a quantification of teaching closely, it’s failed to capture important aspects of the situation.
One example (of many): in my state, we conduct thrice-yearly checks of student reading ability. One of the main measures we use is to have each student read a variety of leveled books until we find the top level at which they can read 90-95% of the words correctly, answer 4/5 questions orally, and answer two written questions including specific details from the text. We use all this information to assign each student a lettered level of text, from BR (beginning reader–lacking basic literacy skills like knowing which way to open a book or where the last word on a page is) to >U (able to read and answer sophisticated questions about 6th-grade-level texts, including explaining alternate points of view and implicit scientific theories). And we use that information to create reading groups and otherwise guide instruction.
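To make the mechanics concrete, here is a minimal sketch of that leveling procedure. The level list is abbreviated and the pass criteria are simplified from what I described above; none of this is the actual district rubric, just an illustration.

```python
# A toy sketch of the leveling logic, not the district's actual rubric.
LEVELS = ["BR", "A", "B", "C", "D", "L", "U"]  # abbreviated; the real scale runs BR up to >U

def passes(result):
    """A level 'counts' if word accuracy lands in or above the 90-95% band,
    at least 4 of 5 oral questions are answered correctly, and both written
    answers include specific details from the text."""
    return (result["word_accuracy"] >= 0.90
            and result["oral_correct"] >= 4
            and result["written_with_details"] >= 2)

def assign_level(results_by_level):
    """Walk up through the leveled books and keep the highest level passed."""
    assigned = "BR"
    for level in LEVELS[1:]:
        result = results_by_level.get(level)
        if result is not None and passes(result):
            assigned = level
        else:
            break
    return assigned
```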
It’s all very data-driven, very standardized, very normed, very objective. I get precise data out of it.
And yet.
Three kids might be reading slightly below grade level–level L, in my case. One girl might be able to read every word on the page beautifully but have absolutely no idea what’s going on in the stuff she reads. Another kid might struggle with every word with a slightly nonstandard spelling (five words in the part of this sentence before the parenthesis), but he is able to figure out the storyline and to think carefully and subtly about characters. A third reader might be so overconfident that she zips through the written questions and fails to respond to a key part of a question (“How did Mailman Sam feel when the zombie bit his arm? Use a detail from the story,” it asks, and she answers, “He felt scared,” not using a detail from the story).
All three kids show up as level L readers. Grouping them, or even offering them the same remedial instruction, is a giant waste of time.
My data is objective, precise, standardized, normed, and useless.
This is one example, but the field abounds with them. The best data I get is from individually examining a kid’s work, and then working one-on-one with the kid to fix the specific errors I’m seeing.
Obviously I am of the belief that standardized testing has been over-done and over-emphasized, that it is granted more validity as a measure than it deserves, and that the result has too often been district-wide attempts to game the metric rather than to actually improve the education children receive.
But OTOH …
It seems that the tool you describe is not completely useless, as long as the limits of the tool are recognized.
You can use it to identify a subgroup that needs more individualized assessment. The top level can be grouped together, and the bottom all need some very intensive remediation. Those slightly below grade level need to be looked at more closely than perhaps other kids to figure out specifically what is going on. The tool does not make the specific diagnosis; it screens. It can flag them for that extra close look. Of course, if every child gets that close attention, the screen is unneeded.
A tool that is very limited on an individual basis can still have validity as a population-wide assessment device. If we know, for example, that your district habitually gets, if anything, more than an average number of kids starting off at BR upon entry into the system, and that by 4th grade you have an over-representation of kids who are >U (compared to nationwide norms), then your district should be looked at to see what potential best practices can be identified. Conversely, a district that has an average share of BR readers to start but at 4th grade has more kids slightly below grade level and fewer >U than most other schools should be looked at to see if something there can be modified.
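Something like this toy sketch is what I have in mind for the population-level screen; the distributions and the flag threshold are invented purely for illustration:

```python
# Invented distributions for illustration only; "below"/"on_level"/"above"
# stand in for the BR-to->U letter bands discussed above.
national_norm = {"below": 0.25, "on_level": 0.55, "above": 0.20}
district_a    = {"below": 0.35, "on_level": 0.55, "above": 0.10}

def flag_for_review(district, norm, tolerance=0.05):
    """Flag a district for a closer look if its share of below-level or
    above-level readers differs from the norm by more than `tolerance`
    (an arbitrary threshold chosen just for this sketch)."""
    return [band for band in ("below", "above")
            if abs(district[band] - norm[band]) > tolerance]

print(flag_for_review(district_a, national_norm))  # -> ['below', 'above']
```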
As well as Singapore has done with a very heavy emphasis on standardized test scores, and as Finland has done with virtually no standardized testing, the best approach is, IMHO, somewhere in-between, using the tool of standardized testing in a much more nuanced, critical, restrained, and skeptical manner than we currently do. Recognizing that it is a tool, a limited utility tool, to assess achievement of both individuals and of populations, but not the gold standard definition of that achievement for either.
You are definitely right–I’m afraid I gave in to frustration and hyperbole when I wrote that. I should have said that the information we gain from the assessment does not in my opinion justify the amount of time it takes to administer the assessment three times a year. But it’s not completely useless–indeed, the administering of the assessment (rather than just the letter output) gives me a lot of useful information at the beginning of the year, as long as I interpret it qualitatively.
And of course, you could do all of that quantitatively, too. Instead of just one score for each student, give them three scores, for how good they are at identifying individual words, how good they are at extracting meaning from what they read, and how good they are at following directions. But of course, that would make an already long and slow process even longer and slower, and you probably still couldn’t group students based on it, because you might not even have multiple students in the same three-parameter group.
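Here is a quick sketch of why the three-score version fragments the class; the students and the coarse 1-3 scores are invented for illustration:

```python
from collections import defaultdict

# Each tuple is (word identification, comprehension, following directions),
# each on a made-up 1-3 scale.
students = {
    "reader_1": (3, 1, 3),   # decodes everything, little comprehension
    "reader_2": (1, 3, 3),   # struggles to decode, strong comprehension
    "reader_3": (3, 3, 1),   # rushes and skips parts of questions
    "reader_4": (2, 2, 2),
}

groups = defaultdict(list)
for name, scores in students.items():
    groups[scores].append(name)

# With 27 possible score combinations and a class of ~25, most groups
# end up as singletons, which defeats the purpose of grouping.
print(dict(groups))
```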
The other problem is in using these measures without enough context (and there’s an awful lot of context that’s needed). If one teacher’s students consistently score as being one grade level behind, does that mean that she’s a bad teacher who should be fired? Maybe… if her students are all coming into her class at grade level. On the other hand, if they’re coming into her class three grade levels behind, then she’s doing an excellent job; give her a raise.
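To put a trivial number on that last point (the entry and exit figures are invented):

```python
def growth(entry_gap, exit_gap):
    """Gaps are measured in grade levels behind; positive growth means
    the gap shrank over the year."""
    return entry_gap - exit_gap

# Both teachers' students finish the year one grade level behind.
teacher_a = growth(entry_gap=0.0, exit_gap=1.0)   # arrived on level: -1.0
teacher_b = growth(entry_gap=3.0, exit_gap=1.0)   # arrived 3 behind: +2.0
print(teacher_a, teacher_b)
```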
The relatively high number of STEM professionals is probably explained by the Israeli government taking steps to ensure that everyone with the aptitude to be a STEM professional actually becomes a STEM professional.
And while you may take a casual view of the low Israeli PISA scores, the Israeli people and government differ:
Common Core was promulgated in 2010. Due to margin-of-error considerations, I doubt that the +2.4% improvement from 2011 to 2013 should be considered an upward trend.
And now that the 2015 NAEP test results are in, it is unmistakably clear that we are at worse than a standstill:
(4th Grade, % students scoring at proficient level per NAEP):
13%: 1990
18%: 1992 (+38.5% improvement over 1990)
21%: 1996 (+16.7% improvement over 1992)
24%: 2000 (+14.3% improvement over 1996)
33%: 2003 (+37.5% improvement over 2000)
36%: 2005 (+9.1% improvement over 2003)
40%: 2007 (+11.1% improvement over 2005)
39%: 2009 (-2.5% decline from 2007)
2010 Common Core adoption began
41%: 2011 (+5.1% improvement over 2009)
42%: 2013 (+2.4% improvement over 2011)
40%: 2015 (-4.8% decline from 2013)
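For anyone who wants to check the arithmetic, the relative changes quoted above can be reproduced straight from the proficiency percentages; a quick sketch:

```python
# 4th-grade NAEP, % of students at or above "proficient" (from the list
# above), with the year-over-year relative change recomputed.
naep = [("1990", 13), ("1992", 18), ("1996", 21), ("2000", 24),
        ("2003", 33), ("2005", 36), ("2007", 40), ("2009", 39),
        ("2011", 41), ("2013", 42), ("2015", 40)]

for (year0, pct0), (year1, pct1) in zip(naep, naep[1:]):
    print(f"{year0} -> {year1}: {100 * (pct1 - pct0) / pct0:+.1f}%")
```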
So: after 17 years of encouraging progress from 1990 to 2007, the significant upward trend came to an end, followed by 6 years of stagnation through 2013 and culminating with the 2015 scores, which may fairly be characterized as a debacle, setting us back to the 8-year-old square one of 2007.
And NB Common Core is a set of standards, not a set of methods, and my beef in this thread concerns methods. Something is going wrong, and it is now fair to ask if our present methods are part of the problem rather than part of the solution.
Furthermore, the actual tone of your citation is more in line with my pessimism than with your optimism:
(emphasis added):
And finally, there is a great big stinking shitstorm of contradictory data, addressed next:
Your cite is to the USDEd reporting 2011 TIMSS comparative international results. Here are all other TIMSS comparative international results:
If you open the 2012 PISA link and compare results with the 2011 TIMSS, you will see that US 2012 PISA scores were below the PISA scores of 15 countries (all 15 are OECD, I think), while US 2011 TIMSS scores were above the TIMSS scores for the same 15 countries. The differences are drastic, and it is unreasonable to accept the validity of such a big turnaround in so short a time, so the PISA and TIMSS results are empirically contradictory.
Also NB the US average is significantly behind the OECD average for each year reported by PISA. This is more meaningful than the above-average results reported by TIMSS, because third-world results are added to the TIMSS comparison.
It would be nice if someone could explain or resolve the contradiction. However, even the much better US TIMSS results place the US in the middle of the OECD pack, and we should certainly have higher ambitions than that.
Googling around for this reply reveals that Singapore is, or was, adopted as a role model for the US, at least to some extent. As we have established above, this adoption has not resulted in improved US 4th grade math scores.
Furthermore, there is some reason to believe that the adoption was halfassed:
(emphasis added):
I also came across this data point:
I doubt that 5% of all K4 US students receive after-school private math tutoring. Whatever the number is, comparative scores must somehow be normalized before we can say whether the Singapore classroom system is any better than any other system where after-school private tutoring is significantly less common, as it surely is in the US.
Irrelevant to this discussion.
Sure, I’ll figure.
Consider non-Singaporean Chinese math, especially in Shanghai:
On the 2012 PISA, Shanghai’s scores were clearly the best of all tested, about 7% higher than 2nd-place Singapore.
Chinese education experts and administrators have apparently been reading too many threads like this one because they seem dissatisfied with the teaching methods which have produced world-leading results:
It sounds to me as though even Singapore might be able to use a bit more drill and rote memory, if not a few fewer bar models too.
You just got through telling us in your last post that it was Israel that had the most SE per capita. Make up your mind. And whichever it is, I gave a possible explanation in my last reply.
Also, Finnish K1-12 teachers must have Master’s degrees, a fact which certainly skews the relative STEM attainment numbers in Finland’s favor.