Does spending more money on education increase test scores?

Well, does it?
More specifically, does higher per-pupil spending correlate cleanly with higher test scores [and, theoretically, more actual knowledge absorbed]?
Seems rather obvious to me, but a search of the Department of Education website and a Google search didn't turn up the data I was hoping for.

Anyone?

Do test scores necessarily reflect the quality of education?

Try looking at newspaper sites. A new study claiming funding made a difference came out in the last few weeks. Of course I can't remember just where I saw it.

Yup, educational statistics done here.

Your question is very tough to get your arms around. The “proof” is difficult to see.

We could go backwards…
If we cut ed spending, would we expect test scores to go down? Probably. This would show a positive correlation, but it doesn't necessarily extrapolate to "increased" spending.

To look at this question “scientifically”, we need to define lots of terms. Different systems define “per pupil spending” differently. Some systems include such things as replacing library books in their pps numbers while other systems don’t.

Test scores are also difficult to pin down. We could look at, say, CTBS test scores over time and see a change, but we likely would not see the same changes in different test scores (SAT, or such).

We won’t even get into the problems of translating test scores to “knowledge absorbed” (vs. memorized for testing).

The biggest problem with addressing your question (and why you might not find DOE-type info) is that different systems and even schools spend pps money differently. One school might use a lot of it this year to purchase new textbooks while another might use the money to resource teachers. In this age of "Site-Based Management", individual schools have a lot of freedom in the disbursement of money. While this may be good for schools, it does make answering your question difficult.

We (my office and I) have done quite a bit of work with "Value-Added Assessment", i.e., determining what proportion of score change is attributable to the schools (what value the schools add to the score). We have found (as have others) that schools account for 3-10% of the variance in student outcomes, the rest being related to student factors outside the control of the school.
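To make that 3-10% figure concrete, here is a toy simulation (every number in it is invented for illustration, not taken from our data): give schools a small random effect and students a much larger one, then estimate the school-level share of variance from the school means.

```python
import random
import statistics

random.seed(0)

# Toy setup: school effects are small relative to student-level noise,
# so schools end up explaining only a few percent of total variance.
N_SCHOOLS, N_STUDENTS = 100, 50
school_sd, student_sd = 5.0, 20.0   # variance components: 25 vs. 400

scores = []
for s in range(N_SCHOOLS):
    school_effect = random.gauss(0, school_sd)
    for _ in range(N_STUDENTS):
        scores.append(500 + school_effect + random.gauss(0, student_sd))

total_var = statistics.pvariance(scores)

# Variance of school means overstates the school component by the
# sampling noise of each mean (student_sd**2 / N_STUDENTS), so subtract it.
means = [statistics.mean(scores[s * N_STUDENTS:(s + 1) * N_STUDENTS])
         for s in range(N_SCHOOLS)]
between_var = statistics.pvariance(means) - student_sd ** 2 / N_STUDENTS

print(f"school share of variance ≈ {between_var / total_var:.1%}")
```

With these made-up components the true share is 25/425, about 6%, squarely in the 3-10% range.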

One of the better areas for increased funding is the teaching staff. Having a qualified, experienced teaching staff correlates more strongly with student outcome scores than the number of chemicals in the chem lab or the number of microscopes in the biology classrooms.

For articles, I suggest you do an ERIC search or other journal search. Try the American Education Research Journal and such.

Here is a site that lists each state and Canadian province by spending: http://www.osstf.on.ca/www/issues/edfi/survey.html
Here is a site that ranks the states by SAT scores: http://www.conterra.com/ehp/scein/scores/satstate98.htm
After a cursory examination it appears that there is no correlation.

The top five states for spending, with their ranks for SAT scores, are: New York, 41st; New Jersey, 39th; Alaska, 29th; Connecticut, 32nd; and Delaware, 46th.

SAT scores might not be the best measure since not every student takes the SAT but it was easy to find.
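For what it's worth, here's how you'd actually put a number on "it appears that there is no correlation": a rank correlation. Five states is far too few to conclude anything; this just shows the method you'd run on the full 50-state lists.

```python
# Spearman rank correlation in plain Python (assumes no tied ranks).
def spearman(xs, ys):
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# The five states listed above: spending rank vs. SAT rank.
spending_rank = [1, 2, 3, 4, 5]           # NY, NJ, AK, CT, DE
sat_rank = [41, 39, 29, 32, 46]
print(spearman(spending_rank, sat_rank))  # → 0.1, essentially no correlation
```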

Spending more money on education does correlate with student achievement, but the correlation is relatively small, and spending is just one factor among a great many. California public schools are rated in two ways: compared to other schools without regard to socio-economic factors, and compared to other schools of similar socio-economic status. This is done because the richest schools routinely have the highest overall scores, but that does not necessarily make them the most effective schools; these schools tend to get the best students. For example, the school where I teach had scores that were about average compared to all schools in CA, giving us a score of 5, but compared just to the 100 similar schools, we scored 3rd, for a score of 10.

Consider: Send 100 students each from Beverly Hills and from rural Fresno (where they're likely to come from migrant farm families) to the same school, and the Beverly Hills students are going to significantly outperform the Fresno students, on average.

What factors that affect achievement are dependent on money? Here are a few:

  1. Class size. Reduce classes from 30 students to 20 students per teacher, and achievement increases. This means 50% more teachers, with an equivalent increase in salary cost.

  2. Facilities: Buildings, desks, computer labs, science labs, etc. All cost money to maintain and keep up to date.

  3. Specialists: Certain subjects, such as music, PE, and art, require a specialist to be taught effectively. Students with learning disabilities need help from resource specialists with smaller classes who are trained to diagnose and adapt for these disabilities; likewise gifted students. These specialists are in addition to the regular classroom teacher. Of course, these things don't show up on tests, but I think music, art, and PE are important subjects that need good teachers.

  4. Schools/districts that pay their teachers more get to pick and choose from the best applicants. Schools/districts that pay the least must take what they can get.
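The class-size arithmetic in point 1 is easy to sketch. The enrollment and salary figures below are invented round numbers, just to show where the 50% comes from:

```python
# Back-of-the-envelope cost of cutting class size from 30 to 20.
enrollment = 600         # hypothetical school enrollment
salary = 40_000          # hypothetical average teacher salary

teachers_at_30 = enrollment // 30   # 20 teachers
teachers_at_20 = enrollment // 20   # 30 teachers

extra = teachers_at_20 - teachers_at_30
print(f"{extra} more teachers, a {extra / teachers_at_30:.0%} increase")
print(f"added salary cost: ${extra * salary:,} per year")
```

The percentage increase is the same whatever the enrollment; only the dollar figure scales.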

Many important factors that affect student achievement are out of the schools' control:

  1. Parent education level.
  2. Family socio-economic status.
  3. Family mobility.
  4. Family “culture” (do parents read to children, visit the library, help with homework, etc.)

I infer that what you really want to know is whether spending more money will increase learning. That depends on the school. In Beverly Hills, probably not. In poor schools, more money could increase the quality of education if it is used to:

  1. decrease class size
  2. hire an adequate number of specialists
  3. update facilities and materials
  4. attract better teachers.

Here is a summary of a longitudinal study in progress that suggests that spending more money leads to modest improvements in student learning, and that even modest improvement has a benefit to society.

Here is another study that suggests that more money spent to decrease class size increases performance.

Specifically:

The problem with Biddle's findings is that he does not separate the different levels of variance. Trying not to get too technical here: the variance in student scores is on more than one level. There are student-level factors, school-level factors, and even system-level factors (one county will spend money differently than another, for example). Adding to the difficulty of untangling these factors is the hierarchical nature of the levels. Partialling out variance effects is pretty easy with regression analysis, even if there is a lot of covariance between factors (e.g., high-poverty areas also have high mobility), as long as all of the factors are on one level. However, the hierarchical variance structure does not lend itself to typical regression analysis.

Because we have many students nested within classes, many classes nested within schools, and many schools nested within a system, we have to realize that the variance in student outcomes is due to student-level factors, class (teacher) level factors, and school-level factors. Further, factors at one level have an influence on variances at other levels and may do so differently for different groups. An example might be teacher factors affecting students differently (imagine a very talented misogynist teacher: males might learn from him while females might not). Now the variance structure is not at one level, and that structure requires special analyses.

This is why he finds school funding (high-level variance) and child poverty (nested, lower-level variance) account for 55% of the variation in scores. I'm not blasting his work; Biddle has done great things for education evaluation. I'm just pointing out difficulties in ed measurement.
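The nesting I'm describing can be sketched in a toy simulation (all of the variance components below are invented, purely to show the structure): an independent random effect at each level, with the total variance splitting across school, class, and student levels.

```python
import random
import statistics

random.seed(1)

# Toy nested structure: schools -> classes -> students, with an
# independent random effect at each level (all variances invented).
school_sd, class_sd, student_sd = 6.0, 8.0, 20.0

scores = []
for school in range(40):
    u = random.gauss(0, school_sd)           # school-level effect
    for cls in range(5):
        v = random.gauss(0, class_sd)        # class (teacher) effect
        for student in range(25):
            e = random.gauss(0, student_sd)  # student-level factors
            scores.append(500 + u + v + e)

total = statistics.pvariance(scores)
true_total = school_sd**2 + class_sd**2 + student_sd**2  # 36 + 64 + 400
share_above_student = (school_sd**2 + class_sd**2) / true_total

print(f"empirical total variance {total:.0f} vs. theoretical {true_total:.0f}")
print(f"share above the student level: {share_above_student:.0%}")
```

A flat, single-level regression on data like this can't tell you how that above-student share splits between classes and schools; that's what the special (multilevel) analyses are for.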

And don’t get me started on the lack of randomization problems!!

This is often discussed here in DC. DC has one of the highest levels of per-pupil spending (if not the highest) and one of the lowest results (if not the lowest).

I remember an amusing exchange on TV between someone who thought more money was needed and someone who disagreed:

  • We need more money to improve the quality of education

  • You already have the highest spending and are getting lousy results. Maybe you need better management. Utah and North Dakota spend way less per pupil and yet have much better results…

  • hmm… well… yes, but the conditions there are different

  • In what way? Do you mean that if we move the District of Columbia closer to the Canadian border we could spend less and get better education?..

No disagreement here. Educational measurement is very complex, and the question in the OP isn't nearly as simple as it seems. All I was doing was citing evidence to support my claim that money is a factor that does make a difference. Not the only factor, not even the most important factor, just one factor. I also said in my first post that more money is likely to improve student achievement only if it is spent on a school with specific needs that money can address, such as class size, lack of classrooms, specialized instruction, etc.

Absolutely, Number Six. I was hoping to explain the discrepancy between my statement of 3-10% accountable variance at the above-student level and Biddle's 55%. You summed up your points beautifully.

This is the true problem with measuring the OP's question. Where a school decides to spend money differs from school to school, and so does where the school should spend it. One school might be staffed well but should spend resources on instructional programs, while another should spend money on improving staff, etc.

SAT scores by state are absolutely 100% meaningless. Please note that I did not say that the SAT itself is meaningless on an individual level. This is very easy to explain if you think about it. Look at the states at the top of the list. They tend to be in the Midwest and the South. These are not areas known for an overall high quality of education so why are the averages so high?

  1. Fewer students take the SAT here than in states in the East and in California. The students taking the SAT are almost always college bound. This is unlike states such as Massachusetts, where almost every high school student takes the SAT. The result is a very real self-selection process.

  2. The ACT, not the SAT, is the test of choice in the South and Midwest. This means that those who take the SAT from these areas are probably traveling to another region to attend college, and there is a good chance that they are going to be attending a rather prestigious school.
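The self-selection effect in point 1 is easy to demonstrate with a toy simulation (the population size, score distribution, and participation rates below are all invented): two "states" drawn from the identical ability distribution, where one tests only its college-bound top 20% and the other tests 80% of its students.

```python
import random
import statistics

random.seed(2)

# Identical underlying population for both hypothetical states.
population = [random.gauss(500, 100) for _ in range(10_000)]
ranked = sorted(population, reverse=True)

state_a = ranked[: len(ranked) * 20 // 100]  # only the top 20% take the test
state_b = ranked[: len(ranked) * 80 // 100]  # 80% take the test

print(f"state A mean: {statistics.mean(state_a):.0f}")
print(f"state B mean: {statistics.mean(state_b):.0f}")
```

State A posts a much higher average despite having exactly the same students, which is why state-by-state SAT averages say more about who takes the test than about school quality.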