Twelve years ago, Brian Powell and Lala Carr Steelman analyzed state SAT scores in a landmark article in the Harvard Educational Review. At the time, politicians and the media, among others, had been using raw state SAT scores to make inferences about the relative quality of education among the U.S. states. Powell and Steelman, however, found that more than 80 percent of the variation in average state SAT scores could be attributed to the percentage of students in a state taking the test. In other words, in states where few students took the SAT, averages tended to be high because the test-taking population included a high proportion of high achievers, and vice versa. Since the percentage of students taking the SAT was not necessarily linked to the quality of education in a given state, Powell and Steelman cautioned against using unadjusted state SAT averages to evaluate educational quality.
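The 80 percent figure corresponds to the R² of a simple regression of average state SAT score on participation rate. A minimal sketch of that computation in Python, using simulated state data rather than Powell and Steelman's actual figures (the sample size, slope, and noise level below are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for 50 states: participation rate (fraction of
# students taking the SAT) and average combined SAT score. The negative
# slope mimics the selection effect Powell and Steelman describe: low
# participation -> mostly high achievers test -> high state average.
n_states = 50
participation = rng.uniform(0.04, 0.90, n_states)
avg_sat = 1100 - 200 * participation + rng.normal(0, 15, n_states)

# Ordinary least squares of average score on participation rate.
slope, intercept = np.polyfit(participation, avg_sat, 1)
predicted = intercept + slope * participation

# R^2: share of variance in state averages explained by participation.
ss_res = np.sum((avg_sat - predicted) ** 2)
ss_tot = np.sum((avg_sat - avg_sat.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope = {slope:.1f} points per unit of participation")
print(f"R^2   = {r_squared:.2f}")
```

With parameters like these, R² lands well above 0.80, mirroring the finding that participation alone accounts for most of the interstate variation.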

In this article, Powell and Steelman revisit the subject of state SAT scores, providing an update on how state SAT scores have continued to be used and misused in public deliberation over the past decade, reanalyzing interstate variation in SAT scores using contemporary data, and extending their analysis to investigate variation among state ACT scores. Powell and Steelman conclude by reaffirming their earlier position that state rankings based on SAT scores change dramatically once they have been adjusted for factors such as the participation rate or the class rank of the test-taking population. In addition, despite the claims of some researchers and policymakers that money makes little difference to student achievement, Powell and Steelman find that public expenditures are positively related to state SAT and ACT performance.
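One straightforward way to adjust rankings for participation, consistent with the logic above, is to rank states by the residuals of that regression rather than by raw averages. The sketch below illustrates how sharply rankings can move under such an adjustment; the residual method and all data are illustrative assumptions, not necessarily the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data for 50 states: raw averages are confounded with
# participation, while "quality" is the signal an adjusted ranking
# should recover.
n_states = 50
participation = rng.uniform(0.04, 0.90, n_states)
quality = rng.normal(0.0, 10.0, n_states)
raw_avg = 1100 - 200 * participation + quality

# Adjusted score = residual after regressing raw averages on
# participation, i.e., performance relative to what participation
# alone would predict.
slope, intercept = np.polyfit(participation, raw_avg, 1)
adjusted = raw_avg - (intercept + slope * participation)

# Rank each state (0 = best) before and after the adjustment.
raw_rank = (-raw_avg).argsort().argsort()
adj_rank = (-adjusted).argsort().argsort()
shift = np.abs(raw_rank - adj_rank)

print(f"mean rank shift: {shift.mean():.1f} places")
print(f"max rank shift:  {shift.max()} places")
```

Because raw averages are dominated by participation rather than quality, the adjusted ordering reshuffles the raw one substantially, which is the pattern Powell and Steelman report for the actual state data.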
