by William J. Mathis and Carol Corbett Burris
Does “tracking” elementary school students by how well they score on standardized tests improve student achievement?
The vast majority of research into so-called tracking or ability grouping of students has reached a definite conclusion: it’s harmful. Students placed in low-track classes fall further behind. Yet a recent working paper published on the website of the National Bureau of Economic Research reaches a different conclusion, purporting to find evidence from a study of young children in Texas that sorting students based on their test scores improves outcomes for low-achieving and high-achieving students alike.
A review of the NBER study, though, finds flaws in its methods so severe as to render it unreliable in guiding policy.
The paper, Does Sorting Students Improve Scores? An Analysis of Class Composition, was written for NBER by Courtney A. Collins and Lin Gan.
It was reviewed for the Think Twice think tank review project by Carol Corbett Burris, the principal of South Side High School in Rockville Centre, New York, and Katherine E. Allison, a doctoral student in research and methodology at the University of Colorado Boulder. Dr. Burris is the co-author of two books on tracking and equity as well as numerous articles on these issues in peer-reviewed and popular journals. The review is published by the National Education Policy Center, housed at the University of Colorado Boulder School of Education.
Does Sorting Students Improve Scores? is based on an analysis of data from Texas students. Using standardized state test scores, the report compares the performance of third- and fourth-grade students. It concludes that sorting students by scores is associated with significant learning gains for both lower- and higher-achieving students, although it does not find similar effects for gifted, special education, or Limited English Proficient students.
Burris and Allison identify several methodological problems in the paper.
First, it simply assumes, based on test score distributions, that schools tracked students between classes, and this assumption is highly questionable. Second, it provides no criteria by which students were classified as high or low achievers. Finally, while it purports to measure learning growth, it does not; the measure captures only students' relative standing on two proficiency tests given in different years.
The reviewers conclude that the paper's methods are too weak to support reliable conclusions and that it should not be used to inform policy regarding the tracking or grouping of students.