I have been asked by quite a few people lately about recent media reports on the Texas public school ratings and, in particular, about the controversy over the “Texas Projection Measure” and its role in the significant improvement in the ratings. To explain this issue as concisely as possible, I can’t improve on the article below by my colleague, Andrew Erben, President of the Texas Institute for Education Reform (TIER), from the current issue of the TIER Capitol Report.
“The Texas Education Agency’s (TEA) Texas Projection Measure (TPM) has drawn its share of controversy lately. But what is the TPM and what is it supposed to measure?
Before we get into that, let’s look at the recent history that led to the creation of the TPM. Since the state’s accountability system was created in the early 1990s, critics have pointed out that it measures only the percentage of students who pass tests and gives no credit for improvement or academic growth. For example, take an immigrant student who enters a Texas school in the 5th grade but tests at the 1st-grade level in English. Even if the school helps the student reach the 3rd-grade reading level by the end of the year (a two-year improvement), the school is penalized under the accountability system if that student fails the 5th-grade TAKS test.
To address this, the legislature instructed the TEA to adopt a “growth” measure. Ideally, this is a measure that looks at student improvement over time and gives credit for academic growth towards proficiency by the time the student graduates. In other words, the student is counted as passing if he or she is making enough academic progress to perform at grade level in the near future. This is referred to as a “growth-to-standard model”.
The TEA looked at several models before adopting the TPM. At the time, TIER suggested that the model address the following:
1. Base projections on student data from multiple years. Research indicates that multiple years of performance are required for validity and reliability.
2. Secure multiple independent validations of the growth measure by nationally recognized test experts.
3. Adopt the TAKS Commended Level as the standard for showing a student is ready for postsecondary pursuits.
4. Pair a growth-to-standard model with a value-added model. This would allow schools to get credit for students who meet state standards, who are improving enough to be on track to meet them, or who exceed expectations.
Unfortunately, the TPM did not include all of these recommendations. Instead of predicting student achievement by looking at the student’s performance over time, the model takes a student’s test scores from a single year and predicts future achievement based on how other students with similar test scores performed on later tests.
While the TPM is advertised as an accurate forecaster more than 90% of the time, that figure includes high-performing students (who are highly likely to pass the next test) and low-performing students (who are very likely to fail it). As a result, the accuracy rate for marginal students is quite a bit lower.
Criticism of the TPM reached a peak during a recent legislative hearing when Rep. Scott Hochberg pointed out that a student could get no questions correct on the writing portion of the TAKS and still be projected to pass writing based on his or her scores in other subjects. While this is an extreme example, it underscores the flaws in the TPM.
These flaws are important because the TPM projections were used to raise the 2009 accountability ratings of 331 school districts and 2,560 campuses. Of these, 79 districts and 358 campuses used the TPM to move to a rating of “academically acceptable” and avoid the sanctions that come with underperformance. In all, 61% of campuses were rated “recognized” or “exemplary” under the TPM.
The good news is that the TEA has promised to either stop using the TPM or radically retool it. TIER and our partner organizations will continue to work with the TEA to develop growth-to-standard and value-added measures that accurately predict student growth and give credit for students who are moving toward postsecondary readiness.”
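The accuracy criticism in the article is easier to see with a concrete illustration. Below is a minimal Python sketch using entirely made-up numbers; it is not the TEA’s actual model or data, only a toy projection rule that, like the TPM as described above, predicts each student’s result from a single prior-year score band.

```python
# Purely illustrative sketch: every number below is invented and this is NOT
# the actual TPM formula or TEA data. It only shows how an overall accuracy
# rate above 90% can hide much weaker accuracy for students near the passing line.
import random

random.seed(42)

# Hypothetical groups defined by a single prior-year score, echoing the article's
# description of the TPM projecting from one year of results. Each entry gives
# (number of students, assumed probability of actually passing next year's test).
groups = {
    "high performers":     (5000, 0.97),
    "low performers":      (3000, 0.05),
    "marginal performers": (1000, 0.55),
}

# A naive projection rule: predict whatever most similar-scoring students did.
projected_to_pass = {
    "high performers": True,
    "low performers": False,
    "marginal performers": True,
}

total_correct = total_students = 0
for name, (n, p_pass) in groups.items():
    actual = [random.random() < p_pass for _ in range(n)]      # simulated outcomes
    correct = sum(result == projected_to_pass[name] for result in actual)
    total_correct += correct
    total_students += n
    print(f"{name:20s} projection accuracy: {correct / n:6.1%}")

print(f"{'overall':20s} projection accuracy: {total_correct / total_students:6.1%}")
```

In this toy cohort the headline accuracy tops 90%, yet for the marginal students, the very students whose results determine whether a campus avoids sanctions, the projection is little better than a coin flip. That, in essence, is the aggregation problem Erben describes.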
I hope this helps to clarify, at least to some extent, a very complicated issue. Unfortunately, shortly after this article was written, the TEA compounded the problem by releasing the 2010 school accountability ratings without any adjustment to the projection model. The debate continues over what changes should be made to the model so that our school accountability system truly reflects each student’s growth, or lack thereof, toward proficiency and college and career readiness.
To keep current on this and other Texas public education issues, visit www.texaseducationreform.org.