Modeling Future Student Success
Over the next few weeks, I will be updating the mathematical model I created to predict students' future success in college. That model, which my school has been using and revising for the past four years, looks for patterns in academic and behavioral data to help predict individual students' likelihood of earning passing grades in college coursework.
I created the model after learning that standardized test scores alone produced too many edge cases to reliably predict future academic success. Too many students had scored well on tests yet did poorly in college classes; conversely, some students we believed could handle college coursework did not score well on traditional measures of college "readiness."
Using this model, my school sends a third of its juniors and half of its seniors to college. Last year, these students passed 97% of the courses they attempted; ninety-three percent passed with a C or better.
To learn more about my school and why we send so many students to college while still in high school, I recommend reading my post from June titled "Early College For All."
There is nothing magical about the model. It simply applies what is already known about past students' success to predict how well current students might do in college coursework.
The model uses three primary sources of data:
- Standardized college placement or college readiness scores: I have used data from different assessments over the years (Compass, Accuplacer, ACT, and SAT) with broadly similar results; one way to put these on a common scale is sketched just after this list.
- High school grade point average: In my school, the strongest predictor of future academic success is past student success.
- Teachers' subjective assessment of student "agency": Each winter, I ask my faculty to rate how well each student is perceived to grow through challenging work and to complete work on time.
Each year, the weights applied to these data sources have changed to reflect what we've learned about past student success. Last year, high school GPA and test scores were weighted about evenly. Agency, while found to be an accurate predictor, was weighted very little (approximately 10%) due to its subjective nature and the potential for perceived bias.
Over the coming weeks, as I update the model, I hope to share more of the details that go into its creation and revision. I see great value in more schools analyzing data this way, and I think the process is simple enough to replicate with a bit of time and effort.
Disclaimer: I am not a mathematician and do not claim to be an expert in inferential statistics. I am simply a practitioner with a good memory of his Statistics 101 class. I welcome any feedback from readers with stronger mathematical grounding.
If you have questions about the model that you would like me to expand upon, or would simply like to learn more, feel free to leave a comment or reach out by email.