More Performance Correlates
I was really struck by finding that of all my AP Computer Science students, only those who had done a significant amount of extra out-of-class practice scored over 90% on the major test of the semester. I took 10 minutes from a class period last week to share my findings with them. Of course, these are smart kids, so they immediately wanted to know what other correlates might exist and whether I had quantified them. So I went back to the data to see what else might be there. Since I teach an introductory programming course that some, but not all, of them had taken, that was one of the first places to look.
Taking the intro class turns out to be worth about a letter grade (12%) on the average score. But it’s no guarantee that a student won’t fail, and not a prerequisite for success. Not all students from the intro class go on to this class. So it’s also quite possible that the first course merely serves as a filter for students who would fail, not as a transformative educational experience that ensures later success. I suspect reality might be somewhere between these extremes, but the camel still has two humps.
I decided to look at another measure of the impact of being in my class: attendance.
There was a small negative correlation (r = -0.25) between class periods missed and exam performance. But again, do the kids who come every day already love the subject? Wouldn't they teach it to themselves if I weren't there? Or is it that every moment in my room imbues them with an increment of additional knowledge and skill? Ask the student who came every single day and got a 29%. There's some relationship, but I wouldn't call it decisive.
That being the case, I thought I’d look outside of my own class to see what other factors might predict performance. I have access to their standardized math test scores from last year, when most of them took precalculus.
There’s a positive correlation here (r = 0.49). It makes sense. But a couple students with fairly low math scores did great, and some students who, in comparison, had much higher math scores did much worse. Again, nothing as strong as the correlation between practice and performance that I found previously.
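For concreteness, the r values above are ordinary Pearson correlation coefficients. Here's a minimal sketch of how one is computed, using made-up numbers rather than my actual gradebook:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance of the two samples divided by
    the product of their standard deviations. Ranges from -1 to 1."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pairs: standardized math score vs. AP CS exam score.
math_scores = [62, 71, 55, 88, 90, 67, 74, 81]
exam_scores = [70, 65, 52, 85, 95, 60, 78, 74]
print(round(pearson_r(math_scores, exam_scores), 2))
```

(Python's standard library also offers `statistics.correlation` in 3.10+, which does the same thing.) The point, of course, is that a moderate positive r like 0.49 leaves plenty of room for low-math-score students who excel and high-math-score students who don't.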
My strongest motivation for looking at all of this data is to see if there's any clear indication of something I ought to do more of, less of, or differently. I'd also like to find a reliable measure that predicts success in programming, with the intention of recruiting more students into the course with confidence that they have a shot at it. So far those are still open questions.