Much is made of national results from the Program for International Student Assessment (PISA), and I have often questioned the way those results are commonly interpreted. Sharon Machlis points out several problems with these interpretations in her Computerworld blog. There are many reasons why some students do better than others on these exams; schools and curricula are only part of the story, and there are many confounding factors.
Even if one thing correlates with another, that does not mean either one causes the other. She cites the classic example: data show a strong correlation between ice cream consumption and drowning deaths in an area. Obviously, eating ice cream does not cause drowning; both are simply more likely when it's hot outside.
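Her point is easy to demonstrate with simulated data. Here is a minimal sketch of my own (not from her post) in which a lurking variable, temperature, drives both ice cream sales and drownings; the two outcomes end up strongly correlated even though neither causes the other.

```python
# Toy simulation of a spurious correlation created by a shared cause.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily temperatures (degrees C) over a year
temperature = rng.uniform(15, 35, size=365)

# Both outcomes depend on temperature, not on each other
ice_cream_sales = 10 * temperature + rng.normal(0, 20, size=365)
drownings = 0.3 * temperature + rng.normal(0, 2, size=365)

# The two outcomes still correlate strongly through their shared cause
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"Correlation between ice cream sales and drownings: {r:.2f}")
```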
Confounding factors in educational performance include parents' educational level and income, she argues. The chart she included illustrates this: it shows the ten best-performing and ten worst-performing districts on a Massachusetts standardized test, with the poverty rates of the top ten districts colored blue and those of the bottom ten colored red. The visual impact is striking: the top ten districts have much lower poverty rates. The chart was reposted from a blog post by Massachusetts State Senator Pat Jehlen, where there are other related graphs.
The PISA data are also based on projections of how a student would have done had every question on the test been answered. For me, this casts further doubt on the results.
Ms. Machlis concludes: “But the bottom line here is this: You can’t assume that high test scores are caused by a certain type of curriculum, teaching methodology or education policy simply by looking at raw standardized test results.”
“Do things like curriculum, teaching methods and policies matter in educational outcome? Of course they do. But unless your testing data can factor out things like students’ family incomes and parents’ education levels, standardized test results won’t tell you.”
Right on.
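To make that point about factoring out family income concrete, here is a toy sketch, entirely my own and with invented numbers, of what adjusting for a confounder can look like: a hypothetical "curriculum" that appears effective in raw score gaps shows essentially no effect once district income enters a simple regression.

```python
# Hedged illustration of adjusting for a confounder. Nothing here reflects
# real Massachusetts or PISA data; all values are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical district income (standardized) drives both adoption of a
# particular curriculum and test scores directly.
income = rng.normal(0, 1, size=n)
curriculum = (income + rng.normal(0, 1, size=n) > 0).astype(float)
scores = 5 * income + rng.normal(0, 3, size=n)  # curriculum has no true effect

# Naive comparison: the curriculum looks like it "works"
naive_gap = scores[curriculum == 1].mean() - scores[curriculum == 0].mean()
print(f"Raw score gap attributed to curriculum: {naive_gap:.2f}")

# Regression that also includes income: the curriculum coefficient
# collapses toward zero once the confounder is in the model.
X = np.column_stack([np.ones(n), curriculum, income])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(f"Curriculum effect after adjusting for income: {coef[1]:.2f}")
```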