Denmark’s miscue?

I just arrived home from Denmark and have literally TONS of material from conversations about their system of education. Perhaps most interesting is that Denmark is moving towards a system of accountability testing modeled, to some degree, after NCLB. The testing has been introduced by a conservative government led by a Prime Minister who greatly admires President Bush, and it is being advertised as a way to “hold schools accountable” for student performance.

Starting this year, students will take computer-based multiple-choice assessments to measure their understanding of the required curricula. Results from the testing will then be used to rate and rank schools, much like in the US. Right now, there are no plans to hold students back based on test results, but it seems likely that such steps will follow in the future.

What makes this step so amazing is that assessment in Danish schools has traditionally been far more nuanced and complex. Students aren’t given “grades” by teachers. Instead, constructive feedback is offered orally and in written form on classroom-generated assessments. Conferences are held at least twice each year between parents, students, and teachers where student performance and ability are reviewed. Written documents shared during these conferences replace “report cards.”

Students are also never “failed.” There is a belief that no child is a “failure” just because they haven’t mastered a specific set of skills. Instead, the emphasis is on celebrating what a child has mastered and continuing to work on the skills that are still missing. Portfolios that document student strengths and weaknesses are started as children enter school and then follow children throughout their school careers. Work samples are included that can be used for reflection by parents, students, and teachers.

Most interesting, however, is that oral examinations have formed the foundation of Denmark’s “final assessments” until now. Teachers develop a series of performance-related tasks and questions, complete with the supporting materials needed to complete the “exam.”

Students are then given time to prepare a written response to the question and are asked to defend their reasoning in front of the teacher. Teachers ask probing questions to gauge the depth of a student’s understanding and then “grade” the child’s “performance.”

How does that sound for quality assessment?

I spent a day living with two teachers while I was in Denmark, and we talked extensively about Denmark’s new move towards testing. They were both very concerned about the shift to standardized tests and saw it as “watering down” what has been a really effective assessment system. They were also worried that newspapers would begin printing test results and selling them as indicators of school success. These results, they feared, would shape community perceptions of students and schools even though they are simplistic measures of performance.

Sound familiar?

They were also, however, pretty certain that assessment wouldn’t completely shift towards testing in Denmark because their country has a long tradition of schooling that emphasizes the development of individual values and the ability to act on personal decisions, things that can’t be easily tested. What’s more, quality assessment has been part of the fabric of their system for over 100 years, reducing the support that testing is likely to have among the general public.

I told them that I wasn’t so sure. I said that testing has been broadly embraced by non-educators in our country because it seems to make measurement simple. Outsiders don’t understand our arguments that student learning is far more complex than a number on a test. Instead, they like the easy comparisons that come from standardized tests, even when they have little understanding of what those numbers really mean.

Does anyone else believe that the reason we over-rely on test results for measurement in America is that the general public has a poor understanding of exactly what tests can and cannot do? Is the first barrier we face in creating more reliable forms of assessment convincing non-educators that tests have significant limitations, limitations that are lost in the simple numbers published in newspapers around our country?