Are we there yet?
So, like I said, in my continuing education I've discovered a lot of people who can and have discussed and argued the pointlessness and destructiveness of the present testing regime. Their outstanding work includes academic studies and personal narratives and lots of stuff in between. I continue to work my way through them.
In the meantime, what can I offer? I can share my own experience and let readers decide what is similar to and what is different from their own. I can talk to you and to other teachers like I've been there because I have. I know what you're going through because I went through it, and I came out the other side healthy, happy, and retired.
So here's one final, personal rant on data. I have other stuff to say, after all.
This, from Answer Key:
Data is bullshit, and professional developments about data are nuclear powered bullshit and tough to take without going completely insane. Besides all the philosophical reasons for opposing the toxic testing-industrial complex, and all the moral reasons for fighting against the reduction of human beings to data points, the numbers are phony. It’s all a fraud. The data do not mean what they say the data mean.
Now if you are tasked with administering these tests and you happen to be arithmophobic (an arithmophobiac?), you may not notice or want to, but the tests do not measure what they purport to. They are not reliable. They are not valid. First of all, if you’ve ever given a ten question multiple choice quiz to your students, you will know that even if you test exactly the same material again a week later, sometimes the same student will score higher and sometimes their score will go down. Shit, if you've ever taken an "ideal mate" quiz online--I almost wrote in a magazine, which tells you how old I am--you know that one day you'll wind up with Barack Obama and the next week you'll be yearning for Courtney Love.
And before you start sputtering about the difference between questions of preference vs. matters of fact, take a look at the questions students are asked on these exams--not that you're allowed to talk about them. There may be clear answers in algebra, but the relative strength or weakness of evidence, or of an opening sentence, or the interpretation of an extended metaphor are subjective and open to...well...interpretation.
Performance on a task, including an examination, depends on any number of factors, including most recent experience, blood sugar level, and the quantity and intensity of available distractions. That is why no teacher worth the name would ever rely on a single assessment given a single time to evaluate a student in any significant way. We give tests, but we also give projects and writing assignments, short and long, formal and informal. We assign debates, we listen to conversations and questions, and we try to get a comprehensive, meaningful picture of what each student knows and can do, along with some idea of what they need--particularly from us--in order to be able to know and do more.
In other words, it's the opposite of the testing-industrial imperative. And the opposite of what most people who are not teachers mean when they say "data."
And it’s even worse than that. In addition to not measuring what a student has learned, the tests don’t deliver even within their own narrow objectives. They do not measure what they say they do.
I’ll give you an example: My school hired a group of education consultants to boost our test scores. (Hilariously, we were not allowed to acknowledge that’s what we were doing.) We had days and days and days of meetings with these “experts”--only the math and English departments of course, because (sotto voce) That’s what we test. Meanwhile, everyone else could work on what they needed to do to prepare for actual teaching.
And when we were through with these literal weeks of PDs, we had created five exams to give in each class--one every five weeks--each assessing five distinct standards representing five discrete skills. That's five, then five weeks later a different five, then a different five, until we get to twenty-five. Why twenty-five? They're the ones that occur most frequently on the Big Test in the spring.
And so, after each test we would look at how dismal our scores were and compare them to the previous test to see if they went up or down. In a PD about *data* we would assemble in the elementary cafeteria and show each other our dismal scores (our students’ scores) and make up a story about why our scores might have gone down or up, and we’d make posters and cut out student names and put up the posters around the cafeteria and go one-by-one around the room and tell our stories and answer for our sins. That is, those of us in the math and English departments. All others adjourn to your rooms to work on teaching stuff.
The tests were all different and tested different standards and skills. We were told we could not test the same things twice to see if we had made progress. We compared scores from different kinds of tests and had to pretend that the improvement or decline from one test to the next meant something. “Wow, those kids really got it this time.” “Oh (downward inflection), that’s disappointing. What did you do differently?” “I taught different stuff!” And we did this for years.
I tried several times during the early days of this catastrophe to point out that we were not actually measuring progress or the lack of it. “It’s apples to orangutans!” “Five answers to five questions is not a valid measure of proficiency!” “Who chose four correct answers as the benchmark for proficient? And why?” “Skills? Standards? I looked at the data and my student scores precisely tracked their reading levels. Aren’t we really just assessing their reading?” I was... unheard.
And there’s more. We were required to give the exam--in English--to every single student. Boost scores? Shhh! Yet we were required to prep for and administer the exams to seniors who would never be taking another test in high school. Our ELD teacher was forced to give the exam--in English--to newcomers to the U.S. who had been in the country for a week and as yet spoke zero English. For these students and others, as for the teachers in their classrooms, the tests were a deliberate insult.
<snip>
To defend standardized testing, you are likely to hear some version of business management guru Peter Drucker’s assertion that “If you can’t measure it, you can’t improve it.” I once heard “We measure what we value,” which I thought was ridiculous because my students and I valued collaboration, humor, persistence, flexibility, and lots of other things depending on the day. None of these ever showed up on the standardized exams--not that I’m discussing what did show up because, of course, that would be breaking the oath they make us all sign to protect the market share and business model of testing companies. You might ask yourself why we are not allowed to discuss these top secret contents. Wouldn’t that help us better prepare our students? Wouldn’t it lead to better exams? Good questions all, but beside the point. And the point is profit.
What I found is that, far from measuring what we value, we instead assign value to the measurements themselves, as long as they can be readily obtained and easily expressed. The measurements are the point, not what they measure. The numbers are vital to a system that employs thousands and thousands of people who interpret the numbers, explain the numbers, compare the numbers, and thousands more who come to your school to tell you how bad your numbers are and what you should do to make them marginally less bad. All the while, mind you, not measuring what you value. They don’t care about what teachers value or what students value. This is what they value.
Do not fall for it. You may have to give the tests, but you do not have to believe in them. Do not adopt their self-serving framing that everything can and should be quantified. It cannot. But when you are a hammer, everything looks like a nail. Big money in nails.
I know you know this. I know you have a million stories just like this and much, much worse. I know that many of you are tortured by the knowledge that the relentless testing is preventing you from serving your students better. I know I was. You and I know that, regardless of advertising, testing is an insidious form of control that shifts power away from the classroom and limits what students learn and teachers teach. Testing is a crucial component in a weapons system armed and aimed directly at public education.
But, and I know this is pretty rich coming from a retired guy, please please please please please please please don't give up. We all know that what is happening is wrong, but we don't yet know how to stop it. Still, your students need you, and I'm living proof that you can survive--at least for twenty-five years. If you are in a position to speak up, resist and push back, I urge you to do so. If you are not in that position yet, you will be if you stay long enough.
I'm going to keep fighting and looking for ways to win. I'm going to shift gears from ranting to strategizing. If you have any brilliant ideas, please read this blog and comment here. Or contact me directly at nowwaid@gmail.com.
For the time being, I'll be shifting another kind of gear as well. I began this joyride by announcing that I was retired and loving it. I think I even wrote that the best retirement advice you'll ever get is "As soon as possible." Well, that's all still true, but it's been a year now and it's time for me to do some real retirement stuff, so I'll be taking a road trip across the country and back. By camper van. Because that's what you do.
I think I'm going to post about the trip a little, just to give a glimpse of the delight that waits for you on the other side. Over the rainbow. Please stay tuned and stay in the fight.
Hey! It's the weekend! Have a cocktail--or several. Turn your clocks forward and, when you do, remember that there will never be enough time to do everything you need to do. Start with the things you really want to do, and go from there.