Happy or Proficient?

Our good friend and UTL president Paul Georges shared this article with me this morning: "Is a good teacher one who makes kids happy or one who raises test scores?" If you read nothing else in this post, migrate to EdWeek and read that article.

For educators, this is the question above all questions, because doing one thing does not necessarily complement the other. According to the EdWeek article, a recent study found that, on average, a teacher who managed to raise test scores was worse at making students happy. Here's the study from David Blazar in MIT Press. Read it and weep.

Over the course of my career, I have been an MCAS test administrator (admittedly only for the "legacy" version, whatever that descriptor means). I've felt the dichotomy of creating a positive and joyful learning environment for 3rd and 4th grade students while under pressure to remove the high-stakes testing monkey from our backs. Don't forget the weeks of "preparation."

I have no great love or respect for high-stakes testing, nor for its supposed value. It did not inform my teaching in a timely manner: results from the spring arrive on a teacher's desk in October. How helpful is that?

What testing in the era of No Child Left Behind and its successors does accomplish is the creation of a toxic and stressful environment for everyone. The joy of learning and exploring is sucked right out of the room; curricula are narrowed and teachable moments left in the dust.

Of course, in a perfect world teachers could simply not worry about test scores. The reality, however, is far harsher and possibly devastating. Agree with it or not, state Departments of Education (including our own here in Massachusetts) periodically attempt to tie students' high-stakes test results to teacher evaluations. So far, thankfully, that effort in Massachusetts has failed.

Kids and teachers are more than a number. Isn't it time schools used other measures beyond a test to evaluate learning and schools?

To whom are you accountable?

We were asked that very question during a faculty meeting presentation yesterday. Oh, there are layers and layers of accountability in the education world in which we live: administrators, students, parents. Yes, we are all accountable to them. Family members, significant others? Those people too.

My answer? I am accountable to me.

I am accountable to me for what I do in my profession, and for acting to improve those things that need fixing in my own practice. If, on reflection, a lesson fails, it is on me to figure that out and fix it. If the students "don't get" what I'm teaching, I am accountable for finding another way for them to access those skills or that knowledge.

If I disagree with how I am being told to teach, or even what to teach, I am accountable to me. I need to read and research and seek out those who are expert so that I can persuade, or disagree, or (heavens!) go against the directive and do what is right. Even when it is lonely.

Oh, there are some "experts" with the bully pulpit these days who would tell me that my job is to follow directives. Like a sheep. But sometimes I cannot do that. I am accountable to me.

Three Things My Students' Test Scores Won't Tell You

Every day there appears a new idea for making teachers accountable for student achievement. Yesterday I noticed a pip of an idea in a Twitter post: phys. ed. teachers should be evaluated based on their students' fitness level. This preposterous idea -- that the fitness level of a student who has maybe 40 minutes of contact time with the physical education teacher should be the basis for judging that teacher's effectiveness -- is exactly what discourages me. Isn't there an "outside" influence on such success? Of course there is: the home, the importance a parent places on following through with physical activity, not to mention nutrition choices!

And then I began thinking about how our own state testing is going to impact how I am perceived. Here are three things that you won't see from picking apart my students' MCAS scores:

Being in class matters: The students who did not regularly attend school had the worst SRI growth. I'm waiting to see what the MCAS data officially looks like, but I won't be surprised if these same students' results are not very good. Their growth from beginning to end of year using the Fountas & Pinnell benchmark (although that's somewhat subjective) also reflected limited growth. It would appear that something must be taking place in class that causes students who do come to school to learn. Hmmm, wonder what that could be?

Supportive families matter: Even when students come from some pretty unbelievable socio-economic circumstances (homelessness, poverty, violence), the end-of-year results of students whose parents were collaborators were positive. What does that say? Could it be that learning in a vacuum, without home involvement, is rare?

Timing is everything: One of my biggest complaints -- notice I said "one of" -- is the timing of the state English Language Arts exams. They happen in March, which is, let me count, 7 months into the school year. Please explain how 7 months of learning makes a complete year (10 months). The exams follow on the heels of ELL testing, the MEPA in Massachusetts. The poor 8- and 9-year-old kiddos who have to do all of this get exhausted.

If I'm accountable for learning for an entire third grade year, shouldn't I get the whole year? This year was a special challenge; students coming from one of the classrooms had a long-term substitute for much of second grade. The regular classroom teacher is a strong, conscientious teacher, but the substitute was definitely not up to the task. For these students I spent a LOT of time trying to bridge gaps from second grade. I really could have used more than 7 months for this work.

Isn't this what bothers educators about state testing tied to evaluations? It is the unknown, random, living-breathing fabric of teaching. We work with humans. Stuff happens. Outside influences impact the final "product." There is more to growth (and lack thereof) than testing.

Using Test Results to Evaluate Teaching

The writing is on the wall... the DESE is in the process of recommending that the teacher evaluation system overhaul include data about teacher effectiveness using the state's MCAS test.

I really am annoyed that no one is listening to teachers who are saying "Wait a minute...". Not because we don't want to be evaluated; a constructive evaluation and critique of how to do things better is always welcomed by me. But I do have some footnotes that need to be added to my students' results.

Like students whose parent(s) don't care enough about their child to get the child to school. I'm not talking about students who are absent for medical reasons here. If my teaching technique is so all-fired important, why is a student with 25, 30, or (all-time winner) 44 absences allowed to count toward my effectiveness?

This spring I plotted student absenteeism/tardiness against percentile growth on the standardized reading assessment we administer (the SRI), and -- big surprise -- the student with 44 absences not only didn't make any gains, the student had negative growth. Well, duh. If the child isn't in school, and instead is watching daytime television or playing video games, is this a shock?

No one wants to talk about the elephant sitting on that chair in the corner. But we need to: learning success depends upon a student being in attendance. Without the student participating in learning activities, how can teacher effectiveness be measured?
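The pattern described above -- more absences, less reading growth -- can be sketched as a quick calculation. The numbers below are hypothetical stand-ins, not actual class data; only the 44-absence figure comes from the post.

```python
# A minimal sketch of the analysis described above: pair each student's
# absence count with SRI percentile growth and check the trend.
# All records here are hypothetical, invented for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical records: (absences, SRI percentile growth)
records = [(2, 18), (5, 12), (9, 7), (15, 3), (25, -1), (44, -6)]
absences = [a for a, _ in records]
growth = [g for _, g in records]

r = pearson_r(absences, growth)
print(f"correlation between absences and growth: {r:.2f}")
```

With data shaped like this, the correlation comes out strongly negative, which is the teacher's point: attendance, not just instruction, drives the "growth" number an evaluation would hang on the teacher.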

Defining "Good" and "Bad" Teaching

Since when does a nationally recognized newspaper claim expertise on what makes an effective teacher?

Since this morning, April 19, 2011, when the Boston Globe published an uncredited editorial entitled "Ed Commissioner's Plan for Teacher Evaluation Gets It Right." Apparently all that is necessary for teacher evaluations is some evidence of the following:

Effective teachers routinely impart a year-and-a-half-gain in student achievement over the course of a single academic year. Three or four consecutive years of exposure to that level of instruction can eradicate the achievement gap between low-income and high-income students. Bad teachers routinely secure just a half-year of student progress over the same period.

That's right: unless your students routinely make a year-and-a-half gain in the course of one academic year, you must be a "bad" teacher. Really? Where did you get that particular piece of data, Mr./Ms. Globe Editorial Writer? Because if it's true, those teachers at high-performing schools may not be "good" teachers either -- their students may not be growing academically by a year and a half.

We all know that there is a real need for real evaluations of educators, and I include administrators too. I've taught under good ones and I've taught under pathetic ones. I've also received children from teachers who clearly hadn't a clue, and that makes me crazy too. No child should have to put up with it.

Clearly some kind of constructive evaluation is needed -- as opposed to the punitive "everyone in education is crap" platitudes coming from business types who haven't a clue what it is to deal with a human, and therefore ever-changing, "product," or from newspaper editors who simply and insidiously use highly inflammatory language to sell more newspapers.

So, Uncredited (do you really exist? Show your face, coward!) Globe Editorial Writer, if you have some data showing that "good" means a year and a half of growth, please enlighten us. If you are pulling this data out of your rear end to support your thesis, or basing your editorial contribution on your own baggage and prejudices, you should be fired.

Standing on the shoulders of...

Emily Rooney's Greater Boston panel discussed the connection between a teacher's despondency and suicide and a recent LA Times article which ranked teachers by name. One can argue the stupidity of people who don't understand educational issues and all of the things that impact students. One can argue about the current need to equate education with business practice, i.e. "value added." But what I really don't get is how anyone can think that testing in one grade level isn't impacted by what has happened before.

Case in point: my current group of students includes 11 students reading at the first grade level. I teach third grade. Mine is not one of the two special education inclusion classes this year. This group of children is "regular" education, or as I prefer to say, my sped students haven't yet been identified.

Where I will start teaching this year is not based on some immovable starting line. Where these students finish may not be at "grade" level. Will they get better? Will they improve as readers and writers? You had better believe that they will. But I am not the second coming, and it is statistically doubtful that we can close a gap of 2 years within the 10 months (or 6 until MCAS Reading) we are working together. In other words, my students' learning, and my ability to help them move along, is based on what they have been able to do before they got to third grade.

The class dynamic is quite a challenge, even for a teacher with 23+ years' experience. Traumas, poverty (2 of my students are living in welfare hotels), custody battles, ELL challenges, indifferent parenting... this particular group of students, and their classmates in other homerooms, are impacted by it all. I often hear people talking about "last year's second grade"; they don't look wistful in their reminiscence. There's a history here; there's a dynamic with this group that has been present since they first arrived in the building. It spills over into the academics over and over throughout the day, impacting not only one child's learning, but the other children's as well.

What I am trying to say is that no one teacher is responsible for a student's progress. No teacher should be singled out by name in a newspaper article as ineffective. Education is a collaboration. It starts the minute a student steps into a school. We are standing on the shoulders of what has happened before, and we are reaching for the sky.