Box Scores (me)
That's my pie chart over there. Green represents students who improved one quintile; yellow, students who remained in the same quintile year-to-year; red, students who dropped to a lower quintile. That green section accounts for over 70 percent of my students. There is no red. As a class, each of my groups averaged one quintile of growth.
Feeling good about a lot of green, but especially my Special Ed (RSP) kids. These kids accounted for 20 percent of my overall student population. Every one posted a higher scale score than the previous year. Over 80 percent moved up one quintile. Nineteen percent moved two quintiles.
Feeling bummed about all that yellow. It's too much. If I taught kids who began the year Proficient (4) or Advanced (5), that (generally speaking) static growth would be fine. I don't. I teach, in the parlance of the times, critically at-risk kids, those whose academic performance falls many years below what it would take to scrape out a high school diploma, never mind the A-G requirements, never mind higher-ed matriculation. Massive, dramatic growth is called for, and even when the scale score goes up within the quintile, that's not enough.
Feeling good about the writing scores. I've never taught so many students who scored proficient.
Feeling bummed that the stupid-ass narrative writing exam saved my butt. I'm thinking that pie chart looks a lot more yellow without those weak proficient scores on that weak March assessment.
Feeling good about my just-out-of-the-newcomer-program kids. There were three of 'em, CELDT 1s and CST 1s all, and each kid scored Basic (3) on the CSTs, and two of them did so on the 8th grade exam. Those kids are awesome, and a walking advertisement for 1) the importance of teaching kids primary language literacy first and 2) the importance of a true newcomer center to teach, as a primary function, the English language.
Feeling bummed in general. It's been suggested to me that the previous post took an unnecessarily harsh and gloomy view of things. Maybe, but I expect better. From myself and those around me. We struggled a little, let the embers smolder rather than build that towering bonfire. I have more static students. I have far fewer proficient students. I have fewer students who achieved the goal of three years' growth in one year. There's an obvious danger in locking yourself into a must-get-better-constantly model, especially when we're dealing with variance in the extent of success, the kind of success that, if the number-crunchers' data is to be believed, no one's been having with these kids. Still, it doesn't feel as good getting my pie charts back, and there's a clear imperative to come back with more focus, more energy, and more grit this year.
5 Comments:
At my school we're having a lot of conversation about data: collecting it, using it, what it tells us. I'm proud of my 8th grade exam scores - always two years out-of-date, of course - even though the second year's scores fell a lot compared to the first. But I share your feeling: these are good, but never good enough. And although the drop is entirely explainable based on who was teaching what that year, and we have a much stronger team now, the drop is still not something that makes me happy.

As a school, we make AYP easily, but at a citywide level we've done well enough to be moved up to a new category of "comparison schools" - and we don't look good in that category. Never mind that we're the only school in that category that is not in Manhattan. It's not an excuse, but it is a fact. Then there's some disturbing conversation about how we should accept a broader range of kids coming in... something I completely agree with for a variety of philosophical reasons around equity, but not for the reason stated: "We're being measured on improvement, we should start with more room to improve." That's a lousy reason to take more struggling kids, IMO.

Messy. How do we celebrate our successes, acknowledge the conditions that make our situation challenging, and still push for more & better? It's a fine line to walk between frustration and complacency.
Hi TMAO, it is Liz from I Speak of Dreams.
One thing I don't quite grasp about California's testing and reporting is that the scores are aggregated, rather than pegged to the individual students.
Wouldn't tracking individual students make more sense?
I'm with Liz. Connecticut deals with its scores in the same way, from what I can tell, not taking into account that each year's crop of 10th graders contains different students with different issues. Add to that the fact that they still haven't entirely settled on a test format (there's always some new wrinkle), and it makes me wonder about the validity of the thing as a determiner of graduation status.
TMAO, you've given me some things to think about on my last day of summer vacation. Thanks for this post, and the last one.
Hi Liz; hi Jeff,
The API gets props for being a growth-based measure that heavily weights moving kids out of the lowest quintile. Where it falls down is that rather than measuring growth from the start of year 1 to the end of year 1, it measures growth from the end of year 0 to the end of year 1. Those aren't the same kids. In bigger schools, and certainly in elementary schools, this is less of a factor, but we are a relatively small middle school wherein 2/3 of our students were brand new last year.
And so on.
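(To make that aggregation problem concrete, here's a minimal sketch in Python, with made-up names and scale scores, nothing from my actual roster. It contrasts the API-style cohort-to-cohort comparison with a matched-student growth calculation.)

# Hypothetical scale scores for two years of 7th graders (names and numbers invented).
year0 = {"ana": 310, "ben": 295, "carla": 330}   # last year's 7th graders
year1 = {"ana": 345, "ben": 322, "dmitri": 250}  # this year's 7th graders; dmitri is brand new

def cohort_growth(prev_cohort, curr_cohort):
    # API-style: compare this year's cohort average to last year's cohort average,
    # even though the two averages can come from entirely different kids.
    prev_avg = sum(prev_cohort.values()) / len(prev_cohort)
    curr_avg = sum(curr_cohort.values()) / len(curr_cohort)
    return curr_avg - prev_avg

def matched_growth(prev_scores, curr_scores):
    # Student-level: average gain computed only over kids present in both years.
    gains = [curr_scores[s] - prev_scores[s] for s in curr_scores if s in prev_scores]
    return sum(gains) / len(gains) if gains else None

print(cohort_growth(year0, year1))   # -6.0: the "school" looks like it got worse
print(matched_growth(year0, year1))  # 31.0: every returning kid actually grew

With two-thirds turnover, the cohort number mostly reflects who showed up, not what anyone learned.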
This comment has been removed by a blog administrator.