A Los Angeles Times series that rated thousands of elementary school teachers based on their students' average test score gains rattled the education world -- including many who study teacher effectiveness for a living.

At the core of the provocative newspaper report is a new method of teacher evaluation with an esoteric name -- "value-added" -- and a complicated statistical formula. This measurement is designed to estimate how much progress a teacher's students made during a given school year, on average, compared with the progress of other teachers' students, as captured by standardized tests.

The Times analyzed seven years of student test data for teachers in grades 3 to 5 and assigned 6,000 teachers rankings from 1 to 5, from "least effective" to "most effective." Teachers whose students showed greater improvement than others, on average, received higher rankings.
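The newspaper's actual formula is more elaborate, but the basic logic of a value-added calculation can be sketched in a few lines of code. The example below is a deliberately simplified, hypothetical illustration -- invented scores and a one-variable prediction, not the Times' model: predict each student's score from the prior year's score, credit each teacher with the average "surprise" among her students, and sort teachers into quintiles.

```python
# Simplified, hypothetical sketch of a value-added calculation -- not the
# Times' actual model. All student scores below are invented.
import numpy as np

rng = np.random.default_rng(0)

n_teachers, n_students = 40, 25
teacher_effect = rng.normal(0, 5, n_teachers)              # each teacher's true impact
prior = rng.normal(600, 40, (n_teachers, n_students))      # last year's scores
noise = rng.normal(0, 25, (n_teachers, n_students))        # everything else in a child's life
current = prior + 10 + teacher_effect[:, None] + noise     # this year's scores

# 1. Predict this year's score from last year's (a one-variable stand-in for
#    the richer statistical controls a real model would use).
slope, intercept = np.polyfit(prior.ravel(), current.ravel(), 1)
predicted = intercept + slope * prior

# 2. A teacher's value-added estimate is her students' average "surprise":
#    how much they beat (or missed) their predicted scores.
value_added = (current - predicted).mean(axis=1)

# 3. Sort teachers into quintiles, 1 ("least effective") through 5 ("most effective").
cutoffs = np.quantile(value_added, [0.2, 0.4, 0.6, 0.8])
ranking = np.digitize(value_added, cutoffs) + 1
print(sorted(zip(value_added.round(1), ranking), reverse=True))
```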

The nation's education secretary, Arne Duncan, endorsed the Times decision to publish the names and ratings of the teachers, saying parents have the right to know if their child's teacher is effective. But some, including noted UC Berkeley education professors Bruce Fuller and Xiaoxia Newton, said the Times acted irresponsibly by publishing the names of individual teachers and their rankings. They also argued that it's impossible to accurately reduce a teacher's effectiveness to one number. "Dumbing-down the public discourse does little to lift teacher quality," Fuller and Newton wrote in an opinion piece published in the Times.


To explore the issues in greater depth, Fuller and others at UC Berkeley organized a public forum on the subject. Panelists at the Sept. 27 event include a statistician with UC Berkeley's Graduate School of Education, an Oakland elementary school principal, a Hoover Institution economist, a state senator, and a high school journalism teacher from Walnut Creek.

"There's sort of a critical mass of researchers and faculty members here who think we should encourage a more informed conversation about how we should be evaluating teachers," Fuller said.

While better than previous models, some experts say, the value-added measure can be unreliable and unpredictable. Researchers with the U.S. Department of Education and with the Economic Policy Institute have recently urged policymakers not to rely too heavily on this method for high-stakes decisions, such as firing, discipline and pay.

A statistical analysis conducted by the federal Education Department's National Center for Education Evaluation found that in the typical "value-added" measurement system, one in four average teachers will be misidentified as poor performers, and that one in four poor performers will be overlooked.
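Those error rates stem from statistical noise: with only a couple dozen students in a classroom, a teacher's measured gains bounce around from year to year, so teachers can land in the wrong group by chance. The simulation below is a rough, purely illustrative sketch of that mechanism, with invented effect sizes and noise levels rather than the federal study's model.

```python
# Rough, hypothetical simulation of why value-added ratings misclassify
# teachers: classroom-level noise. All numbers here are invented for
# illustration; this is not the federal study's model.
import numpy as np

rng = np.random.default_rng(1)
n_teachers, n_years = 10_000, 3

true_effect = rng.normal(0, 5, n_teachers)          # each teacher's real impact
# Each year's measured gain = real impact + noise from that year's classroom.
yearly = true_effect[:, None] + rng.normal(0, 7, (n_teachers, n_years))
estimate = yearly.mean(axis=1)                      # value-added estimate, 3-year average

truly_weak = true_effect < np.quantile(true_effect, 0.2)   # genuinely weakest fifth
rated_weak = estimate < np.quantile(estimate, 0.2)         # rated in the weakest fifth

overlooked = (truly_weak & ~rated_weak).sum() / truly_weak.sum()
misidentified = (~truly_weak & rated_weak).sum() / rated_weak.sum()
print(f"Genuinely weak teachers the rating overlooks: {overlooked:.0%}")
print(f"Rated-weak teachers who are not genuinely weak: {misidentified:.0%}")
```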

"Though (value-added) methods have allowed for more sophisticated comparisons of teachers than were possible in the past, they are still inaccurate " said an EPI policy brief released late last month.

The Oakland school district has the capacity to do a similar assessment of its teachers, but it hasn't done so. Superintendent Tony Smith said his staff is working with hundreds of teachers to define what excellent teaching is and how to measure it. He said student test score data "has to" be included in evaluations along with measures such as suspension rates, student attendance, and the amount of time teachers spend learning from their colleagues.

"Is it better to be in one classroom or another? We have to be able to know that," he said.

The Mt. Diablo school district in Contra Costa County has recently created a new division that is pushing for increased testing, in the hopes that teachers will adjust their lessons based on how students are learning. But it doesn't evaluate its teachers based on test scores.

Mike Langley, president of the district's teachers union, doesn't think it should. Testing doesn't take into account the variables that shape children's lives outside the classroom, he said. And because testing focuses mainly on English and math, he argued, it would give short shrift to subjects such as physical education or ceramics, for which evaluating teaching by standardized scores would be nearly impossible.

"If you can't evaluate all teachers using the system, then the system has to be flawed," Langley said. "We want to quantify everything in our culture and there are some things, unfortunately, that are not quantifiable. And that makes people angry."

Dale Eilers, a teacher at Oakland's Manzanita SEED Elementary School, said she thinks test scores should be part of a teacher's evaluation, but only for teachers -- like her -- who aren't required to use a set curriculum. It's fair to judge her teaching by her students' scores, she said, because she writes her own curriculum and is free to tailor it to her students' needs.

Like many teachers, Eilers uses periodic, or "benchmark," tests to see how well her students are learning what she's teaching. She alters her lessons accordingly. Over the summer, after the children have moved on to the next grade, she sees how they did on the state test.

"I'm always disappointed," Eilers said. For her, she said, the lines on the data report aren't just numbers. They are people, and she imagines them "playing catch-up" in the next grade.

Tajada Scarbrough, whose children attend Cox Academy, an East Oakland charter school, said she thinks schools should use test score data internally to spot weaknesses and help teachers improve.

"But to put it out there to me, and to the neighbor next door, and to the lady next door? No, because it's too easy to speculate."

Staff Writer Theresa Harrington contributed to this story. Read Katy Murphy's Oakland schools blog at www.ibabuzz.com/education. Follow her at Twitter.com/katymurphy.

GRADING THE TEACHERS
WHAT: A forum to discuss teacher evaluations and test score data in light of the Los Angeles Times' "Grading the Teachers" series
WHEN: 1:30-4:30 p.m. Sept. 27
WHERE: UC Berkeley, Banatao Auditorium, Sutardja Dai Hall
DETAILS: http://gse.berkeley.edu