The publication of a controversial and groundbreaking article by the Los Angeles Times raises complex questions about whether to "out" teachers whose students perform poorly on reading and math tests.
That is especially true when using "value-added" techniques that are complicated even for statisticians who do this kind of thing for a living.
The Times' analysis holds the potential to fling open the door of any California classroom for public examination in a way that has never been attempted before.
U.S. Secretary of Education Arne Duncan has no problems with the practice. "What do they have to hide?" he said in response to the Times article, referring to the teachers identified in the report.
Bonnie Reiss, Gov. Arnold Schwarzenegger's secretary of education, also gushed. "Publishing this data is not about demonizing teachers," she said. "It's going to create a more marketplace-driven approach to results."
On the other hand, the United Teachers of Los Angeles denounced the disclosures as "dangerous and irresponsible." Union leader A.J. Duffy is threatening a boycott of the paper, plus possible legal action.
Let's try to get beyond the rhetoric.
Researchers I talked with tell me that if this had been an academic study, human subject research guidelines would never have permitted the disclosure of teachers' names.
Jennifer Imazeki, an economist at San Diego State University, wrote on John Fensterwald's The Educated Guess:
Regardless of how one feels about value-added, as a researcher, I've been shocked at the public disclosure of teachers' names. Most researchers have to sign their lives away in confidentiality agreements if they want to use student-level data with individual identifiers. How in the world did the Times get their hands on this data without such an agreement?
Richard Buddin, a respected economist at the Rand Corporation who as an independent contractor ran the numbers for the L.A. Times, said he had nothing to do with releasing the teachers' names.
In two e-mails to me, he explained that the files he used for his analysis had "scrambled student and teacher identifiers" and that he made "no attempt to link the scrambled identifier with teacher names." "The Los Angeles Times did this after I completed my analysis," he wrote.
So how did the Times get the names of teachers from LAUSD? Simple: They asked for them.
Robert Alaniz, LAUSD's director of communications, told me the district's legal department concluded that under California's Public Records Act, the district had no choice but to release the names of the teachers, and to link their names to the test scores of their students. He said that if test scores had been used as part of a teacher's performance evaluation, the scores would have remained private. But because they aren't used that way, they are not regarded as confidential information.
"We vetted it with our legal staff, and determined that the request was valid, and that we did have to turn over the teachers' names," Alaniz said. "As adults, as employees, their names fall into the public domain."
He said the district has some safety concerns about the Times' plan to publish the names of 6,000 teachers and where they teach, because some may want to keep their location secret from former spouses and others against whom they may have restraining orders. The district also has concerns about an over-reliance on test scores to evaluate teachers. "It should be just one of many different factors," he said.
All this would be more straightforward if teachers were identified based on a clear-cut fact that is either true or false, such as whether they have the proper teaching credentials, or how much they get paid, experts say.
A report issued last month by the U.S. Department of Education's Institute of Education Sciences concluded that "policymakers must carefully consider likely system errors when using value-added estimates to make high stakes decisions regarding educators."
And last fall, the National Research Council took a close look at the administration's promotion of the value-added methodology as a criterion for states to qualify for its $4.3 billion "Race to the Top" program.
The headline announcing its report, referred to briefly in the Times article, declared, "Value-added methods to assess teachers not ready for use in high-stakes decisions."
The distinguished panel that drew up the report, which included two UC Berkeley professors, Michael Hout and Mark Wilson, warned the administration that "although the idea has intuitive appeal, a great deal is unknown about the potential and the limitations of alternative statistical models for evaluating teachers' value-added contributions to student learning."
One of the concerns raised by the panel was the complexity of the statistical methods used, which would make "transparency" difficult and critique nearly impossible for anyone but the most sophisticated statisticians.
That seems to apply to the dense report written by Buddin accompanying the Times article, in which he explains his methodology.
Take this paragraph, picked more or less at random:
Data sets on teacher inputs are incomplete, and observed-teacher inputs may be chosen endogenously with respect to the unobserved-teacher inputs (teacher-unobserved heterogeneity). For example, teacher effort may be difficult to measure, and effort might be related to measured teacher qualifications, i.e., teachers with higher licensure test scores may regress to the mean with lower effort.
Or this paragraph:
Teacher heterogeneity (φ_j) is probably correlated with observable student and teacher characteristics (e.g., non-random assignment of students to teachers). Therefore, random-effect methods are inconsistent, and the fixed-teacher effects are estimated in the model. The fixed-teacher effects are defined as ψ_j = φ_j + q_j ρ.
It will require a lot more than fifth grade arithmetic to penetrate that algebraic thicket.
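For readers who want a foothold in that thicket, the core idea of a fixed-effects value-added model is less exotic than the notation suggests: regress student score gains on a dummy variable for each teacher, while controlling for prior achievement. Here is a minimal sketch in Python with simulated data; it is not Buddin's actual model (which controls for far more and uses real LAUSD records), and all the numbers are invented for illustration.

```python
# Toy fixed-effects value-added sketch (simulated data, NOT Buddin's model).
import numpy as np

rng = np.random.default_rng(0)
n_teachers, students_per = 5, 200

true_effects = rng.normal(0, 3, n_teachers)          # hypothetical teacher effects
teacher = np.repeat(np.arange(n_teachers), students_per)
prior_score = rng.normal(50, 10, teacher.size)       # last year's test score

# Each student's gain = teacher effect + influence of prior score + noise.
gain = true_effects[teacher] + 0.1 * prior_score + rng.normal(0, 5, teacher.size)

# Design matrix: one dummy column per teacher (the "fixed effects")
# plus a centered prior-score control shared by all teachers.
X = np.column_stack([np.eye(n_teachers)[teacher],
                     prior_score - prior_score.mean()])
coef, *_ = np.linalg.lstsq(X, gain, rcond=None)
estimated_effects = coef[:n_teachers]

# Value-added is identified only relative to the average teacher,
# so compare deviations from the mean, not raw coefficients.
rel_est = estimated_effects - estimated_effects.mean()
rel_true = true_effects - true_effects.mean()
print(np.round(rel_est - rel_true, 2))
```

Even in this stripped-down version, the estimates are only reliable relative to the average teacher and only with many students per teacher; with real classrooms of 20 to 30 students, the noise term looms much larger, which is exactly the concern the National Research Council panel raised.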
UPDATE: From Jason Felch:
We welcome the discussion and scrutiny of our articles and methodology, but several things in Freedberg's post require clarification. ...
The premise of Freedberg's post is that, had it been an academic publication, it may have been done differently. It was not an academic publication; it was investigative reporting done in the public interest with public records. The decision to post teachers' names was a journalistic one made after careful consideration at the highest levels of the LA Times.
Freedberg refers to the value-added technique as "mostly untested." The technique was developed in the 1970s and has been used by school districts and states since the early 1990s. Dozens of academic papers during that period have studied the approach, its potential and its limitations. In 2008, for example, leading researchers from Harvard and Dartmouth conducted a random-assignment experimental validation of the approach with LAUSD data and published their results here: http://isites.harvard.edu/fs/docs/icb.topic245006.files/Kane_Staiger_3-1... (PDF)
Finally, Freedberg takes the curious tack of criticizing our transparency. The methodological paper he cites is written for the research community so that our approach can be vetted by experts in the field. For a lay audience, the Times has also published a lengthy Q&A, two videos explaining the approach, a list of research papers reviewed by reporters, an About this Story explaining our process and an Editor's Note addressing the decision to publish teachers' names. It appears Freedberg did not see, or did not care to mention, those efforts.