Facing tough competition from other states, California has been denied education stimulus funds twice in the past three months by the U.S. Department of Education.
The culprit? Not politics, say federal officials. Not a belief that California is too far gone to be saved. Rather, officials say, an objective, "peer review" process is the responsible party.
I decided to look into how the peer review process works, because its outcome could directly affect millions of California schoolchildren.
The first time California was dinged was in March, when it lost out on potentially hundreds of millions of dollars from the Race to the Top fund. The second rejection came in May, when California's $20 million grant proposal to build an ambitious longitudinal data system – one that would track students from kindergarten through 12th grade, into college and on into the workforce – met the same fate.
California's longitudinal data application was evaluated by a panel of reviewers convened by the Institute of Education Sciences, an agency established during the administration of President George W. Bush. The goal was purportedly to make education policymaking more "science-based."
The Peer Review Summary Statement – made available publicly here for the first time, as far as I know – shows that during February meetings in Washington, D.C., a review panel assembled by the institute gave California's data-tracking proposal a respectable score of 2.346. (Scores ran on a scale of one to five, with one being outstanding and five being poor.)
But that put California in 26th place out of 50 states – and only the top 20 states received grants from the $250 million available. Many of the winning states have student populations a fraction the size of California's. (For example, while California got nothing, Arkansas got $9.8 million, Maine $7.3 million and Virginia $17.5 million.)
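To make the cutoff mechanics concrete, here is a minimal sketch of a score-then-rank competition. The Department has not published exactly how panel scores were aggregated, so the sketch simply assumes each state ends up with a single panel score and the best-scoring states, up to the number of slots, are funded; every score and state name except California's reported 2.346 is invented for illustration.

```python
# Minimal sketch of a score-then-rank grant competition.
# Scores run from 1 (outstanding) to 5 (poor), so LOWER is
# better. Every score except California's reported 2.346 is
# hypothetical, as are the placeholder state names.

panel_scores = {
    "State A": 1.902,     # hypothetical
    "State B": 2.115,     # hypothetical
    "State C": 2.201,     # hypothetical
    "California": 2.346,  # the score reported in the summary statement
    "State D": 2.480,     # hypothetical
}

FUNDING_SLOTS = 3  # the real competition funded the top 20 of 50

# Lower score = better rank, so sort ascending by score.
ranked = sorted(panel_scores.items(), key=lambda kv: kv[1])

for rank, (state, score) in enumerate(ranked, start=1):
    status = "funded" if rank <= FUNDING_SLOTS else "not funded"
    print(f"{rank}. {state}  {score:.3f}  ({status})")
```

Run with three slots and five applicants, the sketch reproduces California's predicament: a respectable score that still lands just below the cutoff, with no mechanism for the applicant to improve it.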
Many of the concerns raised by the two principal reviewers were technical in nature and could have been addressed if California officials had been given a chance to amend the 238-page proposal or clarify certain points.
For example, Reviewer A, who was much more critical of California's longitudinal data proposal than Reviewer B, concluded the following:
There is no plan to train local educators in the collection and entry of data and no plan to monitor and evaluate the project. These are major oversights. If the State were to adjust this application to address these adequately, there would be very strong support for the funding of this project.
So had California been able to provide such a plan, it appears the state would have had a good chance of receiving funding.
But neither California nor any other state had a chance to respond to any of the questions raised by the reviewers before the state's application was rejected.
Sue Betka, the Institute of Education Sciences' deputy director for administration and policy, said that the U.S. Department of Education issued very extensive instructions for how to complete the proposal, and any state could have requested input from her staff before submitting it.
"We don't have a process by which applicants are able to respond to questions (after they have submitted their proposals)," she said.
Rather, a panel of reviewers poses the questions – but no one is there to answer them.
And who are the reviewers?
They are selected in response to outreach efforts by the Department of Education. All have "a mix of expertise in building and managing state databases," said Anne Ricciuti, the Institute's deputy director for science.
Each reviewer receives $2,000 for the work, along with travel expenses. The Education Department doesn't disclose the names of reviewers on any of its panels – although at the end of each year, it provides a list of all the reviewers it has used.
Each state's longitudinal-tracking proposal was assigned to two reviewers who provided "initial scores and written critiques."
What was striking was how differently the two reviewers assessed California's proposal.
Reviewer A expressed doubts that "the major changes envisioned (in California's proposal) would actually be implemented." He or she also complained that "there is no plan to train local educators" to collect and enter data and "no plan to monitor and evaluate the project."
Reviewer B was far more positive, describing California's proposal as "a thoughtful and well conceptualized application." The reviewer said the "state has the internal expertise," which it will supplement "with appropriate external consultants and organizations." The state had put together a "workflow that can be carried out with the specified timeline ... and the outcomes, objectives and tasks are measurable in terms of specific deliverables."
How could reviewers have such different views of the same proposal? I asked whether reviewers are given any training on how to score proposals, or whether there are any checks to make sure that scores given by one reviewer actually mean the same thing as scores given by another – what statisticians call "inter-rater reliability."
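For readers curious what such a check might look like, here is a minimal sketch, using invented scores, of one simple measure of agreement: the correlation between two reviewers' scores across a set of proposals. (Statisticians typically reach for more specialized measures, such as Cohen's kappa or the intraclass correlation, but the basic idea is the same.)

```python
# One simple inter-rater reliability check: correlate the two
# reviewers' scores across all proposals. All scores here are
# hypothetical; the Department has not released reviewer-level data.

from statistics import correlation  # requires Python 3.10+

reviewer_a = [2.9, 1.8, 3.4, 2.2, 4.1]  # hypothetical scores
reviewer_b = [1.6, 1.9, 3.1, 2.4, 3.8]  # hypothetical scores

r = correlation(reviewer_a, reviewer_b)  # Pearson's r
print(f"inter-rater correlation: {r:.2f}")
# Values near 1.0 mean the reviewers rank proposals similarly;
# sharp disagreements on single proposals (like the first pair
# here, 2.9 vs. 1.6) pull the figure down.
```

A single number like this would not settle the question, but it would at least make disagreements like the one between Reviewer A and Reviewer B visible at a glance.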
After the two reviewers have gone through the proposals, they gather with the full panel – a total of 16 members – to review all 53 proposals received by the Department of Education, Ricciuti explained in an e-mail.
"In any process involving peer review, there are advantages and disadvantages," Ricciuti said. "There is no perfect way to do peer review." But she said the procedure is one followed by other government agencies, including the National Institute of Mental Health.
The problem for California – and the nation – is that the state is home to one of every eight public school children in the United States, so "peer review" decisions made by an anonymous panel carry potentially significant consequences well beyond the state's borders. "Peer review" may work well for deciding research grants to individual university researchers around the nation, but it may need to be adapted for big grants to states, where decisions carry real-life consequences for residents.
That is even more true in a "high-stakes" competition like this one, where the state is applying for funds to build the very systems without which it could be disadvantaged in future competitions for federal money.
Although it may involve more work for federal officials, it seems reasonable to give states a chance to respond to questions posed by reviewers and to make adjustments to their proposals if they choose to do so.
That kind of back-and-forth is the way it typically works when submitting proposals to private foundations, something I have done dozens of times on behalf of a range of projects and organizations. The end result, I have found, is almost always a stronger proposal and plan, with better outcomes for all.