In a striking analysis published in Nature, Stanford University researcher John Ioannidis, an expert in meta-research at the Stanford Prevention Research Center, has seriously questioned the way the National Institutes of Health funds research proposals. He and his colleague Joshua Nicholson have argued, in what reads like a rousing condemnation of the status quo, that peer review, the process by which study sections review and rank research applications, is broken. They contend that peer review at NIH encourages “conformity, if not mediocrity”, favoring proposals submitted by people who know how to network and play the petty games of academic sociology over those from people with original and potentially influential ideas.
These conclusions rest heavily on the observation that only 40 percent of scientists with highly cited papers (say, those with more than 1,000 citations) are principal investigators on NIH grants. In other words, the scientists whose work their peers value most highly are often not receiving NIH support for that work.
Of course, the analysis may be imperfect. For one, using a high citation count as a proxy for originality is probably not entirely correct (but then, what is a good proxy for originality?). It is also possible that a good fraction of these investigators never applied for NIH funding in the first place. And, finally, they may have other sources of support for their research, most likely the Howard Hughes Medical Institute (should we then conclude that HHMI funds more original research than NIH?).
Still, it may well be true that the peer review system is broken. According to Ioannidis and Nicholson, the majority of the authors of the most influential papers in medicine and the life sciences do not have NIH funding, and their funding rate may be lower than average. Perhaps the most disturbing observation, and the one that truly needs the closest scrutiny, is that study section members are almost always funded while their citation impact is typically low or average: they are not the high-impact innovators.
This leaves us with a sad reflection: a truly innovative idea probably cannot be appreciated by peers, while an idea that peers can readily grasp (to the point that they are willing to fund it) is probably not innovative.
NIH seems aware of this problem and has earnestly tried to address these concerns by introducing specific award categories such as the Pioneer and New Innovator Awards. Perhaps Ioannidis and Nicholson will be willing to evaluate how effective these categories are at capturing true talent.