Adam Marcus, Anonymous Peer Review, Blog, Character Assassination, Civil Death, Clare Francis, Defamation, Defamation lawsuit, Donald Trump, Expression of concern, Ivan Oransky, John Ioannidis, Joshua Cherry, Joshua L. Cherry, Joshua L. Cherry NIH, Post publication peer review, Post Publication Peer Review Scam, Reporting Retractions, Research Integrity, Retraction Watch, Science, Science blogs, Science Journalism, Science Transparency, Scientific corruption, US President

Anonymous peer review is fine, while anonymous post-publication review is not

When a scientist submits a paper for publication to a journal, he entrusts the journal editor with the task of finding peers who are able to review the paper and knowledgeable enough to assess its scientific merit. The names of the reviewers are typically concealed from the author. The intent is to grant the reviewer complete freedom in his candid assessment, without fear of retaliation. The system is imperfect, very much so, but over the last three centuries scientists have not managed to come up with anything better.

Post-publication peer review (PPPR), on the other hand, cannot even be said to be imperfect. It is not even wrong. It is a grotesque aberration. PPPR is usually anonymous, but in this case we have absolutely no assurance that the reviewer is a peer of the author, that is, someone capable of passing serious judgment, rather than someone with an ax to grind launching a personal attack. There is simply no editor who arbitrates PPPR, just reporters or science outsiders, like Ivan Oransky, who typically know nothing of the scientific subject of the paper and who merely reproduce a note in a journal, a piece of gossip, or an opinion without adding any value. The consequences of this lack of leadership are dire for science: about 90% of the attacks launched at Oransky’s blog Retraction Watch under the pseudonym Clare Francis are either false or lacking merit, even if they manage to elicit an “expression of concern” (an illegality that stigmatizes a person who is presumed innocent until proven guilty). If US president Donald Trump branded reporters as a pathetic, dishonest bunch, just imagine what he would have to say about blogs like Retraction Watch, where the founding reporters usually know nothing about the science behind their mini-scandals.


This atmosphere of dishonesty provides fertile soil for PPPR, where a few snipers like Joshua L. Cherry (NIH/NCBI?) thrive. As readers may recall, Joshua L. Cherry has been identified by Science Transparency. Cherry is truly obsessed (read Cherry’s exchange with Prof. John Ioannidis), but unfortunately not with producing good science. When he launches personal attacks, Cherry hides behind multiple pseudonyms and e-mail addresses, cowardly shooting from the shadows, yet his style remains unmistakable: he obsessively insists on performing statistical analyses of large datasets with no scientific understanding of the data, or obsessively tries to reproduce results in a field he knows nothing about, failing miserably. Unfortunately, Joshua L. Cherry is the kind of byproduct that Retraction Watch and other such blogs generate. Were it not for the lack of leadership in PPPR, Cherry would probably have remained a scientist not incapable of generating interesting ideas. Yet, like many at Retraction Watch, he got trapped in futile battles against windmills.

As the Romans used to say: video meliora proboque, deteriora sequor (I see the better course and approve it, but I follow the worse). Tragic, tragic…

Howard Hughes Medical Institute, John Ioannidis, National Institutes of Health, NIH, NIH funding, Peer Review, Principal Investigator, Research grant, Study Section

Peer Review: Is NIH Rewarding Talent?

In a striking analysis published in Nature, Stanford University researcher John Ioannidis, the meta-research expert at the Stanford Prevention Research Center, has seriously questioned the way the National Institutes of Health funds research proposals. He and his colleague Joshua Nicholson have argued, in what reads like a rousing condemnation of the status quo, that peer review, the process by which study sections review and rank research applications, is totally broken. The researchers argued that peer review at NIH encourages “conformity, if not mediocrity”, favoring proposals submitted by people who know how to network and play the petty games of academic sociology, rather than those by people with original and potentially influential ideas.

These conclusions rest heavily on the observation that only 40 percent of scientists with highly cited papers (say, those with more than 1000 citations) are principal investigators on NIH grants. That is, those scientists whose peers value their work most highly are often not receiving NIH support for that work.

Of course, the analysis may be imperfect. Here is my critique: using high citation counts as a proxy for originality is probably not entirely correct (but then, what is a good proxy for originality?). It is also possible that a good percentage of these investigators never applied for NIH funding in the first place. And, finally, they may have other sources of support for their research, most likely the Howard Hughes Medical Institute (should we then conclude that HHMI funds more original research than NIH?).

It is perhaps true that the peer review system is broken. According to Ioannidis and Nicholson, the majority of the authors of the most influential papers in medicine and the life sciences do not have NIH funding, and their funding rate is possibly below average. Perhaps the most disturbing observation, and the one that truly needs the closest scrutiny, is that study section members are almost always funded while their citation impact is typically low or average: they are not the high-impact innovators.

This leaves us with a sad reflection. A truly innovative idea probably cannot be appreciated by peers, while an idea that peers can readily grasp (to the point of being willing to fund it) is probably not innovative.

NIH is seemingly aware of this problem and has earnestly tried to address these concerns by introducing specific award categories such as the Pioneer and New Innovator Awards. Perhaps Ioannidis and Nicholson would be willing to evaluate the efficacy of these categories in capturing true talent.
