Is science broken?
- Published on 21 January 2011
- Written by Nicholas C. DiDonato
In 1802, the Italian Gian Domenico Romagnosi discovered something astounding: an electric current can alter the direction of a compass needle. He realized that there must be some sort of relationship between electricity and magnetism. Unfortunately for Romagnosi, no scientific journal would publish his brilliant work. How could this be? Simple: journal publishers didn't realize how important it was. This problem is still around today: Stefan Thurner and Rudolf Hanel (both of the Medical University of Vienna) have found that even a small number of incompetent or biased journal referees can dramatically degrade a journal's quality.
Thurner and Hanel modeled a typical scientific journal in which two referees rate each submission. If both referees approved a paper, it was published; if both rejected it, it was not; and if they disagreed, the paper had a 50% chance of publication.
They categorized refereeing styles into five types. First, “correct” referees would always accept good papers and reject bad ones. Second, “altruists” would accept all papers, while (third) their counterparts, “misanthropists,” would reject all papers. Fourth, “rationalists” would accept or reject papers according to their own interests as academics (e.g., rejecting papers that hinder their own work or embarrass them professionally). Fifth and finally, “random” referees would accept or reject papers at random; this category represents referees who are either incompetent or who rush through their reviews.
When Thurner and Hanel ran the model, the results were depressing. If just 10% of the referees fall into any category other than “correct,” the average quality of published papers drops by roughly one standard deviation. If a journal's referees are divided evenly among the correct, rational, and random types, the quality advantage of peer review essentially disappears.
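The flavor of this kind of model is easy to convey in code. The sketch below is a minimal Monte Carlo illustration, not Thurner and Hanel's actual implementation: the uniform paper-quality distribution, the 0.5 “good paper” threshold, the specific referee mixes, and the way a “rational” referee protects its own standing are all simplifying assumptions made for the example.

```python
import random

THRESHOLD = 0.5  # assumed quality cutoff separating "good" from "bad" papers


def referee_vote(kind, paper_q, own_q, rng):
    """Return True if a referee of the given type votes to accept."""
    if kind == "correct":
        return paper_q > THRESHOLD          # accepts good, rejects bad
    if kind == "altruist":
        return True                          # accepts everything
    if kind == "misanthropist":
        return False                         # rejects everything
    if kind == "rational":
        # Assumption: rejects papers that outshine the referee's own work
        return paper_q <= own_q
    return rng.random() < 0.5                # "random" referee: coin flip


def run_journal(mix, n_papers=20000, seed=1):
    """Simulate a journal; return mean quality of its published papers.

    `mix` maps referee type -> proportion in the referee pool.
    """
    rng = random.Random(seed)
    kinds, weights = zip(*mix.items())
    published = []
    for _ in range(n_papers):
        q = rng.random()  # paper quality, uniform on [0, 1] (assumption)
        votes = []
        for _ in range(2):  # two referees per submission, as in the model
            kind = rng.choices(kinds, weights)[0]
            votes.append(referee_vote(kind, q, rng.random(), rng))
        accepts = sum(votes)
        # Both accept -> publish; split decision -> publish with prob. 0.5
        if accepts == 2 or (accepts == 1 and rng.random() < 0.5):
            published.append(q)
    return sum(published) / len(published)


all_correct = run_journal({"correct": 1.0})
mixed = run_journal({"correct": 0.9, "random": 0.1})
print(f"all-correct referees: {all_correct:.3f}, 10% random: {mixed:.3f}")
```

Even in this toy version, replacing 10% of the correct referees with random ones measurably lowers the mean quality of what gets published, since some bad papers slip through split decisions.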
Cosmologist Daniel Kennefick (University of Arkansas) believes that a major part of the problem is that academia encourages scientists to publish for the sake of publication, even if it adds nothing to the field. Publication leads to tenure, and so personal interests can easily supersede an article’s accuracy or strength of argument.
The field of religion faces the same problem. It too relies on peer-reviewed journals to spread ideas, establish credibility, and grant tenure. Though Thurner and Hanel intended their research as an analysis of scientific journals, their model is abstract enough to apply to any peer-reviewed journal that assigns two referees per submission. For those interested in the intersection of science and religion, it is quite discouraging that both fields can be so easily undermined by a small number of less-than-competent referees.
Can anything be done to improve the system of academic journals? Kennefick suggests that referees need to be accountable for the papers they accept and reject. As the saying goes, “Who watches the watchers?” Implementing some sort of review of reviewers may improve journal quality.
Some journal publishers have started to address the problem. Electronic submission systems often include tracking procedures that allow journal editors to analyze the history of a referee's performance. But this gives only a journal-by-journal view of a referee's history. And no analyses are provided of referee history against article quality (as judged, say, by citation frequency).
More radically, Thurner believes the entire system should be revamped: rather than authors submitting their papers to journals, they should submit them to a pre-publication server, and journals should hire scouts who comb through this server. Ideally, these scouts would find the most innovative and interesting papers and contact the authors with offers of publication. Thurner believes that in this system not only would authors find publishers, but the best papers would receive publication offers from multiple journals. It is doubtful, however, whether such a centralized system could ever be established, given the competitive relationships among journals and journal publishers.
Of course, none of this research means that journals are completely unreliable or that they publish only recycled ideas. It does mean, however, that journals can all too easily fail to publish the papers they should be publishing, and that peer review is a fragile practice. Hopefully the next great ideas in every field will be published by their respective journals. As long as fields do not recognize the revolutions in their midst, they are broken.
For more, see "Peer review highly sensitive to poor refereeing, claim researchers" in Physics World. Thurner’s and Hanel’s paper is also online.