Are scientists any good at judging the importance of the scientific work of others? According to a study published 8 October in the open access journal PLOS Biology (with an accompanying editorial), scientists are unreliable judges of the importance of fellow researchers' published papers.
The article's lead author, Professor Adam Eyre-Walker of the University of Sussex, says: "Scientists are probably the best judges of science, but they are pretty bad at it."
Prof. Eyre-Walker and Dr Nina Stoletzki studied three methods of assessing published scientific papers, using two sets of peer-reviewed articles. The three assessment methods the researchers looked at were: subjective post-publication assessment by other scientists, the number of citations a paper receives, and the impact factor of the journal in which it is published.
The findings, say the authors, show that scientists are unreliable judges of the importance of a scientific publication: they rarely agree on the importance of a particular paper and are strongly influenced by where the paper is published, over-rating science published in high-profile scientific journals. Furthermore, the authors show that the number of times a paper is subsequently referred to by other scientists bears little relation to the underlying merit of the science.
As Eyre-Walker puts it: "The three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased and expensive method by which to assess merit. While the impact factor may be the most satisfactory of the methods considered, since it is a form of prepublication review, it is likely to be a poor measure of merit, since it depends on subjective assessment."
The authors argue that the study's findings could have major implications for any future assessment of scientific output, such as the one currently being carried out for the UK Government's forthcoming Research Excellence Framework (REF). Eyre-Walker adds: "The quality of the assessments generated during the REF is likely to be very poor, and calls into question whether the REF in its current format is a suitable method to assess scientific output."
PLOS Biology is also publishing an accompanying Editorial by Dr Jonathan Eisen of the University of California, Davis, and Drs Catriona MacCallum and Cameron Neylon from the Advocacy department of the open access organization the Public Library of Science (PLOS).
These authors welcome Eyre-Walker and Stoletzki's study as being "among the first to provide a quantitative assessment of the reliability of evaluating research", and encourage scientists and others to read it. They also support their call for openness in research assessment processes. However, they caution that assessment of merit is intrinsically a complex and subjective process, with "merit" itself meaning different things to different people, and point out that Eyre-Walker and Stoletzki's study "purposely avoids defining what merit is".
Dr Eisen and co-authors also tackle the suggestion that the impact factor is the "least bad" form of assessment, recommending the use of multiple metrics that appraise the article rather than the journal ("a suite of article level metrics"), an approach that PLOS has been pioneering. Such metrics might include "number of views, researcher bookmarking, social media discussions, mentions in the popular press, or the actual outcomes of the work (e.g. for practice and policy)."
Funding: This work was supported by the salary paid to AEW. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Authors: Eyre-Walker A, Stoletzki N
PLoS Biol 11(10): e1001675. doi:10.1371/journal.pbio.1001675
Article adapted by Medical News Today from the original press release.
Citation: Biology, PLOS. "Scientists 'bad at judging peers' published work', says new study." Medical News Today. MediLexicon, Intl., 8 Oct. 2013. Web. <http://www.medicalnewstoday.com/releases/266916>