
Peer review is the evaluation of work by one or more people with similar competences as the producers of the work (peers). It functions as a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are used to maintain quality standards, improve performance, and provide credibility. In academia, scholarly peer review is often used to determine an academic paper's suitability for publication. Peer review can be categorized by the type of activity and by the field or profession in which the activity occurs, e.g., medical peer review.

Professional

Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. In academia, peer review is used to inform decisions related to faculty advancement and tenure.[1] Henry Oldenburg (1619–1677) was a German-born British philosopher who is seen as the 'father' of modern scientific peer review.[2][3][4]

A prototype professional peer-review process was recommended in the Ethics of the Physician written by Ishāq ibn ʻAlī al-Ruhāwī (854–931). He stated that a visiting physician had to make duplicate notes of a patient's condition on every visit. When the patient was cured or had died, the notes of the physician were examined by a local medical council of other physicians, who would decide whether the treatment had met the required standards of medical care.[5]

Professional peer review is common in the field of health care, where it is usually called clinical peer review.[6] Further, since peer review activity is commonly segmented by clinical discipline, there is also physician peer review, nursing peer review, dentistry peer review, etc.[7] Many other professional fields have some level of peer review process: accounting,[8] law,[9][10] engineering (e.g., software peer review, technical peer review), aviation, and even forest fire management.[11]

Peer review is used in education to achieve certain learning objectives, particularly as a tool to reach higher order processes in the affective and cognitive domains as defined by Bloom's taxonomy. This may take a variety of forms, including closely mimicking the scholarly peer review processes used in science and medicine.[12][13]

Scholarly

Main article: Scholarly peer review

Government policy

The European Union has been using peer review in the "Open Method of Co-ordination" of policies in the field of active labour market policy since 1999.[14] In 2004, a program of peer reviews started in social inclusion.[15] Each program sponsors about eight peer review meetings each year, in which a "host country" lays a given policy or initiative open to examination by half a dozen other countries and the relevant European-level NGOs. These meetings usually run over two days and include visits to local sites where the policy can be seen in operation. Each meeting is preceded by the compilation of an expert report on which participating "peer countries" submit comments. The results are published on the web.

The United Nations Economic Commission for Europe, through UNECE Environmental Performance Reviews, uses peer review, referred to as "peer learning", to evaluate progress made by its member countries in improving their environmental policies.

The State of California is the only U.S. state to mandate scientific peer review. In 1997, the Governor of California signed into law Senate Bill 1320 (Sher), Chapter 295, Statutes of 1997, which mandates that, before any CalEPA Board, Department, or Office adopts a final version of a rule-making, the scientific findings, conclusions, and assumptions on which the proposed rule is based must be submitted for independent external scientific peer review. This requirement is incorporated into California Health and Safety Code Section 57004.[16]

Medical

Main article: Clinical peer review

Medical peer review may be divided into four classifications: 1) clinical peer review; 2) peer evaluation of clinical teaching skills for both physicians and nurses;[17][18] 3) scientific peer review of journal articles; and 4) a secondary round of peer review for the clinical value of articles concurrently published in medical journals.[19] Additionally, "medical peer review" has been used by the American Medical Association to refer not only to the process of improving quality and safety in health care organizations, but also to the process of rating clinical behavior or compliance with professional society membership standards.[20][21] The terminology therefore has poor standardization and specificity, particularly as a database search term.

Technical

Main article: Technical peer review

In engineering, technical peer review is a type of engineering review: a well-defined process for finding and fixing defects, conducted by a team of peers with assigned roles. Reviews are carried out by peers representing the areas of the life cycle affected by the material being reviewed (usually limited to six or fewer people), and are held within development phases, between milestone reviews, on completed products or completed portions of products.[22]

Criticism

To an outsider, the anonymous, pre-publication peer review process is opaque. Certain journals are accused of not carrying out stringent peer review in order to expand their customer base more easily, particularly journals in which authors pay a fee before publication.[23] Richard Smith, MD, former editor of the British Medical Journal, has claimed that peer review is "ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant; Several studies have shown that peer review is biased against the provincial and those from low- and middle-income countries; Many journals take months and even years to publish and the process wastes researchers' time. As for the cost, the Research Information Network estimated the global cost of peer review at £1.9 billion in 2008."[24]

In addition, Australia's Innovative Research Universities group (a coalition of seven comprehensive universities committed to inclusive excellence in teaching, learning and research in Australia) has found that "peer review disadvantages researchers in their early careers, when they rely on competitive grants to cover their salaries, and when unsuccessful funding applications often mark the end of a research idea".[25]

Low-end distinctions in articles understandable to all peers

John Ioannidis argues that because the exams and other tests people pass on their way from "layman" to "expert" focus on answering questions in time and in accordance with a list of answers, rather than on making precise distinctions (which would be unrecognizable to experts of lower cognitive precision anyway), there is as much individual variation in the ability to distinguish causation from correlation among "experts" as there is among "laymen". As a result, he argues, scholarly peer review by many "experts" passes only articles that are understandable at a wide range of cognitive precision levels, including very low ones. This biases publication toward articles that infer causation from correlation, while articles that do make the distinction are mislabeled as "incompetent overestimation of one's ability" on the part of the authors, because some of the reviewing "experts" are cognitively unable to tell the distinction apart from alleged rationalization of specific conclusions. Ioannidis argues that this makes peer review a cause of selective publication of false research findings while blocking publication of rigorous criticism of them, and that post-publication review repeats the same bias by selectively retracting the few rigorous articles that made it through initial pre-publication peer review, while leaving in print the low-end articles that confuse correlation and causation.[26]

Peer review and trust

Researchers have peer reviewed manuscripts prior to publishing them in a variety of ways since the 18th century.[27][28] The main goal of this practice is to improve the relevance and accuracy of scientific discussions. Even though experts often criticize peer review for a number of reasons, the process is still often considered the "gold standard" of science.[29] Occasionally, however, peer review approves studies that are later found to be wrong, and deceptive or fraudulent results are rarely discovered prior to publication.[30][31] Thus, there seems to be an element of discord between the ideology behind peer review and its practice. By failing to communicate effectively that peer review is imperfect, the message conveyed to the wider public is that studies published in peer-reviewed journals are "true" and that peer review protects the literature from flawed science. A number of well-established criticisms exist of many elements of peer review.[32][33][34] The following are cases of the wider impact that inappropriate peer review can have on public understanding of the scientific literature.

Multiple examples across several areas of science find that scientists elevated the importance of peer review for research that was questionable or corrupted. For example, climate change deniers have published studies in the Energy and Environment journal, attempting to undermine the body of research that shows how human activity impacts the Earth's climate. Politicians in the United States who reject the established science of climate change have then cited this journal on several occasions in speeches and reports.[note 1]

At times, peer review has been exposed as a process that was orchestrated for a preconceived outcome. The New York Times gained access to confidential peer review documents for studies sponsored by the National Football League (NFL) that were cited as scientific evidence that brain injuries do not cause long-term harm to its players.[note 2] During the peer review process, the authors of the study stated that all NFL players were part of a study, a claim that the reporters found to be false by examining the database used for the research. Furthermore, The Times noted that the NFL sought to legitimize the studies' methods and conclusions by citing a "rigorous, confidential peer-review process", despite evidence that some peer reviewers seemed "desperate" to stop their publication. Recent research has also demonstrated that widespread industry funding for published medical research often goes undeclared and that such conflicts of interest are not appropriately addressed by peer review.[35][36]

Another problem that peer review fails to catch is ghostwriting, a process by which companies draft articles for academics who then publish them in journals, sometimes with little or no changes.[37] These studies can then be used for political, regulatory and marketing purposes. In 2010, the US Senate Finance Committee released a report that found this practice was widespread, that it corrupted the scientific literature and increased prescription rates.[note 3] Ghostwritten articles have appeared in dozens of journals, involving professors at several universities.[note 4]

Just as experts in a particular field have a better understanding of the value of papers published in their area, scientists are considered to have a better grasp of the value of published papers than the general public, and to see peer review as a human process with human failings,[38] holding that "despite its limitations, we need it. It is all we have, and it is hard to imagine how we would get along without it".[39] But these subtleties are lost on the general public, who are often misled into thinking that publication in a peer-reviewed journal is the "gold standard" and may erroneously equate published research with the truth.[38] Thus, more care must be taken over how peer review, and the results of peer-reviewed research, are communicated to non-specialist audiences, particularly during a time in which a range of technical changes and a deeper appreciation of the complexities of peer review are emerging.[40][41][42][43] This will be needed as the scholarly publishing system confronts wider issues such as retractions[31][44][45] and the replication or reproducibility "crisis".[46][47][48]

Views of peer review

Peer review is often considered integral to scientific discourse in one form or another. Its gatekeeping role is supposed to be necessary to maintain the quality of the scientific literature[49][50] and avoid a risk of unreliable results, inability to separate signal from noise, and slow scientific progress.[51][52]

Shortcomings of peer review have been met with calls for even stronger filtering and more gatekeeping. A common argument in favor of such initiatives is the belief that this filter is needed to maintain the integrity of the scientific literature.[53][54]

Calls for more oversight have at least two implications that run counter to what is known about genuine scholarship.[38]

  1. The belief that scholars are incapable of evaluating the quality of work on their own, that they are in need of a gatekeeper to inform them of what is good and what is not.
  2. The belief that scholars need a "guardian" to make sure they are doing good work.

Others argue[38] that authors most of all have a vested interest in the quality of a particular piece of work. Only the authors could have, as Feynman (1974)[note 5] puts it, the "extra type of integrity that is beyond not lying, but bending over backwards to show how you're maybe wrong, that you ought to have when acting as a scientist." If anything, the current peer review process and academic system could penalize, or at least fail to incentivize, such integrity.

Instead, the credibility conferred by the "peer-reviewed" label could diminish what Feynman calls the culture of doubt necessary for science to operate as a self-correcting, truth-seeking process.[55] The effects of this can be seen in the ongoing replication crisis, hoaxes, and widespread outrage over the inefficacy of the current system.[32][27] It is common to think that more oversight is the answer, but peer reviewers are not lacking in skepticism. The issue is not the skepticism of the select few who determine whether an article passes through the filter; it is the validation, and the accompanying lack of skepticism, that comes afterwards.[note 6] Here again, more oversight only adds to the impression that peer review ensures quality, thereby further diminishing the culture of doubt and counteracting the spirit of scientific inquiry.[note 7]

Quality research, including some of our most fundamental scientific discoveries, dates back centuries, long before peer review took its current form.[27][56][28] Whatever peer review existed centuries ago took a different form than it does in modern times, without the influence of large commercial publishing companies or a pervasive publish-or-perish culture.[56] Though in its initial conception it was often a laborious and time-consuming task, researchers took on peer review nonetheless, not out of obligation but out of a duty to uphold the integrity of their own scholarship. They managed to do so, for the most part, without the aid of centralised journals, editors, or any formalised or institutionalised process whatsoever. Supporters of modern technology argue[38] that it makes it possible to communicate instantaneously with scholars around the globe, makes such scholarly exchanges easier, and could restore peer review to a purer scholarly form, as a discourse in which researchers engage with one another to better clarify, understand, and communicate their insights.[41][57]

Such modern technology includes posting results to preprint servers, preregistration of studies, open peer review, and other open science practices.[47][58][59] In all of these initiatives, the role of gatekeeping remains prominent, as if it were a necessary feature of all scholarly communication, but critics argue[34] that a proper, real-world implementation could test and disprove this assumption: it could demonstrate researchers' desire for more than traditional journals can offer, and show that researchers can be entrusted to perform their own quality control independent of journal-coupled review. Jon Tennant also argues that the outcry over the inefficiencies of traditional journals centers on their inability to provide rigorous enough scrutiny, and on the outsourcing of critical thinking to a concealed and poorly understood process. Thus, the assumption that journals and peer review are required to protect scientific integrity seems to undermine the very foundations of scholarly inquiry.[38]

To test the hypothesis that filtering is indeed unnecessary to quality control, many of the traditional publication practices would need to be redesigned, editorial boards repurposed if not disbanded, and authors granted control over the peer review of their own work. Putting authors in charge of their own peer review is seen as serving a dual purpose.[38] On one hand, it removes the conferral of quality within the traditional system, thus eliminating the prestige associated with the simple act of publishing. Perhaps paradoxically, the removal of this barrier might actually result in an increase of the quality of published work, as it eliminates the cachet of publishing for its own sake. On the other hand, readers know that there is no filter so they must interpret anything they read with a healthy dose of skepticism, thereby naturally restoring the culture of doubt to scientific practice.[60][61][62]

In addition to concerns about the quality of work produced by well-meaning researchers, there are concerns that a truly open system would allow the literature to be populated with junk and propaganda by those with a vested interest in certain issues. A counterargument is that the conventional model of peer review diminishes the healthy skepticism that is a hallmark of scientific inquiry, and thus confers credibility upon subversive attempts to infiltrate the literature.[38] Allowing such "junk" to be published could make individual articles less reliable but render the overall literature more robust by fostering a "culture of doubt".[60]

One initiative experimenting in this area is Researchers.One, a non-profit peer review publication platform featuring a novel author-driven peer review process.[63] Other similar examples include the Self-Journal of Science, PRElights, and The Winnower, which do not yet seem to have greatly disrupted the traditional peer review workflow. Supporters conclude that researchers are more than responsible and competent enough to ensure their own quality control; they just need the means and the authority to do so.[38]

Notes

  1. Template:Cite journal
  2. Template:Cite web
  3. Template:Cite journal
  4. Template:Cite book
  5. Template:Cite journal
  6. Template:Cite journal
  7. Template:Cite journal
  8. Template:Cite web
  9. Template:Cite web
  10. Template:Cite web
  11. Template:Cite web
  12. Template:Cite journal
  13. Template:Cite journal
  14. Template:Cite web
  15. Template:Cite web
  16. Template:Cite web
  17. Medschool.ucsf.edu
  18. Template:Cite journal
  19. Template:Cite journal
  20. Template:Cite book
  21. Template:Cite web
  22. Template:Cite book
  23. Template:Cite journal
  24. Template:Cite news
  25. Template:Cite news
  26. J. P. A. Ioannidis (2005), "Why Most Published Research Findings Are False"
  27. Template:Cite journal
  28. Template:Cite journal
  29. Template:Cite journal
  30. Template:Cite journal
  31. Template:Cite journal
  32. Template:Cite journal
  33. Template:Cite journal
  34. Template:Cite journal
  35. Template:Cite journal
  36. Template:Cite journal
  37. Template:Cite journal
  38. Template:Cite journal
  39. Template:Cite journal
  40. Template:Cite journal
  41. Template:Cite journal
  42. Template:Cite journal
  43. Template:Cite journal
  44. Template:Cite journal
  45. Template:Cite journal
  46. Template:Cite journal
  47. Template:Cite journal
  48. Template:Cite journal
  49. Template:Cite journal
  50. Template:Cite journal
  51. Template:Cite journal
  52. Template:Cite journal
  53. Template:Cite journal
  54. Template:Cite journal
  55. Template:Cite journal
  56. Template:Cite journal
  57. Template:Cite journal
  58. Template:Cite journal
  59. Template:Cite journal
  60. Template:Cite web
  61. Template:Cite journal
  62. Template:Cite journal
  63. Template:Cite web
