Jamie Hale

Thursday, January 11, 2018

Peer Review is not the Antidote



In the peer review process, a paper is submitted to a journal and evaluated by several reviewers (often individuals with an impressive history of work in the specific area the article addresses). After critiquing the paper, the reviewers submit their comments to the editor. Then, based on the commentaries from the reviewers, the editor decides whether to publish the paper, suggest additional changes that could lead to publication, or reject the paper.

Single-Blind and Double-Blind Reviews

In single-blind review, authors do not know who the reviewers are. In double-blind review, authors do not know who the reviewers are, and reviewers also do not know the identity of the authors. In many fields single-blind review is the norm, while in others double-blind review is preferred.

“Peer review is one way (replication is another) science institutionalizes the attitudes of objectivity and public criticism.  Ideas and experimentation undergo a honing process in which they are submitted to other critical minds for evaluation.  Ideas that survive this critical process have begun to meet the criterion of public verifiability” (Stanovich, 2007, p. 12).

Criticisms of the Peer Review Process

Reviewers find it hard to remain purely objective due to their own education, experience, and biases

Critics point out that the process is slow, which may deter the submission of quality papers

There are many examples of poor research published in peer-reviewed journals, which indicates that the peer review process is often unsuccessful in preventing the publication of bad science

Sometimes good research is not published, especially when findings are not statistically significant. Publication bias is problematic and demonstrates confirmation bias (see the simulation sketch after this list)

Reviewers are not always knowledgeable about the content they are reviewing (I have been asked to review papers whose content was outside my area of knowledge)

Lack of agreement and communication among reviewers

Reviewers tend to be highly critical of articles that contradict their own views, while being less critical of articles that support their personal views (an example of myside bias). Well-known, established scientists are more likely to be recruited as reviewers

The process lacks standardized criteria, and criteria for publication vary widely among scholarly publications.
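The publication bias point above can be made concrete with a small simulation. Everything in this sketch is hypothetical (the effect size, sample sizes, and the 0.05 cutoff are assumptions for illustration, not data from any real journal); the point is simply that if only statistically significant results reach print, the published record tends to overstate the true effect.

```python
# Illustrative simulation only (hypothetical numbers): if journals tend to
# publish only statistically significant results, the published estimates
# systematically overstate the true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2          # small true difference between group means (in SD units)
n_per_group = 20           # small samples make overestimation worse
n_studies = 2000

published, all_estimates = [], []
for _ in range(n_studies):
    a = rng.normal(true_effect, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    estimate = a.mean() - b.mean()
    p = stats.ttest_ind(a, b).pvalue
    all_estimates.append(estimate)
    if p < 0.05:           # the "file drawer": only significant studies get published
        published.append(estimate)

print(f"true effect:                    {true_effect:.2f}")
print(f"mean estimate, all studies:     {np.mean(all_estimates):.2f}")
print(f"mean estimate, published only:  {np.mean(published):.2f}  "
      f"({len(published)} of {n_studies} studies)")
```

Under these assumed numbers, the average estimate across all simulated studies is close to the true effect, while the average among "published" (significant-only) studies is noticeably inflated.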

Final Word: The Peer Review Process
The peer review process is not perfect, but some researchers suggest it is one of the best safeguards we have against junk science (Stanovich, 2007). When evaluating the worth of scientific data, in addition to whether it is published in a peer-reviewed journal, it is important to take into consideration: funding sources, study replication, study design, sample size, conflicts of interest, sampling error, different measures of reliability and validity, reporting limitations, and other possible criticisms of the study (with special concern for *statistical validity, which is often not acknowledged or understood).

There are good studies that never get published in peer-reviewed publications, and low-quality studies are published by peer-reviewed journals. It is erroneous to label a study, review, commentary, meta-analysis, or any other scholarly paper as high quality based solely on its peer review status. This overglorification of peer review pervades academia and pop science.

* When researchers question a study’s statistical validity, they are questioning how well the conclusions coincide with the results, represented as statistics. Interrogating statistical validity may include some of the following questions: If the study found a difference, what is the probability that the conclusion was a false alarm? If the study found no difference, what is the probability that a real relationship went unnoticed? What is the effect size? Is the difference between groups statistically significant? Are the findings practically significant? What type of inferential statistics were used to assess predictions? Could different statistical procedures have been used? How would different samples influence the statistical findings? Statistics make use of samples; inferences about the population are derived from data collected in the study. It is important to avoid exaggerating the findings, to consider sampling error, external validity, and the need for converging evidence, and to consider what the findings indicate about the population they are meant to represent.
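To make a few of these questions concrete, here is a minimal sketch in Python using simulated data (the group labels, sample sizes, and underlying means are assumptions for illustration, not values from any particular study). It shows a significance test, an effect size (Cohen's d), and how sampling error can change the result from one sample to the next.

```python
# Illustrative sketch only: a simulated two-group comparison touching on a few
# of the statistical-validity questions above (significance, effect size,
# sampling error). All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical samples: a "treatment" and a "control" group drawn from
# populations whose true means differ only slightly.
treatment = rng.normal(loc=52.0, scale=10.0, size=30)
control = rng.normal(loc=50.0, scale=10.0, size=30)

# Is the difference statistically significant? A "false alarm" (Type I error)
# would be concluding there is a difference when there really is none.
t_stat, p_value = stats.ttest_ind(treatment, control)

# Effect size (Cohen's d): how large is the difference, independent of sample size?
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")

# Sampling error: rerunning the same comparison on fresh samples can flip the
# conclusion, which is one reason replication and converging evidence matter.
replication_p_values = []
for _ in range(5):
    t2 = rng.normal(52.0, 10.0, 30)
    c2 = rng.normal(50.0, 10.0, 30)
    replication_p_values.append(stats.ttest_ind(t2, c2).pvalue)
print("p-values across simulated replications:",
      [round(p, 3) for p in replication_p_values])
```

With a modest true difference and small samples like these, the simulated replications typically produce a mix of significant and nonsignificant p-values, which is the "real relationship went unnoticed" (Type II error) scenario described above.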


Addendum: 2-9-18



From When Does Peer Review Make No Damn Sense:


"What sort of errors can we expect peer review to catch? ...I’m well placed to answer this question as I’ve published hundreds of peer-reviewed papers and written thousands of referee reports for journals. And of course I’ve also done a bit of post-publication review in recent years.

To jump to the punch line: the problem with peer review is with the peers.


In short, if an entire group of peers has a misconception, peer review can simply perpetuate error. We’ve seen this a lot in recent years, for example that paper on ovulation and voting was reviewed by peers who didn’t realize the implausibility of 20-percentage-point vote swings during the campaign, peers who also didn’t know about the garden of forking paths. That paper on beauty and sex ratio was reviewed by peers who didn’t know much about the determinants of sex ratio and didn’t know much about the difficulties of estimating tiny effects from small sample sizes.

OK, let’s step back for a minute. What is peer review good for? Peer reviewers can catch typos, they can catch certain logical flaws in an argument, they can notice the absence of references to the relevant literature—that is, the literature that the peers are familiar with. That’s how the peer reviewers for that psychology paper on ovulation and voting didn’t catch the error of claiming that days 6-14 were the most fertile days of the cycle: these reviewers were peers of the people who made the mistake in the first place!

Peer review has its place. But peer reviewers have blind spots. If you want to really review a paper, you need peer reviewers who can tell you if you’re missing something within the literature—and you need outside reviewers who can rescue you from groupthink. If you’re writing a paper on himmicanes and hurricanes, you want a peer reviewer who can connect you to other literature on psychological biases, and you also want an outside reviewer—someone without a personal and intellectual stake in you being right—who can point out all the flaws in your analysis and can maybe talk you out of trying to publish it.

Peer review is subject to groupthink, and peer review is subject to incentives to publishing things that the reviewers are already working on."


From: Evidence on peer review—scientific quality control or smokescreen?

"Summary points


  • Blinding reviewers to the author’s identity does not usefully improve the quality of reviews
  • Passing reviewers’ comments to their co-reviewers has no effect on quality of review
  • Reviewers aged under 40 and those trained in epidemiology or statistics wrote reviews of slightly better quality
  • Appreciable bias and parochialism have been found in the peer review system
  • Developing an instrument to measure manuscript quality is the greatest challenge" 


Even though they are not labeled as peer reviewed, popular publications often use a form of peer review. That is, articles are reviewed by individuals who can be considered peers, suggestions are made regarding changes to the article, and the author responds and revises in light of those suggestions. So the claim that only scholarly "peer review" publications involve reviewing and revising in accordance with others' suggestions is incorrect. But do peer-reviewed journals, as a whole, publish higher quality information? Yes. To reiterate, the peer review process has strengths, but it has limitations, and it shouldn't be used "alone" as a marker of quality.

Read these:

Effects of Editorial Peer Review

Effects of Training on Quality of Peer Review

 

 
   


