Implicit Bias in Peer Review

When the term implicit bias is mentioned, many people's initial response is "But I'm not biased!" Of course, most of us are keenly aware of explicit bias, or prejudice - the kind that is easy to identify, might provoke eye rolls or dropped jaws, and is openly discouraged in our educated circles. Implicit bias is much more discreet, and much harder to identify. It is also something we are all guilty of, because it operates without our even knowing.

Implicit bias is intimately linked to our personal experience and unconsciously shapes our opinions and decisions, both positively and negatively. It is molded over a lifetime of experiences, images, stories, and perceptions. Sometimes our implicit bias runs contrary to our stated ideals. Implicit bias is omnipresent, and it predicts our behavior and responses to specific situations, our judgments, and our decisions. It affects everyone, regardless of race, religion, or origin. The good news is that implicit bias is also malleable, and shifting your own biases begins when you become aware of them.

How does implicit bias play out in scientific peer review? If you've sat on a grant review panel or reviewed a paper for publication, you may have heard or thought: "Her other publications aren't in high-impact journals." "I'm not sure that institution has enough resources." "He could have used a more sophisticated technique to answer this question." Alternatively: "Dr. So-and-so has always produced good work." "Everything from that institution is high quality." Implicit bias is lurking in all of these assumptions. Does publication in a high-impact journal necessarily reflect the quality of the work? If you only publish in high-impact journals, you might lean toward that opinion. Does an appointment at a particular institution guarantee that you're producing the best science? Maybe - but not necessarily - and you're more prone to believe that if you are, or have been, at one of those top institutions.

What is the solution? As with many things in life: education and awareness. We can't eliminate our own implicit bias. However, if we build awareness of how our experiences and perceptions shape our views, we can begin to slowly chip away at our biases. Or, at the very least, we can begin to question them. In peer review, this might mean probing whether our own experiences are universal. How does our own position/age/race/gender/institution/research area/fill-in-the-blank influence our view of what makes a scientist/grant proposal/research publication successful? Maybe, at your institution, you have the most sophisticated facilities at your disposal; the author of the paper you're reviewing, however, may not have access to the same resources. Does that ultimately diminish the quality of their science? Is there bias in believing that only the newest, most sophisticated approaches yield appreciable results? Maybe - but reflecting on how we form our views is what we do every day as scientists, and it's time to apply that same approach to how we form our views during peer review.


Anna E. Beaudin, PhD
Assistant Professor
Department of Molecular and Cell Biology
School of Natural Science
University of California, Merced

Comments

  1. I think we have all been in a grant review session when comments made by a reviewer did not match facts in the proposal. Of course, someone might have simply missed something as they reviewed a ton of grants (and we all usually give them the benefit of the doubt). However, I think about implicit bias using that example. Do the ground-state facts and perception match? When they do not match, then one reason could be implicit bias. The question is whether we can be self-aware enough to catch when that happens.
