The Bias in Philosophers’ Beards
The Brandeis Philosophy Department hosted a conference
Consider the assumption that a real philosopher must have a beard. It seems harmless, but it is an example of implicit bias (in this case, the bias that men are more academic than women). Natalia Washington of Washington University in St. Louis discussed issues like this at the Brandeis Philosophy Department’s Sixth Annual Spring Conference, where scholars from across the country gathered to discuss the concept.
The conference was called “Implicit Bias and Social Justice,” and it featured three lectures, including one by Washington, who teaches and studies the cognition of implicit bias. Washington is a postdoctoral fellow in the Philosophy-Neuroscience-Psychology program at Washington University in St. Louis. She studies the “theoretic and conceptual foundations of psychiatry” and pursues “projects in implicit bias and implicit cognition.” Her talk, titled “Implicit Cognition and Practical Reason,” focused on how individuals’ actions are influenced by factors other than strict reason and thus tend to be products of irrational thought.
Washington began by explaining the modern definition of implicit bias: when making decisions, “many, if not most times, we’re influenced by factors we don’t enforce with reasons.” She then explained, “that’s a problem if we are motivated to prove ourselves as practical.” Having established this as a major problem, she devoted her talk to explaining what implicit bias is and what to do about it.
Specifically, Washington discussed the underrepresentation of women in philosophy, pointing out the common misconception that being “a real genius in philosophy is a thing for people with beards.” She explained that this creates modern problems regarding the status of women in the field. For example, an admissions officer who reads entrance essays to a philosophy graduate program may have a certain degree of implicit bias while performing this task, often without even realizing it. Despite being “committed to doing this task fairly … due to the influence of an association between maleness and academic achievement, she chooses more male applicants.”
Additionally, Washington discussed a study in which participants had to press one button when an African American face and a “good” word appeared, and a different button when a Caucasian face and a “bad” word were displayed. Participants were scored on speed and accuracy. This first task yielded lower scores than when African American faces were paired with bad words and Caucasian faces with good words, showing that people hold an internal, unexplainable bias toward associating African Americans with negativity and Caucasians with positivity. Such a result, Washington said, is “where that implicit association score comes from.”
TEACHING TOOLS: Washington discussed methods to counteract our internalized biases.
Lastly, Washington discussed a study in which one Caucasian face and one African American face were displayed on separate screens of identical brightness, and participants were asked which of the two seemed darker. Despite the screens being equally bright, most participants selected the screen showing the African American face as darker. Even after hearing that the screens were the same brightness, participants still had trouble seeing them as equal.
After introducing these examples of implicit bias, Washington went on to discuss the importance of mindset when trying to solve these problems: “We shouldn’t think of implicit bias as a part of our psychology like the way a carburetor is a part of an engine. It’s not something that can be neatly taken out.”
Washington then introduced potential solutions to this issue, offering several strategies that apply to various types of implicit bias.
Washington’s first proposal was to completely remove names, or “triggers,” from applications, assignments or anything that requires unbiased evaluation, which she suggested would force the evaluator to assess each submission exclusively on its quality. This would remove markers of gender or race, “and therefore, [the evaluator’s] implicit association would not be able to be triggered.”
Another way to mitigate implicit bias, Washington proposed, is to “use implementation intentions.” She explained that an implementation intention is “an if-then plan that you can rehearse.” By this, she means that if someone sees something that may trigger a response influenced by implicit bias, they should force themselves to think of the practical behavior rather than the biased one. Washington believes that “practicing this intention will help you automatize the behavior that you’re interested in.”
Another strategy, Washington suggested, is goal priming: the use of repeated affirmations against a bias or the stereotype underlying it. As an example, she discussed grading women’s philosophy entrance essays: “There’s evidence to believe that [if] before you sit down to grade papers, you think ‘I’m egalitarian and women are just as good at philosophy as men,’ your bias will decrease over time.”
These skills, Washington claims, are crucially important for people who struggle with implicit bias. She believes that in order to overcome bias, people ought “to practice, to routinize it, to put these movements into what we might call muscle memory.”