About hidden biases and some possible ways to reduce their effects on professional decisions.
Mahzarin Banaji is Richard Clark Cabot Professor of Social Ethics in the Department of Psychology at Harvard University. On 6 October 2015, she gave a talk at the National Institute of Advanced Studies (NIAS), Bengaluru, based on her book Blindspot: Hidden Biases of Good People, co-authored with Anthony Greenwald. Blindspot discusses ‘hidden biases’ that all of us carry as a result of our social experiences — biases related to gender, ethnicity, religion, social class, physical traits, and so on. After the talk, Hari Sridhar emailed Mahzarin a few questions about the implications of her findings for the way we run academia.
Reposted from INNGE blog
In your talk at NIAS, you said that interviews are the wrong way to compare candidates when hiring for a job (or a PhD programme). Can you tell us why you think so?
No doubt, interviews provide unique and even valuable information. But they also provide a great deal of irrelevant and even biasing information. Without knowing which is which, the interviewer is unable to separate information that actually predicts success from information that is irrelevant or even pernicious. For example, a preference for those who are like us or share our beliefs is a powerful bias. In addition, stereotypes based on physical appearance, the slickness of a presentation, and categories such as gender, ethnicity, community, caste, religion, social class, age, and language are given weight even when they are not good predictors of performance.
Do you suggest that we do away with face-to-face interviews altogether? Or, are there ways in which interviews can be modified to reduce/eliminate bias?
There may well be ways of getting rid of interviews altogether, but in many instances a candidate’s ability to speak and communicate is part of the hiring decision. In that case, how might we get such information? One possibility is that a face-to-face interview is conducted by a non-decision-maker whose job is to pose a specific set of questions and attend only to the skill of communication. That information may then be added to the other data contained in the resume. It may even be possible to have the decision-maker never meet the candidate at all, using the communication score as input along with the resume. This is not fool-proof, of course, because the person testing communication skills will also be influenced by physical beauty and the like. But if they are trained to attend to only one feature (i.e., communication skill), it may be possible to get an assessment that is lower in bias.
I am not saying that resumes contain all the relevant information, or that they alone should be trusted. In fact, I would favour doing away with resumes submitted by candidates and replacing them with a form that all applicants complete. That way, the university or company gets to decide which variables are important for selection, rather than leaving that assessment to the candidates. If one’s family has been in the ‘business’ of crafting and submitting resumes for a few generations, such a person will naturally have tacit knowledge about how to produce an effective resume. If you want to create a level playing field, all candidates should have to provide only the information required on a form. That way I don’t need to know your marital status or what you do in your spare time, items that often appear on a resume. Application forms should also be designed to elicit more precise information about what exactly a person did to acquire the experience they claim to have. For example, a candidate notes on her resume:
- Responsible for development & maintenance of an integrated digital strategy with objectives of Lead generation, Service Visibility, Brand Building and Thought Leadership. Oversee campaign management, Adwords, SEM and content strategy.
- Identified market opportunities, analysed customer needs, and turned these into product requirements.
How is a decision-maker to know from this what exactly the person did, how substantial their role was, and how effective they were? We need to move to a process in which the resume is replaced by an application form that seeks in-depth information about the actual work done, and in which we refuse to be satisfied with such general and even vacuous statements.
Do you think these blindspots also affect other aspects of academia, e.g. peer-review of papers and grant proposals?
Whenever biasing information is present, and especially if it is not acknowledged as being biasing, we must remain open to the possibility that it may play a role in our decision making.
It is often difficult to keep the peer review of papers and grant proposals fully blind, but to the extent it can be done, blinding is a reasonable strategy. Even the institution to which an author belongs has been shown to change the favourability of peer review.
Do you think academia is doing enough to tackle bias and prejudice?
Those who belong to institutions of higher learning tend to think of themselves as unmotivated by concerns other than the pursuit of truth. They are indeed people who care less about making money: many academics could be employed in ways that would pay substantially more. For these reasons, perhaps, and because we lack good measures or regular analysis of where our decisions were right or wrong, members of academia can hold on to the myth that they are not biased. In my experience, although universities and research institutions have come to see that their own decision making needs to be examined, it is those in the private sector who are most aware, because they can see, with immediate impact, how biases based on group membership can hurt their bottom line.
Psychologists, as a community, are probably much more aware of these issues. Does that also make them less prone to these biases?
I don’t think so. A study by Tony Greenwald and Eric Schuh showed that citation bias (in this case, the extent to which authors cite members of their own ethnic/religious community) exists even in a secular country like the United States. And such bias was greater among those who studied the topic of prejudice! That may be because those who study bias assume they are outside its fray, and so remain unaware of their own.
How should an academician deal with his or her biases, so that they don’t affect the professional decisions he or she makes?
Awareness is the first step. Identifying the points in the decision-making stream where bias can operate is a worthy endeavour. Constant vigilance is a must. And finding ways to hold public discourse about the solutions that will work for a given community is the hard, final step.