Why are robots subject to racial and gender bias?
Racism is a major problem in our society, and it is an issue from which the HRI community cannot shy away. Several studies have shown that people transfer their racial biases onto robots. In this episode we talk about the difficult topics of racism and sexism. My guests are Kumar Yogeeswaran, Friederike Eyssel and Megan Strait. They all work on racism among humans and towards robots. Besides identifying these biases, we also talk about how, and whether, robots might be able to help reduce them.
Transcript
The transcript of the episode is available as a PDF. You can also follow the episode with subtitles through Descript.
HRI-Podcast-Episode-005-Robots-And-Racism-Transcript
ISSN 2703-4054
Addendum
In an earlier version of this post the work of Megan Strait on racism among humans and towards robots was described as “extensive”. Upon Megan’s request the qualifier “extensive” was removed.
Megan Strait would also like to note that:
I do not endorse the idea of using robots to reduce bias, as I do not find the premise to respect existing understanding and literature. If racism were readily solvable via intergroup exposure, there would not be such movement on “AI ethics” as is readily apparent in the discourse of mainstream media. With respect to combatting racism, my perspective is that robots have potential value in the role they could play in moderating of social dynamics (see, for example, Campos, Martinho, & Paiva 2018, Hoffman, Zuckerman, Hirschberger, & Shani-Sherman 2015, and Martelaro, Jung, & Hinds 2015). Applications in this manner have particular potential to address social inequities (e.g., the placement of responsibility largely on people of color to combat manifestations of racism). But that does not specifically serve toward attenuating individual and institutional bias.