REPORTS

The 4th BAIRAL Research Meeting for 2021 Report on “From When Are Algorithmic Decisions Assumed as Discriminatory?: A Philosophical Criticism Using Normative Theory”

Akira Tanaka (2021 Research Assistant of the B’AI Global Forum)

・Date: Saturday, October 9, 2021 15:00-16:30 (JST)
・Venue: Zoom Meeting (online)
・Language: Japanese
・Guest Speaker: Haruka Maeda (PhD Student, Graduate School of Interdisciplinary Information Studies, The University of Tokyo)
・Moderator: Akira Tanaka

On Saturday, October 9, 2021, we held the fourth BAIRAL research meeting of 2021. BAIRAL is a series organized by the research assistants of the B’AI Global Forum, which invites guest speakers engaged in research and practice connecting AI and society. This time we invited Haruka Maeda, who studies AI and discrimination, to introduce a philosophical perspective on the relationship between algorithms and the discrimination they cause.

According to Ms. Maeda, discrimination by AI arises not only from technological factors but also from ethical ones. However, it is not obvious which algorithmic decisions count as unfair discrimination: just as humans are shaped by external influences, algorithms are shaped by their data and programs. Furthermore, there is the “black box problem,” in which no one knows on what logic an AI bases its decisions, so the discrimination cannot simply be attributed to the engineer. Ms. Maeda therefore drew on ideas that do not depend on an intending subject, such as those of Michel Foucault and Bruno Latour, and adopted Hellman’s theory, which does not focus on the intention of the actor. On this view, an algorithmic decision can be regarded as discrimination when its expression offends human dignity (the expressive condition), when that expression is conventional (the conventional condition), and when it subjugates someone (the hierarchical condition). She illustrated this with several cases. For example, US courts use the recidivism-prediction component of COMPAS, which produces decisions unfair to Black people by judging from attributes rather than from the crime itself; in this case, she explained, it is the use of such predictions for law-enforcement purposes that is problematic.

Building on the point that artificial intelligence can discriminate just as humans do, she also presented the results of a previous study on how people in Japan perceive AI and discrimination. For example, discrimination based on misrecognition (e.g., recognizing a Black person as an animal) was perceived more negatively than discrimination based on non-recognition (e.g., failing to recognize only Black faces). Moreover, many respondents in their fifties and younger prioritized the benefits of artificial intelligence over its discriminatory effects. In addition, about one fifth of respondents answered “I do not know,” although such answers were less common among those whose “familiarity score” with artificial intelligence was medium or high.

The presentation was followed by a lively discussion with the audience. Among the points raised were that the human-centred legal system and ethics may not have caught up with the idea that objects are actors just like people, and the counterview that algorithms themselves do not discriminate, but only the people who create them do. In sum, the workshop was an excellent opportunity to raise the issue that even our conceptual understanding of discrimination needs reexamination in today’s society, where algorithms increasingly mediate everything.