REPORTS

Report on the 9th BAIRAL Research Meeting for 2022
“Co-creation of Agency: Towards Ethical Design for Social Robots”

Nozomi Ohtsuki (Research Assistant, B’AI Global Forum)

・Date: Friday, February 3, 2023, 16:00-17:30 (JST)
・Venue: Zoom Meeting
・Language: Japanese
・Guest Speaker: Takuya Mizukami (Postdoctoral Researcher, RIKEN Center for Advanced Intelligence Project (AIP); Visiting Researcher, Interfaculty Initiative in Information Studies, The University of Tokyo)
・Moderator: Nozomi Ohtsuki (Research Assistant, B’AI Global Forum)

(Click here for details on the event)

The 9th BAIRAL was held online on February 3, 2023. At this session, Takuya Mizukami, a Postdoctoral Researcher at RIKEN AIP and a Visiting Researcher at the Interfaculty Initiative in Information Studies (III), The University of Tokyo, was invited to speak on the theme of “Co-creation of Agency: Towards Ethical Design for Social Robots.”

As social robots become more prevalent in society, they have been put to a wide variety of uses. However, the excessive emotional influence that their speech and behaviour can exert on users has been identified as an ethical issue. In Mizukami’s presentation, the philosophical problem of understanding and designing the artificially created “agency” of social robots was critically examined, drawing on debates surrounding agency in the philosophy of technology and AI ethics.

Mizukami first defined social robots as technical artefacts that exhibit autonomous behaviour in an engineering sense and are developed to fulfil some kind of social role. On this definition, the category also encompasses so-called bots: any robot or AI given a social role counts as a social robot. Additionally, it was pointed out that social robots become “social” not because of any inherent function but because we humans take their appearance, speech, and gestures as cues and regard them as social beings like ourselves (the Media Equation).

Next, using chatbots as an example, it was explained that statements made by social robots can cause ethical problems and that responsibility for these problems is unclear. The moral influence of a social robot cannot be reduced solely to the designers’ original design actions or intentions, and it is difficult to control. Mizukami gave several reasons for this: the shift in dialogue-system architecture from dictionary-based to generation-based, which enables responses the designers never anticipated; the central role of users’ imagination and cognition in making social robots social beings; and users’ varying cultural backgrounds. Because moral influence is constructed in a networked manner, how to conceive of designers’ ethical responsibility has become an open question. From the designers’ perspective, it is unclear why social robots have moral significance and how much moral responsibility designers should bear, which may make them reluctant to develop these technologies. Concerned about this situation, Mizukami is conducting research in the hope of providing explanations and proposals for understanding social robot behaviour from the perspective of the philosophy of technology.

In the philosophy of technology, there is a debate over whether technological artefacts can be understood as “moral agents.” Recent work in the field employs an approach that focuses not on the actual capabilities of a technology but on the role the technology plays in its relationship with humans. Mizukami argues, however, that while this relational approach is important as a general direction for analysis, it is difficult to apply to the interpretation of moral agency because of the issue of responsibility. Instead, he proposes a “prop theory” that positions social robots as props generating fictional psychology in their relation to users. This makes it possible to examine the responsibility surrounding social robots separately from human agency, which remains the basis for attributing moral responsibility.

Furthermore, the ethics of social robots has been understood as a subfield of the philosophy of technology and technological ethics and has been considered within the frameworks of ethical assessment of technology, policy, and design guidelines. Mizukami proposes, however, that the evaluation of social robots should also take into account the play mediated through them. Because social robots have both a technological and a fictional aspect, ethical research should draw on both the practices of technology assessment and the practices of evaluating fictional works.

The discussion addressed the switch between the fictional world and the real world when someone is hurt during play with a social robot: who makes that decision, and whether the judgment risks being arbitrary. Mizukami argued that a distinction should be drawn between harm occurring within the fictional world and harm occurring in reality, and that when problems arise, the fiction should be switched off and the matter dealt with in the real world. The criteria for making this switch, however, remain a challenge for future work.

The balance between human-centred and post-humanist approaches in robot ethics was also discussed. In response to the question of whether technical/quantitative and humanities/qualitative approaches can coexist, it was explained that such coexistence could be pursued as a separate project, and that trial and error through various evaluations and methods, such as rapid technology development and workshops, is essential. Participants expressed the view that both approaches aim at better social robots, so their directions align and can reinforce each other, leading to a lively discussion on this interdisciplinary topic.