2023.Jun.12
Report on Workshop by Dr. Arnaud Claes “Algorithmic Personas: Effects of the Anthropomorphization of Automated Media on User Reflexivity”
Eva LE RAY (Graduate Student Member of the B’AI Global Forum)
・Date: Tuesday, 23 May 2023, 4:00-5:30 pm (JST)
・Venue: On-site (B’AI Global Forum Office, Asano Campus, University of Tokyo)
・Language: English
・Guest Speaker: Arnaud Claes (JSPS Postdoctoral Researcher, Kansai University; Visiting Researcher of the Interfaculty Initiative in Information Studies, University of Tokyo)
・Moderator: Nozomi Ohtsuki (Research Assistant of the B’AI Global Forum)
On May 23, 2023, the B’AI Global Forum welcomed Dr. Arnaud Claes (JSPS Postdoctoral Researcher at Kansai University; Visiting Researcher of the Interfaculty Initiative in Information Studies at the University of Tokyo) to give an in-person workshop on “Algorithmic Personas: Effects of the Anthropomorphization of Automated Media on User Reflexivity.” This research meeting was a continuation of Dr. Claes’ previous presentation at the 1st BAIRAL Research Meeting for 2023 on “Experimenting with Speculative Interfaces: A Design-Based Approach to the Study of Algorithmic Practices,” held online on April 21, 2023.
Dr. Claes’ current work primarily focuses on Human-Computer Interaction (HCI) and recommendation algorithms: information-filtering systems that predict and suggest relevant items or content to users based on their preferences, their past behavior, or the behavior of similar users. As recommendation systems gain popularity across domains such as e-commerce platforms, streaming services, social media platforms, and news websites, Dr. Claes seeks to examine them through the lens of the social sciences, aiming to provide reflective guidelines and insights.
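The filtering principle described above can be illustrated with a minimal user-based collaborative-filtering sketch. The data and names below are purely hypothetical, invented for illustration; they do not come from the workshop or from any system Dr. Claes discussed.

```python
from math import sqrt

# Hypothetical ratings (user -> {item: rating}); illustrative data only.
ratings = {
    "alice": {"news": 5, "sports": 3, "arts": 1},
    "bob":   {"news": 4, "sports": 3, "arts": 1, "cinema": 2},
    "carol": {"news": 1, "sports": 1, "arts": 5, "cinema": 5},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other, their_ratings in ratings.items():
        if other == user or item not in their_ratings:
            continue  # skip the user themselves and non-raters of the item
        sim = cosine(ratings[user], their_ratings)
        num += sim * their_ratings[item]
        den += abs(sim)
    return num / den if den else 0.0

# Predict a rating for an item "alice" has not seen, leaning on the
# tastes of the users most similar to her.
print(round(predict("alice", "cinema"), 2))
```

Because alice's ratings resemble bob's more than carol's, the prediction lands closer to bob's score for the unseen item: this "behavior of similar users" weighting is the core of the approach.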
The meeting focused specifically on the anthropomorphization of recommendation systems used in public service media and its effects on users’ engagement with them. As a starting point for the discussion, Dr. Claes posited that perceiving anthropomorphization as the user’s failure to grasp the technical complexity of the system might be an inadequate perspective. Instead, he proposed that anthropomorphization can enhance the user’s understanding of automated media and foster deeper engagement with them. This perspective stemmed from the recognition that anthropomorphization, primarily defined as the attribution of human characteristics to non-human entities, operates as a “metaphorical projection” (Lakoff & Johnson, 2003): by mobilising their knowledge of human interaction, users can gain an understanding of the features and operation of automated media. Furthermore, for Dr. Claes there is no doubt that anthropomorphization also holds substantial cultural significance across regions, particularly in Japan, which boasts a rich heritage associated with animism and other beliefs in non-human entities.
To support this argument, Dr. Claes showcased an example of an automated media system which he co-designed with a public service media company in Belgium. This system, called “ALVEHO”, empowers users to actively participate in shaping their own preferences and customising their media experience. This is achieved by visualising and streamlining the system’s control parameters, resulting in improved controllability, greater user engagement, and more accurate recommendations. Alongside “ALVEHO”, Dr. Claes also presented alternative approaches to anthropomorphising systems, such as the creation of avatars. Overall, the purpose of anthropomorphization here is to foster a critical outlook and enable users to identify the flaws of the device, rather than simply to instil blind confidence in the system, as is common in computer science. In other words, the objective is to encourage a nuanced perspective rather than to seek unwavering trust.
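The general idea of exposing a recommendation system’s control parameters to the user can be sketched as follows. This is not ALVEHO’s actual design, which was not detailed at this level in the talk; the “diversity” slider and the scoring blend are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Controls:
    """A user-visible control parameter (hypothetical, not ALVEHO's)."""
    diversity: float = 0.5  # 0 = pure relevance, 1 = favour unfamiliar topics

def rerank(items, controls):
    """Re-score candidate (title, relevance, novelty) tuples as a
    user-weighted blend of relevance and novelty, best-first."""
    def score(item):
        _title, relevance, novelty = item
        return (1 - controls.diversity) * relevance + controls.diversity * novelty
    return sorted(items, key=score, reverse=True)

# Toy candidates: (title, predicted relevance, novelty to this user).
candidates = [("local news", 0.9, 0.1), ("foreign film", 0.4, 0.9)]

# Moving the slider visibly changes what comes out on top.
print(rerank(candidates, Controls(diversity=0.0))[0][0])  # relevance dominates
print(rerank(candidates, Controls(diversity=1.0))[0][0])  # novelty dominates
```

The point of such a visible, adjustable parameter is the one made in the talk: the user can see the system’s levers, experiment with them, and notice where its behaviour falls short, rather than treating it as an opaque oracle.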
The subsequent discussion among the participants predominantly centred on the utility of anthropomorphization in such systems and the potential biases that can emerge as a result. The participants emphasised the importance of addressing these biases and of ensuring that the system’s customisation features do not perpetuate exclusionary practices. Dr. Claes concurred with this concern and reiterated the need for media companies to assume greater responsibility in the design process, given the substantial implications at stake.
In conclusion, Dr. Claes’ presentation offered a new and insightful perspective on the interplay between anthropomorphization and technology. By shedding light on the potential benefits of anthropomorphizing recommendation systems, it contributes significantly to the exploration and enhancement of AI literacy and media education. Moreover, it encourages deeper reflection on system design, promoting a more thoughtful and conscientious approach to the development and implementation of automated technologies.