2023.Jun.05
Report on the 1st BAIRAL Research Meeting for 2023 “Experimenting with Speculative Interfaces: A Design-Based Approach to the Study of Algorithmic Practices”
Kayoung KIM (Project Researcher of the B’AI Global Forum)
・Date: Friday, 21 April 2023, 3:00-4:30 pm (JST)
・Venue: Zoom Meeting (online)
・Language: English
・Guest Speaker: Arnaud Claes (JSPS Postdoctoral Researcher, Kansai University; Visiting Researcher of the Interfaculty Initiative in Information Studies, University of Tokyo)
・Moderator: Nozomi Ohtsuki (Research Assistant of the B’AI Global Forum)
On April 21, 2023, the first BAIRAL research meeting for 2023 was held online. As a guest speaker, we invited Arnaud Claes, a JSPS Postdoctoral Researcher at Kansai University and a Visiting Researcher at the B’AI Global Forum at the University of Tokyo. He delivered a presentation on the theme “Experimenting with Speculative Interfaces: A Design-Based Approach to the Study of Algorithmic Practices.”
Claes, who has conducted research in the fields of Human-Computer Interaction (HCI) and media education, focuses on the impact of “recommender systems” as technologies that significantly influence people’s access to information and media experiences. Recommendation algorithms, which predict and present content or products that individuals are likely to prefer based on their past online behavior, have been widely adopted by digital platforms such as Amazon, Netflix, Facebook, and Instagram, both supporting and constraining users’ choices. Claes noted that not only commercial enterprises but also a growing number of public service media are considering implementing recommendation algorithms, and emphasized the need for the social sciences and humanities to provide design guidelines when such technologies are adopted by organizations with significant social responsibilities. As an example of such an endeavor, he introduced a news platform called “ALVEHO,” which he developed in collaboration with a public media company in Belgium.
The most significant feature that sets this platform apart from others that also utilize recommendation algorithms is its controllability: users can control and modify the behavior of the algorithm. In ALVEHO, users can indicate their interest in specific news categories by selecting their own profile, and they can exclude categories they are less interested in from their news feed with a single click. Moreover, users can freely adjust the level of “similarity” (the extent to which recommended articles resemble those the user has previously viewed) as well as the level of “subjectivity” (the degree of subjectivity in the tone of recommended articles), each on a scale from -10 to +10, thereby modifying which articles the algorithm recommends.
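To make the idea of such sliders concrete, the following is a minimal, hypothetical sketch of a user-controllable re-ranker. ALVEHO’s actual implementation has not been published, so the `Article` fields, the scoring function, and all numeric values below are illustrative assumptions, not the platform’s real design; the sketch only shows how a -10 to +10 slider on each dimension could steer a ranking.

```python
# Hypothetical sketch of user-controllable re-ranking, inspired by the
# similarity/subjectivity sliders described above. All fields, weights,
# and scores are illustrative; ALVEHO's real internals are not public.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    similarity: float    # 0.0-1.0: resemblance to the user's reading history
    subjectivity: float  # 0.0-1.0: subjectivity of the article's tone

def rank(articles, similarity_slider, subjectivity_slider):
    """Re-rank articles given two user sliders in [-10, +10].

    A positive slider boosts articles scoring high on that dimension;
    a negative slider boosts articles scoring low on it; 0 ignores it.
    """
    def score(a):
        # Center each 0-1 score at 0.5 so a negative slider inverts the preference.
        return (similarity_slider * (a.similarity - 0.5)
                + subjectivity_slider * (a.subjectivity - 0.5))
    return sorted(articles, key=score, reverse=True)

feed = [
    Article("Familiar topic, neutral tone", similarity=0.9, subjectivity=0.2),
    Article("New topic, opinionated column", similarity=0.1, subjectivity=0.9),
    Article("Familiar topic, opinionated", similarity=0.8, subjectivity=0.8),
]

# Similarity slider at -10: surface articles unlike the reading history.
diverse_first = rank(feed, similarity_slider=-10, subjectivity_slider=0)
print(diverse_first[0].title)  # "New topic, opinionated column"
```

Under this toy scoring rule, dragging the similarity slider toward -10 pushes unfamiliar content to the top of the feed, while +10 reinforces the user’s existing habits, which is the kind of explicit trade-off the platform hands back to the user.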
Claes presented an experiment conducted using ALVEHO to discuss the impact of granting users control over recommendation algorithms on their media practices. The experiment involved 23 students majoring in Communication Science who used ALVEHO for five weeks (a minimum of five days per week, with a minimum of 20 minutes per day). It employed a combination of quantitative and qualitative research methods, analyzing the participants’ eye-tracking and log data on the platform and conducting individual interviews with them after the five weeks to gather insights into their media literacy and experiences with ALVEHO.
Surprisingly, the results revealed that most of the participants did not utilize the algorithm control features. Claes analyzed this finding, attributing it to cognitive overload resulting from information overload and participants’ habitual media consumption patterns. He emphasized the importance of designing user-friendly interfaces and enhancing media literacy through education.
During the Q&A session following the presentation, there were numerous comments and questions regarding the methodology of the experiment, participant demographics, and the design of ALVEHO. Of particular interest was the complex user behavior revealed through interviews with the participants. According to Claes, when using platforms like Facebook or Instagram, participants consciously viewed content they had no interest in to manipulate the recommendations generated by algorithms. This implies that they had a fundamental understanding of how algorithms operate and made deliberate efforts to shape their browsing history to control the recommendations. However, when presented with a platform designed to enable such control with a single button, they did not use that functionality. While limitations inherent to the experimental environment may partly explain this behavior, the crucial point is that users are aware of how algorithms narrow their choices and recognize the need for improvement.
In conclusion, Claes highlighted that while automation technology and recommendation systems are often perceived as negatively impacting human autonomy, they can significantly contribute to enhancing autonomy by providing diverse information and new insights. He proposed that rather than excluding recommendation systems altogether, a better solution is to allow users to determine the extent to which algorithms make decisions on their behalf. Considering that technologies such as AI have permeated our daily lives, this perspective is both practical and constructive. As Claes emphasized, implementing such a solution requires a design-based approach and insights from the humanities and social sciences. In that sense, it is crucial for B’AI to consider these aspects and address them as part of its future challenges.