2024.Mar.22
Report on the 5th BAIRAL Research Meeting for 2023
“Rethinking trust in AI: Imaginaries, Habits, Ecologies”
Priya Mu (Research Assistant, B’AI Global Forum)
・Date: Thursday, January 18, 2024, 1:00-2:30 pm (JST)
・Venue: On-site & Online Hybrid
・Language: English
・Guest Speaker: Dr. Andrew Lapworth (UNSW Canberra)
・Moderator: Priya Mu (Research Assistant, B’AI Global Forum)
The 5th BAIRAL research meeting for 2023 was held on January 18, 2024. This time, we invited Dr. Andrew Lapworth, Senior Lecturer in Cultural Geography at the University of New South Wales (UNSW) Canberra, Australia, to speak on the theme of “Rethinking Trust in AI: Imaginaries, Habits, Ecologies.”
Originating in an interdisciplinary project funded by a startup grant at UNSW Canberra, his paper bridged the gap between social scientists and engineers in tackling AI-related challenges. Its primary focus was trust in contemporary society, especially in relation to AI-driven navigation applications such as Google Maps and Apple Maps. Collaboration with Dr. Tom Roberts (UNSW Canberra) and Dr. Richard Carter-White (Macquarie University) propelled the project towards new conceptual and empirical insights into trust, insights that prioritize the experiential and dynamic aspects of human engagement with AI technologies across various cultural and institutional contexts. The study delved into the social and cultural facets of trust in AI, underscoring the critical role of trust in technology in a rapidly evolving AI landscape where reliance on intelligent machines has increasingly become a societal norm.
The paper shed light on the significant consequences of distrust in crucial technologies, which may intensify societal divisions. It pointed out the “black box effect,” whereby the intricate technological processes behind AI remain hidden from the general public, complicating any evaluation of its trustworthiness. Concerns were raised about AI’s capacity to learn and adapt, which calls the reliability of such evolving technology into question. The presentation examined the challenges AI poses to traditional understandings of trust in technology, highlighting a shift in how trust is experienced and managed. Drawing on qualitative studies with navigation-app users, the research illustrated the complex nature of trust in AI and advocated a deeper understanding that goes beyond technical fixes.
The talk then explored the evolving notion of trust in relation to technology and AI, illustrating how AI blurs the distinction between the conventional model of reliability applied to inanimate objects and interpersonal trust. The concept of “ontological trust,” inspired by new materialism, was introduced as a broader alternative to traditional approaches. The discussion extended into empirical research, drawing on interviews with AI users to explore the multifaceted dimensions of trust, including the impact of media and cultural narratives, the broader socio-technical contexts of trustworthiness, and the significance of embodied and affective experiences in forming trust. The debate over machines’ trustworthiness and AI’s anthropomorphic tendencies was also addressed. Throughout, the presentation emphasized the complexity of trust in AI and technology, calling for nuanced understandings that surpass conventional human-to-human trust models.
The presentation further discussed trust in technology, particularly AI, taking Amazon’s Alexa as a case study. It highlighted how AI systems like Alexa, which feature anthropomorphic traits, may encourage users to over-trust them, and it critiqued the commodification of trust in AI discourse, where trust is depicted as an economic asset. Dr. Lapworth advocated a shift towards an ontological perspective on trust, emphasizing its relational and collective aspects and incorporating both human and non-human actors within socio-technical frameworks. This perspective broadens the understanding of the roles AI technologies play in societal and organizational contexts, affecting human identities and relationships. Trust was reconceptualized as a phenomenon shaped by material and affective forces, challenging traditional views that frame trust as a mere psychological trait of individuals. Instead, trust was presented as an embodied and affective capacity, influenced by designers’ attempts to shape user-technology relationships through tone, packaging, and aesthetics. This approach promotes a situated, experiential, and intertwined comprehension of trust in human-technology interactions, prompting a reevaluation of conventional discussions on the topic.
The qualitative study explored trust in AI-enabled navigation apps among Australian users, revealing that trust is shaped by cultural narratives, concerns over data privacy, corporate motives, and the technology’s reputation rather than by its actual capabilities. It underscored the importance of habits and routines in trust formation, highlighting the context-dependent nature of trust within complex socio-technical ecosystems. This complexity was unpacked through three key themes that emerged from the interviews. First, participants’ relationships with these technologies were profoundly shaped by their imaginaries, which significantly affected their perceptions of trustworthiness. Second, the ways participants negotiated trust were closely tied to the broader socio-technical ecosystems in which AI is embedded. Third, unconscious habits could facilitate or hinder trust depending on the context, environment, and scenario. The qualitative and intensive nature of this research design, however, cautions against broad generalizations based on these findings.
Finally, Dr. Lapworth acknowledged the need for expanded research into trust in the context of AI, emphasizing the value of comprehensive, longitudinal studies that observe the everyday use of AI to build a more dynamic understanding of trust. He also proposed conducting “ride-alongs” with individuals using AI to investigate trust under more dynamic conditions. The conclusion highlighted the intricate and sometimes contradictory aspects of trust in AI, questioning traditional, human-centric notions of trust and advocating a more detailed exploration of human-technology relations in current discourses.