2026.Feb.10
Report on seminar series “Cross-Cultural Approaches to Desirable AI 2nd Edition”
Sunjin Oh (Project Assistant Professor of the B’AI Global Forum)
・Event Period: From October 22, 2025 to January 22, 2026
・Venue: Zoom
・Language: English
The second edition of the seminar series Cross-Cultural Approaches to Desirable AI concluded on 22 January 2026, marking a new milestone in the collaborative partnership among the University of Tokyo, the University of Bonn, the University of Cambridge, and the University of Europe for Applied Sciences.
In the previous series, the planning and organisation of each session were conducted separately by teams at the individual universities. In the present series, however, the organisational structure was substantially transformed. Building on the close network established in the preceding year, the organisers shifted from institution-based coordination to theme-based collaborative teams that operated across university boundaries. From the earliest stages of theme selection, intensive discussions were held regarding the relevance and significance of each topic. Each session, structured around the selected themes, was co-chaired by two scholars from different institutions chosen on the basis of expertise rather than affiliation. Speakers were invited with careful consideration of the overall composition of each session, and deliberate coordination ensured clarity regarding both the scope of the session and the position of each individual presentation within it. As a result, presenters frequently discovered unexpected connections among their perspectives during panel discussions, while audiences were able to deepen and extend their understanding by relating the individual contributions to one another. For example, a student team from the University of Europe for Applied Sciences, inspired by the “Education” session, developed the technological design project El faro and presented its concept in the final session of the series.
A defining feature of this year’s programme was not merely the introduction of diverse non-Western philosophical traditions, but the achievement of qualitative comparative reflection across them. In the session “Be(yond) Human,” Ubuntu ethics and Buddhist philosophy were presented as compelling alternatives to the individualistic assumptions that often underpin Western AI discourse. Both traditions understood AI not as an autonomous species, but as a mirror that reflects and amplifies human habits and desires, emphasising relationality (“I am because we are”) and the importance of cultivating a form of human literacy grounded in fallibility and embodiment. This dialogue articulated a shared horizon in which the development of AI shifts from a narrow focus on efficiency toward care, dignity, and the transmission of ancestral wisdom.
Further expanding the critical scope of the series were discussions that closely intertwined the themes of labour, work, and identity. Reconsidering Hannah Arendt’s distinction between labour and work and their relation to technology, one line of argument demonstrated how AI renders visible a central paradox of modern society: liberation from labour coincides with the loss of labour as a foundation of meaning. AI thus appeared not as a simple technological rupture, but as a moment that clarifies the limits of productivity-centred thought and reopens the possibility of plural, non-productivist forms of human significance. In addition, a Marxist-feminist critique of the concept of the data prosumer revealed how platform economies depend upon invisibilised reproductive labour distributed along gendered, racialised, and colonial lines, thereby connecting debates on desirable AI to broader questions of value, care, and global justice. This perspective extended into the domain of digital identity, where processes of datafication, self-presentation, and AI-generated personae illustrated how the very understanding of subjectivity is being transformed as it is mediated through technological infrastructures.
Alongside these theoretical reflections, the series also addressed direct interventions into the mechanisms and design of AI models. Research on intersectional bias benchmarking in Japanese large language models demonstrated that bias arises not from single attributes alone but from the interaction between social attributes and contextual conditions, thereby exposing the limitations of Western-centric evaluation frameworks. Parallel work on image-generation models showed how visual outputs may reproduce social stereotypes in subtle and often invisible ways, linking bias mitigation to broader concerns of alignment, safety, and long-term risks to humanity. Furthermore, analysis of synthetic persona generation revealed that contemporary training and alignment processes embed normatively sanitised and idealised images of the human, positioning persona-based evaluation as a diagnostic tool for uncovering the cultural values encoded within generative AI systems. Collectively, these studies call for bias to be addressed not merely as a technical malfunction but as a social and epistemic challenge that must be consciously taken up.
Another important issue that newly emerged across the series concerned digital wellbeing and privacy. Discussions moved beyond technical remedies to engage the socio-political structures that shape digital life itself. Speakers highlighted the growing tendency for wellbeing to be framed as an individual responsibility under conditions of constant connectivity, while historical perspectives from the Middle East and Europe demonstrated that privacy has long functioned not as a universal right but as a social privilege. From this standpoint, digital health was repositioned not as an added value available only to a few, but as a foundational condition indispensable to any genuinely desirable future.
Taken together, the present seminar series advanced a pluralistic and inclusive vision of AI grounded in a guiding commitment inherited from the previous programme: moving beyond narrow notions of technical optimisation toward a comprehensive understanding of technology as a relational field deeply embedded in social, cultural, and environmental contexts. The pursuit of desirable AI, in this sense, constitutes a task encompassing all humanity, inseparable from ethical imagination, social justice, and the conditions of human existence. At the same time, this task resides in the most immediate dimensions of everyday life—often appearing trivial or insignificant from the perspective of our AI agents—yet acquiring meaning precisely through forms of cultural diversity that can never be reduced to a single universal frame.