{"id":4127,"date":"2026-05-04T01:16:48","date_gmt":"2026-05-03T16:16:48","guid":{"rendered":"https:\/\/baiforum.jp\/en\/?p=4127"},"modified":"2026-05-04T01:17:04","modified_gmt":"2026-05-03T16:17:04","slug":"re148","status":"publish","type":"post","link":"https:\/\/baiforum.jp\/en\/report\/re148\/","title":{"rendered":"Report on LCFI-B&#8217;AI Joint Workshop \u201cPro-Justice: Cross-Cultural Approaches to AI Ethics\u201d"},"content":{"rendered":"<p>On March 10 and 11, 2026, the B&#8217;AI Global Forum at the University of Tokyo and the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge held a two-day joint workshop titled &#8220;Pro-Justice: Cross-Cultural Approaches to AI Ethics.&#8221; The workshop program comprised five sessions organized around the themes of desirable AI, digital immortality, AI and disability, risk and regulation, and technology, society, and utopia. In addition to members of both institutions, the workshop brought together AI ELSI researchers, artists, and practitioners based in Japan, Germany, Brazil, Poland, and South Korea.<br \/>\nIn opening remarks, B&#8217;AI Global Forum Director Ai Hisano spoke to the importance of adopting a broad perspective that takes seriously both AI&#8217;s potential to advance social justice and its risks of deepening social inequalities and bias, and reflected on the significance of the workshop as a space for examining AI technology from ethical, legal, and social angles.<\/p>\n<h4>Session 1: Roundtable on Desirability, Pro-Justice, and Alienation<\/h4>\n<p>Chair: Ai Hisano (B&#8217;AI Global Forum, University of Tokyo)<br \/>\nPanelists: Eleanor Drage (LCFI, Cambridge), Jir\u00e9 Emine G\u00f6zen (University of Europe for Applied Sciences), Sunjin Oh (B&#8217;AI Global Forum, University of Tokyo), Yuko Itatsu (B&#8217;AI Global Forum, University of Tokyo)<\/p>\n<p>The session opened with an introduction to the trajectory of the &#8220;Desirable AI&#8221; project, which 
began in 2024. Jir\u00e9 Emine G\u00f6zen traced its origins to a 2023 workshop held at the German Institute for Japanese Studies (DIJ) in Tokyo, from which grew an international online lecture series involving the Leverhulme Centre for the Future of Intelligence (Cambridge), the B&#8217;AI Global Forum (University of Tokyo), the Center for Science and Thought (University of Bonn), and the University of Europe for Applied Sciences. The project has run more than twenty sessions between October 2024 and January 2026, cultivating a multifaceted perspective on AI through themes ranging from spiritual traditions to labor, disability, digital immortality, law, regulation, and ontology and epistemology.<br \/>\nEleanor Drage drew attention to the ambiguity of the word &#8220;desirable,&#8221; raising the question of whose desires and wishes are actually being fulfilled in AI development. Drawing on a contemporary artwork that reconstructed Leonardo da Vinci&#8217;s Vitruvian Man, she argued that truly desirable AI is not about datafying bodies to realize some idealized, aesthetically perfect human form, but about enabling people to live well together on this planet as finite beings.<br \/>\nYuko Itatsu reflected on how the project had marked an important turning point for the B&#8217;AI Global Forum, emphasizing how the international dialogue had allowed it to break out of locally siloed conversations. 
She also noted that the project had received the &#8220;AI-ELSI Award (Perspective Category)&#8221; from the Ethics Committee of the Japanese Society for Artificial Intelligence (JSAI), pointing to a growing awareness within Japan&#8217;s AI community of the importance of engaging with social implications beyond purely technical perspectives.<br \/>\nSunjin Oh drew on Pierre Bourdieu&#8217;s sociology of taste and Mich\u00e8le Lamont&#8217;s critique of it, questioning Lamont&#8217;s tendency to lump together under the heading of &#8220;ethics&#8221; both judgments that differentiate the self from others through assessments of moral character (such as &#8220;being a bad person&#8221;) and the willingness to accept those who differ from us. Oh argued that this conflation has become a dominant lens in the processes through which AI ethics has been established as a field, producing a situation in which diverse normative challenges surrounding AI are all processed as &#8220;ethics problems.&#8221; She called for a careful separation of the domains that should properly be treated as economic or political questions.<br \/>\nIn the Q&amp;A, Tomasz Hollanek raised the question of whether Japan&#8217;s distinctive design history \u2014 characterized by a different understanding of simplicity from that found in Europe or America \u2014 might serve as a resource for resisting the homogenization of AI design. Maya Indira Ganesh asked how humanities and social science researchers connect with industry and government, and Jeanette Hofmann cited the role of civil society organizations in Europe&#8217;s AI ethics debates and asked whether similar partnerships might be possible in Japan. 
Responses acknowledged that such spaces for collaboration are not yet sufficiently formed in Japan, while noting signs of emerging dialogue.<\/p>\n<h4>Session 2: Cross-Cultural Approaches to Digital (Im)mortality \u2014 Research Insights and Film Screening<\/h4>\n<p>Chair: Jir\u00e9 Emine G\u00f6zen (University of Europe for Applied Sciences)<br \/>\nSpeaker: Katarzyna Nowaczyk-Basinska (LCFI, Cambridge)<br \/>\nDiscussants: Chihyung Jeon (KAIST), Akiko Orita (Kanto Gakuin University), Grant Jun Otsuki (University of Tokyo), Justyna Olko (University of Warsaw)<\/p>\n<p>Session 2 took digital immortality as its central theme. Katarzyna Nowaczyk-Basinska presented findings from her international collaborative research project at Cambridge, &#8220;Imaginaries of Immortality in the Age of AI: An Intercultural Analysis.&#8221; The project conducted twelve research activities across Poland, India, and China \u2014 three expert workshops and nine focus groups \u2014 gathering the voices of approximately one hundred participants.<br \/>\nIn Poland, strong concern was expressed about the commercialization of digital immortality, and almost all expert participants refused to act as consultants for commercial companies. In India, the concept of the &#8220;business of dignity&#8221; was raised as a counter to &#8220;death capitalism.&#8221; In China, expert participants notably observed that the digital post-mortem services industry is growing at a rapid pace and may now be the largest in the world.<br \/>\nThe session also included a screening of the documentary Eternal You (directed by Hans Block and Moritz Riesewieck). 
In the discussion that followed, centered on responses to the film, Justyna Olko cited examples such as the D\u00eda de los Muertos in Mexican Indigenous culture and the itako mediumship rituals in Japan, arguing that communion with the dead is a fundamental human desire, historically managed and protected by ritual specialists, communities, and coherent worldviews and belief systems. According to Olko, AI-based &#8220;digital immortality&#8221; tools tap into the same desire but lack such protective frameworks, and thus risk isolating bereaved individuals from their social support networks. Grant Jun Otsuki raised the disturbing possibility of posthumous avatars being appropriated by governments for political purposes during wartime. Chihyung Jeon posed the question of how collective practices of grief mediated by public broadcasting differ from commercially individualized services. Akiko Orita suggested that Buddhist ritual milestones such as the forty-ninth-day memorial and annual death anniversaries might serve as cultural reference points for designing responsible &#8220;endings&#8221; to technology-dependent grief processes.<br \/>\nA recurring concern throughout the discussion was the risk of technology isolating the experience of grief. 
There was also animated debate around the design of &#8220;off-ramps&#8221; for service use (how to conclude engagement), the possibility and political risks of positioning digital post-mortem services as public goods, and vigilance against &#8220;enshittification&#8221; \u2014 Cory Doctorow&#8217;s term for the process by which commercial platforms degrade as they shift from serving users&#8217; interests to serving their own.<\/p>\n<h4>Session 3: AI, Health, and Disability<\/h4>\n<p>Chair: Aisha Sobey (LCFI, Cambridge)<br \/>\nPanelists: Asuka Ando (University of Tokyo), Joseph Austerweil (Henkaku Center, Chiba Institute of Technology)<\/p>\n<p>The second day opened with a session on AI and disability, centered on lived experience as both a methodological and ethical resource. Chair Aisha Sobey framed the session with a critique of how AI image generation systems reproduce narrow and stereotyped visual representations of disability, and introduced the &#8220;social model&#8221; framework, which locates disabling barriers not in individual bodies but in the social and built environment.<br \/>\nAsuka Ando began from her own experience as a CODA (Child of Deaf Adults), tracing the history of communication technology through her family: from the pokeberu (pager) of her childhood, through fax machines and mobile phones, to LINE. Her deaf parents&#8217; first language is Japanese Sign Language (JSL), and she had always noticed the subtle unnaturalness of their written Japanese; she described how that experience led to her interest in JSL technology research. 
JSL recognition technology \u2014 which converts sign language into text or speech \u2014 continues to lag far behind comparable technologies in other languages, impeded by privacy concerns around facial expression data, the inefficiency of fragmented data collection projects across Japan, and the complexity of JSL&#8217;s regional variation and mouthing.<br \/>\nAndo introduced the &#8220;JSL Collaboration Project&#8221; she has developed to address these challenges. The project works directly with deaf linguists to build datasets, conducts consent processes with native deaf staff using sign language, and has deaf linguists perform annotation \u2014 embodying a &#8220;by deaf people, for deaf people, with deaf people&#8221; approach. Ando concluded by articulating her vision: a world in which people who use JSL as a first language can access the same level of technological convenience as those who use spoken language.<br \/>\nJoseph Austerweil brought perspectives from psychology and computer science, and shared his own experience with invisible disabilities (dyspraxia and associated mental health challenges). Drawing on the example of Xbox voice recognition systems that failed to accurately recognize certain voices, and on Joy Buolamwini and Timnit Gebru&#8217;s &#8220;Gender Shades&#8221; study \u2014 which demonstrated that commercial facial analysis systems misclassified darker-skinned women at far higher rates than lighter-skinned men \u2014 he argued that those who are underrepresented in training data are both the least visible to AI and the most harmed by it.<br \/>\nAusterweil also presented cognitive science research on Alzheimer&#8217;s disease as a case study, showing how group-level averaging obscures individual differences \u2014 an instance of a broader problem in which AI systems designed around a &#8220;typical&#8221; user exclude those who deviate from the average. 
He also raised the issue of &#8220;disability dongles&#8221; (a term coined by Liz Jackson and others): well-intentioned technological solutions to problems that disabled people never identified as priorities. He further emphasized the need for cross-cultural perspectives in both research and design, noting significant cultural differences between Japan and the United States in how mental health disabilities are viewed.<br \/>\nThe Q&amp;A addressed the structural difficulties of disability-inclusive research (the complexity of ethics approvals, increased time and cost, insufficient institutional support), the need for concrete indicators to verify whether inclusion policies are actually working, and the importance of AI literacy. Maya Indira Ganesh noted that approximately 97% of websites used to train large language models have accessibility issues, highlighting the depth of structural exclusion at the foundations of contemporary AI.<\/p>\n<h4>Session 4: Cross-Cultural Approaches to Risk and Regulation<\/h4>\n<p>Chair: Tomasz Hollanek (LCFI, Cambridge)<br \/>\nPanelists: Eleanor Drage (LCFI, Cambridge), Yee Kuang Heng (University of Tokyo), Arisa Ema (University of Tokyo), Hiroki Habuka (Kyoto University), Jeanette Hofmann (Alexander von Humboldt Institute for Internet and Society \/ Freie Universit\u00e4t Berlin)<\/p>\n<p>Chair Tomasz Hollanek opened with an overview of international AI governance initiatives, including the Global Partnership on Artificial Intelligence (GPAI) initiated at the 2019 G7 Biarritz Summit, and offered a critical perspective: despite claiming global consensus, such forums are overwhelmingly dominated by G7-country viewpoints. 
Citing 2019 and 2023 research showing that major AI policy documents continue to originate disproportionately in Europe, North America, and East Asia \u2014 with Africa, Latin America, and Central Asia significantly underrepresented \u2014 he argued that diversifying participation in global AI governance is necessary both as a matter of justice and as a practical corrective.<br \/>\nEleanor Drage presented research on the &#8220;McDonaldization&#8221; of AI ethics principles, conducted in collaboration with a research institution in Qatar (HBKU). An examination of responsible AI guidelines from Microsoft, Google, Meta, Amazon, and Apple found that terms like &#8220;transparency,&#8221; &#8220;fairness,&#8221; and &#8220;accountability&#8221; are repeatedly deployed without meaningful definition, and appear to have been copy-pasted across documents. Analysis of principles documents from Gulf states (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the UAE) found that Qatar&#8217;s document and Saudi Arabia&#8217;s Riyadh Charter on AI stood out for their use of culturally specific expressions \u2014 such as &#8220;protection of intimacy,&#8221; a concept distinct from privacy \u2014 rather than relying on Western boilerplate. Drage also cited DAIR (the Distributed AI Research Institute) and Hugging Face as positive examples of organizations that engage critically and transparently with their own principles.<br \/>\nYee Kuang Heng discussed the legal interoperability of AI in multinational military operations. Japan&#8217;s Self-Defense Forces operate under a &#8220;positive-list system,&#8221; which requires explicit legal authorization for every action \u2014 a framework that creates both legal and procedural barriers distinct from those of most NATO member states. 
He noted that only 24 of 32 NATO member states have joined the shared cloud software platform ACE (Allied Software for Cloud and Edge), illustrating that fragmentation in AI governance persists even among formal allies. He also pointed to the shift since 2025 in AI regulatory discourse, moving from &#8220;product safety&#8221; toward &#8220;economic competitiveness and national security.&#8221;<br \/>\nHiroki Habuka argued that the repetition of high-level principles such as &#8220;transparency,&#8221; &#8220;fairness,&#8221; and &#8220;safety&#8221; has reached its limits, and that discussion is moving into more specific and context-dependent territory. He also made the provocative argument that the very framework of &#8220;AI governance&#8221; or &#8220;AI regulation&#8221; as a standalone field has become an obstacle, since AI now permeates every domain and cannot be governed separately from governance in general. Arisa Ema brought a science and technology studies (STS) perspective, raising questions about how AI governance institutions form, what kinds of expertise are legitimated, and how cultural and disciplinary differences shape regulatory approaches across countries. Jeanette Hofmann reaffirmed the important role that civil society organizations play in driving AI ethics debates in Europe, and asked about the potential for similar movements in Japan.<\/p>\n<h4>Session 5: Enchanted by Code? 
\u2014 Utopia as Method in Technological Times<\/h4>\n<p>Chair: Eleanor Drage (LCFI, Cambridge)<br \/>\nPanelists: Jonnie Penn (Cambridge), Alyssa Castillo Yap (B&#8217;AI Global Forum \/ The Sarau Collective), Fernando Kague (The Sarau Collective), Yifan Zhuang (Keio University), Sunjin Oh (B&#8217;AI Global Forum, University of Tokyo), Maya Indira Ganesh (LCFI, Cambridge)<\/p>\n<p>This session explored the significance of imagining utopia in the age of advanced technology \u2014 positioning utopia not as a destination to be reached, but as a method for holding open a space of possibility against dominant technological imaginaries.<br \/>\nJonnie Penn screened a short film he had produced by editing clips from various Hollywood films, provoking questions about surveillance, technology, and social control. The film traced how information technology permeates the everyday spaces of work and sociality, and how people increasingly offer their interests and intentions as objects of monitoring and control.<br \/>\nAlyssa Castillo Yap, Fernando Kague, and Yifan Zhuang presented their joint exhibition Way Back Home, held at Clear Gallery Tokyo from March 1 to 7, 2026. Exploring themes of immigration, family memory, and cultural displacement, the exhibition created an airport-like atmosphere through boarding pass displays, black boxes (each &#8220;flight&#8221; linked to a significant moment in immigration history), red suitcases suspended from the ceiling, and immersive video installations. Zhuang described using the fragmented, fluid quality of AI image generation to evoke the layered, overlapping texture of memories experienced in an airport. 
Kague spoke of the utopian potential of performance: because the names of audience members and the &#8220;flight numbers&#8221; they were assigned changed with each showing, every performance became a unique encounter.<br \/>\nSunjin Oh delivered a philosophical lecture titled &#8220;The Exteriority of Time,&#8221; drawing on Emmanuel Levinas to identify a fundamental blind spot shared by contemporary AI discourse. She argued that both Elon Musk&#8217;s simulation theory and Yuval Noah Harari&#8217;s discussion of AI and religion share the same blind spot: the erasure of what Levinas calls &#8220;diachrony&#8221; \u2014 time that arrives from outside synchrony, beyond control or prediction. Drawing on Levinas, Oh argued that ethical responsibility arising from the encounter with the &#8220;face&#8221; of the Other presupposes vulnerability and unpredictability; when AI attempts to enclose this temporal exteriority within the calculable, it does not make ethics easier but renders it impossible. Within the Levinasian framework, utopia is not hope for a better world, but the very structure of hope \u2014 the wound of diachrony that keeps us open to the call of the Other.<br \/>\nMaya Indira Ganesh drew on Ted Striphas&#8217;s concept of algorithmic culture to reflect on the &#8220;automation of culture&#8221; and the &#8220;culture of automation.&#8221; She took Apple&#8217;s 2024 &#8220;Crush&#8221; advertisement \u2014 in which musical instruments, books, and cameras are crushed by an industrial press and compressed into a thin iPad \u2014 as an emblematic case of the automation of culture, depicting how human work and creative diversity are collapsed into a single tool. 
She presented the Trump White House&#8217;s use of OpenAI&#8217;s &#8220;Ghiblification&#8221; filter to depict the arrest of women by ICE \u2014 a serious political event \u2014 as an instance of the &#8220;culture of automation&#8221;: a demonstration of how AI-generated pastiche is becoming a standard register for official communication. Against this, she drew on Jill Bennett&#8217;s &#8220;practical aesthetics&#8221; approach, which treats art and cultural practice not as objects of philosophy or computation, but as practice and action.<br \/>\nIn the closing discussion, there was candid dialogue around the question of hope and despair in relation to technology. Penn recalled Ganesh having said to students in Cambridge&#8217;s master&#8217;s program in AI ethics that she is &#8220;pessimistic about technology, but optimistic about people&#8221; \u2014 a sentiment that encapsulates a recurring insight across the session: critical scrutiny of AI need not be in conflict with confidence in human creativity and solidarity.<\/p>\n<p>Over two days, &#8220;Pro-Justice: Cross-Cultural Approaches to AI Ethics&#8221; built a shared critical understanding of dominant discourses \u2014 including the techno-utopian immortality industry, standardized AI ethics principles produced by major tech companies, and global AI governance built on Western consensus \u2014 and produced a rich set of reference points for thinking through AI&#8217;s challenges from humanistic and social scientific perspectives. 
The workshop affirmed the importance of centering cultural specificity, lived experience, and a human-centered ethic of care as the critical foundation from which to pose alternative visions of technology \u2014 and deepened participants&#8217; understanding of why sustained international and interdisciplinary collaboration is essential to this work.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>On March 10 and 11, 2026, the B&#8217;AI Global Fo<\/p>\n<div class=\"continue-reading-wrapper\"><a href=\"https:\/\/baiforum.jp\/en\/report\/re148\/\" class=\"continue-reading\">Continue Reading<i class=\"ion-ios-arrow-right\"><\/i><\/a><\/div>\n","protected":false},"author":14,"featured_media":0,"parent":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[6],"tags":[7,321,283,97,333,332,135,381],"class_list":["post-4127","post","type-post","status-publish","format-standard","hentry","category-report","tag-ai","tag-aigovernance","tag-art","tag-culture","tag-desirableai","tag-digitalimmortality","tag-ethics","tag-llms"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/4127","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/users\/14"}],"replies":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/comments?post=4127"}],"version-history":[{"count":4,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/4127\/revisions"}],"predecessor-version":[{"id":4131,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/4127\/revisions\/4131"}],"wp:attachment":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/media?parent=4127"}],"wp:term":[{"taxonomy":"category","embeddable":t
rue,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/categories?post=4127"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/tags?post=4127"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}