2025.Jun.02
Report on the 37th B’AI Book Club
Articles Exploring Promises, Risks and Human Value Complexity in AI
Jingzhi (Ginger) HUANG (B'AI Global Forum Graduate Student Member)
・Date: Tuesday, May 27, 2025, 13:00-14:30 (JST)
・Venue: On-site (B’AI Office) & Zoom Meeting
・Language: English
・Reviewer: Jingzhi (Ginger) HUANG (Graduate School of Interdisciplinary Information Studies, The University of Tokyo, ITASIA Course, Master’s Program)
・Articles:
Amodei, D. (2024). Machines of loving grace: How AI could transform the world for the better. Anthropic. https://darioamodei.com/machines-of-loving-grace
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565. https://arxiv.org/abs/1606.06565
Bloom, P. (2023, December 5). We don’t want moral AI. The New Yorker. [Also published on Small Potatoes Substack: https://smallpotatoes.paulbloom.net/p/we-dont-want-moral-ai]
On May 27, 2025, the 37th meeting of the B’AI Book Club took place. The book review session explored the intricate relationship between artificial intelligence and human values through multiple critical texts. The discussion revealed fundamental tensions between technological optimism and ethical concerns, with participants examining how AI systems reflect and shape our understanding of intelligence, morality, and human agency.
The session began with an examination of Dario Amodei’s “Machines of Loving Grace,” which presents an ambitious vision of AI-driven progress. Participants engaged critically with his projection that powerful AI could compress decades of scientific advancement into just a few years. While the potential benefits in healthcare and poverty alleviation sparked interest, the group emphasized that transparency and accountability must serve as foundational pillars for any AI development, even though these very principles become increasingly difficult to maintain as systems grow more complex and autonomous.
A particularly illuminating moment occurred when a lab member shared her experience with ChatGPT’s self-portrait feature. Her boyfriend, who had engaged with ChatGPT on philosophical and personal topics, received a remarkably accurate visual representation despite never sharing any visual information. In contrast, when she requested the same feature after using ChatGPT purely for functional purposes, the system produced an image of a man that bore no resemblance to her. This stark contrast pointed to gendered biases embedded in AI systems and to how such systems respond differently depending on interaction patterns, raising profound questions about the values these systems encode and perpetuate.
The conversation then evolved into a fundamental questioning of intelligence itself. Participants grappled with whether the metrics and definitions used to measure AI capabilities adequately capture the full spectrum of human intelligence and wisdom, including the emotional intelligence, cultural wisdom, and contextual understanding that characterize human cognition. The concept of indigenous AI emerged as a particularly thought-provoking topic: participants explored how AI development might look different if grounded in indigenous values and knowledge systems rather than market-centric Western frameworks, though they acknowledged that significant practical challenges remain. This discussion connected to broader questions about value alignment in AI systems. The group recognized that alignment presupposes a universal consensus on values that may not exist, raising fundamental questions about whose values get encoded into these systems and who makes those decisions.
Paul Bloom’s argument against moral AI provided a counterpoint to the optimistic visions of aligned artificial intelligence. Participants discussed the paradox that while we want AI systems safe enough not to harm us, we simultaneously resist the idea of machines making moral judgments that might supersede human values. This tension highlighted the complex negotiation between safety and autonomy that defines our relationship with intelligent systems. Despite concerns about AI’s growing influence, participants also identified spaces for human agency and creative resistance. The discussion emphasized how users can maintain moral agency through intentional and creative uses of AI tools.
The session concluded with a recognition that the challenges posed by AI extend far beyond technical problems to fundamental questions about human values, cultural diversity, and the future of human agency. Addressing these challenges requires ongoing dialogue, critical reflection, and a commitment to centering human dignity and cultural diversity in all technological development. If we embrace Andy Clark’s view in “Natural-Born Cyborgs” that humans are inherently flexible, open systems designed to incorporate tools and technologies, then perhaps AI represents not a threat to human nature but its next expression, raising the question of how we might collaboratively shape this cognitive partnership.