REPORTS

Report on the 14th B’AI Book Club
Brian Christian, The Alignment Problem: Machine Learning and Human Values (2020)

Kayoung KIM (Project Researcher of the B’AI Global Forum)

・Date: Tuesday, September 27, 2022, 17:30-19:00 (JST)
・Venue: Online (Zoom Meeting)
・Language: Japanese
・Book: Brian Christian (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company.
・Reviewer: Kayoung KIM (Project Researcher, Interfaculty Initiative in Information Studies, The University of Tokyo)

On September 27, 2022, the 14th meeting of the B’AI Book Club, a book review session held by project members of the B’AI Global Forum, took place. This time, Project Researcher Kayoung Kim reviewed The Alignment Problem: Machine Learning and Human Values by Brian Christian.

The author, an American non-fiction writer and programmer, takes the implications of AI and machine learning for human society as a major theme. In this book, he uses the keyword “the alignment problem” to explain the various ethical and philosophical issues that have been raised about AI. Humans design machine learning systems in the hope that they will perform the behaviors humans expect, but the systems often fail to correctly understand what humans really want, causing social problems by behaving in ways that are not in line with human intentions. Researchers call this “the alignment problem,” the author says. He asks whether it is possible to instill human values and norms in machines as a solution to this problem, and in order to examine its feasibility, he describes in detail the history and theories of psychology that computer science has drawn on, as well as conducting nearly one hundred interviews with experts.

This book has been highly praised by many prominent researchers and experts in the field of AI. Its significance is said to lie in providing a theoretical and practical discussion of the philosophical issues surrounding fairness and transparency, among the most pressing issues in the AI field today, through a masterful survey.

In the meeting, participants broadly accepted those evaluations and then discussed the book’s limitations and concerns. For example, one opinion held that, while the detailed introduction of psychological theories that underpins the book’s argument is valuable for knowledge acquisition and is in itself one of the book’s original contributions, the weight placed on psychology may seem excessive depending on the reader’s motivations and expectations. Moreover, since the problem of attributing an excessively human-like image to AI is often pointed out, some expressed concern that the book’s attempt to closely link computer science and psychology may reinforce that tendency. On the other hand, as the central keyword “alignment problem” suggests, the book aims to align the motivations and goals of AI with those of humans; however, it was pointed out that this may assume human goals and judgments are ideal, even though humans themselves are imperfect and still learning.

However, while the opacity of the process by which AI derives its answers has been highlighted as a serious problem, it is true that this book provides some clues for understanding that process. It also allowed us to see a trend in which an increasingly diverse range of theoretical frames is being introduced into the examination of ethical issues in AI.