REPORTS

Report on the 29th B’AI Book Club
Wayne Holmes and Kaśka Porayska-Pomsta eds. (2023) The Ethics of Artificial Intelligence in Education: Practices, Challenges, and Debates

Priya Mu (Research Assistant, B’AI Global Forum)


・Date: Tuesday, May 28, 2024, 1:00-2:30 PM (JST)
・Venue: On-site (B’AI Office) & Zoom meeting
・Language: English
・Book: Wayne Holmes and Kaśka Porayska-Pomsta eds. (2023) The Ethics of Artificial Intelligence in Education: Practices, Challenges, and Debates, Routledge.
・Reviewer: Priya Mu (Research Assistant, B’AI Global Forum)

On May 28, 2024, the 29th meeting of the B’AI Book Club took place. The B’AI Book Club is a book review session organized by project members of the B’AI Global Forum.

Artificial Intelligence (AI) is rapidly transforming the educational landscape, offering promising tools and methods to enhance learning. However, as highlighted in the book “The Ethics of Artificial Intelligence in Education: Practices, Challenges, and Debates,” edited by Wayne Holmes and Kaśka Porayska-Pomsta, these advancements come with significant ethical challenges. This review focused particularly on Chapters 6, 7, and 8, which delve into issues of equity, algorithmic fairness, and structural injustices within AI in education.

In Chapter 6, Kenneth Holstein and Shayan Doroudi explore whether systems for AI in Education (AIED) will exacerbate or mitigate existing inequities in education, examining the question through four critical lenses: socio-technical system design, historical inequities in datasets, algorithmic factors, and human-AI interaction. In terms of socio-technical system design, disparities in access to technology are significant: only 35% of children with mobile-only access use the internet for interest-driven learning, compared to 52% of those with desktop or laptop access. Non-native English speakers and students unfamiliar with contextual math problems are also disadvantaged, and MOOCs are predominantly utilized by students from higher socio-economic backgrounds. Historical inequities embedded in datasets often reflect and perpetuate biases; because systems are typically refined on data from their earliest users, a dynamic the authors call “early-adopter iteration bias,” existing social inequities can be amplified. Even without historical biases, machine learning algorithms can be inherently unfair, particularly when the data available for different demographic groups varies, and simple models often fail to capture the complexity of learning processes accurately. The design of AI tools can likewise either challenge or reinforce teachers’ biases, promoting equity or maintaining inequitable practices. To address these inequities, Holstein and Doroudi propose investing in tools that support equitable AIED technologies, designing systems that clearly communicate their capabilities and limitations, and incorporating equity-related outcomes into AIED systems.
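To make the algorithmic point concrete, the following is a minimal, hypothetical Python sketch, not code from the book: a model trained mostly on data from a majority group can end up systematically less accurate for an underrepresented group whose data follows a different distribution. The make_group helper, the distribution shifts, and the sample sizes are all invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, shift):
    # Synthetic learners: one feature, with the feature-outcome
    # relationship centred at a different point for each group.
    x = rng.normal(shift, 1.0, size=(n, 1))
    y = (x[:, 0] + rng.normal(0, 0.5, size=n) > shift).astype(int)
    return x, y

# The majority group dominates the training data; the minority group's
# data is drawn from a shifted distribution.
x_maj, y_maj = make_group(950, shift=0.0)
x_min, y_min = make_group(50, shift=2.0)

model = LogisticRegression().fit(
    np.vstack([x_maj, x_min]), np.concatenate([y_maj, y_min])
)

# Evaluating on fresh samples from each group reveals an accuracy gap.
for name, shift in [("majority", 0.0), ("minority", 2.0)]:
    x_test, y_test = make_group(1000, shift)
    print(f"{name} accuracy: {model.score(x_test, y_test):.2f}")

Because the model never sees group membership, the gap here comes purely from underrepresentation and distribution mismatch, which is the sense in which an algorithm can be unfair even without explicitly biased inputs.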

In Chapter 7, René F. Kizilcec and Hansol Lee explore the impacts of algorithmic fairness on various educational stakeholders, emphasizing the need for a critical analysis of AI’s positive and negative consequences. Representativeness and generalizability are significant challenges in developing fair algorithms: when historically marginalized groups are underrepresented in training data, fairness is threatened and those groups are further disadvantaged. Biases that are not addressed during measurement carry over into model learning, and “slicing analysis,” which assesses accuracy gaps across subgroups, is one way to surface them. The rise of “black box” systems raises concerns about the trustworthiness of model predictions, and interpretable machine learning aims to create transparent and understandable models. The authors discuss different measures of algorithmic fairness, such as Demographic Parity and Equalized Odds, emphasizing that no single fairness definition suits all systems. Evaluating fairness also involves considering long-term impacts and engaging in active discussion among stakeholders.
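As an illustration of what a slicing analysis might look like in practice, here is a minimal Python sketch; it is not code from the chapter, and the arrays and group labels are synthetic. It computes, per subgroup, the quantities behind the two fairness measures the authors mention: the positive-prediction rate (Demographic Parity) and the true- and false-positive rates (Equalized Odds).

import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["A", "B"], size=n)   # demographic subgroup label
y_true = rng.integers(0, 2, size=n)      # actual outcome (e.g., pass/fail)
y_pred = rng.integers(0, 2, size=n)      # the model's prediction

for g in ["A", "B"]:
    mask = group == g
    # Demographic Parity compares the rate of positive predictions across groups.
    positive_rate = y_pred[mask].mean()
    # Equalized Odds compares true-positive and false-positive rates across groups.
    tpr = y_pred[mask & (y_true == 1)].mean()
    fpr = y_pred[mask & (y_true == 0)].mean()
    print(f"group {g}: positive rate={positive_rate:.2f}, TPR={tpr:.2f}, FPR={fpr:.2f}")

Demographic Parity holds when the positive rates match across groups; Equalized Odds holds when both the true-positive and false-positive rates match. The two criteria can conflict with each other, which is one reason the authors stress that no single fairness definition suits every system.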

In Chapter 8, Michael Madaio, Su Lin Blodgett, Elijah Mayfield, and Ezekiel Dixon-Román draw on critical theory to explore and redefine equity in educational AI, situating it within broader socio-historical processes. Group fairness metrics often fail to account for intersecting oppressions, and educational AI technologies can perpetuate injustice even when their algorithms are quantitatively fair. Historical power dynamics shape AI technologies, which can reproduce existing inequities, and AI-driven educational surveillance risks importing harmful practices from other domains into education. To confront these issues, the authors propose training underrepresented data scientists, adopting a design-justice approach, and radically reimagining educational AI to address systemic inequities. They emphasize the need to reshape educational research priorities around justice and equity.

“The Ethics of Artificial Intelligence in Education: Practices, Challenges, and Debates” underscores the complexity of ensuring fairness and equity in AI for education, calling for a comprehensive approach that addresses socio-technical design, historical and algorithmic biases, and structural injustices. By engaging in broader discussion and proactive measures, stakeholders can work towards educational AI systems that serve all learners equitably, ensuring that technological advances do not perpetuate existing inequities but instead foster a more inclusive and just educational environment.