
Report on the lecture by Ivana Bartoletti “Power, Politics, & AI: Building a Better Future”

Shenghui Jiang (4th year, Faculty of Education, University of Tokyo)

・Date and Venue: August 30, 2021, 18:00-19:30 (JST) @Zoom
・Language: English

On the 30th of August, Ivana Bartoletti joined the B’AI Forum to give a talk titled “Power, Politics, & AI: Building a Better Future” and kindly shared her thoughts on questions raised by participants. As the last session in a three-part series of talks on AI and social justice, the event opened with an introduction of the program and of Bartoletti by Dr. Yuko Itatsu, Associate Director of the B’AI Forum and director of the Forum’s AI and Social Justice summer program. Following this, Dr. Kaori Hayashi, Director of the B’AI Forum and Executive Vice President of the University of Tokyo, delivered the opening remarks, speaking about the Forum’s vision.


Bartoletti started her talk with the UK’s A-level grading fiasco. Because of COVID-19, A-level exams were cancelled in 2020, and the UK’s Department for Education instead used an algorithm to assign students the grades used in their university applications. The algorithm, however, turned out to work unfairly and to predict students’ performance poorly.


Following the UK case, she delivered a strong message about discrimination in AI, especially in systems with allocative, editorial, or predictive functions. In her words, “there is nothing neutral about data in technology,” since data reflects all the inequality and discrimination of real society as it is. With that in mind, using AI to group people, identify trends, or predict people’s behavior can be very dangerous, because AI will simply reproduce the discrimination embedded in the data, and we may not even be aware of it. To illustrate this, she used the example of an airplane: like an airplane’s engine system, AI is a system that most of us do not understand, yet we have no choice but to accept the decisions it makes and endure the consequences.


She also talked about how the asymmetry of power around AI technology harms people, especially the most vulnerable members of society. Since companies collect data from people and use AI to make decisions, such as how large a loan someone can take out, individuals have no say in how AI is used for or against them. They are confined to a passive role, which exacerbates the inequalities in our society. In addition, Bartoletti pointed to the limits of existing legislation: compared with the rapid development of AI technology and its real-world applications, regulation of AI is still insufficient. This leaves individuals, who always sit on the weak side of the balance of power, in a vulnerable and helpless position.


After the discussion of discrimination in AI, Bartoletti continued with the question of whether we could, or should, be satisfied if we managed to create an AI without biases. Her answer was no. AI can cause problems beyond discrimination, such as the algorithms that decide which advertisements appear in your web browser. Such systems can produce filter bubbles on social media and contribute to the erosion of democracy: if we each receive different news under an “individualized recommendation” system, we may perceive the same world very differently, which may lead to serious fragmentation of society and a collapse of public discourse.


To close her talk, she stressed the importance of global cooperation and global standards. Even though a race in AI development is currently under way among countries, she believes they need to work together to tackle the ethical problems of AI.


During the Q&A session, Bartoletti shared her thoughts on how corporations can benefit from addressing biases in AI. She also encouraged us to join the conversation on AI and social justice in order to make a change. Not only the people inside the coding room but everyone needs to speak up, because it is everybody’s life that is affected by the application of AI.


Because I take part in some social movements and try to bring more people into conversations about social injustice, I was extremely impressed and empowered by this session. While acknowledging the discrimination and injustice in AI, instead of rejecting the technology as a whole, she chose to make suggestions for a better system and to spread her message so that more people join the conversation. Her objective yet optimistic view of technology and society has made me rethink my attitude towards the gender issues I am trying to tackle.