{"id":965,"date":"2021-09-15T11:03:08","date_gmt":"2021-09-15T02:03:08","guid":{"rendered":"https:\/\/baiforum.jp\/en\/?p=965"},"modified":"2021-10-19T15:42:35","modified_gmt":"2021-10-19T06:42:35","slug":"re016","status":"publish","type":"post","link":"https:\/\/baiforum.jp\/en\/report\/re016\/","title":{"rendered":"Report on the lecture by Ivana Bartoletti \u201cPower, Politics, &#038; AI: Building a Better Future\u201d"},"content":{"rendered":"<p>On the 30th of August, Ivana Bartoletti joined the B\u2019AI Forum to give a talk titled \u201cPower, Politics, &amp; AI: Building a Better Future\u201d and kindly shared her thoughts on questions raised by participants. The event, the last in a three-part series of talks on AI and social justice, began with an introduction to the program and to Bartoletti by Dr. Yuko Itatsu, the Associate Director of the B\u2019AI Forum and director of the Forum\u2019s AI and Social Justice summer program. Following this, Dr. Kaori Hayashi, the Director of the B\u2019AI Forum and Executive Vice President of the University of Tokyo, gave the opening remarks, speaking about the vision of the B\u2019AI Forum.<\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignright wp-image-861\" src=\"https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/08\/Bartoletti_photo-203x300.png\" alt=\"\" width=\"170\" height=\"251\" srcset=\"https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/08\/Bartoletti_photo-203x300.png 203w, https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/08\/Bartoletti_photo.png 257w\" sizes=\"auto, (max-width: 170px) 100vw, 170px\" \/>Bartoletti began her talk by introducing the UK\u2019s A-level grading fiasco. Due to COVID-19, A-level exams were cancelled in 2020, and the UK\u2019s Department for Education instead introduced an algorithm to assign students the grades that would be used in their university applications. 
However, the algorithm turned out to be neither fair nor able to predict students\u2019 performance accurately.<\/p>\n<p>&nbsp;<\/p>\n<p>Following the UK case, she delivered a strong message about discrimination in AI, especially in systems with allocative, editorial, or predictive functions. In her words, \u201cthere is nothing neutral about data in technology,\u201d since data reflects the inequality and discrimination of real society as it is. With that in mind, using AI to group people, identify trends, or predict people\u2019s behavior can be very dangerous, because AI simply reproduces the discrimination embedded in its data, and we may not even be aware of it. To explain this, she used the example of an airplane: like an airplane\u2019s engine, AI is a system that most of us don\u2019t understand, yet we have no choice but to accept its decisions and endure the consequences.<\/p>\n<p>&nbsp;<\/p>\n<p>She also talked about how the asymmetry of power around AI technology harms people, especially the most vulnerable members of society. Since companies collect data from people and use AI to make decisions such as how large a loan one can take out, individuals have no say in how AI is used for or against them. Left in a purely passive role, individuals see the inequalities in our society exacerbated. In addition, Bartoletti mentioned the limits of existing legislation: compared with the rapid development of AI technology and its real-life applications, regulation of AI is still insufficient. 
This leaves individuals, who always fall on the weak side of the balance of power, in a vulnerable and helpless position.<\/p>\n<p>&nbsp;<\/p>\n<p>After the discussion of discrimination in AI, Bartoletti continued her talk with the question of whether we could, or should, be satisfied if we managed to create an AI without biases. Her answer was no. AI can cause problems beyond discrimination. The algorithm that decides which advertisements appear in your web browser, for example, can produce filter bubbles on social media and contribute to the erosion of democracy: if we each receive different news under an \u201cindividualized recommendation\u201d system, we may perceive the same world very differently, which can lead to serious fragmentation of society and a collapse of public discourse.<\/p>\n<p>&nbsp;<\/p>\n<p>To close her talk, she stressed the importance of global cooperation and global standards. Even amid the ongoing race among countries to develop AI technology, she believes countries need to work together to tackle the ethical problems of AI.<\/p>\n<p>&nbsp;<\/p>\n<p>During the Q&amp;A session, Bartoletti shared her thoughts on how corporations can benefit from addressing biases in AI. She also encouraged us to join the conversation on AI and social justice to make a change. Not only the people inside the coding room but everyone needs to speak up, because everybody\u2019s life is influenced by the application of AI.<\/p>\n<p>&nbsp;<\/p>\n<p>As someone involved in social movements who is trying to bring more people into conversations about social injustice, I was deeply impressed and empowered by this session. While acknowledging the discrimination and injustice in AI, Bartoletti did not reject the technology as a whole; instead, she made suggestions for a better system and spread her message to draw more people into the conversation. 
Her objective yet optimistic view of technology and society made me rethink my attitude towards the gender issues I\u2019m trying to tackle.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>On the 30th of August, Ivana Bartoletti joined B\u2019A<\/p>\n<div class=\"continue-reading-wrapper\"><a href=\"https:\/\/baiforum.jp\/en\/report\/re016\/\" class=\"continue-reading\">Continue Reading<i class=\"ion-ios-arrow-right\"><\/i><\/a><\/div>\n","protected":false},"author":7,"featured_media":0,"parent":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[6],"tags":[7,76,66,64,65],"class_list":["post-965","post","type-post","status-publish","format-standard","hentry","category-report","tag-ai","tag-ai-and-social-justice","tag-ivana-bartoletti","tag-lecture","tag-summer-program"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/965","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/comments?post=965"}],"version-history":[{"count":10,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/965\/revisions"}],"predecessor-version":[{"id":1072,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/965\/revisions\/1072"}],"wp:attachment":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/media?parent=965"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/categories?post=965"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/tags?post=965"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","
templated":true}]}}