{"id":942,"date":"2021-09-10T17:28:08","date_gmt":"2021-09-10T08:28:08","guid":{"rendered":"https:\/\/baiforum.jp\/en\/?p=942"},"modified":"2021-10-19T15:41:41","modified_gmt":"2021-10-19T06:41:41","slug":"re014","status":"publish","type":"post","link":"https:\/\/baiforum.jp\/en\/report\/re014\/","title":{"rendered":"Report on the lecture by Dr. Ana Beduschi \u201cArtificial Intelligence and Digital Technologies at the Border: Migration and Human Rights Considerations\u201d"},"content":{"rendered":"<p>On the 25th of August, Dr. Ana Beduschi joined B\u2019AI Global Forum for an eye-opening talk, with a joint Q&amp;A discussion held in English, on the topic of \u201cArtificial intelligence and digital technologies at the border: Migration and human rights considerations\u201d.<\/p>\n<p>&nbsp;<\/p>\n<p>As the first of a tripartite series of talks for \u201cAI and Social Justice\u201d, the program began with an overview of the day\u2019s topic and speaker, introduced by Dr. Yuko Itatsu, Associate Director of the Forum, organizer of the Forum\u2019s AI and Social Justice Summer Program, and moderator for the event. This was followed by the brief and wonderful Opening Remarks by Dr. 
Kaori Hayashi, the Director of the B\u2019AI Global Forum and Executive Vice President of the University of Tokyo, explaining the goals of the Forum and inviting the engagement of everyone in the Zoom event.<\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignright wp-image-848\" src=\"https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/07\/Beduschi-photo-300x300.jpg\" alt=\"\" width=\"200\" height=\"200\" srcset=\"https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/07\/Beduschi-photo-300x300.jpg 300w, https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/07\/Beduschi-photo-1024x1024.jpg 1024w, https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/07\/Beduschi-photo-150x150.jpg 150w, https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/07\/Beduschi-photo-768x768.jpg 768w, https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/07\/Beduschi-photo-1536x1536.jpg 1536w, https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/07\/Beduschi-photo.jpg 1970w\" sizes=\"auto, (max-width: 200px) 100vw, 200px\" \/>Dr. Ana Beduschi\u2019s 40-minute talk began with the observation that although there is no internationally accepted definition of AI, it can broadly be understood as a collection of digital technologies linked with data, algorithms, and key processes such as machine learning and deep learning. This set the foundation for the rest of the presentation, which covered the key issues, mitigation strategies, and new challenges surrounding the role of AI technologies in the future of borders and migration.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Key Issues in AI for Human Migration<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>Dr. 
Beduschi highlighted four key issues arising from the current conceptualisation and usage of AI for human migration, as follows: Data quality and algorithmic bias; Privacy, surveillance and technologization of borders; Datafication of migration and public-private sector interactions; and Fairness and accountability.<\/p>\n<p>&nbsp;<\/p>\n<p>Raising the issue of poor data quality in AI, Dr. Beduschi explored data on asylum-seeking refugees as a challenge for policy-making. She used the EU as a case in point for the field of migration and as an exemplary site of debate for how AI will reflect mistakes in assumptions and conflations about individuals crossing borders. For instance, the recorded number of border crossings in a year does not reflect the number of unique individuals who exited and entered the country &#8211; some individuals may have travelled repeatedly. Thus, increased human traffic at the border demonstrates a need for better standards of reception and underlines the importance of critical analysis of statistical data. We must ask: what do the numbers actually tell us? After all, when data is inaccurate, the cascading policies will also be inaccurate. To mitigate this, Dr. Beduschi suggested that we first raise awareness about data quality, then encourage inter-disciplinary communication beyond silos, and finally normalize the proactive correction of errors and inaccuracies in existing and future datasets.<\/p>\n<p>&nbsp;<\/p>\n<p>Regarding algorithmic bias, Dr. Beduschi discussed the findings about facial recognition systems from \u201cGender Shades\u201d (Buolamwini and Gebru, 2018). She noted that this study shows that biases not only often permeate systems from their developers, but also raise crucial legal issues after deployment. Without sufficient trial and scrutiny, AI often becomes a harmful contributing factor\u2014if not a catalyst\u2014in online and offline racial discrimination. 
To overcome this, she suggested that an increase in the diversity of datasets and developer teams is necessary. However, this should also be accompanied by comprehensive awareness-raising about the roots of and solutions for bias in representation, be it historical, structural or in other forms.<\/p>\n<p>&nbsp;<\/p>\n<p>Further pursuing the critical issue of bias, Dr. Beduschi turned to a data privacy perspective on AI for surveillance and the technologization of borders. The first main concern here was \u201cthe lack of fully-informed and unambiguous consent or other legal basis to process personal information.\u201d On top of this, she emphasised the stark power imbalances between states, international organizations, and migrants themselves. Considering the dimension of power over data and consent, a question arises: if the law can provide a fixed, strong definition of \u201cconsent\u201d for data privacy, will AI be more ethical or socially just? Moreover, with the increased use of AI for surveillance, Dr. Beduschi explained that careless, arbitrary usage cannot be tolerated; AI cannot serve to abuse individuals\u2019 privacy. Finally, she also raised some potential risks of over-reliance on not-yet-accurate digital technologies such as AI drones and the creation of virtual borders. Suggested mitigation strategies included: the ever-important awareness-raising, a call for human rights-based approaches to be placed at the center of discussions about potential and current uses of AI technology in migration, and the absolute need for constant human questioning of the added value of these technologies. According to Dr. Beduschi, this last strategy is particularly required in instances where the benefits introduced by AI clearly do not outweigh the risks.<\/p>\n<p>&nbsp;<\/p>\n<p>The third key issue was the datafication of migration and public-private sector interactions. Dr. 
Beduschi introduced and defined the term \u2018datafication\u2019 as the process of increasing collection and processing of different types of data for migration. She explained that it has deepened the intertwined relationships between the public and private sectors in technology. Here, a lingering question remained about legally determined limitations and what discrepancies might be observed were we to compare public and private uses of AI and data. Several related issues involve the lack of cyber-secure storage, which, Dr. Beduschi claimed, makes it easy for the officers in charge of the data to identify vulnerable individuals and populations. With many possible turning points before and after borders, the availability of migrant data to certain people with power exposes vulnerable individuals like refugees to persecution, violence and exacerbated precarity. Dr. Beduschi\u2019s suggested mitigation strategies thus involved the demand for data management strategies to be put in place with support from the law, and for a more comprehensive set of impact and risk assessments for data protection.<\/p>\n<p>&nbsp;<\/p>\n<p>The final key issue explored in the talk concerned fairness and accountability for AI. Due to the unpredictability and opacity of the way AI algorithms are built and operate, Dr. Beduschi pointed to the \u201cblack box\u201d characterization of AI development. She stated that most knowledge processes undergone by the existing technology and its human developers are complex, and most often are not made fully comprehensible to laypeople or non-experts. Despite attempts to decode the foundational complexities and reasoning behind AI technology, all too often algorithms remain \u201cauditable\u201d yet impossible to comprehend. Thus, Dr. Beduschi moved on to discuss our shrinking critical attitude towards AI. 
Digital users today tend to believe that a machine\u2019s instructions are correct. This automation bias, which suppresses our doubt towards technology and machines, has several practical implications that necessitate moral accountability on the developers\u2019 side. Dr. Beduschi\u2019s suggested mitigation strategies called for: raised awareness about fairness and automation bias, comprehensible auditability of AI systems, and moral accountability for the deployment of AI technologies. Overall, it was clear that we all now need to refrain from over-reliance on AI systems for decision-making in migration. This is not to say that AI should be eliminated as a tool at borders; instead, Dr. Beduschi emphasised the importance of balance between AI and multilateral human decisions.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Future challenges for AI in migration<\/strong><\/p>\n<figure id=\"attachment_941\" aria-describedby=\"caption-attachment-941\" style=\"width: 220px\" class=\"wp-caption alignright\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-941\" src=\"https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/09\/daniel-schludi-e1RI3wRelqM-unsplash-Beduschi\u5148\u751f\u8b1b\u6f14\u4f1a\u306e\u6d3b\u52d5\u5831\u544a-300x200.jpg\" alt=\"\" width=\"220\" height=\"147\" srcset=\"https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/09\/daniel-schludi-e1RI3wRelqM-unsplash-Beduschi\u5148\u751f\u8b1b\u6f14\u4f1a\u306e\u6d3b\u52d5\u5831\u544a-300x200.jpg 300w, https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/09\/daniel-schludi-e1RI3wRelqM-unsplash-Beduschi\u5148\u751f\u8b1b\u6f14\u4f1a\u306e\u6d3b\u52d5\u5831\u544a-1024x683.jpg 1024w, https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/09\/daniel-schludi-e1RI3wRelqM-unsplash-Beduschi\u5148\u751f\u8b1b\u6f14\u4f1a\u306e\u6d3b\u52d5\u5831\u544a-768x512.jpg 768w, 
https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/09\/daniel-schludi-e1RI3wRelqM-unsplash-Beduschi\u5148\u751f\u8b1b\u6f14\u4f1a\u306e\u6d3b\u52d5\u5831\u544a-1536x1024.jpg 1536w, https:\/\/baiforum.jp\/en\/wp-content\/uploads\/2021\/09\/daniel-schludi-e1RI3wRelqM-unsplash-Beduschi\u5148\u751f\u8b1b\u6f14\u4f1a\u306e\u6d3b\u52d5\u5831\u544a-2048x1365.jpg 2048w\" sizes=\"auto, (max-width: 220px) 100vw, 220px\" \/><figcaption id=\"caption-attachment-941\" class=\"wp-caption-text\">Image | Daniel Schludi<\/figcaption><\/figure>\n<p>To end her talk, Dr. Beduschi presented several new challenges for AI. These included the potential standardization of digital immunity passports during the ongoing pandemic and global health crisis. Dr. Beduschi pointed out that while immunity passports are already commonly required for standard travel in the EU and UK, we should expect travel everywhere in the world to soon require some form of digital statement regarding one\u2019s health status. She contended that data about health will both restrict and enable freedom of movement.<\/p>\n<p>&nbsp;<\/p>\n<p>In addition to the proliferating use of digital health passports, Dr. Beduschi also noted how AI and other digital technologies would soon become part of the human response to the climate crisis and global environment-related challenges. Not only would AI provide us with better preparedness and response, but it would also demand of us greater accountability in caring for the environment and for those most vulnerable, who are likely to be displaced by the consequences of disaster. 
Likewise, common concerns about AI technology overtaking human jobs present an ethical and moral issue that tech developers, decision-makers, politicians and leaders now need to weigh against efficiency.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Q&amp;A Discussion<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>Following her talk, members of the audience engaged in a fruitful 40-minute discussion. The discussion explored migration in the context of digital advancements, seen as a cycle whereby conditions of entry and conditions of the state constantly impact migrants at every stage of movement and even settlement. We explored the need for a broader reflection on what exactly we want AI to do for us, which responsibilities we want to and must keep, and the value of having domestic, national and international interactions in place for these discussions. We discussed the dangers of enhancing non-entr\u00e9e policies with AI during COVID-19, which could accelerate the current trend of nationalism. We found that intensified immigration control with AI might forgo certain human rights or exacerbate the conditions for mobility of the already socially and politically vulnerable. Furthermore, the extremely pertinent issue of closing the digital gap between developing countries and the centres of digital advancement was discussed alongside the need for accountability. We touched on the huge debate over how countries and industries with limited AI capabilities and resources could be guaranteed that they would not be isolated at the international level.<\/p>\n<p>&nbsp;<\/p>\n<p>In the same vein of concern for those who might be isolated or left behind, further questions were raised about the limitation of rights and freedoms, such as those of unvaccinated individuals. 
It was clear through the discussion that a balance among public health interests, a human rights-based approach and nondiscrimination was essential for the just implementation of AI. During the pandemic, human vulnerability is coupled with intense precarity and uncertainty, so safeguards are needed on top of existing technologies.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>On the 25th of August, Dr. Ana Beduschi joined B\u2019A<\/p>\n<div class=\"continue-reading-wrapper\"><a href=\"https:\/\/baiforum.jp\/en\/report\/re014\/\" class=\"continue-reading\">Continue Reading<i class=\"ion-ios-arrow-right\"><\/i><\/a><\/div>\n","protected":false},"author":7,"featured_media":0,"parent":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[6],"tags":[7,76,63,64,75,65],"class_list":["post-942","post","type-post","status-publish","format-standard","hentry","category-report","tag-ai","tag-ai-and-social-justice","tag-ana-beduschi","tag-lecture","tag-migration","tag-summer-program"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/942","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/comments?post=942"}],"version-history":[{"count":11,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/942\/revisions"}],"predecessor-version":[{"id":1070,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/posts\/942\/revisions\/1070"}],"wp:attachment":[{"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/media?parent=942"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\
/v2\/categories?post=942"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/baiforum.jp\/en\/wp-json\/wp\/v2\/tags?post=942"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}