
Report on the lecture by Dr. Ana Beduschi “Artificial Intelligence and Digital Technologies at the Border: Migration and Human Rights Considerations”

Alyssa Castillo Yap (3rd year, College of Arts and Sciences, PEAK JEA, University of Tokyo)

・Date and Venue: August 24, 2021, 17:30-19:00 (JST) @Zoom
・Language: English

On August 24, 2021, Dr. Ana Beduschi joined the B’AI Global Forum for an eye-opening talk, followed by a joint Q&A discussion, held in English on the topic of “Artificial Intelligence and Digital Technologies at the Border: Migration and Human Rights Considerations”.

As the first of a tripartite series of talks for “AI and Social Justice”, the program began with an overview of the day’s topic and an introduction of the speaker by Dr. Yuko Itatsu, Associate Director of the Forum, organizer of the Forum’s AI and Social Justice Summer Program, and moderator for the event. This was followed by brief and wonderful opening remarks from Dr. Kaori Hayashi, Director of the B’AI Global Forum and Executive Vice President of the University of Tokyo, who explained the goals of the Forum and invited the engagement of everyone in the Zoom event.

Dr. Ana Beduschi’s 40-minute talk began with an explanation that, although there exists no internationally accepted definition of AI, it can be broadly understood as a collection of digital technologies linked with data, algorithms, and key processes such as machine learning and deep learning. This set the foundation for the rest of the presentation, which covered the key issues, mitigation strategies, and new challenges surrounding the role of AI technologies in the future of borders and migration.

Key Issues in AI for Human Migration

Dr. Beduschi highlighted four key issues arising from the current conceptualisation and usage of AI for human migration: data quality and algorithmic bias; privacy, surveillance, and the technologization of borders; datafication of migration and public-private sector interactions; and fairness and accountability.

Raising the point of poor data quality in AI, Dr. Beduschi explored data on asylum-seeking refugees as a challenge for policy-making. She used the EU as a case in point for the field of migration and as an exemplary site of debate over how AI will reflect mistaken assumptions and conflations about individuals crossing borders. For instance, the number of border crossings recorded in a year does not reflect the number of unique individuals who exited and entered the country, since some individuals may have travelled repeatedly. Thus, increased human traffic at the border demonstrates a need for better standards of adequate reception and underlines the importance of critical analysis of statistical data. We must ask: what do the numbers actually tell us? After all, when data is inaccurate, the cascading policies will also be inaccurate. Dr. Beduschi suggested several ways to mitigate this: first, raising awareness about data quality; second, encouraging interdisciplinary collaboration and communication beyond silos; and finally, normalizing the proactive correction of errors and inaccuracies in existing and future datasets.

Regarding algorithmic bias, Dr. Beduschi discussed the findings about facial recognition systems from “Gender Shades” (Buolamwini and Gebru, 2018). She noted that this study evinces the undeniable fact that biases not only often originate with developers but also raise crucial legal issues after deployment. Without sufficient trial and scrutiny, AI often becomes a harmful contributing, if not catalysing, factor in online and offline racial discrimination. To overcome this, she suggested that an increase in the diversity of datasets and developer teams is necessary. However, this should also be accompanied by comprehensive awareness-raising about the roots of and solutions for bias in representation, be it historical, structural, or in other forms.

Further pursuing the critical issue of bias, Dr. Beduschi turned to a data privacy perspective on AI for surveillance and the technologization of borders. The first main concern here was “the lack of fully-informed and unambiguous consent or other legal basis to process personal information.” On top of this, she emphasised the stark power imbalances between states, international organizations, and migrants themselves. Considering the dimension of power over data and consent, a question arises: if the law can provide a fixed, strong definition of “consent” for data privacy, will AI be more ethical or socially just? Moreover, with the increased use of AI for surveillance, Dr. Beduschi explained that careless, arbitrary usages cannot be tolerated; AI cannot serve to abuse individuals’ privacy. Finally, she also raised some potential risks of over-reliance on not-yet-accurate digital technologies, such as AI drones and the creation of virtual borders. Suggested mitigation strategies included the ever-important awareness-raising, a call for human rights-based approaches to be placed at the center of discussions about potential and current uses of AI technology in migration, and the absolute need for constant human questioning of the added value of technologies. According to Dr. Beduschi, this last strategy is particularly required in instances where the benefits of AI clearly do not outweigh the risks it introduces.

The third key issue was the datafication of migration and public-private sector interactions. Dr. Beduschi introduced and defined the term ‘datafication’ as the process of increasing collection and processing of different types of data for migration. She explained that it has deepened the intertwined relationships between the public and private sectors in technology. On this topic, there remained a lingering question about legally determined limitations and what discrepancies might be observed were we to compare public and private uses of AI and data. Several issues arising from this also involve the lack of cyber-secure storage, which, Dr. Beduschi claims, makes it easy for the officers in charge of data to identify vulnerable individuals and populations. With many possible turning points before and after borders, the availability of migrant data to certain people with power exposes vulnerable individuals like refugees to persecution, violence, and exacerbated precarity. Dr. Beduschi’s suggested mitigation strategies thus involved the demand for data management strategies to be put in place with support from the law, and for a more comprehensive set of impact and risk assessments for data protection.

The final key issue explored in the talk concerned fairness and accountability for AI. Given the unpredictability and opacity of the way AI algorithms operate and are built, Dr. Beduschi invoked the “black box” characterization of AI development. She stated that most knowledge processes undergone by existing technologies and human developers are complex, and are most often not made fully comprehensible to laymen or non-experts. Despite attempts to decode the foundational complexities and reasons behind AI technology, all too often, although many algorithms remain “auditable”, they are impossible to comprehend. Thus, Dr. Beduschi moved on to discuss our shrinking critical attitude towards AI. Digital users today tend towards the belief that a machine’s instructions are correct. This automation bias, which induces our lack of doubt towards technology and machines, introduces several practical implications that necessitate moral accountability on the developer’s side. Dr. Beduschi’s suggested mitigation strategies asked for raised awareness about fairness and automation bias, comprehensible auditability of AI systems, and moral accountability for the deployment of AI technologies. Overall, it was clear that we all now need to refrain from over-reliance on AI systems for decision-making in migration. This is not to say that AI should be eliminated as a tool at borders. Instead, Dr. Beduschi stressed the importance of balance between AI and multilateral human decisions.

Future challenges for AI in migration

To end her talk, Dr. Beduschi presented several new challenges for AI. These included the potential standardization of digital immunity passports during the ongoing pandemic and global health crisis. Dr. Beduschi pointed out that while immunity passports have already become common requirements for standard travel in the EU and UK, we should also expect travel across the globe to soon require some form of digital statement regarding one’s health status. She contended that data about health will both restrict and enable freedom of movement.

In addition to the emerging proliferation of digital health passports, Dr. Beduschi also noted how AI and other digital technologies would soon become part of the human response to the climate crisis and global environment-related challenges. Not only would AI provide us with better preparedness and response, but it would also demand of us greater accountability regarding care for the environment and for those most vulnerable, who are likely to be displaced by the consequences of disaster. Likewise, common concerns about AI technology overtaking human jobs are also an ethical and moral issue which tech developers, decision-makers, politicians, and leaders now need to weigh against efficiency.

Q&A Discussion

Following her talk, members of the audience were able to engage in a fruitful 40-minute discussion. The discussion led to an exploration of migration in the context of digital advancements, seen as a cycle whereby conditions of entry and conditions of the state constantly impact migrants at each and every stage of movement and even settlement. We explored the need for a broader reflection on what exactly we want AI to do for us, which responsibilities we want to and must keep, and the value of having domestic, national, and international interactions in place for these discussions. We discussed the dangers of enhancing non-entrée policies with AI during COVID-19, considering that this could accelerate the current trend of nationalism. We found that intensified immigration control with AI might forgo certain human rights or exacerbate the conditions for mobility of the already socially and politically vulnerable. Furthermore, the extremely pertinent issue of closing the digital gap between developing countries and the centres of digital advancement was discussed alongside the need for accountability. We touched on the huge debate over how countries and industries with limited AI capabilities and resources could be given guarantees that they would not be isolated at the international level.

In the same vein of concern for those who might be isolated or left behind, more questions were raised about the limitation of rights and freedoms, such as those of unvaccinated individuals. It was clear through the discussion that a balance among public health interests, a human rights-based approach, and non-discrimination is essential for the just implementation of AI. During the pandemic, human vulnerability is coupled with intense precarity and uncertainty, so safeguards are needed on top of existing technologies.