The consequences of experimenting with AI and emerging technologies on migrant communities at the border


The weaponization of drones, big data, and algorithms to track migration disproportionately impacts migrant communities. Deploying these technologies can result in significant human rights violations, including systemic racism, increased surveillance, and wrongful automated decision-making. To highlight these issues, the AI + Society Initiative hosted a conversation on migration management and AI at the border, featuring leading expert Dr. Petra Molnar in conversation with Professor Jamie Chai Yun Liew.

Technology is increasingly being used at the border. From drones to big data to algorithms, countries are deploying these techniques to ‘manage’ migration. However, these technological experiments often fail to consider the profound human rights ramifications and real impacts on human lives. In fall 2020, Dr. Petra Molnar authored the report Technological Testing Grounds: Migration Management Experiments and Reflections from the Ground Up in collaboration with EDRi (European Digital Rights). Based on interviews with refugees and people on the move, and highlighting the growth of surveillance and automation at the border, the report analyzes the impact of migration technology on migrant populations in Greece and the Mediterranean region. It also provides insights into the individual impact of these technologies, the systemic frameworks driving forced migration, and the conception and deployment of this technology.

Building on the recommendations presented in her report, the conversation with Dr. Molnar addressed several questions: What are the consequences of deploying AI and emerging technologies at the border? How can we effectively regulate these technologies? What frameworks dictate the behavior of these technologies and of the stakeholders deploying them (the accountability gap)? And finally, how can we create mechanisms that ensure the effective participation of migrant communities, so that the development and deployment of technology reflect the harms these communities have experienced and their future needs?

Key insights

At the core of Dr. Petra Molnar's concern about the implementation of artificial intelligence (AI) at the border is the lack of governance and regulation of this technology. This concern is rooted in frameworks such as the EU Migration Pact, a series of policies framing migration in the EU that shapes how these technologies and policies are designed and deployed. The thematic areas Dr. Molnar singles out are border enforcement, the securitization of the border, and the role of policing. These themes do not exist only in policy; they continue to filter into how these technologies are deployed. That deployment brings about gross human rights violations, such as the use of facial recognition technology on minors, and these harms must be reflected in the Pact if it is to protect human lives from the impact of these technologies. To protect migrant communities, policies need to diverge from a draconian normative framework.

Dr. Molnar's detailed analysis included how migrant communities in the Moria Camp in Lesvos, Greece are experiencing surveillance and increased use of technology at and beyond the border. Her general conclusions are that border technology reflects and heightens power inequalities, suffers from a lack of governance and oversight, and deepens the marginalization of communities.

Dr. Molnar also explored which migration management technologies are being deployed and how. She notes that the purpose of these technologies is to control migration and migrant communities before, at, and beyond the border; simply put, as Dr. Molnar states, "tracing a person's migration journey". Among the ways these technologies track an individual's migration journey are social media data scraping and big data for population tracking. Other technologies include drones piloted at the border, automated aerostat surveillance systems, the rise of smart border technology through thermal cameras, and algorithmic decision-making for visa applications.

AI technology is deployed at various stages of a migrant's journey to reach and cross the border. The lack of governance and regulation of this technology, particularly when it is used on marginalized communities such as migrants, forces us to consider the role of innovation in this context. These communities do not have the opportunity to participate equitably and equally in the conversation and to shape innovation that effectively represents them. As a result, Dr. Molnar and Professor Liew highlight that the deployment of such technology on migrant communities is a form of experimentation without regulation or limitation, and the harms experienced by these communities remain invisible unless they are effectively documented. We must consider this impact when deploying technology and ask whether effective mechanisms exist to protect these communities from both the unintended and intended harms of technology. It is also important to note that these conversations are largely taking place in a Western context; the discourse analyzing the harms of technology needs to draw on a spectrum of diverse perspectives.

Building on the concerns raised above, Dr. Molnar concluded the discussion with the recommendations proposed in her report on the EU context. The first is a call for a moratorium on high-risk applications of technology at the border: given the lack of governance and regulation of such technology, a moratorium would help prevent a global arms race in these technologies. Second, Dr. Molnar calls for mechanisms to increase transparency and accountability in the use of technologies such as algorithmic decision-making. She stresses this recommendation because of the gaps and differences in how various stakeholders are regulated: private sector companies, the stakeholders most often developing this technology, are regulated very differently from public sector actors, and bridging this accountability divide is vital. Finally, Dr. Molnar recommends continuing to evaluate what effective participation looks like, so that those who experience the harms and risks of these technologies, and those targeted by them, are included. This recommendation is particularly significant because it questions which stakeholders shape the conversation and what their roles are: who defines transparency, who creates these core technologies, and whose needs are prioritized?

In short, addressing these questions and gaps will require an evolution of national and regional legal and policy frameworks to protect migrant communities and to bridge the various divides brought forward by Dr. Molnar.


Our event briefs are provided to help amplify the conversation around the ethical, legal, and societal implications of AI in a short and accessible format. We invite you to watch the video recording of the event and to consult the additional resources for more information on this topic.

This summary was prepared by Muriam Fancy, Research Coordinator at the AI + Society Initiative. Opinions and errors are those of the author, and not of the Initiative or the University of Ottawa.