InterviewAI at Europe’s Borders

People on the move face dehumanizing and discriminatory treatment at Europe’s borders. Yet the EU continues to fund projects that aim to automate these practices with the help of AI, thereby exacerbating the plight of migrants. An interview with Antonella Napolitano and Fabio Chiusi.

Fabio Chiusi and Antonella Napolitano are sitting on a stage. The re:publica 2024 logo is shown on a screen behind them, overlaid with the slogan ‘Who Cares?’. Chiusi, on the left of the picture, speaks into a microphone and gesticulates. Napolitano has an open MacBook on her lap.
“Against Automated Fortress Europe” at re:publica24 – CC0

Antonella Napolitano is an independent researcher – formerly with Privacy International – who investigates the impact of technology on human rights. Fabio Chiusi leads the “Automation on the Move” project at AlgorithmWatch. At re:publica24, they spoke about Fortress Europe – a technological dystopia – and argued that complex issues around migration cannot be solved by the use of Artificial Intelligence (AI). Artificial Intelligence is a very broad term that is used in many discussions right now. Could you give some examples where AI is deployed at the EU borders?

Fabio Chiusi: In this context, AI is used in two ways: for border surveillance and for migration management. There are systems being developed to forecast migration flows. Automated fingerprinting is already in use in large-scale databases at EU level. Biometrics-based identification – such as iris scanning or 3D face mapping – is also becoming increasingly sophisticated. Other EU-funded projects are exploring emotion recognition technologies, like lie detectors.

Then you have systems concerning border surveillance, mainly unmanned vehicles (UVs). The famous project ROBORDER, for example, involves developing surveillance robots for the sky and the ocean floor – and everything in between. They claim these technologies are used to prevent human trafficking, save lives, or prevent marine pollution. However, if not done properly, these can be very dangerous. Could you name a few of these dangers?

Fabio Chiusi: Those technologies can — and will — replicate and amplify social and political bias. This is common to all AI-based tools but all the more dangerous in the context of migration, with the lives of vulnerable individuals on the line. They can normalise and entrench forms of mass surveillance that we once thought of as hallmarks of illiberal and authoritarian regimes. And they can ultimately result in even more violence and abuse of the rights of people on the move, as is unfortunately already apparent from how AI and automation are currently applied and researched. Which of these technologies are already in use and which ones are still being researched?

Fabio Chiusi: Drones are already in use in many places, but new capabilities are still being developed. The UVs implemented by ROBORDER are meant to operate in swarms – that’s something new.

“People on the move are considered second-tier by default” Your report for “Automation on the Move” mentions the term “automated decision-making” (ADM). How is that different from Artificial Intelligence?

Fabio Chiusi: While the notion of Artificial Intelligence is fundamentally vague, and has been since its inception, I conceive of it as a component of an ADM system. Automated decision-making is not merely characterized by technology; it also involves social, cultural aspects, and a lot of politics.

These systems impact the way people live; they have a social, economic, cultural, or individual impact on people’s lives. Common usage of the term “AI”, by contrast, usually refers only to its technical aspects – for example when applied to agriculture or medicine. But in general, we at AlgorithmWatch deal with automated decision-making systems. One could argue that AI is one way to start spotting them. Speaking of politics – the EU tried to protect the rights of citizens with the Artificial Intelligence Act. But as you hinted in your talk, the regulations are not applied to everyone equally. Who is hit hardest by the use of automated surveillance and who might be more protected?

Antonella Napolitano: The AI Act establishes different levels of risk for AI technology. Technology whose risk is considered “unacceptable” generally cannot be deployed. But several “unacceptable-risk” or “high-risk” systems are still allowed in the context of national security and border control. This means there are technologies that cannot be deployed on EU citizens but can be deployed on people coming to Europe.

This creates a two-tier system: the same rules apply differently to different categories of people. And the category is not determined by something that they have done – they are not accused of a criminal act, but they are effectively criminalised. People on the move are considered second-tier by default.

“An attempt to push responsibilities away from themselves.” Who decides which automation tools can be deployed and where?

Antonella Napolitano: These systems must be assessed at the EU level, so there will be an AI office. It will play a key role in the implementation, and will support the national authorities in the EU member states assigned with this task.

But there are exceptions: national security and military systems are not covered by the AI Act and still operate on a national basis. This leaves a significant portion of applications unregulated by the AI Act. What do you think is the goal the European Commission is pursuing with the use of these technologies at the border?

Fabio Chiusi: It’s important to understand that the goals are not shaped by politics alone but by private interests and the security community. You have a handful of huge companies lobbying, dictating, and shaping the research done in these areas.

Increasingly, Frontex and the border and coast guard community play a big role in these projects. They can decide what to research. If they identify an operational gap, they ask the EU to fund a project to develop the necessary technology.

To many in the field, this tech-solutionist framework is problematic because it reduces a complicated social issue to a technological one. Security parlance uses terms like “situational awareness” or “heterogeneous swarms of robots” as smokescreens to conceal racism, discrimination, abuse, and violence. It is an attempt to push responsibilities away from themselves. Are there any ways people on the move can protect themselves from these discriminatory surveillance technologies?

Antonella Napolitano: The framing of this question places the burden on people on the move. They shouldn’t have to hide to escape surveillance. And much of this surveillance is now legal. In Germany, for instance, some of these surveillance technologies are used lawfully. They have been challenged by lawyers and civil society, including on the grounds that their use is not compatible with the values of the German Constitution.

When I was at Privacy International in the UK, we fought against mobile phone extraction. In that case, it was used unlawfully. We challenged it in court and won. This kind of technology cannot be used in that way anymore.

There will be conversations about resistance and obfuscation, and there are a number of things that can be done. But in terms of policy, it shifts responsibilities. These systems shouldn’t exist in the first place.

“Challenge the racist assumption that people of another color are inherently dangerous” What can we do to improve the situation at the borders as individual European citizens?

Antonella Napolitano: EU elections are coming up in a few weeks, so voting is of course a first answer. A key issue is challenging the narrative that people coming from other countries are a potential danger. We need to challenge the racist assumption that people of another color are inherently dangerous. This is not in line with the EU values European politicians talk about.

For example, EU money is spent to round up black people and dump them in the desert. This is done indirectly by giving money to other governments. How is this in line with so-called EU values? This is not done in the name of EU citizens. We need to create a disincentive for these actions.

Fabio Chiusi: To add – you could turn the question upside down. Rather than protecting migrants from technology, the technology should be protecting them. One thing I consistently see in my research is that these EU-funded projects focus on saving time, money, and cognitive workload for border guards, making life easier for them. It’s tough to see where the life-saving aim comes in.

“What is all this technology for exactly?” Saving lives is used as a pretext to justify investment in these technologies.

Fabio Chiusi: They do claim that they’re going to protect human rights and save people. But as my research shows – in reality that aspect is marginal at best. As I said during the talk, the ethical assessments these projects undergo are shallow most of the time.

Antonella Napolitano: Take Frontex, for example: They have a large budget and sophisticated technologies meant to patrol the Mediterranean and spot boats in distress to ensure safety. However, as seen with the Pylos shipwreck in Greece and the Cutro shipwreck in Italy, they are often involved but fail to help. Or they push the boats back further…

Antonella Napolitano: Yes, there was a recent instance – I’m not sure if it’s been challenged legally – where Frontex was present during a Greek coast guard pushback. They didn’t intervene while the boat was in distress and did nothing when the coast guard pushed it back to Turkey.

Incidents like this led to the resignation of the former Frontex director. A few months ago, Lighthouse Reports found over 2,000 emails exchanged between Frontex and the so-called Libyan coast guard over three years. This extensive communication resembles that between colleagues.

We know about the horrible situation in Libya. What is the point of all this money and technology? Every time Frontex is questioned, they claim a lack of resources.

Fabio Chiusi: And that’s why they “need automation.”

Antonella Napolitano: The Frontex budget skyrockets yearly – increasing by 50, 80, 100 million euros. Frontex is establishing risk analysis cells in African countries, yet people keep dying and being pushed back. What is all this technology for exactly?

Fabio Chiusi: It’s certainly not to save lives.
