Technology at the Border: Assessing the effectiveness and human rights implications of iBorderCtrl, ROBORDER, and ITFLOWS

 

Abstract

Horizon 2020 was Europe's largest ever research and innovation funding program, distributing nearly 80 billion euros among some 31,000 projects. This thesis examines three of those projects, iBorderCtrl, ROBORDER, and ITFLOWS, assessing both their effectiveness and their human rights implications.

Each project received funding under the sub-category ‘Technologies to enhance border and external security’ and sought to reimagine how the European Union handles migration, specifically with regard to those seeking asylum or refugee status. Each project applies novel technologies such as artificial intelligence, machine learning, and neural networks to secure Europe’s land, sea, and air borders. If deemed successful, the projects could be incorporated into future border protection initiatives.

All three of these programs in some way come into conflict with human rights legislation within Europe. This dissertation explores whether these technologies are suitably aligned with European standards of human rights, as defined by the Charter of Fundamental Rights of the European Union, the European Convention on Human Rights, and the European Charter of Fundamental Rights.

To date, there has been little academic research on or evaluation of these programs, as all three are quite new. Because refugees and asylum seekers are a disadvantaged and protected class in Europe, it is important to critically assess such tools for their human rights implications, and for how they may affect the lives of the people subject to them, before they are deployed.

This research assesses these programs against two metrics: first, whether they function as planned, and second, whether they are aligned with human rights legislation and norms within Europe. Finally, the thesis concludes with suggestions on how each tool could be implemented in a way that protects the rights and dignity of all people.

 

Keywords: Horizon 2020, tech, innovation, refugee, human rights 

 

 

Introduction

 

Millions of people leave their homes every day due to persecution, conflict, natural disasters, or economic hardship. Often, they are among the most vulnerable members of society, with little access to resources in their home country, the areas they pass through, or their eventual host countries. By the end of 2021 there were 26.6 million refugees worldwide; although only around 10% of them were living in the European Union, that still represents millions of people who interact with the various systems in place for entering and living in the European Union as foreign nationals (UNHCR, 2021).

 

Since the early 1990s, Europe has consistently received between 25,000 and 50,000 asylum seekers a year, with slight increases after events such as the fall of the Berlin Wall and the start of the Kosovo War (Number of Refugees to Europe Surges to Record 1.3 Million in 2015, 2020). Then, with the start of the Syrian conflict in 2011, European countries saw an unprecedented rise in asylum applications, reaching a peak of 1,325,000 asylum seekers in 2015.

 

 

Horizon 2020

 

In 2014, just as the refugee crisis in Europe was approaching its peak, the European Commission launched Horizon 2020, the largest ever funding program for research and innovation (Horizon 2020, 2021). To re-establish Europe as a leader in scientific and technological progress, nearly 80 billion euros were allocated across 31,000 projects (Funding & Tenders, n.d.). The European Union highlighted three primary funding categories: Excellent Science, Industrial Leadership, and Societal Challenges (Funding & Tenders, n.d.). Within Societal Challenges was the subcategory “Technologies to enhance border and external security.” The funding category specified that “Innovation for border and external security may draw, in particular, from novel technologies, provided that they are affordable, accepted by citizens and customized and implemented for the needs of security practitioners” (Funding & Tenders, n.d.).

 

This dissertation will evaluate three projects that received Horizon 2020 funding to solve problems and improve conditions at European land and sea borders: iBorderCtrl, ROBORDER, and ITFLOWS. These three projects were selected because they span three distinct technologies and were conducted at different points in the Horizon 2020 funding process. Each program will be evaluated on whether it achieved its stated goals in terms of how the technology functions, and whether it adheres to current human rights legislation in the European Union, specifically the Charter of Fundamental Rights of the European Union, the European Convention on Human Rights, and the European Charter of Fundamental Rights. As the right to privacy is a central pillar of, and is repeatedly invoked in, all of these charters and conventions, this dissertation will also evaluate how these projects comply with Europe's most significant privacy legislation, the General Data Protection Regulation (GDPR).

 

Each of the aforementioned projects proposed using novel technologies, including drones, swarm robotics, artificial intelligence, machine learning, neural networks and prediction software, to improve the EU’s ability to monitor its borders and better manage future influxes of incoming migrants. They each touted promises of preventing unauthorized migration, speeding up verification and entry processes, and lessening the load for those working in all aspects of border security and refugee management. 

 

As these are all relatively recent projects, there has been little evaluation of how these tools and technologies function, or discussion of how the technology should be used. Such assessments are vital, however, as the development of technology almost always outpaces regulation and law. Without public and academic scrutiny, new technologies can operate with little to no oversight. Furthermore, as the groups and companies developing these tools are almost always separate from those who will be most affected by them, it is essential that third-party groups, including human rights groups, refugee rights advocates, and possibly refugees themselves, be included in the evaluation of these technologies.

 

In order to receive funding from Horizon 2020, each project had to be approved by an internal ethics committee and follow a strict process to ensure its compatibility with all European standards and human rights charters. As part of the application process, candidates were required to complete a preliminary self-administered ethical screening to assess the “ethical aspects of its objectives, methodology and potential impact” (European Commission, n.d.). Importantly, any proposed project that included a “severe intervention on humans” was flagged for an additional ethics assessment conducted by Horizon 2020 staff (European Commission, n.d.).

 

Projects must also include "a partner institution expert in legal informatics, data protection, data security and ethics who will lead the EU-wide legal review and the tasks associated with legal and ethical compliance" (Crockett et al., 2018).

 

Under Article 19 of Regulation No 1291/2013 of Horizon 2020:

 

“All the research and innovation activities carried out under Horizon 2020 shall comply with ethical principles and relevant national, Union and international legislation, including the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights and its Supplementary Protocols. Particular attention shall be paid to the principle of proportionality, the right to privacy, the right to the protection of personal data, the right to the physical and mental integrity of a person, the right to non-discrimination and the need to ensure high levels of human health protection” (European Union, 2013).

 

Horizon 2020 repeatedly touted the importance of high moral standards and alignment with human rights laws. However, despite ethics assessments administered at multiple points of each project's development, and close work with an independent ethics board, each project comes into conflict with the previously stated human rights charters and laws in various ways. In this thesis, I aim to prove that Horizon 2020 failed in its responsibility to monitor the projects properly. As these projects were funded with European money, Horizon 2020 had a responsibility to uphold European laws and values. The following chapters will explore the ways in which iBorderCtrl, ROBORDER, and ITFLOWS violate human rights laws.

 

 

Problematic Aspects of Technology

 

The spread of technology into nearly every aspect of our daily lives is itself a point of contention. A tendency has developed to implement tech solutions “just because they exist, rather than because they serve an actual policy need or provide an answer to a particular policy question” (Ziebarth & Bither, 2020). Compounding this inflated sense of what novel technologies can solve, the public often places a relatively high degree of confidence in technology, even when that trust is undeserved. There is also a common assumption that technology will be a less biased alternative to predictably biased human beings. “One of the greatest myths about artificial intelligence is that it is an objective or neutral tool. It is not. AI is shaped by the prejudices, priorities and decisions of its creators and the people who deploy it” (Heikkilä, 2021).

 

Algorithmic bias often occurs when an AI absorbs the biases of its human creators, whether or not they realize it. The problem frequently goes undetected until it becomes widespread and visibly harmful. Due to corporate intellectual property laws, many of these technologies operate in what is often referred to as a corporate black box. The companies building these tools have no obligation to disclose to the public or to governmental bodies how the tools were created or how they work. Often, the companies themselves are not fully aware of how an artificial intelligence arrives at a decision.

 

In specific contexts, data collection, analytics, statistical models, and algorithms can provide extraordinary insights and benefits to organizations. They can help detect and solve problems, help people receive aid more efficiently, and streamline the delivery of goods to those who need them. However, all digital data collection is also “subject to risks and harms arising from missing data, misused data, incorrect data, non-representative data, biased models, faulty analysis, and insecure data storage” (Digital Identity in the Migration and Refugee Context, 2019).

 

The lack of understanding regarding how these technologies work and the intentional shroud of mystery companies hold over their intellectual property can lead to the undetected exacerbation of systemic discrimination. Because of this, it is crucial to be sceptical of black-box technologies that claim to solve big problems. “While the aims of these tools might be to contribute to the greater good, they are also overwhelmingly created by private companies with their own metrics and priorities” (Molnar, 2020).

 

According to the International Refugee Rights Association, some of the key concerns regarding AI applications and privacy are: 

 

“Discrimination, unfairness, inaccuracies, and bias: AI-driven identification, profiling, and automated decision-making may also lead to unfair, discriminatory, or biased outcomes. People can be misclassified, misidentified, or judged negatively, and such errors or biases may disproportionately affect certain groups of people” (International Refugee Rights Association, 2021).

 

Associate Director of the Refugee Law Lab, Petra Molnar, often writes about “the so-called AI divide- or the gap between those who are able to design AI and those who are subject to it” (Molnar, 2020). Many countries have laws in place to protect citizens from abuse or the invasion of privacy, but many of these laws do not apply to third-country nationals. This creates what Molnar often refers to as a technological testing ground in refugee spaces (Molnar, 2020). As migrants and refugees are already a disadvantaged and often persecuted class, they deserve more protection from potential biases, not less.

  

In order to ethically use these new technologies in the tracking and management of migrants, there needs to be better collaboration between those creating and implementing these technologies and the refugees themselves. Otherwise, we face an “imbalance of power between asylum seekers and states wielding AI tech that can increasingly decide people’s fate without offering them any recourse” (Keung, 2020).

 

While many are eager to explore the potential benefits of new technologies, more attention must be paid to their real impacts on human rights and human lives. While these tools might seem the answer to all of our problems, “technologies for migration issues are largely unregulated, notoriously opaque, with little to zero accountability” (Dialani, 2021). 

 

The European Union was built on multiculturalism, openness, and respect for the diversity of people and thoughts within its borders. In order to be true to these founding ideals, its policies and programs must reflect those values to both European and non-European populations. “If implemented responsibly, AI has the potential to promote the enjoyment of human rights. However, there is a real risk that commercial and state use has a detrimental impact on human rights” (International Refugee Rights Association, 2021).

 

Methodology 

 

In order to assess the legality of iBorderCtrl, ROBORDER, and ITFLOWS under European law, I primarily compared documentation from these projects against the Charter of Fundamental Rights of the European Union, the European Convention on Human Rights, and the European Charter of Fundamental Rights. I focused specifically on issues of privacy, discrimination, and the preservation of human dignity. I chose these three themes because the above charters and conventions dedicate substantial sections to them and return to them repeatedly.


The programs were primarily assessed through publicly available reports and documentation provided by the European Commission, Horizon 2020, and the individual projects themselves. I also obtained internal documents through Freedom of Information Requests via AskTheEU.org. In particular, I looked for anything that pertained to how the programs collected and stored data and how they would interact with the end user, as I determined these to be the two areas that were most likely to be problematic.

 

iBorderCtrl

1 September 2016 to 31 August 2019

Budget € 4,501,877.50

Coordinated by European Dynamics Luxembourg SA

(The European Commission, 2020)

 

"Welcome to the iBorderCtrl avatar. Can you please state your name?"

 

At first, this AI-powered avatar may seem like a pleasant and faster way to automate registration and the collection of personal data from people entering secured borders. In fact, many countries and airports worldwide have already implemented automation to speed up processing for the thousands of people standing in long airport security lines every day. 

 

To date, these tools have been used for travellers who do not require visas to enter the country: in this case, EU nationals travelling within the European Union. Now, however, governments and border control agents aim to expand the use of the technology and integrate it into land borders across the Schengen Zone. 

 

Depending on how one responds to its questions, and on micro gestures such as an eye twitch or a momentary scrunching of the corner of the mouth, the avatar might start asking questions like, "Do you have any connections with ISIS?" or "Has anyone you are connected to ever been involved in criminal activity?" 

 

Developed by researchers at Manchester Metropolitan University, iBorderCtrl's primary objective is to automate and accelerate the data collection and identity confirmation segment of border control for third-country nationals attempting to enter the EU. In doing so, the European Union hopes to cut down the number of people entering with fake documentation and the amount of smuggled or otherwise illegal goods entering its borders.

 

According to records from the European Union, in 2019 more than 7,500 people were detected attempting to enter with fake documentation (ETIAS, 2022). Governments and border agents are continuously looking for new ways to confirm a person's identity and prevent mis-documented persons from crossing borders. Biometrics such as fingerprints, retina scans, vein comparison, and, possibly in the near future, voice imprints are becoming popular alternatives, as they are currently substantially more difficult to falsify (The European Commission, 2020).

 

However, the creators of iBorderCtrl have ambitions far beyond mere identity confirmation: they aim to create a 21st-century lie detection system. iBorderCtrl uses an Automatic Deception Detection System (ADDS) called Silent Talker. It utilises computer vision and artificial neural networks (ANNs) to pick up on 40 channels of facial nonverbal behaviour, including a person's micro gestures, gaze, and posture (Silent Talker, n.d.). It then uses that information to extrapolate conclusions, specifically whether the subject is being honest or deceitful (Gallagher & Jona, 2019). On the system's front end is a personalised avatar that can take on various looks and tones and "raise specific interview topics that are of higher relevance for certain travellers but may be irrelevant for others" (Krigel et al., 2018).

 

After completing the interview, Silent Talker determines a risk score for the traveller. Those with a high risk score are flagged and sent to a border control agent for further questioning. As the program deals with the verification of documents, it falls into the category of high-risk AI systems under the European Commission's proposed AI regulation, meaning such systems “will be subject to strict obligations before they can be put on the market” (European Commission, 2021). However, this appears not to have been the case.
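Silent Talker's architecture and trained weights are proprietary, so only the published outline can be reconstructed: a neural network maps the 40 nonverbal channels to a deception score, and a threshold on that score decides who is flagged. The following is a minimal sketch under those assumptions; the channel encoding, network size, weights, and the 0.7 threshold are all hypothetical.

```python
import math
import numpy as np

# Minimal sketch of an ADDS-style scorer. Silent Talker's real architecture is
# proprietary; the dimensions, weights, and threshold here are illustrative only.

N_CHANNELS = 40   # the project claims 40 channels of facial nonverbal behaviour
HIDDEN = 16

rng = np.random.default_rng(0)
W1 = rng.normal(size=(N_CHANNELS, HIDDEN))   # stand-ins for trained weights
W2 = rng.normal(size=(HIDDEN,))

def deception_score(channels: np.ndarray) -> float:
    """Map one interview's 40 channel measurements to a score in (0, 1)."""
    hidden = np.tanh(channels @ W1)          # hidden-layer activations
    logit = float(hidden @ W2)
    return 1.0 / (1.0 + math.exp(-logit))    # sigmoid -> pseudo-probability

# One synthetic traveller: e.g. gaze deviation, micro-gesture rates, posture shifts.
traveller = rng.normal(size=N_CHANNELS)
score = deception_score(traveller)
print(f"risk score: {score:.2f}" + (" -> refer to border guard" if score > 0.7 else ""))
```

Even in this toy form, the design choice that matters is visible: the network emits only a number, with no account of how the input channels produced it, which is precisely the explainability problem raised under Article 22 of the GDPR later in this chapter.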

 

In its testing phase, iBorderCtrl was deployed in airports in Hungary, Latvia, and Greece. As Petra Molnar, Associate Director of the Refugee Law Lab, pointed out, the three countries chosen for the testing phase all have strict immigration policies, especially towards refugees (Molnar, 2020). While the program was voluntary for those who opted in, in future deployment models it could become obligatory. The impact of this technology extends far beyond these three countries, however: in a conference paper, the creators of iBorderCtrl wrote that they believe the system will become the future of border security worldwide (iBorderCtrl, 2018). 

 

One of iBorderCtrl's primary technological tools is facial recognition technology (FRT), which uses biometrics, image processing, and machine learning to confirm or challenge someone's claimed identity based on unique facial attributes. FRT is considered an intrusive technology because it captures, extracts, stores, and shares specific information on something unambiguously tied to the user (International Refugee Rights Association, 2021). UN human rights experts and organizations have expressed significant concerns about the use of FRT, as it can violate several EU data protection and privacy laws, which will be examined in the following sections. 

 

While more advanced video capabilities allow for better image processing, the interpretation of micro-expressions is new and highly contested. "Can this system account for the cross-cultural differences in which we communicate? What about if you are traumatised and unable to recall details clearly?" (Molnar, 2020). These are important questions to answer, especially as refugee and immigration claims are notoriously complex and filled with nuances that could be difficult for a machine to pick up on.

 

When a person’s prospects of receiving asylum are influenced by input from an algorithm, it “can cause breaches of protected human rights due to process and procedural fairness issues, bias, and discrimination" (Dialani, 2021). All final decisions regarding entry or asylum status are made by a border guard or immigration agent, but it is not difficult to see how a program claiming a person is lying might sway that decision.

 

When Silent Talker was first tested by researchers at Manchester Metropolitan University, it showed a 75% accuracy rate in detecting dishonesty. However, critics point out numerous problems with this research. First, only 32 people participated in the study. Out of that already small sample, the group was also unbalanced in terms of ethnicity and gender (O'Shea et al., 2018). For example, the deceptive dataset contained only four participants of Asian or Arabic descent against thirteen Caucasian Europeans. Likewise, the truthful dataset consisted of twelve male and only three female participants (O'Shea et al., 2018b). Data this unbalanced in both gender and ethnicity could greatly impact the performance of the deception classification network.
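How little evidential weight a 32-person trial can carry is easy to make concrete with a standard confidence-interval calculation. The sketch below applies 95% Wilson score intervals to the reported headline figure and to a subgroup of the size described above; the 24-of-32 and 3-of-4 "correct" counts are hypothetical stand-ins chosen to match the reported 75% rate, since per-subgroup results are not reproduced here.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Headline result: roughly 75% accuracy over all 32 participants.
lo, hi = wilson_interval(successes=24, n=32)
print(f"n=32 overall: 95% CI ≈ {lo:.0%} to {hi:.0%}")    # about 58% to 87%

# A subgroup of four, like the Asian/Arabic participants in the deceptive set
# (assuming, hypothetically, that 3 of the 4 were classified correctly):
lo, hi = wilson_interval(successes=3, n=4)
print(f"n=4 subgroup:  95% CI ≈ {lo:.0%} to {hi:.0%}")   # about 30% to 95%
```

Intervals this wide mean the study cannot distinguish a classifier working at 75% from one barely better than chance for its smaller subgroups.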

 

The participants, divided into groups of seventeen deceptive and fifteen truthful subjects, were each asked thirteen questions. All questions remained relatively simple, such as ‘what is your surname’ or ‘have you packed X in your suitcase’ (O'Shea et al., 2018b). While this rudimentary level of questioning provides researchers with a baseline, it is not sufficient to prepare an algorithm for more complex questions and the vastly more complex human responses they elicit. Nor does training a model to decipher more complex reactions and emotions scale linearly: it is far more difficult for a machine to parse complex language, or an answer that takes the form of a story rather than a single word or a yes or no. 

 

Proper testing over time is one way to increase the likelihood of an AI correctly identifying dishonest answers, but a single test of 32 people, most of them white males, is not enough to be academically or scientifically meaningful. Not only did iBorderCtrl's study lack the diversity needed to ensure accuracy, but FRT is also known for a quality discrepancy, specifically between white male faces and everyone else. 

 

In 2018, two researchers found that "some facial analysis algorithms misclassified Black women nearly 35 percent of the time, while nearly always getting it right for white men" (How Is Face Recognition Surveillance Technology Racist? | News & Commentary, 2020). They later tested various facial recognition systems and found similar results across the board. The United States federal government also investigated the matter and found that facial recognition systems "generally work best on middle-aged white men's faces, and not so well for people of color, women, children, or the elderly" (How Is Face Recognition Surveillance Technology Racist? | News & Commentary, 2020). If computer vision and biometrics cannot accurately identify the face of anyone who is not a white male, it is unlikely the technology could correctly detect signs of deception. 

 

Even with its flaws, facial recognition software has been implemented in airports for years. This is in part because there are significant benefits to not relying solely on a physical ID or passport. "There are clear deficiencies in a system that depends on legally recognised ID certificates in the form of paper documents that are easily stolen, lost, or destroyed and also difficult to replace once inside the EU. It is here where the promises of technology, through digitally encrypted, decentralised ledgers, for example, may seem like a tempting solution" (Digital Identity in the Migration and Refugee Context, 2019).

 

 

Legality under European Legislation

 

The benefits of faster processing and reduced dependency on physical documents, however, were not enough to shield the project from vast public criticism, which culminated in European Parliament member and digital rights activist Patrick Breyer filing a transparency lawsuit against the EU's Research Executive Agency over the program (Breyer, 2021).

 

First, Breyer put public pressure on iBorderCtrl by asking for all documentation relevant to the project and how it functions to be made openly available to the public. However, on December 21, 2021, the General Court "denied the existence of an overriding public interest justifying disclosure of some of the documents Mr Breyer asked to access" (Breyer, 2022). While some documentation was publicly disclosed, the vast majority was kept secret under intellectual property protection. After the decision, the human rights watchdog Article 19 wrote, "The appeal's outcome will set an important precedent about the levels of transparency that the EU must uphold when pursuing technological innovation as a means of border control, or any other controls over people in the EU" (EU: Research into Biometric Technologies Must Be Transparent, 2022).

 

After the documents were withheld, Breyer sued iBorderCtrl for violating the European Charter of Fundamental Rights (Breyer, 2021). In the lawsuit, he claimed the program “invaded people's privacy by analysing their facial gestures and ultimately letting a computer software make assumptions about their criminal potential" (Breyer, 2021).

 

He continues, "It is clear that civil liberties are not sufficiently reflected in the design of the EU research programmes. They also lack democratic accountability, as the European Research Executive Agency (REA) of the European Commission still casts a veil of silence over most of the projects and treats them as private property of the consortiums" (Breyer, 2021).

 

In addition to not complying with the European Charter of Fundamental Rights, iBorderCtrl also runs into problems with the Schengen Borders Code (SBC), the European Convention on Human Rights (ECHR), and the General Data Protection Regulation (GDPR). 

    

 Schengen Borders Code (SBC)

 

iBorderCtrl conflicts with Article 7 of the Schengen Borders Code (SBC), which protects human dignity: "Border guards shall, in the performance of their duties, fully respect human dignity, in particular in cases involving vulnerable persons" (The European Parliament of the Council of the European Union, 2016). Specifically, concerns have been raised about the risk of having a potentially fragile person interact with a machine rather than a human being. Artificial intelligence and computer software follow a set of rules dictating how to act in given situations. While such a system can adapt, it can only do so within the parameters it has been programmed to recognize. "Non-typical situations, therefore, can be seen as a particular challenge for computational intelligence-based systems, as providing rules for every possible situation appears to be impossible at this point in time" (Krigel et al., 2018). 

 

Krigel et al. detail several situations that could be problematic for preserving human dignity: What happens if the person starts crying or has a breakdown? What if the person does not appear to understand the question? What if a clear misunderstanding leads to a response that the AI mistakenly perceives as deceitful?

 

Any of these situations, and many others, could lead to the machine interacting with a traveller in a way that violates their human dignity (Krigel et al., 2018). While some of these situations might be avoided by having the program automatically shut down and refer the person to a human border agent at the first sign of trouble, it is unclear to what extent the system could even detect a potentially problematic situation.

 

Charter of Fundamental Rights of the European Union (CFREU)

 

Whether iBorderCtrl violates the Charter of Fundamental Rights of the European Union is slightly more ambiguous. Article 21 of the CFREU protects against discrimination based on "sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation" (The European Union, 2000). As these traits are "inherent properties of a person, they cannot be changed and offer a particularly high risk for discrimination" (The European Union, 2000). It is unlikely that the ADDS could produce a deception risk score without taking any of these factors into consideration, but as the technology functions as a black box, no official information is available.

 

Because the ADDS heavily relies on automatisation for both conducting interviews and assessing a traveller's risk score, "as soon as actual human beings may be affected by such decision, [it could] raise ethical questions inter alia with regard not only to human dignity, but also to the principle of non-discrimination" (Krigel et al., 2018).

 

Currently, the only thing keeping iBorderCtrl from conflicting with the CFREU is that, ultimately, the ADDS does not decide whether a traveller will be admitted to the Schengen area. While the risk score is likely to be a significant factor in the border guard's decision, it is a human who has the last word. "Thus, as there is no legal decision on entry or refusal[,] from a legal point of view it seems questionable if a respective system would be regarded as automated decision making within the ambit of Art. 11 Directive 680/2016/EU" (Krigel et al., 2018).

 

General Data Protection Regulation (GDPR)

 

Under the GDPR, for any group or organisation to collect or use personal data, the user must be informed of all of the ways in which their data will be collected and used in a "clear, intelligible and easily accessible form" (General Data Protection Regulation (GDPR) – Official Legal Text, 2019). While those participating in the study were able to provide clear consent, it could become problematic to meet the same consent requirements at a larger scale. 

 

Article 22 of the GDPR "concerns the rights of an individual when interacting with systems that may automatically make a decision or profile them in any way that they have not given consent to" (Crockett et al., 2018). "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her" (General Data Protection Regulation (GDPR) – Official Legal Text, 2019). If an automated decision is made, the organisation must provide a clear description of exactly how that conclusion was reached. Because neural networks determine the deception risk score, "individuals cannot get an explained decision on how the ADDS artificial neural network classifiers obtained this score" (Crockett et al., 2018). This suggests clear non-compliance with current regulations. 

 

Article 22 also protects subjects from any type of data collecting that could be used for profiling. Specifically, GDPR restricts the gathering or use of any information "concerning the data subject's performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements, where it produces legal effects concerning him or her or similarly significantly affects him or her" (General Data Protection Regulation (GDPR) – Official Legal Text, 2019).

 

iBorderCtrl's data collection and processing raise additional privacy concerns. "When using private data, safeguards that protect the anonymity of the individuals behind the data should be employed. There are also the risks of unintended consequences: for example, mobility patterns of certain groups could also be misused by political opponents or authoritarian regimes" (Ziebarth & Bither, 2020). However, there is no way to anonymise user data without sacrificing the effectiveness of the program. This is particularly problematic as "the majority of the 35 countries that the EU prioritises for border externalisation efforts are authoritarian, known for human rights abuses and with poor human development indicators" (Molnar, 2020). Were information to leak, it could endanger the lives of migrants fleeing persecution in their home countries. 

 

Violating the GDPR not only jeopardises the legality of iBorderCtrl; "organisations that violate the GDPR can be fined up to 4% of their annual global turnover or [€]20 million" (General Data Protection Regulation (GDPR) – Official Legal Text, 2019).

 

 

  

ROBORDER

1 May 2017 to 31 August 2021

Budget € 8,997,781.50

Coordinated by Ethniko Kentro Erevnas Kai Technologikis Anaptyxis

(ROBORDER, 2021)

 

Europe has a coastline of nearly 68,000 kilometres and a land border of more than 13,000 kilometres (European Environment Agency, n.d.). Currently, the EU border agency Frontex is responsible for monitoring and securing these borders, but this has proven an immense undertaking. Continuously and properly monitoring even a fraction of these borders would require enormous financial and human resources, and many of these areas are extremely remote, challenging to reach, and potentially dangerous for human beings to access. 

 

This is where technology can be immensely helpful. Frontex has used unmanned drones for years to watch some of Europe’s busiest land and sea passages. While an immense improvement over sending border agents, drones still require constant human assistance to pilot and monitor. Frontex currently deploys a pool of 1,500 border guards and staff to monitor border segments, with a headquarters in Warsaw, Poland, where experts from various countries monitor incoming information (Frontex, 2022). “The agency also deploys vessels, aircraft, vehicles and other technical equipment provided by Member States in its operations” (Kokstaite et al., 2018). These drones and autonomous vehicles come with hurdles of their own: their size, weight, and battery life mean they can only be deployed in certain areas for limited periods. They are also expensive to operate, so they are currently used primarily once an issue has already been detected, rather than as a detection tool in themselves. 

 

ROBORDER aimed to resolve many of the problems Frontex was having with drones and other unmanned vehicles by creating “a fully functional autonomous border surveillance system through a connected swarm of unmanned mobile robots” (Aims & Objectives – ROBORDER, 2022). ROBORDER combines aerial drones with water-surface, underwater, and ground vehicles equipped with multimodal sensors, including optical, infrared, and thermal cameras; radar; and radio-frequency sensors. The sensors are claimed to detect the presence of humans, guns, vehicles, and other objects. The program also utilizes cell phone frequencies to triangulate the precise location of anyone suspected of engaging in criminal activity (Aims & Objectives – ROBORDER, 2022). 

 

Paired with swarm robotics, this grouping of land, sea, and air vehicles creates an interoperable network in which information is continuously passed between individual nodes, enabling the system to work largely independently of human interaction (Aims & Objectives – ROBORDER, 2022). The collected data is then semantically integrated to provide moment-by-moment intelligence to the relevant border or police forces so that they can make informed decisions. The heterogeneous autonomous vehicles and detection capabilities aim to simultaneously improve all aspects of border security while reducing human and financial costs, including preventing the illegal import of goods such as drugs and weapons and the illegal entry of people, both in the context of human trafficking and of intentional illegal entry into the European Union. 
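ROBORDER's actual message formats and data-fusion pipeline are not public. As a rough sketch of what an "interoperable network" of heterogeneous nodes implies, the snippet below has each vehicle publish timestamped detections in a shared format that every peer (and ultimately a command centre) can read; all type and field names here are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the shared-detection idea behind a sensor swarm;
# ROBORDER's real protocols and fusion logic are not publicly documented.

@dataclass
class Detection:
    node_id: str        # which vehicle produced the reading
    sensor: str         # "optical", "infrared", "thermal", "radar", "rf"
    kind: str           # e.g. "person", "vehicle", "vessel"
    lat: float
    lon: float
    timestamp: float
    confidence: float

@dataclass
class SwarmNode:
    node_id: str
    shared_log: list[Detection] = field(default_factory=list)

    def report(self, det: Detection, peers: list["SwarmNode"]) -> None:
        """Record a detection locally and propagate it to every peer node."""
        self.shared_log.append(det)
        for peer in peers:
            peer.shared_log.append(det)

drone = SwarmNode("uav-1")
boat = SwarmNode("usv-1")
drone.report(
    Detection("uav-1", "thermal", "person", 35.51, 24.02, 1_650_000_000.0, 0.82),
    peers=[boat],
)
print(len(boat.shared_log))  # 1 -> the surface vehicle now shares the sighting
```

The design point is interoperability: because every node reads and writes the same detection format, a thermal sighting from a drone can immediately cue a surface vehicle without a human relaying it.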

 

Legality under European legislation

 

While ROBORDER may have hit successful benchmarks for nearly all of its stated objectives, questions remain about the legality and human rights implications of implementing such a broad surveillance program. There are two primary concerns, the first and foremost of which involves privacy and data collection. In the testing phase, all participants were able to give the level of informed consent mandated by the GDPR, the European Charter of Fundamental Rights, and the Charter of Fundamental Rights of the European Union; in real-world conditions, this would not be possible. Beyond the impossibility of obtaining consent, in most circumstances people might not even be aware that they are being monitored by drones or aquatic vehicles, which adds a further legal concern.

 

Another potential issue is the technology being exported outside Europe or used by authoritarian governments. In the grant agreement, the developers of ROBORDER told the European Commission that they did not foresee the export of this technology from the EU. However, in an interview with The Intercept, a company representative said they would be open to selling the technology beyond Europe (Campbell, 2019). As the creators and owners of ROBORDER are a private company, neither Horizon 2020 nor the European Union can prevent the company from selling the technology to any interested party.

 

 

European Convention of Human Rights (ECHR) and Charter of Fundamental Rights of the European Union (CFREU)

 

According to the International Refugee Rights Association, “AI-driven consumer products and autonomous systems are frequently equipped with sensors that generate and collect vast amounts of data without the knowledge or consent of the users or those in the proximity” (International Refugee Rights Association, 2021). However, the collection of personal data without the individual’s informed consent is illegal under numerous European laws. Where ROBORDER stands on this depends on what information is gathered or used in their detection process.

 

ROBORDER claims in its documentation that the program “does not intend in any way to collect personal data as defined in Article 2 of the Data Protection Directive 95/46/EC nor to perform identification of persons through any data collected during the project” (Oliveira et al., 2017). It further claims that the sensors will not collect any biometric data and that, while video will be collected for the detection of persons, no steps will be taken to use the technology to identify them (Oliveira et al., 2017).

 

Mass surveillance and monitoring raise ethical concerns even if the Consortium states its intention to detect and recognize but not identify. Privacy is a fundamental human right under Article 8 of the European Convention on Human Rights and Article 7 of the Charter of Fundamental Rights of the European Union. Both articles provide a right to privacy and respect for one's private and family life, home, and correspondence (Council of Europe, 2010; The European Union, 2000). Article 8 of the Convention goes on to say that while there shall be no interference by a public authority with the exercise of this right, exceptions can be made in cases of national security, public safety, or the prevention of disorder or crime (Council of Europe, 2010). It has yet to be examined, however, whether the uses of ROBORDER could be classified as a matter of safety or security.

Lastly, Article 12 of the Universal Declaration of Human Rights protects individuals from “arbitrary interference with his privacy, family, home or correspondence” and “attacks upon his honour and reputation” (The United Nations, 1948).

 

The protection of privacy is an essential element of democratic societies and a cornerstone of legislation in the European Union. As it currently stands, however, there is no evidence that the right to privacy could be protected under ROBORDER and its monitoring system.

 

 

Further Findings and Problems

 

ROBORDER is arguably the most successful of the three programs, but much of its praise stems from a secondary objective: the detection of oil spills and other pollutants in the seas and oceans surrounding Europe (Krestenitis et al., 2019). Section 1.2 (Objectives) of ROBORDER’s Public Final Activity Report makes clear that detecting goods and persons illegally entering the European Union is the program’s primary objective (ROBORDER, 2021). Yet very few reports by media outlets or journals focus on the potential for migrant detection, concentrating instead on other potential uses of the technology.

 

In ROBORDER’s final report, the consortium chronicles successes in both the hardware and software used across all vehicle types, as well as with the various detection tools and radars. The report also claims “significantly lower costs in comparison to traditional surveillance methods at several levels, including initial cost, personnel training, maintenance and additional operation costs” (ROBORDER, 2021). 

  

Under its Horizon 2020 funding stipulations, ROBORDER was required to run a simulated mock-up because the system would directly impact human beings. A caveat of the simulation was that, once again, each participant needed to give informed consent, which was possible in a testing setting. There is, however, no way to enforce this degree of broad-based consent once the technology is deployed in the real world.

 

As part of its self-conducted mid-term review and progress report, ROBORDER published its results from the Horizon 2020 Ethical Issue Checklist and claimed the project was fully within the legal scope of Horizon 2020 and European law because…

 

a) The research DOES involve human participants (but not their identification); 

b) There are NO persons unable to give informed consents; 

c) There are NO vulnerable individuals or groups; 

d) There are NO children/minors; 

e) There are NO patients; 

f) The research does NOT involve physical interventions on the study participants; 

g) The research does NOT involve invasive techniques; 

h) The research does NOT involve collection of biological samples. 

(ROBORDER, 2018)

 

While these statements are true of the trial and testing phase, items (a), (b), (c), (d), (f), and potentially (g) would all likely be false in a real-world setting. However, in the same document, ROBORDER states:

 

“ROBORDER Consortium is extremely committed to make ethics a top priority alongside with the development of technology. All members of the ROBORDER Consortium agree that technological development based on solid ethical valorisation is the key for a sustained solution for the long term” (ROBORDER, 2018). As a part of the same ethics self-evaluation, ROBORDER recognized that potential ethical risks include: 

 

Violation of privacy

Impact on physical health

Impact on mental health

Impact on societal cohesion, stability, ...

Environmental effects

Possible political conflicts (e.g. in border areas)

System failure modes

Responsibilities in case of failures, damages, risks, ...

(Oliveira et al., 2017)

 

There appears to be a significant disconnect between how the ROBORDER Consortium envisions the technology being applied in the real world and how it can logically be expected to function. In its own evaluations, the consortium recognized potential legal problems but was never required to address them in any way.

  

ITFLOWS

1 September 2020 to 31 August 2023

Budget € 4,871,832.50

Coordinated by Universidad Autonoma de Barcelona

(The European Commission, 2022)

 

The world is full of uncertainty. Academics and experts across industries aim to utilise available resources and data to predict significant world disasters such as conflicts, natural disasters, or political crises. The hope is that if we could predict a major event before it occurs, maybe we could lessen the severity of its impact. 

 

Humans have never been very good at predicting the future, though. We often miss signs or do not fully understand the implications of the various factors until it is too late. While humans might not be very good at predictions, it turns out this is an area where machines far outperform us. 

 

Forecasting is a "process of predicting or estimating future events based on past and present data" (Blasi Casagran et al., 2021b). It has been used in finance and business for decades and has recently surged in popularity thanks to big data and vastly improved processing speeds. 
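As a minimal illustration of that definition, the sketch below produces a one-step-ahead forecast from a short series using simple exponential smoothing, one of the oldest forecasting methods; the monthly application figures are invented for the example.

```python
# Simple exponential smoothing: forecast the next value as a weighted average
# that favours recent observations. The data below is invented for illustration.

def exp_smooth_forecast(series: list[float], alpha: float = 0.5) -> float:
    """One-step-ahead forecast from past and present observations."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

monthly_applications = [4200, 4500, 5100, 4800, 6900, 8400]  # invented figures
print(f"next-month forecast: {exp_smooth_forecast(monthly_applications):,.0f}")  # ≈ 7,116
```

More capable models replace the weighted average with learned dynamics, but the structure, past observations in and an estimated future out, is the same one underlying the migration prediction tools discussed below.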

 

Prediction software is not particularly new within the migration context either. Numerous other programs have tried to use prediction software to anticipate famine, flood, war, or other disasters that could lead to an unexpected surge of asylum seekers. ITFLOWS is the first, however, to combine migration predictions with sentiment analysis. Combined, these technologies hope to not only predict when or where a mass exodus of asylum seekers might occur but also "identify risks of tensions between migrants and EU citizens" (ITFLOWS, 2021). 

 

By utilizing sentiment analysis in addition to prediction software, ITFLOWS aims to provide predictions and "management solutions for reducing potential conflict/tensions between migrants and EU citizens, by taking into account a wide range of human factors and using multiple sources of information" (ITFLOWS, 2021). Not only does ITFLOWS want to predict when the next large influx of people might occur, but where these people might go, and how they might get along with the local population. This all comes with the hopes that "an in-depth analysis on drivers, patterns and choices of migration as well as public sentiment towards migration will lead to the drafting of adequate recommendations and good practices for policymakers, governments and EU institutions" (ITFLOWS, 2021). 

 

To achieve this, the system uses an information and communications technology (ICT) tool called the EUMigraTool to "provide assistance for the reception, relocation, settlement and integration of migration" (ITFLOWS, 2021). Sentiment prediction uses text analysis, natural language processing (NLP), biometrics, and computational linguistics to infer human emotions and perceptions of a topic. ITFLOWS gathers its information from "media content from TV-news (video content), web-news and social media (text content)" to make its predictions (ITFLOWS, 2022).
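The EUMigraTool's internal models are not public. In its simplest textbook form, however, text sentiment scoring can be illustrated with a lexicon-based sketch like the one below; the word lists and example sentences are purely illustrative and are not drawn from ITFLOWS.

```python
# Crude lexicon-based sketch of text sentiment scoring, the simplest form of
# the technique ITFLOWS describes; the EUMigraTool's actual NLP models are
# not public, and this word list is invented for illustration.

POSITIVE = {"welcome", "support", "help", "integrate", "opportunity"}
NEGATIVE = {"crisis", "threat", "illegal", "flood", "burden"}

def sentiment(text: str) -> float:
    """Score in [-1, 1]: fraction of positive minus negative lexicon hits."""
    words = text.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment("Volunteers welcome refugees and offer support."))   #  1.0
print(sentiment("Officials warn of a migrant crisis and a burden.")) # -1.0
```

Production systems use trained language models rather than word lists, but they inherit the same dependence on the media text they ingest, a dependence that becomes important in the discussion of bias below.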

 

The European Union has been pushing for the development of such a tool since 2015, when many countries were unprepared for the influx of migrants trying to enter their borders. "Different institutions within the European context have allocated extensive resources and funding in seeking migration forecasting or predictive tools" (Blasi Casagran et al., 2021a). While the technology could help countries prepare for an unexpected event, it is still rife with potential problems and legal hurdles. 

   

Legality under European legislation

 

The Steering Committee, the board comprising the head decision-makers for ITFLOWS, appears acutely aware of the potential legal issues surrounding the product. A report put out by ITFLOWS on the legal and ethical framework of the project states, "ITFLOWS consortium is fully aware of the risks and their potential impacts in terms of jeopardising human rights that both empirical migration research activities and technological developments foreseen in the Project may pose" (Xanthaki et al., 2021).

 

Nearly every document produced by ITFLOWS mentions apprehension about the potential risks and misuse of the tool, on both moral and legal grounds. There seems to be a conflict, though, between those making the tool and those in charge of it. One article written by ITFLOWS members, ‘The Role of Emerging Predictive IT Tools in Effective Migration Governance,’ details a dispute in which end-users wanted integrated search tools that could select for "criteria such as nationality, language spoken, ethnic group, and skills of migrants.” These criteria could not be added, however, because of data protection and ethical requirements (Blasi Casagran et al., 2021b).

 

Both the European Charter of Fundamental Rights and the Charter of Fundamental Rights of the European Union protect against discrimination, yet discrimination would be nearly impossible for such a tool to prevent. The AI extrapolates from data collected across a multitude of systems and compiles an assessment, essentially a guess, of how large groups of people might interact with each other, removing any consideration of people as individuals rather than as members of a group.

 

The processing of personal data is also regulated under Article 5(1)(c) of the GDPR, which establishes that the processing of personal data must be "adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed" (General Data Protection Regulation (GDPR) – Official Legal Text, 2019). Essentially, this regulation states that a migrant's data can only be collected and processed for specific and clear reasons. As migrants also meet the definition of "vulnerable and minority groups," any tool or technology that collects or processes their personal information must ensure that only select end users have access to that data. Any leak or breach would put ITFLOWS in violation of the EU Charter of Fundamental Rights. 

 

Further Findings and Problems

 

According to the creators of ITFLOWS, three primary elements are potentially problematic: "the uncertainty of future events, of migration data, and of different forecasting models producing different results" (Blasi Casagran et al., 2021b). Prediction software also raises particular concerns regarding bias and potential human rights violations. There is a saying in the technology industry: garbage in, garbage out. If a machine is given a "bad" dataset, it will produce bad results. Sometimes this is as simple as the algorithm producing errors; at other times it is more complicated, with the algorithm generating incorrect or biased results.  
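A toy example makes the garbage-in, garbage-out point concrete: if the training sample itself encodes a skew, even a perfectly functioning learner will reproduce that skew. Everything below (groups, labels, counts) is invented for illustration.

```python
# "Garbage in, garbage out" in miniature: a majority-vote classifier trained
# on a skewed sample reproduces the skew. All data here is invented.

from collections import Counter

# Biased training sample: group B appears mostly with the label "high risk",
# purely as an artefact of how the data was gathered.
training = [("A", "low"), ("A", "low"), ("A", "low"),
            ("B", "high"), ("B", "high"), ("B", "low")]

def predict(group: str) -> str:
    """Predict the most common training label seen for this group."""
    labels = [label for g, label in training if g == group]
    return Counter(labels).most_common(1)[0][0]

print(predict("A"), predict("B"))  # low high -> the sampling bias is now "the model"
```

Nothing in the algorithm is broken; the bias arrives entirely through the data, which is why cleaning the input is the hard part.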

 

"The common thread seems to be that if data comes from humans, it will likely have bias in it" (Shane, 2021). This can be particularly damaging if a training data set contains explicitly biased data against marginalised or other vulnerable groups. Ensuring that the feeding data is 'clean' and without bias can be a near impossible challenge. Outside of errors of human nature, training data often also contains errors or is incomplete. Finding clean, complete, and unbiased data is far from the only concern, and no one seems to be as aware of the potential problems as those who built it. Many of the primary researchers and board members of ITFLOWS have published journal articles that speak to the risks of bad tech, biases, and potential harm of the technology. 

 

In one article, 'The Role of Emerging Predictive IT Tools in Effective Migration Governance,' four of ITFLOWS's primary researchers outline various concerns with implementing the very technology they created. Potential problems include "Intensifying global or regional asymmetries and curtailing human rights, which is at odds with effective migration governance" (Blasi Casagran et al., 2021). 

 

The same group of researchers also found that "even if a particular model works well for a certain period, one sole event might change everything, and from that point on, the predictive tool might have to consider a different degree of uncertainty" (Blasi Casagran et al., 2021). Other researchers note that "using partial or different models with the same data would produce different forecasts" (Teodoro et al., 2021). In the end, there is neither certainty nor reason to trust the results of any one particular model. If there is no way to check the quality of the data used to build these algorithms, and no means other than hindsight to establish whether they were correct, there is little reason to trust them over the standard prediction and warning systems currently in place.
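The point that identical data can yield divergent forecasts is easy to demonstrate with two textbook forecasters; the series below is invented and reuses the figures from the earlier smoothing example.

```python
# Two standard simple forecasters applied to the same (invented) series,
# illustrating the quoted observation that different models yield different
# predictions from identical data.

series = [4200, 4500, 5100, 4800, 6900, 8400]  # invented monthly counts
n = len(series)

# Model A: forecast next month as the historical mean.
mean_forecast = sum(series) / n

# Model B: fit a least-squares linear trend and extrapolate one step.
xbar = (n - 1) / 2
ybar = sum(series) / n
slope = (sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
         / sum((i - xbar) ** 2 for i in range(n)))
trend_forecast = ybar + slope * (n - xbar)

print(f"historical mean: {mean_forecast:,.0f}")   # 5,650
print(f"linear trend:    {trend_forecast:,.0f}")  # ~8,440
```

Neither model is wrong a priori; the divergence itself is the point the ITFLOWS researchers concede, since a policymaker acting on one forecast would behave very differently from one acting on the other.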

 

In addition to the concerns surrounding the effectiveness of such a tool, many both inside and outside the ITFLOWS project have expressed concern about the potential moral qualms of such a system. "International migration governance is a contested field with competing interests and stakeholders, and predictive tools exercise the potential to introduce or reinforce unequal power relations," writes human rights and migration lawyer Anna Beduschi. "In utilising these tools, those states with more technological capabilities can further solidify their position in setting the international migration agenda" (Beduschi, 2020). This is problematic as tools such as ITFLOWS could be used to further promote "non‐entrée policies and human surveillance, at the expense of those rights protected by international human rights frameworks" (Broeders & Dijstelbloem, 2016). For global migration governance to be truly effective, it must benefit all stakeholders, including migrants themselves. 

 

The final concern with ITFLOWS involves data privacy and security. Alexander Kjærum, who sits on the board at ITFLOWS and is a senior analyst at the Danish Refugee Council (DRC), said, "there's a big risk that information gets into the hands of states or governments that will not use it to enhance support and protection for these vulnerable groups, but will use it to throw up more barbed wire" (Disclose, 2022). This concern is well founded: after the 2015 migration crisis, Denmark, Sweden, Norway, France, Italy, Austria, Poland, Turkey, and Hungary all implemented harsher immigration laws.

 

Unlike iBorderCtrl, which had significant external criticism, most of the criticism targeted at ITFLOWS comes from its own board and committee members. In addition to the aforementioned article, numerous other articles have been published expressing concern regarding how ITFLOWS could be misused. 

 

In a report on the regulatory model of ITFLOWS, members of the board and ethics committee warned that "Member States may use the data provided to create ghettos of migrants" and discriminate "on grounds of sexuality, race, religion, disability, age" (Teodoro et al., 2021). The report also warns of risks stemming from "reinforcing fear and arguments against migration, or the increasing hate speech in areas where the inhabitants are informed that the inflows will move..." (Teodoro et al., 2021).

 

Another internal ITFLOWS report, on the International and European Legal Framework, warns that the algorithm "may pose several risks if misused for stigmatising, discriminating, harassing, or intimidating individuals, especially those that are in vulnerable situations such as migrants, refugees and asylum seekers" (Xanthaki et al., 2021). One researcher working with ITFLOWS, Colleen Boland, even wrote an article entitled ‘European Muslim Youth and Gender (in)Equality Discourse,’ in which she outlines the various ways Muslim and black men, in particular, might face discrimination and be negatively impacted (Boland, 2021).

 

It appears there are no more robust critics of ITFLOWS than those who were supposed to oversee it. The Steering Committee and Ethics Board both made various public attempts to question the validity and potential misuse of the very technology they were responsible for. None of these warnings, however, appears to have been heeded. 

 

During an online symposium held on September 16th, 2021, a member of the ITFLOWS ethics board, Alexandra Xanthaki, said:

 

 "We spent six months working day and night to create a report about the human rights framework. "And now it seems to me that what the tech members are saying is: we're not taking it into account. So what's the point in having it in the project?" (Xanthaki, 2021).

  

 

Findings

 

While each of the three projects came into conflict with European human rights laws and conventions in some respect, at least two of the three achieved a notable level of success. iBorderCtrl and ROBORDER contain elements that could be implemented to improve the conditions and systems governing how the European Union addresses immigration and asylum applications. To be legally permissible, however, all of the projects would need significant structural changes or the removal of certain features altogether.

 

It appears all three projects oversold their capabilities and potential in order to receive funding, rather than settling on something smaller that could be properly and safely implemented. As part of the evaluation process used to determine which proposals would receive funding, one of the three award criteria is excellence, defined by Horizon 2020 as:

 

“…Extent that the proposed work is beyond the state of the art, and demonstrates innovation potential (e.g. ground-breaking objectives, novel concepts and approaches, new products, services or business and organizational models)” (Horizon 2020, 2017).

 

To be viable in a competitive process, proposals needed to demonstrate that a project would contribute toward massive change. With the bar set that high, it is no surprise that applicants felt the need to stretch claims about the capabilities and legality of these tools past what they could reasonably deliver.

 

Even though these projects received millions of euros in funding from public institutions, they are still controlled by private companies and institutions. Because of this, a significant amount of internal information and evaluation is protected under intellectual property laws. Even the internal documents I obtained had large sections redacted. This has necessarily left the analysis incomplete, and it underscores why it is so important for more information to be made available to the public.

 

Of the three projects, the most successful is ROBORDER. While privacy concerns remain about what kind of information is collected and how it is used, these concerns can, for the most part, be mitigated. Furthermore, ROBORDER has shown immense potential and capability in non-migration arenas, such as identifying oil spills and other illegal pollution, or conducting search and rescue missions in terrain that would otherwise be inaccessible.

 

While many unresolved questions surround the legality of a surveillance program of this scope, ROBORDER still has redeeming value and potential as a security device. “Probably the greatest advantage of technology is that it enables border patrol agencies to concentrate their resources: by deploying drone technology, these agencies could benefit from a flexible tool that can adapt to changing circumstances and emerging threats” (Kokstaite et al., 2018).

 

Whether used for border security and surveillance or in another field, the technology developed by ROBORDER has far-reaching use cases across industries. A study of user perception found that “accurate real-time data collection capabilities provide better situational awareness, facilitating decision making, and increases overall efficiency of border surveillance” (ROBORDER, 2021). However, many of those surveyed were more impressed by “ROBORDER’s capabilities to enrol in environmental missions as added value. Unmanned vehicles and mobile cameras could gather information to assist in detecting and minimizing damages caused by natural disasters, pollution levels, fires, and floods” (ROBORDER, 2021).

 

If ROBORDER were to be used for its original purpose as a border security device, more attention would need to be paid to security and data management. While the project has worked with ethics and data security experts from the beginning, questions and conflicts remain, primarily in the realm of privacy, surveillance, and data. And although the program was able to mitigate these concerns during the testing and development phase of its work with Horizon 2020, there is no feasible way to ensure the same level of security once the product is used in the field.

 

iBorderCtrl was also successful in some regards but became problematic where it pushed for entirely new technology and capabilities. The only aspect of iBorderCtrl that is problematic from a legal and human rights standpoint is its attempt to create an AI that can detect dishonesty. There are too many conflicts with articles of the GDPR, CFREU, ECFR, and ECHR for this type of technology to be implemented at this time.

The segments of iBorderCtrl that focus on the automated collection of personal information, such as name and date of birth, and the detection system for smuggled goods and persons were both successful in reaching their functionality goals and do not conflict with any European laws. As long as the algorithms are not involved in any aspect of decision-making, and all participants give informed consent, the basics of iBorderCtrl could be an immensely useful tool for easing stress and congestion at airports worldwide.
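
To make this concrete, the following is a minimal Python sketch of what a consent-gated, human-in-the-loop pre-registration step could look like. Every name, field, and flag here (including `REFER_TO_HUMAN_OFFICER`) is a hypothetical illustration of the principle just described, not iBorderCtrl's actual, non-public implementation.

```python
# Hypothetical sketch: collect basic identity data only with explicit,
# informed consent, and never let the system decide admissibility itself.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Traveller:
    name: str
    date_of_birth: str
    consented: bool  # explicit, informed consent recorded in advance

def pre_register(traveller: Traveller) -> Optional[dict]:
    """Pre-fill an identity record, leaving all decisions to a human."""
    if not traveller.consented:
        return None  # no consent: fall back entirely to the manual process
    return {
        "name": traveller.name,
        "date_of_birth": traveller.date_of_birth,
        # The system may pre-fill and flag records, but admission always
        # remains the human border officer's decision.
        "decision": "REFER_TO_HUMAN_OFFICER",
    }

print(pre_register(Traveller("Jane Doe", "1990-01-01", consented=True)))
```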

 

While iBorderCtrl documentation does show concern for the privacy of its users, when it comes to the most vulnerable members of the population, iBorderCtrl still comes into conflict with data, privacy, and human rights legislation in the European Union.

 

This is not to say that iBorderCtrl has no value or place in the immigration system. There may be use cases where this technology can help speed up processes at airports, but further regulations need to be put in place to determine the boundaries and limits of its use. As an identity confirmation tool, its legality depends entirely on how the program collects and stores data. While no information is publicly available on this point due to intellectual property rules, if all users gave informed consent and had a clear avenue to contest results, the system could have inherent value.

 

All in all, it is the ADDS component of iBorderCtrl that poses the gravest legal, ethical, and social challenges, specifically with regard to the protection of human dignity, the principle of non-discrimination, data protection, and privacy. Data collection, profiling, and automated decision-making are crucial aspects of computational intelligence-based systems, and all are subject to strict regulation. To comply with European legislation, therefore, a variety of safeguards would need to be put in place. In summary, computational intelligence-based systems such as iBorderCtrl challenge the legal and ethical framework put forth by the European Union in a variety of ways. Because any technological involvement in an already troubled sociotechnical system can exacerbate existing biases, the use of these future technologies should be closely monitored.

 

The project least likely to meet European legal standards is ITFLOWS. It is a perfect example of how eager some can be to implement technological solutions even when they are not needed. As the development and testing phase of ITFLOWS has not yet come to a close, there are no final answers about how the technology might be adapted or used in the future. As it currently stands, however, it seems unlikely that the project will reach full compliance with European legislation before the end of the funding period.

 

While the idea of a migration crystal ball might be appealing to the European Border and Coast Guard community, in practice neither the technology nor our understanding of how to use it safely has yet reached a viable point.

 

In one of their articles, the ITFLOWS committee wrote,

 

"The ultimate goal of predicting migration flows for governance should be to enable policymakers and appropriate stakeholders to make prudent and robust decisions, by illustrating a clear causal relationship between migrant arrivals and necessary policies for managing future migration" (Blasi Casagran et al., 2021b).

 

As it currently stands, that goal cannot be met within the current framework of ITFLOWS. Unlike the other programs, ITFLOWS is still in progress, and its final findings are therefore not yet available. Regardless of this particular project's success, many futurists foresee big data and predictive analytics becoming the most common technologies used in this way. According to some of the project's most vocal whistle-blowers, even if ITFLOWS never makes it to market, numerous similar programs are already trying to utilise similar types of data sets (Blasi Casagran et al., 2021).
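
For readers unfamiliar with what "prediction" means at its simplest, the Python sketch below forecasts next month's arrivals as a moving average of recent months. The figures are invented for illustration, and systems like ITFLOWS ingest far richer data; the point is that even this basic approach lags sudden shocks of the kind seen in 2015, which is precisely where a warning would matter most.

```python
# Minimal sketch of a naive arrivals forecast (all numbers invented).
def moving_average_forecast(arrivals, window=3):
    """Predict next month's arrivals as the mean of the last `window` months."""
    recent = arrivals[-window:]
    return sum(recent) / len(recent)

monthly_arrivals = [1200, 1350, 1100, 4800, 5200]   # sudden spike in months 4-5
print(moving_average_forecast(monthly_arrivals))    # 3700.0 -- lags the shock
```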

 

While it is possible that a warning before the 2015 refugee crisis might have helped EU countries better prepare for the millions of people trying to cross their borders, there is no way to know for sure. Some good might have come of it, but given the volume of warnings and uncertainty voiced by those closest to the project, the creation and implementation of these tools would likely do more harm than good.

 

Technology offers innumerable benefits to individuals and society, but there is always a trade-off. Many of us choose to give up certain levels of privacy in order to use these tools, but consent and trust are vital aspects of this arrangement. The European Union has already taken steps to ensure its citizens are more cognizant of what kind of data companies collect from them and how it is used. If these projects gave users and the public more information about how these black-box technologies were built and how they reach specific conclusions, they would be significantly less problematic.

 

There is currently insufficient oversight to ensure compliance, and as these tools are developed by private companies for the public sector, oversight is vital. According to the International Refugee Rights Association, “In many cases, the private company supplies, builds, operates and maintains the AI system they deployed, with public authorities not having sufficient knowledge or effective oversight” (International Refugee Rights Association, 2021). Problems are then often compounded, as there is currently a lack of legal frameworks and limited enforcement safeguards in place. Where oversight and regulation are already limited, it is easy to fall into “responsibility laundering between the private and public sectors, creating accountability gaps without clear legal responsibility” (Ziebarth & Bither, 2020).

 

Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, said:

 

“On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake” (European Commission, 2021).

 

If private companies are going to lead the way in technical innovation that impacts the lives of society's most vulnerable groups, better accountability measures need to be in place. “In migration and refugee claims, automated systems can cause life-and-death implications for normal people, who are fleeing for their lives” (Dialani, 2021). Ethical evaluations therefore need to be stringent and constant.

 

Even the continued ethical assessments and guidance mandated by Horizon 2020 did not stop each of these projects from building tools in ways that were not compliant with the law. In the future, more robust and stringent assessments must be conducted “prior to the design, during the development, the testing, the deployment and regularly thereafter in order to identify the emerging human rights risks” (International Refugee Rights Association, 2021).

 

Border management and immigration are immensely complex and challenging issues filled with nuance. Borders must be secured in some manner, and technology might be the most effective way to do so. If, however, we choose to use technologies we do not yet fully understand, we must be vigilant in continuously assessing and evaluating them. The only way to ensure that these tools function as they should and are fair to all members of society is to recognize that they are not perfect and to approach each idea with a high level of scrutiny.

 

Data collection, algorithms, and analytics can provide unprecedented insights. Correctly utilized, these technologies can deliver vital benefits to organizations. However, they are also rife with potential harms stemming from missing, misused, incorrect, or non-representative data, biased models, and faulty analysis.
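
A toy example can make the non-representative-data harm concrete. In the hypothetical Python sketch below, an under-sampled group whose readings run systematically higher (say, due to a measurement artifact) is falsely flagged at a vastly disproportionate rate by a threshold learned from pooled training data. Every number and name is invented for illustration; real systems fail in subtler but analogous ways.

```python
# Toy illustration: a threshold learned from skewed training data
# penalizes the under-represented group. All values are synthetic.
import random

random.seed(0)
scores_a = [random.gauss(40, 8) for _ in range(1000)]  # 95% of training data
scores_b = [random.gauss(60, 8) for _ in range(50)]    # 5% of training data

# "Model": flag anyone above the 95th percentile of the pooled training set.
training = sorted(scores_a + scores_b)
threshold = training[int(0.95 * len(training))]

def flag_rate(group):
    """Share of (honest) travellers in the group who get flagged."""
    return sum(score > threshold for score in group) / len(group)

print(f"threshold: {threshold:.1f}")
print(f"group A falsely flagged: {flag_rate(scores_a):.0%}")  # a few percent
print(f"group B falsely flagged: {flag_rate(scores_b):.0%}")  # the majority
```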

 

Funded through Horizon 2020's nearly 80 billion euros of European money, these programs are responsible for upholding European ideals. The European Union was founded on six values: human dignity, freedom, democracy, equality, rule of law, and human rights (The European Union, n.d.). Any initiative so closely entwined with the European Union needs to adhere strictly to these principles.

 

These technologies are in their infancy, and the decisions we make now regarding how they will be developed and implemented will have far-reaching implications. We are only at the beginning of our exploration of artificial intelligence, machine learning, and neural networks. If governments and institutions such as the European Union do not set and enforce standards early on regarding how we use these technologies, it will only get more difficult later on. Human rights must be a central focus of all innovation.

 

The intention of Horizon 2020 was to encourage novel ideas. None of the projects was expected to be perfect, and there is no guarantee that any will be implemented in the future. By conducting thorough assessments of these projects, however, we can shape how subsequent innovation research is conducted and align it more closely with what we as a society want it to be.

 

 

Conclusion

 

This research examined how the technology used in iBorderCtrl, ROBORDER, and ITFLOWS works and how these tools align, or fail to align, with human rights legislation in Europe. I found that while the majority of aspects within each project were in line with laws and legislation, each project had areas that were problematic.

 

This could be due in part to the competitiveness of Horizon 2020 funding and the push to deliver totally novel technologies and use cases. We already use automation at airports to check some IDs, drones to monitor sections of the border, and prediction software to anticipate conflict. Each of these three programs took something that already existed and shot for the moon, potentially overpromising the capabilities of these still rather new and untested technologies.

 

In addition to the issue of overpromising, another large problem for Horizon 2020 and its individual projects is the lack of transparency. The Horizon 2020 website states in its opening paragraphs that one of the program's primary goals is to “remove barriers to innovation and make it easier for the public and private sectors to work together in delivering innovation” (European Commission, 2017). This mix of public and private, however, is ultimately one of the program's most significant problems.

 

The private companies and organizations that lead these projects are largely able to hide behind intellectual property laws. The independent legal and ethics boards that were part of each program should have caught these issues before or during the years-long development of the projects. They did not, however, and each project was able to present its final findings with these grave errors intact.

 

If the people responsible for ensuring legality and the protection of human beings within these projects were unable to prevent these violations, more information needs to be made available to the public. As we still do not fully understand how many of these technologies function or reach decisions, it is vital that the limited information we do have be open and accessible.

 

Mustafa Suleyman, co-founder of DeepMind, said, "A tech company that applies its technology without due consideration for ethical and social implications is destined to be a bad tech company" (Crockett et al., 2018). If the European Union wants to promote technology that fits within its principles of equality and human rights, these considerations need to be a primary focus from the beginning.

 


Bibliography 

 

Ahmad, N. (2020). Refugees and Algorithmic Humanitarianism: Applying Artificial Intelligence to RSD Procedures and Immigration Decisions and Making Global Human Rights Obligations Relevant to AI Governance. International Journal on Minority and Group Rights, 1–69. https://doi.org/10.1163/15718115-bja10007

Aims & Objectives – Roborder. (2022). Roborder. https://roborder.eu/the-project/aims-objectives/

Beduschi, A. (2020). International migration management in the age of artificial intelligence. Migration Studies, 9(3), 576–596. https://doi.org/10.1093/migration/mnaa003

Bircan, T., & Korkmaz, E. E. (2021). Big data for whose sake? Governing migration through artificial intelligence. Humanities and Social Sciences Communications, 8(1). https://doi.org/10.1057/s41599-021-00910-x

Blasi Casagran, C., Boland, C., Sánchez-Montijano, E., & Vilà Sanchez, E. (2021). The Role of Emerging Predictive IT Tools in Effective Migration Governance. Politics and Governance, 9(4), 133–145. https://doi.org/10.17645/pag.v9i4.4436

Boland, C. (2021). European Muslim Youth and Gender (in)Equality Discourse: Towards a More Critical Academic Inquiry. Social Sciences, 10(4), 133. https://doi.org/10.3390/socsci10040133

Breyer, P. (2021, April 22). EU-funded technology violates fundamental rights. About:Intel. https://aboutintel.eu/transparency-lawsuit-iborderctrl/

Breyer, P. (2022, March 2). Breyer appeals court ruling on secretive EU AI "video lie detector" research. Patrick Breyer. Retrieved August 11, 2022, from https://www.patrick-breyer.de/en/breyer-appeals-court-ruling-on-secretive-eu-ai-video-lie-detector-research/

Campbell, Z. (2019, May 11). Swarms of Drones, Piloted by Artificial Intelligence, May Soon Patrol Europe’s Borders. The Intercept. https://theintercept.com/2019/05/11/drones-artificial-intelligence-europe-roborder/

Campbell, Z. (2020, December 11). Sci-fi surveillance: Europe’s secretive push into biometric technology. The Guardian. https://www.theguardian.com/world/2020/dec/10/sci-fi-surveillance-europes-secretive-push-into-biometric-technology

Canada’s Adoption of AI in Immigration Raises Serious Rights Implications | International Human Rights Program. (2018, September 6). University of Toronto Faculty of Law. Retrieved April 5, 2022, from https://ihrp.law.utoronto.ca/news/canadas-adoption-ai-immigration-raises-serious-rights-implications

Centre for Global Constitutionalism. (2020, June). New Technologies and Global Governance: Challenge or Opportunity (No. 1). University of St Andrews

Centre for International Governance Innovation. (2019, October). The Role of Technology in Addressing the Global Migration Crisis. https://reliefweb.int/report/world/role-technology-addressing-global-migration-crisis

Council approves conclusions on the EU Action Plan on Human Rights and Democracy 2020–2024. (2020, November 19). [Press release]. https://www.consilium.europa.eu/en/press/press-releases/2020/11/19/council-approves-conclusions-on-the-eu-action-plan-on-human-rights-and-democracy-2020-2024/

Council of Europe. (2010, June). European Convention on Human Rights. https://www.echr.coe.int/documents/convention_eng.pdf

Crockett, K., Goltz, S., & Garratt, M. (2018, July). GDPR Impact on Computational Intelligence Research. 2018 International Joint Conference on Neural Networks (IJCNN). International Joint Conference on Neural Networks. https://doi.org/10.1109/ijcnn.2018.848961

Dialani, P. (2021, April 26). Artificial Intelligence in Migration: Its Positive and Negative Implications. Analytics Insight. https://www.analyticsinsight.net/artificial-intelligence-in-migration-its-positive-and-negative-implications

Digital Identity in the Migration and Refugee Context. (2019, April). Data & Society. https://datasociety.net/library/digital-identity-in-the-migration-refugee-context

Disclose. (2022, June 26). Predicting migration flows with artificial intelligence – the European Union’s risky gamble. Retrieved June 26, 2022, from https://disclose.ngo/en/article/predicting-migration-flows-with-artificial-intelligence-the-european-unions-risky-gamble

Ecorys. (2020, October). Feasibility study on a forecasting and early warning tool for migration based on Artificial Intelligence technology: Executive Summary. The European Commission. https://doi.org/10.2837/22266

ETIAS. (2022, August 10). FADO: Preventing Document and Identity Fraud in the EU. https://www.etiasvisa.com/etias-news/fado-what-is#:%7E:text=Total%20number%20of%20refused%20entries,2010%20to%207%2C545%20in%202019.

EU: Research into biometric technologies must be transparent. (2022, February 24). ARTICLE 19. Retrieved August 11, 2022, from https://www.article19.org/resources/eu-research-into-biometric-technologies-must-be-transparent

European Commission. (n.d.). Ethics - H2020 Online Manual. Retrieved July 27, 2022, from https://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/ethics_en.htm

European Commission. (2017, March 15). What is Horizon 2020? Horizon 2020 - European Commission. Retrieved May 2, 2022, from https://wayback.archive-it.org/12090/20220124080448/https://ec.europa.eu/programmes/horizon2020/en/what-horizon-202

European Commission. (2021, April 21). Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence [Press release]. https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682

European Environment Agency. (n.d.). Europe’s seas and coasts. Retrieved August 22, 2022, from https://www.eea.europa.eu/themes/water/europes-seas-and-coasts

European Parliament and the Council of the European Union. (2013, December). REGULATION (EU) No 1290/2013 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 11 December 2013 laying down the rules for participation and dissemination in “Horizon 2020 - the Framework Programme for Research and Innovation (2014–2020)” and repealing Regulation (EC) No 1906/2006 (No. 347/81). Official Journal of the European Union. https://ec.europa.eu/research/participants/data/ref/h2020/legal_basis/rules_participation/h2020-rules-participation_en.pdf

The European Union. (n.d.). Aims and values. European Union. Retrieved July 8, 2022, from https://european-union.europa.eu/principles-countries-history/principles-and-values/aims-and-values_en

European Union. (2013). REGULATION (EU) No 1291/2013 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 11 December 2013 establishing Horizon 2020 - the Framework Programme for Research and Innovation (2014–2020) and repealing Decision No 1982/2006/EC. Official Journal of the European Union, 347(104), 11. https://ec.europa.eu/research/participants/data/ref/h2020/legal_basis/fp/h2020-eu-establact_en.pdf#page=11

Frontex. (2021, January). Assessment of Research Projects (Kick-off 2020) - Horizon 2020 (638/18.01/2021). Frontex- European Border and Coast Guard Agency.

Frontex. (2022). EU-wide picture. Retrieved August 22, 2022, from https://frontex.europa.eu/we-know/eu-wide-picture/

Funding & tenders. (n.d.). European Commission. Retrieved July 26, 2022, from https://ec.europa.eu/info/funding-tenders/opportunities/portal/screen/opportunities/topic-search;callCode=null;freeTextSearchKeyword=;matchWholeText=true;typeCodes=1,0;statusCodes=31094501,31094502,31094503;programmePeriod=2014%20-%202020;programCcm2Id=31045243;programDivisionCode=null;focusAreaCode=31087051;destination=null;mission=null;geographicalZonesCode=null;programmeDivisionProspect=null;startDateLte=null;startDateGte=null;crossCuttingPriorityCode=null;cpvCode=null;performanceOfDelivery=null;sortQuery=sortStatus;orderBy=asc;onlyTenders=false;topicListKey=topicSearchTablePageState

Gallagher, R., & Jona, L. (2019, July 26). We Tested Europe’s New Lie Detector for Travelers — and Immediately Triggered a False Positive. The Intercept. https://theintercept.com/2019/07/26/europe-border-control-ai-lie-detector/

General Data Protection Regulation (GDPR) – Official Legal Text. (2019, September 2). General Data Protection Regulation (GDPR). https://gdpr-info.eu/

Heikkilä, M. (2021, May 27). The rise of AI surveillance. POLITICO. https://www.politico.eu/article/the-rise-of-ai-surveillance-coronavirus-data-collection-tracking-facial-recognition-monitoring/

Horizon 2020. (2017). Horizon 2020- Work Programme 2018–2020 Evaluation Rules. The European Commission. Retrieved September 2, 2022, from http://wayback.archive-it.org/12090/20181222155953/http://ec.europa.eu/research/participants/data/ref/h2020/other/wp/2018-2020/annexes/h2020-wp1820-annex-h-esacrit_en.pdf

Horizon 2020. (2021). European Commission - European Commission. https://ec.europa.eu/info/research-and-innovation/funding/funding-opportunities/funding-programmes-and-open-calls/horizon-2020_en

How is Face Recognition Surveillance Technology Racist? | News & Commentary. (2020, June 16). American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/how-is-face-recognition-surveillance-technology-racist

iBorderCtrl. (2018, February). Research Innovation Action Intelligent Portable Control System (No. 6533962). European Commission.

iBorderCtrl? No! | iBorderCtrl.no. (2019). iBorderCtrl No. https://iborderctrl.no/

International Refugee Rights Association. (2021, May). Refugee Rights at International Borders (No. 1–6). United Nations. https://umhd.org.tr/en//page/refugee-rights-at-international-borders/453

ITFLOWS. (2021, April 16). Project. https://www.itflows.eu/about/project/

ITFLOWS. (2022, February 2). EUMigraTool. https://www.itflows.eu/eumigratool/

Kamarinou, D., Millard, C., & Singh, J. (2017). Machine Learning with Personal Data: Profiling, Decisions and the EU General Data Protection Regulation. Journal of Machine Learning Research. http://www.mlandthelaw.org/papers/kamarinou.pdf

Keung, N. (2020, November 9). How artificial intelligence is changing asylum seekers’ lives for the worse. Thestar.Com. https://www.thestar.com/news/canada/2020/11/08/how-artificial-intelligence-is-changing-asylum-seekers-lives-for-the-worse.html

Kokstaite, M., Angel Gomez Zotano, M., Rodrigues, F., Papataxiarhis, V., Magliore, G., Lebre, L., Matos, N., Aleandridou, K., & Zoltan, S. (2018, October). Market Analysis Roborder (No. 740593). European Commission.

Krestenitis, M., Orfanidis, G., Ioannidis, K., Avgerinakis, K., Vrochidis, S., & Kompatsiaris, I. (2019). Oil Spill Identification from Satellite Images Using Deep Neural Networks. Remote Sensing, 11(15), 1762. https://doi.org/10.3390/rs11151762

Krigel, T., Schitze, R. B., & Stoklas, J. (2018, July). Legal, ethical and social impact on the use of computational intelligence based systems for land border crossings. 2018 International Joint Conference on Neural Networks (IJCNN). 2018 International Joint Conference on Neural Networks. https://doi.org/10.1109/ijcnn.2018.8489349

Kuner, C., Svantesson, D. J. B., Cate, F. H., Lynskey, O., & Millard, C. (2017). Data protection and humanitarian emergencies. International Data Privacy Law, 7(3), 147–148. https://doi.org/10.1093/idpl/ipx012

Molnar, P. (2020, November). Technological Testing Grounds: Migration Management Experiments and Reflections from the Ground Up. Refugee Law Lab. https://edri.org/wp-content/uploads/2020/11/Technological-Testing-Grounds.pdf

Nalbandian, L. (2021, April 28). Canada should be transparent in how it uses AI to screen immigrants. The Conversation. Retrieved April 5, 2022, from https://theconversation.com/canada-should-be-transparent-in-how-it-uses-ai-to-screen-immigrants-157841

Nellis, A. (2021, October 13). The Color of Justice: Racial and Ethnic Disparity in State Prisons. The Sentencing Project. Retrieved August 10, 2022, from https://www.sentencingproject.org/publications/color-of-justice-racial-and-ethnic-disparity-in-state-prisons/

Niczem, A. (2019, December 28). No roborders, no nation, or: smile for a European surveillance propagation [Conference presentation]. Chaos Communication Congress, Leipzig, Germany. https://www.youtube.com/watch?v=kHSffVpLyxA

Number of Refugees to Europe Surges to Record 1.3 Million in 2015. (2020, August 20). Pew Research Center’s Global Attitudes Project. https://www.pewresearch.org/global/2016/08/02/number-of-refugees-to-europe-surges-to-record-1-3-million-in-2015/

Oliveira, A., Rodrigues, F., & Carvalho, J. (2017, July). Project Management and Quality Assurance Plan (No. 740593). European Commission. https://roborder.eu/wp-content/uploads/2019/01/D8.1_740593_Project-management-and-quality-assurance-plan.pdf

O’Shea, J., Crockett, K., Khan, W., Kindynis, P., Antoniades, A., & Boultadakis, G. (2018, July). Intelligent Deception Detection through Machine Based Interviewing. 2018 International Joint Conference on Neural Networks (IJCNN). International Joint Conference on Neural Networks. https://doi.org/10.1109/ijcnn.2018.8489392

ProPublica. (2020, February 29). How We Analyzed the COMPAS Recidivism Algorithm. Retrieved July 27, 2022, from https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

Public Opinion Toward Immigration, Refugees, and Identity in Europe: A Closer Look at What Europeans Think and How Immigration Debates Have Become So Relevant. (2019). European Institute of the Mediterranean. Retrieved July 26, 2022, from https://www.iemed.org/publication/public-opinion-toward-immigration-refugees-and-identity-in-europe-a-closer-look-at-what-europeans-think-and-how-immigration-debates-have-become-so-relevant/

Roborder. (2018, May). H- Requirement No. 5 Roborder (No. 740593).

Roborder. (2021, August). Public Final Activity Report (No. 740593-ROBORDER-D8.5). The European Union. https://cordis.europa.eu/project/id/740593/result

Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic. (2020, December). Facial Recognition at a Crossroads: Transformation at our Borders & Beyond. University of Ottawa. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=371429

Shane, J. (2021). You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place (Reprint ed.). Voracious.

Silent Talker. (n.d.). Silent Talker. Silent Talker LTD. Retrieved August 15, 2022, from https://www.silent-talker.com

Teodoro, E., Guillén, A., & Casanovas, P. (2021, June). Report on the ITFLOWS Regulatory Model. ITFLOWS. https://ddd.uab.cat/pub/infpro/2021/263151/14._D2.4_ITFLOWS_R_.pdf

The EU Framework Programme for Research and Innovation Horizon 2020. (2019, April). Horizon 2020 Programme Guidance How to complete your ethics self-assessment (6.1). The European Commission. https://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/ethics/h2020_hi_ethics-self-assess_en.pdf

The European Commission. (2020, October 22). CORDIS | European Commission. Retrieved August 8, 2022, from https://cordis.europa.eu/project/id/700626

The European Commission. (2022, February 20). IT tools and methods for managing migration FLOWS. Cordis EU Research Results. Retrieved March 31, 2022, from https://cordis.europa.eu/project/id/882986

The European Council. (2022, August 2). Protection and promotion of human rights. European Council. Retrieved August 12, 2022, from https://www.consilium.europa.eu/en/policies/human-rights/

The European Parliament and of the Council of the European Union. (2016, March 9). Regulation (EU) 2016/399 of the European Parliament and of the Council of 9 March 2016 on a Union Code on the Rules Governing the Movement of Persons Across Borders (Schengen Borders Code). EUR-Lex. Retrieved August 16, 2022, from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32016R0399

The European Parliament and the Council of the European Union. (2013, December). REGULATION (EU) No 1291/2013 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 11 December 2013 establishing Horizon 2020 - the Framework Programme for Research and Innovation (2014–2020) (No. 347/104). Official Journal of the European Union. https://ec.europa.eu/research/participants/data/ref/h2020/legal_basis/fp/h2020-eu-establact_en.pdf#page=11

The European Union. (2000). Charter of Fundamental Rights of the European Union. Eur-Lex. Retrieved August 12, 2022, from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A12016P%2FTXT

The United Nations. (1948, December 10). Universal Declaration of Human Rights. United Nations. Retrieved August 23, 2022, from https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf

UNHCR. (2021, June). Mid-Year Trends Report. The UN Refugee Agency. https://www.unhcr.org/mid-year-trends.html

Xanthaki, A. (2021, November 16). EUMigraTool Symposium [Online symposium]. https://www.documentcloud.org/documents/22120596-emt-symposium-agenda-16-sep-2021?responsive=1&title=1

Xanthaki, A., Brant Hansen, K., & Moraru, M. (2021, January). Report on the ITFLOWS International and European Legal Frameworks on Migrants and Refugees and ITFLOWS Ethical Framework. https://www.documentcloud.org/documents/22120594-report-on-itflows-legal-and-ethical-framework-1

Ziebarth, A., & Bither, J. (2020, June). AI, Digital Identities, Biometrics, Blockchain: A Primer on the Use of Technology in Migration Management. Migration strategy group on international cooperation and development. https://www.gmfus.org/sites/default/files/Bither%2520%2520Ziebarth%2520%25202020%2520-%2520technology%2520in%2520migration%2520management%2520primer%25202.pdf
