Written by Costica Dumbrava.
The EU and its Member States are increasingly turning to artificial intelligence (AI) technologies in their efforts to strengthen border control and mitigate security risks related to cross-border terrorism and serious crime. This is a recent manifestation of a broader trend towards a ‘smartening’ of EU borders, a trend that also includes the development and interlinking of large-scale centralised information systems and the deployment of a decentralised information exchange mechanism for borders and security. These systems have gradually been expanded and upgraded to cover ever more categories of persons (that is, to close ‘information gaps’) and to process increasingly varied types of data, including growing volumes of biometric data.
Throughout history, states have been quick to co-opt ‘new’ technologies to solve the typically modern problem of accurately identifying individuals for the purpose of controlling mobility and tackling crime. Regardless of the sophistication and effectiveness of the various identification technologies and tools (passports, body measurements, fingerprinting, photography, lie detectors or face recognition systems), their adoption has always reflected the scientific, social and political views and concerns that dominated at the time and in the place of their use.
This paper identifies and discusses four major types of AI applications that the EU is using or considering using in the context of border control and border security: 1) biometric identification (automated fingerprint and face recognition); 2) emotion detection; 3) algorithmic risk assessment; and 4) AI tools for migration monitoring, analysis and forecasting.
The EU’s centralised information systems for borders and security are increasingly incorporating biometric technologies for the purpose of identity verification or identification. Automated fingerprint identification technology is currently used in three information systems (the Schengen Information System, the European dactyloscopy database (Eurodac) and the Visa Information System) and will also be used in another two (the Entry/Exit System and the European Criminal Records Information System for third-country nationals). Automated face recognition technology (FRT) is not yet used in any EU information system, but all systems except one (the European Travel Information and Authorisation System) are expected to process facial images in the near future for the purpose of verification and/or identification.
Emotion detection technologies constitute one of the most controversial applications of AI at borders and elsewhere. Whereas there are currently no emotion-detection systems deployed at EU borders, a number of EU-funded projects and initiatives have explored and piloted such technologies for the purpose of enhancing border control.
Apart from verifying and identifying known persons, AI algorithms are also used to identify unknown persons of interest on the basis of specific data-based risk profiles. Algorithmic profiling for assessing individual risks of security and irregular migration is currently being developed in the context of the Visa Information System and the European Travel Information and Authorisation System. Automated, intelligence-driven risk assessment is carried out by Member States within the framework for exchanging passenger data among them.
The EU is also investing in AI-based tools for monitoring, analysing and forecasting migration trends and security threats. The European Asylum Support Office is currently using an early warning and forecasting system to predict the number of asylum applications. The European Commission and the EU agencies in the area of freedom, security and justice are exploring other applications in this field, including in the context of the development of the Frontex EUROSUR system and the Europol innovation hub.
There are clear benefits to be reaped from a careful adoption of AI technologies in the context of border control, such as an increased capacity to detect fraud and abuse, better and timelier access to relevant information for decision-making, and enhanced protection of vulnerable people. However, these benefits need to be balanced against the significant risks that these technologies pose to fundamental rights.
Despite progress regarding biometric identification technologies, the accuracy of the results still varies across technologies and depends on contextual factors. Even the relatively well-established fingerprint identification applications face challenges, in particular at the stage of the collection of biometric data (related to, for example, subjects’ age and environmental conditions). The reliability of face recognition technologies in ‘real world’ settings is highly dependent on the quality of the images captured and on the quality of the algorithms used for biometric matching. The quality of the algorithms depends, in turn, on the quality of the training datasets (including the quality, completeness and relevance of training images) and the various optimisation techniques. Serious doubts exist about the scientific basis and reliability of emotion-detection algorithms. Concerns about data accuracy have been raised with regard to many EU information systems and information exchange frameworks for borders and security.
Face recognition technologies have come under increased scrutiny due to concerns about fundamental rights, in particular risks related to bias and discrimination, data protection and mass surveillance. Whereas great attention has been paid to the issue of bias and discrimination, it must be noted that even accurate and unbiased AI systems may pose other significant risks, including to data protection and privacy. The increased use of biometric data in EU information systems amplifies the risk of unlawful profiling (for example, facial images may reveal ethnic origin). Even when profiling is not based on biometric or other sensitive personal data, other types of data, or combinations thereof, used for algorithmic profiling may lead to discrimination on prohibited grounds. Existing safeguards, such as the human-in-the-loop safeguard (requiring human involvement in automated decisions) and the right to explanation, may not be sufficient to address these risks. As illustrated by the case of an EU-funded research project focused on developing emotion-detection technologies, there is a need to enhance the transparency and oversight of EU funding for AI research, in particular in highly consequential areas such as borders and security.
Finally, the development and adoption of powerful AI technologies would benefit from a full understanding of and reflection on broader aspects, including the historical roots of technologies and the prevailing social and political views and expectations. Adopting technologies without confronting pitfalls such as technological determinism and the myth of technological neutrality would further weaken fundamental rights, transparency and accountability.
Read the complete in-depth analysis on ‘Artificial intelligence at EU borders: Overview of applications and key issues’ in the Think Tank pages of the European Parliament.