Image: A woman using facial recognition technology on a mobile phone, illustrating the technology's use in EU border control.
Bordering on Artificial

By Jennifer Baker, EU Policy Correspondent

An EU project is trialling AI lie detectors for border control, and one MEP is challenging it in court

CPO Magazine reported last month that Amazon’s facial recognition technology, Rekognition, launched in 2016, can now recognise several emotions, including fear.
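For a sense of what that capability looks like in practice, Rekognition exposes its emotion predictions through a face-detection API. The sketch below is illustrative only (the image file and region are placeholders, and it assumes the boto3 SDK with configured AWS credentials); it prints the emotion labels, including FEAR, that the service returns for each detected face:

```python
import boto3

# Rekognition client; assumes AWS credentials are already configured.
client = boto3.client("rekognition", region_name="us-east-1")

# "face.jpg" is a placeholder for any local photo containing a face.
with open("face.jpg", "rb") as f:
    image_bytes = f.read()

# Attributes=["ALL"] requests the full face analysis, including emotions.
response = client.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    # Each detected face carries emotion labels with confidence scores;
    # since August 2019 the possible labels include FEAR.
    for emotion in face["Emotions"]:
        print(emotion["Type"], f'{emotion["Confidence"]:.1f}')
```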

Given Amazon’s active marketing of the service to United States police departments, the announcement set off alarm bells among privacy advocates, who raised serious civil rights and privacy concerns.

But the desire to use AI to police borders is not limited to the US. The EU has also been trialling facial recognition technology in a project that Member of the European Parliament Patrick Breyer called a “highly controversial video lie detector” for travellers.

The Border Ctrl project has received approximately €4.5 million in funding from the European Union’s Horizon 2020 research and innovation programme. The project aims to detect deception by immigrants through video recordings of their faces.

Breyer, a successful civil liberties activist and Pirate Party MEP, has filed a complaint with the EU Court of Justice because the European Commission has refused to allow him access to a legal assessment and an ethics report on the project as well as other project documents.

In its response to Breyer’s request, the Commission’s Research Executive Agency (REA) said: “The deliverables of the project Border Ctrl contain consortium confidential information about the methodology, research approach and strategy as to how the Border Ctrl consortium proposes to achieve the project results. This information has to be considered as inside knowledge of the Border Ctrl consortium. It reflects the specific intellectual property, ongoing research, know-how, methodologies, techniques and strategies which belong to the consortium. The public disclosure of such information would undermine the commercial interests of the Border Ctrl consortium.”

The agency continued, doubling down on the commercial value of the research: “It would give an unfair advantage to the (potential) competitors of the consortium. Public disclosure would give the competitors the opportunity to anticipate the strategies and weaknesses of the partners of the Border Ctrl consortium, including when competing in calls for tenders and proposals. Secondly, the public disclosure would give their competitors the opportunity to copy or use the intellectual property, know-how, methodologies, techniques and strategies of the Border Ctrl consortium. The competitors would be able to employ this information in order to improve the production of their own competing products or provision of their own competing services. Given the competitive environment in which the project consortium operates, the information in question can only maintain its commercial value if it is kept confidential.”

“The reasons given for the secrecy demonstrate: It is all about economic profit,” said Breyer. “Regarding this highly dangerous technology the transparency interests of the scientific community and the public must take precedence over private profit interests.”

The Border Ctrl programme’s Automatic Deception Detection System (ADDS) works by having an avatar interview prospective visitors to the EU via webcam or smartphone camera before they leave home. ADDS monitors their facial expressions and behaviour when answering standard questions, and then “quantifies the probability of deceit in interviews by analysing interviewees’ non-verbal micro expressions personalized to gender and language of the traveller.” The results of these assessments are then shared with border control staff, and the AI can assist them in deciding whether to permit entry.
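Based solely on that public description, and with every name, feature and threshold below invented for illustration (the consortium has not published its actual method), the workflow can be imagined as a simple scoring pipeline: extract per-question micro-expression features from the recorded interview, reduce them to a single risk score, and hand that score to a border guard as advice rather than a decision:

```python
from dataclasses import dataclass

@dataclass
class AnswerFeatures:
    """Hypothetical micro-expression features for one answered question."""
    question_id: str
    micro_expression_scores: dict[str, float]  # e.g. {"brow_raise": 0.3}

def deceit_probability(answers: list[AnswerFeatures],
                       gender: str, language: str) -> float:
    """Toy stand-in for ADDS's per-traveller deceit score.

    The article says scoring is "personalized to gender and language",
    modelled here as a made-up calibration offset per group.
    """
    if not answers:
        return 0.0
    calibration = {("female", "en"): 0.02}.get((gender, language), 0.0)
    # Average the (hypothetical) micro-expression scores per question.
    per_question = [
        sum(a.micro_expression_scores.values())
        / max(len(a.micro_expression_scores), 1)
        for a in answers
    ]
    return min(1.0, sum(per_question) / len(per_question) + calibration)

def advise_border_guard(score: float, threshold: float = 0.5) -> str:
    # Per the consortium, the system only assists; a human still decides.
    return "flag for secondary screening" if score >= threshold else "no flag"
```

The real system’s features, model, weightings and thresholds are precisely the details Breyer is seeking access to.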

According to Breyer, “there is an overriding public interest in finding out whether the development of unethical and/or unlawful interference in the human right to privacy are being publicly funded.”

But the REA responded that any public interest in disclosure “must outweigh the harm caused by disclosure.” It continued: “The project aims at testing new technologies in controlled border management scenarios that could potentially increase the efficiency of the EU’s external borders management, ensuring faster processing for bona fide travellers and quicker detection of illegal activities. As such, it is not a technology development project, targeting the actual implementation of a working system with real customers.”

However, the trial, which concluded at the end of August, took place in real operational scenarios (train, vehicle and pedestrian crossings) in Hungary, Greece and Latvia, three countries with external EU borders, with a view to finding out whether the system could successfully be rolled out across the bloc.

The Border Ctrl consortium website says that as the system is still in development, it cannot be used for actual border checks. “However, the system needs to be tested to validate whether the developed technologies are functioning properly. To achieve this, test pilots are required. To simulate real conditions, these take place at selected border crossing points.” It also stressed that travellers are invited to take part on a purely voluntary basis, without any obligation to do so.

But Breyer is nonetheless worried: “I am convinced that this pseudo-scientific security hocus-pocus will not detect any terrorists. For stressed, nervous or tired people, such a suspicion-generator can easily become a nightmare. In Germany, lie detectors are not admissible as evidence in court precisely because they do not work. We need to put an end to the EU-funded development of technologies for monitoring and controlling law-abiding citizens ever more closely!”

Even the Border Ctrl website itself admits that “Fundamental rights of travellers might be violated”, before adding that the voluntary nature of the project, and the fact that data collected in the test pilots will be either deleted or anonymised at the end of the project, ensure they are not.

ADDS currently has an accuracy of 75%, and the consortium says online that “it is most likely that such a system will only be used at the border if it provides better results than the current system, solely relying on human beings. In fact, an AI-based system with high accuracy might even decrease the risk of discrimination and other fundamental rights issues if implemented properly.”
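That 75% figure deserves a moment of arithmetic. As a purely illustrative calculation, and assuming (the consortium has not specified this) that the accuracy applies symmetrically to truthful and deceptive travellers, and that deceptive travellers are rare, the flags such a system raises would be dominated by honest people:

```python
# Back-of-the-envelope illustration of a 75%-accurate screening tool.
# Assumptions (not from the consortium): 75% accuracy holds for both
# truthful and deceptive travellers, and 1 in 1,000 is actually deceptive.
travellers = 100_000
deception_rate = 0.001          # assumed base rate, purely illustrative
accuracy = 0.75                 # the figure quoted by the consortium

deceptive = travellers * deception_rate
truthful = travellers - deceptive

true_flags = deceptive * accuracy        # liars correctly flagged
false_flags = truthful * (1 - accuracy)  # honest travellers flagged

print(f"Flagged travellers: {true_flags + false_flags:,.0f}")
print(f"...of whom honest:  {false_flags:,.0f} "
      f"({false_flags / (true_flags + false_flags):.1%})")
# With these assumptions, roughly 25,000 of 100,000 travellers are
# flagged, and over 99% of the flags are honest people.
```

The outcome depends entirely on the assumed base rate and on how the 75% figure was measured, none of which has been published.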

The argument that AI could reduce discrimination has been made for years; in practice, the outcome depends on the training data and the assumptions built into the system. A system that could, if deployed, make serious decisions affecting the wellbeing of hundreds of thousands of travellers should be open to scrutiny. As Breyer points out, such scrutiny is very difficult without access to the underlying documents and data.