(CN) – Border agents in Greece, Hungary and Latvia are experimenting with a new, and controversial, tool at border crossings into Europe: An artificial intelligence system designed to detect if a person is lying.
The $5 million, nine-month European Union pilot project is dubbed iBorderCtrl, and its use along Europe’s eastern borders is raising ethical questions. A similar system has been tested at U.S. borders.
Human rights advocates in Greece earlier this month filed a petition to the Greek government seeking more information about how the system works to determine if it violates the law. The government has not responded yet.
Advocates with the nonprofit Homo Digitalis are concerned that artificial intelligence is unreliable at determining whether someone is lying and that its use violates human rights laws. The technical details underpinning the project are confidential, and the group is seeking that information.
Eleftherios Chelioudakis, a co-founder of Homo Digitalis, said in an email to Courthouse News he considers it unlikely such technology will become widely used in Europe because it would be found unlawful.
“The principles of legality, necessity, and proportionality will prevail,” he said. “We are not worried that such a technology will be widely used in Europe.”
But he added that such tools could become common in countries with weak protections for human rights.
Under the iBorderCtrl system, people can volunteer to be processed through an online border check. If they do so, they are asked questions – about themselves, their travel plans and their luggage – posed over a webcam by an avatar, a virtual border agent. The avatar is personalized to match the person’s gender and language.
During this interview, the system studies a person’s “micro-gestures” – such as eye movements – to look for suspicious behavior.
Critics question how good AI is at telling when a person is lying.
“This is the embodiment of everything that can go wrong with lie detection,” Bruno Verschuere, a forensic psychology professor at the University of Amsterdam, told the Dutch newspaper de Volkskrant.
Verschuere said micro-expressions are not good indicators of whether a person is lying. He did not respond to a message seeking comment.
The project’s developers said tests on 34 people showed the system’s lie-detection accuracy was as high as 76 percent. The developers hope to improve that to 85 percent.
“There is no scientific evidence that this is a practicable and reliable method,” Chelioudakis said. “So, we ask: What is the added value for border management by using such a non-reliable technology? What is exactly the problem that this technology solves?”
Facial recognition systems have been criticized as being discriminatory. Studies have shown that they can be biased against women and minorities.
Experimenting with artificial intelligence at border crossings comes at a sensitive time for Europe, which has tightened its borders due to immigration, terrorism and criminal activity. Inside the EU, borders are open.
European politics have been upended in the past three years by immigration. Since 2015, more than 1.7 million people have entered the EU, most of them from war-torn and impoverished nations in Africa and Asia. About 130,000 people have sought refuge in Europe this year.
In a recent news release, the European Commission said the iBorderCtrl system brings together an array of cutting-edge technologies.
Besides acting as a lie detector, the system is designed to tell if documents are real, make a “biometric verification” of a person, and come up with a “risk assessment” for travelers. The system matches faces with photos and video images, checks names against databases, examines fingerprints and palms, and then allocates a risk assessment. All of this information is then available to officials when the traveler gets to the border.
The EU said the system will help speed up the flow of traffic at border checkpoints and alleviate the work of border guards.
“Continuous traffic growth, combined with the increased threat of illegal immigration, is putting nowadays border agencies under considerable pressure,” the commission said.
George Boultadakis, a research director at the technology company European Dynamics and a coordinator for the iBorderCtrl project, said in an email that the system considers a number of factors – not just results of the lie detection portion – in determining how risky a person is.
He called the system safe because it does not rely on “one analysis but on the correlated risks from various analyses.”
He did not immediately reply to questions from Courthouse News.
The project’s developers say the system complies with EU standards but acknowledge it raises questions.
“The project is well aware of the legal and ethical issues that might arise,” they say, adding that an ethics adviser is involved.
The developers point out that the system is being tested only on a voluntary basis and is not an authorized law enforcement tool. They say it does not replace regular border checks.
If the system were ever used for real border checks, the developers say, it would need a legal basis, which is currently lacking “in the applicable European legal framework.”
Courthouse News reporter Cain Burdeau is based in the European Union.