International travel is increasing at a rapid rate. A record 1.4 billion tourists visited other countries in 2017, and that number is expected to reach 1.8 billion by 2030.
This swelling number of globetrotters also means growing queues at passport control. The vast majority of people who are detained by border agents don’t present a threat, which slows down the already often lengthy process of crossing an international border. Border crossing agents have a tough job. They have to make hundreds of judgement calls every hour about whether someone should be allowed to enter a country. With the looming threat of terrorist attacks, people trafficking and smuggling, there is a lot of pressure to get it right.
Although they may have some additional intelligence on their computer system, when border guards examine most travellers, they’re relying on their own hunches and experience. And for many border control officers, that experience may not amount to much – it’s a position with a high turnover rate; border guards in the US quit at double the rate of other law enforcement positions.
Anyone who has been stopped from entering a country at immigration, even briefly, will know what an upsetting and stressful experience it can be. Staring into the hard eyes of a border guard as they examine your passport is always a nerve-wracking experience.
But there could soon be another, unseen border agent with a hand in these decisions – one that cannot be reasoned with or softened with a smile.
A number of governments around the world are now funding research on systems powered by artificial intelligence that can help to assess travellers at border crossings.
One of these is being developed by US technology firm Unisys, a company that began working with US Customs and Border Patrol following the 9/11 terrorist attacks in 2001, to develop technology for identifying dangerous passengers long before they board a flight. Their threat assessment system, called LineSight, slurps up data about travellers from different government agencies and other sources to give them a mathematical risk evaluation.
They have since expanded its capability to look for other types of traveller or cargo that might be of concern to border officials. John Kendall, director of the border and national security program at Unisys, uses an example of two fictional travellers to illustrate how LineSight works.
Romain and Sandra are ticketed passengers who have valid passports and valid visas. They would pass through most security systems unquestioned, but LineSight’s algorithm picks up something fishy about Romain’s travel patterns – she’s visited the country several times over the past few years with a number of children who had different last names, something predictive analytics associates with human trafficking.
“Romain also purchased her ticket using a credit card from a bank associated with a sex trafficking ring in Eastern Europe,” says Kendall. LineSight is able to obtain this information from the airline Romain is flying with and cross check it with law enforcement databases.
“All of this information can be gathered and sent to a customs official before Romain and Sandra check in for their flight,” adds Kendall. “We collect data from multiple sources. Different governments collect different information, whether it’s from their own databases, from travel agencies. It’s not neat.”
The system can take a similar approach to analysing cargo shipments, helping to pull together relevant information that might identify potential cases of smuggling.
The power of Unisys’s AI approach is the ability to ingest and assess a huge amount of data in a very short amount of time – it takes just two seconds for LineSight to process all of the relevant data and complete a threat assessment.
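Unisys has not published how LineSight works internally, but the idea of pulling records from several sources and turning them into risk flags can be sketched in a few lines. Everything below is hypothetical: the field names, the rules, and the flags are invented for illustration only.

```python
# Illustrative sketch only: LineSight's internals are not public.
# All field names, rules, and thresholds below are hypothetical.

def assess_traveller(records: list) -> list:
    """Collect risk flags from records supplied by different sources."""
    flags = []
    for record in records:
        # Rule 1 (hypothetical): payment instrument linked to a flagged account
        if record.get("source") == "airline" and record.get("card_flagged"):
            flags.append("payment linked to flagged account")
        # Rule 2 (hypothetical): pattern associated with trafficking
        if (record.get("source") == "travel_history"
                and record.get("minors_with_different_surnames", 0) >= 2):
            flags.append("repeated travel with unrelated minors")
    return flags

# Two hypothetical records about the same traveller
records = [
    {"source": "airline", "card_flagged": True},
    {"source": "travel_history", "minors_with_different_surnames": 3},
]
flags = assess_traveller(records)
```

The point of the sketch is that each rule is cheap to evaluate, which is why a system like this can complete an assessment in seconds once the data has been gathered.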
But there are concerns about using AI to analyse data in this way. Algorithms trained to recognise patterns or behaviour with historic data sets can reflect the biases that exist in that information. Algorithms trained on data from the US legal system, for example, were found to replicate an unfair bias against black defendants, who were incorrectly identified as likely to reoffend at nearly twice the rate of white defendants. The algorithm was replicating the human bias that existed in the US justice system.
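A toy example makes the mechanism concrete. If one group was flagged more often in the historical record, a naive model that learns from those labels will flag that group more often in future, regardless of any individual's behaviour. The data below is fabricated purely to demonstrate this.

```python
# Toy illustration of how a model trained on biased historical labels
# simply reproduces that bias. The data is fabricated for the example.
from collections import defaultdict

history = [
    # (group, was_flagged) - group B was flagged three times as often
    ("A", 0), ("A", 0), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 1), ("B", 1),
]

# "Training": learn each group's historical flag rate
outcomes = defaultdict(list)
for group, flagged in history:
    outcomes[group].append(flagged)
flag_rate = {g: sum(v) / len(v) for g, v in outcomes.items()}

# "Prediction": a model fitted to these labels will flag new travellers
# from group B far more often, perpetuating the original bias.
```

Real risk-scoring models are far more complex, but the failure mode is the same: biased labels in, biased decisions out.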
Erica Posey of the Brennan Center for Justice fears similar biases could creep into algorithms used to make immigration decisions.
“Any predictive algorithm trained on existing data sets about who has been prevented from travelling in the past will almost certainly rely heavily on proxies to replicate past patterns,” she says.
According to Kendall, Unisys hopes to address this by allowing its algorithm to learn from its mistakes.
“If they stop somebody, and it turns out there was nothing wrong, that automatically updates the algorithm,” he says. “So every time we do an assessment the algorithm gets smarter. It’s not based on intuition, it’s not based on my bias – it’s based on the full population of travellers that come through.”
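The feedback loop Kendall describes can be sketched as a model that records the outcome of every stop, including false alarms, and uses that record to track how often its flags turn out to be justified. The class and the precision measure below are stand-ins for whatever Unisys actually uses, not a description of their system.

```python
# Hedged sketch of the feedback loop described above: every stop,
# including a false alarm, feeds back into the model's statistics.
# This running tally stands in for Unisys's actual update mechanism.

class RiskModel:
    def __init__(self):
        self.stops = 0
        self.confirmed = 0

    def record_outcome(self, threat_confirmed: bool) -> None:
        """Update after every stop, whether or not a threat was found."""
        self.stops += 1
        if threat_confirmed:
            self.confirmed += 1

    def precision(self) -> float:
        """Fraction of stops that turned out to be justified."""
        return self.confirmed / self.stops if self.stops else 0.0

model = RiskModel()
model.record_outcome(True)
model.record_outcome(False)  # nothing wrong: the false alarm still counts
```

In this framing, "the algorithm gets smarter" means the false-alarm feedback gradually pushes its flagging behaviour towards patterns that are actually confirmed.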
The company also says LineSight doesn’t assign one piece of data more weight than another, instead presenting all the relevant information to the border and customs officers.
But there are other teams that are looking to go even further by allowing machines to make judgements about whether travellers can be trusted. Human border officers make decisions about this based on a person’s body language and the way they answer their questions. There are some who hope that artificial intelligence might be better at picking up signs of deception.
Aaron Elkins, a computer scientist at San Diego State University, points out that humans are typically only able to spot deception in other people 54% of the time. By comparison, AI-powered machine vision systems have been able to achieve an accuracy of over 80% in multiple studies. Infrared cameras that can pick up on changes in blood flow and pattern recognition systems capable of detecting subtle tics have all been used.
Elkins himself is one of the inventors behind Avatar (Automated Virtual Agent for Truth Assessments in Real Time), a screening system that could soon be working with real-life border agents. Avatar uses a display that features a virtual border agent that asks travellers questions while the machine scrutinises the subject’s posture, eye movements, and changes in their voice.
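How Avatar combines what it sees and hears into a single judgement is proprietary, but one common approach is to give each channel (posture, eye movement, voice) a normalised anomaly score and fuse them with weights. The weights and threshold below are invented for illustration; they are not Avatar's.

```python
# Sketch only: Avatar's scoring is proprietary. Assume each channel
# yields an anomaly score in [0, 1]; a weighted average stands in
# for the real fusion step. Weights and threshold are hypothetical.

CHANNEL_WEIGHTS = {"posture": 0.3, "eye_movement": 0.4, "voice": 0.3}

def fuse_channels(scores: dict) -> float:
    """Combine per-channel anomaly scores into one deception score."""
    return sum(CHANNEL_WEIGHTS[c] * s for c, s in scores.items())

score = fuse_channels({"posture": 0.2, "eye_movement": 0.9, "voice": 0.5})
flag_for_review = score > 0.5  # hypothetical threshold
```

A real system would learn the weights from labelled interview data rather than fixing them by hand, which is exactly where the training-data questions raised earlier come back in.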
After experiments in which tens of thousands of subjects lied in a laboratory setting, the Avatar team believes it has managed to teach the system to pick up on the physical manifestations of deception.
Another system, called iBorder Ctrl, is to be tested at three land border crossings in Hungary, Greece and Latvia. It too features an automated interviewer that will interrogate travellers, and it has been trained on videos of people either telling the truth or lying.
Keeley Crocket, an expert in computational intelligence at Manchester Metropolitan University in the UK, who is one of those developing iBorder Ctrl, says the system looks for micro-gestures – subtle nonverbal facial cues that include blushing as well as subtle backward and forward movement. Crocket has high hopes for this first phase of field tests, saying the team are hoping the system will “obtain 85% accuracy” in the field tests.
“Until we have completed this [phase of testing], we will not know for sure,” she cautions.
But there is an ongoing debate about whether such AI “lie detectors” actually work.
Vera Wilde, a lie detection researcher and vocal critic of the iBorder Ctrl technology, points out that science has yet to prove a definitive link between our outward behaviour and deception, which is precisely why polygraph tests are not admissible in court.
“There is no unique ‘lie response’ to detect,” she says.
Even if such a link could achieve scientific certainty, the use of such technology at a border crossing raises tricky legal questions. Judith Edersheim, co-director of the Massachusetts General Hospital Center for Law, Brain and Behavior (CLBB), has suggested that lie-detection technology could constitute an illegal search and seizure.
“Compulsory screening is a seizure of your thoughts, a search of your mind,” she says. This would require a warrant in the US. And there could be similar problems in Europe too. Article 22 of the General Data Protection Regulation protects EU citizens against profiling. Can iBorder Ctrl ever be transparent enough to prove it hasn’t used some element of profiling?
It’s important to note that at this stage, travellers testing out iBorder Ctrl will be volunteers and will still face a human border agent before they enter the countries where it is being tested. The system will give the human border officers a risk assessment score determined by the iBorder Ctrl’s AI.
And it seems likely that AI will never completely replace humans when it comes to border control. The Unisys, Avatar, and iBorder Ctrl teams all agree that no matter how sophisticated the technology becomes, they’ll still rely heavily on humans to interpret the information.
But a reliance on machines to make judgements about a traveller’s right to enter a country still raises significant concerns among human rights and privacy advocates. If a traveller is determined to be a high risk, will a border patrol agency provide them information about why?
“We need transparency as to how the algorithm itself is developed and implemented, how different types of data will be weighted in algorithmic calculations, how human decision-makers are trained to interpret AI conclusions, and how the system is audited,” says Posey. “And fundamentally, we also need transparency as to the impact on individuals and the system as a whole.”
Kendall, however, believes AI may be an essential tool in dealing with the challenges facing international borders.
“It’s a complex set of threats,” he says. “The threats we face today will be different from the threats in a couple of years’ time.”
The success of AI border guards will depend not only on their ability to stay one step ahead of those who pose these threats, but also on whether they can make travelling easier for the 1.8 billion of us who want to see a bit more of the world.