Facial recognition is under the microscope. After the King's Cross ‘scandal’, widespread opposition to British police forces' trials of the technology, and now politicians and campaigners calling for an end to live facial recognition trials, every angle of the technology is under intense scrutiny. Police forces that wish to use facial recognition to fight crime are caught in a catch-22: they feel pressure to prove the technology works, yet are challenged whenever they attempt to introduce it to the public and obtain that proof.
But while this developing technology is in the eye of a stormy debate on privacy and human rights, instances of violent and hate crime are rising and police forces are stretched and straining under the pressure. Is facial recognition really as ‘chilling’ as the media claims it to be?
Adrian Timberlake, chief technical director of Seven Technologies Group (7TG) and specialist in military, defence and law enforcement security and surveillance solutions, examines the use of facial recognition to support law enforcement operations, its possible biases and how it can be used ethically.
Developing technologies are always subject to debate, but facial recognition has been under more scrutiny than most. This is partly because of reported biases – that the technology is less accurate at correctly identifying people of colour and women, for example – and partly because of a lack of understanding about how the technology works.
Targeted facial recognition has been used in trials by London's Metropolitan Police and has a long history of use in military and defence operations. Of the two types of facial recognition – general and targeted – targeted facial recognition is the more suitable for enhancing law enforcement operations while protecting the privacy of the general public. The purpose of such a system is to identify and alert police to the presence of known criminals or predators, and to bring wanted criminals to justice.
How does targeted facial recognition work?
Targeted facial recognition scans every face in range, but it only alerts police if it believes it has found a match against its ‘watch list’. A match is then confirmed by a human before appropriate action is taken. During the Metropolitan Police facial recognition trials, any data and images the camera captured that did not match the watch list were deleted after 30 days. For law-abiding citizens, provided that the stored data is adequately protected until deletion, this should not present a problem.
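In code, the matching step described above can be sketched roughly as follows. This is a minimal illustration, not any force's actual system: the watch-list names, threshold value and toy vectors are all invented, and a real deployment would compare embeddings produced by a trained face-recognition model rather than hand-written numbers.

```python
import math

# Illustrative match threshold; real systems tune this value carefully
# against false-positive and false-negative rates.
MATCH_THRESHOLD = 0.8

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def check_against_watch_list(face_embedding, watch_list):
    """Return the best watch-list match above threshold, or None.

    A non-None result is only a candidate match: as the article notes,
    a human operator confirms it before any action is taken.
    """
    best_id, best_score = None, MATCH_THRESHOLD
    for person_id, ref_embedding in watch_list.items():
        score = cosine_similarity(face_embedding, ref_embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy 4-dimensional "embeddings" stand in for the vectors a real
# face-recognition model would produce from camera images.
watch_list = {
    "suspect_A": [1.0, 0.0, 0.2, 0.1],
    "suspect_B": [0.0, 1.0, 0.1, 0.3],
}
passerby = [0.1, 0.2, 0.9, 0.4]      # no match: data held, then deleted
candidate = [0.95, 0.05, 0.25, 0.1]  # close to suspect_A: flag for review

print(check_against_watch_list(passerby, watch_list))   # None
print(check_against_watch_list(candidate, watch_list))  # suspect_A
```

The key design point the article relies on is the threshold plus human confirmation: below the threshold nothing is flagged at all, and above it the system only produces a candidate for an operator to verify.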
Who is on the watch list?
The data programmed into facial recognition technology comes from existing police data. This means that people who have been convicted, prosecuted, arrested, or cautioned by police could be on the watch list.
Why do we need facial recognition?
There is evidence that facial recognition can be a useful tool in preventing crime and bringing criminals to justice. The Guardian has reported that the technology deterred serial pickpockets, and Wired reported that South Wales Police's head of technology tweeted in June 2018 that the use of facial recognition had led to the arrest of someone wanted for assault. But this is only the tip of the iceberg of the additional public safety and security that facial recognition could provide.
Targeted facial recognition could enhance security in a wide range of cases. It could recognise people on the sex offenders register if they came within range of a school, or alert police if it recognised the face of somebody reported missing or kidnapped.
Society has grown used to and accepted the use of CCTV for security purposes. But targeted facial recognition, despite its flaws, is potentially far more useful than CCTV. We’ve all seen pleas from police forces to the public to identify dangerous criminals, caught on CCTV, circulating in the news. If suspects are unknown to police, they will need to be identified.
If facial recognition instantly identifies a wanted criminal, there will be less risk of that person having time to commit additional crimes before they are caught.
How could targeted facial recognition go wrong?
The worst-case scenario in using targeted facial recognition is that a ‘match’ may be incorrect. This may lead to an innocent member of the public being questioned or stopped and searched by police. While this may be inconvenient and possibly alarming, there should be no lasting cause for concern: use of targeted facial recognition, and any arrests or stops prompted by the technology, must comply with existing laws, including human rights law.
Dealing with bias
It is widely reported that facial recognition technology has had more success in identifying white males than any other demographic. This must be addressed so that facial recognition can realise its potential of supporting police forces across Britain and helping to create a safe and fair society for all.
The issue is that the best way to obtain the data needed to make the technology less biased is to run trials, but there is currently huge public opposition to facial recognition trials precisely because of the reported bias. For developers who want to improve their facial recognition technology, this is a catch-22.
For the technology to be able to better distinguish between facial features, it needs a wide pool of data to make comparisons with. This would ideally be a large amount of data on male and female faces, from every race, so that the technology can learn the intricacies of facial features and, additionally, how to recognise facial hair and make-up and still provide an accurate result. As Caucasian males currently make up the majority of the existing data that can be used to develop facial recognition, the technology has more success in correctly identifying that particular demographic.
If a facial recognition camera ‘saw’ thousands of people a day, the technology would learn to distinguish between facial features quickly and all biases could potentially be eliminated. Restricting the use of facial recognition technology and trials until the technology is ‘perfect’ severely limits access to the data needed to develop and improve it.
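One way to make the reported bias concrete is to measure identification accuracy separately for each demographic group in a trial, so that any gap between groups shows up directly. The sketch below is purely illustrative: the group labels, identifiers and results are invented, not drawn from any real trial.

```python
from collections import defaultdict

def accuracy_by_group(results):
    """Compute identification accuracy per demographic group.

    `results` is a list of (group, predicted_id, true_id) tuples from a
    hypothetical trial. A large gap between groups would indicate the
    kind of bias the article describes.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / totals[g] for g in totals}

# Invented trial records for illustration only.
trial_results = [
    ("group_1", "p1", "p1"),
    ("group_1", "p2", "p2"),
    ("group_2", "p3", "p3"),
    ("group_2", "p9", "p4"),  # misidentification
]
print(accuracy_by_group(trial_results))  # {'group_1': 1.0, 'group_2': 0.5}
```

Tracking a per-group breakdown like this during trials is also how developers would verify that adding more diverse training data is actually closing the gap.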
But even while facial recognition is still developing, police forces can use it ethically. Providing officers who use the technology with adequate training, including awareness of any flaws that may lead to bias, helps ensure that any result from a facial recognition system is treated with caution and examined properly.
Criminals are becoming ever more knowledgeable about existing technologies used for detection and prevention, and ever more sophisticated at avoiding them. Police forces must be allowed to keep up with technological advancements to be able to keep communities safe.