Data released by a UK police force confirms claims from watchdog groups that the software is inaccurate.
Police officials in South Wales are battling criticism of their new facial recognition technology after it was revealed that the program had a 92% fail rate when it was used at the June 2017 UEFA Champions League Final in Cardiff. In other words, only about 8% of the people it "identified" actually matched names and faces in the criminal database.
According to statistics released by the South Wales Police, their Automated Facial Recognition (AFR) 'Locate' system flagged 2,470 potential matches among the roughly 170,000 attendees at last summer's event, scanning faces against a database of 500,000 images of persons of interest. Only 173 of those alerts were correct and actually matched someone in the database.
Overall, the program has been used at 15 events and flagged 2,685 people, only 234 of whom were truly persons of interest, according to the statistics.
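As a rough sanity check, those fail rates can be reproduced from the published counts. The short sketch below assumes "fail rate" simply means the share of alerts that did not correspond to anyone in the database; on that definition it comes out at just under 93% for the Champions League final and about 91% across all 15 deployments, roughly in line with the headline figure.

```python
# Back-of-envelope check of the published figures (counts taken from the
# South Wales Police release; "false" here means the alert did not
# correspond to anyone in the watchlist database).
def false_alert_rate(alerts: int, correct: int) -> float:
    """Share of alerts that were not genuine matches."""
    return (alerts - correct) / alerts

champions_league = false_alert_rate(alerts=2470, correct=173)  # 2017 final
all_deployments = false_alert_rate(alerts=2685, correct=234)   # all 15 events

print(f"Champions League final: {champions_league:.1%} of alerts were false")
print(f"All 15 deployments:     {all_deployments:.1%} of alerts were false")
```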
The South Wales Police countered the troubling fail rate with their own statistics: 2,000 positive matches and 450 arrests in the nine months since the program was put into use. They add that no one has ever been mistakenly arrested after being flagged, and that the officers in charge can, and often do, dismiss matches they consider obvious misidentifications. If there is a match, an "intervention team" is sent to question and possibly arrest the person.
"Officers can quickly establish if the person has been correctly or incorrectly matched by traditional policing methods i.e. normally a dialogue between the officer/s and the individual," a police spokeswoman told Wired.
In addition to never erroneously arresting anyone, the South Wales Police claimed in a press release that "no members of the public have complained."
But some members of the public have, in fact, complained. Tony Porter, the UK's Surveillance Camera Commissioner, wrote in a 2017 report that the facial recognition program needed oversight to stop it from becoming "obtrusive."
"The public will be more amenable to surveillance when there is justification, legitimacy and proportionality to its intent," Porter told Wired. "Currently there are gaps and overlaps in regulatory oversight."
In a February report submitted to the House of Lords by watchdog group Big Brother Watch, Silkie Carlo, the group's director, wrote that there is "no law, no oversight, and no policy regulating the police's use of automated facial recognition." The UK government, she said, had not even set a target fail rate, allowing the system to continue flagging thousands of people erroneously at wildly high rates.
Carlo's report also added that facial recognition algorithms are known to be inaccurate, citing statistics from the US Government Accountability Office that showed "facial recognition algorithms used by the FBI are inaccurate almost 15% of the time and are more likely to misidentify female and black people."
In the report, Carlo also criticized the database of photos taken from events and stored on police hard drives. At large events, CCTV cameras are set up at specific spots near the venue and fed into a computer, which scans every face in the video and attempts to match it against the police database of 500,000 persons of interest, Carlo wrote. Concerns have also been raised about how long that CCTV footage is kept by police.
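For readers unfamiliar with how such systems work, the sketch below illustrates the watchlist-matching step Carlo describes. It is an illustration built on assumptions, not the NEC implementation: a real deployment first runs a face detector and a face-embedding model over every CCTV frame, and the 0.6 threshold, embedding size, and randomly generated "watchlist" here are placeholders chosen only to make the example runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a watchlist of ~500,000 persons of interest, each reduced to a
# fixed-length face embedding (real systems derive these from custody images).
watchlist = rng.normal(size=(500_000, 128)).astype(np.float32)
watchlist /= np.linalg.norm(watchlist, axis=1, keepdims=True)

MATCH_THRESHOLD = 0.6  # assumed similarity cut-off; operators tune this in practice

def match(probe_embedding: np.ndarray) -> tuple[int, float] | None:
    """Return (watchlist index, similarity) if a probe face clears the threshold."""
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    scores = watchlist @ probe  # cosine similarity against every watchlist entry
    best = int(np.argmax(scores))
    return (best, float(scores[best])) if scores[best] >= MATCH_THRESHOLD else None

# One probe embedding per face detected in a CCTV frame; any hit here is what
# the police call a "potential match" and is passed to an intervention team.
probe = rng.normal(size=128).astype(np.float32)
print(match(probe))
```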
"The custody image database, which provides the basis for both facial matching and automated facial recognition, unnecessarily contains a significant proportion of photos of innocent people under what is likely to be an unlawful retention policy," Carlo wrote.
The South Wales Police have released multiple reports addressing this, writing that they are "very cognisant of concerns about privacy and we have built in checks and balances into our methodology to make sure our approach is justified and balanced. We have had detailed discussions and consultation with all interested regulatory partners."
The report later adds: "Watchlists and the associated metadata are manually added to the system and will be reviewed regularly to ensure accuracy and currency and will be deleted at the conclusion of the respective deployment."
Matt Jukes, the chief constable of the South Wales Police, told the BBC that they needed to use the technology to protect large events like concerts and games from terrorist threats but "don't take the use of it lightly" and were attempting to make "sure it is accurate."
Facial recognition technology is being used by a number of countries, most notably Australia and China; China in particular uses an especially robust algorithm, and does so extensively.
NEC, the company that created the software used by the South Wales Police, admitted to ZDNet in October that the program does not perform well against a database as large as the one used in Cardiff, and said the system was more accurate when used on smaller pools of people.
Chris de Silva, Europe head of Global Face Recognition Solutions, said, "You're going to find false alarms, and you are going to get answers, but they are not going to be always correct, and the more of that you get, the less likely people are going to be happy about using the system."
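De Silva's point is essentially statistical: even a per-comparison false match rate that sounds tiny produces thousands of alerts once every face in a large crowd is compared against hundreds of thousands of watchlist entries. The figures below are illustrative assumptions, not NEC's measured error rates, and serve only to show how expected false alerts scale with watchlist size.

```python
# Illustrative scaling only; the 1-in-10-million per-comparison false match
# rate is an assumption, not a measured property of the NEC system.
def expected_false_alerts(crowd_size: int, watchlist_size: int, false_match_rate: float) -> float:
    """Expected false alerts if each attendee's face is compared to every watchlist entry."""
    return crowd_size * watchlist_size * false_match_rate

for watchlist_size in (1_000, 50_000, 500_000):
    alerts = expected_false_alerts(crowd_size=170_000,
                                   watchlist_size=watchlist_size,
                                   false_match_rate=1e-7)
    print(f"Watchlist of {watchlist_size:>7,} images: ~{alerts:,.0f} expected false alerts")
```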
Given that the system has likely scanned EU citizens, questions could be raised about how its capabilities, and the underlying database, fit into the upcoming GDPR guidelines. Additionally, the high failure rate of such a program could be evidence that artificial intelligence (AI) used in tools like this may not be ready for prime time, especially in a contentious use case such as predictive policing.
*This article was featured on the TechRepublic website on May 8, 2018: https://www.techrepublic.com/article/welsh-police-facial-recognition-has-92-fail-rate-showing-dangers-of-early-ai/