Last week in south London, the Metropolitan Police used real-time facial recognition cameras to help arrest 17 people.
The arrests took place during specific operations carried out in Croydon on March 19 and 21 and in Tooting on March 21.
Among those arrested was a 23-year-old man found in possession of two rounds of blank ammunition. The facial recognition system flagged him because there was a warrant out for his arrest, and the subsequent search led police to seize ammunition, stolen mobile phones and cannabis from a property linked to him.
Currently, the technology is designed to identify individuals on “custom watch lists,” including those with outstanding arrest warrants. The Metropolitan Police said the technology would allow it to carry out “precision policing”.
This follows a previous tally of 42 people arrested using the same technique in February, according to BBC News, although it is unclear how many of those arrested have been charged.
The arrests cover a wide range of offences, including sexual offences, assault, theft, fraud, racist harassment and anti-social behaviour order (ASBO) breaches.
Following the operation, police said they had provided the community with ‘information and assurance’ about their actions.
🧵| Real-time facial recognition technology is an accurate community crime-fighting tool. Driven by intelligence, we direct our efforts where they can have the greatest impact. pic.twitter.com/N5bKwzAEI1
— Metropolitan Police (@metpoliceuk) April 5, 2023
Police facial recognition controversy
Last year, lawmakers in the British House of Lords urged police to re-evaluate real-time facial recognition technology after the policing minister hinted that police could be given access to a database of 45 million passport images.
Michael Birtwistle of the Ada Lovelace Institute echoed that skepticism: “There is much debate about the accuracy and scientific basis of facial recognition technology, and its legality is uncertain.”
Civil rights advocacy group Big Brother Watch has also highlighted that 89% of UK police facial recognition alerts are misidentifications.
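That figure is less surprising than it may sound: when genuinely wanted people are rare among the thousands of faces scanned, even a fairly accurate system produces mostly false alerts. The back-of-the-envelope calculation below is a hedged illustration with made-up numbers, not Big Brother Watch’s methodology.

```python
# Illustrative base-rate arithmetic (hypothetical numbers, not BBW's data):
# even a system that rarely misfires per face yields mostly false alerts
# when true watchlist matches are scarce in the crowd.

faces_scanned = 10_000     # people walking past the camera in one deployment
on_watchlist = 5           # how many of them are genuinely wanted
sensitivity = 0.90         # chance a wanted face triggers an alert
false_alert_rate = 0.01    # chance an innocent face triggers an alert

true_alerts = on_watchlist * sensitivity
false_alerts = (faces_scanned - on_watchlist) * false_alert_rate
share_wrong = false_alerts / (true_alerts + false_alerts)

print(f"True alerts:  {true_alerts:.1f}")    # 4.5
print(f"False alerts: {false_alerts:.1f}")   # ~100.0
print(f"Share of alerts that are wrong: {share_wrong:.0%}")  # ~96%
```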
The Metropolitan Police’s director of intelligence, Lindsey Chiswick, sought to address privacy concerns. She told the BBC: “We don’t keep your data. If no match is found, the data is immediately and automatically deleted within seconds.” Chiswick also claimed the technology had been “independently tested” for reliability and bias.
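As described, the pipeline is simple: compare each captured face against the watchlist and discard the biometric data on the spot if nothing matches. The sketch below is a minimal illustration of that claimed behaviour, assuming cosine similarity between face embeddings, a placeholder `embed_face` function and a made-up `SIMILARITY_THRESHOLD`; it is not the Met’s actual implementation.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # assumed cut-off; real systems tune this carefully

def embed_face(image: np.ndarray) -> np.ndarray:
    """Placeholder: a real system would run a face-embedding model here."""
    vec = image.flatten().astype(np.float64)[:128]
    return vec / np.linalg.norm(vec)  # unit-norm vector for cosine similarity

def check_against_watchlist(face_image: np.ndarray,
                            watchlist: dict[str, np.ndarray]) -> str | None:
    """Return the matched watchlist ID, or None after discarding the capture."""
    probe = embed_face(face_image)
    for person_id, ref in watchlist.items():
        # Cosine similarity of two unit-norm embeddings is their dot product
        if float(probe @ ref) >= SIMILARITY_THRESHOLD:
            return person_id  # alert: an officer reviews the candidate match
    # No match: drop the only reference to the capture so nothing is retained,
    # mirroring the "immediately and automatically deleted" claim above
    del probe
    return None
```

The threshold is where the trade-off reported above lives: lower it and more wanted people trigger alerts, but far more passers-by are wrongly flagged.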
Others dispute that. Big Brother Watch’s Madeleine Stone, for example, expressed her concerns about AI surveillance, labeling it “Orwellian.”
Stone continued: “Everyone wants to get dangerous criminals off the streets, but plugging the cracks in our creaky policing system with intrusive, Orwellian surveillance technology is not the solution. Rather than actively pursuing people who pose a risk to the public, police are relying on chance and hoping that wanted people will walk in front of police cameras.”
🚨 Live facial recognition alert 🚨
Police are scanning countless people’s faces in #Catford using invasive and flawed facial recognition technology.
There is no place for Orwellian technology in British policing. #StopFacialRecognition pic.twitter.com/JTcLYRoW7G
— Big Brother Watch (@BigBrotherWatch) March 26, 2024
Big Brother Watch also warned yesterday (26/03) that new operations are underway in Catford.
How UK police are using AI facial recognition
British police began testing facial recognition technology in 2018, deploying vans equipped with cameras to capture footage in public places.
A recent freedom of information request submitted to the Metropolitan Police Service (MPS) sought clarity on whether AI is being used to automatically screen individuals and how that data is processed.
The MPS says it uses AI technologies such as live facial recognition (LFR) and retrospective facial recognition (RFR) in certain operations.
However, the MPS refused to respond to most of the inquiries, citing exemptions under the Freedom of Information Act 2000 relating to ‘national security’, ‘law enforcement’ and the ‘protection of security bodies’.
In particular, the force argued that disclosing details about the covert use of facial recognition technology could undermine law enforcement tactics.
Here is the police response: “To confirm or deny whether information is held relating to the covert use of facial recognition would show criminals what the capacity, tactical abilities and capabilities of the MPS are, allowing them to target specific areas of the UK to carry out their criminal/terrorist activities.”
Lessons from the past
While predictive policing is designed to make communities safer, it has had troubling consequences, including the wrongful arrest of several individuals.
These are not isolated incidents but a pattern that exposes serious flaws in relying too heavily on AI in police work.
Robert McDaniel of Chicago, despite having no history of violence, became a target of the police because an algorithm placed him on a list of potential threats.
His story is not unique. There have been cases across the United States where people have been falsely accused and arrested due to incorrect facial recognition matches.
The story of Nijeer Parks is a clear example. Charged with a crime he had nothing to do with, Parks faced jail time and huge legal fees, all because of a faulty facial recognition match.
Facial recognition technology has been shown to be less accurate for darker-skinned individuals, especially Black women. Accuracy for white faces can exceed 90%, but for Black faces it can be as low as 35%.
Current evidence suggests that marginalized groups may suffer the most from inaccurate algorithmic policing strategies.
Wrongful arrests are not only distressing to those directly involved. They also cast a long shadow over the affected communities.
Arresting people based on predictions rather than specific actions undermines the foundation of trust between law enforcement and the public.
In fact, public trust in police is at rock bottom in both the UK and the US. AI risks further eroding this if not managed properly.