
Facial recognition technology isn't ready for police use

By ANUSHA RAO | October 8, 2020

PUBLIC DOMAIN: Biases in facial recognition tools could lead to false accusations and arrests.

Last month I discussed how some nightmares of tech-noir films are becoming reality. As an advocate for artificial intelligence (AI), the last thing I want to do is stoke unreasonable fear over new technologies. Unfortunately, I feel I have to sound the alarm again. 

Amid the vast, ongoing technological revolution, massive changes within policing have spread from the biggest cities to the smallest communities. Police departments are utilizing risk assessment tools, predictive policing algorithms, probabilistic genotyping and, perhaps the best known, facial recognition software. 

Facial recognition technology has become a staple tool for policing in America. A form of AI, facial recognition software allows police officers to compare images of faces collected from evidence with pictures from government databases. These images are being collected constantly, from surveillance footage and closed-circuit TV to mugshots and driver’s license photos. 
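In rough terms, these systems reduce each face image to a numeric “embedding” and then search a database for the closest one. The short Python sketch below is only a hypothetical illustration of that matching step; the 128-dimensional random vectors, function names and 0.8 similarity threshold are assumptions for illustration, not any vendor’s actual pipeline.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe_embedding, database, threshold=0.8):
    # Return the enrolled ID whose embedding is most similar to the probe,
    # or None if even the best score falls below the decision threshold.
    best_id, best_score = None, -1.0
    for person_id, embedding in database.items():
        score = cosine_similarity(probe_embedding, embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy demo: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = database["person_42"] + rng.normal(scale=0.1, size=128)  # a noisy "photo" of person_42
print(best_match(probe, database))

A detail worth noting: the similarity threshold alone decides how readily such a system declares a confident “match,” including for people who are not in the database at all.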

Unlike DNA testing, the gold standard for suspect identification, facial recognition is extremely efficient and cost-effective. This ease of operation has contributed to its widespread use as a tool for solving everyday cases. The global market for facial recognition technologies was valued at $3.4 billion in 2019.

At first glance, identifying suspects seems like an obvious, practical application of AI. Countless media representations show how facial recognition can be used: with the click of a button, the police can identify the “bad guys.” Yet, as in most movies about AI, the repercussions of a faulty algorithm are rarely discussed.

A fundamental dilemma at the core of AI is the issue of bias and how an AI can enable and perpetuate flawed systems of prejudice. One of the biggest issues within facial recognition is the repeated misidentification of Black faces. The National Institute of Standards and Technology (NIST) reported that while algorithms are getting more accurate due to advances in modern computing capabilities, they misidentify faces with darker skin at rates five to 10 times higher than they do white faces. Massachusetts Institute of Technology student Joy Buolamwini brought attention to these flaws with her project Gender Shades, where she asked facial recognition technologies to identify whether 1000 faces were male or female. She found that they performed worst with women of color, and her project was a turning point in the dialogue about the dangers of biased algorithms and facial recognition tools.

One of the primary reasons for deep inaccuracies in facial recognition tools is simply that these algorithms are not trained on diverse faces. AI only learns from the data it is fed; if an algorithm is trained only on white faces, it will only learn to accurately identify white faces. Native Americans have the highest false match rate of any group, in part because they are virtually unrepresented in training data sets. 
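To see why representation matters, consider a deliberately simplified simulation in Python with scikit-learn; nothing here is a real face model. One invented group supplies 95 percent of the training examples, another supplies five percent, and the useful cue differs between them, so the classifier learns the majority group’s cue. Every group, feature and number below is an assumption made up for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, informative_dim):
    # Toy "faces": 2-D feature vectors whose label depends on a different
    # dimension for each invented demographic group.
    X = rng.normal(size=(n, 2))
    y = (X[:, informative_dim] > 0).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(1900, informative_dim=0)
Xb, yb = make_group(100, informative_dim=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced, held-out samples from each group.
for name, dim in [("group A (well represented)", 0), ("group B (underrepresented)", 1)]:
    X_test, y_test = make_group(1000, informative_dim=dim)
    error = np.mean(model.predict(X_test) != y_test)
    print(f"{name}: error rate = {error:.2f}")

In this toy setup the well-represented group sees a low error rate while the underrepresented group is misclassified close to half the time, a crude analogue of the disparities NIST measured in real systems.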

The implications of such faulty algorithms are not merely theoretical. In June, The New York Times published the story of Robert Julian-Borchak Williams, a Black man wrongfully accused by an algorithm. While at work, he received a call from the Detroit Police Department ordering him to come to the station to be arrested. He dismissed the call as a prank, but after work, two officers detained him in his own driveway, refusing to provide any explanation. 

He later discovered that facial recognition software had identified him as the suspect in a crime based on a blurry picture of a Black man who was clearly not him. The algorithm used to identify Williams was developed by DataWorks Plus, a company with no formal standard for gauging its system’s accuracy. NIST found that algorithms from the company were clearly biased, misidentifying Black and Asian faces 10 to 100 times more often than white faces.

Williams was held in custody for 30 hours and released on a $1,000 bond. His case was dismissed, but it was dismissed “without prejudice,” meaning he could be charged again. Only after his story was shared in The New York Times did the prosecutor offer to have the case and his fingerprint data expunged. 

What happened to Williams is a tragedy seen all too often in our criminal justice system. What if he had resisted arrest, a perfectly normal and expected response when one is detained without explanation? His story could have ended in his death at the hands of police, like so many other innocent Black men in our country.

Under the facade of machine neutrality, we assume that automated systems are inherently unbiased. Instead, they are very much a reflection of their creators — flawed, biased and prejudiced. 

Every time AI is utilized in a new way, it becomes increasingly clear that the technology can exacerbate bias, threatening the gains of racial justice and feminist movements. In this moment especially, we must recognize how painstakingly those advances were earned. If the inequalities in AI are left to fester and remain unaddressed, bias and prejudice will cripple the age of automation.

Too often, the initial response to issues with any technology is complete elimination of its use. Senators Ed Markey and Jeff Merkley and Representatives Pramila Jayapal and Ayanna Pressley recently introduced the Facial Recognition and Biometric Technology Moratorium Act, a bill that would essentially eliminate the use of facial recognition technology by law enforcement agencies in the United States.

A complete ban isn’t the answer; facial recognition software could be responsibly deployed in law enforcement if the American government developed an evidence-based framework to aggressively regulate the quality and specific applications of these technologies. If federal and state governments responded to some calls for greater accountability and oversight, like ending qualified immunity, requiring body cameras and making footage accessible to the public, the culture of policing could shift. This new culture would set the appropriate foundation for the introduction of tools like facial recognition. Congress should also pass specific legislation to address the flaws within AI tools, like creating standards for diversity within training sets. 

With responsible use, there is a world in which algorithms can be used for their initial intent — to correct the already flawed, biased system that we have. As significant flaws within our criminal justice system continue to draw attention, it’s important to reject the assumption that new technology is bad. With reforms guided by experts, we can make technology what we currently idolize it to be — impartial and immune to the prejudice that humans will always be vulnerable to. Despite significant challenges to overcome, I feel more hopeful than ever that we can not only peacefully coexist with these technologies but also utilize them to improve our world.

Anusha Rao is a freshman from Washington, D.C. studying Cognitive Science. She is part of the Artificial Intelligence Society at Hopkins. 

