
Police Sketch Bot Arrives

by Carlos Difundo

It is one of those things that seems like a great idea at first but becomes something very different once in place. That happened when two coders created Forensic Sketch AI-rtist. The tool was simple enough given the capabilities of OpenAI’s DALL-E 2 image-generation model: collect a list of facial features from a witness, just as sketch artists have done for ages, and pass it to the AI, which converts the features into an image in moments rather than hours. It would save police time and provide “hyper-realistic” images at the crime scene.
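The workflow described above amounts to assembling witness-described features into a text prompt for an image-generation model. The sketch below illustrates only that prompt-assembly step; the `build_sketch_prompt` helper, the feature names, and the example witness report are all hypothetical illustrations, not the actual Forensic Sketch AI-rtist code.

```python
def build_sketch_prompt(features: dict) -> str:
    """Combine a witness's feature descriptions into one text prompt
    for an image-generation model (hypothetical helper)."""
    parts = [f"{attribute}: {description}"
             for attribute, description in features.items() if description]
    return "Hyper-realistic police sketch of a face. " + "; ".join(parts)

# Invented example of a witness report, for illustration only
witness_report = {
    "face shape": "oval",
    "hair": "short, dark",
    "eyes": "brown, deep-set",
    "distinguishing marks": "scar on left cheek",
}

prompt = build_sketch_prompt(witness_report)
# The resulting prompt would then be sent to an image model such as
# DALL-E 2 through its API; that network call is omitted here.
```

Even this toy version hints at the article's concern: whatever biases the underlying model carries are baked into every image it returns, regardless of how carefully the prompt is assembled.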

As it turns out, the project is rife with problems. The first revolves around AI bias. Ask DALL-E 2 to draw a CEO, and more often than not the CEO is white. Biases like that are not always easy to discover, yet they clearly exist and remain an important problem in AI research. It may take thousands of iterations for a researcher to notice that certain combinations of facial features are typically drawn with a frown, increasing the sense of menace.

No doubt, the perpetrator of a violent crime is menacing. But researchers have found that people remember faces holistically, not by individual features. A frown, or a seemingly angry face, could draw a witness into a misidentification, since it is the emotion that sticks out most. People’s memories have repeatedly been shown to be easily influenced, especially during emotional moments. Meanwhile, nearly 25% of wrongful convictions that have been overturned by DNA evidence were due to bad forensics, including misleading police sketches. Jennifer Lynch, the Surveillance Litigation Director of the Electronic Frontier Foundation, said, “The problem with traditional forensic sketches is not that they take time to produce (which seems to be the only problem that this AI forensic sketch program is trying to solve). The problem is that any forensic sketch is already subject to human biases and the frailty of human memory.”

In a report on facial recognition, the Center on Privacy and Technology said, “Since faces contain inherently biasing information such as demographics, expressions, and assumed behavioral traits, it may be impossible to remove the risk of bias and mistake.”

Source: vice.com

 

 
