
Facial Recognition Run-Down

by Anthony W. Accurso 

Facial recognition is a technology that is rapidly evolving, aided by transformative gains in artificial intelligence and camera resolution, as well as the proliferation of ubiquitous surveillance systems—by both government and corporate actors—which provide the volume of data necessary to train facial recognition systems and create the financial incentive for vendors to innovate.

Due to this rapid development, the industry frequently changes the terms it uses to describe how facial recognition is employed, often tailoring them to new use cases and marketing trends.

The Electronic Frontier Foundation (“EFF”) has attempted to standardize some of these terms to educate the public about how facial recognition is sometimes used, and how these uses present a “menace to our essential freedoms.”

Face Detection

Face detection forms the bedrock of facial recognition in that it is always the first step in the process. Before a face can be “recognized,” software must determine that an image contains one or more faces.

The EFF ranks this process as non-threatening because it “does not raise significant privacy concerns.” It has been used to make public records or other disclosures more efficient because software can now automatically blur faces in photos and videos to comply with privacy laws. As long as the software does not index or retain data about the faces it blurs, this process poses no serious threat to privacy.
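As an illustration of how such automated blurring might work, here is a minimal sketch using OpenCV's bundled Haar-cascade face detector. The file names are placeholders, and the script keeps no record of the faces it finds.

```python
# Minimal face-detection-and-blur sketch using OpenCV's bundled Haar
# cascade detector. "photo.jpg" and "photo_blurred.jpg" are placeholder
# file names; no face data is indexed or retained.
import cv2

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Blur each detected face region in place, then save the redacted copy.
for (x, y, w, h) in faces:
    image[y:y + h, x:x + w] = cv2.GaussianBlur(
        image[y:y + h, x:x + w], (51, 51), 0
    )

cv2.imwrite("photo_blurred.jpg", image)
```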

Face Printing

This step involves taking a detected face and using an algorithm to automate the “[t]ranslation of visible characteristics of a face into a unique mathematical representation [or pattern] of that face.”

It is this mathematical pattern that actually gets compared in any subsequent recognition process, due to the way computers process information digitally. The algorithms that create and compare these patterns are what allow computers to perform the “trick” of recognizing faces. However, this is also where they fail most often, and research shows that they fail more often when analyzing the faces of women and Black, Indigenous, and People of Color (“BIPOC”) persons.
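In practice, the “pattern” is a vector of numbers (an embedding), and two faces are declared a match when the distance between their vectors falls under some threshold. The sketch below illustrates that comparison with invented four-number vectors and an arbitrary cutoff; real systems use much longer vectors produced by a trained neural network.

```python
# Illustration of comparing two hypothetical "face prints" (embedding
# vectors). Real systems produce these vectors with a trained neural
# network; the values and threshold here are made up for illustration.
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Smaller distance = more similar face prints."""
    return float(np.linalg.norm(a - b))

print_a = np.array([0.12, -0.48, 0.33, 0.91])   # hypothetical face print
print_b = np.array([0.10, -0.50, 0.35, 0.88])   # hypothetical face print

MATCH_THRESHOLD = 0.6  # arbitrary cutoff; real systems tune this value

distance = euclidean_distance(print_a, print_b)
print("match" if distance < MATCH_THRESHOLD else "no match", distance)
```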

Face Matching

Matching involves comparing face patterns. Some form of matching is what many people picture when they hear the term “facial recognition,” but the uses of matching, and their implications for freedom, vary widely.

“Face verification” matches the pattern from an unverified person against the patterns of a pool of pre-registered persons whose identities are known. Benign uses include verifying an employee’s identity before granting access to sensitive resources or unlocking our own smartphones.

“Facial identification” is similar, in that a pattern from an unknown person’s face is matched against a set of patterns for people whose identities are known, in order to discover who that person is. The Federal Bureau of Investigation (“FBI”) recently used this tactic to match images of January 6 insurrectionists to their social media profiles.
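Under the hood, identification is a one-to-many search: the unknown print is compared against every print in a gallery of known identities, and the closest match under a distance threshold is reported. The following sketch uses invented names, vectors, and threshold.

```python
# Hypothetical one-to-many "face identification" search: compare an
# unknown face print against a gallery of known identities and report
# the closest match below a distance threshold. All data is invented.
import numpy as np

gallery = {
    "person_a": np.array([0.12, -0.48, 0.33, 0.91]),
    "person_b": np.array([-0.75, 0.22, 0.10, -0.31]),
    "person_c": np.array([0.40, 0.05, -0.62, 0.18]),
}
unknown = np.array([0.11, -0.47, 0.36, 0.90])

MATCH_THRESHOLD = 0.6  # arbitrary; looser thresholds mean more false matches

name, distance = min(
    ((n, float(np.linalg.norm(unknown - p))) for n, p in gallery.items()),
    key=lambda item: item[1],
)
print((name, distance) if distance < MATCH_THRESHOLD else ("no match", distance))
```

The looser the threshold, the more false matches the search will return, which is exactly how innocent people end up flagged.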

“Face tracking” is used for following a person through a physical space by matching their facial pattern to images collected by variously placed cameras, like a CCTV network. This tracking may also occur weeks or months after the images were originally captured.

“Face clustering” can be similar to identification or tracking but is applied to images containing multiple faces.
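A rough sketch of how clustering might group face prints without knowing anyone’s name, using scikit-learn’s DBSCAN on invented vectors; prints that land in the same cluster are treated as the same, still unidentified, person.

```python
# Hypothetical "face clustering": group face prints so that prints of
# the same (unknown) person fall into one cluster. Uses scikit-learn's
# DBSCAN; the vectors and parameters are invented for illustration.
import numpy as np
from sklearn.cluster import DBSCAN

face_prints = np.array([
    [0.12, -0.48, 0.33, 0.91],   # photo 1
    [0.11, -0.47, 0.36, 0.90],   # photo 2 (likely the same person as photo 1)
    [-0.75, 0.22, 0.10, -0.31],  # photo 3
])

labels = DBSCAN(eps=0.5, min_samples=1).fit_predict(face_prints)
print(labels)  # prints sharing a label are treated as the same person
```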

Matching in general is a minefield for potential corporate or government intrusions or abuses. It could allow companies to gather data on a person’s cash purchases, or it could enable persistent government surveillance that eliminates much of the privacy we take for granted.

Face Analysis

While many facial recognition systems involve comparing one face pattern to another, some systems are used to draw conclusions about details of a single face. This is called face analysis, but it is also sometimes known as “face inference” because vendors claim to infer various characteristics of a person, including demographics (gender, race, ethnicity, sexual orientation, age), emotional or mental states (like anger or deceit), behavioral attributes, and even criminality.

This application is pseudoscientific at best, since many of the inferred characteristics vary greatly from one person to the next, and getting them wrong can have harmful consequences. Further, research has shown that emotional expressions are not universal and can vary significantly “based on culture, temperament, and neurodivergence.”

Invasive yet relatively benign uses of this technology could involve systems that detect intense grief or depression and attempt to refer a person to mental health counseling. Or a company could use the system to monitor customer reactions to improve experiences instead of relying on customer feedback surveys.

A real-world example is the Department of Homeland Security’s FAST program (Future Attribute Screening Technology), which uses face inference to detect “mal-intent” and “deception” in people at airports and borders. According to the EFF, “[t]hese systems are extremely biased and nowhere near reliable, yet likely will be used to justify excessive force or wrongful detention.”

Also horrifying is the use of face inference software in Chinese prisons to monitor emotional states of prisoners to minimize “disruptions.”

Making inferences about a person’s mental state from their facial expressions is a task at which humans consistently overrate their own abilities, and computers are far worse at it. There is simply no way to control for variables like cultural influences, because artificial intelligence (“AI”) systems cannot accurately determine your culture from an image of you in order to adjust the expected expressions for your specific emotional states.

Because this kind of facial recognition is invasive at best and horrifyingly abusive at worst, corporate and government use of facial inference algorithms should be banned entirely, or at least until they can be demonstrated to be extremely accurate.

Summary

Facial recognition systems have crept into our lives in various ways, some of which are helpful or benign, but many of which are invasive, abusive, or predatory. This tech has been used to track protesters, identify them from their social media profiles, and follow their movements around town.

Several recent U.S. Supreme Court decisions have addressed tracking a person’s location. These cases presumed we have an expectation not to be surveilled at all times while in public, but future decisions could go the other way if Americans come to take ubiquitous surveillance for granted, a shift that would entirely undermine our right to our private lives.

Further, at least three people have been falsely accused of crimes—that we know of; there may well be more—due to a false match by these algorithms. It’s also no coincidence they were Black men, as recognition tech is known to have higher error rates with BIPOC faces. Even under laboratory conditions—good lighting and faces looking directly at the camera—these algorithms struggle with BIPOC faces, and things get worse under “real-world” conditions. Using systems this faulty serves only to perpetuate already rampant and systemic inequities.

The EFF supports a national biometric information privacy act limiting unconsented collection of facial biometric data by companies and preventing certain uses by government agents, and over a dozen cities have banned the use of facial recognition tech.

It is often difficult to put a tech genie back in its metaphorical bottle, but our freedoms—even democracy itself—may depend on it. 

 

Source: eff.org
