
Government Accountability Office Issues a Report on DOJ and DHS Use of Facial Recognition Technology

by Michael Dean Thompson

Considering all the bad press surrounding Facial Recognition Technology (FRT) and its high-profile failures, it is striking that a recent report from the Government Accountability Office found that the seven federal agencies believed to be the largest consumers of commercial facial recognition services are using them largely without training requirements, accountability, or transparency. Strangely, while all seven agencies surveyed have their own policies regarding personally identifiable information (PII), such as facial images, all seven failed to fully comply with them. The report sadly did nothing to alleviate the justifiable fears of Americans concerned about civil rights abuses.

Historical Background

Facial Recognition Technology provides yet another dimension to identifying people. It can be thought of as operating in two modes. The first is Verification, where companies like Apple use FRT to verify that a user is authorized to access a system or place. For verification, the software only needs to check a provided facial image against a known, authorized image; in that sense, verification is a one-to-one comparison. In contrast, an FRT running in Identification mode must compare a given image (a “probe” image) to a vast number of database images that could number in the billions. A 0.1% failure rate therefore produces very different outcomes in the two modes: roughly one error per thousand verification attempts, but potentially enormous numbers of false matches when a single probe is compared against billions of images.
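That arithmetic can be sketched with hypothetical numbers (the 0.1% rate and the gallery size below are illustrative assumptions, not figures from the GAO report):

```python
# Illustrative sketch: the same per-comparison false-match rate plays
# out very differently in verification (1-to-1) and identification
# (1-to-many). All numbers here are hypothetical.

FALSE_MATCH_RATE = 0.001        # assumed 0.1% chance a single comparison falsely matches
GALLERY_SIZE = 1_000_000_000    # assumed billion-image gallery for identification mode

# Verification: one probe checked against one enrolled image.
expected_false_matches_verification = FALSE_MATCH_RATE * 1

# Identification: one probe checked against every gallery image.
expected_false_matches_identification = FALSE_MATCH_RATE * GALLERY_SIZE

print(expected_false_matches_verification)           # 0.001
print(round(expected_false_matches_identification))  # 1000000
```

Even a seemingly tiny error rate can thus surface on the order of a million candidate false matches per probe in identification mode, which is why a match should narrow a suspect pool rather than name a sole suspect.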

Humans are worse at recognizing faces than they realize, so FRT holds some real promise. Effective facial recognition systems can assist cops in eliminating large swaths of potential suspects, but they should never be used to identify a sole suspect and issue a warrant.

The National Institute of Standards and Technology (NIST) tested a list of FRT providers and issued a report in 2019 that found Microsoft and Amazon’s products to be among the very best available. Yet, in 2018 the American Civil Liberties Union (ACLU) ran the members of Congress through Amazon’s “Rekognition” system. It falsely matched 28 members of Congress to a database of mugshots. Sadly, while Congress is about 20% BIPOC, members with darker skin made up 40% of the false matches. This ties in closely with a study by Joy Buolamwini of the MIT Media Lab and Timnit Gebru, then a Microsoft researcher. The two examined three products, from Microsoft, IBM, and Face++, and found that the products misclassified women with darker skin between 20.8% and 34.7% of the time.

Despite the high failure rates, it appears that most states do not require cops to reveal that FRT was used to identify a suspect. As of this writing, at least nine people have been arrested or detained due to false FRT matches. The true number is likely far greater, however, as suspects are often not told exactly how they were selected from the crowd. And since cops treat an FRT identification as being as reliable as a fingerprint, the misidentification cases we are seeing tend to share a common feature: the cops arrested the suspects before verifying the ID.

Meanwhile, the largest provider of commercial FRT to cops is ClearView AI, with more than 2,000 law enforcement customers in 2020. Despite claiming to possess a massive gallery of more than 3 billion images scraped from the internet, its system has never been tested under controlled conditions, as Microsoft and Amazon’s systems were. Nevertheless, while ClearView AI brazenly provides free trials to untrained cops - without requiring departmental approval - Microsoft and Amazon at least temporarily stepped back from providing FRT services to law enforcement agencies during the pandemic. The same week Amazon announced its moratorium, IBM announced that it would halt research into facial recognition technology altogether, citing the risk it poses to civil rights.

In 2021 the Government Accountability Office presented a report to Congress that found 14 federal agencies employing law enforcement officers made use of FRT. Of those, 13 did not know - or have complete information about - which commercial systems were being used, who was using them, or how they were being used. This included agencies within the Department of Justice (DOJ) and Department of Homeland Security (DHS). Given the danger poorly trained agents armed with faulty, untested technology can present to citizens, more information was necessary.

Facial Recognition Technology and Federal Agencies Today

Given that historical context, Congress tasked the Government Accountability Office with reviewing the use of FRT for law enforcement purposes as well as “its effects on privacy, civil rights, and civil liberties.” [S2, 1-2] The GAO took on four objectives to achieve that goal. First, it would look solely into FRT use in criminal investigations between October 2019 and March 2022. Second, it would try to determine which agencies required training to use FRT and enforced compliance. Third, it would ask what steps were taken to address privacy concerns for citizens subjected to facial recognition probes. Finally, it wanted to know what policies were put in place to protect civil liberties and civil rights.

Although its previous report had found problems at 13 of 14 agencies, the GAO decided to focus on just seven DOJ and DHS agencies that reported using FRT in the previous report. That left out newer adopters and, potentially, agencies that had failed to report their FRT use. It likewise left out agencies in other departments that were using facial recognition technology. Yet what the GAO found within that restricted scope is alarming enough.

The GAO was only interested in the use of commercial FRT systems. So while the FBI makes use of the Next Generation Identification (NGI) system, an in-house technology that includes FRT features, that system’s use was ignored. Instead, the GAO looked at how the FBI and others make use of tools like ClearView AI. That the GAO was apparently uninterested in the efficacy of facial recognition solutions probably helps explain why NGI was not included; it appears to have been less concerned with privacy and personally identifiable information for internal systems.

It is important to note that in May 2022 - between the two GAO reports on facial recognition - President Biden issued an Executive Order directing the two departments of interest here to work with the National Academy of Sciences to study privacy, civil rights, and civil liberties with regard to FRT. In addition, the order dictated that the White House Office of Science and Technology Policy would work with the DOJ and DHS to generate a set of best practices.

7 Agencies, 4 Facial Recognition Systems

Of the seven agencies examined, two used more than one commercial system. To some degree, that is due to the focus of those systems. One FRT provider is IntelCenter, which offers an interactive “open-source” terrorist database of nearly 2.5 million faces. It is used solely by Customs and Border Protection (CBP). Marinus Analytics’s Traffic Jam analyzes images from the online sex market to identify victims of human trafficking. It is used by both CBP and the FBI.

The FBI also uses Thorn’s Spotlight to search the online sex market to locate children and their traffickers. Finally, ClearView AI’s system is used by all but CBP. That list includes the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF), Drug Enforcement Administration (DEA), U.S. Marshals Service, Homeland Security Investigations (HSI), U.S. Secret Service, and the FBI. Although HSI is part of Immigration and Customs Enforcement (ICE), only HSI has officially used ClearView AI. Nevertheless, it is possible that CBP or ICE could seek assistance from another agency such as the FBI to perform lookups of suspect images.

The agencies involved were surprisingly bad at tracking how they used the various FRT services, considering the potential ramifications for the rights of the people they were probing. For Marinus Analytics, neither the company nor the FBI tracked search data such as who performed a search, for what reason, or of what image. An agent could submit a probe photo of his newest girlfriend, and there would be no record of the privacy invasion.

Thorn has the rather odd-seeming feature of tracking only the last time a specific probe photo (a photo submitted by the client that is then matched against the database) was searched. If the image has been searched against the database numerous times, only the most recent search will be shown. That means a count of probe images used is not an accurate representation of the number of searches. CBP, in fact, could not track the searches it had performed on either of its services, Marinus Analytics or IntelCenter.
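The counting problem can be illustrated with a minimal sketch (assumed, simplified behavior, not Thorn’s actual implementation): a log keyed by probe image overwrites earlier entries, so tallying logged images undercounts total searches.

```python
# Minimal sketch of "last search only" logging (an assumed model, not
# Thorn's actual code): each new search of an image overwrites the
# previous log entry for that image.

last_search = {}    # probe image -> date of its most recent search
total_searches = 0  # what a complete audit log would have recorded

searches = [
    ("photo_A.jpg", "2021-03-01"),
    ("photo_A.jpg", "2021-06-15"),  # overwrites the March entry
    ("photo_B.jpg", "2021-07-02"),
    ("photo_A.jpg", "2022-01-10"),  # overwrites again
]

for image, date in searches:
    last_search[image] = date
    total_searches += 1

print(total_searches)    # 4 searches actually performed
print(len(last_search))  # only 2 distinct images survive in the log
```

Counting the surviving log entries would report two searches when four actually occurred, which is why the 63,000-search total discussed below is a floor, not a tally.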

With neither Marinus Analytics nor IntelCenter tracking searches and Thorn only tracking the most recent search of an image, the agencies performed a minimum of 63,000 searches (a clear undercount) between October 2019 and March 2022. Sadly, the statistics grow cloudier still. A note within the report [S2, 17] indicates that “in 2021, we found that some agencies did not track what systems staff used, and not all agencies have taken actions to address this issue.” Searches performed outside the investigatory period also were not counted. Even so, the tracked searches average out to 69 FRT searches per day.
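That per-day figure checks out as simple division over the roughly 30-month window (the exact day count below is my approximation, not a number from the report):

```python
# Rough check of the "69 searches per day" average: ~63,000 known
# searches over the window from October 2019 through March 2022.
# The day count is an approximation, not a figure from the report.

from datetime import date

window_days = (date(2022, 3, 31) - date(2019, 10, 1)).days + 1  # 913 days
per_day = 63_000 / window_days

print(window_days)     # 913
print(round(per_day))  # 69
```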

Facial Recognition Technology Training

The seven agencies combined had a known 63,000 probe photo searches during that two-and-a-half-year window. A shocking 60,000 of those were undertaken without any form of training requirement. The sole agency to implement a training requirement during the investigation period was Homeland Security Investigations, which did so on March 4, 2021. Yet of the 106 HSI staff who have since accessed ClearView AI’s service, 15 did so prior to training, and four remained untrained as of the end of the survey window. HSI told the GAO that it was unaware of anyone using the system without training and that it had conducted a single review but did not conduct periodic reviews.

The FBI is the largest consumer of FRT services, with nearly 35,000 searches on commercial systems during the investigation period. Almost half of the known searches were on ClearView AI, though the Thorn figure, being a known undercount, may have been significantly higher. As Marinus Analytics keeps no log, its count cannot be known, and NGI likewise was not included. That depth of use makes the FBI’s lack of any training requirement all the more surprising. There is a recommended 24-hour training course for using ClearView’s features, such as image manipulation to enable more matches.

Customs and Border Protection does not have any training requirement for facial recognition on either of the two systems it uses. The GAO states that part of the reason is that CBP does not believe its staff makes use of the facial recognition features of those products (and without logging, they will be free to continue believing that). CBP does require that staff take privacy training, “and staff could have taken facial comparison training for identifying fraudulent documents.” [S2, 23]

Personally Identifiable Information

One massive concern about FRT is what happens to personally identifiable information (PII). In 2020, New York Times reporter Kashmir Hill was looking into ClearView AI. As part of her efforts, she had some cop friends submit her photo to see what was shown. Those cops soon began receiving calls from ClearView representatives (who were avoiding talking to Hill) asking if the cops were talking to the media. The takeaway from this example is that ClearView was monitoring who was being probed, a significant concern for people who may be a close match for a criminal but are in fact innocent.

DOJ and DHS both have requirements that should reduce privacy concerns. Those policies include reviewing a tool for privacy issues prior to acquisition. The agencies are also to conduct a privacy impact assessment (PIA) and determine privacy needs before purchasing a product. Finally, they are to oversee privacy controls for the service with regard to contractor access.

None of the seven agencies fully addressed the privacy requirements. Both CBP and the FBI had determined a PIA was necessary but still had not completed one as of April 2023, despite using the systems for years. Only HSI determined whether “certain privacy requirements applied to their use of facial recognition services.” [S2, 32] Even when HSI, CBP, and the FBI did work toward addressing a specific privacy requirement, they did not do so according to policy requirements. Program officials at the seven agencies told the GAO that they did not address the privacy requirements for reasons that included not understanding that probe photos were submitted to the vendors, and that they had failed to coordinate with the privacy officials in their departments. Disturbingly, one excuse was that they were not aware that a photo submitted in an effort to identify a subject qualified as personally identifiable information.

Civil Rights and Civil Liberties Policies

As the largest apparent user of FRT, the FBI joins CBP, ATF, and the DEA in having no specific policies regarding how FRT is used. The FBI and CBP attempted to deflect the GAO somewhat by pointing to more general guidance on protecting civil liberties. CBP officials did at least reference a DHS memorandum covering First Amendment protections as a “source of guidance for staff using facial recognition technology.” [S2, 37] ATF and the DEA have both halted their use of FRT, making direct policies less relevant.

HSI, the Marshals Service, and the Secret Service did implement policies or guidance specific to civil rights and civil liberties. Both HSI and the Marshals Service have limited FRT use to active criminal investigations. HSI also allows FRT use for an “ongoing investigation relating to HSI’s statutory authorities” or as part of some other program or task force where the impacts have been assessed. In addition, HSI explicitly limits how probe photos may be collected during First Amendment activities like protests.

Much like the DEA and ATF, the Secret Service has halted its use of FRT. Even so, it did generate guidance on FRT usage and limits in April 2023. The examples of HSI and the Secret Service stand in stark contrast to the FBI, which despite being the biggest consumer of FRT services has produced nothing comparable.

It seems no agency investigated for the GAO report emerged unscathed. HSI fared the best, though that is a very low bar, especially compared to how the FBI performed. Meanwhile, the DOJ lacks a policy indicating whether a facial recognition match can result in a warrant. [S2, 40] Anyone familiar with the challenges of individual identification via facial recognition technology should find chilling the absence of a policy preventing the issuance of a warrant based solely on an FRT match.

Facial Recognition Technology is one area where technological capability has far outpaced both law and policy. That gap becomes an ever greater challenge as America is increasingly surveilled by ubiquitous cameras. In some states, FRT use is a near free-for-all, while a few cities have imposed restrictions and even bans. That patchwork leaves Americans with no real certainty as to how they would be treated if ClearView AI, a commercial, untested technology, were to identify them as a suspect, or whether the operator using the equipment is sufficiently trained to recognize false matches.

Sources: nytimes.com, EFF.org, GAO.gov, techdirt.com
