
The Battle Against CSAM: The Front Line of the Government’s War on the Fourth Amendment

by Anthony W. Accurso

Few topics elicit the level of disgust, outrage, and hyperbole in America that the sexual abuse of children does. No child should be subjected to sexual assault, but our collective efforts to prevent this harm should be mindful of other values we hold as a society. Wholesale sacrifice of those other values would bequeath to those same children a world far worse than one where sexual assault goes unpunished.

Technology matures at such a rate that it is difficult for legislators and law enforcement to keep up. In the past few decades, the internet and related computer technologies have exposed the prevalence of child sexual abuse in America (and around the world) as evidenced by the spread of visual depictions of this abuse.

“It’s important to remember that child pornography is really a reflection of child sexual molestation. A molestation had to occur and somebody had to photograph, videotape, document, memorialize the abuse,” said Michelle Collins of the U.S. National Center for Missing and Exploited Children (“NCMEC”). “So the problem with child sexual molestation has always existed. We’re now seeing it with our own eyes because the people are taking photographs, videos and sharing them online.”

Various actors have taken steps to address this problem, though such efforts lack coordination, efficacy, or even a fundamental understanding of the causes of child sexual abuse. Out of frustration, government entities have engaged in whatever tactics they feel will facilitate the capture and prosecution of abusers, regardless of the implications for the constitutional rights of citizens. Lawmakers and courts have justified these tactics with post-hoc rationalizations, secure in the belief that doing whatever is necessary to suppress child abuse will be supported by a majority of Americans.

But as these tactics are applied to more people, dissenting voices criticize these actions as unconstitutional attacks on privacy and freedom. The internet may still at times seem like a “Wild West” of lawlessness, but things have changed radically since millions of Americans got online in the 1990s.

It is both important and necessary that we have a conversation about how we have arrived at this point and the implications for a future where personal privacy and the rights of citizens have been eroded in the pursuit of abolishing child sexual abuse material (“CSAM”). This discussion would be incomplete without a full understanding of the efforts to suppress CSAM and how these efforts relate to our fundamental conception of constitutional rights.

America Leads the Way

Efforts to suppress CSAM have their roots in two distinct cultural movements in the United States: the largely religious anti-pornography movement and the shift in cultural attitudes about who deserves our compassion, often expressed in terms of civil rights. These two forces are often at odds, both in ideology and stated goals, but they have occasionally joined forces and changed the policy landscape in America and around the world.

In his book, The Better Angels of Our Nature, cognitive psychologist, psycholinguist, and author Steven Pinker chronicles the rise of the notion of “rights,” as well as the expansion of groups deserving of those rights. It wasn’t until roughly the 1980s that our conception of children began to change in a way that regarded them as “special” and not just as “little people.” This shift began to mark childhood as valuable and fragile in ways that were previously inconceivable. Consider, by contrast, how little compassion was afforded to the children in the novels of Charles Dickens by their adult contemporaries.

The result of this reconceptualization has been “helicopter parenting” on an individual level and movements to “save the children” at the societal level. What the children require saving from, exactly, is often a matter of fierce debate. Frequently topping the list of threats is sexual assault, followed by exposure to hardcore pornography, which is believed to cause a range of developmental dysfunctions, depending on the age of exposure and the type of material involved.

The anti-pornography movement originally grew out of religious opposition to sexually explicit material, which became increasingly available in the United States in the 1950s. Groups such as “Morality in Media” and Jerry Falwell, Sr.’s “Moral Majority” sought to influence government officials for the purpose of eradicating “commercialized obscenity,” and several high-profile prosecutions occurred. The U.S. Supreme Court’s ruling in Miller v. California, 413 U.S. 15 (1973), linked the definition of obscenity to “contemporary community standards,” paving the way for the U.S. Department of Justice (“DOJ”) to crack down on “offensive material.” The GOP’s platform in 1992 called for a “national crusade against pornography,” leading anti-porn activists to feel a sense of victory.

“And then everything shifted, thanks to two factors,” said Patrick Trueman, who led the DOJ’s obscenity strike force, “the rise of the internet and Bill Clinton’s presidency.”

The DOJ under Clinton shifted away from prosecuting pornography in general and focused instead on suppressing CSAM and limiting children’s exposure to pornography. In the meantime, pornography on the internet exploded, and the only law designed to protect children that survived Supreme Court review was the Children’s Internet Protection Act of 2000, which set the low bar of mandating pornography filters in schools as a condition of federal funding.

While the anti-pornography movement hoped that the election of George W. Bush would bring a renewed focus on obscenity prosecutions, the Bush DOJ would eventually devote the vast majority of its efforts to fighting terrorism after the attacks of 9/11. Thereafter, the only significant prosecutorial efforts would involve pornography featuring children.

While several U.S. Supreme Court decisions since Miller upheld adult access to pornography in terms of free speech protections, the Court in New York v. Ferber, 458 U.S. 747 (1982), held that “child pornography — visual depictions of actual children engaged in explicit sexual conduct — was categorically devoid of First Amendment protection,” according to James Weinstein in The Context and Content of New York v. Ferber. The only limitation placed on such prosecutions was that the images involved in the case had to depict actual minors, as opposed to fictional children. See Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002). Though pornography depicting fictional minors is still classed as obscenity, it is punished less harshly.

From the early days of the internet until now, responses have evolved through three approximate stages delineated by the type of tactics used to investigate and suppress CSAM.

The First Era:
The Early Days of the Internet

The internet began as the Advanced Research Projects Agency Network (“ARPANET”), conceived in the late 1960s, and by 1986, it was connecting several U.S. universities for the purpose of researching resilient communications networks for the military. However, the advent of commercial internet service providers in 1989 shifted the internet’s purpose away from research toward a commercial model, broadening access. Companies — especially early digital media companies — saw the internet as a way to reach customers as more people bought home computers and went online. But growth skyrocketed in the early 1990s when curated media companies like America Online (“AOL”) and CompuServe began distributing software that made connecting to the internet easy.

Suddenly, millions of Americans were stumbling onto a new world with unfamiliar modes of communication. Various protocols were in operation that formed the backbone of web pages, chat rooms, and an early form of message board known as “newsgroups.”

With this influx of population, pornography — both commercial and private — became ubiquitous. This included CSAM, which at the time, was often scanned images of commercially available magazines from Europe. However, as digital cameras dropped in price, this material also included graphic images of children being sexually assaulted in their own homes.

During this time, law enforcement tactics used to investigate and prosecute online CSAM possession, distribution, and production were very similar to the way that offline prosecutions occurred. Police would receive a tip about a person having or distributing CSAM, they would research and verify the complaint, and then track down the responsible party. It was relatively easy to link an IP address to a specific person and household where a search warrant could be executed, and prosecution could proceed when the police confirmed the presence of CSAM on the person’s computer.

From a constitutional perspective, such prosecutions were not treading new ground. While courts debated whether certain images were, in fact, pornographic images featuring children, and therefore excluded from First Amendment free speech protections, very little debate occurred around other constitutional concerns, such as limits to government searches pursuant to the Fourth Amendment.

Once law enforcement linked the distribution or receipt of CSAM to an IP address, they could obtain a warrant or subpoena to link that address to a subscriber’s home. Then a separate warrant could be obtained for the search of that home, including provisions to seize electronic devices likely to contain evidence of CSAM. While police rooting around indiscriminately in someone’s computer wouldn’t become problematic until Riley v. California, 134 S. Ct. 2473 (2014), such search warrants were considered routine by the magistrates who issued them and district court judges who upheld them.

The postal service and telephone technology both served as reliable analogs in this burgeoning digital era, with decades of case law to guide courts in determining where the boundaries of the Fourth Amendment lie.

Then, after passage of the Communications Decency Act in 1996, commercial service providers were incentivized to clean up material posted on their platforms. Part of this act stated that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230.

Prior to this law, content providers like AOL were reluctant to publicly acknowledge the existence of CSAM on their networks because they feared prosecution and civil litigation for having “published” it. Once absolved of this liability for content uploaded by their users, these companies ramped up efforts to identify offensive or illegal content (including CSAM) and to purge it from their servers.

The most notable example of this was the community action teams organized to police AOL’s networks. These employees facilitated the removal of illegal content from AOL’s servers by responding to complaints from users, which included tips about illegal content, and made referrals to law enforcement where appropriate.

While this system worked as expected, it was expensive. Scaling the program meant hiring more employees and paying the premium involved with labor costs. AOL and similar companies wanted to remove illegal content from their servers, but they did not want to pay extravagantly for this endeavor. New, more automated solutions were needed to address this ongoing problem.

Also influencing this era were the Missing Children’s Assistance Act of 1984 and the resulting establishment of NCMEC to address what was perceived as an epidemic of child kidnappings and brutal murders. Public figures, such as John Walsh of the TV show America’s Most Wanted, touted the number of children kidnapped by strangers as being approximately 50,000 per year. The Denver Post won a Pulitzer Prize in 1985 for “The Truth About Missing Kids,” a series of stories laboriously debunking the statistics that caused widespread alarm, finding that the actual number of stranger-kidnapped children (not runaways or pawns of vicious custody battles), according to FBI documentation, was fewer than 100 per year.

According to Lenore Skenazy of Reason magazine, NCMEC was responsible for perpetuating debunked figures by continuing to lie to Congress that the number was similar to Walsh’s claim of 50,000 per year. They further stoked the public’s fears by printing images of missing children on over three billion milk cartons, though only the most attentive person would have noticed that all the photos were of “the same 106 faces,” according to The Atlantic.

Thanks to its relentless publicity efforts, NCMEC was able to raise sufficient money from private donors to create its CyberTipline program in 1998, which provided electronic service providers a central authority for the submission of tips relating to instances of CSAM. In its first year, the service received 3,000 submissions. A decade later, the service logged over 100,000 submissions. It was clear the problem of CSAM was not going away anytime soon.

Contemporaneous to the explosion of internet access came the proliferation of various cryptography-based tools that would not only change the way that nearly all internet activities were transacted but would also start a technology arms race between private actors and government agencies.

Initially, the U.S. added cryptography software to the list of munitions whose export was prohibited, with severe statutory penalties for its violation. Phil Zimmermann — creator of Pretty Good Privacy (“PGP”), one such cryptography program — was investigated for sharing the code of his software in the mid-1990s.

Public key cryptography (of which PGP was an example) would eventually be used to secure almost all transmissions between providers and users, preventing malicious parties from capturing credit card numbers and other sensitive data when users shopped online. Because these systems are so secure, malicious actors (including governments) now find it easier to attack a company’s servers directly, instead of trying to eavesdrop on customer interactions.

The Dark Web was also conceived in the late 1990s by researchers at the U.S. Naval Research Laboratory for the purpose of allowing users to mask their identities, a crucial tool for spies, dissidents, and journalists. The first servers running The Onion Router (“Tor”) software came online in October 2002, but its popularity didn’t take off until the software was paired with a hardened web browser based on Mozilla Firefox. The Tor Project is now a private, 501(c)(3) non-profit funded by a variety of groups ranging from the Electronic Frontier Foundation (“EFF”) to the U.S. Department of Defense.

The U.S. Government is ambivalent about such technology. On one hand, law enforcement folks (FBI, DOJ, and DHS) are constantly pushing to weaken these systems, often by requesting or legally mandating “backdoor” access. On the other hand, the DOD and State Department recognize the value of these tools being robust and secure.

Testifying in federal court in relation to a suppression hearing, Christopher Soghoian — former principal technologist for the ACLU’s Speech, Privacy, and Technology Project — described the government’s purpose in funding Tor as follows:

“If you only have naval investigators using Tor, then the moment a website receives someone coming from Tor … they know that it is the U.S. government. So the creators of Tor have a phrase they use, and they use it in research papers and elsewhere, is that anonymity loves company. If you want to have a technology that lets people blend into the crowd, you need a crowd. And so the creators of Tor from day one knew that there would be uses of Tor that society would love and uses of Tor that society would not love as much.”

Compare this to the efforts of the U.S. government to undermine public access to cryptography. Periodic scandals, such as the Clipper Chip controversy, would erupt frequently enough to convince the American people they were being spied on, though few beyond academics and professionals seemed interested in changing their behavior.

This appears to have changed since the revelations made by Edward Snowden in 2013. Most notable of these revelations in relation to this article was “Bullrun,” an NSA program whose goal was to compromise all public use of cryptography available at the time. The program cost more than $250 million per year beginning in 2011, much of which went to clandestine efforts to convince companies to weaken the encryption systems they have made available to the public.

It was reported in 2013 that RSA Security — a company whose cryptography products “protect” over a billion people worldwide — accepted $10 million from the NSA to use an insecure random number generator as part of its software toolkits. According to The New York Times, this was consistent with Bullrun’s mission to “[i]nsert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communication devices used by targets.” Of course, “targets” being anyone who isn’t working for the NSA.

Soghoian, along with security expert Bruce Schneier, inferred from Snowden’s leaks that the NSA had exploited a flaw in the RC4 algorithm behind at least 50% of the SSL/TLS traffic (the protocol websites used to secure transmissions with their users) at the time.

It was fallout from the Snowden leaks that seems to have moved the needle on users opting for more privacy-centric solutions. Other high-profile privacy breaches, like Facebook’s Cambridge Analytica scandal, have undermined consumer confidence further, and major platforms are now having to implement the kinds of widespread encryption the government so often tries to undermine.

The Second Era:
Scalable Mitigation and Government-Sponsored Hacking

After nearly a decade of watching the number of CyberTipline submissions climb impossibly high, and being similarly dismayed at law enforcement’s inability to meaningfully stem the tide of CSAM, private organizations developed tools to mitigate its proliferation. These tools were initially the product of corporations — usually partnering with universities — but were later enabled for law enforcement access. It was also during this era that government cyber security experts began to behave like traditional “hackers” in an attempt to catch distributors of CSAM.

 

Hashes and Cloud Scanning

NCMEC does more than receive tips about CSAM on electronic platforms and generate referrals for law enforcement. It also sorts through the CSAM submitted by providers to identify victims, often for the purpose of rescuing them from their abusers, and it coordinates efforts to obtain victim impact statements that can be used during sentencing for individuals found in possession of CSAM.

Due to the rapidly increasing number of submissions, NCMEC had to develop a system to determine if an image had been previously detected, and if so, which victim it was linked to. Images not previously linked were to be prioritized for interdiction and victim rescue.

NCMEC adopted a “hashing” system to address this issue. Hashing involves taking a large file and performing an analysis based on a mathematical function — something a computer can do quickly — which results in a unique string of letters and numbers (the “hash”) that is much shorter. For instance, the MD5 algorithm generates 128-bit hashes that are expressed as a 32-character string.
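As a rough illustration of what this looks like in practice, the short Python sketch below computes an MD5 digest of a file using the standard hashlib module (the filename is purely illustrative):

```python
import hashlib

def md5_hash(path: str) -> str:
    """Return the 32-character hexadecimal MD5 digest of a file."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        # Read in chunks so even very large files use little memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(md5_hash("example.jpg"))  # prints something like '5d41402abc4b2a76b9719d911017c592'
```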

According to reporting by the BBC, by 2014, NCMEC had catalogued a database of over 720,000 unique images of CSAM.

Importantly, hashes cannot be reversed, which means NCMEC can share its database of hashes of known CSAM with its partners (law enforcement and trusted electronic service providers) without inadvertently leaking CSAM. Police can use this hash database from known images to quickly scan computers and cloud services for CSAM.
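A minimal sketch of how such a scan might work in principle, comparing each file's digest against a set of known hashes (the hash-list file and directory names are hypothetical, and real forensic tools are far more sophisticated):

```python
import hashlib
from pathlib import Path

# Hypothetical list of known hashes, one hex digest per line.
known_hashes = set(Path("known_hashes.txt").read_text().split())

def scan_directory(root: str) -> list[Path]:
    """Return files whose MD5 digest appears in the known-hash set."""
    matches = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            if digest in known_hashes:
                matches.append(path)
    return matches

print(scan_directory("/evidence/drive_image"))
```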

This system worked relatively well for a number of years until NCMEC staff started noticing CSAM images that were visually indistinguishable from previously detected images, but resulted in different hashes. They learned that CSAM purveyors had begun to change one pixel in each image to fool the hashing systems. This minuscule change could not be perceived by the human eye, but would result in a different hash value.
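The weakness is easy to demonstrate: flipping even a single bit of a file (the digital equivalent of a one-pixel tweak) produces a completely different exact-match digest, as in this sketch:

```python
import hashlib

original = bytearray(b"...imagine these are the bytes of an image file...")
altered = bytearray(original)
altered[0] ^= 1  # change a single bit

print(hashlib.md5(original).hexdigest())
print(hashlib.md5(altered).hexdigest())  # an entirely different digest
```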

To resolve this loophole, Microsoft partnered with researchers — including Dr. Hany Farid, a professor at the University of California, Berkeley, whose research focuses on forensics, image analysis, and human perception — to develop a program called “PhotoDNA,” which went online in 2009.

According to Microsoft’s press releases, PhotoDNA works like this: “The original PhotoDNA helps put a stop to this online recirculation [of CSAM] by creating a ‘hash’ or digital signature of an image: converting it into a black-and-white format, dividing it into squares, and quantifying that shading. It does not employ facial recognition technology, nor can it identify a person or an object in the image. It compares an image’s hash against a database of images that watchdog organizations and companies have already identified as illegal.” As of 2018, the Internet Watch Foundation, “which has been compiling a reference database of PhotoDNA signatures, now has 300,000 hashes of known child sexual exploitation materials.”

Importantly, PhotoDNA allows for “fuzzy” matches. While earlier hashing programs were all-or-nothing, PhotoDNA will recognize an image even if it has had a few pixels altered, a face blurred, or an added watermark.
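PhotoDNA’s algorithm is proprietary, but the general idea of a perceptual, “fuzzy” hash can be sketched with a simple average hash: shrink the image, compare each pixel to the mean brightness, and treat two images as a match when their bit-strings differ in only a few positions. The threshold of 10 bits below is an arbitrary illustration, not PhotoDNA’s actual behavior:

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: 64 bits, one per pixel of an 8x8 grayscale thumbnail."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return int(bits, 2)

def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two hashes differ."""
    return bin(a ^ b).count("1")

# Small edits (a blurred face, a watermark, a changed pixel) move the hash
# only slightly, so near-duplicates still register as matches.
is_match = hamming_distance(average_hash("a.jpg"), average_hash("b.jpg")) <= 10
```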

Also by 2018, Microsoft had developed “PhotoDNA for Video,” which scans a CSAM video and generates a unique PhotoDNA hash for each frame. Then, when any video is uploaded to a cloud service employing PhotoDNA for Video, each frame of the video is scanned and compared to the database. This means the system can detect a 30-second CSAM clip embedded in an hours-long superhero movie far faster than a human could.
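A rough sketch of the per-frame approach, using OpenCV to step through a video and hash each frame (an exact digest stands in here for the proprietary PhotoDNA signature):

```python
import hashlib

import cv2  # pip install opencv-python

def frame_hashes(video_path: str) -> list[str]:
    """Return one digest per frame; a real system would use a perceptual hash instead."""
    hashes = []
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        hashes.append(hashlib.md5(frame.tobytes()).hexdigest())
    capture.release()
    return hashes

# Any frame that matches the known-hash database flags the whole upload,
# even if the offending clip is buried inside an hours-long movie.
```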

While this system sounds ideal, there are two main drawbacks: First, this is a proprietary tool developed by Microsoft, and the company is unwilling to release its source code or database of hashes to anyone but its “most trusted partners.” As of 2014, this list included the largest online service providers: Google, Apple, Dropbox, Facebook, and Twitter. However, this service is provided for free to any site or server hosted by Microsoft on its Azure cloud service, making it simple for smaller organizations to avail themselves of the service — willingly or not, as it comes with the cost of hosting on Azure.

There are important caveats to this list, though. Apple did not start scanning uploads to iCloud until 2020, and Google and Dropbox generate NCMEC notices only when they detect CSAM being shared with other users, not immediately after it is uploaded (as of 2019, according to The New York Times). Notably absent is Amazon, whose Amazon Web Services is the largest cloud services provider in the world, yet it does not scan its platform for CSAM the way Microsoft does on Azure.

Smaller providers, who are not on the trusted partners list, can still use Microsoft’s PhotoDNA cloud service. Using an application programming interface (“API”), smaller providers can program their servers to transmit each uploaded photo and video to Microsoft’s servers for instant scanning with PhotoDNA. While this protects Microsoft’s intellectual property, it comes at a cost of time and bandwidth, something smaller platforms may not be able to afford while remaining competitive.
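In rough terms, such an integration looks like the sketch below: the provider forwards each upload to the scanning service and acts on the verdict. The endpoint URL, field names, and response format are hypothetical placeholders, not Microsoft's actual API:

```python
import requests  # pip install requests

SCAN_URL = "https://scanning-service.example/v1/match"  # hypothetical endpoint
API_KEY = "provider-api-key"                            # hypothetical credential

def upload_is_csam(image_bytes: bytes) -> bool:
    """Send an uploaded image to the scanning service; True means it matched known CSAM."""
    response = requests.post(
        SCAN_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": ("upload.jpg", image_bytes, "image/jpeg")},
        timeout=10,
    )
    response.raise_for_status()
    return bool(response.json().get("is_match", False))
```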

Second, there is a very real problem with hash database poisoning. Intentionally or not, services or staff who are authorized to identify an image as CSAM may classify an offensive or pornographic image as CSAM despite the person featured not being underage. While this is less of a problem for CSAM featuring prepubescents, U.S. law classifies an image as CSAM even if it features a “victim” that is 17 years and 364 days old.

Somewhat famously in 2009, federal prosecutors brought a case against a New York man returning from visiting family in Venezuela after Customs agents discovered a pornographic DVD in his luggage labeled “Little Lupe The Innocent — Do Not Be Fooled By Her Baby Face.” Prosecutors solicited testimony at trial by a pediatrician who insisted the female in the video was underage.

However, the man’s defense attorney tracked down the woman from the video, Zuleidy Piedrahita, also known as Lupe Fuentes, a well-known Colombian music producer who stars in pornography under the stage name “Little Lupe.” After getting a free plane ticket from the U.S. Marshals to attend the trial, she testified for the defense, establishing the date the video was made and providing her passport and photo ID as proof she was a legal adult, although a very petite one.

Unfortunately, it took over a year — which the accused man spent entirely in custody — to clear up this issue, one that could have been easily solved by googling “Little Lupe.” NCMEC staff are unlikely to be qualified pediatricians (but as we’ve seen, even pediatricians aren’t able to reliably determine a person’s age by visual inspection alone), yet they are often responsible for determining if a person in a video or image is an underage victim and then adding such images to the database of “known CSAM.”

Other providers have released their own tools in recent years that operate similarly to Microsoft’s PhotoDNA. Facebook developed its own algorithm, known as PDQ, and released it publicly in August 2019. Researchers at Monash University in Victoria, Australia — with assistance from Australian Federal Police — tested the algorithm and found that it performed roughly as well as PhotoDNA. Unfortunately, though it is widely available, a database of hashes for use with it is not.

Both Google and a lesser-known company named Thorn released AI-driven tools that claim to go beyond hashes and attempt to use machine learning to recognize CSAM. However, Google has yet to enable access for third parties, and Thorn’s product ranges from $26,000 to $118,000 per year in licensing fees, which makes these tools difficult for smaller providers to implement.

Constitutionally speaking, while users may believe the files they upload to cloud services are private, courts have relied on the third-party doctrine created by the U.S. Supreme Court in Smith v. Maryland, 442 U.S. 735 (1979) (the Fourth Amendment does not preclude the government from accessing, without a warrant, information voluntarily provided to a third party), when considering such cases. Further, considering the servers belong to corporate entities who are not themselves regarded as agents of the government, it is not considered a search under the Fourth Amendment when a service provider locates CSAM uploaded by a user and then notifies police. See United States v. Reddick, 900 F.3d 636 (5th Cir. 2018) (police opening a file after a hash was made and matched to known image of child pornography is not a “search”); see also United States v. Rosenschein, 2020 U.S. Dist. LEXIS 211433 (D.N.M. 2020) (denying suppression motion where provider used PhotoDNA to identify CSAM and notified NCMEC, because user had no reasonable expectation of privacy in a chat room and because providers were “private actors” not “government agents” for purposes of search).

 

Surveillance of File-Sharing Networks

Around the same time PhotoDNA was being developed, a Florida-based data aggregation company known as TLO (which stands for “The Last One”) developed a program known as the Child Protection System (“CPS”). Working with law enforcement, TLO released CPS in 2009 for use by police investigating CSAM distribution over filesharing networks. After the death of TLO’s founder, Hank Asher — a longtime friend of NCMEC co-founder John Walsh — Asher’s daughters sold TLO to TransUnion on the condition that they could spin CPS off into a new non-profit, the Child Rescue Coalition (“CRC”), though litigation has revealed that the CPS database is still hosted on TransUnion’s servers.

CPS works by automatically scanning filesharing networks such as the one used by LimeWire for hashes of known CSAM. The system flags “hits” and logs the IP address where the file is located. Police can then use the program to download the file and confirm that it is actually CSAM, an action that provides the basis to charge the user with distribution of CSAM (which carries a five-year mandatory minimum federal sentence) instead of mere possession (which carries no federal mandatory minimum; state penalties differ widely).

Importantly, CPS also uses publicly available services to automate the tracing of IP addresses to specific internet service providers, making it easier to link an IP address to a location in the real world. Then, a referral can be forwarded to local law enforcement to obtain a search warrant for the physical location. According to CRC, the system can track a device even if the owner moves or uses a VPN (virtual private network) to mask their IP address.
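The tracing step itself is mundane. Publicly available records tie an IP address to the provider that controls it, as in this sketch using a simple reverse-DNS lookup (the address is a documentation placeholder; real investigations also pull WHOIS/RDAP records from the regional registries and then subpoena the ISP for subscriber details):

```python
import socket

ip_address = "203.0.113.7"  # placeholder address from the documentation range

try:
    hostname, _, _ = socket.gethostbyaddr(ip_address)
    # Reverse-DNS names often embed the ISP and a rough location,
    # e.g. 'cpe-203-0-113-7.somecity.example-isp.net'.
    print(f"{ip_address} resolves to {hostname}")
except socket.herror:
    print("No reverse-DNS record; registry (WHOIS/RDAP) data would be used instead.")
```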

According to court documents, another similar program was developed to surveil BitTorrent networks. “Torrential Downpour” was developed by Brian Levine, a researcher at the University of Massachusetts, Amherst. Operating under a $440,000 grant from the FBI, Levine created the software which he then licensed to the FBI for use in CSAM investigations. Torrential Downpour has since been adapted to access CRC’s CPS database, so it can identify CSAM and feed results back to CRC when being used by law enforcement.

According to CRC, CPS is used by 8,500 investigators across all 50 states and is also available to police in 95 other countries, including Canada, the U.K., and Brazil.

ProPublica has reported that several criminal indictments originating from the use of CPS tools were dropped after defense attorneys, with the help of forensic analysts, made credible allegations either that the CPS software had flagged IP addresses for sharing CSAM that subsequent analysis showed never existed on the device, or that CPS had allowed police to access protected areas of a person’s computer by exploiting a vulnerability in the LimeWire software.

The only way to determine whether CPS software made mistakes or allowed police to violate the law is to allow defense experts access to the program and its code. Yet CRC and the University of Massachusetts, Amherst have filed briefs with courts alleging that any such disclosure would jeopardize their property interests. After judges ordered the software disclosed anyway, prosecutors dropped the cases rather than allow disclosure.

“When protecting the defendant’s right to a fair trial requires the government to disclose its confidential techniques, prosecutors face a choice: give up the prosecution or give up the secret. Each option has a cost,” said Orin Kerr, a former DOJ lawyer and expert in computer crime law.

“These defendants are not very popular, but a dangerous precedent is a dangerous precedent that affects everyone,” said Sarah St. Vincent, a Human Rights Watch researcher who has examined the issue. “And if the government drops cases or some charges to avoid scrutiny of the software, that would prevent victims from getting justice consistently. The government is effectively asserting sweeping surveillance powers but is then hiding from the courts what the software did and how it worked.”

Read more about the conflict between the Sixth Amendment and proprietary tools in CLN’s previous cover article, “The Clash Between Closed-Source Forensic Tools and the Confrontation Clause.” [See: CLN, Oct. 2021, p.1.]

 

Lawless Government Hacking

In 2013, a coalition of local, state, and federal law enforcement agents took down the Silk Road, a Tor-based service that facilitated international drug transactions using Bitcoin as payment. The trial of Silk Road’s founder, Ross Ulbricht, detailed how well-trained cyber security professionals — including at least one man employed by the NSA — compromised several of the website’s administrators, leading to the takedown of the site and spawning multiple investigations and arrests.

Using similar tactics in 2015, FBI agents were able to take control of Playpen, which was then the largest known Tor site for the distribution of CSAM. However, the Playpen case demonstrated a new tactic by the FBI: Instead of shutting down the site after seizing its data from servers located in Lenoir, North Carolina, the FBI transferred the site to its own servers in Newington, Virginia. The FBI continued to operate the site between February 20 and March 4 of 2015.

According to court documents, the FBI obtained a warrant allowing it to plant malware on the website for the purpose of unmasking users who logged in. The malware is referred to in court documents as a Network Investigative Technique (“NIT”). This malware would be accessed by client computers, which it would then compromise and force to disclose information such as the true IP address of the client, along with other identifying details.

Reporting by WIRED magazine and Vice showed that NITs had previously been deployed in phishing emails against bomb-threat suspects in 2002 and had also been used to take down other Tor-based CSAM sites in 2011 as part of the FBI’s “Operation Torpedo.” But the Playpen case was unique in that the FBI continued to distribute CSAM to thousands of people all over the globe, while simultaneously hacking their computers. Further, agents often try to deceive magistrates about what they are obtaining a warrant for.

“Although the application for the NIT in this case isn’t public, applications for NITs in other cases are,” said Christopher Soghoian, a technologist for the ACLU mentioned earlier in this article. “Time and time again, we have seen the [DOJ] is very vague in the application they’re filing. They don’t make it clear to judges what they’re actually seeking to do.… And even if judges know what they’re authorizing, there remain serious questions about whether judges can lawfully approve hacking at such scale.”

According to Andrew Crocker, staff attorney for the EFF, every court that considered the validity of the NIT warrant in Playpen-related cases found that the magistrate had exceeded their legal authority under Rule 41 of the Federal Rules of Criminal Procedure in issuing the warrant, though many courts still refused to suppress the evidence.

“Not coincidentally,” wrote Crocker, “in late 2014, the Justice Department proposed —and the Judicial Conference and Supreme Court have approved — an amendment to Rule 41 that would allow magistrate judges to issue warrants for remote hacking of unknown computers in any district, if users have concealed their locations ‘through technological means.’”

Despite opposition from privacy and tech experts, as well as senators like Ron Wyden (D-OR), this change to Rule 41(b) went into effect on December 1, 2016. However, this change may not necessarily solve the government’s Fourth Amendment problem.

The warrants clause of the Fourth Amendment requires more than mere probable cause to authorize a search. The application must also “particularly describ[e] the place to be searched, and the persons or things to be seized.”

“By contrast,” wrote Crocker, “the Playpen NIT warrant authorized the search and seizure of information located on unknown computers in unknown places belonging to any and all users of the site (and there were over 150,000 of them in this case).”

The Third Era:
Mass Surveillance for All

Despite the tools — legal, ethical, or otherwise — brought to bear on the problem of CSAM, the number of reports submitted to NCMEC’s CyberTipline kept escalating. In 2014, the service received over one million reports for the first time. By 2019, the CyberTipline received just shy of 17 million reports.

According to prostasia.org, “[i]n 2019, only a dozen of the largest tech companies were responsible for 99% of the abuse images reported to NCMEC.”

By 2021, after Big Tech had begun making scanning tools like PhotoDNA, PDQ, and related services more accessible to smaller operators, this number had nearly doubled to just under 30 million reports.

“The first thing people need to understand is that any system that allows you to share photos and videos is absolutely infested with child sexual abuse,” according to Alex Stamos, the former security chief at Facebook and Yahoo, who is now a professor at Stanford.

Even the Big Tech companies do not seem to be doing a particularly good job of tracking and removing CSAM. In 2017, the Canadian Centre for Child Protection (“C3P”) — an organization similar to NCMEC and founded around the same time — developed Project Arachnid, a web crawler designed to follow links on websites reported to C3P in order to locate other sites containing CSAM. Since it began operation, the project has identified over six million images and issued takedown notices to the servers responsible for hosting them.

The New York Times partnered with Project Arachnid in 2019 to scan DuckDuckGo, Yahoo, and Bing using search phrases like “porn kids.” Using over three dozen search terms known to be associated with CSAM, the joint project identified 75 instances of CSAM across the three search engines. DuckDuckGo and Yahoo both said they relied on Microsoft to filter illegal content from their search results.

A spokesman for Microsoft called the problem of CSAM a “moving target” and said that “[s]ince the NYT brought this matter to our attention, we have found and fixed some issues in our algorithms to detect unlawful images.”

Hemanshu Nigam, who served as a director overseeing child safety at Microsoft from 2000 to 2006, commented on the report, saying “it looks like they’re not using their own tools.”

 

End-To-End Encryption

Recently, major platforms that provide messaging apps have been under industry pressure to implement end-to-end encryption (“E2EE”). In previous years, the content of messages sent between users of a service like Facebook Messenger was sent “in the clear,” meaning that Facebook, and possibly malicious third parties, could monitor that content with little or no effort.

Email — because its protocol originated in the early years of the internet — is transmitted this way, and the FBI famously operated a program called Carnivore between 1998 and 2005 that scanned emails during transmission over the internet. Agents could then scan those intercepts for conversations about illegal, often terrorism-related, behavior. After 2005, the agency switched to “unspecified commercial software” to do this scanning, as well as using its authority under the Communications Assistance for Law Enforcement Act of 1994 to have telecom companies intercept transmissions for them.

However, this interception of messages is threatened by E2EE. Once all messages among users are encrypted, such mass-spying programs become obsolete practically overnight — this is the stuff nightmares are made of for government spooks.
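For readers unfamiliar with the mechanics: in an E2EE scheme, each user holds a private key that never leaves their device, so only the intended recipient can decrypt a message, not the platform relaying it. A minimal sketch using the PyNaCl library (real messengers layer key exchange, ratcheting, and authentication on top of primitives like these):

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each party generates a keypair; the private halves never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# A Box combines one party's private key with the other party's public key.
alice_box = Box(alice_key, bob_key.public_key)
bob_box = Box(bob_key, alice_key.public_key)

ciphertext = alice_box.encrypt(b"meet at noon")  # all the relaying server ever sees
plaintext = bob_box.decrypt(ciphertext)          # only Bob can recover the message
assert plaintext == b"meet at noon"
```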

Some services, like WhatsApp and Signal, have used E2EE since their inception (though WhatsApp only recently started encrypting its backups to cloud storage). It is for this reason that WhatsApp only generated 26 abuse tip-offs to U.K. law enforcement in 2020. Compare that to the 24,780 reported by Facebook in 2019.

According to thetimes.co.uk, that was the same year that Facebook announced it would default to using E2EE for all Messenger conversations. Multiple governments complained that doing so would create a “superplatform for pedophiles,” and the implementation was delayed, allegedly until 2023. Facebook has since allowed users to turn on E2EE for Messenger (using the “Secret Conversations” option buried deep in menus users rarely access) and enabled E2EE for voice and video calls in August 2021 for users who opted in.

 

Client-Side Scanning

In response to several services announcing their intent to enable E2EE by default (if they have not already done so), law enforcement agencies and some technologists — such as Dr. Hany Farid, who helped develop PhotoDNA — have proposed that platforms implement client-side scanning (“CSS”). This would move the hashing and CSAM detection process from the servers onto a user’s computer or smartphone.

If Facebook implemented E2EE and CSS at the same time, the company could claim it was protecting the privacy of its users’ communications by default, while still taking an affirmative step to prevent the distribution of CSAM. If a user intended to transmit a CSAM image to another user, the Messenger app would first scan the image using PhotoDNA (or some similar program) and transmit the hash to Facebook for comparison against its (or NCMEC’s) database of known CSAM.
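In outline, such a client-side check might look like the sketch below: hash the outgoing image on the device, compare it against a list of known hashes, and only encrypt and send if there is no match. Everything here is hypothetical and simplified; actual proposals differ in where the hash list lives, whether a perceptual hash is used, and what happens on a match:

```python
import hashlib

KNOWN_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # hypothetical on-device hash list

def e2ee_encrypt(data: bytes, recipient: str) -> bytes:
    """Stand-in for real end-to-end encryption, purely illustrative."""
    return bytes(reversed(data))

def send_image(image_bytes: bytes, recipient: str) -> str:
    """Hypothetical client-side-scanning flow inside a messaging app."""
    digest = hashlib.md5(image_bytes).hexdigest()  # stand-in for a PhotoDNA-style signature
    if digest in KNOWN_HASHES:
        # In real proposals a match would be reported to the provider (and NCMEC)
        # and the message blocked before it is ever encrypted.
        return "blocked"
    ciphertext = e2ee_encrypt(image_bytes, recipient)
    # ...hand the ciphertext to the normal E2EE transport...
    return "sent"

print(send_image(b"holiday photo", "friend@example.com"))
```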

There are several important drawbacks that make this “solution” less than ideal.

Any software run on a client device will be vulnerable to deconstruction, attack, and scrutiny. Many software systems like Microsoft’s PhotoDNA are, so far, non-public. Microsoft maintains this secrecy by only allowing its software to be run on its own servers and servers of trusted partners. If PhotoDNA were part of a software package being run on a client device, its algorithm could easily become public.

There would also be a question of which authority would provide the hash database for comparison. While NCMEC has the largest known database, it refuses to allow public access. And any CSS app would, by necessity, be allowing a kind of public access. Absent a central authority, proposed systems risk fragmentation to such a point that known CSAM on one platform might be unknown on another. And absent real transparency, users (and maybe service providers) would not be sure that the hashes in the database are only CSAM images and not, for instance, copyrighted images slipped in to enforce one platform’s private copyright property interest.

So far, courts have ruled that private companies running hash comparison software are not government actors for purposes of Fourth Amendment searches. This determination could shift if services are required by law to do this scanning — as the EU has proposed for larger providers as part of its “risk analysis” framework designed to detect and suppress CSAM. If providers are required to scan for CSAM, who will they then be required to report to? And are all “matches” reported? What about false positives?

Not insignificantly, placing the scanning code on a user’s device shifts the processing burden to the user. All this will be less noticeable on fast computers running on a dedicated power source, while mobile devices will likely see a significant hit in battery longevity and device speed, already big concerns for users.

Finally, users are likely to see CSS as an invasion of privacy. While platforms already scan uploaded images and videos, this is done seamlessly in the background on the server side. Any CSS implementation is likely to be more obvious and intrusive in a way that will likely alienate users. Open-source messaging apps, not centrally organized under any single corporate entity or government, are unlikely to implement CSS, causing illicit users to flock to such services, leaving “innocent” users with the burden of intrusive scanning.

Further, not all scanning systems are hash-based. Options from Google and Thorn both attempt to detect CSAM using AI image recognition. This sounds good on paper, but AI systems are notoriously unreliable at recognizing faces (especially of women and people of color) and can be expected to have serious difficulty distinguishing between adult pornography and CSAM.

Other proposed systems go beyond CSAM itself and propose to tackle sexual enticement of minors (a.k.a. “grooming”). Instead of scanning images and videos intended to be transmitted among users, some technologists have considered training text recognition AIs on chat transcripts from child enticement stings, and then using the AIs to scan all texts sent among users in an attempt to detect grooming on E2EE-enabled applications. Needless to say, such efforts are likely to face even more pushback than image and video scanning attempts.

Apple announced in August 2021 that it intended to roll out CSS on all devices eligible to update to iOS 15.2, as part of its “Expanded Protections for Children.” Pushback from tech and security professionals was swift and fierce. Within weeks, some experts claimed to have reverse engineered the scanning system and produced false positives — innocuous images that the system registered as matches for CSAM. Others accused Apple of building a backdoor that would allow governments to scan for all manner of content.

This last allegation seemed reasonable in light of an EU proposal to mandate some form of CSS on major platforms but also to force scanning for content linked to terrorism and organized crime in addition to CSAM.

“It’s allowing scanning of a personal private device without any probable cause for anything illegitimate being done,” said Susan Landau, professor of cyber security and policy at Tufts University. “It’s extraordinarily dangerous. It’s dangerous for business, national security, for public safety and for privacy.”

Less than a month later, Apple pulled the CSS feature from the Expanded Protections for Children update to iOS — though it rolled out other planned features such as a filter for nude images sent and received by children’s accounts managed by an adult.

“Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features,” said Apple in September 2021.

Apple spokesperson Shane Bauer told reporters that the company still intends to implement CSS on its devices, but when that will be and what it will look like is still up in the air. Given the legislative push for such systems, it is likely that Apple and other Big Tech companies will move forward with some kind of CSS implementation.

Such invasive scanning — mass surveillance in all but name — is likely coming soon to many of the mainstream platforms regardless of objections by users. Given early forays into using AI applications deployed against CSAM and grooming, we can reasonably expect that the means used to detect prohibited content will go beyond hash comparison systems. Also, once more flexible scanning systems are widely accepted on major platforms, expect governments to require these scanners to identify a larger range of content for suppression.

“Mission creep is very real,” said Matthew Guariglia in a July 16, 2021, editorial published by the EFF. “Time and time again, technologies given to police to use only in the most extreme circumstances make their way onto streets during protests or to respond to petty crime.”

And don’t expect police to use anything resembling common sense when scanning software detects prohibited content on a user’s device. Consider the 2015 case of Cormega Copening and Brianna Denson, two 16-year-old high school students from North Carolina. During an unrelated investigation, Copening’s phone was searched, revealing consensually taken nude photos of the young couple.

Prosecutors charged Copening (as an adult) with five felony counts of sexually exploiting a minor: two for taking nude selfies, two for sending them to his girlfriend, and one for possessing an explicit photo of Denson. She was charged with two counts of exploitation: one for taking a nude selfie and another for sending it to Copening.

Both teens took plea deals allowing them to serve a term of probation which, if completed without further “incidents,” will result in their convictions being removed from the record. Despite widespread outrage about these prosecutions, they still proceeded and victimized these youths more than the persistence of nude selfies ever could have.

Allowing or mandating CSS systems will only increase the likelihood that police and prosecutors will take the opportunity to destroy lives by prosecuting victimless crimes, while actual abusers will likely use other systems to elude detection.

No Easy Solutions

Broad expansions of CSS have the potential to enable the kind of pervasive mass surveillance desired by — but not previously available to — governments, repressive or otherwise. Combined with lawless hacking by police and a criminal justice system that largely operates without transparency (due to the broad use of plea agreements), this brave new world will be a dangerous one where the relationship between technology and constitutional protections is radically altered.

But this need not be the case. Governments and corporations could (in theory) step back from this precipice and recognize that over 30 years of censorship and criminalization of CSAM has failed to make any appreciable dent in child sexual assaults. Attempting to suppress and remove CSAM from the web, without addressing the root causes behind its creation and distribution, is like trying to fight deforestation of the Amazon by outlawing axes and saws. Such an approach fails to address the system of cultural conditions and incentives that lead to such behavior.

To be clear, there will always be a need for law enforcement, in coordination with groups like NCMEC, to investigate the production of CSAM. As long as children are being subjected to sexual assault, police should act swiftly to identify victims and remove them from abusive situations.

We should also consider that attitudes in America have changed over the decades in regard to CSAM offenses. While adults who sexually assault children should serve lengthy prison sentences, it serves no rational purpose to imprison non-contact offenders for decades.

According to Reason magazine, federal judges routinely issue sentences below the advisory guidelines — and sentences recommended by the government — where the offender did not sexually assault anyone.

“Every time I ever went back in the jury room and asked the jurors to write down what they thought would be an appropriate sentence,” said federal district court judge Mark W. Bennett, “every time — even here, in one of the most conservative parts of Iowa, where we haven’t had a ‘not guilty’ verdict in seven or eight years — they would recommend a sentence way below the guidelines sentence. That goes to show that the notion that the sentencing guidelines are in line with societal mores about what constitutes reasonable punishment — that’s baloney.”

As long as police and prosecutors prioritize the low-hanging fruit of those who possess CSAM or obtain it from file-sharing networks, they are failing to prioritize actual producers of CSAM who are often more difficult to identify. Governments could utilize the kind of resources only they possess to disrupt these operations.

“Technology can build cryptographically secure fortresses that can shelter you from the authorities … just not indefinitely,” wrote Cory Doctorow, a Canadian tech writer and activist. “Even if your ciphers are secure, your movement isn’t, because your human network has to be perfect in its operational security to remain intact, while the authorities only need to find a single slip up to roll you all up.”

We must also consider the vast sums of money being spent on efforts that clearly have little or no effect on the root causes of these problems. These funds would likely be better spent on public awareness programs, anti-poverty campaigns, and making it easier to identify and report children being sexually assaulted.

Apple’s Expanded Protections for Children went live with its release of iOS 15.2 (without the client-side CSAM scanning component). This included the ability for parents to enable warnings for children under 13 when they encounter nudity or pornography — or attempt to create and share it themselves. It makes it easier for these children to report material they find frightening or harmful. And it is easily justifiable for Apple to provide these tools to help parents protect their children.

Apple’s update also scans user input into Siri and Search for CSAM-related topics and will “intervene to provide information and explanations to users.” While this is still somewhat invasive, Apple has chosen to make detection result in assistance to users rather than automatic reports to law enforcement. While automatic referrals in such circumstances might save some children, they would be far more likely to upend the lives of innocent and curious people over what amount to false positives.

We should resist efforts to enable pervasive mass surveillance. E2EE, when implemented properly, affords us “[r]oom to explore, space to be ourselves, and protection for our online life,” said Runa Sandvik, a computer security expert from Norway. “There is a need to balance online privacy, everyday security and the ability to solve crime. But not at the cost of individuality, freedom, and self-expression.”

There are no easy solutions to the problem of sexual assaults against children. This goes for tech solutions as well. We can’t expect companies to “tech harder” to magically create solutions to solve offline problems whose online presence is merely a symptom. We cannot fix social and cultural problems by attacking their manifestations on the internet any more than we can subdue a criminal by tackling his reflection in a mirror.

Should we proceed further down the road of pervasive mass surveillance, citizens should educate themselves on how to stay safe online, including from “well-meaning” corporations and governments. The EFF publishes a guide on Surveillance Self-Defense, and the University of Toronto’s Citizen Lab publishes additional guides for journalists and dissidents to gain awareness about online censorship and surveillance. Finally, readers are encouraged to look into Cryptoparty, a movement that hosts meetings where individuals can learn about cryptography, technology, and privacy in a fun atmosphere.

Keeping children safe is not an effort that need be mutually exclusive of protecting our most cherished constitutional protections. Readers should be skeptical of the motives of any person or group who insists otherwise. 

 

Sources: 5rights Foundation, The Atlantic, BBC, CBC, citizenlab.ca, theconversation.com, eff.org, enough.org, freerangekids.com, freeross.org, gizmodo.com, hide.me, intego.com, justsecurity.org, lawfareblog.com, leagle.com, macrumors.com, The Marshall Project, missingkids.org, NBC News, NBC Philadelphia, newamerica.org, news.microsoft.com, The New York Times, NY Post, Politico, ProPublica, prostasia.org, reason.com, rsa.com, ssd.eff.org, static.newamerica.org, techcrunch.com, thetimes.co.uk, torproject.org, The Verge, Vice, The Washington Post, wired.com, xbiz.com, January 22, 2016 Hearing Transcript, United States v. Michaud, 2016 U.S. Dist. LEXIS 11033 (W.D. Wash. 2016).
