Why We Need to Keep the Communications Decency Act Intact
by Dale Chappell
While the First Amendment of the U.S. Constitution protects our right to free speech, a federal law that protects platforms and users who repost that free speech is under attack and at risk of being repealed. Critics of repeal warn that removing the law could open the door to arrests of people who criticize the government and discourage internet companies from providing those platforms at all.
The issue is highlighted by the recent arrest of several people who retweeted a Twitter user's photo of a police officer who had covered the name on his uniform at a Black Lives Matter protest in Nutley, N.J. Believing the officer had acted out of line, Kevin Alfaro tweeted the photo to his 900 followers, asking if anyone could identify the officer "to hold him accountable."
Several people, including Georgana Sziszak, retweeted Alfaro's post and found themselves charged with felony "cyber harassment" of the officer under New Jersey law. While the charges were eventually dropped, mainly because the complaint failed to allege an actual violation of the statute, the episode raised concerns about a vindictive government bent on silencing its critics.
Such "cyber harassment" statutes often target what should be free speech protected by the First Amendment. Speaking out against the government is at the very heart of the First Amendment, and its protection should have applied not only to Alfaro but also to those who re-posted his tweet.
What should have further protected the re-posters was § 230 of the Communications Decency Act ("CDA"). Under 47 U.S.C. § 230, "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." The intent of the law was to shield platforms, such as YouTube and Twitter, from liability should one of their users post harmful content.
But § 230 also protects users of those platforms — including the people who re-posted Alfaro's tweet about the officer. Now some want to take away § 230's protection. If the goal is to force platforms to police the content on their sites more aggressively, it comes at the cost of two things. First, it would likely kill many platforms, because no company would want to (or be able to) police all that data; YouTube, for example, receives 100 hours of newly uploaded video every minute. Second, it would expose anyone who re-posts speech critical of the government to civil and criminal liability. Much of what users share on the internet is not their own original content.
"Think about it: think about how many of us share content online," wrote reporter Cathy Gellis on techdirt.com. "Many of us even share far more content created by others than we create ourselves. But all that sharing would grind to a halt if we could be held liable for anything wrong with that content." Gellis argues that preserving § 230 is essential. "Our sole policy goal should be to enhance our speech protections," she said. "The last thing we should be doing is taking steps to whittle away at them and make it any easier to chill discourse than it already is."