
Proposals to Reform Section 230

Ashley Johnson and Daniel Castro | February 22, 2021

Introduction

Preserve Section 230

Repeal Section 230

Establish Size-Based Carve-Outs

Establish Carve-Outs for Certain Types of Content or Activity

Require Notice and Takedown

Use Liability Protection as a Bargaining Chip

Exempt State Criminal Law

Expand Federal Criminal Laws

Exempt Federal Civil Enforcement

Eliminate the “Or Otherwise Objectionable” Clause

Establish a Good Faith Requirement

Recommendations

Conclusion

Endnotes

Introduction

Section 230 of the Communications Decency Act of 1996 is a vitally important law that governs intermediary liability for online services and Internet users in the United States. While the First Amendment gives online services the right to allow or deny lawful speech on their platforms, Section 230 says that these online services are not liable for unlawful third-party content, even when these services make decisions to allow or deny third-party content. This liability protection has had a profound impact on the development of many online services Internet users enjoy daily, including social networks, online retailers, online games, news sites, podcasts, blogs, and more.

Recently, the law has received extraordinary attention from policymakers and pundits, with prominent voices on both sides of the political spectrum blaming the law for a variety of both real and perceived harms on the Internet, including harassment, hate speech, disinformation, violent content, child sexual abuse material, and nonconsensual pornography. Many critics have grown vocal in arguing that the law is broken and are calling for Congress to repeal the law entirely, while others argue that the law should be amended to address concerns its authors could not have envisioned. However, many of the law’s proponents say that Section 230 is still appropriate and effective more than two decades after Congress enacted the law, and that attempts to change it, especially repealing it, would come with far-reaching negative consequences.


Especially in the aftermath of the attack on the U.S. Capitol, criticism of political speech on social media has reached a crescendo. President Trump, along with many of his supporters on the right, has argued that social networks unfairly remove lawful content, alleging political bias in response to social networks banning accounts linked to far-right groups and conspiracy theories and labeling some posts as false or misleading. At the same time, President Biden, along with many on the left, has argued that social media companies are too permissive, allowing or even fostering extremist views on their platforms and failing to take sufficient action to moderate harmful political speech. Since the First Amendment prevents policymakers from regulating online speech directly, many have used the threat of Section 230 reform to try to compel social media platforms to either tighten or loosen their content moderation policies. As a result, Section 230 has become a political football, but Section 230 reform is, at best, orthogonal to addressing political speech on online platforms.

While it is true that many proposals to eliminate or alter Section 230 would undermine online services and pose a major setback to free speech and innovation, that does not mean some targeted reforms are not needed. Indeed, as this report shows, it is possible to narrow the liability shield to avoid protecting “bad actors” that are not acting in good faith, while also establishing a voluntary safe harbor provision to minimize nuisance lawsuits and negative spillover effects on innovation. But while reforming Section 230 could address many harms on the Internet, it would not resolve the ongoing debate about political speech, which is grounded more in a debate about the First Amendment and the right set of rules to moderate political speech on large social media platforms than in online intermediary liability. That issue is the subject of a forthcoming Information Technology and Innovation Foundation (ITIF) report.

This report reviews most of the major proposals for reforming Section 230, including proposals that Congress:

  • Preserve Section 230 as it is.
  • Repeal Section 230.
  • Establish size-based carve-outs.
  • Establish carve-outs for certain types of content or activity.
  • Require online services to comply with a notice-and-takedown requirement.
  • Use liability protection as a bargaining chip.
  • Exempt state criminal laws.
  • Expand federal criminal laws.
  • Exempt federal civil enforcement.
  • Eliminate the “or otherwise objectionable” clause.
  • Establish a “good faith” requirement.

As the report shows, there are a number of options besides keeping Section 230 as it is or repealing it entirely. Each proposed solution has arguments for and against it, but some are more likely to succeed than others.

The report concludes by offering recommendations for how Congress can move forward to address legitimate concerns about Section 230’s shortcomings while safeguarding the benefits of the law. To that end, Congress should take the following steps:

  • Establish a good faith requirement to prevent bad actors from taking advantage of Section 230(c)(1)’s liability shield.
  • Establish a voluntary safe harbor provision to limit financial liability for online services that adhere to standard industry measures for limiting illegal activity.
  • Expand federal criminal laws around harmful forms of online activity that are also illegal at the state level.

Notably, as explained later in this report, establishing a good faith requirement or a safe harbor provision would be problematic on its own. However, if pursued jointly as part of Section 230 reform, the two proposals would address the weaknesses of implementing either one independently.[1]

Preserve Section 230

One potential solution to the issue of online intermediary liability would be to keep the law in the United States as it is. Many, but not all, proponents of Section 230 argue for this approach on the grounds that Section 230 is responsible for creating many of the best parts of the Internet, and that changes to the law would have serious, and potentially unforeseen, consequences for the online world. Although Section 230 may not be a perfect law, its proponents believe that its myriad benefits outweigh its few flaws.

It is impossible to know exactly how the Internet would have developed without Section 230, but the online world would almost certainly look very different than it does today, likely with less freedom of expression and less of the user-generated content that now forms the backbone of some of the Internet’s most visited websites. Indeed, protecting the Internet as it is today is a frequent argument for preserving the liability shield the way it is.[2]

The types of websites and online platforms that benefit from Section 230’s liability shield are as diverse as the Internet itself. Much of the recent controversy surrounding Section 230 focuses on social media giants such as Facebook and Twitter and popular video-sharing platforms such as YouTube, but the influence of Section 230 extends much further. It protects knowledge-sharing websites such as Wikipedia, online marketplaces such as eBay, online classified ads such as Craigslist, countless smaller forums and blogs, and every other website that features product reviews or a comments section, including countless websites of small businesses. It also protects users from liability for forwarding emails or retweeting, thereby facilitating communication between users.

Section 230 protects online services from a wave of lawsuits that could attempt to hold them liable for their users’ actions. By allowing these services to thrive, Section 230 forms the foundation of the Internet economy. It has enabled the creation of entire business models that rely on user-generated content.

Section 230 makes it easier for smaller online services to compete with larger ones. In a world without Section 230, larger tech companies would have the resources to defend themselves against lawsuits and bulk up their content moderation systems, while smaller online services would not.[3] Smaller online services that rely on user-submitted content—or large-but-less-profitable ones such as Wikipedia, which is run by the nonprofit Wikimedia Foundation—would have to make the difficult decision of whether to continue operating and risk litigation they cannot afford, fundamentally change the services they offer to decrease their risk, or shut down entirely. Such change would further consolidate market share in the hands of a few large online services, giving a boost to some of the social media giants that are the target of much of the anti-Section 230 rhetoric.

Finally, many proposed changes to Section 230 would have serious implications for the freedom of speech online. Without Section 230 guaranteeing that they will not face liability for third-party content on their platforms, online services would have strong incentives to take a more restrictive approach to content moderation. Instead of just removing content that clearly violates the law or their terms of service, they would also likely remove any content that falls into a gray area where it may or may not be objectionable, because to not do so would mean risking legal trouble. This is known as “collateral censorship,” a form of self-censorship that occurs “when A censors B out of fear that the government will hold A liable for the effects of B’s speech.”[4] For example, platforms may choose to remove lawful, but controversial, political speech—exactly the type of speech the First Amendment was designed to protect—in order to avoid expensive nuisance lawsuits from those who claim to find that political speech objectionable.

Any changes to Section 230 will have far-reaching consequences, but given the current controversy surrounding the law, doing nothing is increasingly not a politically feasible option. The calls for reform are part of a larger trend of public backlash against Big Tech—or “techlash”—that does not appear to be going away any time soon. If Section 230’s supporters refuse to budge from their stance that Section 230 should remain exactly the way it is, they will effectively hand the reins over to the law’s detractors to craft a new intermediary liability law that may go too far in the other direction. Instead, to address legitimate concerns about stopping bad actors, supporters should offer solutions that still protect freedom of expression and innovation.

Repeal Section 230

Some of Section 230’s critics want to repeal the law altogether and leave the issue of online intermediary liability to the courts. They argue that the law does more harm than good, unfairly protecting bad actors, enabling various forms of illegal or harmful online content, immunizing providers from liability for unfairly removing users and content, and giving online services a free pass that no other type of business enjoys. For example, Rep. Louie Gohmert (R-TX) introduced H.R. 8896, the Abandoning Online Censorship Act, to repeal Section 230.[5]

The first argument, that Section 230 protects websites that host illegal content, is a common one. Critics frequently refer to the so-called bad actors that hide behind Section 230’s liability shield. Before Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act (FOSTA-SESTA) in 2018, adding an exception to Section 230 so that it would no longer apply to sex trafficking, critics frequently cited Backpage as an example of a bad actor.

But sites such as Backpage are not the only bad actors online. There are untold numbers of online services whose users post child sexual abuse material, nonconsensual pornography, defamatory “gossip,” terrorist communication, and more. While some of this illicit content slips through the cracks of the content moderation systems of legitimate platforms, other platforms do little to stop it. By immunizing platforms against civil liability for third-party content, critics argue, Section 230 prevents the victims of these crimes from seeking justice against the online services that possibly could have prevented others from sharing this content.

In addition to hosting illegal content, some online services are a source of legal but harmful forms of online abuse, including hate speech and harassment. Online abuse can lead victims to delete their social media profiles, shut down their websites and blogs, and in extreme cases when online abuse trickles into the physical world, move and change their name or engage in self-harm.[6] Because online abuse disproportionately affects marginalized populations, this is detrimental to equal protection.[7] Again, because of Section 230, victims cannot sue social media platforms for failing to act against hate speech and harassment posted by their users.

Some conservative policymakers, including former President Trump, have called for repealing Section 230. They believe it is unfair for large social media platforms to benefit from Section 230’s liability shield when, in their view, these sites are biased against conservative viewpoints, blocking or suspending conservative users’ accounts and removing posts that express conservative political opinions. There is no evidence of systematic bias against conservatives, and the First Amendment protects the free speech rights of these platforms to make decisions about what content and which users they allow on their platforms.[8] However, Section 230(c)(2) protects these companies from liability for removing content they believe to be objectionable.[9] Eliminating Section 230 would expose these companies to nuisance lawsuits.

Some liberal policymakers, including President Biden, have called for repealing Section 230, but for the opposite reason. They believe that it is unfair for large social media platforms to benefit from Section 230’s liability shield when users spread hate speech, misinformation, and other objectionable content on their platforms. However, repealing Section 230 would negatively impact the free speech of marginalized populations that these policymakers are often trying to protect. Online services would be disinclined to host content relating to controversial political movements such as #MeToo or Black Lives Matter if individuals and groups who opposed those movements, including the targets of their activism, could sue the online services that hosted their discussions and facilitated their organization.

Finally, some critics argue that Section 230 treats online services differently from other businesses. If a physical business facilitated child exploitation or terrorist communication, or if a traditional publication printed user-submitted nonconsensual pornography or defamatory statements, they likely would not escape civil liability. Why, they ask, is the law different for online services, especially since many websites profit from user-submitted content, including illegal content? Critics argue that if moderating that content proves difficult, online services should solve the problem or design their services in a less negligent way to prevent these problems from occurring in the first place.[10]

But the legal landscape prior to Section 230’s passage reveals how repealing the law would be detrimental. Section 230 arose out of a pair of court cases in the 1990s: Cubby v. CompuServe (1991) and Stratton Oakmont v. Prodigy (1995).[11] Taken together, these cases established a counterintuitive precedent for websites that rely on user-generated content: Websites that exercised no control over what was posted on their platforms and allowed all content would not be liable for user content, while websites that made good faith efforts to moderate content would face liability. This is the legal landscape America would return to if Congress repealed Section 230.

Some critics argue for repealing Section 230 and also overturning the Cubby and Stratton Oakmont cases that made online services that moderate content liable for their users’ speech, so online services would still have an incentive to moderate content. But even without that legal precedent, repealing Section 230 would still have negative consequences for innovation, free speech, and competition. Large online services would adapt to a world without Section 230, while smaller ones may not have the resources, which would only further consolidate the market share of large platforms. Moreover, platforms would turn to overly cautious and restrictive content moderation practices, removing any potentially objectionable content, which may include valuable forms of expression such as political speech and marginalized speech.

Establish Size-Based Carve-Outs

One proposal to reform Section 230 would introduce size-based carve-outs for intermediary liability so that only large online services would lose Section 230 protection. In other words, Section 230 would only apply to smaller companies, not large ones. The purpose would be to safeguard competition from smaller online services that would not survive without Section 230 protections. This type of proposal is also a manifestation of the ongoing techlash, as it aims to create stricter rules for tech giants for their perceived content moderation failures.[12]

The problem with Section 230, these critics argue, is that the law says online services that host third-party content “shall not be treated as the publisher or speaker” of that content. But large social media platforms are like publishers in two important ways.[13]

First, large social media platforms actively moderate content, deciding what content appears on their platforms and what is taken down. This is not too different from how some early forums and online bulletin boards operated. The difference, critics claim, is that large social media platforms such as Facebook and Twitter are far more ubiquitous than their 1990s counterparts, and their content moderation decisions impact hundreds of millions or even billions of users.[14]

Second, social media platforms amplify content, running algorithms that determine who sees what, and sometimes these algorithms promote harmful content.[15] Critics argue that when large platforms amplify harmful content, the impact is so significant (because hundreds of millions of users may see it), they should be liable for this content.[16]

The first problem with size-based carve-outs is, counterintuitively, they would actually be detrimental to competition. A small online company would benefit from Section 230 immunity, which would hopefully enable it to succeed and grow. But as it grew and approached the threshold at which it would lose immunity, it would have to make a difficult decision: pass the threshold and adapt on its own to a difficult new set of rules, or get acquired by a larger company that has already established its ability to succeed without immunity. Acquisition by a large, successful company is already a tempting offer; size-based carve-outs would further incentivize small companies to get acquired instead of continuing to grow on their own.[17]

Additionally, virtually all the “bad actors” critics reference when debating Section 230 are smaller companies. Large, established online services such as Facebook, Twitter, and Google have many incentives to address illegal and harmful content on their platforms, not the least of which is their reliance on advertising revenue. Most advertisers, especially national brands, do not want to be associated with websites known for hosting illegal activities or abuse. But there are smaller online services that profit directly from illegal or abusive third-party content—revenge-porn websites, for example—and under a size-based carve-out, they would continue to benefit from Section 230 immunity while many legitimate larger online services would not.

Finally, even if only large platforms had to do without Section 230, collateral censorship would still pose a problem. Smaller websites would have more freedom in their content moderation practices, but larger websites—the websites billions of people use daily around the world—would be more restrictive about the types of content they allow, thereby limiting free expression online. In addition, to the extent this allows smaller, more niche online services to thrive, it could further drive political polarization as people flock to like-minded online communities.

Establish Carve-Outs for Certain Types of Content or Activity

Similar to the proposal to keep Section 230 as is but create an exception for online services of a certain size, another proposal would keep Section 230 as is but create an exception for certain types of content or activity. These proposals usually target a


