What is Section 230 of the Communications Decency Act?
You have likely heard increasing discussions and debates in recent years about a law commonly referred to as “Section 230” and its impact on the Internet. But what is Section 230? And why is this law so critical to online discourse?
Section 230 is the keystone law that makes the Internet as we know it possible. Passed in 1996 as part of the Communications Decency Act (CDA), this critical statute provides broad legal immunity to online platforms for claims arising out of user-generated content and activity. Without the protection of Section 230, many of the most popular websites, such as Facebook, Instagram, TikTok, Reddit, YouTube, X (Twitter), and Yelp, could be sued out of existence.
But Section 230 is certainly not without controversy. Many argue that it allows for the spread of misinformation, hateful speech, and other harmful online behavior. There are mounting calls to reform or even repeal the law.
In this article, I will walk you through exactly what Section 230 says and does, why it was enacted, how courts have interpreted it, and the potential implications of the current battles over its future. By the end, you will have a clear understanding of the law most essential to how the Internet functions today.
What Does Section 230 Actually Say?
At its core, Section 230 does two main things:
- Section 230(c)(1) states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This means that online platforms such as Facebook or YouTube cannot be sued for things their users post. By contrast, a print newspaper can be held liable for publishing content contributed by one of its subscribers, such as a letter to the editor. For speech-related claims, online platforms are simply not treated as the publisher or speaker of their users’ content.
- Section 230(c)(2) states that no provider or user of an interactive computer service shall be held liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” This subsection protects online platforms from liability for moderating or removing user content they deem inappropriate, even if that content is constitutionally protected speech.
Section 230, therefore, grants broad immunity to online platforms for displaying and hosting user-generated content. A website does not lose that immunity by simply exercising its editorial control or otherwise policing its platform.
There are a few key limitations and exceptions:
- Section 230 does not apply to violations of federal criminal law, intellectual property law, or the Electronic Communications Privacy Act, so platforms can still face liability under those laws.
- An online platform may still be deemed liable if it materially contributes to or develops the content itself.
- The law does not shield websites from any action arising out of their own publications and other contributions.
But in general, exceptions to Section 230 are intentionally few and far between. Lawmakers deliberately departed from the traditional rules of publisher liability, and that decision has proven essential to the growth of the Internet and the free flow of information it allows.
Why Was Section 230 Enacted?
To understand why Congress passed Section 230, we have to go back to the early days of the Internet. In the 1990s, there were two seminal court cases that established conflicting legal standards for when online service providers could be held liable for user content.
The first was the 1991 case Cubby, Inc. v. CompuServe. The defendant, CompuServe, hosted online forums but did not moderate them. When a user posted defamatory content on one of its forums, the court ruled that CompuServe was not liable because it was merely a distributor, not a publisher, of the content. The court compared CompuServe to a library or bookstore that cannot be expected to review every publication it carries for unlawful material.
A few years later, a New York court reached the opposite conclusion in Stratton Oakmont v. Prodigy Services. Prodigy more actively moderated the forums it hosted, screening for profanity and otherwise offensive content. Because of this, the court ruled that Prodigy could be held liable as a publisher for defamatory statements made by one of its users. The court said that, by moderating content, Prodigy was exercising editorial control and therefore taking responsibility for what was published.
Together, these two cases created a major dilemma for online service providers. If they chose to moderate any user content, they could potentially be held liable for all of it. But if they took a totally hands-off approach and let anything go, they would be protected. This discouraged much-needed content moderation and threatened to let the Internet become an unmoderated cesspool lacking any oversight or control.
Congress recognized this problem and stepped in. The lawmakers behind Section 230 – then-Reps. Chris Cox and Ron Wyden (Wyden is now a Senator) – wanted to encourage online service providers to self-regulate content without fear of liability. They also wanted to empower parents to restrict their children’s access to inappropriate material online. The broader Communications Decency Act, of which Section 230 was a part, prohibited the transmission of indecent content to minors. (That part of the CDA was later struck down by the Supreme Court on First Amendment grounds in Reno v. ACLU (1997), but Section 230 remained intact.)
At the same time, Congress did not want platforms to be completely unaccountable for unlawful content. So Section 230 included the limited exceptions mentioned above. The idea was to strike the proper balance between competing and compelling interests.
As Rep. Cox explained at the time, “We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see.”
The final product of Section 230 effectively overruled Stratton Oakmont and established a clear national standard. The law gives online platforms the freedom to make content moderation choices as they see fit without facing liability for everything posted by their users. This immunity would become the critical legal pillar upon which much of the modern internet was built.
How Have Courts Interpreted Section 230?
Over the past three decades, the applicability and scope of Section 230 have been extensively litigated. With very limited exceptions, the overwhelming majority of court decisions have upheld a broad interpretation of 230’s immunity. Courts have repeatedly affirmed that online platforms are generally not liable for user-generated content, even when they have been placed on notice of it and still decline to remove it.
One of the earliest and most influential Section 230 cases was Zeran v. America Online in 1997. In that case, an anonymous user posted false and defamatory messages on an AOL bulletin board, implicating the plaintiff (Zeran) in the Oklahoma City bombing. Even after Zeran notified AOL of this defamation and harassment, the company did not immediately remove the posts.
The Fourth Circuit Court of Appeals ruled that Section 230 shielded AOL from liability for the defamatory content posted by third parties. The court emphasized that the purpose of Section 230 was to protect online intermediaries from the burdens of monitoring and removing potentially harmful content. The decision noted that if platforms could be held liable as publishers every time they received notice of objectionable content, they would face an impossible burden to mitigate the harm, and the natural consequence would be to severely restrict users’ speech in order to avoid legal exposure.
The Zeran precedent established a strong presumption of immunity for online platforms and set the tone for future cases. In 2003, the Ninth Circuit extended 230’s protections to a matchmaking website in Carafano v. Metrosplash.com. The court ruled that even though the site required users to complete questionnaires with pre-populated answers, it was still not an “information content provider” responsible for user responses.
However, a few years later, the Ninth Circuit established an important limitation to 230 immunity in Fair Housing Council of San Fernando Valley v. Roommates.com. In this case, a housing website required users to answer questions about their gender and sexual orientation, as well as their preferences for tenants. The court held that by requiring users to provide this information as a condition of using the service and by providing pre-selected discriminatory answers, Roommates.com was indeed a content creator not entitled to immunity under Section 230.
The Ninth Circuit therefore drew a distinction between an online platform passively hosting user-generated content and one materially contributing to the development of unlawful content. For a website to lose 230 immunity, the court held that it must be “responsible, in whole or in part, for the creation or development of the offending content.”
The Roommates.com decision suggested that 230’s protection is not unlimited, and that an online platform could be liable if it explicitly encourages or contributes to the development of the objectionable content. But in practice, courts have applied this exception very narrowly.
For example, in Jones v. Dirty World (2014), the Sixth Circuit found that the “material contribution” test was not met when an offensive website selected and edited user-submitted gossip stories. Even though the site screened, edited, and commented on the submissions, the court held it was still protected by 230 because the “essential published content” was created by third parties.
Similarly, in Herrick v. Grindr (2019), the Second Circuit upheld 230 immunity for a dating app accused of negligently failing to police its platform after the plaintiff’s ex-boyfriend used fake profiles to impersonate him and direct a campaign of harassment at him. The court held that 230 protects an online service provider even when it has actual knowledge of misconduct by its users and even when it makes editorial decisions about what content it will allow to remain published.
This litany of decisions illustrates how courts have consistently interpreted Section 230 to provide broad immunity for online platforms, except in rare and well-defined circumstances. Consistent with the law’s intent, courts have declined to strip immunity merely because a platform exercises traditional editorial functions.
This vital legal protection has allowed the most popular online platforms to flourish and massively scale up without facing crippling liability exposure for each piece of user-generated content they host. However, as we will explore in more depth below, there is a growing backlash from those who argue that Section 230 gives online platforms a pass to ignore serious harms facilitated through their services.
What Impact Has Section 230 Had?
It is hard to overstate the impact that Section 230 has had on the growth of the Internet over the past three decades. By shielding online platforms from liability for user-generated content, the law has enabled the proliferation of social media, online marketplaces, discussion forums, consumer review sites, and countless other services that billions of Internet users rely upon each day.
Without Section 230, platforms like Facebook, YouTube, Twitter, Yelp, and Wikipedia would likely not exist, at least not in their current form. If these websites were potentially liable for every piece of content posted by their users, they would face an insurmountable moderation burden and devastating legal costs. They would have to heavily censor user speech or simply not host it at all.
Consider a site like YouTube, where users upload roughly 500 hours of video every minute. If YouTube could be sued over every defamatory video a user posted, or every time it was put on notice of one, it would be impossible for the platform to operate; the potential liability and moderation burdens would be astronomical. With Section 230, YouTube can let users post content instantly while retaining the discretion to remove videos that violate its policies.
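To make that scale concrete, here is a rough back-of-envelope calculation in Python. The 500-hours-per-minute figure comes from the paragraph above; the reviewer throughput is purely an illustrative assumption, not data from YouTube or any other platform.

```python
# Back-of-envelope illustration of the pre-screening burden.
# Assumed figures only: reviewer throughput is hypothetical.

UPLOAD_HOURS_PER_MINUTE = 500          # figure cited in the text
MINUTES_PER_DAY = 60 * 24

# Total hours of video uploaded each day
upload_hours_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY  # 720,000

# Assume one full-time reviewer could watch 8 hours of footage per day
REVIEW_HOURS_PER_SHIFT = 8
reviewers_needed = upload_hours_per_day / REVIEW_HOURS_PER_SHIFT  # 90,000

print(f"Video uploaded per day: {upload_hours_per_day:,.0f} hours")
print(f"Reviewers needed just to watch each upload once: {reviewers_needed:,.0f}")
```

Even under these generous assumptions, simply watching every upload once would require a workforce of roughly 90,000 full-time reviewers, before any legal analysis of each video even begins.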
The same is true for consumer review platforms like Google, Yelp, or TripAdvisor. These sites host millions of user opinions about businesses and professionals, providing valuable information and insight to the consuming public. But these review platforms would face a deluge of lawsuits from businesses trying to silence their critics if they were legally responsible for every negative review.
Another good example is Wikipedia, the nonprofit online encyclopedia that anyone can edit. Wikipedia hosts millions of articles on every conceivable topic, all written and edited by volunteers. This model of collaborative knowledge-sharing simply would not be possible if Wikipedia could be sued for any mistakes or defamatory statements that a user might add to a page. Section 230 allows Wikipedia to operate as an open platform while still enforcing standards and removing inaccurate content.
In this way, Section 230 has been credited with facilitating the “democratization” of content creation online. No longer do you need to own a printing press or a broadcast tower to share your ideas with the world. Anyone with an internet connection can post their thoughts on social media, contribute to a blog, or leave a review of a product or service. Section 230 has not only allowed such platforms to exist, it has encouraged them to thrive and expand. These effects have undoubtedly led to greater opportunities for free expression and civic participation.
Section 230 has also allowed for the development of advanced content moderation systems that can detect and remove harmful content at scale. Because online platforms cannot be held liable for a decision to remove or not remove user posts, they have the flexibility to develop and enforce their own community standards. They can combine automated filters, user reporting systems, and human review to identify and take down content like hate speech, harassment, and misinformation.
Without Section 230, websites would have much less incentive to proactively moderate their platforms for fear of being treated as a publisher subject to legal liability. The law’s Good Samaritan provision – 230(c)(2) – ensures that platforms can take steps to regulate themselves and create healthier online spaces, without facing a flood of lawsuits over each content decision.
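As a purely illustrative sketch of the layered approach described above – automated filtering, publication without pre-screening, and user reports routed to human review – consider the simplified Python example below. Every class name, threshold, and blocked term here is hypothetical; no real platform’s moderation system is this simple or exposed through an API like this.

```python
from dataclasses import dataclass, field

# Hypothetical, hard-coded blocklist standing in for automated filters
BLOCKED_TERMS = {"spam-link.example", "blockedterm1", "blockedterm2"}

@dataclass
class Post:
    author: str
    text: str
    reports: int = 0

@dataclass
class ModerationQueue:
    pending_human_review: list = field(default_factory=list)

    def submit(self, post: Post) -> str:
        # Layer 1: automated filter removes clear policy violations instantly.
        if any(term in post.text.lower() for term in BLOCKED_TERMS):
            return "removed_automatically"
        # Layer 2: everything else publishes immediately; Section 230 lets the
        # platform host it without pre-screening while reserving the right to
        # remove it later under its own community standards.
        return "published"

    def report(self, post: Post) -> None:
        # Layer 3: user reports escalate borderline content to human reviewers.
        post.reports += 1
        if post.reports >= 3 and post not in self.pending_human_review:
            self.pending_human_review.append(post)

# Example usage
queue = ModerationQueue()
post = Post(author="user123", text="Check out this restaurant review!")
print(queue.submit(post))  # -> "published"
```

The point of the sketch is the ordering: filtering and removal decisions happen at the platform’s discretion, after publication, which is exactly the behavior 230(c)(2) protects.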
In economic terms, Section 230 has been a major driver of the U.S. digital economy. A 2017 study by NERA Economic Consulting estimated that weakening intermediary liability protections could reduce U.S. GDP by over $400 billion and cost over 4 million jobs. The analysis found that the legal certainty provided by 230 has been essential for the growth of small and mid-sized technology companies.
But while Section 230 has undoubtedly enabled much of what we value about the modern internet, it has also come with serious costs and unintended consequences. The same protections that allow for the free exchange of ideas and information online can also shield bad actors from accountability for destructive real-world harms – as the next section will discuss in more detail.
What Are the Main Critiques of Section 230?
For all of its benefits, Section 230 is certainly not without its critics. In recent years, there has been a growing bipartisan backlash against the law, with politicians and commentators from across the political spectrum arguing that it gives online platforms too much immunity and too little incentive to police harmful content.
One of the main critiques of 230 is that it has enabled the proliferation of illegal, offensive, and abusive content online by removing any liability risk for online platforms that host the content. This includes things like:
- Hate speech, harassment, and cyberbullying
- Disinformation, conspiracy theories, and fake news
- Illegal goods and services, like drugs, weapons, and counterfeit products
- Child sexual abuse material and sex trafficking ads
- Terrorist propaganda and extremist content
- Deepfake images, video, and audio
- Nonconsensual pornography (aka “revenge porn”)
Critics argue that Section 230 gives platforms a “free pass” to ignore or even actively promote this harmful content since they face no legal consequences for doing so. As long as the platform itself does not create or co-develop the content, it is free to monetize and algorithmically amplify it without fear of liability.
A tragic example of this came to light in 2016 when it was revealed that the classified ads website Backpage.com had been facilitating underage sex trafficking on its platform for years. Because of Section 230, victims and state prosecutors were unable to hold Backpage accountable for its role in enabling and profiting from these crimes. It took a two-year Senate investigation and a federal criminal prosecution to finally shut Backpage down in 2018.
In response to the Backpage scandal, Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) in 2018, which created an exception to Section 230 for content related to sex trafficking. But critics argue that FOSTA has had unintended consequences, like pushing sex work offline and censoring legitimate sexual health content. They say the law jeopardizes the safety of consensual sex workers who rely on online platforms to screen clients and protect themselves.
Another frequent critique of 230 is that it facilitates the spread of misinformation and disinformation online. Because platforms face no consequences for hosting false or misleading content, critics argue, they have little incentive to fact-check posts or remove viral conspiracy theories. This has come to the forefront during the COVID-19 pandemic and the 2020 presidential election, with rampant falsehoods about vaccines, voting fraud, and other issues spreading rapidly on social media.
Some argue that the current interpretation of 230 actually discourages platforms from moderating content because doing so could make them look more like publishers exerting editorial control. They say platforms are incentivized to take an “anything goes” approach to avoid liability. At a House hearing in 2019, one law professor compared 230 to a “get out of jail free” card and argued the law should be reformed to require a duty of care for platforms.
Critics on the right accuse platforms of using 230’s content moderation protections as cover to suppress conservative viewpoints, arguing the law enables liberal “censorship” by tech companies. When Twitter began labeling President Trump’s tweets as potentially misleading in 2020, Trump signed an executive order attempting to limit 230 immunity for platforms that allegedly censor speech in bad faith (though many legal experts questioned the order’s validity and enforceability).
Other common critiques of Section 230 include:
- It protects anonymous trolls and bad actors from facing consequences for their words by allowing them to easily create throwaway accounts.
- It makes it difficult for victims of online abuse or defamation to identify their attackers or seek legal redress.
- It gives a handful of dominant tech companies outsized control over online speech and information access, with little public accountability.
- It was written for the internet of 1996 and has not kept pace with the modern realities of mega-platforms that shape public discourse.
Fundamentally, the debate over Section 230 is a debate over who should be responsible for the real-world impacts of online content – the people who create it, the services that host it, or both. Critics argue that the pendulum has swung too far toward shielding platforms from any responsibility, while defenders warn that diluting 230 would break the Internet as we know it.
What Does the Future Hold for Section 230?
As the critiques of Section 230 have grown louder in recent years, so too have the calls to reform or even repeal the law entirely. Both liberal and conservative lawmakers have introduced a flurry of bills in Congress aimed at modifying the statute, though for different reasons.
Many Democrats want to amend 230 to pressure platforms to more aggressively moderate hate speech, harassment, and misinformation. Proposals include stripping 230 immunity for platforms that amplify certain types of harmful content through their algorithms, display behavioral advertising, or fail to meet a “duty of care” to remove unlawful material.
Some Republicans, on the other hand, want to use 230 reform to combat perceived anti-conservative bias by tech platforms. Several GOP proposals would remove immunity for platforms that are found to moderate content in a politically biased manner or without clear, consistently applied standards. The goal is to discourage platforms from taking down contested speech.
Still, other proposals focus on increasing transparency and due process in content moderation. One bipartisan bill, the PACT Act, would require platforms to publish clear content policies, provide detailed takedown notices to users, and offer appeals processes for removed content. Another bill, the CASE-IT Act, would establish an FTC-administered program for users to appeal content decisions.
In 2023, the Supreme Court considered the scope of Section 230 for the first time in Gonzalez v. Google. The plaintiffs argued that Google’s YouTube should be liable for recommending ISIS recruitment videos to users through its algorithms. The Court ultimately declined to decide the Section 230 question, disposing of the case on other grounds (its companion decision, Twitter v. Taamneh, held that the underlying terrorism claims failed) and leaving the lower courts’ broad reading of the law in place.
Even so, individual justices have signaled an openness to reconsidering 230’s scope in a future case. Justice Clarence Thomas has written, in statements accompanying earlier certiorari denials, that in an “appropriate case” he would consider whether the “sweeping immunity” courts have read into 230 goes beyond the law’s original text, and several justices at the Gonzalez oral argument questioned how far the statute’s protections should extend.
The political pressure and judicial scrutiny of Section 230 are unlikely to let up anytime soon. Even the most ardent defenders of the law may agree it could use some updating after nearly three decades on the books and given how much the Internet has evolved. But exactly how to modify 230 to better address harmful online content, without gutting the law’s core protections for free expression and innovation, remains a difficult question.
Some proposed reforms, short of a full repeal, include:
- Carving out additional exceptions to 230 immunity for specific types of harmful content, such as civil rights violations, cyberstalking, or nonconsensual pornography. This would open platforms up to more lawsuits for hosting this material.
- Conditioning 230 immunity on meeting certain standards or best practices for content moderation, such as those developed by civil society groups. Platforms that fail to make good-faith efforts to address illegal content could lose their protection.
- Requiring greater transparency from platforms about their content moderation practices and outcomes, as well as government-mandated appeals processes for users who believe their content was wrongly removed. This could provide more accountability and “due process” for speech.
- Narrowing 230’s protection to cover only truly third-party content, not content that platforms actively amplify or profit from via behavioral ad targeting. This aims to incentivize platforms to avoid spreading harmful material.
- Exempting small businesses and startups from any new obligations to avoid entrenching incumbent platforms. Some propose a “tiered” system where larger platforms shoulder more responsibility.
Importantly, proponents argue that any Section 230 reforms must be carefully crafted to avoid constitutional issues. Requiring platforms to host certain types of speech could violate the First Amendment, while vague standards could chill protected expression. Policymakers must also be cautious not to impose one-size-fits-all mandates on a diverse online ecosystem.
Ultimately, the challenge facing any effort to reform Section 230 is how to balance three competing values: free speech, innovation, and online safety. How do we preserve the Internet’s openness and dynamism while mitigating its very real harms? How do we hold powerful platforms accountable without crushing smaller services and startups? And how do we adapt decades-old legal frameworks for an increasingly complex online world?
As policymakers, courts, and the public grapple with these questions in the years ahead, the fate of Section 230 – and the future of online expression – hangs in the balance. The next quarter-century of the internet will be shaped by the choices we make about platform responsibility and regulation. We must proceed thoughtfully and with great care, understanding just how much is at stake.
Conclusion: Why Section 230 Still Matters
Section 230 remains the most important law shaping online speech and content today. While the Internet has evolved dramatically since 1996, the core insight behind Section 230 endures: if platforms could be sued for everything their users say or do, much of the Internet would cease to exist.
At the same time, legitimate concerns about online harms have put Section 230 in the crosshairs. As policymakers debate the law’s future, it’s crucial that any changes carefully balance the needs to foster innovation, empower users, and limit real-world damage.
One thing is clear: the fight over Section 230 is perhaps best described as a proxy battle over the soul of the Internet itself. The choices we make about online platform immunity will determine what kind of digital public sphere we have for generations to come.
Preserving an open and vibrant internet while mitigating its worst abuses is one of the great challenges of our time. Section 230 alone cannot solve the myriad problems of our information ecosystem. But it remains a vital tool for balancing the competing values at the heart of the Internet.
We must not take Section 230’s benefits for granted, nor ignore its costs. We need an honest and nuanced debate about how to adapt this law for a radically changed online world. But that debate must be grounded in a clear understanding of what Section 230 actually says and does – not just political talking points.
Hopefully, this article has given you that understanding. Section 230 is complicated and controversial, but it is not some esoteric detail of tech policy. It shapes the very fabric of our digital lives, even if most people have never heard of it.
The next time you see a politician or pundit opining about 230, ask yourself: do they actually understand the law and its impact? Are they grappling with the real tradeoffs involved, or just trying to score political points? Have they considered the potential unintended consequences of the reforms they propose?
Holding our policymakers accountable for the future of Section 230 – and the internet at large – begins with an informed citizenry. It is up to all of us to dig beneath the rhetoric and engage this critical issue with clarity and care. The conversations we have and the choices we make in the coming years about online speech will shape the trajectory of digital discourse for decades to come. We owe it to ourselves and future generations to get it right.
So keep learning about Section 230 – its history, its complexities, and its stakes. Share your knowledge with others. We must work together to build an internet that reflects the best of who we are: open and innovative, yet also safe and sustainable for all.
The future of our online public square depends on these critical societal decisions.
You Don’t Have to Face Online Defamation Alone
If you are the target of defamatory attacks, don’t suffer in silence. Take action today and schedule a confidential consultation with one of our skilled defamation attorneys.
We’ll listen to your story, assess your legal options, and develop a tailored strategy to help you remove the false content and hold the defamers accountable. Together, we can help you reclaim your online reputation and your peace of mind. To learn more and set up a consultation for experienced legal advice, fill out our contact form below or call us at 216-373-7706.
Get Help Today.