
Section 230: The 26 Words That Built the Internet (And Why Everyone Wants to Change Them)

  • Writer: Elle
  • 9 min read

Every time you post on social media, comment on a YouTube video, write a review on Amazon, or send a message in Discord, you're benefiting from a 26-word sentence written in 1996 that most people have never heard of.


Those 26 words are Section 230 of the Communications Decency Act, and they say:

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."


Sounds boring and technical, right? But these words are the reason Facebook, Twitter (now X), YouTube, Reddit, Wikipedia, and basically every other website where users can post content exist in the form we know today. Without Section 230, the internet would be radically different. Smaller. More controlled. Possibly nonexistent in its current form.


Section 230 has been called "the 26 words that created the internet" (the title of law professor Jeff Kosseff's book about the law). It's also been called "the most important law protecting internet speech." And lately, it's been called "a disaster" by politicians from both the left and the right who want to change or eliminate it entirely.


So what is Section 230? Why was it created? And why is it so controversial? Let's break down one of the most important (and misunderstood) laws shaping your online life.


The Problem Section 230 Solved

To understand Section 230, you need to understand the problem it was designed to solve. And that problem started with two court cases in the early 1990s that created a bizarre catch-22 for early internet companies.

Case 1: CompuServe Gets Off the Hook (1991)

In 1991, a company called CompuServe (an early online service) hosted online forums where users could post messages. Someone posted defamatory (false and reputation-damaging) statements about a company called Cubby Inc.


Cubby sued CompuServe. But the court ruled that CompuServe wasn't responsible because it didn't review or edit content before it went up. CompuServe was treated as a distributor, not a publisher. It was like a newsstand selling newspapers: the newsstand isn't responsible for what's printed in the papers unless it knows what's in them.


This seemed reasonable. CompuServe wasn't creating the content, so it wasn't liable.


Case 2: Prodigy Gets Punished for Being Responsible (1995)

Then came the Stratton Oakmont case. Prodigy was another early internet company that hosted message boards. Unlike CompuServe, Prodigy actively moderated its forums. It had guidelines about acceptable content. It employed moderators who removed some offensive posts.


Someone posted defamatory statements about Stratton Oakmont (a stock brokerage firm). Stratton Oakmont sued Prodigy.

And here's where things got weird: the court said that BECAUSE Prodigy moderated content, it was acting like a publisher (like a newspaper editor), which meant it WAS responsible for everything posted on its platform, even things it didn't see or approve.


The Catch-22

These two cases created an impossible situation:

If you DON'T moderate content at all (like CompuServe), you're not liable. If you DO moderate content (like Prodigy), you ARE liable for everything.


The message was clear: the safest legal strategy was to do absolutely no moderation. Don't remove hate speech. Don't take down harassment. Don't filter anything. Just let it all stay up, and you're protected.


This was terrible policy, especially as Congress was simultaneously worried about children accessing pornography and other inappropriate material online. Congress wanted companies to moderate and filter content to protect kids. But the courts were punishing companies for doing exactly that.


Enter Section 230

In 1996, Congress passed the Communications Decency Act (CDA) as part of the massive Telecommunications Act. The CDA was primarily aimed at restricting online pornography and indecent content.


Two members of the House, Chris Cox (a California Republican) and Ron Wyden (an Oregon Democrat), saw the Stratton Oakmont problem and added a provision that would become Section 230. Their goal was simple: encourage online platforms to moderate content without fear of being held liable for every single thing users post.


Section 230 has two main parts:

Section 230(c)(1): The famous 26 words. It says online platforms are not treated as the "publisher or speaker" of content posted by users. In other words, if someone posts something illegal or harmful on your platform, you're not automatically responsible for it the way a newspaper would be responsible for what it prints.

Section 230(c)(2): The "Good Samaritan" provision. It protects platforms from being sued for moderating content in good faith. If you remove posts that are "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable," you can't be sued for making those moderation decisions.


Together, these provisions say: You can host user content without being liable for what users post, AND you can moderate that content without losing your protection.


The catch-22 was solved.


What Section 230 Actually Does

Section 230 creates a simple rule: the person who creates content is responsible for that content, not the platform hosting it.

Examples of how this works:

Scenario 1: Someone posts a defamatory lie about you on Twitter. Under Section 230, you can sue the person who posted it, but you generally can't sue Twitter just for hosting it.

Scenario 2: Someone uploads a video to YouTube containing false medical advice that harms people. Victims can sue the person who made the video, but Section 230 protects YouTube from liability just for hosting it.

Scenario 3: A website moderates its forums and removes posts it deems offensive. Even if you disagree with the moderation decision, Section 230 protects the website from being sued over that decision (as long as it was made in good faith).


What Section 230 Doesn't Protect

This is crucial: Section 230 is not a blanket immunity from all laws. It has specific exceptions.


Section 230 does NOT protect platforms from:

Federal criminal law: If your platform violates federal criminal statutes, Section 230 won't save you.

Intellectual property law: Copyright and trademark claims aren't covered by Section 230. (That's why YouTube has to respond to copyright takedown requests under a different law called the DMCA.)

Sex trafficking law: In 2018, Congress passed FOSTA-SESTA, which carved out an exception to Section 230 for material that "promotes or facilitates prostitution" or sex trafficking.

Content the platform itself creates: If the platform creates or substantially develops content (not just hosting what users post), it's responsible for that content.

Communications privacy violations: Section 230 doesn't protect against violations of federal communications privacy law, such as the wiretapping rules in the Electronic Communications Privacy Act.


Why Section 230 Matters to You

You might think, "Okay, but how does this affect me? I'm not running Twitter."


Here's how Section 230 shapes your daily online experience:

You can comment on news articles: Websites allow comments because Section 230 protects them from liability for what you write.

You can review products and businesses: Amazon, Yelp, and Google Reviews all exist because Section 230 protects these platforms from being sued over negative reviews users post.

Social media exists: Facebook, Instagram, TikTok, Reddit, and every other platform where users create content rely on Section 230. Without it, they'd either have to pre-approve every single post (impossible at scale) or shut down entirely to avoid liability.

Wikipedia works: Anyone can edit Wikipedia. Without Section 230, Wikipedia would be sued constantly for misinformation or defamation in user-contributed articles.

Small websites can exist: It's not just big tech. Small blogs that allow comments, local news sites, and forum communities all benefit from Section 230. Without it, the legal risk would be too high for most small operators.


Section 230 essentially makes user-generated content possible at scale. It's why the internet is a participatory platform where anyone can contribute, not just a one-way broadcast medium like television.


The Controversy: Why Everyone's Mad at Section 230

Despite its importance, Section 230 has become intensely controversial. Critics from both political sides want to change or eliminate it, though for very different reasons.


Criticism from the Left

Progressive critics argue that Section 230 allows platforms to host harmful content without consequences:

Hate speech and harassment: Platforms can host racist, sexist, and hateful content and claim Section 230 immunity.

Misinformation: False information about health, elections, and other critical topics spreads unchecked.

Extremism: Platforms can host content from extremist groups and violent movements.

Insufficient moderation: Big tech companies could do more to remove harmful content but choose not to because Section 230 protects them either way.


The left-leaning critique essentially says: Section 230 gives platforms too much freedom to host bad content without accountability.


Criticism from the Right

Conservative critics argue that Section 230 allows platforms to censor conservative voices:

Biased moderation: Platforms disproportionately remove conservative content while leaving up liberal content.

Political censorship: Big tech companies suppress conservative viewpoints, particularly around elections and COVID-19.

Inconsistent enforcement: Moderation policies are applied selectively based on political ideology.

Too much power: Section 230 protects platforms' moderation decisions, giving them unchecked power to decide what speech is allowed.


The right-leaning critique essentially says: Section 230 gives platforms too much freedom to remove content without accountability.


The Irony

Here's the strange part: both sides are mad at Section 230, but for opposite reasons. The left thinks it protects platforms from liability for hosting too much bad content. The right thinks it protects platforms from liability for removing too much legitimate content.


This suggests Section 230 is doing exactly what it was designed to do: staying neutral and letting platforms make their own moderation decisions.


Recent Legal Challenges

Section 230 has faced several major legal challenges recently:

Gonzalez v. Google (2023)

A family sued Google, claiming YouTube's recommendation algorithm helped ISIS recruit members and spread propaganda, contributing to the death of their daughter, who was killed in the 2015 terrorist attacks in Paris.

The question: Does Section 230 protect YouTube's algorithm recommendations, or only the hosting of content?

The Supreme Court punted, declining to address Section 230 directly and instead resolving the case in light of its companion ruling in Twitter v. Taamneh (discussed next).

Twitter v. Taamneh (2023)

Similar facts: Did Twitter aid and abet terrorism by allowing ISIS content on its platform?

The Supreme Court ruled against the plaintiffs, finding that merely hosting content (even extremist content) doesn't constitute aiding and abetting terrorism.

By sidestepping the Section 230 question, these cases left the law's broad protections intact, including lower-court rulings that extend them to algorithmic recommendations of user content.


Proposals to Change Section 230

Numerous bills have been proposed to modify or eliminate Section 230. Some examples:

Sunset it entirely: Some proposals would eliminate Section 230 after a certain date, forcing Congress to write a replacement.

Make it conditional: Platforms would have to meet certain requirements (transparency, appeals processes, etc.) to maintain Section 230 protection.

Remove it for algorithms: Platforms would be protected for hosting content but not for algorithmically recommending it.

Strengthen it with reforms: Add requirements for transparency and due process while keeping the core protections.

Carve out more exceptions: Create additional categories (like the sex trafficking exception) for other types of harmful content.


As of 2026, Section 230 remains intact, though the debate continues.


What Would Happen Without Section 230?

If Section 230 were repealed entirely, several things would likely happen:

Mass content removal: Platforms would become extremely cautious, removing anything remotely controversial to avoid lawsuits. Over-moderation would replace under-moderation.

Small platforms would close: Only giant companies with huge legal budgets could afford the risk of hosting user content. Small forums, blogs with comments, and community sites would disappear.

Less free speech, not more: Ironically, eliminating Section 230 in the name of "free speech" would likely result in much more censorship as platforms play it safe.

Pre-approval systems: Some platforms might require pre-approval of all content before it's posted, destroying the real-time nature of social media.

Internet fragmentation: The internet might become more like cable TV, with a small number of approved channels rather than open participation.


The Bottom Line

At its core, Section 230 is a 26-word sentence that protects online platforms from being held liable for content their users post, while also protecting their right to moderate that content in good faith. It was created in 1996 to solve a specific problem: courts were punishing companies for moderating content, which discouraged them from filtering out harmful material. Section 230 removed that disincentive.


The law has made the modern internet possible. User-generated content platforms, from social media to review sites to Wikipedia, all depend on Section 230's protections. But the law is controversial. Critics on the left say it allows too much harmful content to stay up. Critics on the right say it allows too much legitimate content to be taken down. Both sides want changes.


The debate over Section 230 is really a debate about who should decide what content is acceptable online: the platforms themselves, the government, or the courts. Should platforms be treated like neutral phone companies (just carrying whatever comes through) or like publishers (responsible for what they host)?


There are no easy answers. Section 230 represents a particular balance, one that has enabled incredible innovation and communication but also enabled harm at unprecedented scale.


As you scroll through social media, post comments, or share content, you're participating in an ecosystem shaped by those 26 words written in 1996. Whether that ecosystem survives in its current form or changes dramatically depends on political decisions being made right now about Section 230's future.


The internet you know was built on Section 230. The internet of the future might be built on something very different.


Sources

Congressional Research Service. (2021). Section 230: An Overview. Retrieved from https://www.congress.gov/crs-product/R46751

Electronic Frontier Foundation. Section 230. Retrieved from https://www.eff.org/issues/cda230

First Amendment Encyclopedia. (2024). Communications Decency Act and Section 230. Retrieved from https://firstamendment.mtsu.edu/article/communications-decency-act-and-section-230/

Information Technology and Innovation Foundation. (2021). Overview of Section 230: What It Is, Why It Was Created, and What It Has Achieved. Retrieved from https://itif.org/publications/2021/02/22/overview-section-230-what-it-why-it-was-created-and-what-it-has-achieved/

Legal Information Institute, Cornell Law School. 47 U.S. Code § 230 - Protection for private blocking and screening of offensive material. Retrieved from https://www.law.cornell.edu/uscode/text/47/230

PBS NewsHour. (2023). What you should know about Section 230, the rule that shaped today's internet. Retrieved from https://www.pbs.org/newshour/politics/what-you-should-know-about-section-230-the-rule-that-shaped-todays-internet

Wikipedia. (2025). Section 230. Retrieved from https://en.wikipedia.org/wiki/Section_230

Wilson Sonsini Goodrich & Rosati. (2023). Section 230 of the Communications Decency Act Under Fire Once Again. Retrieved from https://www.wshblaw.com/publication-section-230-of-the-communications-decency-act-under-fire-once-again
