
Two Supreme Court cases could change the Internet as we know it

The Supreme Court is about to reconsider Section 230, a law that’s been the foundation of the way we have used the Internet for decades.

The court will hear two cases that, at first glance, are about online platforms’ liability for hosting accounts belonging to foreign terrorists. But at a deeper level, these cases could determine whether algorithmic recommendations should receive the full legal protections of Section 230.

The implications of removing that protection could be huge. Section 230 has frequently been called the key law that allowed the Internet to develop into what it is today, whether we like it or not.

The two cases waiting to be heard by the Supreme Court are Gonzalez v. Google and Twitter v. Taamneh. Both seek to draw big tech into the war on terror. The plaintiffs in both suits rely on a federal law that allows any US national who is injured by an act of international terrorism to sue anyone who knowingly provided substantial assistance to whoever carried it out. The reasoning is that the platforms, Google and Twitter, assisted terrorists by giving them the ability to recruit new members.

Section 230 is the provision that has, until now, protected those platforms from liability for user-generated content.

Section 230

Section 230 is a section of Title 47 of the United States Code that was enacted as part of the Communications Decency Act (CDA) of 1996, which is Title V of the Telecommunications Act of 1996. It generally provides websites with immunity from liability for third-party content.

What’s in question is whether providers should be treated as publishers or, alternatively, as distributors of content created by their users.

Before the Internet, a liability line was drawn between publishers of content and distributors of content. A publisher would be expected to have awareness of the material they published and could be held liable for it, while a distributor would likely not be aware and as such would be immune.

The key provision, Section 230(c)(1), reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Section 230’s protections have never been limitless, though; providers are still required to remove material that is illegal at the federal level, as in copyright infringement cases.

It all became more complicated when online platforms, and social media in particular, started using algorithms designed to keep us occupied. These algorithms make sure we are presented with content we have shown an interest in; the goal is to keep us on the platform for as long as possible while the platform earns advertising dollars. While the content is not created by the platform, the algorithm definitely does the platform’s bidding.
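As a rough illustration of what such a recommendation algorithm boils down to, here is a minimal Python sketch. The scoring weights and field names are entirely hypothetical and are not how any particular platform actually works; the point is only that the ranking optimizes for predicted engagement rather than for who created the content.

```python
# Hypothetical sketch: rank candidate posts by predicted engagement.
# Weights and fields are made up for illustration; real systems use far more signals.

def engagement_score(post, user):
    score = 0.0
    # Boost topics the user has interacted with before.
    for topic in post["topics"]:
        score += user["topic_affinity"].get(topic, 0.0)
    # Favor content that historically keeps people watching and reacting.
    score += 0.5 * post["avg_watch_time"] + 0.3 * post["reaction_rate"]
    return score

def build_feed(candidate_posts, user, limit=20):
    # The feed is simply the highest-scoring content, regardless of who created it.
    ranked = sorted(candidate_posts, key=lambda p: engagement_score(p, user), reverse=True)
    return ranked[:limit]
```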

In the early days (in cases that played out before the turn of the century), moderation was seen as an editorial action that shifted a platform from the distributor role to the publisher role, which didn’t exactly encourage platforms to start moderating.

In modern times, now that moderation has become the norm on social platforms, the scale of content moderation decisions that need to be taken is immense. Reportedly, within a 30-minute timeframe, Facebook takes down over 615,000 pieces of content, YouTube removes more than 271,000 videos, channels and comments, and TikTok takes down nearly 19,000 videos.
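Putting those reported figures in per-second terms (nothing more than arithmetic on the numbers above) gives a sense of the scale:

```python
# Per-second removal rates implied by the reported 30-minute figures.
window_seconds = 30 * 60
removals = {"Facebook": 615_000, "YouTube": 271_000, "TikTok": 19_000}
for platform, count in removals.items():
    print(f"{platform}: roughly {count / window_seconds:.0f} removals per second")
# Facebook: roughly 342, YouTube: roughly 151, TikTok: roughly 11
```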

Possible implications

Section 230, from an Internet perspective, is an ancient law, written at a time when the Internet looked very different than it does today. Which brings us back to the algorithms that keep people scrolling social media all day. One consequence of these algorithms noticing a preference for a particular subject is that they will serve you increasingly extreme content in that category.

Making platforms liable for the content provided by their users is likely to make everything a lot slower. Imagine what will happen if every frame of every video has to be analyzed and approved before it gets posted. We would soon see rogue social media platforms where you can’t sue anyone because the operators are hiding behind avatars on the Dark Web or in countries beyond the reach of US extradition treaties.

It could even have a chilling effect on freedom of speech, as social media platforms seek to avoid the risk of getting sued over the back and forth in a heated argument.

And what about the recent popularity surge we have seen in chatbots? Who will be seen as the publisher when ChatGPT and Bing Chat (or DAN and Sydney, as their friends like to call them) use online content to formulate a new answer without pointing out where they found the original content?

Let’s not forget sites with an immense user base, like Reddit, which largely depend on human volunteer moderators and a bit of automation to keep things civilized. Will those volunteers stick around when they can be blamed for million-dollar lawsuits against the site?

Even easily overlooked services like Spotify could face lawsuits if their algorithm suggests a podcast containing content considered harmful or controversial.

The Halting Problem

Stopping bad things from happening on platforms like Google and Twitter is an admirable ambition, but it is probably impossible. Even if they were able to fully automate moderation, they would quickly run into the halting problem associated with decision problems.

A decision problem is a computational problem that can be posed as a yes-or-no question about its input values. So, is this content allowed or not? That sounds like a simple question, but is it? Turing proved that no algorithm exists that can always correctly decide whether a given arbitrary program will halt when run on a given input. This is called the halting problem.
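For the curious, here is a minimal Python sketch of Turing’s argument. The halts function is purely hypothetical; the whole point of the sketch is that no such function can exist.

```python
# Hypothetical sketch of Turing's diagonal argument (not meant to be run to completion).

def halts(program, data):
    """Assumed perfect decider: returns True if program(data) eventually stops."""
    raise NotImplementedError  # the argument below shows no such function can exist

def paradox(program):
    # Do the opposite of whatever the decider predicts.
    if halts(program, program):
        while True:   # decider says it halts, so loop forever
            pass
    else:
        return        # decider says it loops, so stop immediately

# Ask the decider about paradox applied to itself:
# - if halts(paradox, paradox) is True, paradox loops forever, so it does not halt;
# - if it is False, paradox returns immediately, so it does halt.
# Either answer is wrong, so a perfect halts() cannot be written.
```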

A direct consequence of the halting problem is that no algorithm will always make the correct decision in a decision problem as complicated as content moderation.


