Judges Rule Big Tech's Free Ride on Section 230 Is Over
Algorithms are no longer a Get Out of Jail Free card. The Third Circuit ruled that TikTok must stand trial for manipulating children into harming themselves. The business model of big tech is over.
“TikTok reads § 230 of the Communications Decency Act to permit casual indifference to the death of a ten-year-old girl.” - Judge Paul Matey
Want to know why it’s so difficult to touch the business models of extraordinarily powerful big tech firms? Well, for years, a law known as Section 230 of the Communications Decency Act granted them a de facto Get Out of Jail Free card, as long as they could say ‘the algorithm did it.’
I don’t tend to expect good things from the federal judiciary, but on Tuesday, the Third Circuit issued a shocking opinion rolling that law back and ending the liability shield that large tech firms use to escape consequences for bad acts. “This is a really substantial and important decision,” wrote legal scholar Zephyr Teachout. I emailed a political contact who has worked in this space and tangled with big tech for more than ten years to ask how important this decision is. His response? “HUGE.”
Before getting to what just happened, I want to explain why you haven’t heard about this news. Many big legal fights in politics have a lead-up, akin to a boxing match, to help us understand the stakes. “In this corner, the challenger, in that corner the world heavyweight champion…” With such a setup, journalists can write with a clear narrative.
For instance, in the Moody v. NetChoice opinion from earlier this summer, the Supreme Court came out with a strange argument about whether big tech firms get to shield themselves from regulation by saying that their activity is speech, and therefore protected by the First Amendment. That decision got a lot of coverage, because it was a Supreme Court case with extensive briefing and many groups paying attention.
But this Third Circuit decision just dropped on a random Tuesday. No one expected three judges to upend the business model of large tech firms, and shake up decades of precedent. But they did just that.
For sure, this news will trickle out. Every big tech CEO has gotten, or soon will get, a briefing from a panicked general counsel. Every law firm in the country is likely writing a client alert about this opinion. Plaintiff lawyers are sharpening their pens and examining new legal theories to go after tech platforms. Academics are ripping up their lectures on internet law. Big tech-friendly scholars are very upset. Supreme Court watchers are weighing when and how this gets to the highest court. But it’ll take a few days, or maybe weeks, to internalize what just happened.
Speaking of… what did just happen? Well, let’s start with the facts of the case, which are pretty horrible.
TikTok, Inc., via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated “For You Page.” One video depicted the “Blackout Challenge,” which encourages viewers to record themselves engaging in acts of self-asphyxiation. After watching the video, Nylah attempted the conduct depicted in the challenge and unintentionally hanged herself. Nylah’s mother, Tawainna Anderson, sued TikTok and its corporate relative ByteDance.
TikTok knew that such videos were causing kids to get into tragic accidents, and yet had its algorithm target children with them nonetheless. Anderson sued under Pennsylvania state law for product liability, negligence, and wrongful death. The district court dismissed the claims, ruling that TikTok wasn’t responsible, because TikTok was merely hosting the speech of others, not making the speech itself.
The part of the U.S. Code governing this conduct is, as I noted above, Section 230, a law passed in 1996 to deal with cases over whether a hosting service such as Prodigy or CompuServe was liable for defamatory comments posted by users on its message boards and in its chat rooms. The law says that if an “interactive computer service” hosts the speech of someone else, it is not liable for that speech; it is only liable for its own speech. It also gave services the right to filter without incurring liability. A platform is not liable for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”
So basically, these nascent platforms, or any future platform, would not be liable for the speech of users in their chatrooms, but would be liable for what their own moderators said. If one of them imposed filters on a chatroom to keep out porn, it would not become liable for the resulting speech.
Two things happened to change the scope of the legal impact. First, the internet grew tremendously, encompassing kinds of activity that did not exist in 1996, such as social media and data-heavy surveillance advertising. Second, judges expanded the reach of Section 230 to everything from online commerce to hotel booking sites, and found that platforms were never liable for what happened, even when the site was facilitating the harm. Following an expansive 1997 Fourth Circuit decision called Zeran v. America Online, eight different circuit courts have ruled along these lines.
It’s not just that big tech platforms take advantage of this law, though they certainly do today. This law created the business model of big tech platforms. There is no way to run a targeted-ad social media company with 40% margins if you have to make sure children aren’t harmed by your product. This legal reading not only made a world where Mark Zuckerberg didn’t have to care whether he was hurting kids, it made a world where he would lose out to rivals if he did.
As one judge wrote, “Today, § 230 rides in to rescue corporations from virtually any claim loosely related to content posted by a third party, no matter the cause of action and whatever the provider’s actions.” There’s a shape-shifting element here as well. When corporations want to avoid being regulated, they call what they do speech and claim First Amendment rights. But when someone tries to hold them liable for defamation or unlawful speech, they claim they are mere vessels for others, and thus immune under Section 230. This dynamic creates what one court called “a lawless no-man’s-land.”
What are some of the specific harms? Such bad acts include Grindr knowingly facilitating sexual violence, credit reporting agencies trying to avoid liability for mistakes in credit reports, and Facebook fostering ethnic violence. Identity thieves use social media sites to steal money; the IRS flagged more than 1 million tax returns for identity fraud in 2023, with the underlying information largely captured on LinkedIn or Meta’s sites. Today, “scammers use Facebook to impersonate soldiers so as to start fake long-distance relationships with lonely people, eventually tricking their victims into sending their ‘boyfriends’ money.” And social media firms use algorithms to target gambling addicts with ‘social casino’ apps. These platforms aren’t liable for any of it, because of the weird and creepy reading of Section 230 combined with a corporatized First Amendment.
This dynamic is like pollution, with large platforms financed by targeted ads offloading costs onto the rest of us. I wrote this up in 2022, when Clarence Thomas took a swipe at the expansive reading of the law:
The reason for the use and misuse of Section 230 is simple. Advertising money. In particular, the kind of advertising facilitated by large swaths of personal data depends on Section 230 immunity; otherwise dominant platforms would have to spend large amounts on content moderation, and probably couldn’t avoid liability even if they did. Thomas doesn’t like this setup, and in his statement, he pointed out that Facebook refused to do anything to stop the use of its services by human traffickers “because doing so would cost the company users—and the advertising revenue those users generate.”
That said, starting five years or so ago, courts began narrowing Section 230, with cases such as Lemmon v. Snap treating harmful features on tech platforms under product liability law instead of speech law. Still, all of these decisions are at the circuit court level. The Supreme Court heard two such cases in 2023, Gonzalez v. Google and Twitter v. Taamneh, but has yet to take a real position on the law.
The last major case the Supreme Court heard on the problem of big tech platforms and speech was, as I noted earlier, Moody v. NetChoice, whose opinion was released earlier this summer. In that case, Florida and Texas had passed laws saying that large tech platforms couldn’t discriminate based on the speaker’s political views. The Supreme Court ruled, with five votes, that such laws were likely unconstitutional, as the platforms’ editorial discretion to exclude political speech from curated feeds is protected by the First Amendment.
But interestingly, the court drew a distinction about which algorithms get First Amendment protection. If a feed responds “solely to how users act online—giving them the content they appear to want, without any regard to independent content standards,” it is not speech, and therefore it can be regulated. But if an algorithm includes editorial or content-moderation elements, it is speech and cannot be.
In Tuesday’s opinion, the Third Circuit drew on the NetChoice decision and inverted it. NetChoice, a trade association of big tech companies, sought, and got, First Amendment protection for curated feeds, which meant certain aspects of how platforms operate were beyond government regulation. But be careful what you wish for, because the Third Circuit has now ruled that everything said on those feeds is the responsibility of the tech platform.
The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment.
Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, it follows that doing so amounts to first-party speech under § 230, too.
Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech. And now TikTok has to answer for it in court. Basically, the court ruled that when a company chooses what to show kids and elderly parents, and seeks to keep them addicted to sell more ads, it can’t pretend it’s everyone else’s fault when the inevitable horrible thing happens.
And that’s a huge rollback of Section 230.
The business model of significant corporate actors today, from Google to Meta to TikTok, relies on them being immune from liability for what their users say even as they serve targeted advertising. That business model, of “keep ‘em swiping,” is now in jeopardy. With the Third Circuit splitting from how most other circuits have interpreted Section 230, and using the recent NetChoice opinion to do so, policymakers will now have no choice but to start working through the problems in Section 230. This case will get appealed, and it’ll likely go to the Supreme Court.
So what happens going forward? Well, we’re going to have to start thinking about what a world without this expansive reading of Section 230 looks like. To do that, let’s look at one of the more interesting parts of the decision. It doesn’t really make sense to say that TikTok is the “publisher” of the challenge videos, but it’s also not just a neutral host of them. And one judge really got that.
Surprisingly, questions of big tech power and speech do not split on partisan lines. On the Supreme Court, for instance, Clarence Thomas and Ketanji Brown Jackson frequently align against Brett Kavanaugh and Elena Kagan, with Amy Coney Barrett as a swing justice. Both Trump and Biden are hostile to Section 230. Republican regulators like Brendan Carr and Democratic regulators like Rohit Chopra and Lina Khan have for years expressed concern over the way the law has been misused.
In Tuesday’s Third Circuit case, an Obama appointee, Patty Shwartz, and a Trump appointee, Peter J. Phipps, agreed on the substantive decision, with another Trump appointee, Paul Matey, wanting to go much further in ratcheting back Section 230. Big tech power, in other words, is not always partisan; it has a populist/anti-populist split. As if to prove the point, Matey is a Federalist Society conservative, but he cited my book Goliath in his opinion.
So where did Matey go? Well, he agreed with overruling the district court, but broke with libertarian orthodoxy, bringing back a reliance on common law traditions. He showed how Congress’s understanding of common carriage rules in 1996 suggested that internet companies should bear the same degree of liability as, say, tobacco companies, pharmaceutical manufacturers, appliance makers, and food service establishments.
Instead of arguing that TikTok was the publisher of the challenge videos, Matey made the point that TikTok was their distributor. Drawing on common carriage traditions, he pointed to an older, pre-internet legal distinction between “publisher” liability for speech, which is to say the responsibility of the author or publisher, and “distributor” liability, which is to say the role of the bookstore, library, or newsstand.
The sale of indecent material has traditionally conferred different liability on the publisher of that material versus the distributor of it. The early Section 230 case law wrote that publisher/distributor distinction out of existence, and Matey says that case law was in error. With its algorithmic boosting, he said, TikTok is best understood as a distributor. That’s consistent with how Chopra and several Supreme Court Justices see the problem.
In other words, the fundamental issue here is not really whether big tech platforms should be regulated as speakers, as that’s a misconception of what they do. They don’t speak, they are middlemen. And hopefully, we will follow the logic of Matey’s opinion, and start to see the policy problem as what to do about that.
This case is going to be catalytic. If/when this goes to the Supreme Court, there are going to be a gazillion amicus briefs, and endless stories on how This Is the Case That Could Destroy/Save the Internet. And now plaintiff lawyers will think about the litigation they can bring. Without Section 230 as a shield, at least in the Third Circuit, is Facebook now liable for facilitating military scams? Are the big tech platforms going to have to face claims for helping violate state gambling laws or being a party to mass identity theft or sexual assault or child abuse? What about garden variety defamation claims they have been able to ignore until now? These are billion dollar questions.
With this in mind, I’d be surprised if most of Silicon Valley isn’t gearing up for a lobbying campaign to get Congress to enact Section 230 “reform,” which is to say, get this out of the court system by passing new laws. Some of the more realistic execs will start thinking about how to modify their business models to accord with a world where they are liable for what they do. As a different contact told me, “Huge nail in 230 coffin, maybe the only nail needed, although likely there will be a circuit split on question of whether/how to understand algorithm as expressive activity with respect to 230 so the Supreme Court will have to finish 230 burial in a few years.”
Of course, the Supreme Court could try to avoid the issue, or rule badly. Congress could override the courts. But at the very least, this decision forces the issue. And that means the gravy train, where big tech has been able to pollute our society without any responsibility, is likely ending.
Thanks for reading. Send me tips on weird monopolies, stories I’ve missed, or comments by clicking on the title of this newsletter. And if you liked this issue of BIG, you can sign up here for more issues of BIG, a newsletter on how to restore fair commerce, innovation and democracy. If you really liked it, read my book, Goliath: The 100-Year War Between Monopoly Power and Democracy.
cheers,
Matt Stoller