Elon Musk and Selling Tickets to the End of the World
We shouldn't be arguing about Elon Musk. It's time for public utility rules and the end of Section 230.
Welcome to BIG, a newsletter on the politics of monopoly power. If you’d like to sign up to receive issues over email, you can do so here.
Today, Elon Musk announced a takeover attempt of Twitter, the social networking company that dominates elite political and cultural discourse. I don't know what his end goal is, and that is not, in my view, very important. The basic problem isn't going to be addressed by new ownership, and that problem is simple: Twitter's current business model is bad for us, and censorship, while real, is not the only, or perhaps even the most significant, problem. The same is true of Facebook, YouTube, TikTok, and all social media. These models are addictive, they depress us, they make us hate each other, and they tear societies apart by fostering ethnic conflict.
The causal factor, as usual, is the regulatory scheme we use to organize internet business models, a scheme that facilitates, instead of prevents, conflicts of interest. Twitter is a communications network that sells ads, so it has an incentive to manipulate what we say to each other to keep us engaged and to sell more ads. But even though Twitter structures speech, it is not liable for the speech it structures. The same goes for Facebook, Google, Grindr, TikTok, and the rest. This is not how telephone networks work. You pay your telecom provider directly for the service; it does not care whom you call or what you say, and it does not try to manipulate its customers. Telephone carriers are not conflicted because they do not sell ads; they sell a common carrier service. As a result, they are not liable for what their customers talk about.
Should tech platforms be liable for what users say on their platforms? Of course. Let's start with the silly idea that the First Amendment has no limits. Defamation, product liability, harassment, intentional infliction of emotional distress, and the like are all common law torts designed to let people use the court system to block harmful speech. Civil rights rules and credit reporting rules, and even things like laws against insider trading, are also ways we structure speech to allow us to have a coherent society. But these limits work because they are public, meaning they require going through courts and meeting a high burden before limiting what someone else can say.
For hundreds of years, courts evolved these rules based on new social norms and technologies, so that we could do things like stop people from dueling to protect their reputation. Court cases brought forth evidence that showed where to draw and redraw lines. Through legislatures, we also structured media rules and built things like the Post Office to make sure that we had free expression, but within localized communities (through things like local newspapers and local radio stations). You could say anything you wanted, but you couldn't be insulated from the consequences of saying it. There were many problems with this model, but it basically worked to sustain a coherent democratic system.
The problem is that in 1996, we passed a law called Section 230 of the Communications Decency Act, which immunized website owners from liability for what users say on their platforms. Section 230 is the legal cornerstone of digital platforms. There's even a book titled "The Twenty-Six Words That Created the Internet," because the key provision is just twenty-six words long. This law swept away all of these public limits on speech situated in local courts. Section 230 was also part of a libertarian move to deregulate media, communications, and antitrust rules, and so what ended up replacing the structuring of the public square by local communities was private censorship by globe-spanning platforms.
These firms make money when there is incendiary content, because incendiary content gets people to pay attention and hand over more data. But the firms aren't liable for that content, even when it facilitates things like race riots, gang shootings, or political violence. To put it differently, if a newspaper regularly trumpeted the need to shoot someone, it would be shut down under common law unless it stopped doing that. But Twitter, or any other social network, can amplify such threats without taking any steps at all to mitigate harm, and it can even sell ads against death threats.
The answer, therefore, isn't for Elon Musk to buy Twitter, or not to buy Twitter. It's to repeal Section 230 so that these social networks have to actually take responsibility for the content they disseminate. Communications platforms should once again be seen as a public asset. This history is coming back; Clarence Thomas, for instance, recently realized that common carrier rules should apply to tech platforms. The sooner we get started the better. It's a problem that we haven't developed tort law on platforms since 1996, but it's time to start. Otherwise Twitter will keep making money selling tickets to the end of the world.
I like it, Matt. I hope you're familiar with one of the earliest internet ideas: connect every post everywhere with the IP address it originated from. Charge two to five cents for each post to any online website or app. This money would go to the government to expand internet access for all.
As Mark Stahlman taught me to say, freedom disconnected from healthy, right duties and obligations is immature and childish.
"To put it differently, if a newspaper regularly trumpeted the need to shoot someone, it would be shut down under common law unless it stopped doing that." And if you did it over the phone the phone company would not be, because common carrier. Section 230 is the only way you have anything but giant megacorps throwing third-worlders into the content moderation trauma woodchipper. Terrible, as they say, take, OP. Stick with what you know, cause this ain't it, chief.