Discover more from BIG by Matt Stoller
Take the Profit Out of Political Violence
Censorship isn't the right approach. Just stop letting Facebook earn money fostering violence.
Welcome to BIG, a newsletter about the politics of monopoly and finance. If you’d like to sign up, you can do so here. Or just read on…
Today I’ll be writing about how to think about big tech in the context of political violence. I’ll also cover the first antitrust-related move of the Joe Biden era, an important personnel change at the Federal Trade Commission.
Who Draws What Lines?
The Capitol Hill riot shook American politics, with the prospect of a violent challenge to an electoral outcome creating a sense of frightening possibility, as well as offering a reminder of just how fragile democracies really are.
In the aftermath of the violence, we are seeing large technology firms engaged in overt and explicit censorship of an organized political faction. Such a reality might have seemed shocking, even a conspiratorial fantasy, just a few weeks ago. And yet, Twitter removed tens of thousands of conservatives from its service, AWS refused to allow Parler on its hosting facilities, and Trump has not only lost his voice online, but he isn’t even able to sell red hats through Shopify.
These bans aren’t crazy. While it’s hard to find a more blatant exercise of unregulated and arbitrary monopoly power for political purposes than what big tech just did, it’s also hard to find a political purpose - the violent overturning of an election - more worthy of suppression.
And yet, now that Trump and QAnon have been muzzled, proponents of big tech political control are arguing for a broader set of bans. For instance, former Facebook executive Alex Stamos recently explained on CNN why it is critical to mobilize private infrastructure to deny the ability of conservatives to communicate. Stamos is not some isolated character, but represents how Silicon Valley elites think about the problem.
On this segment, Stamos discussed how traditional press freedoms are “being abused” and called for communications firms with market power to collaborate to suppress domestic right-wing extremism, including both social media icons popular on YouTube and conservative TV channels, the way they did content from ISIS. "We are going to have to figure out the OAN and Newsmax problem,” he said. “These companies have freedom of speech, but I'm not sure we need Verizon, AT&T, Comcast and such bringing them into tens of millions of homes.”
Stamos’s view is deeply disconcerting, and I think he even feels uncomfortable with it, which is why he has to use imprecise words like “figure out the OAN and Newsmax problem” instead of saying what he really means, which is that it’s time for a small group of media and tech barons to work with political leaders to censor things they don’t like. And make no mistake, that is what Stamos is seeking, and that’s what’s beginning to happen. YouTube is already censoring press coverage of rallies around gun rights because the corporation’s algorithm found that the coverage "violates our firearms policy." There’s going to be a lot more of that, and big tech will do it overtly and unapologetically.
I’m sympathetic to the concerns that Stamos (and many Democrats) have. A society simply cannot exist without drawing firm lines about what kinds of behavior are and aren’t acceptable, and violent riots to overturn elections are a red line. And yet who draws these lines? For Stamos, the answer is a narrow and powerful clique of insiders. That is also unacceptable, because empowering private unaccountable tech firms to draw those lines is also an overturning of self-government.
The argument in which Stamos is participating is not new. The Capitol Hill riot wasn’t the first act of violence induced by social media. The UN essentially accused Facebook of fostering a genocide in Myanmar, and ethnic divisiveness all over the world, as well as strife at home, is now a common result of social media. I’m reminded of this 2018 article on how Facebook fuels gang violence in Chicago by making gang taunts go viral. Similar to what happened in the aftermath of the Capitol Hill riot, tech firms got aggressive in censoring gangs. This created a problem, however, as one professor specializing in gang work noted. “At some point,” he asked, “are you just taking down posts from black and brown kids?” The analogy of platform monopolies trying to regulate gangs of teenagers vs two parties in a polarized political system isn’t perfect, but it’s not ridiculous, either. Humans are humans.
Rather than seeing the polarization and monopoly problems in isolation, it helps to recognize that the cause of both is a policy framework that has shaped the internet into a series of dominant platforms that radicalize their users. Sure, conservative infrastructure mattered; rioters used Parler as an organizing forum. But more important to their movement were mainstream platforms, like Twitter, Facebook, and YouTube. These services, unlike Parler, made money by selling ads as the riot occurred. Facebook even placed assault weapon ads next to the groups organizing an overthrow of democracy.
The problem gets more interesting when one considers the motivation behind the riots. When the rioters attacked the Capitol, they did so not to destroy democracy but, in their minds, to save it. Steeped in a years-old ecosystem of disinformation and rage, most of them sincerely believed they were stopping an election from being stolen, and in what one analyst called an Extremely Online Riot, tweeted and broadcast their attack to their fans and followers. And here’s where the business model problem comes in. Instead of marginalizing the violent cranks who exist in every society, social media over the past decade has turned them into stars. For instance, Alex Jones’s monstrous claims about mass shootings reflect a deranged individual, but YouTube recommending his videos to users 15 billion times reflects a policy problem.
One consequence of the main business of Google, Facebook, and Twitter is that these firms act as radicalization engines. They do this by selling advertisements targeted using data gathered through intrusive surveillance. Because they profit from advertising, they want to keep users on their sites for as long as possible to sell more ads. As a result, the platforms tend to show sensationalistic or otherwise addictive content, to keep people using and the ad money flowing. They also gamify the experience, deploying Like buttons, retweets, and video view counters to keep people hooked.
These user experience decisions and algorithms encourage inflammatory or conspiratorial content and harm our mental health and our ability to participate in rational politics. “In a targeted advertising model,” privacy experts Jeff Gary and Ashkan Soltani have written, “misinformation and conspiracy theories are often the product, not an accident.” Facebook, with its addictive user interface designed to maximize engagement, has helped foster deadly mob attacks worldwide. And Google has provided ad services to 86% of sites carrying pandemic conspiracies. This environment affects all of us who use these platforms. I suspect, for instance, that much of the Russia-gate paranoia among liberals was fueled by the performative nature of the platforms on which many journalists discussed the topic.
We have a set of laws, everything from defamation to harassment to product liability, that would ordinarily have made publishers or products that foster harmful social pollution illegal. But those laws are inoperative online. In 1996, lawmakers sought to encourage ‘Good Samaritan’ behavior on the part of technologists, giving them legal immunity so they could curate their platforms to eliminate anti-social behavior. They authored Section 230 of the Communications Decency Act to immunize social media and tech firms from responsibility for what their users do with their service. The goal behind this law wasn’t bad. The real effect, however, has been to encourage all sorts of illegal activity - harassment, defamation, fraud, incitement - as long as it is done online. These platforms, though they make money from this behavior, are immunized from any costs of it, under the false premise that they are merely conveying speech.
You can still sue individuals and firms for defamation. Indeed, Dominion, which makes voting machines, just used defamation law to force a conservative magazine to post a humiliating apology admitting it falsified information to damage the firm (with no outcries about censorship). But you can’t get at the entities that are knowingly re-publishing this information - and selling ads so they can profit from it - far more effectively than the original source. The problem is, as Jason Kint noted, “velocity and targeted reach without liability.”
It goes beyond defamation, into problems like fraud and harassment. Scammers often create fake Facebook accounts impersonating military personnel and use those accounts to lure lonely women into sending them money. When the soldiers return home, these women are waiting for a romance they think is real. Obviously, the scammers are committing fraud, but Facebook is also profiting, selling ads and collecting data as the scam happens while expending little to no effort to stop it. And why should it? Facebook bears no liability for this behavior, because Section 230 immunizes it from legal claims like negligence. Similarly, Grindr knowingly enables stalkers to use its platform to harass and, in some cases, induce violence against victims, but bears no liability for doing so, because of Section 230. These platforms profit from the harm they enable, yet cannot be held liable for any of it.
Repealing Section 230, or reforming it so that platforms that profit via advertising are not covered, would reduce the incentive for social media to enable illegal behavior. If we did so, a whole range of legal claims, from incitement to intentional infliction of emotional distress to harassment to defamation to fraud to negligence, would hit the court system, and platforms would have to alter their products to make them less harmful. There are other paths to taking on targeted advertising, like barring it through privacy legislation, a law creating a real Do Not Track list, or the Federal Trade Commission’s unfair methods of competition authority. But the point is, we need to stop letting platforms that enable illegal behavior offload the costs of what they inflict.
Once the problem of radicalization goes away, then it becomes possible to solve the monopoly problem. Two months ago, Democrats on the House Antitrust Subcommittee completed a 16-month investigation, and found that Apple, Amazon, Facebook, and Google essentially control the internet. The subcommittee recommended Congress act to break that control with a new legal framework neutralizing their power. Now, however, they are afraid that doing so would force AWS to carry violent content. That is why ending the shield for business models that promote illegal activity is so important. Doing so would in turn allow policymakers to take away the power of tech firms to choose who gets to be a part of our politics.
Both the radicalization and the monopolization are threats to democracy, and both are a result of a business model policymakers have allowed to continue for too long. Or to put it differently, we’ve set up a policy framework in which organizing human beings to tear each other apart is extremely profitable. It’s easy to fix, if we can just calm down and change a few laws.
Our Best Antitrust Enforcer Is Headed to Take on Wall Street: Last week, I wrote about the need for Joe Biden to appoint Federal Trade Commissioner Rohit Chopra to head the FTC, which is the commission tasked with regulating social media and unwinding the dysfunction at the heart of our economy. Yesterday, the Biden team let it be known that Chopra is instead going to be the Director of the Consumer Financial Protection Bureau. The CFPB is a very powerful agency tasked with dealing with Wall Street abuses, and Chopra will be enormously creative and effective there. It’s a big promotion.
However, as someone who focuses on antitrust, I’m in a bit of a panic. There’s now a big hole at the FTC, which is a five-person commission. Chopra had been leading the commission to get far more aggressive, breaking staid norms and discovering dormant legal authority. The current remaining Democratic commissioner, former Chuck Schumer staffer Rebecca Kelly Slaughter, is something of a mystery in terms of what she’s trying to accomplish. And now there are two empty slots: Chopra’s, and the Chairman slot that turns over with a new administration.
There’s also a third important slot for antitrust, but this one’s at a different agency. Biden’s Antitrust Division at the Department of Justice needs a leader. The names Biden has floated for the DOJ are not good, and include as a leading candidate Renata Hesse, a lawyer who has worked for both Google and Amazon. That’s bad, and suggests that antitrust policy is just not very important to the administration.
Not all is lost. Joe Biden can as easily pick good anti-monopolists as not. My organization and 40 other groups wrote a letter to Biden asking him not to pick people associated with the antitrust defense bar in those slots. Before the election, I wrote up how I think Biden will govern, which is as a “mild populist” who sees policy as a secondary matter. That is what I am seeing, with a mix of anti-monopolists and status quo traditionalists sprinkled all over government. With Chopra going to the CFPB, there are three key antitrust slots that are open at a pivotal moment. Here’s hoping Biden chooses well.
Thanks for reading. Send me tips, stories I’ve missed, or comment by clicking on the title of this newsletter. And if you liked this issue of BIG, you can sign up here for more issues of BIG, a newsletter on how to restore fair commerce, innovation and democracy. If you really liked it, read my book, Goliath: The 100-Year War Between Monopoly Power and Democracy.