Regulating AI: If You Make a Mess, You Clean It Up
A very simple rule is to make sure that firms who deploy AI are liable for the problems they cause. Section 230 shouldn't apply to AI models.
I’ve been doing a lot of research and interviews on large language models and their impact on our world, which is going to be significant. These “intelligent” chatbots and models can do all sorts of seemingly magical things, like have conversations, create images and video, invent new medicines, and instantly generate usable code. But they also hallucinate, lie, and manipulate users, and no one seems to know how to get them to stop doing so.
Right now, the strategy of Google and Microsoft seems to be to ignore the potential harms and put AI into every product and service they have, so they can bake it into our social infrastructure before anyone has time to assess the downsides. That’s a classic monopolist’s strategy, though they are of course talking about it as the ‘democratization of AI.’
So what’s the right way to address this policy problem? Well, I keep coming back to this discussion of Microsoft’s new AI-powered Bing, which, when asked about a consumer pet vacuum, simply made up product characteristics.
According to this pros and cons list, the “Bissell Pet Hair Eraser Handheld Vacuum” sounds pretty bad. Limited suction power, a short cord, and it’s noisy enough to scare pets? Geez, how is this thing even a best seller?
Oh wait, this is all completely made up information.
Bing AI was kind enough to give us its sources, so we can go to the HGTV article and check for ourselves.
The cited article says nothing about limited suction power or noise. In fact, the top Amazon review for this product talks about how quiet it is.
The article also says nothing about the “short cord length of 16 feet” because it doesn’t have a cord. It’s a portable handheld vacuum.
I hope Bing AI enjoys being sued for libel.
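That fact-check is mechanical enough to sketch in code: fetch the page the model cites and see whether the claimed facts actually appear there. Here is a minimal Python sketch of the idea; the claim strings are the ones Bing invented above, but the URL is a made-up stand-in (I’m not linking the real article), so treat this as an illustration rather than a working fact-checker:

```python
import requests

# Stand-in URL for illustration only -- not the article Bing actually cited.
CITED_URL = "https://example.com/hgtv-pet-vacuum-roundup"

# The claims Bing made up in the example above.
CLAIMS = [
    "limited suction power",
    "short cord length of 16 feet",
    "noisy",
]

def check_claims_against_source(url: str, claims: list[str]) -> dict[str, bool]:
    """Fetch the cited page and report which claimed phrases actually appear in it."""
    page_text = requests.get(url, timeout=10).text.lower()
    return {claim: claim.lower() in page_text for claim in claims}

if __name__ == "__main__":
    for claim, found in check_claims_against_source(CITED_URL, CLAIMS).items():
        print(f"{claim!r}: {'appears in source' if found else 'NOT FOUND in cited source'}")
```

Naive substring matching misses paraphrases, of course. The point is just that ‘check the model’s claims against its own cited sources’ is a well-defined test, and Bing’s answer fails it.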
So who is responsible for libel when an AI engages in bad behavior? The answer is, we don’t know. Microsoft will probably argue that Section 230 of the Communications Decency Act applies. Section 230 says that firms who run search engines or social networks aren’t liable for third-party content, because you don’t want to hold a website manager responsible for what users say with that manager’s tools. But what about an AI engine that generates the content itself? That’s not really the same thing.
So one simple response is to make the firms who run these models liable for the consequences. And doing that is fairly simple: just pass a law that says Section 230 does not apply to AI engines that create new or substantially transformed content. Make Microsoft, Google, or any other AI firm responsible for the outputs of their models. If you make a mess, you have to clean it up.
That’s not a comprehensive approach to AI; it’s just the simplest and best idea I’ve come up with so far.