
AI’s Growing Legal Troubles

Without Section 230 protection, copyright issues and defamation suits are cropping up.

By Andy Kessler
July 30, 2023 4:15 pm ET

Photo: Cfoto/Zuma Press


Wow, that was fast. It took only eight months to find ChatGPT and generative artificial intelligence’s Achilles’ heel. No, it isn’t naive management, though OpenAI CEO Sam Altman did declare that AI poses a “risk of extinction” for humanity and practically begged a Senate Judiciary subcommittee to regulate AI, and then when the European Union actually did pass regulations, he threatened to pull out of the region.

And no, it’s not the White House, through which top AI execs did a walk of shame, allowing President Biden to extract from them a pledge of voluntary safety guardrails for AI. A pledge! Is that stronger or weaker than a pinkie promise?

Nor was it the Federal Trade Commission’s recent fishing-expedition letter to OpenAI demanding minute details of everything the company does. Well, Mr. Altman certainly was asking for that. Given that the FTC has been on the losing end of so many cases recently, maybe the company should let ChatGPT write the answers.

Instead, AI’s Achilles’ heel consists of good old-fashioned media issues. The first is copyright. Tools like ChatGPT, Google’s Bard and Meta’s Llama are large language models that train up to a trillion parameters by reading everything they can get their servers on, including Wikipedia, the cesspool of snark formerly known as Twitter, and racy Reddit of Roaring Kitty and GameStop fame. No wonder ChatGPT is so prone to hallucination. Social-media companies now want to be paid and have restricted the scanning of their innards.

Copyrighted snippets showing up in today’s search results are considered fair use. But can OpenAI or Google or Meta legally scan copyright material and create brand-new content from it, or is generative AI’s output considered “derivative,” requiring the original copyright owners’ permission? We are about to find out in a slew of copyright lawsuits. Comedian and sometime author Sarah Silverman this month joined a class-action suit against OpenAI and Meta, claiming they “copied and ingested” copyrighted materials and created “derivative” versions.

It turns out OpenAI scanned giant repositories of books, named Books1 and Books2—reminiscent of Dr. Seuss’s Thing 1 and Thing 2. OpenAI hasn’t said what’s in Books2, which may include more than 300,000 books. The suit Ms. Silverman joined says these “shadow libraries” are “flagrantly illegal.”

Novelists Paul Tremblay and Mona Awad have sued OpenAI, claiming ChatGPT does a great job summarizing their books and can do that only because it scanned them. I asked ChatGPT about specifics inside a few of my books, and it gave good answers. I may be caught in the same net. This may also be true for artists—Stability AI is being sued by Getty Images and others for scanning their libraries and using them as a basis for AI-generated images resulting from text prompts.

You may have heard that ChatGPT is increasingly good at writing code. Just over a week ago, Microsoft started charging $30 a month for its AI assistant, Copilot. Maybe that will help pay a potential $9 billion in damages claimed by programmer and, it turns out, lawyer Matthew Butterick, who filed a class-action suit against Microsoft and OpenAI on behalf of open-source programmers for use of copyrighted code.

Copyright is a headache, but defamation liability can be a lobotomy. Recall that social-media companies and cloud services are protected under Section 230 of the 1996 Communications Decency Act. Those that host content aren’t liable for what is said on their services, but the publishers are. Without Section 230, there would be no Twitter or Facebook.

But tools like ChatGPT don’t host things, they create things. They’re publishers. No Section 230-like protections exist for them. Radio host Mark Walters is suing OpenAI for defamation, saying it accused him of embezzling from a nonprofit. He may have a case.

The new and increasingly valuable AI industry desperately needs legal protection—let’s call it Section 230.ai. But these companies foolishly launched without it. The lobbying money floodgates will soon open to try to get something through Congress. It might cost more than that White House pledge.


Corporate America is cautious. I’ve heard of companies not allowing ChatGPT on company-issued laptops. They are worried about their proprietary code and intellectual property being stolen by AI tools and probably similar liability issues.

Among Microsoft, Google and late-entry players Apple and Amazon, there is more than $8 trillion in market capitalization for the class-action wolves to chase. AI is a sitting duck. AI is the future and will drive the economy’s next leg of productivity and wealth creation, but the rush to market with tools neither ready for prime time nor strictly legal will slow rollout. What a shame. These companies should protect their Achilles’ heel by fixing AI’s legal vulnerabilities pronto.

Write to [email protected].

