The legal gauntlet for “generative AI” chatbots continues as OpenAI is now under FTC investigation, an action that could help settle questions about the extent to which consumer protection laws apply to AI tools and signal the direction of future federal regulation.
Ever since OpenAI initiated the chatbot wars by rolling out ChatGPT to the public in late 2022, the FTC has taken the position that the industry has no exemption from existing consumer protection laws. This is the agency’s first concrete action on the issue: it has demanded that OpenAI turn over a large volume of records pertaining to how the company evaluates the potential risks of its AI models, along with complaints filed by people who were potentially libeled or disparaged by its output.
FTC investigation looking into false and harmful statements, prior security incidents
The Washington Post published a copy of the FTC investigation letter, which spans 20 pages and makes expansive requests for business records. Among the items of interest is a complete accounting of all claims to date that ChatGPT has made “false, misleading, disparaging or harmful” statements about people. OpenAI is already headed to court on defamation claims after a Georgia radio DJ by the name of Mark Walters found that asking ChatGPT for a summary of an unrelated lawsuit produced a false claim that he had been charged with embezzling from an organization he was never involved with. The entire story appears to have been fabricated, and may have been a “hallucination” by the chatbot.
Hallucinations and potential libel are not the only focal points of the FTC investigation. The requested documents also cover a security failure ChatGPT suffered in March, when private data from certain subscribers was displayed to others seemingly at random. Some users saw other users’ chat title history, and some logged-in users received emails containing detailed billing information belonging to other people. OpenAI blamed this on the malfunction of a process that queues user requests.
The FTC investigation is also seeking information on consumer perceptions of how accurate the chatbot’s output is, including any internal surveys or tests. It also specifically wants to know what prompts the AI to generate a disparaging statement about someone, or what could lead it to generate a false statement or an entirely fabricated story. The agency additionally requested information about any internal decisions by OpenAI to restrict or delay the use of a large language model due to safety concerns.
Another area in which OpenAI might run into consumer protection trouble is the sources it scrapes for training data, of which the FTC investigation has also requested extensive documentation. Several lawsuits are now pending against the company from authors of books and other copyrighted material who say the AI produced output it could only have arrived at by training on their works, potentially reproducing content that is under intellectual property protection.
FTC head Lina Khan has said that these issues fall under the purview of FTC enforcement due to the terms of the FTC Act, which enables the agency to get involved when “substantial injury” from misuse of personal information can be demonstrated. Khan indicated that OpenAI could be looking at fraud or deception charges under consumer protection laws, depending on how the investigation turns out.
Violations of consumer protection laws can result in substantial fines, and can also leave a business under a consent decree for years (sometimes even decades, as Twitter can attest). This could put limits on how OpenAI can collect and use personal data, and subject it to stronger ongoing scrutiny for the lifetime of the decree.
Consumer protection campaign could be stymied by challenges to FTC authority
There is some political pushback against the FTC’s consumer protection campaign, largely from Republican members of Congress. During a recent hearing, Khan was challenged on the FTC’s authority to investigate what are essentially defamation or libel claims; this sort of speech has traditionally fallen under state law, and in all but 13 states is an entirely civil matter. It remains unclear whether the FTC Act is truly broad enough to cover all the areas in which the FTC investigation is seeking information.
The FTC investigation has also once again put the spotlight on AI regulation, which seems to be moving along faster than a federal data protection law. Senate Majority Leader Chuck Schumer recently said that AI legislation is likely coming before the year is out. Schumer’s preferred form is the SAFE Innovation framework, but it currently has competition from two other bipartisan proposals: the National AI Commission Act and the Global Technology Leadership Act.
OpenAI head Sam Altman has said that he welcomes AI regulation, but his prior statements on its risks have focused more on existential Terminator-type threats than consumer protection issues. His Twitter response to the FTC investigation indicated that the company would work with the FTC, but expressed “disappointment” that the regulatory action is beginning with “a leak.”