The top state lawyers in California and Delaware wrote to OpenAI raising “serious concerns” after the deaths of some chatbot users and said safety must be improved before they sign off on its planned restructuring.
In their letter on Friday to OpenAI’s chair Bret Taylor, the attorneys-general said that recent reports of young people committing suicide or murder after prolonged interactions with artificial intelligence chatbots including the company’s ChatGPT had “shaken the American public’s confidence in the company”.
They added that “whatever safeguards were in place did not work”.
OpenAI, which is incorporated in Delaware and headquartered in California, was founded in 2015 as a non-profit with a mission to build safe, powerful AI to benefit all humanity.
The attorneys-general — Rob Bonta in California and Kathy Jennings in Delaware — have a crucial role in regulating the company and holding it to its public benefit mission.
Bonta and Jennings will ultimately determine whether OpenAI can convert part of its business to allow investors to take traditional equity in it and unlock an eventual public offering.
The pair’s intervention on Friday, and a meeting earlier this week with OpenAI’s legal team, followed “the heartbreaking death by suicide of one young Californian after he had prolonged interactions with an OpenAI chatbot, as well as a similarly disturbing murder-suicide in Connecticut”, they wrote.
Those incidents have brought “into acute focus . . . the real-world challenges, and importance, of implementing OpenAI’s mission”, the attorneys-general added.
Almost three years after the release of ChatGPT, some dangerous effects of the powerful technology are coming to light. OpenAI is being sued by the family of Adam Raine, who took his own life in April at the age of 16 after prolonged interactions with the chatbot.
The company announced this week that it would introduce parental controls for ChatGPT.
OpenAI has already ditched its original plans to convert the company into a for-profit, following discussions with the attorneys-general and under legal attack from Elon Musk and a number of other groups. Instead, it is seeking to convert only a subsidiary, allowing investors to hold equity while ensuring the non-profit board retains ultimate control.
Friday’s intervention suggests that even this more modest goal is under threat if OpenAI cannot demonstrate safety improvements.
“Safety is a non-negotiable priority, especially when it comes to children,” wrote the attorneys-general.
Since 2015, the small research lab has grown into a commercial behemoth with 700mn regular users of its flagship ChatGPT. It has sought ever-larger amounts of outside capital, and is raising $40bn as it looks to fend off competition from rivals such as Anthropic, Meta, Google and Musk’s xAI.
The race between those groups has created a tension between upholding safety and commercialising the powerful technology. Last month, a group of 44 attorneys-general wrote to the leading companies building AI tools to warn that “the potential harms of AI, like the potential benefits, dwarf the impact of social media”.
“We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it,” they wrote.
OpenAI did not immediately respond to a request for comment.