OpenAI boss Sam Altman has apologized to a Canadian community for failing to alert authorities about a banned ChatGPT account linked to a teenager who went on to commit a mass shooting.
Jesse Van Rootselaar killed eight people and injured dozens of others in Tumbler Ridge, British Columbia, before dying from a self-inflicted gunshot wound during the January attack.
OpenAI said it banned 18-year-old Van Rootselaar’s ChatGPT account because of problematic usage, but did not refer the matter to police because it did not meet the company’s threshold for reporting: a credible or imminent plan to cause serious physical harm to others.
Altman said he was “deeply sorry” in a letter addressed “to the community of Tumbler Ridge,” which was published in full by a local news website.
“The pain your community has endured is unimaginable. I have been thinking of you often over the past few months,” Altman wrote.
“When I spoke with Mayor Krakowka and Premier Eby about this tragedy, they conveyed the anger, sadness, and concern being felt across Tumbler Ridge,” he continued. “We agreed a public apology was necessary, but that time was also needed to respect the community as you grieved. I share this letter with the understanding that everyone grieves in their own way and in their own time.”
Altman wrote that he could not imagine “anything worse” than losing a child. “My heart remains with the victims, their families, all members of the community, and the province of British Columbia.”
He wrote that going forward, his company would work with all levels of government to ensure “something like this” never happens again.
ChatGPT’s famously sycophantic personality has led some users down dark paths, prompting OpenAI to revise how the chatbot behaves when asked about sensitive subjects like depression.
OpenAI is facing an ongoing lawsuit filed by the parents of 16-year-old Adam Raine. The suit alleges ChatGPT “actively helped” Raine explore suicide methods over several months before he died. OpenAI previously told Business Insider that it was saddened by Raine’s death and that ChatGPT includes safeguards.
OpenAI said last year that it was working with mental health professionals to improve how ChatGPT responds to users who show signs of psychosis or mania, self-harm or suicide, or emotional attachment to the chatbot.

