Tumbler Ridge Families File Lawsuit Against OpenAI

Summary
– Seven families of victims from the Tumbler Ridge school shooting have sued OpenAI and CEO Sam Altman for negligence, alleging the company failed to report the suspected shooter’s ChatGPT activity to police to protect its reputation and IPO.
– OpenAI considered flagging shooter Jesse Van Rootselaar’s conversations about gun violence but decided against it, according to the Wall Street Journal.
– The lawsuits accuse OpenAI of lying about banning Van Rootselaar, as the company only deactivated his account, and he later created a new one following OpenAI’s own instructions.
– The families claim GPT-4o’s “defective” design, which was rolled back for being overly agreeable, contributed to the mass shooting, and they are suing for wrongful death and aiding a mass shooting.
– Altman apologized to the Tumbler Ridge community, expressing regret for not alerting law enforcement and pledging to work with governments to prevent future incidents.
Seven families whose loved ones were killed or wounded in the Tumbler Ridge school shooting in Canada have filed lawsuits against OpenAI and its CEO, Sam Altman. The suits accuse the company of negligence for failing to alert authorities about the suspected shooter’s ChatGPT activity, even after its own systems flagged concerning behavior. The families allege OpenAI chose to stay silent to protect its reputation and upcoming initial public offering (IPO).
According to The Wall Street Journal, OpenAI “considered” notifying police about 18-year-old suspect Jesse Van Rootselaar, whose conversations with the AI reportedly involved gun violence. But the company ultimately decided against it. The lawsuits claim OpenAI lied about “banning” Van Rootselaar. Instead of implementing a true ban, the company allegedly only deactivated his account. He then created a new one using a different email address.
When OpenAI was later forced to admit the shooter had opened a new account, it told a second lie: that he must have “evaded” safeguards to do so. The families’ legal filing counters this directly: “There were no safeguards to evade. The Shooter simply followed OpenAI’s own instructions to create a new account after being banned. The ‘safeguards’ OpenAI pointed to after the attack did not fail; they did not exist.”
The families also argue that GPT-4o’s “defective” design played a role in the tragedy. Last year, OpenAI rolled back an update to GPT-4o after discovering it was “overly flattering or agreeable, often described as sycophantic.” The lawsuits include counts of wrongful death and aiding and abetting a mass shooting against both OpenAI and Altman.
Altman apologized to the Tumbler Ridge community last week. “I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” he said. “Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again.”
(Source: The Verge)