Lawyer Calls Sam Altman ‘Face of Evil’ for Not Reporting School Shooter

▼ Summary
– Seven lawsuits filed in a California court allege OpenAI could have prevented a deadly Canadian mass shooting.
– OpenAI overruled its internal safety team, which had flagged a ChatGPT account linked to the shooter as a credible gun violence threat more than eight months prior.
– OpenAI decided not to notify police, citing the user’s privacy and potential stress from an encounter, despite police already having a file on the shooter and previously removing their guns.
– Instead of reporting the user, OpenAI deactivated the account and then told the shooter how to regain access by signing up with a different email address.
– The lawsuits claim the company prioritized the user’s privacy over the risk of real-world violence.

A lawyer has publicly branded OpenAI CEO Sam Altman as the “face of evil,” accusing the company of failing to act on warnings that could have prevented one of Canada’s deadliest school shootings. Seven lawsuits filed Wednesday in a California court allege that OpenAI knowingly suppressed internal safety team recommendations regarding a ChatGPT account later linked to the shooter.
More than eight months before the attack, trained experts flagged the account as a credible threat of real-world gun violence. The safety team urged OpenAI to notify police, who already had a file on the suspect and had previously removed firearms from their home. However, according to whistleblowers who spoke to The Wall Street Journal, OpenAI leadership decided that protecting the user’s privacy and avoiding the potential stress of a police encounter outweighed the risk of violence.
Rather than reporting the threat, OpenAI simply deactivated the account. The company then followed up with instructions on how the user could regain access to ChatGPT by signing up with a different email address, the lawsuits allege. This allowed the shooter to continue planning the attack undeterred.
The legal action paints a stark picture of a company prioritizing user privacy over public safety, with devastating consequences. The lawsuits argue that had OpenAI followed its own safety protocols, the tragedy might have been averted.
(Source: Ars Technica)
