
Families of Canada’s deadliest mass shooting victims have sued OpenAI and Sam Altman, and they have a strong case

Family members of victims from one of the deadliest mass shootings in Canadian history have launched a major legal battle against OpenAI and its CEO, Sam Altman, in a U.S. federal court. The lawsuits, filed in San Francisco, center on the allegation that OpenAI identified the shooter as a credible threat eight months before the tragic event occurred in Tumbler Ridge, British Columbia, yet failed to alert law enforcement, CP24 reported.


The shooting, which occurred on February 10, resulted in the deaths of nine people, including several children. Among the victims were an educational assistant and five students between the ages of 12 and 13. The shooter, 18-year-old Jesse Van Rootselaar, also killed her mother and stepbrother before taking her own life. A 12-year-old survivor who was shot three times remains in intensive care, and her family is among those seeking accountability through these new filings.

The core of the legal argument rests on the claim that OpenAI’s internal safety team identified the shooter as a credible and imminent threat as early as June 2025. The lawsuit alleges that the company’s automated systems flagged conversations where the shooter described various gun violence scenarios. According to the complaint, which cites internal discussions previously reported in the media, members of the safety team recommended that the company contact the police.

However, the lawsuit asserts that Sam Altman and other members of the OpenAI leadership team overruled this recommendation. The plaintiffs argue that the company chose not to involve authorities because doing so would have exposed the extent of violence-related conversations occurring on ChatGPT. They suggest that such a disclosure could have jeopardized the company’s path toward a massive, nearly $1 trillion initial public offering.

While the shooter’s original account was deactivated by the company, the lawsuits claim she was able to simply create a new account and continue using the platform to plan her attack. This raises serious questions about the effectiveness of current moderation and account-banning protocols.

Jay Edelson, the attorney representing the plaintiffs, stated that he plans to file an additional two dozen lawsuits in the coming weeks on behalf of other individuals impacted by the tragedy. One of the victims' families had initially filed a lawsuit in a Canadian court but dismissed that case to pursue the claims in California instead.

OpenAI has publicly addressed the incident, with a spokesperson describing the shooting as “a tragedy.” The company maintains that it has a zero-tolerance policy regarding the use of its tools to assist in violent acts. “As we shared with Canadian officials, we have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat policy violators,” the spokesperson said in a statement.

Following reports about the internal handling of the shooter’s account, Sam Altman issued an open letter in a local Tumbler Ridge newspaper last week, stating that he was “deeply sorry” the account was not flagged to law enforcement. In a blog post published on Tuesday, the company elaborated on its safety measures.

OpenAI stated that it trains its models to refuse any requests that could “meaningfully enable violence.” The company also noted that it notifies law enforcement when conversations suggest “an imminent and credible risk of harm to others,” adding that mental health experts are involved in assessing borderline cases.

These lawsuits are part of a broader, growing trend of litigation against AI developers. Plaintiffs are increasingly accusing these companies of failing to prevent chatbot interactions that contribute to self-harm, mental illness, and violent acts. While OpenAI has previously denied claims in other cases, arguing that specific perpetrators had long histories of mental illness, this is widely viewed as the first instance in the U.S. where a lawsuit alleges that ChatGPT directly facilitated a mass shooting.

The plaintiffs are seeking an unspecified amount of damages and are asking the court to force OpenAI to overhaul its safety practices. Specifically, they are pushing for mandatory law enforcement referral protocols to ensure that future threats are not handled solely by internal teams. As these cases move forward, the legal system will have to grapple with the complex question of whether a tech company can be held liable for the actions of users who exploit its AI models to facilitate violence.

Meanwhile, the legal pressure continues to mount, as Florida Attorney General James Uthmeier recently announced a criminal investigation into the role of ChatGPT in a separate 2025 shooting at Florida State University. The outcome of these cases could fundamentally change how AI companies manage safety, transparency, and their relationship with law enforcement moving forward.


Author
Manodeep Mukherjee
Manodeep writes about US and global politics with five years of experience under his belt. When he's not keeping up with the latest happenings on Capitol Hill, you can find him grinding ranked matches in one of Valve's MOBAs.