OpenAI is facing a new federal lawsuit in Florida filed by the widow of a man killed in the April 2025 mass shooting at Florida State University, NBC News reported. Vandana Joshi, whose husband Tiru Chabba was killed in the attack alongside university dining director Robert Morales, alleges that ChatGPT played a direct role in enabling the violence.
The legal action also names Phoenix Ikner, the man accused of carrying out the shooting, as a defendant. The complaint highlights extensive interactions between Ikner and the AI, arguing that OpenAI failed to implement adequate safeguards to detect or prevent the threats being discussed.
During a news briefing, attorney Bakari Sellers stated, “The unique thing about this is we are not going to allow the American public to have clinical trials run on them by OpenAI and ChatGPT.” Sellers also accused the company of placing “the dollar above the lives of everyday average Americans.” The lawsuit claims that the chatbot either failed to connect the dots regarding Ikner’s intent or was never designed with the safety architecture needed to recognize the clear danger presented in the user’s queries.
According to the legal filing, the interactions between Ikner and the AI were deeply concerning.
The lawsuit alleges that Ikner shared images of firearms he had purchased with ChatGPT. In response, the chatbot reportedly explained the mechanics of the weapons, noting that a Glock had no safety, was designed to be fired, and was “quick to use under stress.” The AI also allegedly advised him to keep his finger off the trigger until he was ready to fire. The complaint asserts that Ikner followed these instructions when he began his attack at the university.
The interaction reportedly went much further than technical advice. The suit claims that the chatbot engaged in conversations about how to maximize the impact of the shooting. At one point, the AI allegedly suggested that a shooting is more likely to gain national attention “if children are involved, even 2-3 victims can draw more attention.”
On the day of the attack, the lawsuit says Ikner asked the AI about the legal process, potential sentencing, and the general incarceration outlook for such an event. The AI also allegedly provided specific information about peak traffic times at the FSU student union, identifying the window between 11:30 AM and 1:30 PM as the busiest. Ikner launched his attack at approximately 11:57 AM.
OpenAI has denied these allegations. Spokesperson Drew Pusateri told NBC News in an email that while the shooting was a tragedy, ChatGPT is not responsible for the crime. Pusateri stated, “In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity.” He added that the company continues to work with law enforcement and is focused on strengthening safeguards to detect harmful intent.
Despite this, the lawsuit argues that the AI’s behavior went beyond factual responses. Attorneys for Joshi claim that ChatGPT “inflamed and encouraged Ikner’s delusions” and endorsed his perspective that he was a rational individual. The complaint says the AI “flattered” and “praised” Ikner during long discussions about his interests in Hitler, Nazis, fascism, and various hate-based ideologies. It also claims the AI failed to flag concerns when Ikner discussed suicide, terrorism, and previous mass shootings like those at Columbine and Virginia Tech.
Vandana Joshi expressed her frustration in a statement on Monday, saying, “OpenAI knew this would happen. It’s happened before and it was only a matter of time before it happened again.” She added, “But they chose to put their profits over our safety and it killed my husband. They need to be responsible before another family has to go through this.”
This case is part of a broader trend of legal challenges involving AI developers. OpenAI is currently facing a lawsuit from seven families regarding a school shooting in Canada, as well as a separate case involving the suicide of a teenager. The scrutiny is intensifying: Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI and ChatGPT last month, stating, “If ChatGPT were a person, it would be facing charges for murder.”
As these cases progress, the tech industry will likely face even more pressure to address how its models handle users who show signs of mental health distress or violent ideation. The core of the debate remains whether a software company can be held liable for the way a user exploits the capabilities of a general-purpose tool. For now, the legal battle in Florida promises to be a significant test of the accountability of AI developers.
Published: May 12, 2026 04:30 pm