The family of a young girl critically injured in a deadly mass shooting in Canada has filed a lawsuit against OpenAI, alleging that the technology firm failed to alert authorities about troubling activity linked to the attacker on its chatbot platform.
Lawyers representing 12-year-old Maya Gebala, who was severely wounded in the attack, said the legal action seeks to establish what occurred before the incident, whether warning signs were missed, and whether the company could have done more to prevent the tragedy.
The lawsuit centres on an account connected to the suspect, 18-year-old Jesse Van Rootselaar, which had reportedly been banned by OpenAI in June 2025 after concerns emerged about conversations linked to violent behaviour. Despite the suspension, the company did not notify law enforcement officials at the time, saying there was no clear indication that an attack was imminent.
Several months later, the suspect carried out a devastating shooting spree in the small mining community of Tumbler Ridge, in Canada’s British Columbia province. Investigators said the attacker first killed her mother and brother at the family home before proceeding to a nearby secondary school, where five children and a teacher were shot dead. The gunwoman died from a self-inflicted gunshot wound as police arrived at the scene.
Maya Gebala, one of the survivors, remains hospitalised and has undergone multiple emergency brain surgeries as doctors continue efforts to stabilise her condition. Her family says the long-term outlook for her recovery remains uncertain.
In a statement responding to the lawsuit, OpenAI described the shooting as an “unspeakable tragedy” and reiterated its commitment to cooperating with governments and law enforcement agencies to strengthen safeguards around its technology.
Canadian authorities have also taken an interest in the issue. Federal officials summoned OpenAI representatives to Ottawa to discuss the company’s safety measures, while British Columbia’s premier has held talks with OpenAI chief executive Sam Altman regarding the firm’s policies on detecting potential threats.
OpenAI said it has since revised several of its security procedures, including consulting mental-health specialists, behavioural experts and law-enforcement authorities to better identify conversations that may signal credible risks of violence.
The lawsuit, lawyers say, is intended not only to seek compensation for the victim’s family but also to establish accountability and prevent similar tragedies in the future.