OpenAI vows safety policy changes after Canada school shooting


Nadine Yousif, Senior Canada reporter

AFP via Getty Images: A woman stands beside a memorial for the victims of the Tumbler Ridge shooting, set up around a spruce tree surrounded with flowers, teddy bears and notes. The woman has her head in her hands.

Canadian officials have criticised OpenAI for failing to report the suspect's ChatGPT account to police, and say they believe the shooting could have been prevented.

OpenAI says it will strengthen its safety measures after the company failed to alert police about the Tumbler Ridge shooting suspect's ChatGPT account despite it being flagged internally months before the attack.

In an open letter to Canadian officials, the company said the suspect was able to create a second account after the first was banned, slipping past its internal detection systems.

It also said it has since changed how it reports accounts to police, and that the suspect's activity would be referred to law enforcement if it were flagged today.

An account linked to the suspect, 18‑year‑old Jesse Van Rootselaar, was banned by OpenAI in June 2025 — seven months before the shooting.

Eight people were killed in the 10 February attack, which took place at a residence and the local secondary school in Tumbler Ridge, a small town in British Columbia, Canada.

The victims included the suspect's mother and 11‑year‑old stepbrother, as well as five young school children and an educator. Van Rootselaar died of a self-inflicted gunshot wound, police said.

The shooting was one of the deadliest in Canadian history.

Canadian officials met OpenAI senior staff earlier this week in Ottawa, after the company revealed it had shut down a ChatGPT account used by the suspect in June 2025 for violating usage terms.

That account was not reported to police, however, because it did not at the time meet the company's threshold for "credible and imminent planning" of serious violence, OpenAI said.

In its letter to Canadian officials on Thursday, penned by OpenAI's vice-president of global policy and shared with media outlets, the company said it had implemented a series of changes in recent months, including enlisting the help of "mental health and behavioural experts" to assess cases and making the criteria for referral to police "more flexible".

Because of the changes, OpenAI said it would have reported the suspect's ChatGPT account under the new guidelines.

The letter does not specify when those new protocols took effect.

The company also revealed that the suspect was able to create a second account, despite being flagged by OpenAI systems in the past. That second account was shared with police after the shooting, it said.

"We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest risk offenders," the company wrote.

OpenAI said it will also establish a direct point of contact with Canadian law enforcement so it can quickly flag any possible future cases with "potential for real world violence".

That direct line of communication is one of the requests made by Canadian officials following their meeting with OpenAI staff on Tuesday.

Canada's AI minister Evan Solomon has described what occurred as a "failure".

He told reporters that he was left "disappointed" after the meeting, saying that he did not hear "any substantial new safety protocols" from OpenAI.

Solomon also opened the door to future legislation on the matter if OpenAI fails to implement changes quickly. "All options for us are on the table, because at the end of the day, Canadians want to feel safe," Solomon said after Tuesday's meeting.

British Columbia Premier David Eby has said he believes the shooting would have been prevented if the company had alerted police to Van Rootselaar's account months ago.

"They tragically missed the mark in [not] bringing this information forward. The consequences of that will be borne by the families of Tumbler Ridge for the rest of their lives," Eby told reporters on Thursday.

Eby added that OpenAI boss Sam Altman has agreed to meet to discuss the company's safety policies.

"I think it's important that Mr Altman hear about how his team's decision not to bring this information forward has resulted in devastation," he said.
