Canada Demands Answers from OpenAI After Shooter's Account Suspension

OpenAI failed to alert Canadian police after suspending the account of the Tumbler Ridge school shooter, raising concerns from the country's AI minister.
Canada's artificial intelligence minister has summoned representatives from OpenAI after the company declined to alert police when it suspended the account of a user who later carried out one of the country's worst-ever school shootings.
Evan Solomon says he is "deeply disturbed" by reports that the company, which operates the popular ChatGPT chatbot, suspended Jesse Van Rootselaar's account in June 2025 over the "furtherance of violent activities" but did not contact Canadian law enforcement.

The Tumbler Ridge shooting, which occurred in February 2026, left several students and faculty members dead, making it one of the deadliest attacks on a Canadian school in recent history. Solomon is now demanding answers from OpenAI about why it did not share information with authorities that could potentially have prevented the tragedy.
"We are deeply concerned by reports that OpenAI was aware of this individual's troubling behavior and failed to take appropriate action," said Solomon. "As the minister responsible for artificial intelligence oversight in Canada, I expect full transparency and cooperation from technology companies operating in our country."
According to the Wall Street Journal, several OpenAI employees had raised internal alarms about the user's account and the potential threat it posed months before the shooting occurred. However, the company ultimately decided not to alert Canadian police, citing privacy concerns and a lack of a clear legal obligation to do so.
This decision has now drawn intense scrutiny, with critics arguing that OpenAI had a moral and ethical responsibility to share information that could have prevented a deadly attack. The company has not yet publicly addressed the minister's demands for answers.
The Tumbler Ridge shooting has reignited broader debates around the role and accountability of technology companies when it comes to monitoring user behavior and potential threats. As artificial intelligence continues to play an increasingly prominent role in society, questions persist about how these powerful tools should be regulated and what obligations their creators have to protect the public.
Solomon and other Canadian officials are now vowing to closely examine OpenAI's actions and decision-making processes in this case, underscoring the growing scrutiny facing the AI industry as it grapples with complex ethical dilemmas.
Source: The Guardian