
Again, I have to apologize for taking one of my ‘breaks.’ I am just now catching up on news from last month.
So, by now, most of you know what happened in Tumbler Ridge, British Columbia, back in February. For those just joining us, here is the TL;DR version.
18-year-old Jesse Van Rootselaar killed her mother, Jennifer Strang, 39, and her 11-year-old stepbrother, Emmett, at their home before driving roughly a mile to Tumbler Ridge Secondary School. There, Van Rootselaar murdered education assistant Shannda Aviugana-Durand, 39, and five students: Abel Mwansa, 12; Ezekiel Schofield, 13; Kylie Smith, 12; Zoey Benoit, 12; and Ticaria Lampert, 12. Van Rootselaar then took her own life inside the school. Twenty-seven others were wounded. Two were airlifted out in critical condition, including 12-year-old Maya Gebala, who was shot three times and sustained a traumatic brain injury.
I have already written at length about this case. If you want the full breakdown, the previous posts are here. However, the short version of the “why” is this. Van Rootselaar was a deeply isolated person with years of documented mental health crises, a home saturated with firearms, a parent who resisted early behavioral intervention, and a demonstrated obsession with violent online content. She built a mall shooting simulator on Roblox and frequented a gore site known to attract True Crime Community adherents. She was, by every available indicator, another young person radicalized by a digital ecosystem that treats mass killers as icons.
The predictable culture war response arrived before the funerals. Since Van Rootselaar was a trans woman, certain politicians and influencers decided that was the story. It was not. Trans people are statistically far more likely to be victims of violence than perpetrators of it. The overwhelming majority of school shooters across the past three decades have been white males, many of them steeped in radical right-wing ideologies. That is not an opinion; it is a documented pattern. What happened at Tumbler Ridge fits the long history of the Columbine copycat model far more cleanly than any identity-based narrative.
Then ChatGPT entered the story.
In June 2025, roughly eight months before the shooting, OpenAI banned Van Rootselaar’s ChatGPT account after automated systems flagged it for what the company described as misuse involving violent activities. Reportedly, she had been feeding the chatbot scenarios involving gun violence over multiple days. That flag triggered human review, where at least a dozen OpenAI employees became aware of the account’s content. After that review, the company determined the activity did not meet their internal threshold of posing an “imminent and credible risk of serious physical harm,” and chose not to contact the RCMP.
It later emerged that Van Rootselaar had opened a second ChatGPT account after the first was suspended. OpenAI has a system designed to detect exactly this kind of circumvention, but it obviously failed. The company only discovered the second account after the shooting, once her name became public and they knew who to search for.
The family of Maya Gebala has now filed a lawsuit against OpenAI. Maya’s mother, Cia Edmonds, is seeking punitive damages on behalf of herself, Maya, and her other daughter Dahlia, who was present during the attack and has experienced PTSD and depression in its aftermath. The lawsuit alleges, among other things, that OpenAI rushed ChatGPT to market without adequate safety studies.
ChatGPT was the first publicly available AI chatbot most of us ever encountered. It arrived fast, it scaled fast, and the safety frameworks have been playing catch-up ever since. Whether that constitutes legal liability will be for Canadian courts to decide. But the underlying question of whether the product was deployed before the guardrails were ready is not an unreasonable one to ask.
So is OpenAI responsible for what happened at Tumbler Ridge? Honestly, it is hard to say.
We do not yet know what Van Rootselaar actually typed into ChatGPT, nor do we know what the chatbot said back. Police hold the transcripts, and they will eventually become public, but no timeline has been set. Until we can read that conversation, assigning specific blame to OpenAI is difficult to do honestly.
I hate to harp on this, but it reminds me again of Kimveer Gill. If you’ll recall, Gill was the gunman in the 2006 Dawson College shooting, where he shot and killed student Anastasia De Sousa. Gill also visited this blog months before his attack. He left some deeply distasteful comments, but nothing that ever crossed the line into criminality.
The ChatGPT situation could be different, or it could be the same. Without the transcript, we are speculating.
However, I am more confident in saying that OpenAI is almost certainly a link in the chain of failure that produced the Tumbler Ridge shooting. As I too frequently say, almost every school shooting is a perfect storm of failures: a parent who blocked early intervention, an inadequate mental health system, a digital ecosystem that normalized violence. Then add a tech company whose employees reviewed flagged content about gun violence and decided a phone call to police wasn’t warranted.
Is any single link responsible for the whole chain? Probably not. Is each link worth examining? Absolutely.
However, what I keep coming back to is a different question entirely. We now have documented cases of people forming deep emotional attachments to large language models. People are treating AI chatbots as confidants, companions, and therapists. One of the lawsuits OpenAI was already facing before Tumbler Ridge alleged that ChatGPT functioned as a suicide coach in the death of a university student. These are not fringe cases anymore.
When someone is pouring their darkest thoughts into a chatbot instead of a human being, that is not primarily an AI problem. That is a mental health crisis. The chatbot is the symptom; the isolation, the untreated distress, and the absence of real human connection are the disease.
OpenAI should have called the RCMP. Twelve people knowing about that account and doing nothing is an institutional failure that policy updates cannot fully excuse. But if we focus our outrage entirely on the AI angle, we will do what we always do after these events.
Miss the part that actually matters.