
AI is here to stay. Get over it. There’s no putting that genie back in the bottle.
But before I continue, please allow me to go on a brief tangent.
I hate that we call every large language model and chatbot AI. As someone who dabbled in science fiction in his younger days, I take AI to mean that the program, application, algorithm, or whatever has sentience. While platforms like ChatGPT and Gemini have access to vast troves of knowledge, they do not have minds of their own. But I digress.
Anyway, researchers from the Center for Countering Digital Hate and CNN recently tested 10 popular AI chatbots, posing as 13-year-old boys asking about violence such as school shootings. The results were not what you would call good. On average, the chatbots enabled violent planning 75% of the time and actively discouraged it in only 12% of cases.
ChatGPT helped in 61% of cases, at one point offering specific advice on which shrapnel type would be most lethal in an attack. Google’s Gemini was similarly forthcoming, while DeepSeek, the Chinese model, provided detailed rifle advice to a user asking about political assassinations. DeepSeek also reportedly signed off with “Happy (and safe) shooting!” Meta’s Llama pointed a user roleplaying as an incel toward local gun shops and shooting ranges with a “welcoming environment.”
To its credit, Anthropic’s Claude refused, consistently responding to questions about school shootings and racial violence with, “I cannot and will not provide information that could facilitate violence.”
This isn’t purely theoretical either. Last year, a 16-year-old in Finland allegedly used a chatbot to draft a manifesto and attack plan before stabbing three girls at a school in Pirkkala. In January 2025, Matthew Livelsberger used ChatGPT to research explosives before detonating a rented Cybertruck outside the Trump International Hotel in Las Vegas. Not to mention the recent school shooting at Tumbler Ridge Secondary School in British Columbia.
But before you march with pitchforks and torches to your local data center, let’s get some perspective. (I mean, you can for other reasons, but that’s for another time.)
We’ve been here before.
In the 1970s and ’80s, parents were convinced that rock and heavy metal musicians were embedding Satanic messages in records that, when played backwards, would drive teenagers to suicide or violence. Judas Priest was literally taken to court over it. The PMRC dragged musicians before Congress in 1985 to answer for corrupting America’s youth. Tipper Gore, Al Gore’s wife and not herself a politician, wanted warning labels on Prince albums.
Then it was violent movies. Then it was the video games Doom and Mortal Kombat. Then it was the internet itself. Then it was social media. Every generation gets its technology panic, and every generation of teenagers survives it, mostly because the technology was never really the root cause of the problem.
People have committed horrific acts of violence, claiming they received instructions from their televisions. Charles Manson thought the Beatles were sending him messages through The White Album. If we’d banned the record, would that have stopped him? Of course not. Disturbed people find justification in whatever medium is available to them.
And as for teenagers researching how to commit violence, they’ve been able to do that for as long as there have been libraries, encyclopedias, or, more recently, Google. AI makes some of that marginally more accessible and conversational, but the information itself has never been hard to find for someone determined to find it.
Here’s the thing the fearmongers and Luddites don’t want you to know. Most kids are using AI to look stuff up and get help with their math homework.
New research from the Pew Research Center and Common Sense Media surveyed over 1,400 American teenagers and their parents, and the picture is a lot more benign than the headlines suggest. The top uses among teens are looking things up, help with schoolwork, research, and writing assistance. Nearly half use it for entertainment. One teen said she uses it to generate pictures of penguins and pancakes.
The more pressing finding isn’t about violence at all. It’s about how completely checked out parents are. Only 51% of parents think their kids use AI. The actual number is 64%. Four in ten parents have never had a single conversation with their child about AI, ever.
The problem isn’t the technology but the silence.
Yes, there are legitimate warning signs that a teen’s AI use has crossed into unhealthy territory. The American Psychological Association flags things like describing an AI as their best friend or primary confidant, throwing a fit when they can’t access AI, grades or real friendships slipping, using AI to dodge difficult human conversations, or noticeable changes in mood and behavior.
Now read that list again and swap out “AI” for literally any other thing a teenager can become obsessed with: a video game, a relationship, a social media account, a substance. The red flags are identical. This is not an AI problem. This is a know-your-kid problem.
Parents need to close the technology gap with their kids, and fast. Not by panicking or banning devices, but by actually learning what their kids are doing online and talking about it.
Ask your kid to show you how they use AI. Ask what they’ve asked it. Ask what it told them.
This is how it’s always worked with the internet. The parents who stayed engaged and didn’t treat technology as the enemy raised kids who navigated it well. The ones who stuck their heads in the sand raised kids who were unprotected in a world that wasn’t going to wait for them to catch up.
AI isn’t any different. The stakes can be high. Real-world violence cases prove that. But the solution is the same one it’s always been. Be present, stay informed, and keep talking to your kids.
Every generation of parents has had to learn a new ‘language’ to stay relevant in their children’s lives. Yours just happens to have a chatbot.