
Close to three months ago, I warned that Omnilert’s “AI gun detection” technology would eventually mistake something harmless for a weapon, maybe a saxophone, maybe an umbrella. Turns out, it didn’t even take that much. All it took was a bag of Doritos.
In Baltimore County this week, a Black teenager was sitting outside Kenwood High School after football practice, eating chips. Minutes later, police rolled up, eight cars deep, guns drawn, and ordered him to the ground. Why? Because an AI system mistook his crumpled Doritos bag for a gun.
The teen was handcuffed, searched, and humiliated before officers realized he had nothing but a snack. The police showed him the image that triggered the alarm, a grainy frame from a school security camera that the Omnilert system had flagged as a weapon. It wasn’t. It was junk food.
Baltimore County schools adopted Omnilert last year, part of a growing trend of districts using AI to “spot potential weapons” in surveillance footage. The company promises a human review before police are dispatched, but that clearly didn’t stop this from happening. And when WBAL-TV asked Omnilert for comment, the company refused to discuss the incident, citing “internal school procedures.”
That’s convenient.
But let’s be honest. This isn’t just a technological failure. It’s a moral one. When an algorithm sees danger where there is none, and the response is armed police confronting a Black teenager, it’s not simply a bug in the system. It’s a reflection of the biases baked into the data that trains these programs, the same societal assumptions that already put Black youth at greater risk of being perceived as threats.
And while school administrators rush to assure parents that “safety is our highest priority,” that phrase rings hollow when safety means pointing guns at innocent students.
The truth is, this is exactly what I warned about. AI gun detection sounds good on paper until it mistakes a snack for a firearm and someone ends up on the pavement. Until the human “verification” process fails under pressure. Until a tool meant to protect students ends up traumatizing them instead.
This isn’t progress. It’s security theater wearing a circuit board. It’s the illusion of control sold to frightened parents and underfunded schools by companies that want to look like heroes in a crisis.
AI wouldn’t have stopped Uvalde. It wouldn’t have prevented Nashville. And it didn’t make anyone safer in Baltimore County this week. What it did do was turn a normal teenager eating Doritos into a “suspicious person with a weapon.”
And like every other “innovation” sold as a fix for school violence, it still doesn’t address the real problem: that kids can get guns in the first place. Until that changes, all the cameras, algorithms, and buzzwords in the world won’t make a single classroom any safer.
If that’s the future of school safety, it looks a lot like the past, just with more cameras and less accountability.
(Source)