
Previously, I wrote about Baltimore County’s shiny new AI gun detection system from Omnilert. You know, the one that mistook a bag of Doritos for a firearm and sent police swarming a teenager who was just trying to eat a snack after practice. At the time, I said the whole thing was security theater with the potential to go very, very wrong.
Well, now we have the official response, and it may even be worse than the incident itself.
This week, a Baltimore County Councilman called for a full review of what happened. This makes sense, because multiple officers rolling up with guns drawn on a minor over a snack food should, at a minimum, trigger some reflection. But when reporters reached out to the district and Omnilert to ask how this could have happened, the answer was…
…the technology worked as intended.
Let’s break that down.
A student sitting outside eating chips gets flagged as a potential active shooter. The alert goes through multiple layers of human review. Administrators scramble. Police are called. And officers show up ready to fire.
And the official line from both the school system and the corporation selling them this hardware is that nothing malfunctioned. Nothing went wrong. This was how it’s supposed to function.
If your system’s intended behavior ends with a teenager face-down on the pavement at gunpoint over Doritos, your system is a failure by design. Full stop.
And yes, we need to talk about the part the press releases gloss over.
The student is Black.
Which means the risk here wasn’t just humiliation. It was life or death. Because when you pair an algorithm that misidentifies harmless objects as weapons with a school environment primed for panic and police trained to treat every scenario as an imminent threat, you’re stacking the deck in the worst possible direction for a Black teenager.
Everyone involved wants to treat this as a technical hiccup, a training issue, or a miscommunication, but the truth is simpler. This is what happens when school districts outsource “safety” to a Silicon Valley fever dream that treats children like threat silhouettes and trains adults to respond in a military posture to vending machine crumbs.
This is what happens when decision-makers would rather buy more cameras than confront the factors that actually cause violence.
And this is what happens when a system is built without ever asking, “Who is this most likely to harm?”
So maybe Omnilert is right. Maybe the system did work as intended. And that should terrify everyone.
Because AI false positives don’t just cause inconvenience; they create danger, and they escalate ordinary moments into weapons-drawn standoffs. If even one thing had gone differently that night—a flinch, a startled reaction, a loud noise nearby—this story ends in a graveyard.
We’re not talking about hypothetical risk. We’re talking about the difference between snacks and funerals.
Until schools stop confusing surveillance with safety, we’re going to keep seeing more of this.
And sooner or later, someone won’t walk away from it.
(Source)