Well, it didn’t take that long to prove me right, but then again, it never does.

In my previous post about Omnilert’s AI gun detection system being used at Uvalde, I talked about the problem of false positives. That was in the context of guns. This post isn’t about AI and guns, but it shows the same fundamental breakdown between AI, schools, and police. Honestly, the more I read about these cases, the more Orwellian the whole thing feels.

For example, a 13-year-old Tennessee girl cracks a stupid, offensive joke in a private chat with classmates. The school’s surveillance software, known as Gaggle, flags it. Before the morning is over, she’s arrested, interrogated, strip-searched, and spends the night in a cell. A court hands her eight weeks of house arrest, a psych evaluation, and 20 days in an alternative school.

The joke wasn’t a threat. Context made that clear. But zero-tolerance laws and an algorithm with no sense of humor make for a dangerous combination.

Of course, being the corporate douchenozzle he probably is, Gaggle’s own CEO admitted the school didn’t use the software the way it was intended. Ah, the corporate way: blame and deflect. But that misuse is a major part of the problem. The supposed purpose of the AI is to identify potential issues before they become law enforcement matters. Instead, the school went straight to cops and cuffs.

Let’s also not forget that AI doesn’t ‘understand’ context. That’s especially concerning when you remember how adept incels, columbiners, and other would-be school shooters are at using coded language to stay under the radar. These systems will happily overreact to a bad joke while completely missing the real threats.

If I were a teenager today, I’d probably be given the death sentence just for researching stories for this blog. Between looking up school shooters, extremist forums, and court records, the software would be setting off alarms every time I opened my laptop.

What makes it even worse is that AI companies refuse to release meaningful data on how often their systems get it wrong. We only have rare glimpses, like the nearby Lawrence, Kansas, school district, where Gaggle sent over 1,200 alerts in 10 months and almost two-thirds turned out to be non-issues. More than 200 of those flags came from homework. Students in a photography class were even called to the office over ‘nudity’ in their school projects, which turned out to be nothing at all.

How much training do schools actually get on these systems? Do administrators even understand the tools they’re using, or is it just plug-and-play and let the AI be the hall monitor? One Lawrence school board member actually defended it with the phrase “the greater good.” Do you know who else said they were acting for the greater good? Let’s just say history is full of examples, and none of them are flattering.

Meanwhile, the irony is mind-boggling. We have all this surveillance power, yet time after time, real-world social media warnings about future school shooters go ignored by police, parents, and schools.

It’s hard to see programs like this as anything other than another revenue stream for tech bros who are angling for state and federal handouts. The sales pitch is always about safety, but the result is kids being criminalized for words taken out of context. And just like the AI gun cameras, it still doesn’t address the core issue of how easy it is for teens to legally get their hands on the weapons used in these shootings in the first place.

Until we fix that, everything else is just high-tech security theater. A very expensive, very invasive, and very flawed theater.

(Source)

