New York has unveiled a World Economic Forum-backed AI surveillance system intended to predict crimes before they occur, targeting the city’s subway system. The Metropolitan Transportation Authority (MTA) is deploying the technology to detect “irrational or concerning conduct,” aiming to enhance safety amid rising public concern.
Governor Kathy Hochul, who has prioritized subway surveillance since 2021, is responding to a wave of high-profile assaults and robberies that have fueled public anxiety. With only 40 percent of subway platform cameras currently monitored in real time by human operators, the AI system marks a significant escalation in efforts to preempt criminal activity.
Naturalnews.com reports: In line with this, the MTA is collaborating with AI firms to implement real-time video analysis technology that scans for “problematic behaviors” and alerts law enforcement before incidents occur.
Unlike facial recognition systems, the AI would not identify individuals but would instead flag suspicious actions, such as unattended bags or aggressive movements, to predict possible threats. The agency describes this approach as “predictive prevention”; critics counter that it amounts to a pre-crime dragnet that turns nervous tics, anxious pacing or even talking to yourself into potential red flags for law enforcement.
The AI system would expand surveillance coverage without requiring additional staff, automatically notifying the New York Police Department (NYPD) of potential dangers.
“AI is the future,” said MTA Chief Security Officer Michael Kemper. “We’re working with tech companies literally right now and seeing what’s out there right now on the market, what’s feasible, what would work in the subway system.”
Policymakers believe that AI is inherently more objective than human beings
This move has raised alarms among civil liberties advocates who warn of a slippery slope toward mass behavioral policing.
Writing for Reclaim the Net, Cristina Maas argued that a dangerous illusion is spreading among policymakers: the belief that algorithms are inherently objective, that they transcend the biases and blind spots of human judgment.
AI, she argued, is no oracle. It is a tangled web of code and human assumptions, trained on flawed data and peddled by tech evangelists who have never had to navigate the chaos of a rush-hour subway, let alone the complexities of real-world decision-making. Their glossy presentations promise progress, but in reality they are selling a veneer of precision over the same old prejudices, repackaged as innovation.
“Whatever patterns these systems detect will reflect the same blind spots we already have; just faster, colder and with a plausible deniability clause buried in a vendor contract,” Maas wrote. “And while the MTA crows about safer commutes, the reality is that this is about control. About managing perception. About being able to say, ‘We did something,’ even if that something is turning the world’s most famous public transit system into a failed sci-fi pilot.”