Law enforcement agencies in the US and Europe are using forecasting software and algorithms to decide who might commit future crimes, and a new documentary convincingly argues that this puts us at risk of abuse and government overreach.

Pre-Crime—appearing in Toronto's Hot Docs Festival—signals its position early on. It's sleek and stylish, but it's also more than a little creepy, offering aerial shots and satellite surveillance of cities and the people in them. You're made to feel guilty of spying and nervous about being watched all at once. There are no fans of the surveillance state here.
The filmmakers catch up with both the cops using predictive technologies and the potential criminals identified and flagged. Software from companies like PredPol compiles data on individuals based on everything from their prior interactions with law enforcement to the last thing they tweeted. Individuals are assigned numbers that represent their "likelihood of violence," and areas are designated as high risk—the logic is that crime is contagious and prone to aftershocks.

Those selected as potential risks actually receive letters or visits from the police, reminders that they're being watched. It's a deeply condescending approach that makes innocent people feel like criminals. They're never told how they were identified. Nor are we. The precise function of pre-crime programs has to be kept at least somewhat vague, presumably so that no one can cheat the system.

If that all sounds weird and dystopic, a Chicago cop offers some semblance of reassurance early in the doc. "I don't know the science of it, but it was all through mathematical algorithms, basically," he says.

Since 2013, predictive technologies have been slowly rolling out in England, Los Angeles, and elsewhere, with plenty of praise from law enforcement agencies seeing quick results. Ottawa has been developing predictive policing programs since 2016, giving its officers real-time access to things like social media posts before they actually engage with citizens.
Critics worry that lawful protesters are being unfairly targeted and that the data is likely to be collected and used in ways that will recreate problematic biases. And they have reason to worry. Who decides what data is relevant? How much certainty is required before police interfere with a person's life? There's no clear rulebook.

Sci-fi author Philip K. Dick coined the term "pre-crime" and inspired Steven Spielberg's dystopic Minority Report. Unlike the Precogs of Minority Report, though, there's little evidence to suggest that the programs used by police work with any degree of accuracy. It's not surprising. Netflix can barely figure out patterns in the films and TV I watch. But a bad movie recommendation is pretty low stakes compared to ending up on a government watch list. The private sector can screw up as long as the system works well enough to increase profits overall. When police allow margins of error and target people who tweeted the wrong combination of words, civil rights are likely to be violated.

Of course there's more to it than that. Flaws in the system aren't just flukes, and the problem isn't just that anyone can be placed on a watch list (although that's part of it too). Instead, it's the same people who always get fucked over who are showing up as potential threats: it's generally black people and people of colour who end up on watch lists, while poor and marginalized neighbourhoods are labelled as criminal hot spots.
It's not surprising that this historical blind spot is replicated by the supposedly neutral mathematics of algorithms. The data has to come from somewhere, and a lot of it comes from law enforcement agencies themselves. It's no secret that minority neighbourhoods are already disproportionately over-policed. This is the context that pre-crime software works in, and it's where at least some of the data comes from.

The Toronto Police Services Board has consistently refused to destroy data collected through the practice of carding, where civilians are stopped and their information documented without warning or justification. While these street checks are ostensibly meant to gather general intelligence on communities in an unbiased way, a report on carding found little real evidence that the street checks help police officers do their jobs. More importantly, though, the report revealed what most black Torontonians probably already knew: that they were being carded in disproportionately high numbers.

This all raises the question: is there simply more data identifying black people and other targeted minorities as future criminals? If certain groups are being hassled and searched, it stands to reason that they're more likely to be charged with any number of offences, regardless of actual crime demographics—you can't charge white frat boys with anything if you weren't harassing them in the first place.

This is such a glaring weakness in pre-crime algorithms that a new app has taken them to task by offering a predictive heat map for white-collar crime.

The filmmakers behind Pre-Crime are keenly aware of this and spend a great deal of time with young black men who have been targeted after minor charges like possession of marijuana and illegal gambling. The film openly asks whether existing algorithms have incorporated corporate crime data.
We're left without an answer—and without much doubt that corporations somehow get a pass.

The Minority Report mentality is leaving a lot of people justifiably suspicious. It's probably safe to say that data mining and algorithms will continue to be tools of capitalism, used to sell us more things faster. Internet privacy be damned. But law enforcement? Let's look past the sleek science fiction surface of pre-crime tech and realize that nothing's been done to address the root problems of police brutality, racial profiling, and excessive surveillance. Gadgets won't fix a broken system.

You can check out the Pre-Crime world premiere at Toronto's Hot Docs on April 29.