PredPol’s algorithm uses historical crime data on location, time/date, and type of crime to produce 500-by-500-foot areas called “hotspots,” where it forecasts that crime is most likely to occur. In some cases these boxes can be as small as an individual house or a small group of houses.
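The grid mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not PredPol’s actual code: it assumes incident coordinates already projected onto a local plane measured in feet, and simply bins them into 500-foot boxes and counts which box is densest.

```python
from collections import Counter

CELL_FT = 500  # side length of each "hotspot" box, per the description above

def cell_of(x_ft: float, y_ft: float) -> tuple[int, int]:
    """Return the (column, row) index of the 500-ft box containing a point."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

# Made-up sample incidents (coordinates in feet on a local projection).
incidents = [(120.0, 80.0), (450.0, 499.0), (510.0, 20.0), (130.0, 90.0)]

counts = Counter(cell_of(x, y) for x, y in incidents)
hotspot, n = counts.most_common(1)[0]
# Three of the four sample incidents fall in box (0, 0), so it becomes the "hotspot".
```

A real system would layer time-of-day and crime-type features on top of these counts, but the spatial unit of prediction is this kind of fixed grid cell.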
PredPol’s reliance on data collated from police reports makes it vulnerable to existing biases and likely to contribute to the over-policing of minority areas. Here’s how that works: police have historically been more active in black and minority ethnic areas, gathering more data and filing more police reports on those areas. When those reports are fed back into the algorithm, PredPol’s “hotspots” are liable to generate feedback loops that repeatedly flag the same spaces as criminal hotspots.
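The feedback loop can be demonstrated with a toy simulation. This is a deliberately simplified sketch, not a model of PredPol itself: every cell has the same true crime rate, but one cell starts with slightly more historical reports, patrols follow the predicted hotspot, and crimes are more likely to be reported where officers are present. The initial imbalance then compounds.

```python
import random

random.seed(0)

CELLS = 4          # hypothetical grid cells
TRUE_RATE = 10     # identical underlying crime incidents per cell per period
BASE_DETECT = 0.5  # chance an incident is reported under normal patrol levels
HOT_DETECT = 0.9   # chance an incident is reported in the heavily patrolled cell

# Slightly uneven historical report counts (e.g. past over-policing of cell 0).
reports = [12, 10, 10, 8]

for period in range(20):
    # "Prediction": extra patrols go to the cell with the most past reports.
    hotspot = reports.index(max(reports))
    for cell in range(CELLS):
        detect = HOT_DETECT if cell == hotspot else BASE_DETECT
        observed = sum(1 for _ in range(TRUE_RATE) if random.random() < detect)
        reports[cell] += observed  # retrain on the newly biased data

share = reports[0] / sum(reports)
# Despite identical true crime rates, cell 0 accumulates a disproportionate
# share of reports and stays the predicted hotspot.
```

The point of the sketch is that the data measures police activity, not crime: once predictions direct patrols, the reports they generate confirm the prediction regardless of where crime actually happens.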
PredPol is partnered with 60 police departments across the United States, many of which have a history of heavy-handed and racialised policing. The LAPD, PredPol’s first law enforcement partner, was placed under a federal consent decree for more than a decade as a result of its record of racial profiling and discriminatory policing practices.
In light of these pre-existing discriminatory practices, concerns about PredPol’s effectiveness and its ability to reduce crime are all the more pressing. There is little evidence that PredPol is an effective policing tool; indeed, the only academic review of PredPol’s algorithm was authored by the founders of the company itself.
PredPol’s technology is rooted in violence: the algorithm was initially designed to predict strikes by enemy combatants in Iraq, and its present use continues this divisive and violent legacy. At best, PredPol is an ineffective policing tool that provides cover for violent police departments that already over-police black and minority communities. At worst, PredPol uses biased police report data to actively encourage over-policing, a practice that all too frequently results in the tragic deaths of black people.
Either way, @PredPol – No Tech For U.