Police Will Use AI to Predict Violent Crimes, Minority Report-Style
The world’s first system to predict violent crime — identifying perpetrators and victims before a killing is committed — may come online in March 2019.
It sounds a lot like Minority Report, the futuristic movie in which a Washington, DC-based police force uses beings with psychic powers to predict murders. Except this is freaking real.

Credit: Dreamworks

Instead of using precogs — as Philip K. Dick called these special humans in the story that inspired the film — the police in the United Kingdom will use Artificial Intelligence and Big Data gathered from multiple police databases at the local and national level. Their system has already processed more than one terabyte of information and flagged more than five million potential suspects and victims.
A report by New Scientist says that this system — called the National Data Analytics Solution, or NDAS — uses AI and statistics to assess the risk of someone “committing or becoming a victim of gun or knife crime, as well as the likelihood of someone falling victim to modern slavery.”
The AI analyzes 1,400 indicators, including 30 factors — like previous crime records or association with criminals — that are especially useful for predicting who will commit the next crime and when.
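The actual NDAS model and its 1,400 indicators are not public, but the basic idea — combining weighted indicator flags into a single risk score — can be sketched roughly. Everything below is hypothetical: the indicator names, the weights, and the scoring function are invented for illustration only.

```python
# Illustrative sketch only: the real NDAS model, indicators, and weights
# are not public. These four indicators stand in for the ~1,400 real ones.
WEIGHTS = {
    "prior_violent_offences": 0.40,       # hypothetical indicator
    "known_associates_with_records": 0.25,
    "recent_weapon_incident": 0.30,
    "victim_of_past_violence": 0.05,
}

def risk_score(indicators: dict) -> float:
    """Return a 0-1 risk score as a weighted sum of binary indicator flags."""
    total = sum(WEIGHTS[name] for name, present in indicators.items() if present)
    return min(total, 1.0)  # clamp to the 0-1 range

# A hypothetical individual with two of the four indicators present.
person = {
    "prior_violent_offences": True,
    "known_associates_with_records": True,
    "recent_weapon_incident": False,
    "victim_of_past_violence": False,
}
print(round(risk_score(person), 2))  # 0.65
```

A real system would presumably learn such weights statistically from historical data rather than hand-assign them, and would output a probability rather than a clamped sum; this sketch only shows the shape of indicator-based scoring.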
A total of nine British police forces — including London’s Metropolitan Police and Greater Manchester Police — are participating in the development of this predictive technology, whose first prototype could go live as soon as March 2019 in the West Midlands.
According to NDAS project manager Iain Donnelly, the objective of the system — which its promoters claim is the first of its kind — is to prevent crime rather than spend money solving crimes and prosecuting criminals. The idea is that, if the system flags someone as a potential criminal or victim, the police will be able to head off the crime with counseling and social assistance programs. Or just by scaring the bejeezus out of bad guys.
While Britain’s NDAS seems to be pioneering in the way it combines multiple databases and AI to pinpoint future criminals at the individual level, police forces in the United States have used AI systems to identify criminal hotspots. And while these systems don’t identify specific future killers, just potential areas of conflict, their use has drawn criticism from the American Civil Liberties Union, the Brennan Center for Justice, and other civil rights organizations that believe they could be misused and, worse, biased against certain populations.
The Alan Turing Institute, for one, has released a report raising serious ethical concerns over this predictive-policing effort.
According to West Midlands Police’s Deputy Chief Constable Louisa Rolfe, however, the NDAS “sought this independent review at a very early stage as we think an ethical approach should guide the development of this work.” According to the Data Ethics Group at The Alan Turing Institute, the “West Midlands Police have begun work on the National Analytics Solution project and are actively drawing on advice offered by the Turing and IDEPP to help develop their approach to the ethical governance of the project.”
In other words: It seems the project will go forward with some modifications, and Britain — a country famously festooned with CCTV cameras, 420,000 of them in London alone, second only to Beijing’s 470,000 — will have a new Big Brother as soon as next year.