An artificial intelligence was able to predict crimes with 90% accuracy

Tech&Co

Researchers at the University of Chicago have developed an algorithm with two functions: predicting crime and revealing bias in police interventions in the United States.

Researchers at the University of Chicago have developed an artificial intelligence capable of predicting the locations and dates of crimes in several US cities. According to its authors, it achieves a 90% success rate.

The software was “trained” on crime data from the city of Chicago compiled between 2014 and 2016 in order to build a predictive model. It then forecasts crimes committed in the following weeks.

The article, published on June 30 in the scientific journal Nature Human Behaviour, indicates that predictions localize crimes to within roughly 300 meters, one week in advance. Chicago was not the only city concerned: seven other major US cities underwent the same exercise.
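In broad strokes, the approach divides a city into small tiles and learns from each tile's event history to forecast events a week ahead. The Python sketch below is only a minimal illustration of that general idea under assumed simplifications (a fixed grid of tiles, a 28-day lookback, synthetic placeholder data, and an off-the-shelf logistic classifier); the study's actual event-level sequence models are considerably more sophisticated.

```python
# Minimal sketch: per-tile daily event histories feeding a classifier that
# predicts whether a crime occurs in a tile one week ahead. Grid size,
# lookback, horizon, and the model choice are illustrative assumptions,
# not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

N_TILES, N_DAYS, LOOKBACK, HORIZON = 400, 730, 28, 7  # ~2 years of data

# Placeholder data: daily crime counts per ~300 m tile (real input would
# come from the public crime records the researchers trained on).
counts = rng.poisson(lam=0.1, size=(N_TILES, N_DAYS))

X, y = [], []
for t in range(LOOKBACK, N_DAYS - HORIZON):
    for tile in range(N_TILES):
        X.append(counts[tile, t - LOOKBACK:t])        # 28-day history
        y.append(int(counts[tile, t + HORIZON] > 0))  # crime 7 days out?
X, y = np.array(X), np.array(y)

# Chronological split: train on the past, evaluate on the future.
split = int(0.8 * len(X))
model = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
proba = model.predict_proba(X[split:])[:, 1]
print(f"held-out AUC: {roc_auc_score(y[split:], proba):.2f}")
```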

To counter the discriminatory biases of algorithms, now acknowledged by various players in the sector, the artificial intelligence does not identify potential suspects, only the potential locations where crimes may take place.

Fight against bias

The purpose of this research is not only to develop a crime-prevention tool; it should also improve how police interventions are planned. In fact, the researchers' work highlights reduced police protection in some of the most disadvantaged neighborhoods of several major cities, including Chicago and Los Angeles.

The report specifically highlights the higher number of arrests in more affluent neighborhoods compared with poorer ones over the same study period.
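For intuition, a check of this kind can be expressed as a simple arrest-rate comparison across neighborhoods grouped by income. The snippet below is purely illustrative: the field names, grouping, and toy records are assumptions for the sake of the example, not the study's data or methodology.

```python
# Illustrative bias check: fraction of recorded crimes that led to an
# arrest, by neighborhood income group. All names and records here are
# hypothetical placeholders.
import pandas as pd

events = pd.DataFrame({
    "tile_income": ["high", "high", "low", "low", "low", "high", "low"],
    "arrest_made": [True, True, False, False, True, True, False],
})

arrest_rate = events.groupby("tile_income")["arrest_made"].mean()
print(arrest_rate)
```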

“This is not Minority Report,” counters Professor Ishanu Chattopadhyay, who led the study. “Law-enforcement resources are not infinite, so they should be used as effectively as possible. It would be better to know where homicides are likely to take place,” he told the science magazine New Scientist.

The research team has also made its data public, as well as its software, so that other experts can analyze them. Giving everyone the opportunity to detect and report potential biases is one way of making the algorithm transparent.

“Rather than increasing a state’s power by predicting crime based on location and date, our tools help test whether law enforcement is affected by bias and better understand the processes by which urban policing evolves,” says Professor Chattopadhyay.

However, such algorithms have a bad reputation among privacy advocates, who point to their often imprecise and biased results. The researchers in question seem well aware of these pitfalls and stress that their aim, above all, is to expose these biases in order to better eliminate them.
