The use of Artificial Intelligence in the police force is controversial, to say the least. With crime prediction AI taking centre stage in cities like Chicago, many worry that police are overstepping their boundaries.
Despite this, one of the tool’s creators has defended the AI program, claiming it won’t be abused. But how do we know that it will be used respectfully?
Crime Prediction AI isn’t exactly Minority Report
In an interview with Science Focus, Prof Ishanu Chattopadhyay, inventor of a crime prediction AI, argued that the technology will not be abused. He explained that the tools currently in place are detached from the (even more) dystopian world of Minority Report.
In Minority Report, individuals are predicted to commit crimes and arrested before they can act. The real-life equivalent only predicts that a specific type of crime is likely to happen in a city zone. What the police do with that information is up to them.
Using publicly available crime data, the AI splits a city — such as Chicago — into tiles roughly 1,000 feet across. It can then determine a time frame for when a crime is likely to take place in that two-block segment.
“In one of those tiles, we’ll see this time series of these different events, like violent crimes, property crimes, homicides and so on,” he said. “You can then make predictions on what’s going to happen, say, a week in advance at a particular tile, plus or minus one day.”
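To make the idea concrete, here is a minimal sketch of the data structure described above: mapping event coordinates onto a grid of tiles and accumulating a per-tile, per-day count of event categories. This is purely illustrative — the tile width, coordinate scheme, and function names are assumptions, and the published model is far more sophisticated than a simple count.

```python
from collections import defaultdict

TILE_FT = 1000  # assumed tile width: roughly two city blocks


def tile_of(x_ft, y_ft):
    """Map a coordinate (in feet) to its grid tile index."""
    return (x_ft // TILE_FT, y_ft // TILE_FT)


def build_series(events):
    """Build a per-tile event-count time series.

    events: iterable of (x_ft, y_ft, day, category) tuples.
    Returns a nested mapping {tile: {day: {category: count}}}.
    """
    series = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
    for x, y, day, category in events:
        series[tile_of(x, y)][day][category] += 1
    return series


# Hypothetical event log: coordinates, day index, crime category
events = [
    (150, 220, 0, "property"),
    (900, 480, 0, "violent"),
    (120, 310, 3, "violent"),
    (1500, 90, 1, "property"),
]
series = build_series(events)
print(series[(0, 0)][0]["property"])  # count for tile (0, 0) on day 0
```

A real predictor would then fit a model to each tile's time series to forecast the following week; here only the bookkeeping step is shown.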
Chattopadhyay discussed the fears that this tool will be used to jail people before they’ve committed any crime. He said:
“People have concerns that this will be used as a tool to put people in jail before they commit crimes. That’s not going to happen, as it doesn’t have any capability to do that. It just predicts an event at a particular location. It doesn’t tell you who is going to commit the event or the exact dynamics or mechanics of the events. It cannot be used in the same way as in the film Minority Report.”
A deterrent for gang violence?
Chattopadhyay went on to explain that the crime prediction AI tools currently available are rudimentary. While they can predict crimes, they can only do so for crimes with precedent — frequent gang violence, for example. He explained:
“In Chicago, most of the people losing their lives in violent crimes is largely due to gang violence. It is not like a Sherlock Holmes movie where some convoluted murder is happening. It is actually very actionable if you know about it a week in advance – you can intervene. This does not just involve stepping up enforcement and sending police officers there, there are other ways of intervening socially so that the odds of the crime occurring actually goes down and, ideally, it never happens.”
Nevertheless, Chattopadhyay is aware of AI bias, a rampant issue in artificial intelligence today. However, he claims that because the technology relies on event logs rather than human data, it is safer, if not perfect.
“[Previously,] they were putting people on the list who were likely to be perpetrators or victims of gun violence, using an equation involving characteristics like arrest histories. And that resulted in a large proportion of the black population being on the list.”
“We are trying to start only from the event logs. There are no humans sitting down figuring out what the features are, or what attributes are important. There’s very little manual input going on, other than the event log that is coming in. We have tried to reduce bias as much as possible.”
AI crime tech is still an issue
Chattopadhyay’s insistence that his crime prediction AI tool won’t be abused is hard to believe. After all, how police take advantage of the tools they’re given isn’t really up to the creator of said tools.
Furthermore, police have already used AI tools in flawed ways. For example, the AI audio tool ShotSpotter was used to arrest a man for murder. It later emerged that the wrong man had served a months-long sentence for a crime he didn’t commit.
We’re still in a rather experimental phase with AI tools, and sweeping reliance on AI like this can be rocky. From crime prediction to military conflict prediction, governments are going hard on AI tools. Are they for the best?