The science fiction world of Minority Report is upon us. In a bizarre turn of events, American scientists have successfully created a crime prediction AI that is said to accurately predict crimes before they happen.
But how could this technology possibly be real? And does it suffer from the same biases that most artificial intelligence programs suffer from?
Did scientists really create Minority Report AI?
As reported by Interesting Engineering, scientists at the University of Chicago have created a crime-prediction AI tool. Designed for urban areas, the software predicts crimes before they happen.
To work, the algorithm splits the city of Chicago into square tiles roughly 1,000 feet across. Within each tile, the AI collates prior violent and property crimes. By finding patterns in that historical data, the software deduces when and where new crimes are likely to occur.
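The researchers' actual model is far more sophisticated, but the basic idea of the tiling step can be sketched as follows. This is a minimal, hypothetical illustration, assuming incident locations are given as (x, y) coordinates in feet from a city origin; the tile size, threshold, and every function name here are assumptions for the example, not the study's real code.

```python
from collections import defaultdict

TILE_SIZE = 1000  # hypothetical tile width in feet, per the article's description


def tile_for(x, y, size=TILE_SIZE):
    """Map a point to the grid tile that contains it."""
    return (int(x // size), int(y // size))


def count_events(events, size=TILE_SIZE):
    """Tally historical crime incidents per tile."""
    counts = defaultdict(int)
    for x, y in events:
        counts[tile_for(x, y, size)] += 1
    return counts


def high_risk_tiles(counts, threshold=3):
    """Naive stand-in for 'prediction': flag tiles whose
    historical incident count meets a threshold."""
    return {tile for tile, n in counts.items() if n >= threshold}


# Toy data: (x, y) locations of past incidents, in feet
events = [(120, 340), (450, 900), (510, 880), (480, 950), (2500, 2600)]
counts = count_events(events)
print(high_risk_tiles(counts))  # {(0, 0)} — four incidents cluster in one tile
```

In this toy version, "prediction" is just a count threshold; the real system instead learns temporal patterns from the per-tile event sequences to forecast future incidents.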
The Minority Report-style AI is said to be 90% accurate in the city of Chicago. The tool has also been tested on data from Los Angeles, Atlanta, Philadelphia, and other cities.
The scientists behind the crime prediction tool are aware of the biases that plagued previous attempts and are trying to avoid them. For example, models trained on state-surveillance data tend to be biased against minority communities, as those communities are subject to heavier surveillance.
Crime prediction AI is already in use
Crime prediction AI is already deployed in numerous countries around the world. However, the technology is far from perfect.
For example, crime prediction software exhibits clear biases because of biases in its training data. Datasets drawn from police records and police surveillance cover minority populations more heavily, so the AI ends up reinforcing those existing patterns.
Of course, there’s also the ethical dilemma of crime prediction AI. If a program predicts a crime and police stop it ahead of time, should the would-be offender be arrested? After all, no crime actually took place.