Crime prediction AI can't be trusted as long as human bias exists



It's an understatement to say that living through 2020 and 2021 felt like a slide towards complete dystopia. Lockdowns, a pandemic, job losses, isolation, "fake news", vaccines: these are staples of science fiction, yet they became the reality we lived through. COVID-19 will leave the world forever changed. For some nations, even this is not dystopian enough. Amid digital Metaverse realities and space travel lawsuits, attention turns to an even scarier controversy: AI-controlled systems, primarily the use of AI to predict crimes or identify military movements days before they happen.

What is AI?

Artificial Intelligence, or AI, is the simulation of human intelligence, most commonly by computers and robots. Its aim is to analyse and respond to information faster than humans can. AI can be better suited to certain tasks than humans, such as analysing large volumes of photos or text.

AI systems work by taking in labelled data sets, categorising them and finding patterns. Once a system has done this enough times, it can predict future situations based on what it has learned. This could be especially useful in war, where real people and communities are at stake. The casualties and catastrophic impact of war cannot be overstated, and mitigating or reducing that overall damage is the end goal.
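The "learn from labelled examples, then predict" loop described above can be sketched with a toy nearest-neighbour classifier. Everything here (the data points, the labels, the distance rule) is invented purely for illustration and bears no relation to any real crime-prediction system:

```python
import math

# Toy labelled data set: each entry is (feature_vector, label).
# The features and labels are made up for this sketch.
labelled_data = [
    ((1.0, 1.0), "A"),
    ((1.2, 0.8), "A"),
    ((5.0, 5.0), "B"),
    ((4.8, 5.2), "B"),
]

def predict(features):
    """Predict a label for unseen data by finding the closest labelled example."""
    def distance(example):
        point, _ = example
        return math.dist(point, features)
    _, label = min(labelled_data, key=distance)
    return label

print(predict((1.1, 0.9)))  # closest to the "A" cluster, so predicts "A"
print(predict((5.1, 4.9)))  # closest to the "B" cluster, so predicts "B"
```

The key point: the system can only ever echo patterns present in its labelled data, which is exactly why biased data sets matter so much in the sections that follow.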


Where can it go wrong?

As long as humans create data sets, there will be bias. Usually, these biases affect people of colour and minorities the most. This isn't hyperbole either: Face ID, voice assistants and even soap dispensers are less likely to work correctly for people who aren't white.

Since computers are well suited to pattern recognition, this could become an essential tool for saving lives in the long term. However, there are technological limitations too. Anyone with a smartphone has probably had facial recognition fail because they were wearing glasses. This is because we're all different, and recognition systems have to cope with enormous variation between faces.

MIT Media Lab researcher Joy Buolamwini found that face-analysing AI works significantly better for white faces than black ones. When she tested facial recognition systems including Microsoft's and IBM's, the error rate in determining the sex of light-skinned men never exceeded 0.8%. For darker-skinned women, however, error rates ran from over 20% in one case to over 34% in two others. This is partly a technical limitation: camera sensors rely on reflected light, and darker surfaces reflect less light than lighter ones, giving the sensor less signal to work with.
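Disparities like the ones Buolamwini reported only show up if you measure error rates per demographic group rather than one overall figure. Here is a minimal sketch of that idea, using entirely made-up prediction results rather than her actual data:

```python
# Hypothetical (invented) classification outcomes:
# each entry is (group, was_the_prediction_correct).
results = [
    ("group_1", True), ("group_1", True), ("group_1", True), ("group_1", False),
    ("group_2", True), ("group_2", False), ("group_2", False), ("group_2", False),
]

def error_rate(group):
    """Fraction of incorrect predictions for one group."""
    outcomes = [correct for g, correct in results if g == group]
    return sum(1 for c in outcomes if not c) / len(outcomes)

# A single aggregate error rate hides the gap between groups.
overall = sum(1 for _, c in results if not c) / len(results)
print(f"overall: {overall:.0%}")                # 50%
print(f"group_1: {error_rate('group_1'):.0%}")  # 25%
print(f"group_2: {error_rate('group_2'):.0%}")  # 75%
```

A system that looks "half decent on average" can still be failing one group three times as often as another, which is why per-group auditing is now standard practice in fairness research.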


In 2017, a viral video shared by a Facebook employee showed an automatic soap dispenser recognising a white man's hand but not a black man's. It shows how much work still has to go into the physical hardware used for identification, both in terms of the sensors used and the quality of the imaging software that identifies people.


What do Face ID and soap dispensers have to do with crime AI?

Everything. There have already been trials from New York to the UK in which AI is used to help law enforcement, cross-referencing IDs to check for previous convictions or outstanding arrest warrants. This is a recipe for disaster when AI already has difficulty identifying people of colour. We have seen the tragedies that befall communities when misinformation, false identification and police brutality merge. This especially affects black men, who are disproportionately likely to become victims of violence after being pulled over in the USA.

The issue is that human biases inherently pollute AI programs, to the point that it is becoming dangerous. When these biases are built into crime and war prediction, they hold life and death in digital hands. A person's life, or an act of war, could rest on the decision of a heartless algorithm. While the technology is in its infancy, the risk of getting things wrong remains high.

If you’re interested in the biases that have been reported over the years in this sort of technology, I highly recommend AI lacks intelligence without different voices - x.ai, written by David Dennis Jr. It's a great synopsis of the challenges involved in creating AI that will truly be beneficial for all.


Read More: Now-dismissed AI audio "evidence" may have wrongly sent a man to jail.


Will crime prediction ever be ethical?

No, not while there is so much we still don’t know. If we're struggling to dispense soap correctly, we're not prepared for law enforcement operations based solely on AI. As a psychology major, in our final year we’re taught a funny concept: you know your topic is a go if it calls for an Ethics Form A rather than an Ethics Form B. Ethics Form A applications are always approved because the research doesn't harm anyone, for example analysing the results of other studies and comparing them with other researchers' findings. An Ethics Form B, by contrast, is long and extensive: it requires you to prove your hypothesis is worth testing when there is a potential to affect others.

The rigour and ethical preparation of academia should extend beyond its walls. As the old saying goes, “Just because you can, doesn’t mean you should.” Taking all of this into consideration, we still have a long way to go to protect the people who are most vulnerable when this technology fails.

Read More: Unity employees discover they’ve been making military software. They’re not happy.