Racist Facebook AI mislabelled Black men as primates


In the ever-changing world of AI development, there are plenty of instances where AI has tripped up when it comes to identifying people and things correctly. While some of these mistakes are harmless and simply part of machine learning, others have already proven that AI is not ready to be making big decisions.

Oftentimes, built-in biases in machine learning can have life-or-death consequences for entire subsets of people. Yesterday, another example of bias in AI came to light when Facebook's AI mislabelled Black men as primates.

Facebook AI mislabelled Black men

On June 27th, a Facebook user watching a video was automatically prompted to “keep seeing videos about Primates.” The content they were watching didn't feature animals; it featured Black men.

When The Verge reached out to Facebook for comment on the incident, a spokesperson revealed that the entire topic recommendation feature had been swiftly disabled. Facebook explained the situation “was clearly an unacceptable error” and apologised to any users who saw the incorrect and offensive recommendations.

Why was Facebook using an algorithm?

Facebook is not the first company to use AI for its recommendation systems. From Watch Next on YouTube to suggested items on Amazon, algorithms are used throughout the internet to predict upcoming interactions. Curating feeds makes for a better user experience, and it also lets the algorithm learn what you may be more likely to engage with next.

When you first join a service like Facebook, you’re asked to select the topics you like the most. This allows the algorithm to push content you like and hide topics that you don’t, as the sketch below illustrates. After all, if you repeatedly saw topics you hated, you wouldn’t keep using the service for very long.
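To make that idea concrete, here is a minimal sketch, using a hypothetical feed of posts tagged by topic (none of these names come from Facebook's actual system), of how declared preferences can filter and rank content:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    topic: str

def build_feed(posts, liked_topics, hidden_topics):
    # Drop hidden topics entirely, then rank liked topics first.
    visible = [p for p in posts if p.topic not in hidden_topics]
    return sorted(visible, key=lambda p: p.topic not in liked_topics)

posts = [
    Post("Transfer rumours roundup", "Football"),
    Post("New GPU benchmarks", "Technology"),
    Post("Celebrity gossip special", "Celebrities"),
]

feed = build_feed(posts, liked_topics={"Technology"}, hidden_topics={"Celebrities"})
for post in feed:
    print(f"{post.topic}: {post.title}")
# Technology: New GPU benchmarks
# Football: Transfer rumours roundup
```

Real recommendation systems go much further, updating these preferences from every click and watch, but the basic shape is the same: surface what you engage with, suppress what you don't.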

Read More: AI will always be biased while humans create it

How can we prevent this in future?

AI is created through the process of learning from data sets. As those data sets are assembled by humans, the resulting systems inherit human biases. While no one (hopefully) was intentionally mislabelling Black men as primates, algorithms learn and then predict, and sometimes these predictions don’t hit the mark for various reasons. This is another unfortunate example of bias in AI, and one that can only be addressed by expanding our data sets to include more of the people who are not usually represented.
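As an illustration of how this happens, here is a minimal sketch (using synthetic data, not anything from Facebook's models) of an under-represented group in a training set leading to a higher error rate for that group:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, mean):
    # One-dimensional stand-in for image features, centred on `mean`.
    return rng.normal(loc=mean, scale=1.0, size=n)

# Training set: group A vastly outnumbers group B.
X_train = np.concatenate([make_group(1000, mean=0.0), make_group(20, mean=2.0)])
y_train = np.array(["A"] * 1000 + ["B"] * 20)

def predict(x):
    # 1-nearest-neighbour: copy the label of the closest training example.
    return y_train[np.abs(X_train - x).argmin()]

# Evaluate on balanced test data drawn from the same two distributions.
test_a = make_group(500, mean=0.0)
test_b = make_group(500, mean=2.0)

acc_a = np.mean([predict(x) == "A" for x in test_a])
acc_b = np.mean([predict(x) == "B" for x in test_b])
print(f"accuracy on over-represented group A: {acc_a:.2f}")
print(f"accuracy on under-represented group B: {acc_b:.2f}")
# Group B's accuracy comes out far lower: the model has simply seen
# too few examples of it to learn a reliable boundary.
```

No one wrote a biased rule here; the skew comes entirely from what the model was given to learn from, which is exactly why representative data sets matter.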

The onus is on us to expand our horizons to include the people we don’t usually see and to make them more visible in our day-to-day lives. By doing that, we not only enable more diverse voices to be heard but also allow AI to reach its potential of being accurate for all.

Read More: AI homogenisation is reaching dangerous levels, says Stanford
