Facebook moderation AI can’t tell the difference between car washes and mass shootings
Facebook’s tools for moderation are awful.
Social media websites are not great at moderation. Facebook, TikTok, YouTube and more all struggle with misinformation. On Facebook, misinformation and harmful content are supposed to be handled by the Facebook moderation AI. However, it's a bit too crap at its job.
As reported by Ars Technica, the current Facebook moderation AI has far too many problems. Worse, the software has failed to show any meaningful improvement over its years of use.
Facebook moderation AI gets confused easily
According to the report, internal documents leaked by Facebook whistleblower Frances Haugen describe a moderation AI that frequently fails to work properly. In short, the software is woefully inept.
One instance details a time when the AI failed to stop livestreams of mass shootings. The AI let the videos keep streaming, incorrectly labelling them as paintball matches or, bizarrely, car wash videos.
Additionally, the artificial intelligence couldn't stop harmful content involving animals. Videos of cockfighting were not flagged as such. Instead, the Facebook moderation AI classified the footage of chickens fighting as car crashes.
Hate speech was not detected
All social media platforms have an issue with hate speech. The anonymity afforded by social media leads many users to act in volatile ways. As such, the Facebook moderation AI is supposed to detect and remove hate speech. However, it doesn't really work.
While Western countries have issues with the AI failing to detect hate speech, the situation in other countries is far worse. Ars notes that the artificial intelligence program completely lacks dictionaries of hate-speech terms for many non-English languages. As a result, only 0.23% of hate speech has been detected in countries such as Afghanistan.