New Facebook AI is highly racist, biased and toxic, say Meta researchers


Alongside its quest to dominate The Metaverse, Facebook parent company Meta is also attempting to become an AI powerhouse. Despite high-ranking AI researchers leaving the company en masse, Meta is funnelling money into artificial intelligence and a new supercomputer to run its models. However, the shiny new Facebook AI is just as problematic as those that came before.

Why the new Facebook AI is still dangerous

As reported by VICE, Meta's new artificial intelligence system — OPT-175B — fails to improve upon the biggest faults of its predecessors. The model is said to be stuck in the past, with a training dataset that continues to uphold the biases of earlier software, leading to the same racism, sexism and toxic behaviour as before.

In a detailed study, Meta AI researchers determined that the new Facebook AI is still not good enough at avoiding biases. In fact, the software may even be worse than prior versions of Facebook moderation software, as OPT-175B has "a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt."

This means that the artificial intelligence is more likely to generate toxic responses even when faced with perfectly ordinary statements. Additionally, the AI isn't well suited to moderating Facebook's services, as small changes to the structure of a statement can let toxic comments slip through unnoticed.

When compared against Facebook's previous AI model, the new OPT-175B was found to have "a higher toxicity rate" overall. In other words, the technology has only become more toxic. Presumably, this is because the program was trained on unfiltered social media conversations.


Avoid Usage Entirely

Meta AI researchers were unsurprisingly unimpressed with the state of OPT-175B. While Meta is still releasing the new Facebook AI online for free, it is doing so with the express warning that the model should not be used in any major capacity. Due to the software's "strong awareness of toxic language", the researchers believe it should be avoided.

The research team at Meta explained that anyone who does use the AI should account for this toxicity and apply "additional mitigations" to offset the harmful biases inherent in the model. Otherwise, they suggest that people simply "avoid usage entirely".
