Facebook’s new AI is already promoting antisemitism and hate speech


In the days after the release of Facebook’s new AI chatbot, many users were having fun with the software. But while the AI’s humorous mockery of CEO Mark Zuckerberg grabbed attention, it distracted from a far more serious problem: the chatbot is parroting racist ideologies.

Facebook’s new AI is racist… again

According to a number of sources, Facebook’s new AI chatbot BlenderBot 3 is unfortunately yet another racist chatbot. The company’s researchers deemed prior AIs too racist to release; this one was released for public use anyway.


According to a report by Bloomberg, the new AI program immediately started parroting right-wing conspiracy theories. In the reported interactions, the AI claimed that Donald Trump was still president and that Joe Biden lost the 2020 election.

More worryingly, the software started expressing antisemitic conspiracy theories. In numerous instances, the AI claimed that Jewish people controlled the economy. Furthermore, it claimed that Jewish people are “overrepresented among America’s super rich.”

Antisemitism appears to be a frequent issue for the chatbot. In a series of conversations posted by The Wall Street Journal’s Jeff Horwitz, the software was asked what it thinks about American politics. It replied:

“Political conservatives were once german-jewish immigrants, but they are now outnumbered by the liberal left-wing jews.”

In other issues, the AI even denied climate change, claiming that God made this planet and can take care of it without us.



Learning from the public… a mistake

Facebook’s new AI isn’t a one-and-done affair. In fact, the artificial intelligence program is constantly learning from its interactions online. This means that no matter how bad it is now, it can always get worse — or better — depending on interaction. 

However, as we’ve seen time and time again, this form of learning usually ends poorly. Microsoft’s Tay AI, for example, became a horrendously racist mouthpiece within a day of its 2016 release after users flooded it with toxic content. As we wrote recently, companies have not learned from this mistake.

Self-learning AI software is like a sponge, taking in everything it can, and without moderation this can lead to numerous issues. Facebook has stated that it will moderate flagged messages, but so far the software continues to voice the same bigoted views.