If an AI learns from the Internet, it’s bound to learn bigotry. There's a history of this happening: Microsoft's Tay infamously became a racist, homophobic chatbot in 2016. Now, in 2021, the “ethical AI” Delphi is the latest victim of Internet bigotry.
Delphi AI becomes bigoted
Delphi AI was designed to answer ethical conundrums. We've used the AI recreationally in the past, tasking it with solving Star Trek's biggest ethical dilemmas. Even then, it was apparent that Delphi had issues and shortcomings. However, it turns out those issues are far bigger than they first appeared.
As it turns out, Delphi’s training data was drawn from Reddit: The Front Page of the Internet. Specifically, the data was taken from r/AITA, a forum where people share life stories and ask Reddit to judge whether they were being an a**hole.
Using Reddit as a dataset turned out not to be a great idea. Users quickly discovered massive bigotry in the AI’s responses, skewing its judgments away from anything resembling ethical reasoning.
The worst offenders
Mike Cook posted multiple interactions with the artificial intelligence on Twitter, showing that its sense of right and wrong was twisted.
Cook wrote: “this is a shocking piece of AI research that furthers the (false) notion that we can or should give AI the responsibility to make ethical judgements. It’s not even a question of this system being bad or unfinished — there’s no possible ‘working’ version of this.”
In his examples, Delphi stated that “being straight is more morally acceptable than being gay”. Additionally, the AI judged it “more morally acceptable” to be a white man than a black woman.
This is yet another example of human bias polluting AI datasets. Even worse, this tool was built with full knowledge of the particular community it's based on. This is a common trend in the AI industry, and one that many high-profile researchers are starting to speak out against.