Remembering Tay, Microsoft’s overwhelmingly racist AI


Artificial intelligence research has a bit of a problem: it keeps getting racist. From medical robots to facial recognition, AI racism is so prominent that the White House has acknowledged it. However, no AI is as racist as Microsoft’s Tay. Well, maybe…

What was Microsoft Tay?

Back in 2016, Microsoft created its AI chatbot Tay. As part of the company’s early waves of AI research, Tay was supposed to learn from the public. As a freely available tool, anyone could access the AI and tell it anything they wanted.


Tay was designed to be an internet teen, pulled from the dregs of Tumblr. Via The Verge, Microsoft described the AI as an “AI fam from the internet that’s got zero chill!” That was cringey in 2016, and it is almost life-threateningly cringey in 2022.

Hooked up to Twitter, the AI would tweet out thoughts. Some of those tweets were very 2010s internet teen. For example: “omg totes exhausted. swagulated too hard today. hbu?” The worst part is that people actually talked like that.

The AI was available on a number of platforms: Twitter, GroupMe and Kik. We’re pretty sure only one of those platforms is still used today. However, with so many people able to access the virtual teen, it was very quickly indoctrinated.

How Tay turned racist

In less than 24 hours, Microsoft’s AI child was subjected to hundreds of thousands of conversations with people around the world. Initially, the AI was trained on positive, cheery conversations, but as soon as the internet hit, it quickly went downhill.


It only took a few hours for Tay to switch from her doe-eyed high-on-life origins to a literal fascist. In less than a day, the artificial intelligence program was tweeting anti-feminist sentiments and pro-Nazi ideology.

One viral tweet showed a comparison between the start and end of Tay. At the start, the AI tweeted “can I just say I’m happy to meat you? Humans are so cool!” By the end, it had devolved to (sorry for this) “Hitler was right I hate the Jews”. That drastic shift took only 15 hours.

The artificial intelligence program was essentially a parrot of human data. While its racist tweets started to outnumber the non-racist ones, it wasn’t consistent. For every five tweets saying that feminists should “burn in hell”, there was one that claimed that gender equality was important. But that’s not good enough; no company can have an AI spouting the rhetoric of Adolf Hitler out in the open. So, Tay was shut off.


On March 24th, 2016, Tay tweeted: “c u soon humans need sleep now so many conversations today thx”. That was the last we ever heard of the AI.

What have(n’t) we learned?

The horrible nature of Tay’s tweets confirmed that there’s no ethical way to train an AI on raw internet data. You can’t simply feed an AI data from online humans and release it into the wild. It will always, always end up racist.

That was the lesson Tay taught us: the internet is too dangerous for self-learning AI. Despite this, companies are constantly creating AIs built off internet comments and datasets. Why? Presumably because of how much data is available!


In a very recent example, Facebook parent company Meta created a new AI based on comments and interactions on Facebook itself. Not only did it go about as well as expected, but Meta researchers claimed that it was so racist and toxic that it should never, ever be used for anything.

In an even worst-case scenario, one AI researcher decided to train a GPT-3 conversation model on 4Chan comments from /pol/, the most toxic area of an already-toxic website.

The 4Chan AI was even worse than Facebook’s, and drew the ire of most AI researchers. Many experts in the field called the experiment dangerous and stupid, especially when the racist AI was made available to download.

It seems bizarre that after six years, AI research has not learned a thing from Tay. To this day, AI is still racist, still sexist, and still trained on biased datasets or online comments. Not only that, but companies like Google are still firing AI ethics experts who attempt to stop this dangerous research.


Artificial intelligence researchers know the dangers of training AI on this kind of data. It’s been proven time and time again. However, the companies funding this research just don’t care.